HN Front Page - June 28

Magic-Wormhole – Get things from one computer to another, safely


Get things from one computer to another, safely.

This package provides a library and a command-line tool named wormhole, which makes it possible to get arbitrary-sized files and directories (or short pieces of text) from one computer to another. The two endpoints are identified by using identical "wormhole codes": in general, the sending machine generates and displays the code, which must then be typed into the receiving machine.

The codes are short and human-pronounceable, using a phonetically-distinct wordlist. The receiving side offers tab-completion on the codewords, so usually only a few characters must be typed. Wormhole codes are single-use and do not need to be memorized.
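As an illustrative sketch (this is a hypothetical helper, not code from the magic-wormhole source), a code like 7-crossover-clockwork splits into a channel number followed by the password words:

```python
# Hypothetical helper, not from the magic-wormhole codebase: split a
# wormhole code into its channel number and its password words.
def parse_code(code):
    channel, *words = code.split("-")
    return int(channel), words

channel, words = parse_code("7-crossover-clockwork")
print(channel, words)  # 7 ['crossover', 'clockwork']
```

The channel number routes the two clients to each other; only the words are the secret.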



% wormhole send
Sending 7924 byte file named ''
On the other computer, please run: wormhole receive
Wormhole code is: 7-crossover-clockwork
Sending (<-
100%|=========================| 7.92K/7.92K [00:00<00:00, 6.02MB/s]
File sent.. waiting for confirmation
Confirmation received. Transfer complete.


% wormhole receive
Enter receive wormhole code: 7-crossover-clockwork
Receiving file (7924 bytes) into:
ok? (y/n): y
Receiving (->tcp:
100%|===========================| 7.92K/7.92K [00:00<00:00, 120KB/s]
Received file written to


$ pip install magic-wormhole

Or on macOS with homebrew: $ brew install magic-wormhole

Some platforms need extra packages before pip can compile the dependencies:

  • Debian/Ubuntu: apt-get install python-pip build-essential python-dev libffi-dev libssl-dev
  • Fedora: dnf install python-pip python-devel libffi-devel openssl-devel gcc-c++ libtool redhat-rpm-config
  • macOS: you may need to install pip and run xcode-select --install to get GCC
  • Windows: python2 may work better than python3
  • Older systems: pip install --upgrade pip may be necessary to get a version that can compile all the dependencies

If you get errors like fatal error: sodium.h: No such file or directory on Linux, either use SODIUM_INSTALL=bundled pip install magic-wormhole, or try installing the libsodium-dev / libsodium-devel package. These work around a bug in pynacl which gets confused when the libsodium runtime is installed (e.g. libsodium13) but not the development package.

Developers can clone the source tree and run tox to run the unit tests on all supported (and installed) versions of python: 2.7, 3.4, 3.5, and 3.6.


  • Moving a file to a friend's machine, when the humans can speak to each other (directly) but the computers cannot
  • Delivering a properly-random password to a new user via the phone
  • Supplying an SSH public key for future login use

Copying files onto a USB stick requires physical proximity, and is uncomfortable for transferring long-term secrets because flash memory is hard to erase. Copying files with ssh/scp is fine, but requires previous arrangements and an account on the target machine, and how do you bootstrap the account? Copying files through email first requires transcribing an email address in the opposite direction, and is even worse for secrets, because email is unencrypted. Copying files through encrypted email requires bootstrapping a GPG key as well as an email address. Copying files through Dropbox is not secure against the Dropbox server and results in a large URL that must be transcribed. Using a URL shortener adds an extra step, reveals the full URL to the shortening service, and leaves a short URL that can be guessed by outsiders.

Many common use cases start with a human-mediated communication channel, such as IRC, IM, email, a phone call, or a face-to-face conversation. Some of these are basically secret, or are "secret enough" to last until the code is delivered and used. If this does not feel strong enough, users can turn on additional verification that doesn't depend upon the secrecy of the channel.

The notion of a "magic wormhole" comes from the image of two distant wizards speaking the same enchanted phrase at the same time, and causing a mystical connection to pop into existence between them. The wizards then throw books into the wormhole and they fall out the other side. Transferring files securely should be that easy.


The wormhole tool uses PAKE ("Password-Authenticated Key Exchange"), a family of cryptographic algorithms that use a short low-entropy password to establish a strong high-entropy shared key. This key can then be used to encrypt data. wormhole uses the SPAKE2 algorithm, due to Abdalla and Pointcheval.

PAKE effectively trades off interaction against offline attacks. The only way for a network attacker to learn the shared key is to perform a man-in-the-middle attack during the initial connection attempt, and to correctly guess the code being used by both sides. Their chance of doing this is inversely proportional to the entropy of the wormhole code. The default is to use a 16-bit code (use --code-length= to change this), so for each use of the tool, an attacker gets a 1-in-65536 chance of success. As such, users can expect to see many error messages before the attacker has a reasonable chance of success.
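The arithmetic behind the odds quoted above can be checked directly: two words drawn from 256-word lists give 16 bits of entropy, so a single man-in-the-middle guess succeeds with probability 1 in 65536.

```python
# Back-of-the-envelope check of the attacker odds for the default code length:
# 16 bits of entropy means 2**16 equally likely codes per transfer.
bits = 16
codes = 2 ** bits
print(codes)           # 65536
print(1 / codes)       # success probability per MitM attempt, ~1.5e-05
```

Doubling the code length to four words (32 bits) drops the per-attempt odds to roughly one in four billion.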


The program does not have any built-in timeouts; however, it is expected that both clients will be run within an hour or so of each other. This makes the tool most useful for people who are having a real-time conversation already, and want to graduate to a secure connection. Both clients must be left running until the transfer has finished.


The wormhole library requires a "Rendezvous Server": a simple WebSocket-based relay that delivers messages from one client to another. This allows the wormhole codes to omit IP addresses and port numbers. The URL of a public server is baked into the library for use as a default, and will be freely available until volume or abuse makes it infeasible to support. Applications which desire more reliability can easily run their own relay and configure their clients to use it instead. Code for the Rendezvous Server is included in the library.

The file-transfer commands also use a "Transit Relay", which is another simple server that glues together two inbound TCP connections and transfers data on each to the other. The wormhole send file mode shares the IP addresses of each client with the other (inside the encrypted message), and both clients first attempt to connect directly. If this fails, they fall back to using the transit relay. As before, the host/port of a public server is baked into the library, and should be sufficient to handle moderate traffic.
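The "glue two inbound TCP connections together" idea can be sketched in a few lines of asyncio. This is an assumption-laden toy, not the real Transit Relay: the real server pairs clients by handshake tokens, while this sketch simply pairs the first two connections that arrive.

```python
import asyncio

# Toy relay sketch (not the magic-wormhole Transit Relay): pair the first two
# inbound TCP connections and copy bytes between them in both directions.
def relay_roundtrip(payload):
    waiting = []

    async def pump(reader, writer):
        # Copy bytes from one side to the other until EOF, then close.
        while True:
            data = await reader.read(4096)
            if not data:
                break
            writer.write(data)
            await writer.drain()
        writer.close()

    async def handle(reader, writer):
        if not waiting:
            waiting.append((reader, writer))  # first client waits for a peer
        else:
            peer_reader, peer_writer = waiting.pop()
            asyncio.ensure_future(pump(reader, peer_writer))
            asyncio.ensure_future(pump(peer_reader, writer))

    async def demo():
        server = await asyncio.start_server(handle, "127.0.0.1", 0)
        port = server.sockets[0].getsockname()[1]
        _send_r, send_w = await asyncio.open_connection("127.0.0.1", port)
        recv_r, recv_w = await asyncio.open_connection("127.0.0.1", port)
        send_w.write(payload)
        await send_w.drain()
        send_w.close()              # EOF tells the relay this side is done
        data = await recv_r.read()  # read until the relay closes the pipe
        recv_w.close()
        await asyncio.sleep(0.05)   # let the pump tasks wind down
        server.close()
        await server.wait_closed()
        return data

    return asyncio.run(demo())

print(relay_roundtrip(b"hello through the relay"))
```

In the real protocol the relayed bytes are already encrypted end-to-end, so the relay (like this sketch) learns nothing but traffic volume and timing.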

The protocol includes provisions to deliver notices and error messages to clients: if either relay must be shut down, these channels will be used to provide information about alternatives.

CLI tool

  • wormhole send [args] --text TEXT
  • wormhole send [args] FILENAME
  • wormhole send [args] DIRNAME
  • wormhole receive [args]

Both commands accept additional arguments to influence their behavior:

  • --code-length WORDS: use more or fewer than 2 words for the code
  • --verify : print (and ask user to compare) extra verification string


The wormhole module makes it possible for other applications to use these code-protected channels. This includes Twisted support, and (in the future) will include blocking/synchronous support too. See docs/ for details.

The file-transfer tools use a second module named wormhole.transit, which provides an encrypted record-pipe. It knows how to use the Transit Relay as well as direct connections, and attempts them all in parallel. TransitSender and TransitReceiver are distinct, although once the connection is established, data can flow in either direction. All data is encrypted (using nacl/libsodium "secretbox") using a key derived from the PAKE phase. See src/wormhole/cli/ for examples.
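A record pipe needs framing so that the receiver can recover message boundaries from the TCP byte stream. The following is a stdlib-only sketch of length-prefixed framing under assumed conventions (4-byte big-endian length prefix; the actual encryption, done with secretbox in the real module, is omitted) and is not the wormhole.transit implementation:

```python
import struct

# Hypothetical framing helpers, not wormhole.transit itself: each record is
# sent as a 4-byte big-endian length followed by the (normally encrypted) body.
def pack_record(body: bytes) -> bytes:
    return struct.pack(">I", len(body)) + body

def unpack_records(stream: bytes):
    records, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        records.append(stream[offset + 4 : offset + 4 + length])
        offset += 4 + length
    return records

wire = pack_record(b"hello") + pack_record(b"world")
print(unpack_records(wire))  # [b'hello', b'world']
```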


To set up Magic Wormhole for development, you will first need to install virtualenv.

Once you've done that, cd into the root of the repository and run:

virtualenv venv
source venv/bin/activate
pip install --upgrade pip setuptools

Now your virtualenv has been activated. You'll want to re-run source venv/bin/activate for every new terminal session you open.

To install Magic Wormhole and its development dependencies into your virtualenv, run:

pip install -e .[dev]

Running Tests

Within your virtualenv, the command-line program trial will run the test suite:

trial wormhole

This tests the entire wormhole package. If you want to run only the tests for a specific module, or even just a specific test, you can specify it instead via Python's standard dotted import notation, e.g.:

trial wormhole.test.test_cli.PregeneratedCode.test_file_tor


Every so often, you might get a traceback with the following kind of error:

pkg_resources.DistributionNotFound: The 'magic-wormhole==0.9.1-268.g66e0d86.dirty' distribution was not found and is required by the application

If this happens, run pip install -e .[dev] again.

License, Compatibility

This library is released under the MIT license, see LICENSE for details.

This library is compatible with python2.7, 3.4, 3.5, and 3.6. It is probably compatible with py2.6, but the latest Twisted (>=15.5.0) is not.


Stanford Made Available Code From Cars That Entered Darpa Challenges



MIT’s gas-powered drone is able to stay in the air for five days at a time

Last month, a team of MIT engineers launched Jungle Hawk Owl from the back of a compact car. It was the first flight for the 24-foot-wide drone, which the team believes is capable of staying in the air for five days on a single tank of gas.

The craft was designed to address a challenge posed by the U.S. Air Force. The teams were asked to design a UAV (unmanned aerial vehicle) powered by solar energy that was able to stay in the air over long periods. The idea was to design a vehicle that could help deliver communications to areas impacted by natural disasters or other emergencies. Weather balloons have traditionally been the choice, but they drift with the wind and often don’t stay in the air long enough to be really effective.

Several teams at MIT’s Beaver Works lab got to work on the problem, soon abandoning the solar option. According to team co-lead, Professor Warren Hoburg, current solar technologies would require a much larger drone with a much larger surface area for panels, coupled with a large, heavy battery. Solar also runs into issues during the winter months and at latitudes far from the equator because of shortened daylight hours.

“It’s true that it’s less appealing to be running on gasoline [than solar],” he tells TechCrunch. “But building the solar airplane would be a big boondoggle. With the design we chose, we’ve already had a first flight. It was easy to build compared to the other aircraft available, and the cost and fuel consumption are really low. We spent more fuel getting to the launch site than flying the airplane for three days.”

The winning team designed a prototype of the drone using GPkit, a Python-based modeling tool designed by Hoburg. The final design was built out of lightweight materials like carbon fiber and Kevlar, weighing a total of 55 pounds (closer to 150 with payload and a tank full of gas). The parts can be easily disassembled and shipped to affected areas, and the payload is the perfect size for carrying a shoebox-sized communication device designed by MIT’s Lincoln Labs, which helped support the project.

In addition to supporting areas in the wake of a disaster, the team believes the drone could go a ways toward helping tech companies like Google and Facebook achieve their longstanding (and in one case recently abandoned) dream of delivering internet access to rural areas. But there’s still a lot of work to be done, and the school is working with the FAA for permission to keep the drone in the air for the full five days as it continues its testing over the summer. 



How I learned to code in my 30s

How to fail

I had fallen out of love with my first career of seven years — I couldn’t see myself doing it for the rest of my life — and I decided that I wanted to be a software engineer. I don’t know why I wanted to do it. I just felt a magnetic attraction. I wanted to build things. For context, I am bad at math, I didn’t know anyone who was a programmer, and I had no idea what I was getting into or whether I would like it. Friends helpfully suggested a) this was nuts and b) I was too old.

In January of 2014, I went to a General Assembly bootcamp for Ruby/Rails in SF. It was relatively early days for bootcamps and the experience was pretty raw. The class was large (they had combined two cohorts), with different levels of preparation. The curriculum was in flux. It felt chaotic. A few weeks passed, we began to hear stories about grads still looking for jobs, and a palpable sense of herd anxiety set in: were we actually going to become software engineers after quitting our jobs and investing $10,000? I left before I had to pay for the second semester. I did learn a lot, some of the instructors were good, and a number of my classmates went on to great careers as web engineers, but none of that seemed obvious at the time.

I then took a month to build a front end portfolio, and bootstrapped myself as a Javascript contractor applying for small projects. This early focus was productive — I actually landed some work. But just as my optimism was up, a few months passed with unsteady work, and doubt set in.

Things I learned from contracting as an independent junior engineer:

  • You spend as much time sourcing work as coding
  • Getting projects lined up back-to-back is very hard
  • You don’t get a lot of technical feedback
  • Inconsistent income creates stress at home
  • There is no one to tell you if you are learning the right things

I felt adrift. I started looking at what it would take to find full time employment. I had heard there were a lot of self-taught programmers in Silicon Valley. I was confused and frustrated when all the job postings seemed to indicate otherwise. Every junior web engineer posting seemed to require: “a degree in computer science or two years of professional experience”. How do you get two years of professional experience without a degree, if a degree is required? How do self-taught engineers get jobs?

I started sending out applications despite the requirements. I began to research the interview process, hoping against hope that I’d get one. I realized I knew zero about data structures and algorithms and had no idea how to get started. Suddenly, I felt hopeless. I wasn’t on track to meet the requirements for any jobs I wanted, and I doubted I would pass the interview if I did.

It was a humbling moment. Six months in, having strained my finances and relationships, I was little more than a bootcamp drop-out and a semi-employed Javascript contractor. So I made a very practical decision—I gave up. I told friends and family I had made an impulsive and expensive mistake, and I found a job that was a better fit based on my prior career.


What It Costs to Open a Restaurant in San Francisco

For chef Adam Tortosa, opening a restaurant is more than just a pipe dream — it’s about proving something. Tortosa, a fair, lanky 31-year-old from San Diego, was first introduced to the Bay Area when he opened 1760 with the Acquerello team in 2013. Four months later, he was slammed by a mediocre review from Michael Bauer. He quickly “resigned.”

Fast forward four years, and Tortosa is set to open Robin, a firmly untraditional omakase restaurant in Hayes Valley. Throughout the nearly two-year opening process, Eater has closely followed Tortosa, from sitting in on meetings to filming the construction of the restaurant. Tortosa has opened up his financial books, sharing the price of everything from public relations to the (very expensive) plates on the tables. The result is a deep dive into what it really takes — financially, operationally, and emotionally — to open a restaurant in a major city like San Francisco.

After everything that happened at 1760, Tortosa nearly left restaurants altogether. The experience prompted a six-month depression, during which he didn’t work at all, instead spending that entire stretch of time obsessing over what went wrong.

“I was underqualified for the 1760 job, for sure. I wasn’t ready, so I felt like I needed to show off, to show that I had some technique or skill and that I belonged here. And because my two role models, chef-wise, were two guys that yelled, that’s the personality I adapted in the kitchen. I was probably a piece of shit to be around,” Tortosa recounts, almost as if it happened to someone else. “Obviously, I made a shitload of mistakes. The way I treated people, the way I worked in general. It changed the way I work now.”

Eventually, he came to the conclusion — with the help of some anti-depression medication and a newfound belief in meditation — that he wanted to try again. So Tortosa fell back on his sushi training from Los Angeles’ master sushi chef Katsuya Uechi and worked behind the bar at Akiko’s for two years while rebuilding his confidence. Slowly, the itch to be in charge of his own place returned, and the idea for Robin took form.

“I was very over restaurants after 1760. But if I left — if I just went back to Los Angeles — I would lose this round,” he says. “In the last few years, I’ve had some time to reflect and grow up. With Robin, I don’t feel like I need to fit in now. It’s more what I believe in.”

Unlike many of the omakase restaurants in SF (Omakase, Kusakabe, Sasaki), Robin’s sushi sharply veers away from what’s found in Japan. Tortosa tops his nigiri with unexpected ingredients straight from the farmers market, like Cara Cara oranges or confited tomato — ingredients that make more sense considering his California roots.

For the past year and a half, Tortosa has painstakingly built his personal pipe dream — and it took a hell of a lot more than hopes and wishes. He raised $700,000. His team constructed the space from scratch. He secured all the necessary city permits. He hired staff and created a menu. Now, Robin is ready to open on Thursday, July 6.

Want a real answer to what it takes to open and run a restaurant? Here it is, complete with the hard numbers that typically stay out of public view.

Money Raised From Investors: $600,000

With a plan in place, Tortosa needed to go out and woo some sugar mamas and daddies. In his — and many others’ — case, this turned out to include his actual mom and dad, who along with some other family members and friends contributed about half of the $600,000 he raised.

For the rest, Tortosa had to go the traditional route of raising money from outside investors, a necessary evil for him. It’s a skill that relies heavily on salesmanship, and he found it incredibly uncomfortable to ask people he knew for upwards of $50,000. “I really hate, hate, hate doing this,” he said at the time. “I’m bad at talking about myself, like self-promotion and taking compliments.”

Indeed, when asked to describe himself, Tortosa stammers before simply saying, as if being tortured, “I can’t.” Talk to him about the restaurant industry or what he’s learned in his career or his beliefs, and you’ll get very thoughtful — if expletive-filled — answers. Others he’s worked with during this process are quick with compliments like “sweet,” “talented,” and “creative,” but he has trouble even acknowledging them. It’s something he’s working on with a therapist, who he sees off and on.

Self-image aside, he needed to get the money. At one point, he had an offer on the table from someone willing to give the entire $600,000 in return for half-ownership of the restaurant. It would have completely solved his money issues.

“Obviously that amount of money is very important, but I have to really trust that person,” he says. “I have to trust that they’re not going to cause more problems than essentially the amount of money they give me. If every investor brings a lot of headache, then it’s not worth it.”

So rather than give up that kind of control, Tortosa instead turned to past customers from Katsuya in Los Angeles and Akiko’s here in San Francisco. Investors gave money in $50,000 increments, receiving a share of ownership and perks, like $500 a year in dining credit, in return. But as limited partners, they have absolutely no creative control.

If the restaurant does not succeed, the hard truth is they will not get their money back. If it does, though, the investors stand to profit for as long as Robin is open — after they’ve recouped their investment, of course. Until they are fully paid back, 100 percent of profits go to them — a common financial arrangement for first-time restaurants like Robin. After that, those investors still collectively own 25 percent of the restaurant, and thus will continue to get 25 percent of the profits in perpetuity. If financial projections go according to plan, investors will have their money back in under three years.

The arrangement weighs heavily on Tortosa’s mind and comes up often when people ask about his goals for the restaurant. “My first priority is getting people this money back. They’re all people I know, and they put a lot of trust into me with that money,” he says. “As much fun and everything that a restaurant is and how everyone’s like an ‘artist’ and all that shit, it’s a business.”

Tenant Improvement: $100,000

Negotiated into the lease were what is called tenant improvement or “TI” kickbacks, a bonus that sometimes comes with newer spaces. Essentially, the landlord would cover anything that Tortosa paid for that was an improvement to the building itself — meaning he couldn’t take it away with him if Robin moved or closed.

Since Robin was an empty box when Tortosa got in there, he paid to add all the plumbing, electrical, and more, collecting $100,000 from his landlord to offset the expense.

Final Total Funding: $700,000

Consulting: $25,000

For someone who describes himself as not self-promotional, Tortosa managed to meet an inordinate number of people while working behind the Akiko’s bar. One such person was David Steele, an owner of Ne Timeas, the restaurant group that comprises Flour & Water, Central Kitchen, and Salumeria.

When he made sushi for Steele in November of 2015, Tortosa had the start of a business plan, but no idea how to execute it. So he took a chance and Googled Steele’s email address to reach out.

“I was kind of stuck. I didn’t know what to do next,” Tortosa says. “I basically was like, ‘This guy has to know what the fuck he’s doing. Worst case scenario, he tells me to go fuck off.’”

Turns out Tortosa was contacting Steele at the perfect time: He and business partner David White were building a consulting portion of Ne Timeas. Today restaurants like Trick Dog, Comal, and Urban Putt have all paid White and Steele to consult on their projects. Although the Ne Timeas name isn’t well-known to people outside the industry, the group has quietly helped shape a significant part of the San Francisco dining scene in the last decade.

“The food at Akiko’s is terrific,” says Steele. “And my understanding as I sat there is they give the chefs a lot of creative freedom, so I was pretty impressed with Adam. We just hit it off. Adam had a rough go of it at 1760. He’s really a super sweet guy. Maybe he didn’t have the reputation for being the sweetest guy when he was there, but he’s just been a pleasure to work with.”

With Steele on board, Tortosa began to have direction. Steele and White guided Tortosa through sharpening a business plan, winning investors, finding a space, and connecting him with everyone that helps a restaurant come together: a realtor, accountant, lawyer, architect, contractor, and so much more. Tortosa has upwards of five hundred emails from Steele alone in his inbox.

“I call it the process of demystification,” says Steele. “This is Adam’s creation. We have no involvement in the creativity of this. But one of the most important things we do for our clients is the investor deck. It’s so critical to create an attractive document that gives the impression to anyone who reads it that the person creating this restaurant has their shit together, which means one has to actually have their shit together and think through everything this restaurant is going to be. That’s what we do.”

Their services for this particular project cost $25,000 up-front — plus a future percentage of profits for seven years. Steele notes that each project has different time requirements, and thus different costs.

The other part of Ne Timeas’ involvement — the less quantifiable part — is the sense of confidence that an established company with a successful track record lends a project.

So is all of that worth $25,000? If you ask Tortosa, the answer is an adamant yes. “I maybe, with a lot more time and mistakes and money, could have done this on my own,” he says. “But they introduced me to investors, and any time I needed anything, they were there with the answer. The amount of time I spent with either of the Davids is insane.”

Amount Spent to Date: $25,000

Rent & Utilities: $84,269

With investors secured and a solid business plan in place, Tortosa needed to find a space. Headlines love to tout San Francisco as having the most expensive rents in America — an obvious challenge for restaurants across the city, where profit margins are so razor-thin. So Tortosa took his time finding the right location. He was targeting the Tenderloin-Nob Hill area or Hayes Valley.

“I wanted a space where we would fit into a community of other businesses. I don’t look at other restaurants as competition. It’s more like an ecosystem — all of a sudden, X area becomes really good, because it’s all of these restaurants and bars together,” he says.

It sounds great in theory, but it’s another story to actually find this. Out of a dozen locations he saw (“They were such shitboxes,” he says), only one space — a ground-floor storefront in a brand-new micro-unit building in Hayes Valley — stood out. But there was already a letter of intent on it, meaning that another prospective restaurant owner had put in a bid. The deal fell through, however, and Tortosa scooped up the space.

For his 1,250 square feet, Tortosa pays $8,000 a month, or $62 per square foot. In June of 2016, he signed a triple-net lease, wherein the tenant pays all real estate taxes, building insurance, and maintenance on the property in addition to rent and utilities. He put down a three-month security deposit and first month’s rent, totaling $32,000.

Written into his contract was a bonus six-month reprieve from rent for the build-out, since it was an empty space. Tortosa planned to be open by the time rent kicked in. But this is San Francisco, and ubiquitous permitting delays shoved Robin another six months down the line, which means Tortosa also had to pay $48,000 in rent before he ever opened.

The remaining $4,269 in this category went toward utilities over the year he’s had the space, like gas, electricity, garbage, water, telephone, and Internet.

Amount Spent to Date: $109,269

Architecture: $34,500

Since Robin was in a completely new building, Tortosa could construct it any way he wanted. Of course, he had no idea how to do that. So he turned once again to Steele and White, who put him in touch with a few architects, including Todd Davis, who built his company working on residential projects.

“I just vibed with him the best. That’s how I pick everyone basically, on vibe,” Tortosa says. Like a lot of creative chef types, he’s not a numbers guy and runs almost entirely on feeling.

It also helped that Davis wanted to break into restaurant design. To get his foot in that door, Davis gave Tortosa a break in pricing, settling on $25,000 to draw up the plan and push through the permitting.

“I got to eat Adam’s sushi at Akiko’s, and it was one of those experiences where you’re like, ‘Oh, okay, you’re legit,’” Davis says. “There are a lot of fakers in San Francisco, but that meal was one of those food experiences that inspired me even more to make it with this project.”

Davis spent the majority of his time either dealing with the city to get permit approvals or working on the design of the space. He connected Tortosa with the slatemaker who created the custom slate-top sushi bar, which lies just a few inches higher than the wood bar where guests sit.

“The sushi cases are built down into the bar, so there is nothing blocking the view of chefs making the sushi. It’s one-of-a-kind like that where they can set up their stuff and they’re making the sushi right in front of you,” Davis says. It’s one of the elements of the design he’s most proud of.

Robin’s rendering Robin’s rendering
Robin’s rendering, which provided the layout for the space, though design details have changed
Todd Davis

The other $9,500 in this category went to mechanical, engineering, and plumbing design, which another company handled. This comprised the gritty details like heating, ventilation, air conditioning lines, and an air-flow system. Once the company decided the best places for things like California-compliant Title 24 lights and water drains, they tested it all with the city.

Throughout this time, Tortosa was working on Robin during the day and still slinging sushi at Akiko’s by night.

Amount Spent to Date: $143,769

Permitting: $22,100

Dealing with the city’s Department of Building Inspection is the bane of every restaurateur’s existence. It’s why almost every SF restaurant is delayed for months on end. Once all the architectural and system plans are in place, the city’s planning department has to approve them before construction can start. This is the point of the process where most restaurants get delayed. Robin was no exception. This restaurant had an especially complicated procedure, since the space was originally approved as a retail location. That means on top of regular restaurant permits, Robin needed a dreaded change-of-use approval.

Approvals dragged on for months and included steep fees: A fire department permit worker told Tortosa that it would take him two to four weeks to even look at Robin’s paperwork — or, for $536 ($134 per hour of overtime with a four-hour minimum), Tortosa could pay for him to look at it right now. Tortosa forked over the cash, and one day later, his permit was approved.

But that was just the fire department permit. In total, there were 14 subsequent (not simultaneous) permit stops: planning, building, fire, mechanical, health, public utilities, and more. One stop in the chain of approval took anywhere from one day all the way up to six weeks. The process was especially long if a permitter requested a change to plans, because the approver needs to see the fix before signing off on it. Then there are ridiculous things like this: One permitter would not approve plans because the font Robin used on the paperwork was too small. She requested that it be 1/8 of an inch or larger before she would even read it.

To make it all happen, Davis biked down to the planning department almost daily, while Tortosa followed up with each department via email, practically begging the process to move forward.

“One city worker forgot to drop our plans in the correct health bin, and instead it sat on the corner of his desk for a week. Who knows how long it would have been there if I didn’t ask. That one week of a complete waste of time cost me about $2,000 in rent alone,” Tortosa says.

As he was learning in an acute way, time was money. For that reason, some restaurants will pay companies that specialize in permit expedition, but Tortosa did not go that route. “They’re like the fucking mafia,” he says.

He did, however, decide to get professional help with the alcoholic beverage control (ABC) permit. By paying a company $5,000 to secure his beer-and-wine license, Tortosa was able to hand off that particularly involved process, which includes minutiae like mailing notices to every single resident within 500 feet of the restaurant. For Robin, the company mailed out 586 notices.

In September, during this process, Tortosa left Akiko’s to focus full-time on Robin. It turned out to be a premature decision, since permits took so much longer than anticipated. So Tortosa planned a last-minute Japan research trip, where he traveled the country for two weeks just eating along the way. It was his second time in the country.

Finally, on December 15, 2016 — pretty much the day he had hoped to open to the public, and nine months after the permit process started — Tortosa got the official green light from the city of San Francisco for construction to start.

Amount Spent to Date: $165,869

Construction, Kitchen Design, & Equipment: $298,800

Because Tortosa chose a brand-new building that had never been occupied, every single thing had to be built in. The upside is that he was able to create his dream space from scratch. The downside: it cost him much more time and money.

Again, with Ne Timeas’ help, Tortosa gathered three bids from Bay Area construction companies. He settled on the least expensive one, which at $170,000 — $230,000 less than his most expensive bid — would require a heavy amount of project management from him.

For construction, Tortosa budgeted $225,000 with a $25,000 contingency. He ended up paying $220,800. Because of that difference, he was able to go above budget on other things.

A separate design company, for a flat rate of $4,500, created the layout of the kitchen and sushi bar, with a professional focus on what makes the most sense operationally for things like sink placement and refrigeration. That company then put together an equipment list and sent it out to different companies to bid. Tortosa went with a $53,000 bid, which covered big items like refrigerators and the sushi cases.

All stainless steel elements, like tables, sinks, and shelving cost another $20,500. Even with a limited kitchen — Robin does not have any hoods, meaning no stove, which saved Tortosa about $30,000 — kitchen design and equipment still managed to be one of the largest expenses.

Over the course of January through May, the construction crew built Robin, framing the space, building the bar, installing plumbing, adding wiring, and a whole lot more. The costs on top of the bid include things like the required horn strobes for the fire alarms ($2,800), tile ($6,000), acoustical treatments ($12,000), and eco-grip flooring for the kitchen and behind the sushi bar ($12,000).

While that happened, Tortosa had a lot of free time, especially since he wasn’t working at Akiko’s anymore. He catered some private dinners while figuring out final design details, but he spent the majority of his time trying to ensure he was as mentally strong as possible.

“I was reading a lot and meditating. Just trying to get my personal life in order. I knew that I was about to walk into a very stressful project,” he says.

Tortosa tries to meditate for at least 20 minutes a day. He even attended a meditation seminar at one point. As the opening has gotten closer, however, it’s become less of a regular thing.

He’s also devoured books on management. With bad memories from 1760 in mind, he wants to avoid an unproductive kitchen dynamic this time around.

“Obviously that situation was very difficult for me. I’m glad for where I’m at from it, but I wish I treated people differently. I treated a lot of people not well,” he says. “I put way too much pressure on myself. It’s just a restaurant. It’s just food. It’s really not life or death, but that’s how I acted.”

Amount Spent to Date: $464,669

Furniture & Equipment: $37,585

The dining room chairs, which many people tried to talk Tortosa out of for their desk chair-like qualities. He's pleased with how they look in the space. [Photo: Patricia Chang]

As with all of the design elements, Tortosa spent a lot of hours searching for the perfect tables and chairs. He had a very specific idea in his mind of what he wanted them to look like, and he scoured the Internet to find them. Finally he located a company that would custom-create what he envisioned. In the end, the 24 burnt orange leather chairs, 13 wood sushi bar stools, and nine tables — finished with the Japanese wood burning shou sugi ban method — set him back $19,760.

“I was very excited when the chairs came and then everyone I showed them to hated them. Like everyone. They either hated the color of leather, or they hated the design, or both,” he says with a laugh. “I’ve known what I wanted Robin to look like from before day one. So when people would just see little aspects of it, they would think, ‘It doesn’t make sense or doesn’t go.’ But then when people saw them in the space, they liked them.”

Through this process, Tortosa’s outlook has vacillated between confident and insecure. But when it comes to the design, he has stayed consistently certain of his vision.

“He’s been both confident and humble. He’s open to any and all suggestions from us and other people who he respects,” Steele says. “But at the same time he puts his foot down when he feels strongly about something. It’s an extension of him.”

Nine thousand dollars went toward point-of-sale devices, an alarm system, and a music setup — from which rap and old-school hip hop will blare, creating what Tortosa hopes is a relaxed, raucous atmosphere.

“Most high-end sushi places right now have a very temple-like environment. It’s not the most fun environment and kind of intimidating,” Tortosa says. “I really want Robin to have personality and soul. The most important part is that the guest has fun, in my opinion. They’re coming in to eat, yes, but I’d rather them feel something than just be like, ‘Oh, that food was great.’ I’d rather them say, ‘I had a great time.’”

The rest of the money in this category went toward service stations, a host stand, and office infrastructure like a desk, printer, and computer.

Amount Spent to Date: $502,254

Design, Artwork, & Smallwares: $59,953

The design of Robin kept Tortosa up at night. He spent countless hours scouring the Internet and creating Pinterest boards to communicate his ideas.

“A lot of people are going to hate the design. It’s not for everyone, and I understand that. But the feeling of the restaurant is so important to me and I wanted to be 100 percent involved in every aspect,” he says.

The result is bold and moody — distinctly unlike the industrial, spare aesthetic common to San Francisco. There’s a showy coral- and black-tile backdrop for the alder wood bar, custom-painted walls with thick rose gold resin drips flowing down, and quirky commissioned artwork. Then there’s the bathroom, which is as bright and colorful as the main room is dark, with splashy, saturated walls and a penny tile floor that Tortosa and his parents spent hours making together.

Tortosa sourced almost exclusively from California artisans using local goods for all design elements. Bay Area decorative artist Caroline Lizarraga has worked on restaurants like Nightbird, The Riddler, Black Cat, and more. This project especially excited her.

“It was quite contagious to get excited with Adam about Robin. A lot of restaurants are worried about making it, so they don’t want to be risk-takers,” Lizarraga says. “But Adam was just wanting to go for it. Being an artist, that’s a very appealing scenario. He also trusted my work and wanted me to express myself, which upped my creative game quite a bit. I think it will be a breath of fresh air for San Francisco.”

Lizarraga hand-poured the rose gold resin that drips down the main room walls, a technique she had never used with this medium before. Hanging on those walls will be Ferris Plock's character-based custom artwork, which combines contemporary pop culture with the aesthetic of Japanese ukiyo-e. In one, a deranged Donald Duck head sits on a kabuki-style body. All that custom work didn't come cheap, totaling $43,250, of which the biggest-ticket item was the aforementioned $14,500 slate bar for the sushi chefs to work on.

Then there are the smallwares like plates, glasses, and cutlery. One wooden spoon cost $4. Each custom Japanese-made hinoki cypress chopstick was $5. A wine glass, imported from Gabriel Glas, was $35. A single small bowl, one of 412 custom ceramic pieces from Jered’s Pottery in the East Bay, cost $20.

Jered’s is the Bay Area’s go-to fancy ceramicist whose work is also in Mister Jiu’s, Rich Table, and Michael Mina. Tortosa spent a ton of time with its owner Jered Nelson, traveling to his Richmond workshop upwards of 15 times. “I would tell him what I liked and didn’t like, and then I would go snoop around his shop and pick other glazes or designs,” Tortosa elaborates.

The two talked a lot about the functionality of some of the pieces Tortosa wanted, since Nelson had never made a chopstick rest or oshibori (wet towel) holder before. One of the most unique pieces from Nelson is a white ceramic hand (pictured above) on which Tortosa will serve some pieces of nigiri. It’s inspired by an influential experience from one of his trips to Japan where the chef served sushi directly into diners’ hands.

Amount Spent to Date: $562,207

Branding & Public Relations: $23,900

In a saturated market like San Francisco, restaurants need more than just great food in a nice room — they benefit from a cohesive look to tie the web presence together with the physical space. Graphic designer Jordan Ma created Robin’s brand, drew the logo Tortosa became so attached to, built the website, designed the menus and business cards, and created the custom soap labels. Ma says he was inspired by pairing Japanese minimalism and elegance with a younger, more modern look.

Tortosa wanted a lot of thoughtful, unique details throughout Robin, so he focused on partnerships with local artisans. Plans to make an exclusive beer with a local brewery fell through because his order size was too small to make it worth it for the brewer. But Tortosa managed to find plenty of Bay Area makers in his quest. See the organic hand soap he commissioned from local skin care company Botnia for $500, and the 30 black aprons with rose gold rivets and straps designed by Oakland-based Guro Designs for $2,200.

Conveying all this information to the media and public is Magnum PR, the public relations firm Tortosa hired, “based on the vibes” he got from owner Jen Pelka. Magnum also represents major city restaurants such as Mister Jiu’s, The Riddler, Souvla, and Rich Table. For Robin’s pre-opening PR, Tortosa paid $12,000. Magnum has already started sending out press releases and invitations to media (including Eater) and influencer dinners, which will undoubtedly sway the way Robin is portrayed to the public in the weeks to come.

Amount Spent to Date: $586,107

Labor & Fees: $38,988

There are many mundane elements of opening a restaurant, like legal and accounting fees, recruitment services, taxes, and lots and lots of insurance. All those fees racked up $16,988.

Labor is the other huge expense, though more so once the restaurant opens. In the past few weeks, Tortosa has brought on consulting manager Michael Huffman (Aatxe) and beverage director Anna Nguyen (Liholiho Yacht Club). Salaries paid to date tally up to $22,000. That number includes Tortosa’s salary, which he started collecting in mid-May.

Once Robin opens, Tortosa will provide health insurance for any employees who work over 30 hours a week and opt in. Through human resources app Gusto — which also handles all of his payroll and onboarding — Tortosa chose the Kaiser Permanente Platinum Plan, in which he will cover 75 percent of the fees, working out to $350 per person per month.

“It signifies that we care about the people who work here,” he says as to why he wanted to provide this benefit. But it’s also more personal than that for him, harkening back to his six-month, post-1760 depression.

“Eventually I got prescribed Wellbutrin. It definitely helped. A lot, a lot, a lot,” he says. “I’m sure some of my co-workers have been depressed at some point, but it’s not something that you talk about. Unfortunately that’s not what young males or people in general do in the kitchen.”

Chefs are slowly coming forward to discuss mental illness in the restaurant industry, and Tortosa wants to continue that momentum. Providing health insurance for his employees is one way to do just that.

Amount Spent to Date: $625,095

Opening Food & Alcohol: $26,000

Clockwise: A fried nori chip topped with A5 beef tartare, Fort Bragg uni, pickled shallots, micro wasabi, Asian pear, and togarashi
Sesame noodles with black truffles, Japanese chimichurri, and black and white sesame seeds
Canary rockfish being seared over Japanese binchotan charcoal. Robin will do this in front of guests.
Santa Barbara uni with a shiro dashi-emulsified egg yolk, lemon, and soy

“I’m not really worried about the food being good,” Tortosa says in his typical effusive way, which could easily be mistaken for hubris. But a wave of omakase restaurants has flooded San Francisco within the last year, and with each opening, Tortosa gets more and more nervous. He hopes his food is different enough to both fit well into the existing sushi scene, yet also make Robin stand out.

“There’s already a lot of good sushi in San Francisco. Every omakase restaurant has its own unique perspective and so do I,” he says. “Growing up in California and not being Japanese, I grew up eating different food, you know? Not right, not wrong, just different.”

Tortosa's unique perspective is to combine traditional Japanese techniques with more contemporary Californian flavors. He worked for four years under master sushi chef Katsuya Uechi in Los Angeles before cooking at New American restaurant Ink in LA, then moving on to open 1760 in San Francisco, and eventually working behind the bar at Akiko's.

All of that experience has informed unusual pieces of nigiri such as flounder with Meyer lemon, micro shiso, and blood orange kosho (a Japanese chili pepper paste typically made with yuzu), using local fish and produce. Tortosa will try to keep his ingredients as sustainable as possible, only serving hook-and-line caught or sustainably farmed fish. The majority of his product comes from five purveyors: three local companies, one in Baja, and one in Japan. Fish will include San Diego uni, Baja pink grouper, live scallops from Boston, Bay Area ling cod, Half Moon Bay swordfish, and more.

“The restaurant is not going to be for everyone. Some people want a place that serves 50 different types of rolls. But someone who enjoys good fish will like it here, because all we have is a ton of good fish,” Tortosa says. “It’s not like we’re bastardizing the fish and covering it with mayo. We’re just taking good fish and elevating it a little bit differently.”

To prep for the opening, including the complimentary friends and family practice dinners taking place this week, Tortosa will purchase $26,000 worth of food, sake, beer, wine, and non-alcoholic beverages.

Amount Spent to Date: $651,095

Contingency: $50,000

Needless to say, food is not solely what diners pay for when they go out to eat. Rather, it’s all of the above — not to mention the costs that kick in once a restaurant opens, such as increased labor, higher insurance fees, reservation services, flowers, cleaning, taxes, and so much more. This is often why the price of an avocado toast or an entire omakase can feel cringeworthy.

Tortosa has about $50,000 set aside to get him through the first few months, should he need a cushion for any reason. To recoup all of these costs, his financial projections have Robin hopefully making $1.8 million in total sales in the first year. That accounts for losing money the first month and then slowly increasing profits month by month. If the projections hold true, there will be a seven percent profit — just $135,000 — in the first year, all of which will go straight to investors.

Amount Spent to Date: $701,095

$701,095 and an unquantifiable amount of time and stress later, Robin is ready.

“I’m nervous about everything that could ever go wrong. Like what if no one comes to the restaurant,” Tortosa says. “So many great restaurants fail for so many reasons. It’s so close [to opening] and there’s still a lot of shit to get done.”

He’s spending his final days before opening curing fish, unpacking ceramics, training staff, and trying to meditate as much as possible.

“I understand it’s very unhealthy, but I base my self-worth off the success of my job. When you work in restaurants, you dedicate so much time and effort to something that doesn’t pay you that well, that doesn’t have good hours. There are not that many redeeming qualities on that end. So success is what matters to me,” Tortosa says. “I hope it works. More than hope — I’m not leaving it up to hope. It’s everything to me that it works.”

Stefanie Tuder is Eater SF’s former senior editor. She now lives in New York City and writes for Eater NY.
Patricia Chang is a photographer in San Francisco.
Albert Law is a photographer in San Francisco.
Editors: Carolyn Alburger and Ellen Fort

Typecasting: The Use (and Misuse) of Period Typography in Movies (2001)

Chocolat (2000, Miramax) wasn’t a bad movie. It managed to get five Academy Award Nominations. But if they gave out Oscars for Best Type Direction, it would not have been among the nominees.

The movie is set in a small town in provincial France, mid-1950s. About halfway through the film, the town’s mayor puts up notices forbidding anyone to eat anything but bread and weak tea during Lent (which of course coincides with the opening of the new chocolaterie). I almost laughed when they showed a close-up of the notice. The headline was set in ITC Benguiat, a typeface which debuted in 1978 and was mainly popular in the ’80s.

Perhaps the mistake is understandable. ITC Benguiat was designed in a quasi-Art Nouveau style. It is likely that Art Nouveau typefaces would still be in use in provincial France of the mid-fifties. But not ITC Benguiat. It didn’t exist.

Noticing little slips like this in movies can happen to anyone with knowledge in any specialized field. A friend of mine in high school was a telephone nut and liked to point out that the kind of phone booth that appears in a scene near the end of American Graffiti (1973) didn’t exist in 1962. To him it was as glaring as if they had had Paul Le Mat driving a Camaro.

It’s probably unrealistic to expect this level of attention to detail in movies. There are more important things to attend to in movie making. Besides, the number of people who notice things like anachronistic type choices is small. I’m sure they seldom complain.

Until now.

What follows is a brief survey of films that have caught my attention over the years for their use (or misuse) of period typography.

At the outset, I should point out that typefaces used in titles don’t necessarily count since they exist outside the world depicted in a movie. For instance, the movie Eight Men Out (1988) used the Emigré typeface Modula (1987) in its titles, which were designed by M&Co. The movie is set in 1919. Whether that was an appropriate choice may be debatable, but it would be a matter of taste, not historical accuracy.

Ratings are given from one to five stars, indicating how well type is used in each film:

★★★★★ Nearly perfect use of period typography; errors, if any, are difficult to find

★★★★ Good effort to use period typography; minor mistakes here and there

★★★ Uneven use of period typography; major mistakes occasionally

★★ Little attention to period typography; period-correct type appears only on actual period artifacts

★ No attempt at historically accurate typography; only free fonts from Apple or Microsoft were used.

Dead Men Don’t Wear Plaid (1982, Universal Pictures) In this case the movie is a parody of the film noir genre so the titles are part of the world the film portrays. The choices here, Newport (1932) and Brush Script (1942), fit the period, but the style of the credits feels wrong. In the forties, movie titles were usually hand-lettered on cards shown in sequence.

Apart from the titles, careful attention is paid to get details right. They even hired veteran Hollywood costume designer Edith Head (for which she won an Oscar) and created sets and lighting to blend with existing footage from classic films. The movie got a lot of praise for its attention to such details, but of course nobody mentioned the use of Blippo, a pop-art typeface from the early seventies, on the cruise brochure.

The newspapers seen in several scenes are also problematic. They look more like children’s readers than real newspapers.

On the other hand, the use of signs (especially the hand-lettered one in the medicine cabinet) is right on target. All in all, a very funny film, but spotty in its use of type. 

Tucker: The Man and His Dream (1988, Lucasfilm, distributed by Paramount Pictures). This was Francis Ford Coppola’s paean to Preston Tucker, the too-far-ahead-of-his-time automotive genius of the 1940s. If Tucker had had his way, all cars would have had seat belts as standard equipment by 1950 and we’d all be driving cars with steerable headlights. Three of them. In any case, this is a fine film, lovingly crafted, which does a credible job of recreating the post-war forties.

There’s not a lot of type in this film, but what there is is right on the money, including the titles, which seem as if they have been taken from an actual forties-era film. There are a lot of great signs (like the giant TUCKER factory sign), but there is one that isn’t quite right.

Tucker’s workshop on his family’s farm has a sign (below) set in large, three dimensional... Helvetica. Don’t know how they missed that one.

Although Helvetica (1957) is part of a long line of sans serifs that have been around since the late 1800s, it was not common to see such letterforms on American signs until at least the 1960s, especially in the generic way it is used in the movie. 

Dead Again (1991, Paramount Pictures). Kenneth Branagh and Emma Thompson play a modern-day couple who are reincarnations of an ill-fated pair, one of whom was executed for murdering the other in the late 1940s. The titles feature a montage of close-ups of newspaper clippings chronicling the sad tale of the earlier couple.

The clippings are fairly well-done and even appear to be printed with letterpress, as most newspapers were until the 1970s. I noticed a few oddities, of course. First, while all the typefaces used were consistent with the era, the text type in the clippings was Caledonia, a book typeface that would be a very unlikely choice for a newspaper. Newspapers generally used (and still use), well, newspaper typefaces. The other thing is that although some of the headlines appear to be set with wood type—still a common practice in the forties—they are all very nicely kerned.

Technically, it was possible to kern wood type by physically cutting away parts of the type, but it would be a rather impractical practice at a newspaper. 

Ed Wood (1994, Touchstone Pictures). I love this movie, but not for its use of type.

It starts out well, perfectly matching the lettering style of a real 1950s Ed Wood movie in the opening credits, but as soon as signs and newspapers start appearing, things just go downhill. Close-ups of newspapers feature headlines set in various members of the Helvetica family next to vintage headlines apparently taken from real newspapers (mostly Erbar Light, 1934).

Even more implausible, some of them are optically distorted — a practice that didn’t become common until the advent of digital typesetting, and in fact would have been practically impossible in 1950s newspaper printing. Another glaring anachronism is the sign on the “Screen Classics” building, which is set in Chicago, the original Macintosh system font (TrueType version, 1991). This is very strange to see, as the sign is composed of large, apparently hand-constructed three-dimensional letters mounted on the building.

Just as odd, the same logo is hand-painted on a window of a door inside the building. I’ve always thought that Chicago had an oddly Art Deco quality to it. Apparently some people think it is an Art Deco face.

On the bright side, there are hand-lettered banners in a few scenes that are just exactly right. 

The Hudsucker Proxy (1994, Warner Bros.). I’m a big fan of the Coen brothers’ movies and this is a favorite of mine. Typographically, though, their films are a mixed bag. One complication with critiquing the typography in this movie is that it’s difficult to say exactly what decade it’s supposed to be. According to the story, it’s set in the late fifties, but it often looks more like the forties, or even the thirties. Nevertheless, much of the typography is, at least technically, out of place. For the most part they’ve chosen typefaces that look the part but didn’t actually exist fifty years ago. A good example is the Hudsucker corporate logo which looks like it’s from the thirties or forties, but is actually set in Bodega Sans (1991).

Also used a lot in the film is Univers, a sans serif face that—although released in 1957—was not a common sight until the late sixties, especially for such a pedestrian use as a mechanical job board. 

That Thing You Do (1996, 20th Century Fox). This is a fun movie to watch. Although I was only eight in 1964, this movie really seems to capture the look and feel of the period.

There really is a lot of attention to typographic detail: record labels, industry trade magazines, newspapers, even product packaging for cold remedies. Everything looks just the way it should.

Somebody did their homework on this one (or spent a lot of time in vintage collectible shops). The Patterson’s appliance store looks like it belongs in the Smithsonian, it’s so accurate. I was only able to find one bit of type out of place: Early in the film, a billboard with a few words set in Helvetica Bold flashes briefly across the screen.

Even this is only slightly implausible. 

L.A. Confidential (1997, Warner Bros.). A highly regarded film, tightly written, well-acted, beautifully filmed, but pretty mediocre in its use of type. This one is set in the early ’50s, but the type was clearly not. “HUSH-HUSH,” a Hollywood gossip magazine, is featured prominently sporting a logo set in Helvetica Compressed (1974).

A newspaper dated 1953 has headlines set in Helvetica Black (1959) and Univers (1957)—typefaces which weren’t commonly available in the U.S. until the sixties.

Another newspaper has the word “EXTRA” emblazoned across the top in an optically expanded ITC Kabel Black (1976).

Granted, there are vintage bits of typography and signage here and there, but it appears that when it came to creating typographic props from scratch, they pretty much just guessed. 

Pleasantville (1998, New Line Cinema). This is a film that seems to be obsessed with details, as if they expected it to be viewed under a microscope by nit-pickers like me.

It seems almost too perfect in its period details—not the way the fifties actually looked, but the idealized way television portrayed it. There is actually very little type in this film. I thought I caught a slip up in an early scene—Comic Sans (1995) used in a pseudo-fifties-era promo spot on a quasi-Nick-at-Nite network—until I realized it was supposed to have been done in the present (presumably by designers trying not-quite-successfully to do a retro look).

If it was intentional, it was a very subtle use of type. There are a lot of nicely done hand-painted signs and banners and nothing much to complain about, at least typographically. 

Almost Famous (2000, Dreamworks SKG). This is Cameron Crowe’s fictionalized (and entertaining) account of how he started writing for Rolling Stone magazine. It’s supposed to be 1973, a year I remember very well. I have to say they did a very good job of capturing it, but then, it wasn’t that long ago. Still, there are ample opportunities for type flubs in a movie about a guy who writes for a magazine. Surprisingly, not a lot of type is shown on screen, and what little is shown is correct for the period (the pre-ITC version of Kabel Black, for instance). Except for one little thing near the end of the movie. Had I not been an avid Rolling Stone reader back in the seventies, I might have missed it. There is a montage which includes a close-up of a stack of RS just dropped off at the newsstand. It features aspiring young writer Will’s triumphant first cover story.

The logo and photo look fine, but the main headline is set in ITC Galliard (1978). In addition to the fact it’s five years too early, as far as I know it’s never been used on the cover of RS. [Reader Tim Horrigan also points out that RS was folded over with a smaller cover before 1974, and that the overall design is uncharacteristic for the magazine during that period. —MS] 

* * *

Anachronistic typography in movies is certainly not one of the world’s pressing problems. At worst, it reflects badly on a film in a subtle way that suggests careless production values to the typographically aware, even when everything else is well-crafted. Getting the type right is not that hard, especially nowadays when so many historical typefaces are readily available in electronic form. Historical information on typography is easier to find than ever.

I hope to add more examples in a follow-up article. If you have any film/type gaffes to share, drop me a line.

Update 7/1/2004: In lieu of a follow-up article, I’m posting more examples in the new Notebook section, filed under Son of Typecasting.

See also:
Typecasting Trailer


Avast Antivirus Remote Stack Buffer Overflow with Magic Numbers

If I told you I found a remotely triggerable stack-based buffer overflow in a conventional anti-virus product, in what part of the software would you expect it to be? A reasonable guess may be: “Probably in the parsing code of some complicated and likely obsolete file format”.

In fact, the most recent anti-virus stack buffer overflows clearly show that the implementation of a parser for complex file formats is extremely challenging.

However, I would like to start this blog series with a stack-based buffer overflow that is not of this kind.


Let us set the scene. Given a new file, the anti-virus software needs to decide what file type it is, so that it can analyze it in the right context. The first part of the scanning process therefore usually involves finding the so-called magic numbers that hint at the file type. For example, PDF files begin with the ASCII string %PDF-. Avast Antivirus tries to be very thorough about this, scanning the file for occurrences of numerous different magic numbers. For some of those types, such as PDF or RAR, it is not satisfied with just one occurrence, but tries to find multiple occurrences.
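To make the scanning step concrete, here is a minimal sketch of signature scanning, assuming the engine simply slides a window over the buffer and counts occurrences of each known magic number. The function name count_occurrences is illustrative, not part of Avast's actual engine:

```c
#include <stddef.h>
#include <string.h>

/* Count how many times the byte sequence `magic` occurs in `buf`.
 * A real scanner would also record the offset of each match. */
size_t count_occurrences(const unsigned char *buf, size_t len, const char *magic)
{
    size_t mlen = strlen(magic);
    size_t count = 0;
    if (mlen == 0 || len < mlen)
        return 0;
    for (size_t i = 0; i + mlen <= len; i++) {
        if (memcmp(buf + i, magic, mlen) == 0)
            count++;
    }
    return count;
}
```

For a buffer containing two `%PDF-` markers and one `Rar!` marker, this would report two PDF hits and one RAR hit, which is exactly the kind of multi-occurrence bookkeeping described above.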

Getting into the Details

In the algo module of Avast’s engine, there is a function find_magicnums that scans a given file for various magic numbers (e.g. Rar! or %PDF-).

When a magic number is found, a variable of type magicnum_t is created:

typedef struct {
  uint32_t type;
  uint32_t offset;
  uint32_t priority;
} magicnum_t;

The field type is an integer that maps to a filetype (such as PDF or RAR), and offset is the offset at which the magic number appears (measured from the beginning of the file).

This variable is then stored in a stack-allocated structure of type magicnum_collection_t:

typedef struct {
  uint32_t max_magicnum_count;
  uint32_t magicnum_count;
  magicnum_t magicnums[MAXMAGICNUMCOUNT];
} magicnum_collection_t;

The function add_magicnum is responsible for inserting a given magic number into the field magicnums of the collection. It does so while making sure that the entries are ordered with respect to their offset, and with respect to their priority in case the offset is equal.

add_magicnum looks roughly like this:

void add_magicnum(magicnum_collection_t *magicnums, magicnum_t *insertmagicnum) {
  uint32_t magicnum_count = magicnums->magicnum_count;
  uint32_t insertrank = 0;

  //we skip those ranks with < offset
  while (insertrank < magicnum_count
      && magicnums->magicnums[insertrank].offset < insertmagicnum->offset) {
    insertrank++;
  }

  //we skip those ranks with == offset or with <= priority
  while (insertrank < magicnum_count
      && magicnums->magicnums[insertrank].offset == insertmagicnum->offset
      && magicnums->magicnums[insertrank].priority <= insertmagicnum->priority) {
    insertrank++;
  }

  //shift the tail to make room for the new entry
  if (insertrank < magicnum_count && insertrank + 1 < magicnums->max_magicnum_count) {
    memmove(&magicnums->magicnums[insertrank + 1] /*destination*/,
            &magicnums->magicnums[insertrank]     /*source*/,
            sizeof(magicnum_t) * (magicnum_count - insertrank));
  }

  //write the new entry (presumably the count is also updated here)
  if (insertrank < magicnums->max_magicnum_count) {
    magicnum_t *new_magicnum = &magicnums->magicnums[insertrank];
    new_magicnum->type = insertmagicnum->type;
    new_magicnum->offset = insertmagicnum->offset;
    new_magicnum->priority = insertmagicnum->priority;
    magicnums->magicnum_count = magicnum_count + 1;
  }
}

It starts by computing the insertrank, which is the index into the magicnums array where the given insertmagicnum should be inserted.

If the new magic number needs to be inserted before another magic number in the collection (that is, if insertrank < magicnum_count), all elements in the magicnums array beginning from insertrank are shifted by sizeof(magicnum_t) bytes in order to make space for the new magic number.

When doing this, we need to be careful not to overflow the magicnums buffer. This is what the check insertrank+1 < magicnums->max_magicnum_count tries to ensure. However, depending on the order in which magic numbers are inserted, it is possible that the array is full, but the computed insertrank is nevertheless (much) smaller than max_magicnum_count-1.

I believe a correct alternative check would ensure that magicnum_count+1 < magicnums->max_magicnum_count (this could be checked even before computing insertrank).

Triggering the Bug

That sounds nice, but are we actually able to insert magic numbers in such a way that the bug is triggered? It is clear that this will depend on how exactly the function add_magicnum is used.

Looking at the function find_magicnums quickly reveals that PDF magic numbers are inserted before RAR magic numbers. Moreover, I estimate MAXMAGICNUMCOUNT to be roughly 32.

Okay, so let us feed the engine with a file that starts with a couple of Rar!’s, followed by some %PDF-’s.


If the PDF magic numbers are inserted first, the RAR magic numbers should get a low enough insertrank and eventually overflow the buffer.

As desired, we get the following:

(438.8a8): Break instruction exception - code 80000003 (first chance)
eax=00000000 ebx=715c38f4 ecx=76d50544 edx=1398db41 esi=00000000 edi=1398ec3c
eip=76d50325 esp=1398dd88 ebp=1398de04 iopl=0         nv up ei pl zr na pe nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00000246
76d50325 cc              int     3

On Attacker Control and Exploitation

Now, the attacker has numerous possibilities to overwrite the stack with those 12 byte magicnum_t structs. First and most importantly, she has full control over the offset field. Moreover, she can choose between many different values for the type field and the priority field to write on the stack. In fact, the type field is assigned values from 7 to 449. Only a few remain unused, so that the total number of actually used magic number types is approximately 300 (in the meantime, it may be more).

Obviously, this vulnerability can be easily exploited remotely, for example by sending an email with a crafted file as attachment to the victim.

However, to exploit the vulnerability for arbitrary Remote Code Execution, another bug would be required to circumvent the stack canary, as Avast Antivirus uses /GS on Windows and I assume -fstack-protector is used on Linux.


We have seen that highly critical memory corruption bugs can appear even in very simple functions. This is probably as simple as it gets. There is no need for complicated file parsers.

Having said that, you can expect posts about very involved bugs in anti-virus file parsers to appear on this blog.

Do you have any comments, feedback, doubts, or complaints? I’d love to hear them. You can find my email address on the about page.

Alternatively, you are invited to join the discussion on HackerNews or on /r/netsec.

Timeline of disclosure

  • 09/23/2016 - Discovery
  • 09/24/2016 - Reported
  • 09/29/2016 - Confirmed and patch rolled out
  • 12/16/2016 - Bug bounty paid

Thanks & Acknowledgements

I want to thank Avast Software and especially Igor Glücksmann for their fast response. Fixing a vulnerability and actually rolling out the patch within such a short time frame is remarkable.

Close this section

Redesigning Google News for everyone

Finally, we know you want to be in control of your news, so we are making it easier to update things under the hood, with all settings in one place. And to make Google News personal, new capabilities allow you to name your custom sections, edit existing sections, type in interests you want to see in the “For You” stream, and identify news sources that you want to see more (or less) of.

We’re rolling out this update globally in the coming days. We hope the new design enables you to easily access quality journalism, bolstered with meaningful insights and comprehensive coverage.

Close this section

The Paradox of the Elephant Brain

We have long deemed ourselves to be at the pinnacle of cognitive abilities among animals. But that is different from being at the pinnacle of evolution in a number of very important ways. As Mark Twain pointed out in 1903, to presume that evolution has been a long path leading to humans as its crowning achievement is just as preposterous as presuming that the whole purpose of building the Eiffel Tower was to put that final coat of paint on its tip. Moreover, evolution is not synonymous with progress, but simply change over time. And humans aren’t even the youngest, most recently evolved species. For example, more than 500 new species of cichlid fish in Lake Victoria, the youngest of the great African lakes, have appeared since it filled with water some 14,500 years ago.

Still, there is something unique about our brain that makes it cognitively able to ponder even its own constitution and the reasons for its own presumption that it reigns over all other brains. If we are the ones putting other animals under the microscope, and not the other way around,1 then the human brain must have something that no other brain has.

Sheer mass would be the obvious candidate: If the brain is what generates conscious cognition, having more brain should only mean more cognitive abilities. But here the elephant in the room is, well, the elephant—a species that is larger-brained than humans, but not equipped with behaviors as complex and flexible as ours. Besides, equating larger brain size with greater cognitive capabilities presupposes that all brains are made the same way, starting with a similar relationship between brain size and number of neurons. But my colleagues and I already knew that all brains were not made the same. Primates have a clear advantage over other mammals, which lies in an evolutionary turn of events that resulted in the economical way in which neurons are added to their brain, without the massive increases in average cell size seen in other mammals.

elephant brain 1
HELLO HANDSOME: Since the late 1960s, psychologists have speculated whether the ability to recognize oneself in a mirror was indicative of intelligence and self-awareness.
James Balog / Getty Images

We also knew how many neurons different brains were made of, and so we could rephrase “more brain” and test it. Sheer number of neurons would be the obvious candidate, regardless of brain size, because if neurons are what generates conscious cognition, then having more neurons should mean more cognitive capabilities. Indeed, even though cognitive differences among species were once thought to be qualitative, with a number of cognitive capabilities once believed to be exclusive to humans, it is now recognized that the cognitive differences between humans and other animals are a matter of degree. That is, they are quantitative, not qualitative, differences.

Our tool use is impressively complex, and we even design tools to make other tools—but chimpanzees use twigs as tools to dig for termites, monkeys learn to use rakes to reach for food that is out of sight, and crows not only shape wires to use as tools to get food, but also keep them safe for later reuse. Alex, the African gray parrot owned by psychologist Irene Pepperberg, learned to produce words that symbolize objects, and chimpanzees and gorillas, though they cannot vocalize for anatomical reasons, learn to communicate with sign language. Chimpanzees can learn hierarchical sequences: They play games where they must touch squares in the ascending order of the numbers previously shown, and they do it as well and as fast as highly trained humans. Chimpanzees and elephants cooperate to secure food that is distant and can’t be reached by their efforts alone. Chimpanzees, but also other primates, appear to infer others’ mental state, a requirement for showing deceitful behavior. Even birds seem to have knowledge of other individuals’ mental state, as magpies will overtly cache food in the presence of onlookers and then retrieve and move it to a secret location as soon as the onlookers are gone. Chimpanzees and gorillas, elephants, dolphins, and also magpies appear to recognize themselves in the mirror, using it to inspect a visible mark placed on their heads.

Did the African elephant brain, more than three times as heavy as ours, really have more neurons?

These are fundamental discoveries that attest to the cognitive capacities of nonhuman species—but such one-of-a-kind observations do not serve the types of cross-species comparisons we need to make if we are to find out what it is about the brain that allows some species to achieve cognitive feats that are outside the reach of others. And here we run into another problem, the biggest one at this point: how to measure cognitive capabilities in a large number of species and in a way that generates measurements that are comparable across all those species.

A 2014 study tested for self-control, a cognitive ability that relies on the prefrontal, associative part of the cerebral cortex, among a number of animal species—mostly primates, but also small rodents, doglike carnivores, the Asian elephant, and a variety of bird species. They found that the best correlate with correct performance in the test of self-control was absolute brain volume—except for the Asian elephant, which, despite being the largest-brained in the set, failed miserably at the task. A number of reasons come to mind, from “It did not care about the food or the task” to “It enjoyed annoying its caretakers by not performing.” (I like to think that the reason why it’s so hard to train monkeys to do things that are easily learned by humans is their dismay at the obviousness of the task: “C’mon, you want me to move to do just that? Gimme something more challenging to do! Gimme videogames!”)

elephant brain 2
BRAINIAC: Suzana Herculano-Houzel seeks to learn exactly what it is about the human brain that allows it to perform much more complex maneuvers than other animal brains seem to. Here, she gives a TED Talk.
James Duncan Davidson, courtesy of TED

The most interesting possibility to me, however, is that the African elephant might not have all the prefrontal neurons in the cerebral cortex that it takes to solve self-control decision tasks like the ones in the study. Once we had recognized that primate and rodent brains are made differently, with different numbers of neurons for their size, we had predicted that the African elephant brain might have as few as 3 billion neurons in the cerebral cortex and 21 billion neurons in the cerebellum, compared to our 16 billion and 69 billion, despite its much larger size—if it was built like a rodent brain.

On the other hand, if it was built like a primate brain, then the African elephant brain might have a whopping 62 billion neurons in the cerebral cortex and 159 billion neurons in the cerebellum. But elephants are neither rodents nor primates, of course; they belong to the superorder Afrotheria, as do a number of small animals like the elephant shrew and the golden mole we had already studied—and determined that their brains did, in fact, scale very much like rodent brains.

Why spend $100,000 when a handheld butcher knife would do?

Here was a very important test, then: Did the African elephant brain, more than three times as heavy as ours, really have more neurons than our brain? If it did, then my hypothesis that cognitive powers come with sheer absolute numbers of neurons would be disproved. But if the human brain still had many more neurons than the much larger African elephant brain, then that would support my hypothesis that the simplest explanation for the remarkable cognitive abilities of the human species is the remarkable number of its brain neurons, equaled by none other, regardless of the size of the brain. In particular, I expected the number of neurons to be larger in the human than in the African elephant cerebral cortex.

The logic behind my expectation was the cognitive literature that had long hailed the cerebral cortex (or, more precisely, the prefrontal part of the cerebral cortex) as the sole seat of higher cognition—abstract reasoning, complex decision making, and planning for the future. However, nearly all of the cerebral cortex is connected to the cerebellum through loops that tie cortical and cerebellar information processing to each other, and more and more studies have been implicating the cerebellum in the cognitive functions of the cerebral cortex, with the two structures working in tandem. And, because these two structures together accounted for the vast majority of all neurons in the brain, cognitive capabilities should correlate equally well with the number of neurons in the whole brain, in the cerebral cortex, and in the cerebellum.

Which is why our findings for the African elephant brain turned out to be better than expected.

Brain Soup by the Gallon

The brain hemisphere of an African elephant weighs more than 2.5 kilograms, which meant that it would obviously have to be cut into hundreds of smaller pieces for processing and counting since turning brains into soup to determine the number of neurons inside works with chunks of no more than 3 to 5 grams of tissue at a time. I wanted the cutting to be systematic, instead of haphazard. We had previously used a deli slicer to turn a human brain hemisphere into one such full series of thin cuts. The slicer was wonderful for separating cortical gyri—but it had one major drawback: Too much of the human brain matter stayed on its circular blade, precluding estimates of the total number of cells in the hemisphere. If we wanted to know the total number of neurons in the elephant brain hemisphere, we had to cut it by hand, and in thicker slices, to minimize eventual losses to the point of making them negligible.

And so the day started at the hardware store, where my daughter and I (school vacation having just started) went looking for L-brackets to serve as solid, flat, regular frames for cutting the elephant hemisphere, plus the longest knife I could hold in one hand. (Here was an opportunity not to be missed for a young teenager, who years later could say, “Hey, Mom, remember the day we sliced up an elephant brain?”) We first sawed off the structural reinforcements of the L-brackets then made the elephant brain fit inside. Sure, there are fancy $100,000 machines that would do the job to perfection, but why spend that much money when a handheld butcher knife would do the job well enough?

I laid the hemisphere flat on the bench top, framed inside the two L-brackets. A student held the frames in position while I held the hemisphere down with my left hand and sliced firmly but gently through the brain with the right, in back-and-forth movements. Several cuts later, also into the back half as well as the cerebellum, and we had a completely sliced elephant brain “loaf” lying flat on our benchtop: 16 sections through the cortical hemisphere, eight through the cerebellum, plus the entire brainstem and the gigantic, 20-gram olfactory bulb (10 times the mass of a rat brain) lying separately.

elephant brain 3
COUNTING NEURONS: Suzana Herculano-Houzel and her students cross-sectioned an elephant brain, shown here, to determine the number of neurons it has and compare that with what’s found in the human brain.
Courtesy of the author

Next, we had to separate the internal structures—striatum, thalamus, hippocampus—from the cortex, then cut the cortex into smaller pieces for processing, then separate each of these pieces into gray and white matter. In all, we had 381 pieces of tissue, most of which were still several times larger than the 5 grams we could process at one time. It was by far the most tissue we had processed. One person working alone and processing one piece of tissue per day would need well over one year—nonstop—to finish the job. This clearly had to be a team effort, especially if I wanted to have the results in no more than six months. But, even with a small army of undergraduates, it was taking too long: two months went by and only one-tenth of the brain hemisphere had been processed. Something had to be done.

Capitalism came to the rescue. I did some math and realized I had some $2,500 to spare—roughly $1 per gram of tissue to be processed. I gathered the team and made them an offer: Anybody could help, and everyone would be rewarded financially by the same amount. Small partnerships quickly formed, with one student doing the grinding, the other doing the counting, and both sharing the proceeds. It worked wonders. My husband would visit the lab and comment, in awe, on the crowd of students at the bench, chatting animatedly while working away (until then, they mostly worked in shifts, it being a small lab). Jairo Porfírio took over the large batches of antibody stains, I did all the neuron counts at the microscope—and in just under six months we had the entire African elephant brain hemisphere processed, as planned.

And the Winner Is …

Lo and behold, the African elephant brain had more neurons than the human brain. And not just a few more: a full three times the number of neurons, 257 billion to our 86 billion neurons. But—and this was a huge, immense “but”—a whopping 98 percent of those neurons were located in the cerebellum, at the back of the brain. In every other mammal we had examined so far, the cerebellum concentrated most of the brain neurons, but never much more than 80 percent of them. The exceptional distribution of neurons within the elephant brain left a relatively meager 5.6 billion neurons in the whole cerebral cortex itself. Despite the size of the African elephant cerebral cortex, the 5.6 billion neurons in it paled in comparison to the average 16 billion neurons concentrated in the much smaller human cerebral cortex.

So here was our answer. No, the human brain does not have more neurons than the much larger elephant brain—but the human cerebral cortex has nearly three times as many neurons as the over twice as large cerebral cortex of the elephant. Unless we were ready to concede that the elephant, with three times more neurons in its cerebellum (and, therefore, in its brain), must be more cognitively capable than we humans, we could rule out the hypothesis that total number of neurons in the cerebellum was in any way limiting or sufficient to determine the cognitive capabilities of a brain.

Only the cerebral cortex remained, then. Nature had done the experiment that we needed, dissociating numbers of neurons in the cerebral cortex from the number of neurons in the cerebellum. The superior cognitive capabilities of the human brain over the elephant brain can simply—and only—be attributed to the remarkably large number of neurons in its cerebral cortex.

While we don’t have the measurements of cognitive capabilities required to compare all mammalian species, or at least those for which we have numbers of cortical neurons, we can already make a testable prediction based on those numbers. If the absolute number of neurons in the cerebral cortex is the main limitation to the cognitive capabilities of a species, then my predicted ranking of species by cognitive abilities based on number of neurons in the cerebral cortex would look like this:

elephant brain 4

which is more intuitively reasonable than the current ranking based on brain mass, which places animals such as the giraffe above many primate species, like this:

elephant brain 5

As it turns out, there is a simple explanation for how the human brain, and it alone, can be at the same time similar to others in its evolutionary constraints, and yet so different to the point of endowing us with the ability to ponder our own material and metaphysical origins. First, we are primates, and this bestows upon humans the advantage of a large number of neurons packed into a small cerebral cortex. And second, thanks to a technological innovation introduced by our ancestors, we escaped the energetic constraint that limits all other animals to the smaller number of cortical neurons that can be afforded by a raw diet in the wild.

So what do we have that no other animal has? A remarkable number of neurons in the cerebral cortex, the largest around, attainable by no other species, I say. And what do we do that absolutely no other animal does, and which I believe allowed us to amass that remarkable number of neurons in the first place? We cook our food. The rest—all the technological innovations made possible by that outstanding number of neurons in our cerebral cortex, and the ensuing cultural transmission of those innovations that has kept the spiral that turns capacities into abilities moving upward—is history.


1. Amusing science-fiction stories notwithstanding, like the mice in Douglas Adams’s universe who have been studying human scientists all along …

From The Human Advantage: A New Understanding of How Our Brain Became Remarkable by Suzana Herculano-Houzel published by The MIT Press.

Close this section

Canon Cat Resources – Jef Raskin's Forth-Powered Word Processing Appliance

Originally collected on

Canon Cat

Using the Canon Cat

An introduction to using the Canon Cat



Close this section

Reduce your startup's payroll taxes through the new Federal R&D Tax Credit

Starting in 2017, there’s a new tax credit on the block, and it’s a big one: Businesses who qualify can claim up to $250,000 per fiscal year. That’s a lot of money! So how can you get your hands on this sparkly new credit? We’ll walk you through it. Read on to learn the basics about the new credit – from what it is to how to claim it.

What is this new credit?

The Federal R&D Tax Credit is a tax credit for small businesses to help offset their costs for research and development. Beginning in 2017, qualified small businesses may claim up to $250,000 per fiscal year, applying it against their Social Security taxes.

Am I eligible?

In order to qualify for the credit, your business generally needs to tick all three boxes below. Your business must:  

  • Have less than $5 million in gross receipts
  • Have gross receipts for five years or less
  • Have qualified research and development costs

Companies in industries like technology and science are most likely to use this new credit. However, businesses in any industry may qualify if they are actively developing new products or processes.

Cool, so how do I get the credit?

The first step is to conduct an audit of your business’s R&D expenses. Be sure to work with your accountant on this. They’ll be able to determine if you qualify and what research and development costs can be applied.

With the audit complete, your accountant will file Form 6765, Credit for Increasing Research Activities with your business’s annual Income Tax Return.

It’s important to note that the credit you claim this fiscal year is based on your R&D costs for last fiscal year. So if you want to claim the credit to reduce your employer social security taxes now, you needed to conduct the audit on your 2016 R&D expenses and file Form 6765 with your 2016 income tax returns.

Didn’t file for the R&D tax credit last year? Don’t worry: the IRS is throwing you a bone for 2016 only. If you missed reporting this on your business’s 2016 income tax return, you can still conduct the audit and file an amended return by December 31, 2017.

But then how do I get the money back?

The answer is payroll! Like many tax credits, you can apply the Federal R&D Tax Credit to your business’s income taxes, but many small businesses don’t have enough income tax liability to use up the credit. So this year the IRS changed the rules to allow businesses to apply the credit against their Social Security taxes.

After filing Form 6765 with the IRS, you can begin claiming the credit with your next quarterly payroll tax filing. In order to claim the credit, it must be included on your quarterly Form 941 filing with a completed Form 8974 attached. The IRS will then issue you a refund for the social security taxes you have paid in that quarter (which the IRS estimates will take 6-8 weeks).

Does unused credit expire?

No! You can continue to apply any remaining credit each quarter until the credit is fully claimed, and you can even carry unused credit forward to the next fiscal year.

So how does this work with my payroll provider?

This is a new credit, so many payroll companies are still figuring out if they can support it and how. If you would like to claim this credit, talk to your payroll provider as soon as possible to see if they support it.

Does Gusto support the Federal R&D tax credit?

Yes! Gusto will complete and file Forms 941 and 8974 for customers so they can claim the credit with their payroll filings. You can learn how to get it set up in Gusto here.

And there you have it! Your most pressing R&D credit questions, answered. For more info, please consult your tax advisor and check with the IRS.

Close this section

Why Is NumPy Only Now Getting Funded?

Recently we announced that NumPy, a foundational package for scientific computing with Python, had received its first-ever grant funding since the start of the project in 2006. Community interest in this announcement was massive, raising our website traffic by over 2600%. The predominant reaction was one of surprise—how could it be that NumPy, of all projects, had never before secured funding?

Sometimes this surprise was expressed as skepticism, or a critique pointing out that funding of a kind had gone to NumPy prior to this grant. Wes McKinney, for example, pointed out on Twitter that Numeric and Numarray (pre-NumPy projects) were supported through NASA funding at Space Telescope Science Institute, development time was put into NumPy by paid developers at Enthought, and (mostly now former) academics like Travis Oliphant spent time developing NumPy in lieu of writing academic research papers.

Without the huge sacrifices of Travis O, Eric J, Fernando P, John Hunter, and many others, things might have gone a different way

Why should open source software development require “huge sacrifices?”

But why have “huge sacrifices” been necessary to produce and maintain these projects? And why are sustainable funding and resources so difficult to come by?

The answers to these questions touch upon a host of challenges related to open source software development in general: burnout, overwork generated by the tragedy of the commons, and the mistaken notion that critical open source work can be sustainably produced on an all-volunteer basis.

NumFOCUS has identified three primary challenges to sustainability for open source projects in scientific computing:

1) Lack of funding mechanisms to support the ongoing maintenance and improvement of existing software,
2) Institutional barriers to the development of software as a research endeavor, and
3) The hindrance to academic career advancement for those who develop and support research software.

NumFOCUS recently contributed a submission in response to the NSF Dear Colleague Letter: Request for Information on Future Needs for Advanced Cyberinfrastructure to Support Science and Engineering Research (NSF CI 2030) that outlines these three challenges in depth. Over the next few weeks, we’ll be posting a blog series exploring these challenges in more detail.

The Looming Crisis in Scientific Computing

The problem of sustainability for open source scientific software projects is significant. Arguably, it affects the whole of contemporary scientific inquiry, insofar as that inquiry requires software tools that promote reproducible results (i.e. open source).

In August of 2011, Fernando Perez, founder of IPython/Jupyter, gave a keynote at EuroSciPy explaining that the entire scientific Python stack was essentially relying on the “free time” work of only about 30 people—and no one had funding! The key slides from Fernando’s talk:


NumFOCUS was founded in 2012 as a response to this looming sustainability crisis.

NumFOCUS is designed to provide a home for open source scientific software projects that offers independence, stability, logistical support, and access to monetary resources.

Many of the sustainability challenges Fernando highlighted in 2011 have yet to be fully addressed, but we are hard at work tackling them. This year, NumFOCUS launched our Sustainability Program, headed up by Projects Director, Christie Koehler, and supported by our Sustainability Advisory Board. The initiatives Christie is developing are designed to help secure a sustainable future for key projects in the open source scientific computing community. Keep an eye on our blog for more posts exploring sustainability in open source scientific computing. And if you’d like to take action to contribute to project sustainability, consider becoming a NumFOCUS member today.

NumFOCUS is working towards a future in which "huge sacrifices" won't be necessary to do foundational work in OS scientific computing.

Close this section

Java and SIMD

I have wanted to experiment with Java for a long time to find out whether or not it can take advantage of Single Instruction, Multiple Data (SIMD) instructions to speed up CPU-intensive computations. I found very little information while I was researching this, so I decided to share my own findings.

What are SIMD instructions?

SIMD instructions allow the CPU to perform the same operation on multiple values simultaneously. For example, suppose we want to perform four multiplications on eight values:

z1 = x1 * y1
z2 = x2 * y2
z3 = x3 * y3
z4 = x4 * y4

Normally that would require eight instructions to load the values from memory into registers and four multiplication instructions. Using SIMD instructions, the CPU can load all four x values into the xmm0 register with a single MOVUPS instruction, another MOVUPS can load the four y values into the xmm1 register, and a single MULPS instruction multiplies them:

|   x3  |   x2  |   x1  |   x0  | xmm0
    *       *       *       *
|   y3  |   y2  |   y1  |   y0  | xmm1
    =       =       =       =
| x3*y3 | x2*y2 | x1*y1 | x0*y0 | xmm0

The key feature here is that this multiplication will be performed simultaneously on all four values, which will be four times faster! Isn’t that great? :) SIMD instructions are often called vectorized instructions, because you can think of them as operating on vectors of values.

The first SIMD instructions in desktop/server CPUs were introduced in 1996 by Intel's MMX extension to the Pentium processors. Those instructions were later expanded by the SSE and AVX standards. Nowadays it is safe to assume that almost every CPU has some level of SIMD support. Nevertheless, it is important to know whether your hardware supports the SIMD operations you want to use. For example, many instructions operating on 64-bit integers were added only in the latest AVX-512 standard.

The problem

Let's take a step back and look at this problem in a real-life engineering use case. PrestoDB, a distributed analytical SQL engine for Big Data (e.g. large datasets in HDFS clusters), often has to partition the same data using the same columns multiple times in a row. For example, to perform a distributed hash JOIN after reading the data from HDFS, Presto has to:

  1. Distribute the rows among the worker nodes.
  2. Within each worker, distribute the rows among CPU cores to further parallelize the execution.
  3. Put each row in a hash table bucket.

This creates multiple layers of distribution, and at each layer we have to ensure that rows with the same key values end up in the same bucket. Obviously Presto cannot re-use the same hash value at each step of the partitioning (otherwise only one bucket from steps 2 and 3 would ever be used). However, calculating a new hash at each step can become a bottleneck, so Presto tries to simplify and optimize the hashing/scrambling algorithms as much as possible.

One trick is that Presto computes a hash (let's call it rawHash) in step 2 and does not have to re-calculate a complicated hash in the next step (3). Instead it can re-use the rawHash value, scrambling its bits with some simple function. For this quick scrambling Presto uses the following code:

    private static int getHashPosition(long rawHash, long mask)
    {
        rawHash ^= rawHash >>> 33;
        rawHash *= 0xff51afd7ed558ccdL;
        rawHash ^= rawHash >>> 33;
        rawHash *= 0xc4ceb9fe1a85ec53L;
        rawHash ^= rawHash >>> 33;

        return (int) (rawHash & mask);
    }

Despite being so simple, it can sometimes be the most CPU-intensive operation. This makes the getHashPosition function a perfect candidate for vectorization, because it could be calculated simultaneously for multiple rawHashes from consecutive rows.

Because this function uses 64-bit integers, and while writing this post I did not have access to any CPU supporting AVX-512, I have rewritten it as a version operating on 32-bit integers:

    private static int getHashPosition(int rawHash, int mask)
    {
        rawHash ^= rawHash >>> 15;
        rawHash *= 0xed558ccd;
        rawHash ^= rawHash >>> 15;
        rawHash *= 0x1a85ec53;
        rawHash ^= rawHash >>> 15;

        return rawHash & mask;
    }

Java and SIMD

As of Java 8, there is no way to use SIMD intrinsics directly in Java, as can be done in C++ or C#, for example. In gcc we can declare a data type to be a vector and perform SIMD operations on it directly, as described in the gcc documentation.

In C# there is a similar mechanism and one can use System.Numerics.

However, Java can still generate SIMD code under some conditions. If it detects that subsequent iterations of a loop perform independent calculations, the JIT can try to vectorize that loop. Roughly speaking, instead of doing this:

    for (int i = 0; i < x.length; i++) {
        z[i] = x[i] * y[i];
    }

Java can try to do this (some pseudo code):

    for (int i = 0; i < x.length; i += 4) {
        Load x[i, i+1, i+2, i+3] into xmm0
        Load y[i, i+1, i+2, i+3] into xmm1
        Multiply xmm0 * xmm1 and store result in xmm0
        Store xmm0 into z[i, i+1, i+2, i+3]
    }

This optimization can be turned on or off with the JVM switch -XX:+UseSuperWord, which is ON by default.

This should work fine with the getHashPosition function. For example, we could pre-calculate those hashes in batches and store the results in a small array. Batches should be of a reasonable size, so that our temporary array fits into the CPU caches. In the next section let's see whether this works out.
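The batching idea can be sketched as follows; the HashBatch class name, the hashBatch helper, and the batch size are illustrative choices of mine, while the 32-bit scramble itself is the one from the text:

```java
public class HashBatch
{
    // Keep batches small enough that the temporary array stays in CPU cache.
    public static final int BATCH_SIZE = 1024;

    // The 32-bit scramble from the text.
    public static int getHashPosition(int rawHash, int mask)
    {
        rawHash ^= rawHash >>> 15;
        rawHash *= 0xed558ccd;
        rawHash ^= rawHash >>> 15;
        rawHash *= 0x1a85ec53;
        rawHash ^= rawHash >>> 15;
        return rawHash & mask;
    }

    // Pre-calculate hash positions for a whole batch in one tight,
    // data-independent loop: the shape of code the JIT's SuperWord
    // pass may be able to vectorize.
    public static int[] hashBatch(int[] rawHashes, int mask)
    {
        int[] positions = new int[rawHashes.length];
        for (int i = 0; i < rawHashes.length; i++) {
            positions[i] = getHashPosition(rawHashes[i], mask);
        }
        return positions;
    }
}
```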

Vectorizing loop

Simple incrementation

Let’s start with some simple loop over integer values. Our first benchmark is an incrementation of values in an array.

    @Fork(value = 1, jvmArgsAppend = {"-XX:+UseSuperWord"})
    @Warmup(iterations = 5)
    @Measurement(iterations = 10)
    public class BenchmarkSIMDBlog
    {
        public static final int SIZE = 1024;

        @State(Scope.Thread)
        public static class Context
        {
            public final int[] values = new int[SIZE];
            public final int[] results = new int[SIZE];

            @Setup
            public void setup()
            {
                Random random = new Random();
                for (int i = 0; i < SIZE; i++) {
                    values[i] = random.nextInt(Integer.MAX_VALUE / 32);
                }
            }
        }

        @Benchmark
        public int[] increment(Context context)
        {
            for (int i = 0; i < SIZE; i++) {
                context.results[i] = context.values[i] + 1;
            }
            return context.results;
        }
    }

JMH is used here for micro-benchmarking. The results with -XX:-UseSuperWord and -XX:+UseSuperWord are the following:


That's great! Four times faster. Thanks to -XX:CompileCommand=print,*BenchmarkSIMDBlog.increment we can look at the code that the JIT produced for this benchmark in both versions. With SuperWord enabled we can easily find the AVX2 instructions responsible for this speedup:

  0x00007f7354e59638: vmovq  -0xe0(%rip),%xmm0
  0x00007f7354e59640: vpunpcklqdq %xmm0,%xmm0,%xmm0
  0x00007f7354e59644: vinserti128 $0x1,%xmm0,%ymm0,%ymm0
  0x00007f7354e5964a: nopw   0x0(%rax,%rax,1)
  0x00007f7354e59650: vmovdqu 0x10(%r10,%r8,4),%ymm1
  0x00007f7354e59657: vpaddd %ymm0,%ymm1,%ymm1
  0x00007f7354e5965b: vmovdqu %ymm1,0x10(%r11,%r8,4)

Hashing integers

Now we can try vectorizing our getHashPosition method by adding another benchmark:

    public int[] hashLoop(Context context)
    {
        for (int i = 0; i < SIZE; i++) {
            context.results[i] = getHashPosition(context.values[i], 1048575);
        }
        return context.results;
    }

    private static int getHashPosition(int rawHash, int mask)
    {
        rawHash ^= rawHash >>> 15;
        rawHash *= 0xed558ccd;
        rawHash ^= rawHash >>> 15;
        rawHash *= 0x1a85ec53;
        rawHash ^= rawHash >>> 15;

        return rawHash & mask;
    }

Again we are using integers rather than longs. Unfortunately the results are not what one would expect.


The output produced by the JIT tells us that both hashLoop versions look exactly the same, so for some reason Java was not able to vectorize this loop. There is no fundamental reason why it shouldn't work. The arithmetic used in hashLoop is more complicated, but it could still easily be translated into a sequence of SIMD operations using only two registers. So what went wrong?

Let's check whether the reason Java did not apply the optimization is that the method body is too big, by splitting getHashPosition into smaller functions:

    public void hashLoopPart(Context context)
    {
        for (int i = 0; i < SIZE; i++) {
            context.results[i] = getHashPosition1(context.values[i]);
        }
    }

    private static int getHashPosition1(int rawHash)
    {
        rawHash ^= rawHash >>> 15;
        rawHash *= 0xed558ccd;
        return rawHash;
    }


Simplifying the getHashPosition function by dropping two thirds of its code allowed the JIT to vectorize this smaller function. Let's see what happens if we implement getHashPosition as a chain of three smaller loops instead of one big loop.

    public int[] hashLoopSplit(Context context)
    {
        for (int i = 0; i < SIZE; i++) {
            context.results[i] = getHashPosition1(context.values[i]);
        }

        for (int i = 0; i < SIZE; i++) {
            context.results[i] = getHashPosition2(context.results[i]);
        }

        for (int i = 0; i < SIZE; i++) {
            context.results[i] = getHashPosition3(context.results[i], 1048575);
        }

        return context.results;
    }

    private static int getHashPosition2(int rawHash)
    {
        rawHash ^= rawHash >>> 15;
        rawHash *= 0x1a85ec53;
        return rawHash;
    }

    private static int getHashPosition3(int rawHash, int mask)
    {
        rawHash ^= rawHash >>> 15;
        return rawHash & mask;
    }


Bingo! We have a factor-of-four speedup of the vectorized version over the non-vectorized one. By sacrificing some performance (~6%) to split the loop into three, we convinced the JVM to vectorize each of the smaller loops. This gives us a speedup of almost four times over the original hashLoop.


When I presented those results to my colleagues, they argued that maybe there is some other underlying issue with this code that makes it impossible to vectorize. To check this hypothesis, I rewrote the hashLoop benchmark in C++ and compiled it with g++ 4.8 using the -O2 -ftree-vectorize switches (-ftree-vectorize is turned on by default at -O3).

hashLoop C++

This clearly shows that C++ has no problem vectorizing the original getHashPosition method, so the failure to vectorize it must be a limitation of the JVM's JIT.

Java 9

This made me wonder whether there is some switch that enables more aggressive loop vectorization in the JVM. I have not found anything like that. However, while browsing the JVM source code that handles the UseSuperWord switch, I noticed that it had grown and changed a lot between the Java 8 version I used in the benchmarks above (Oracle's Java 1.8.0_101) and the latest master branch. I downloaded OpenJDK's source code and compiled the latest Java 9 JVM to check whether it is more clever. Here are the results:

hashLoop JAVA9

Nice! With the arithmetic done on integers, the latest Java version was able to fully vectorize the getHashPosition loop without the hacky splitting of the method body.


Conclusions

First of all, one must be aware of when and how SIMD instructions can improve performance. If the code is bottlenecked on memory access, using SIMD instructions won't help at all. And even when arithmetic is the bottleneck of an algorithm, it still might not be possible to use SIMD instructions: not all algorithms are easy to vectorize, especially if calculations depend on one another.

Secondly, even when we have code that could be sped up with SIMD instructions, Java doesn't support them very well. We cannot explicitly express that a variable is a vector of values, and we cannot manually instruct the compiler to use SIMD instructions for operations on those vectors, as is possible in C++ or C#. We just have to rely on the JIT being able to vectorize our loops. For a simple tight loop that might work, but sometimes it won't: the loop may be too complicated, or there may not be any loop to vectorize at all. In such cases Java programmers are currently stuck, unable to unleash the full computational power of modern CPUs. This is a shame because, as the benchmarks above clearly show, SIMD instructions can speed up code several times over with only a little effort.

Source Code


A Slow-Motion Trainwreck Facing the Meal-Kit Industry

Some time near the end of 2010, Andrew Mason and Eric Lefkofsky did a little bit of subtraction, and some multiplication, and wound up with a $950 million check for Groupon’s Series G. The math worked like this: Groupon pays some customer acquisition cost to get a new user. Let’s say it takes $2 worth of advertising. A user signs up, and uses Groupon maybe once a month. On a $30 deal, Groupon collects about $18 in gross profit, spends around $15 (and dropping!) on acquiring and servicing small business users, representing an annuity of $3 a month. This implies a payback period of a couple weeks, and is a good reason for Groupon investors to dump some lighter fluid on the fire and fund the company’s wildest user acquisition dreams.

Having raised another ~$960m in IPO and secondary sales, Groupon’s value now sits at about two thirds of their pre-money valuation from their late 2010 round. Why didn’t the Groupon math add up after all?

The problem Groupon faced was that their initial success validated a model that anyone could copy, and everyone who copied it increased both Groupon’s cost of customer acquisition and its churn rate. Instead of paying $2 for a $3/month annuity that lasted for years, they ended up paying $20 for a $3/month annuity that lasted just a few months. (A friend who worked in the industry — and who swears that he will never, ever be involved in or invest in group buying again — says that the average cost per lead went from $1 to $20 in under a year when the industry got hot.)

In theory, Groupon could have just stopped spending until the craziness died down. The problem was that the craziness had embedded optionality: the public markets had enough demand that the #1 group buying company, and probably the #2 and #3, would be able to IPO. And given their growth rates (Groupon grew revenues 23x the year before it went public), every company with material revenue had a meaningful shot at cashing out in an IPO.

With an industry in flux like that, Groupon’s only option was to at least keep growing revenue faster than the next-biggest competitor in dollar terms. And that meant paying the market price for new users even when that market price made no sense.

But growth has to come from somewhere, and who better for a group buying site to target than the people who are already using Groupon? LivingSocial, Gilt City, and BuyWithMe all shamelessly copied one another’s ad copy and ad targets — so the users Groupon competitors acquired were often coming from Groupon itself. This pushed up churn enough to completely wreck the original logic of the business.

Groupon did, in fact, manage to make it to IPO, and they successfully buried their competitors under a pile of marketing overspending. But in all probability, the gold rush of group buying permanently reduced the size of the market: too many users got burned out on daily deal emails, and too many merchants got burned, period. Groupon exists, and it’s a viable business — GAAP profitable by 2018, if analyst consensus is to be believed — but Groupon is less of a success than it could have been.

The Groupon Scramble, Redux

Groupon comes to mind because of this memento mori about LivingSocial’s valuation going from $6bn to $0 in the Washington Post. But it’s also notable because the story is happening again, with a different cast of characters but the same math and the same incentives.

Now, the stylized model is: if users sign up to pay $60/week for three meal-in-a-box deliveries, and we make an incremental profit of $10/week, and just half of users like it and end up subscribed for another year, then we’re at breakeven paying ~$250 to acquire new users. And that’s before we upsell people on wine! A $250/user LTV covers a lot of marketing sins.
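Spelling out that arithmetic (my calculation on the post's stylized numbers, wrapped in an illustrative class):

```java
public class MealKitMath
{
    // Expected lifetime profit per signup: weekly incremental profit,
    // times the fraction of users who stay, times the weeks they stay.
    public static double expectedProfitPerSignup(double weeklyProfit, double retentionRate, int weeksRetained)
    {
        return weeklyProfit * retentionRate * weeksRetained;
    }
}
```

With the post's numbers ($10/week of incremental profit, half of signups retained for a 52-week year), this comes to $260 per signup, which is why paying ~$250 to acquire a user is roughly breakeven.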

(We’ve long since passed the days when a Groupon marketing intern could sign up for a $50 Facebook ads trial account and net 50 new customers by lunchtime.)

But Blue Apron, the leading meal-in-a-box service, faces a familiar set of challenges:

  • A competitor for the next big IPO: HelloFresh filed for an IPO, then withdrew their filing because their valuation was below the valuation their lead investor wanted. It is a truth universally acknowledged that in a sufficiently hot sector, the #2 player will be valued at the #1 player’s price/sales multiple, plus a couple turns for good measure. So the hotness of Blue Apron’s IPO is a predictor of how torrid HelloFresh’s marketing will be in the following six months.
  • There are endless, super-specific clones: Groupon had to deal with local clones, demographically targeted clones, vertical-focused clones, even a gluten-free clone. Blue Apron has to deal with local clones, demographically-targeted clones, vegan clones, and even — yes! — a gluten-free clone.
  • Since the group-buying bubble, ad options have narrowed. Facebook and Google have gained share, and any user acquisition tool that really scales has to scale on those platforms. The only new factor is the advertorial networks like Outbrain, Taboola, and Zergnet — the ones responsible for, say, this:

Tragically for Blue Apron, HelloFresh, and the rest, this new ad market is an even more efficient market than it used to be: what distinguishes Facebook/Instagram ads, AdWords, and advertorials is that they all appear side-by-side with — and compete with — organic content. This tends to make the ads more visible, and easier to track. With fewer sites controlling more traffic, competitive intelligence is easier: you don’t need to track a thousand publishers to see what ad Blue Apron is running. You just need to look at Taboola ad units, which will be identical across all the big publishers.

Will It Burst?

For a bubble to get crazy, you need two things: bad incentives and slow feedback. Mercifully, the meal-kit industry has pretty fast feedback. If Blue Apron starts losing users, they’ll be able to identify and plug the leaks.

Really, if you wanted proof that there was a bubble, you’d look for a refrigerated warehouse REIT. This is, as a business, not such a bad idea. REITs are popular because they’re tax-advantaged; they’re unpopular because the IRS only grants these tax advantages to businesses that are, in some sense, buying real estate and renting it out. But it turns out that, if you squint just right, you can have a datacenter REIT, or even a billboard REIT. A meal-in-a-box REIT would be a sight to behold: it would rent warehouses outside of major cities, and then lease space in those warehouses to whichever meal-in-a-box service could pay the most. And since warehouse space is never the biggest cost for these services, but schlepping boxes around without spoiling food is their biggest headache, the REIT would have extreme pricing power. Pricing power plus a tax advantage, plus downside protection (when the bubble bursts, you can always sell your warehouse to a grocery store, a food wholesaler, or Amazon), all adds up to a theoretically responsible way to gamble on a bubble.

That hasn’t happened yet. In fact, the ecosystem of meal-in-a-box-adjacent services is surprisingly weak; so far, each of these companies has rolled their own. So we’re still early, but if recent history is any guide, results will be disappointing.

About Me: I’ve worked as an investment analyst at a hedge fund, a digital marketing/growth-hacking consultant, a more Python-and-SQL-flavored analyst at a fintech company, and, one memorable summer, a clerk at an onion processing plant. Currently planning my next move, which will be somewhere at the intersection of finance and technology. If you enjoyed this piece, sign up for my newsletter to see more on investing in tech.


Wood pulp extract stronger than carbon fiber or Kevlar (2012)

The Forest Products Laboratory of the US Forest Service has opened a US$1.7 million pilot plant for the production of cellulose nanocrystals (CNC) from wood by-products materials such as wood chips and sawdust. Prepared properly, CNCs are stronger and stiffer than Kevlar or carbon fibers, so that putting CNC into composite materials results in high strength, low weight products. In addition, the cost of CNCs is less than ten percent of the cost of Kevlar fiber or carbon fiber. These qualities have attracted the interest of the military for use in lightweight armor and ballistic glass (CNCs are transparent), as well as companies in the automotive, aerospace, electronics, consumer products, and medical industries.

Cellulose is the most abundant biological polymer on the planet and it is found in the cell walls of plant and bacterial cells. Composed of long chains of glucose molecules, cellulose fibers are arranged in an intricate web that provides both structure and support for plant cells. The primary commercial source for cellulose is wood, which is essentially a network of cellulose fibers held together by a matrix of lignin, another natural polymer which is easily degraded and removed.




Cellulose structures in trees from logs to molecules

Wood pulp is produced in a variety of processes, all of which break down and wash away the lignin, leaving behind a suspension of cellulose fibers in water. A typical cellulose wood fiber is only tens of microns wide and about a millimeter long.

Micrographs of cellulose fibers from wood pulp

The cellulose in wood pulp, when dry, has the consistency of fluff or lint - a layer of wood pulp cellulose has mechanical properties reminiscent of a wet paper towel. Not what you might expect to be the source of one of the strongest materials known to Man. After all, paper is made from the cellulose in wood pulp, and doesn't show extraordinary strength or stiffness.

Cellulose fibers and the smaller structures within them - a) fiber from wood pulp; b) microcrystalline cellulose; c) microfibrils of cellulose; d) nanofibrils of cellulose; e) cellulose nanocrystals from wood pulp; f) CNCs from sea squirts (the only animal source of cellulose microfibrils); and g,h) cellulose nanofibrils from other sources

Further processing breaks the cellulose fibers down into nanofibrils, which are about a thousand times smaller than the fibers. In the nanofibrils, cellulose takes the form of three-dimensional stacks of unbranched, long strands of glucose molecules, which are held together by hydrogen bonding. While not being "real" chemical bonds, hydrogen bonds between cellulose molecules are rather strong, adding to the strength and stiffness of cellulose nanocrystals.

The upper figure shows the structure of the cellulose polymer; the middle figure shows a nanofibril containing both crystalline and amorphous cellulose; the lower figure shows the cellulose nanocrystals after the amorphous cellulose is removed by acid hydrolysis

Within these nanofibrils are regions which are very well ordered, in which cellulose chains are closely packed in parallel with one another. Typically, several of these crystalline regions appear along a single nanofibril, and are separated by amorphous regions which do not exhibit a large degree of order. Individual cellulose nanocrystals are then produced by dissolving the amorphous regions using a strong acid.

At present the yield for separating CNCs from wood pulp is about 30 percent. There are prospects for minor improvements, but the limiting factor is the ratio of crystalline to amorphous cellulose in the source material. A near-term goal for the cost of CNCs is $10 per kilogram, but large-scale production should reduce that figure to one or two dollars a kilo.

Cross-sectional structure of various types of cellulose nanocrystals showing various crystalline arrangements of the individual cellulose polymer molecules (the rectangular boxes)

CNCs separated from wood pulp are typically a fraction of a micron long and have a square cross-section a few nanometers on a side. Their bulk density is a low 1.6 g/cc, but they exhibit incredible strength: an elastic modulus of nearly 150 GPa and a tensile strength of nearly 10 GPa. Here's how their strength compares to some better-known materials:

  Material            Elastic Modulus    Tensile Strength
  CNC                 150 GPa            7.5 GPa
  Kevlar 49           125 GPa            3.5 GPa
  Carbon fiber        150 GPa            3.5 GPa
  Carbon nanotubes    300 GPa            20 GPa
  Stainless steel     200 GPa            0.5 GPa
  Oak                 10 GPa             0.1 GPa

The only reinforcing material stronger than cellulose nanocrystals is the carbon nanotube, which costs about 100 times as much. Stainless steel is included solely as a comparison to conventional materials. The relatively low strength and modulus of oak shows how much the structure of a composite material can degrade the mechanical properties of its reinforcing materials.

As with most things, cellulose nanocrystals are not a perfect material. Their greatest nemesis is water. Cellulose is not soluble in water, nor does water depolymerize it: the ether bonds between the glucose units of the cellulose molecule are not easily broken apart, requiring strong acids to enable cleavage reactions.

The hydrogen bonds between the cellulose molecules are also too strong in aggregate to be broken by encroaching water molecules. Indeed, crystalline cellulose requires treatment with water at 320 °C and 250 atmospheres of pressure before enough water intercalates between the cellulose molecules to make their structure amorphous. The cellulose is still not soluble, just disordered from its near-perfect stacking in the crystalline structure.

But cellulose contains hydroxyl (OH) groups which protrude laterally along the cellulose molecule. These can form hydrogen bonds with water molecules, resulting in cellulose being hydrophilic (a drop of water will tend to spread across the cellulose surface). Given enough water, cellulose will become engorged with water, swelling to nearly double its dry volume.

Swelling introduces a large number of nano-defects in the cellulose structure. Although there is little swelling of a single CNC, water can penetrate into amorphous cellulose with ease, pushing apart the individual cellulose molecules in those regions. In addition, the bonds and interfaces between neighboring CNC will be disrupted, thereby significantly reducing the strength of any material reinforced with CNCs. To make matters worse, water can move easily over the surface/interfaces of the CNCs, thereby allowing water to penetrate far into a composite containing CNCs.

There are several approaches to make CNC composite materials viable choices for real world applications. The simplest, but most limited, is to choose applications in which the composite will not be exposed to water. Another is to alter the surface chemistry of the cellulose so that it becomes hydrophobic, or water-repelling. This is easy enough to do, but will likely substantially degrade the mechanical properties of the altered CNCs. A third approach is to choose a matrix material which is hydrophobic, and preferably that forms a hydrophobic interface with CNCs. While not particularly difficult from a purely chemical viewpoint, there is the practical difficulty that interfaces between hydrophobic and hydrophilic materials are usually severely lacking in strength.

Perhaps the most practical approach will simply be to paint or otherwise coat CNC composite materials in some material that keeps water away. For such a prize - inexpensive strong and rigid materials - we can be sure that innovations will follow to make the theoretical practical.

Source: US Forest Service

View gallery - 12 images


A Path Less Taken to the Peak of the Math World

The Accidental Apprentice

Huh was born in 1983 in California, where his parents were attending graduate school. They moved back to Seoul, South Korea, when he was two. There, his father taught statistics and his mother became one of the first professors of Russian literature in South Korea since the onset of the Cold War.

After that bad math test in elementary school, Huh says he adopted a defensive attitude toward the subject: He didn’t think he was good at math, so he decided to regard it as a barren pursuit of one logically necessary statement piled atop another. As a teenager he took to poetry instead, viewing it as a realm of true creative expression. “I knew I was smart, but I couldn’t demonstrate that with my grades, so I started to write poetry,” Huh said.

Huh wrote many poems and a couple of novellas, mostly about his own experiences as a teenager. None were ever published. By the time he enrolled at Seoul National University in 2002, he had concluded that he couldn’t make a living as a poet, so he decided to become a science journalist instead. He majored in astronomy and physics, in perhaps an unconscious nod to his latent analytic abilities.

When Huh was 24 and in his last year of college, the famed Japanese mathematician Heisuke Hironaka came to Seoul National as a visiting professor. Hironaka was in his mid-70s at the time and was a full-fledged celebrity in Japan and South Korea. He’d won the Fields Medal in 1970 and later wrote a best-selling memoir called The Joy of Learning, which a generation of Korean and Japanese parents had given their kids in the hope of nurturing the next great mathematician. At Seoul National, he taught a yearlong lecture course in a broad area of mathematics called algebraic geometry. Huh attended, thinking Hironaka might become his first subject as a journalist.

Initially Huh was among more than 100 students, including many math majors, but within a few weeks enrollment had dwindled to a handful. Huh imagines other students quit because they found Hironaka’s lectures incomprehensible. He says he persisted because he had different expectations about what he might get out of the course.

“The math students dropped out because they could not understand anything. Of course, I didn’t understand anything either, but non-math students have a different standard of what it means to understand something,” Huh said. “I did understand some of the simple examples he showed in classes, and that was good enough for me.”

After class Huh would make a point of talking to Hironaka, and the two soon began having lunch together. Hironaka remembers Huh’s initiative. “I didn’t reject students, but I didn’t always look for students, and he was just coming to me,” Hironaka recalled.

Huh tried to use these lunches to ask Hironaka questions about himself, but the conversation kept coming back to math. When it did, Huh tried not to give away how little he knew. “Somehow I was very good at pretending to understand what he was saying,” Huh said. Indeed, Hironaka doesn’t remember ever being aware of his would-be pupil’s lack of formal training. “It’s not anything I have a strong memory of. He was quite impressive to me,” he said.

As the lunchtime conversations continued, their relationship grew. Huh graduated, and Hironaka stayed on at Seoul National for two more years. During that period, Huh began working on a master’s degree in mathematics, mainly under Hironaka’s direction. The two were almost always together. Hironaka would make occasional trips back home to Japan and Huh would go with him, carrying his bag through airports and even staying with Hironaka and his wife in their Kyoto apartment.

“I asked him if he wanted a hotel and he said he’s not a hotel man. That’s what he said. So he stayed in one corner of my apartment,” Hironaka said.

In Kyoto and Seoul, Hironaka and Huh would go out to eat or take long walks, during which Hironaka would stop to photograph flowers. They became friends. “I liked him and he liked me, so we had that kind of nonmathematical chatting,” Hironaka said.

Meanwhile, Hironaka continued to tutor Huh, working from concrete examples that Huh could understand rather than introducing him directly to general theories that might have been more than Huh could grasp. In particular, Hironaka taught Huh the nuances of singularity theory, the field where Hironaka had achieved his most famous results. Hironaka had also been trying for decades to find a proof of a major open problem — what’s called the resolution of singularities in characteristic p. “It was a lifetime project for him, and that was principally what we talked about,” Huh said. “Apparently he wanted me to continue this work.”

In 2009, at Hironaka’s urging, Huh applied to a dozen or so graduate schools in the U.S. His qualifications were slight: He hadn’t majored in math, he’d taken few graduate-level classes, and his performance in those classes had been unspectacular. His case for admission rested largely on a recommendation from Hironaka. Most admissions committees were unimpressed. Huh got rejected at every school but one, the University of Illinois, Urbana-Champaign, where he enrolled in the fall of 2009.

A Crack in a Graph

At Illinois, Huh began the work that would ultimately lead him to a proof of the Rota conjecture. That problem was posed 46 years ago by the Italian mathematician Gian-Carlo Rota, and it deals with combinatorial objects — Tinkertoy-like constructions, like graphs, which are “combinations” of points and line segments glued together.

Consider a simple graph: a triangle.


Mathematicians are interested in the following question: Given some number of colors, how many different ways can you color the vertices of the triangle, subject to the rule that two vertices connected by an edge can’t be the same color? Let’s say you have q colors. Your options are as follows:

  • q options for the first vertex, because when you’re starting out you can use any color.
  • q – 1 options for the adjacent vertex, because you can use any color save the color you used to color the first vertex.
  • q – 2 options for the third vertex, because you can use any color save the two colors you used to color the first two vertices.

Chromatic polynomial

The total number of colorings will be all options multiplied together, or in this case q × (q − 1) × (q − 2) = q³ − 3q² + 2q.

That equation is called the chromatic polynomial for this graph, and it has some interesting properties.

Take the coefficients of each term: 1, –3 and 2. The absolute value of this sequence — 1, 3, 2 — has two properties in particular. The first is that it’s “unimodal,” meaning it only peaks once, and before that peak the sequence only ever rises, and after that peak it only ever falls.

The second property is that the sequence of coefficients is “log concave,” meaning that any three consecutive numbers in the sequence follow this rule: The product of the outside two numbers is less than the square of the middle number. The sequence (1, 3, 5) satisfies this requirement (1 × 5 = 5, which is smaller than 3² = 9), but the sequence (2, 3, 5) does not (2 × 5 = 10, which is greater than 3² = 9).
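Both properties are mechanical to check. A small sketch (the helper names are mine, not the article’s):

```python
def is_unimodal(seq):
    # rises (weakly) up to a single peak, then falls (weakly)
    peak = seq.index(max(seq))
    return (all(seq[i] <= seq[i + 1] for i in range(peak)) and
            all(seq[i] >= seq[i + 1] for i in range(peak, len(seq) - 1)))

def is_log_concave(seq):
    # each middle term squared must be at least the product of its neighbors
    return all(seq[i] ** 2 >= seq[i - 1] * seq[i + 1]
               for i in range(1, len(seq) - 1))

print(is_log_concave([1, 3, 2]))  # True: coefficients of q^3 - 3q^2 + 2q
print(is_log_concave([2, 3, 5]))  # False: 2 * 5 = 10 > 3^2 = 9
```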

You can imagine an infinite number of graphs — graphs with more vertices and more edges connected in any number of ways. Every one of these graphs has a unique chromatic polynomial. And in every graph that mathematicians have ever studied, the coefficients of its chromatic polynomial have always been both unimodal and log concave. That this fact always holds is called “Read’s conjecture.” Huh would go on to prove it.

Read’s conjecture is, in a sense, deeply counterintuitive. To understand why, it helps to understand more about how graphs can be taken apart and put back together. Consider a slightly more complicated graph — a rectangle:


The chromatic polynomial of the rectangle is harder to calculate than that of the triangle, but any graph can be broken up into subgraphs, which are easier to work with. Subgraphs are all the graphs you can make by deleting an edge (or edges) from the original graph:

Rectangle with deleted edge

Or by contracting two vertices into one:

Rectangle with contracted edge

The chromatic polynomial of the rectangle is equal to the chromatic polynomial of the rectangle with one edge deleted minus the chromatic polynomial of the triangle. This makes intuitive sense when you recognize that there should be more ways to color the rectangle with the deleted edge than the rectangle itself: The fact that the top two points aren’t connected by an edge gives you more coloring flexibility (you can, for instance, color them the same color, which you’re not allowed to do when they’re connected). Just how much flexibility does it give you? Precisely the number of coloring options for the triangle.
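That identity is easy to verify by brute force for any fixed number of colors. A quick sketch (hypothetical helper names, checking the case q = 5):

```python
from itertools import product

def colorings(vertices, edges, q):
    # brute-force count of proper q-colorings: try every assignment of
    # colors to vertices and keep those where no edge's endpoints match
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in (dict(zip(vertices, p))
                  for p in product(range(q), repeat=len(vertices)))
    )

rectangle = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-cycle
deleted = [(1, 2), (2, 3), (3, 0)]            # rectangle minus edge (0, 1)
triangle = [(0, 1), (1, 2), (2, 0)]           # rectangle with (0, 1) contracted

q = 5
assert (colorings(range(4), rectangle, q)
        == colorings(range(4), deleted, q) - colorings(range(3), triangle, q))
```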

The chromatic polynomial for any graph can be defined in terms of the chromatic polynomials of subgraphs. And the coefficients of all of these chromatic polynomials are always log concave.

Yet when you add or subtract two log concave sequences, the resulting sequence is usually not itself log concave. Because of this, you’d expect log concavity to disappear in the process of combining chromatic polynomials. Yet it doesn’t. Something else is going on. “This is what made people curious of this log concavity phenomenon,” Huh said.


Another Ransomware Outbreak Is Going Global

Ransomware is causing severe problems for major critical infrastructure providers today. (Photo credit: DAMIEN MEYER/AFP/Getty Images)

Ukraine's government, National Bank, its transportation services and largest power companies are bearing the brunt of what appears to be a massive ransomware outbreak that's fast spreading across the world and hitting a significant number of critical infrastructure providers.

Whispers of WannaCry abound, though some security experts said on Tuesday that a different breed, named Petya, was to blame. "[We're seeing] several thousands of infection attempts at the moment, comparable in size to WannaCry's first hours," said Kaspersky Lab's Costin Raiu, who added that the infections are occurring in many different countries. Another firm, BitDefender, said it believed a similar strain called GoldenEye was responsible. Later, security firms, including Kaspersky and Avast, said the malware responsible was actually an entirely new ransomware that had borrowed Petya code.

Regardless of the malware, the attacks are now global. Danish shipping and energy company Maersk reported a cyberattack on Tuesday, noting on its website: "We can confirm that Maersk IT systems are down across multiple sites and business units due to a cyberattack." Russian oil industry giant Rosneft said it was facing a "powerful hacker attack." Major British advertiser WPP said on Facebook it was also hit by an attack, while law firm DLA Piper also confirmed it had been targeted by hackers. None of the companies offered specifics on the nature of those hacks.

Attacks on the U.S. pharmaceuticals company Merck extended to its global offices, sources told Forbes. Both phones and PCs were out of action at Merck's Ireland offices, and employees were sent home. Merck Sharp & Dohme (MSD), the U.K. subsidiary of Merck, confirmed its network was compromised. "We're trying to understand the level of impact," a spokesperson said. "We're trying to operate as normally as possible."

Ukraine the main target

The impact initially appeared to be most severe in Ukraine, with very few instances in the U.S., according to Kaspersky. The government organization managing the zone of the Chernobyl disaster fallout said it had to switch radiation monitoring at industrial sites to manual after shutting down all Windows computers. Automated systems for the rest of the zone operated normally. The main Chernobyl plant website has also been closed.

Ransomware outbreak chart from Kaspersky Lab


The ransomware outbreak has affected Ukraine and Russia the worst in its early stages. There were U.S. targets as well, however, Kaspersky said.

Other victims included major energy companies such as the state-owned Ukrenergo and Kiev's main supplier Kyivenergo. Government officials have reportedly sent images of their infected computers, including this from deputy prime minister Pavlo Rozenko, who later said the whole government network was down:

Judging from the images posted across social media, the ransomware note is in English and demands $300 in Bitcoin to unlock the files, a request similar to the WannaCry ransom. Ransomware encrypts files and requires payment for the keys to unlock them.

Going global

A Ukrenergo spokesperson told Forbes power systems were unaffected, adding: "On June 27, a part of Ukrenergo's computer network was cyberattacked. As is already known from the media, the networks of other companies, including in the energy sector, were attacked as well.

"Our specialists take all the necessary measures for the complete restoration of the computer system, including the official [website]." The site remains down at the time of publication.

The National Bank blamed an "unknown virus" as the culprit, hitting several Ukrainian banks and some commercial enterprises. "As a result of cyberattacks, these banks have difficulties with customer service and banking operations," a statement on the organization's website read.

The deputy general director of Kiev's Borispol Airport, Eugene Dykhne, said in a Facebook post: "Our IT services are working together to resolve the situation. There may be delays in flights due to the situation... The official Site of the airport and the flight schedules are not working."

Kiev Metro, meanwhile, said today in a Twitter alert that it wasn't able to accept bank card payments as a result of a ransomware infection.

It's currently unclear whether the attacks are purely ransomware, or if myriad attacks are currently hitting various parts of Ukraine. Attacks on Ukraine's power grid in 2015 and 2016 were believed to have been perpetrated by Russia, though the country denies all cyberattacks on foreign soil.

Though ransomware is typically used by cybercriminals, with WannaCry it was alleged a nation state was likely responsible for spreading the malware: North Korea. Cyber intelligence companies and the NSA believe with medium confidence that the nation used leaked NSA cyber weapons to carry out the attacks that took out hospitals in the U.K. and infected hundreds of thousands of others.

How the ransomware spreads

Security researchers fear the latest outbreak is hitting systems via the same leaked NSA vulnerabilities used by WannaCry. Early analysis of some of the ransomware samples confirmed that the malware creators used the so-called EternalBlue exploits, which targeted a now-patched vulnerability in Microsoft Windows.

But the federal cyber emergency team for Belgium pointed to a different flaw in Windows. As noted by security firm FireEye in April, attacks exploiting the bug allow a hacker to run commands on a user's PC after the user opens a malicious document. Office documents containing the exploit downloaded popular malware types onto target computers, FireEye reported.

CEO of Hacker House, Matthew Hickey, said the initial attacks appeared to have been delivered by that latter attack, using phishing emails containing Excel files. The malware may have used the worm features of the NSA attack to spread so quickly, he said. Hickey also confirmed that the ransomware's code used EternalBlue. But it's still unclear if the second flaw was used in these hacks as no phishing emails have yet emerged.

What's clear is the latest ransomware variant is spreading quickly, even on patched Windows PCs, thanks to some added features in the malware, now being dubbed NotPetya.


Got a tip? Email at or for PGP mail. Get me on Signal on +447837496820 or use SecureDrop to tip anyone at Forbes.


Compiling a subset of Python syntax to x86-64 assembly for fun and zero profit

June 2017

Summary: I used Python’s built-in AST module to parse a subset of Python syntax and turn it into an x86-64 assembly program. It’s basically a toy, but it shows how easy it is to use the ast module to co-opt Python’s lovely syntax for your own ends.

One of the reasons people like Python is because of its readable, low-punctuation syntax. You can write an algorithm in pseudocode, and then a minute later realize you’ve just written valid Python. So why not borrow Python’s syntax – and its parser – for your own evil schemes?

I got onto this recently when I was doing some performance work on Python bytecode (a story for another day), and I wondered, “Hmmm, what if Python bytecode was just x86 code?” That is, why have a bytecode interpreter when you have a super-speedy native interpreter right there in your CPU? I’m sure that idea isn’t original with me, and it has probably been tried and found wanting. It’s obviously not portable, and Python’s data and memory model is much different and higher-level than the x86’s.

I still think that’s an interesting idea, but I decided to try my hand at a much smaller and more fun problem: using the Python ast (abstract syntax tree) module to parse Python syntax, then recursively visit the AST nodes to turn them into another language. I thought it’d be easiest to generate C, but then I decided that going straight to (very unoptimized) assembly would be more fun.
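As a quick illustration of that starting point (this snippet is mine, not pyast64’s), ast.parse turns source text into a tree whose nodes you can walk and dispatch on by class name:

```python
import ast

tree = ast.parse("x = 1 + 2")
# list the node types in the tree, breadth-first
names = [type(node).__name__ for node in ast.walk(tree)]
print(names)  # includes 'Module', 'Assign', 'BinOp', 'Add', ...
```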

For example, we’ll be turning this:

        65 + i

Into something like this:

        movq    8(%rbp), %rdx
        movq    $65, %rax
        addq    %rdx, %rax

Enter pyast64

I’ve written a bunch of 8086 and 80386 assembly back in the day, but have never actually had the need to write 64-bit x86 code. (“What’s this rax thing? I’ve only heard of ax and eax.”) These days almost everything is 64-bit, and I spend most of my time (alas) on macOS, so I’m generating x86-64 code for macOS. It wouldn’t be hard to port it to Linux, though Windows might be a bit more work.

Outputting assembly source is easier than outputting machine code and an executable directly, so that’s what I did, letting as and ld take care of assembly and linking. I’m not a fan of AT&T assembly syntax, but that’s what’s built in, so I used it.

I chose to keep it really simple: my language looks like Python, but it’s definitely not. The only data type is integers, and the only output mechanism is a predefined putc function (adding getc for input is left as an exercise for the reader).

The compiler uses a simple %rbp-based stack frame to handle an arbitrary number of local variables. It supports while loops, as well as for i in range() loops – for is implemented in terms of while. It supports if/else, comparisons, logical and and or, the four basic math operators, and recursion.

If I felt the urge to make it more than a toy, I’d use Python 3’s type annotations to support different data types like floating point and strings, and I’d add some features to allocate and use memory arrays. Oh, and things like *args and default arguments.

To keep it simple, the assembly output is very dumb, basically ignoring the fact that the x86-64 has a whole slew of registers, and just using %rax and %rdx for calculations and the stack for the rest. However, there’s a small peephole optimizer which turns push-then-pop sequences into mov instructions.

A quick look at the implementation

The full source is on GitHub at benhoyt/pyast64, but here’s a quick look at the implementation.

When writing AST-handling code, you typically write a visitor class that implements visit_* methods to visit each AST node type, for example visit_FunctionDef or visit_Add. There’s a simple standard library ast.NodeVisitor class that you can subclass to do the lookups, but I implemented my own because I wanted it to fail hard on unknown node types instead of calling the generic_visit fallback.

Python’s dynamic attributes and getattr() function make this trivial:

class Compiler:
    def visit(self, node):
        name = node.__class__.__name__
        visit_func = getattr(self, 'visit_' + name, None)
        assert visit_func is not None, '{} not supported'.format(name)
        visit_func(node)

    def visit_Module(self, node):
        for statement in node.body:
            self.visit(statement)

    def visit_FunctionDef(self, node):
        # (function prologue, body and epilogue generation elided)
        ...

And here is how the meat of a selection of simple node types is implemented:

    def visit_Num(self, node):
        # A constant, just push it on the stack
        self.asm.instr('pushq', '${}'.format(node.n))

    def local_offset(self, name):
        # Calculate the offset of the given local variable
        index = self.locals[name]
        return (len(self.locals) - index) * 8 + 8

    def visit_Name(self, node):
        # Push the value of a local on the stack
        offset = self.local_offset(node.id)
        self.asm.instr('pushq', '{}(%rbp)'.format(offset))

    def visit_Assign(self, node):
        # Assign (set) the value of a local variable
        assert len(node.targets) == 1, \
            'can only assign one variable at a time'
        self.visit(node.value)  # leaves the value on the stack
        offset = self.local_offset(node.targets[0].id)
        self.asm.instr('popq', '{}(%rbp)'.format(offset))

    def simple_binop(self, op):
        self.asm.instr('popq', '%rdx')
        self.asm.instr('popq', '%rax')
        self.asm.instr(op, '%rdx', '%rax')
        self.asm.instr('pushq', '%rax')

    def visit_Add(self, node):
        self.simple_binop('addq')

    def visit_Sub(self, node):
        self.simple_binop('subq')
    def visit_Call(self, node):
        assert not node.keywords, 'keyword args not supported'
        for arg in node.args:
            self.visit(arg)  # push each argument on the stack
        self.asm.instr('call', node.func.id)
        if node.args:
            # Caller cleans up the arguments from the stack
            self.asm.instr('addq', '${}'.format(
                    8 * len(node.args)), '%rsp')
        # Return value is in rax, so push it on the stack now
        self.asm.instr('pushq', '%rax')

When a FunctionDef node is encountered, we use another visitor class (this time an ast.NodeVisitor subclass) to whip through the AST of that function and find the names and number of local variables. We store a dict of variable name to index that’s used to calculate stack offsets when fetching and storing locals.

In my toy language, the only node types that can “create” locals are assignment and for loops, so here’s that visitor class in its entirety:

class LocalsVisitor(ast.NodeVisitor):
    def __init__(self):
        self.local_names = []

    def add(self, name):
        if name not in self.local_names:
            self.local_names.append(name)

    def visit_Assign(self, node):
        assert len(node.targets) == 1, \
            'can only assign one variable at a time'
        self.add(node.targets[0].id)

    def visit_For(self, node):
        self.add(node.target.id)  # the loop variable becomes a local
        for statement in node.body:
            self.visit(statement)
In addition to the Compiler class, there’s an Assembler class that actually outputs the assembly instructions, labels, etc. This class also implements the peephole optimizer to combine sequences of pushes and pops into moves. Here’s the structure of that:

class Assembler:
    def __init__(self, output_file=sys.stdout, peephole=True):
        self.output_file = output_file
        self.peephole = peephole
        # Current batch of instructions, flushed on label and
        # end of function
        self.batch = []

    def instr(self, opcode, *args):
        # Output a single instruction with given args
        self.batch.append((opcode, args))

    def flush(self):
        if self.peephole:
            self.optimize_pushes_pops()
        for opcode, args in self.batch:
            args_str = ', '.join(str(a) for a in args)
            print('\t{}\t{}'.format(opcode, args_str),
                  file=self.output_file)
        self.batch = []

    def optimize_pushes_pops(self):
        """This finds runs of push(es) followed by pop(s) and combines
        them into simpler, faster mov instructions. For example:

        pushq   8(%rbp)
        pushq   $100
        popq    %rdx
        popq    %rax

        Will be turned into:

        movq    $100, %rdx
        movq    8(%rbp), %rax
        """

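The same push-then-pop folding idea can be sketched standalone (simplified and with names of my own choosing: it ignores %rsp-relative operands, which a real optimizer must treat carefully):

```python
def fold_push_pop(batch):
    """Fold runs of pushq followed by popq into movq instructions.
    batch is a list of (opcode, args) tuples, as in the Assembler class."""
    i = 0
    while i < len(batch):
        if batch[i][0] == 'pushq':
            j = i
            while j < len(batch) and batch[j][0] == 'pushq':
                j += 1  # end of the push run
            k = j
            while k < len(batch) and batch[k][0] == 'popq':
                k += 1  # end of the pop run
            n = min(j - i, k - j)  # number of push/pop pairs to fold
            if n:
                pushes = batch[j - n:j]
                pops = batch[j:j + n]
                # last push pairs with first pop (LIFO stack order)
                movs = [('movq', (push[1][0], pop[1][0]))
                        for push, pop in zip(reversed(pushes), pops)]
                batch[j - n:j + n] = movs
                i = j - n + len(movs)
                continue
        i += 1
    return batch
```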
Again, go to the GitHub repo to browse the full source – it’s only 500 lines of code including blanks and comments.

Example of output

There are a couple of examples (the *.p64 files) in the source tree. Below is the simplest of them: forloop.p64, which simply prints the letters A through J using a for loop:

def loop():
    for i in range(10):
        putc(65 + i)            # 65 is 'A'

def main():
    loop()

Note that for i in range(10) is not compiled directly, but expanded to a while loop:

i = 0
while i < 10:
    putc(65 + i)
    i = i + 1

To give you a taste of the output, the loop function compiles to the following assembly (with some comments added afterward):

        pushq   $0              # allocate stack space for "i"
        pushq   %rbp            # save and setup frame pointer
        movq    %rsp, %rbp
        movq    $0, 8(%rbp)     # i = 0
loop_1_while:
        movq    $10, %rdx       # rax = 1 if i < 10 else 0
        movq    8(%rbp), %rax
        cmpq    %rdx, %rax
        movq    $0, %rax
        jnl     loop_3_less
        incq    %rax
loop_3_less:
        cmpq    $0, %rax        # if bool is zero, break
        jz      loop_2_break
        movq    8(%rbp), %rdx   # 65 + i
        movq    $65, %rax
        addq    %rdx, %rax
        pushq   %rax            # putc()
        call    putc
        addq    $8, %rsp
        movq    $1, %rdx        # i = i + 1
        movq    8(%rbp), %rax
        addq    %rdx, %rax
        movq    %rax, 8(%rbp)
        jmp     loop_1_while
loop_2_break:
        popq    %rbp            # restore frame pointer
        leaq    8(%rsp),%rsp    # deallocate stack space for "i"
        ret                     # return to caller

A quick benchmark

Again, this is very much a toy, and the assembly output is pretty terrible. But benchmarks are fun (and Python is slow at integer math), so I did a quick benchmark that sums the first 100,000,000 integers:

def main():
    sum = 0
    for i in range(100000000):
        sum += i

I also compared to a slightly more idiomatic version that only works in actual Python using sum(range(100000000)), as well as a C version of the same loop for reference. On my MacBook Pro 2.5GHz i7, here are the results:

version                     time (s)  ratio
python 3.5 with sum()       2.2       3.2
pyast64 without peephole    0.55      12.7
pyast64 with peephole       0.24      29.7
C version (gcc -O0)         0.22      31.8

So there you go: my 500-line toy compiler executes an integer-summing loop 30x as fast as Python, and on a par with gcc with optimizations disabled (gcc -O2 optimizes the entire loop away to a constant).

Enjoy! Just don’t try to use it for real projects anytime soon. :-)

Please write your comments on Hacker News or programming reddit.


Bitmovin (YC S15) Is Hiring a Head of Demand Generation

The Company

Bitmovin, a YCombinator company, is the industry-leading video encoding, player and analytics provider and is a fast growing privately owned company located in San Francisco, CA and Klagenfurt, Austria. The company was founded by the co-creators of the MPEG-DASH video streaming standard that is used today by companies like Netflix and YouTube and is responsible for over 50% of peak U.S. internet traffic. Working at Bitmovin is international, fast-paced, fun and challenging. We’re looking for talented, passionate and inspired people who want to change the way the world watches video.

[Full-Time, San Francisco, CA]

The Role

The Demand Generation Manager builds and owns the demand generation process with the goal of driving and optimizing global lead generation and customer acquisition. We are seeking a highly quantitative, data-driven marketer to help support and grow our inbound and outbound marketing efforts.

You will hit the ground running to develop, execute, and manage customer acquisition and marketing automation activities. You’ll be a key member of our team, helping to manage and optimize global pipeline efforts. We are looking for someone highly analytical that is comfortable with data and enjoys testing new technologies and platforms.


  • Build and own the demand generation process across all digital channels (SEM, Organic, Facebook, Linkedin, Twitter, Retargeting, etc.), from design to test to execution.
  • Design and manage outbound marketing programs. Campaigns include email, webinars, PPL programs, landing pages, complex nurture tracks and database segmentation.
  • Continuously brainstorm, test, and execute optimization experiments to improve channel performance and drive profitable growth
  • Analyze marketing campaign data and run A/B tests to regularly assess ongoing value of acquisition efforts, ultimately leading to improved ROI
  • Work directly with the CEO and CRO to collaborate on strategy and drive results
  • Build comprehensive dashboards that track channel and campaign KPIs to provide actionable insights to key stakeholders
  • Collaborate with internal creative team to develop and refresh campaign assets, as well as proactively share learnings and results with key partners

The Skillset

  • 3-5+ years of experience in digital/acquisition marketing
  • Experience creating demand generation campaigns for acquiring, nurturing, and qualifying leads
  • Strong project management skills, including managing complex projects and bringing them to completion
  • A data-driven mindset, including experience using metrics to validate hypotheses
  • A natural curiosity and willingness to learn
  • Experience with Outbound, SEM, SEO, Paid Social advertising, Display marketing, and Retargeting
  • Willingness to work in a fast-paced, always-on, start-up environment; willing to roll up his/her sleeves to get the job done


  • Working with an innovative, fast growing and international team
  • Opportunity to make an impact on the business and the video industry, domestically and internationally
  • Competitive salary and equity
  • Health, dental, and vision insurance
  • 401k
  • Investment in growth and education
  • Regular and fun team activities (hackathons, skiing days, …)
  • The opportunity to work for an exciting start-up focused on solving complex video problems

Empower your future and join us now!

Apply via


Infiniswap: Efficient Memory Disaggregation

Infiniswap is a remote memory paging system designed specifically for an RDMA network. It opportunistically harvests and transparently exposes unused memory to unmodified applications by dividing the swap space of each machine into many slabs and distributing them across many machines' remote memory. Because one-sided RDMA operations bypass remote CPUs, Infiniswap leverages the power of many choices to perform decentralized slab placements and evictions.

Extensive benchmarks on workloads from memory-intensive applications ranging from in-memory databases such as VoltDB and Memcached to popular big data software Apache Spark, PowerGraph, and GraphX show that Infiniswap provides order-of-magnitude performance improvements when working sets do not completely fit in memory. Simultaneously, it boosts cluster memory utilization by almost 50%.

Detailed design and performance benchmarks are available in our NSDI'17 paper.


The following minimum prerequisites are required to use Infiniswap:

  • Software

    • Operating system: Ubuntu 14.04 (kernel 3.13.0)
    • Container: LXC (or any other container technologies) with cgroup (memory and swap) enabled
    • RDMA NIC driver: MLNX OFED 4.0, or select the right version for your operating system.
  • Hardware

    • Mellanox ConnectX-3 (InfiniBand)
    • An empty and unused disk partition

Code Organization

The Infiniswap codebase is organized under three directories.

  • infiniswap_bd: Infiniswap block device (kernel module).
  • infiniswap_daemon: Infiniswap daemon (user-level process) that exposes its local memory as remote memory.
  • setup: setup scripts.

Important Parameters

Some important parameters in Infiniswap:

  • infiniswap_bd/infiniswap.h
    • BACKUP_DISK [disk partition]
      It's the name of the backup disk in Infiniswap block device.
      How to check the disk partition status and list?
      "sudo fdisk -l"
    • STACKBD_SIZE_G [size in GB]
      It defines the size of Infiniswap block device (also backup disk).
    • MAX_SGL_LEN [num of pages]
      It specifies how many pages can be included in a single swap-out request (IO request).
    • BIO_PAGE_CAP [num of pages]
      It limits the maximum value of MAX_SGL_LEN.
    • MAX_MR_SIZE_GB [size]
      It sets the maximum number of slabs from a single Infiniswap daemon. Each slab is 1GB.
// example, in "infiniswap.h" 
#define BACKUP_DISK "/dev/sda4"  
#define STACKBD_SIZE_G 12  // 12GB
#define MAX_SGL_LEN 32  // 32 x 4KB = 128KB, it's the max size for a single "struct bio" object.
#define BIO_PAGE_CAP 32
#define MAX_MR_SIZE_GB 32 //this infiniswap block device can get 32 slabs from each infiniswap daemon.
  • infiniswap_daemon/rdma-common.h
    • MAX_FREE_MEM_GB [size]
      It is the maximum size (in GB) of remote memory this daemon can provide (from free memory of the local host).
    • MAX_MR_SIZE_GB [size]
      It limits the maximum number of slabs this daemon can provide to a single infiniswap block device.
      This value should be the same of "MAX_MR_SIZE_GB" in "infiniswap.h".
    • MAX_CLIENT [number]
      It defines how many infiniswap block devices a single daemon can connect to.
      This is the "HeadRoom" mentioned in our paper.
      When the remaining free memory of the host machines is lower than this threshold, infiniswap daemon will start to evict mapped slabs.
// example, in "rdma-common.h" 
#define MAX_CLIENT 32     

/* Followings should be assigned based on 
 * memory information (DRAM capacity, regular memory usage, ...) 
 * of the host machine of infiniswap daemon.
 */
#define MAX_FREE_MEM_GB 32    
#define MAX_MR_SIZE_GB  32    

How to Build and Install

In a simple one-to-one experiment, we have two machines (M1 and M2).
Applications run in a container on M1. M1 needs remote memory from M2.
We need to install infiniswap block device on M1, and install infiniswap daemon on M2.

  1. Setup InfiniBand NIC on both machines:
cd setup  
# ./ <ip>    
# assume all IB NICs are connected in the same LAN (192.168.0.x)
# M1:, M2:
sudo ./
  2. Compile infiniswap daemon on M2:
cd infiniswap_daemon
make
  3. Install infiniswap block device on M1:
cd infiniswap_bd  
sudo make install

How to Run

  1. Start infiniswap daemon on M2:
cd infiniswap_daemon   
# ./infiniswap-daemon <ip> <port> 
# pick up an unused port number
./infiniswap-daemon 9400
  2. Prepare server (portal) list on M1:
# Edit the portal.list file (<infiniswap path>/setup/portal.list)
# portal.list format, the port number of each server is assigned above.  
Line1: number of servers
Line2: <server1 ip>:<port>  
Line3: <server2 ip>:<port>
Line4: ...
# in this example, M1 only has one server
  3. Disable existing swap partitions on M1:
# check existing swap partitions
sudo swapon -s

# disable existing swap partitions
sudo swapoff <swap partitions>
  4. Create an infiniswap block device on M1:
cd setup
# create block device: nbdx-infiniswap0
# make nbdx-infiniswap0 a swap partition
sudo ./
  5. Configure memory limitation of container (LXC)
# edit "memory.limit_in_bytes" in "config" file of container (LXC)

# For example, this container on M1 can use 5GB local memory at most.
# Additional memory data will be stored in the remote memory provided by M2.   
lxc.cgroup.memory.limit_in_bytes = 5G

Now, you can start your applications (in container).
The extra memory data from applications will be stored in remote memory.


  1. Does infiniswap support transparent huge page?
    Yes. Infiniswap relies on the swap mechanism in the original Linux kernel. Current kernel (we have tested up to 4.10) splits the huge page into basic pages (4KB) before swapping out the huge page.
    (In mm/vmscan.c, shrink_page_list() calls split_huge_page_to_list() to split the huge page.)
    Therefore, whether transparent huge page is enabled or not makes no difference for infiniswap.

  2. Can we use Docker container, other than LXC?
    Yes. Infiniswap requires a container-based environment, but it has no dependency on LXC. Any container technology that can limit memory resources and enable swapping should work.
    We haven't tried Docker yet. If you find any problems when running infiniswap in a Docker environment, please contact us.
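Since the authors haven't tried Docker, the following is only a hypothetical Docker equivalent of the LXC cgroup limit shown above (my-app is a placeholder image name, not part of Infiniswap):

```shell
# Hypothetical Docker equivalent of the LXC config above: cap the
# container at 5GB of RAM and leave swap unlimited, so memory beyond
# 5GB is swapped out (and served from remote memory once the
# infiniswap block device is the active swap area).
docker run --memory=5g --memory-swap=-1 my-app
```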


This work is by Juncheng Gu, Youngmoon Lee, Yiwen Zhang, Mosharaf Chowdhury, and Kang G. Shin. You can email us at infiniswap at umich dot edu, file issues, or submit pull requests.


Artistically shaped magnets may make stellarators easier to manage than ITER

Fusion powers the Sun, where hydrogen ions are forced together by the high pressure and temperature. The nuclei join to create helium and release a lot of energy in the process. Doing the same thing on Earth means creating the same conditions that drive hydrogen nuclei together, which is easier said than done. Humans are very clever, but achieving fusion in a magnetic bottle will probably be one of our cleverer tricks. Making that bottle is difficult, and Ars recently had the chance to visit the people and facilities behind one of our most significant attempts at it.

For most people, magnetic bottles for fusion bring to mind the tokamak, a donut-shaped device that confines the plasma in a ring. But the tokamak is just one approach; there's also a more complicated device, the heliac, which is helical in shape. Somewhere in between the two is the stellarator. Here, the required magnetic field is a bit easier to create than for a heliac, but it's still far more complicated than for a tokamak.

At the Max Planck Institute for Plasma Physics (MPIPP) in Greifswald, located on the Baltic coast in Germany, the latest iteration of the stellarator design is preparing to restart after its first trial run. The researchers putting it all together are pretty excited by the prospect—frankly every engineer and scientist would be excited by the prospect of turning on a new piece of hardware. But it's even more so the case at MPIPP since the new gear happens to be something they designed and built. The stellarator is something special: the realization of a design that is more than 50 years in the making.

Self-organized confinement

The heliac, the stellarator, and the tokamak are all trying to achieve the same thing: confine a plasma tightly in a magnetic bottle, tightly enough to push protons in close to each other. They all use a more-or-less donut shape, but that "more-or-less" involves some really important differences. Those differences make the stellarator a pretty special science and engineering challenge. To highlight that challenge, we can start with the simpler and more familiar tokamak.

The tokamak begins with a donut-shaped vacuum vessel. The magnetic field is applied by a series of flat coils that are wrapped around the tube of the donut (as in the diagram). This, along with a few other magnets, creates a magnetic field that runs in parallel lines around the interior of the donut. When a plasma is injected, its charged particles corkscrew around the field lines. At first sight this looks like it should confine the plasma in a series of tubes.

This doesn't happen, though. As Professor Thomas Klinger, head of the stellarator project at the MPIPP says, "The vacuum magnetic field has no confinement properties because it’s a purely toroidal field. And a purely toroid field does not confine a plasma at all; that was already realised by Fermi in 1951."

The problem is that the charged particles can drift from magnetic field line to magnetic field line. Since the magnetic field doesn't have the same strength across the cross-section of the torus, particle drift to the outside is much more energetically favorable. So the plasma simply expands outward and hits the wall.

To obtain high plasma temperatures in a tokamak, this drift has to be stopped. To do this, a large current has to flow through the plasma. "You have to twist the magnetic field lines, which is done by the current," says Klinger. The current generates a second magnetic field, which distorts the applied field so that the field lines run in a twisted spiral.

A charged particle in the very short-term can still be thought of as corkscrewing around a single field line. But, because the field line spirals around, it is better to think of a series of nested surfaces (like a matryoshka doll), with the particles in the plasma confined on these surfaces. One consequence of this design is that, while particles still hop between field lines, they can now drift from low magnetic field to high magnetic field, and vice versa—an outward flow is no longer favorable. So, on average, the rate at which particles escape confinement is much smaller.

Strong confinement means that the plasma has to support a large current to generate the right magnetic field shape. For the international thermonuclear experimental reactor (ITER), the plasma will generate several million amps of current. Unfortunately, the current through the plasma, the plasma density, and temperature don't end up the same everywhere, and these differences have the potential to destabilize the current.

In particular, if the current is not evenly distributed across the plasma, the lovely nested surfaces that confine the plasma may be destroyed. This process can rapidly spiral out of control, dumping all the current in the plasma to the vessel walls in an event called a disruption. A disruption is not something to be taken lightly, as Klinger notes. "A grown-up tokamak like JET [joint European tokamak] or our ASDEX upgrade [axially symmetric diverter experiment] starts to jump in the case of a disruption," he says. "These are big machines; imagine such a big machine starts jumping."

So while the tokamak can use a self-organizing magnetic field to confine the plasma, that field is subject to various instabilities. To avoid these building into problems, the tokamak has to operate in pulsed mode (though those pulses may be hours in duration), and it requires a lot of sensors, control systems, and feedback to minimize the instabilities.

To get this right, you need a good physical model of the plasma physics. Researchers use the model to look for the telltale signs that indicate the beginning of an instability. "My modeling is mostly related to how do we control these instabilities. How do we affect these instabilities so that they either do not occur or that, when they occur, we suppress them or ameliorate their presence," says Dr. Egbert Westerhof from the Dutch Institute for Fundamental Energy Research (DIFFER).

In the tokamak, this sort of modeling is simplified by the symmetry of the device, which reduces a 3D problem to 2D. The results from these physics-based models are then used to create empirical models that do not really contain detailed physics, but they can quickly provide predictive results within some limited range of plasma properties.

This simplicity has helped produce models that can calculate the tokamak's behavior faster than the tokamak can misbehave, a necessity for a successful control system. This hasn't really happened with the stellarator designs. "They are really far [ahead of us] in tokamaks because they have these models that work really well. They have been tested. And now they can actually predict the temperature and density profiles faster than real time, which is incredible. But we don’t have these models yet," explains Dr. Josefine Proll, an assistant professor at Technical University Eindhoven.

Externally organized confinement

The stellarator has little to no current in the plasma. This is because the externally applied magnetic field has all the properties required to confine the plasma. So, although the vacuum vessel is still basically a toroid, the magnets that loop around the tube are not planar. Instead, they have the shape needed to generate a twisted magnetic field. "If you shape your field in a clever way then you can make it so that the drifts basically cancel out, at least for those that would leave the plasma," says Proll.

Theoretically, that is. In practice, well, we're still working on it. To give a magnetic field precisely the right shape requires extensive calculation at many different scales, and all of it must happen in a 3D space.

So, computer code that simulates the plasma over the entire volume of a stellarator had to be developed, and that had to wait for computers that were powerful enough to perform the calculations. "These machines, these supercomputers of the '80s, made it possible to crank through the equations, to solve the equations simultaneously, and then it was found out, okay, the stellarator needs optimization," says Klinger.

Calling it optimization kind of undersells the problem, though. Scientists had to decide which parameters of the system needed to be optimized, and over what range. To make that decision more difficult, no single computer model can encompass the vast range of physics that needs to be included. To get an accurate picture of the plasma in a stellarator, you need separate models that calculate the applied magnetic field and the plasma's fluid-like behavior, called a magnetohydrodynamic model. Then, to test the magnetic field confinement against particle drift and particle collisions, you need models that track individual particles along field lines and other models that deal with diffusion. All of these models needed to be created and then verified against experimental data before optimization was even possible.

Listing image by Max Planck Institute for Plasma Physics

Close this section

Show HN: StackImpact – Python Production Profiler: CPU, Memory, Exceptions


StackImpact is a performance profiler for production applications. It gives developers a continuous and historical view of application performance with line-of-code precision, including CPU, memory-allocation, and blocking-call hot spots, as well as execution bottlenecks, errors, and runtime metrics. Learn more at



  • Automatic hot spot profiling for CPU, memory allocations, blocking calls
  • Automatic bottleneck tracing for HTTP handlers and other libraries
  • Exception monitoring
  • Health monitoring including CPU, memory, garbage collection and other runtime metrics
  • Anomaly alerts on most important metrics
  • Multiple account users for team collaboration

Learn more on the features page (with screenshots).


See full documentation for reference.

Supported environment

  • Linux, OS X or Windows. Python version 2.7, 3.4 or higher.
  • Memory allocation profiler and some GC metrics are only available for Python 3.
  • CPU and Time profilers only support Linux and OS X.
  • Time (blocking call) profiler supports threads and gevent.

Getting started

Create StackImpact account

Sign up for a free account at

Installing the agent

Install the Python agent by running

pip install stackimpact

And import the package in your application:

import stackimpact

Configuring the agent

Start the agent in the main thread by specifying the agent key and application name. The agent key can be found in your account's Configuration section.

agent = stackimpact.start(
    agent_key = 'agent key here',
    app_name = 'MyPythonApp')

Add the agent initialization to the worker code, if applicable.

Other initialization options:

  • app_version (Optional) Sets application version, which can be used to associate profiling information with the source code release.
  • app_environment (Optional) Used to differentiate applications in different environments.
  • host_name (Optional) By default, host name will be the OS hostname.
  • debug (Optional) Enables debug logging.
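Putting the options together, agent initialization might look like the following sketch (the key is a placeholder, and the optional values are illustrative only):

```python
import stackimpact

agent = stackimpact.start(
    agent_key = 'agent key here',      # from your account's Configuration section
    app_name = 'MyPythonApp',
    app_version = '1.0.0',             # optional: associate profiles with a release
    app_environment = 'production',    # optional: differentiate environments
    debug = True)                      # optional: enable debug logging
```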

Analyzing performance data in the Dashboard

Once your application is restarted, you can start observing continuous CPU, memory, I/O, and other hot spot profiles, execution bottlenecks as well as process metrics in the Dashboard.


To enable debug logging, add debug = True to startup options. If the debug log doesn't give you any hints on how to fix a problem, please report it to our support team in your account's Support section.


The agent overhead is measured to be less than 1% for applications under high load.

Close this section

Growing fibers

Good day, Schemers!

Over the last 12 to 18 months, as we were preparing for the Guile 2.2 release, I was growing increasingly dissatisfied at not having a good concurrency story in Guile.

I wanted to be able to spawn a million threads on a core, to support highly-concurrent I/O servers, and Guile's POSIX threads are just not the answer. I needed something different, and this article is about the search for and the implementation of that thing.

on pthreads

It's worth being specific about why POSIX threads are not a great abstraction. One reason is that they don't compose: two pieces of code that use mutexes won't necessarily compose together. A correct component A that takes locks might call a correct component B that takes locks, and the other way around, and if both happen concurrently you get the classic deadly-embrace deadlock.

POSIX threads are also terribly low-level. Asking someone to build a system with mutexes and cond vars is like building a house with exploding toothpicks.

I want to program network services in a straightforward way, and POSIX threads don't help me here either. I'd like to spawn a million "threads" (scare-quotes!), one for each client, each one just looping reading a request, computing and writing the response, and so on. POSIX threads aren't the concrete implementation of this abstraction though, as in most systems you can't have more than a few thousand of them.

Finally as a Guile maintainer I have a duty to tell people the good ways to make their programs, but I can't in good conscience recommend POSIX threads to anyone. If someone is a responsible programmer, then yes we can discuss details of POSIX threads. But for a new Schemer? Never. Recommending POSIX threads is malpractice.

on scheme

In Scheme we claim to be minimalists. Whether we actually are that or not is another story, but it's true that we have a culture of trying to grow expressive systems from minimal primitives.

It's sometimes claimed that in Scheme, we don't need threads because we have call-with-current-continuation, an ultrapowerful primitive that lets us implement any kind of control structure we want. (The name screams for an abbreviation, so the alias call/cc is blessed; minimalism is whatever we say it is, right?) Unfortunately it turned out that while call/cc can implement any control abstraction, it can't implement any two. Abstractions built on call/cc don't compose!

Fortunately, there is a way to build powerful control abstractions that do compose. This article covers the first half of composing a concurrency facility out of a set of more basic primitives.

Just to be concrete, I have to start with a simple implementation of an event loop. We're going to build on it later, but for now, here we go:

(define (run sched)
  (match sched
    (($ $sched inbox i/o)
     (define (dequeue-tasks)
       (append (dequeue-all! inbox)
               (poll-for-tasks i/o)))
     (let lp ()
       (for-each (lambda (task) (task))
                 (dequeue-tasks))
       (lp)))))

This is a scheduler that is a record with two fields, inbox and i/o.

The inbox holds a queue of pending tasks, as thunks (procedures of no arguments). When something wants to enqueue a task, it posts a thunk to the inbox.

On the other hand, when a task needs to wait on some external input or output becoming available, it will register an event with i/o. Typically i/o will be a simple combination of an epollfd and a mapping of tasks to enqueue when a file descriptor becomes readable or writable. poll-for-tasks does the underlying epoll_wait call that pulls new I/O events from the kernel.

There are some details I'm leaving out, like when to have epoll_wait return directly, and when to have it wait for some time, and how to wake it up if it's sleeping while a task is posted to the scheduler's inbox, but ultimately this is the core of an event loop.

a long digression

Now you might think that I'm getting a little far afield from what my goal was, which was threads or fibers or something. But that's OK, let's go a little farther and talk about "prompts". The term "prompt" comes from the experience you get when you work on the command-line:

/home/wingo% ./prog

I don't know about you all, but I have the feeling that the /home/wingo% has a kind of solid reality, that my screen is not just an array of characters but there is a left-hand-side that belongs to the system, and a right-hand-side that's mine. The two parts are delimited by a prompt. Well prompts in Scheme allow you to provide this abstraction within your program: you can establish a program part that's a "system" facility, for whatever definition of "system" suits your purposes, and a part that's for the "user".

In a way, prompts generalize a pattern of system/user division that has special facilities in other programming languages, such as a try/catch block.

try {
  ...  // "user" code
} catch (e) {
  ...  // "system" handler
}

Here again I put the "user" code in italics. Some other examples of control-flow patterns that prompts generalize would be early exit of a subcomputation, coroutines, and nondeterministic choice like SICP's amb operator. Coroutines are obviously where I'm headed in the context of this article, but still there are some details to go over.

To make a prompt in Guile, you can use the % operator, which is pronounced "prompt":

(use-modules (ice-9 control))

(% expr
   (lambda (k . args) #f))

The name for this operator comes from Dorai Sitaram's 1993 paper, Handling Control; it's actually a pun on the tcsh prompt, if you must know. Anyway the basic idea in this example is that we run expr, but if it aborts we run the lambda handler instead, which just returns #f.

Really % is just syntactic sugar for call-with-prompt though. The previous example desugars to something like this:

(let ((tag (make-prompt-tag)))
  (call-with-prompt tag
    ;; Body:
    (lambda () expr)
    ;; Escape handler:
    (lambda (k . args) #f)))

(It's not quite the same; % uses a "default prompt tag". This is just a detail though.)

You see here that call-with-prompt is really the primitive. It will call the body thunk, but if an abort occurs within the body to the given prompt tag, then the body aborts and the handler is run instead.

So if you want to define a primitive that runs a function but allows early exit, we can do that:

(define-module (my-module)
  #:export (with-return))

(define-syntax-rule (with-return return body ...)
  (let ((t (make-prompt-tag)))
    (define (return . args)
      (apply abort-to-prompt t args))
    (call-with-prompt t
      (lambda () body ...)
      (lambda (k . rvals)
        (apply values rvals)))))

Here we define a module with a little with-return macro. We can use it like this:

(use-modules (my-module))

(with-return return
  (+ 3 (return 42)))
;; => 42

As you can see, calling return within the body will abort the computation and cause the with-return expression to evaluate to the arguments passed to return.

But what's up with the handler? Let's look again at the form of the call-with-prompt invocations we've been making.

(let ((tag (make-prompt-tag)))
  (call-with-prompt tag
    (lambda () ...)
    (lambda (k . args) ...)))

With the with-return macro, the handler took a first k argument, threw it away, and returned the remaining values. But the first argument to the handler is pretty cool: it is the continuation of the computation that was aborted, delimited by the prompt: meaning, it's the part of the computation between the abort-to-prompt and the call-with-prompt, packaged as a function that you can call.

If you call the k, the delimited continuation, you reinstate it:

(define (f)
  (define tag (make-prompt-tag))
  (call-with-prompt tag
   (lambda ()
     (+ 3
        (abort-to-prompt tag)))
   (lambda (k) k)))

(let ((k (f)))
  (k 1))
;; => 4

Here, the abort-to-prompt invocation behaved simply like a "suspend" operation, returning the suspended computation k. Calling that continuation resumes it, supplying the value 1 to the saved continuation (+ 3 []), resulting in 4.

Basically, when a delimited continuation suspends, the first argument to the handler is a function that can resume the continuation.

tasks to fibers

And with that, we just built coroutines in terms of delimited continuations. We can turn our scheduler inside-out, giving the illusion that each task runs in its own isolated fiber.

(define tag (make-prompt-tag))

(define (call/susp thunk)
  (define (handler k on-suspend) (on-suspend k))
  (call-with-prompt tag thunk handler))

(define (suspend on-suspend)
  (abort-to-prompt tag on-suspend))

(define (schedule thunk)
  (match (current-scheduler)
    (($ $sched inbox i/o)
     (enqueue! inbox (lambda () (call/susp thunk))))))

So! Here we have a system that can run a thunk in a scheduler. Fine. No big deal. But if the thunk calls suspend, then it causes an abort back to a prompt. suspend takes a procedure as an argument, the on-suspend procedure, which will be called with one argument: the suspended continuation of the thunk. We've layered coroutines on top of the event loop.

Guile's virtual machine is a normal register virtual machine with a stack composed of function frames. It's not necessary to do full CPS conversion to implement delimited control, but if you don't, then your virtual machine needs primitive support for call-with-prompt, as Guile's VM does. In Guile then, a suspended continuation is an object composed of the slice of the stack captured between the prompt and the abort, and also the slice of the dynamic stack. (Guile keeps a parallel stack for dynamic bindings. Perhaps we should unify these; dunno.) This object is wrapped in a little procedure that uses VM primitives to push those stack frames back on, and continue.

I say all this just to give you a mental idea of what it costs to suspend a fiber. It will allocate storage proportional to the stack depth between the prompt and the abort. Usually this is a few dozen words, if there are 5 or 10 frames on the stack in the fiber.

We've gone from prompts to coroutines, and from here to fibers there's just a little farther to go. First, note that spawning a new fiber is simply scheduling a thunk:

(define (spawn-fiber thunk)
  (schedule thunk))

Many threading libraries provide a "yield" primitive, which simply suspends the current thread, allowing others to run. We can do this for fibers directly:

(define (yield)
  (suspend schedule))

Note that the on-suspend procedure here is just schedule, which re-schedules the continuation (but presumably at the back of the queue).

Similarly if we are reading on a non-blocking file descriptor and detect that we need more input before we can continue, but none is available, we can suspend and arrange for the epollfd to resume us later:

(define (wait-for-readable fd)
  (suspend
   (lambda (k)
     (match (current-scheduler)
       (($ $sched inbox i/o)
        (add-read-fd! i/o fd
                      (lambda () (schedule k))))))))

In Guile you can arrange to install this function as the "current read waiter", causing it to run whenever a port would block. The details are a little gnarly currently; see the Non-blocking I/O manual page for more.

Anyway the cool thing is that I can run any thunk within a spawn-fiber, without modification, and it will run as if in a new thread of some sort.

solid abstractions?

I admit that although I am very happy with Emacs, I never really took to using the shell from within Emacs. I always have a terminal open with a bunch of tabs. I think the reason for that is that I never quite understood why I could move the cursor over the bash prompt, or into previous expressions or results; it seemed like I was waking up groggily from some kind of dream where nothing was real. I like the terminal, where the only bit that's "mine" is the current command. All the rest is immutable text in the scrollback.

Similarly when you make a UI, you want to design things so that people perceive the screen as being composed of buttons and so on, not just lines. In essence you trick the user, a willing user who is ready to be tricked, into seeing buttons and text and not just weird pixels.

In the same way, with fibers we want to provide the illusion that fibers actually exist. To solidify this illusion, we're still missing a few elements.

One point relates to error handling. As it is, if an error happens in a fiber and the fiber doesn't handle it, the exception propagates out of the fiber, through the scheduler, and might cause the whole program to error out. So we need to wrap fibers in a catch-all.

(define (spawn-fiber thunk)
  (schedule
   (lambda ()
     (catch #t thunk
       (lambda (key . args)
         (print-exception (current-error-port) #f key args))))))

Well, OK. Exceptions won't propagate out of fibers, yay. In fact in Guile we add another catch inside the print-exception, in case the print-exception throws an exception... Anyway. Cool.

Another point relates to fiber-local variables. In an operating system, each process has a number of variables that are local to it, notably in UNIX we have the umask, the current effective user, the current directory, the open files and what file descriptors they are associated with, and so on. In Scheme we have similar facilities in the form of parameters.

Now the usual way that parameters are used is to bind a new value within the extent of some call:

(define (with-output-to-string thunk)
  (let ((p (open-output-string)))
    (parameterize ((current-output-port p))
      (thunk))
    (get-output-string p)))

Here the parameterize invocation established p as the current output port during the call to thunk. Parameters already compose quite well with prompts; Guile, like Racket, implements the protocol described by Kiselyov, Shan, and Sabry in their Delimited Dynamic Binding paper (well worth a read!).

The one missing piece is that parameters in Scheme are mutable (by default). Normally if you call (current-input-port), you just get the current value of the current input port parameter. But if you pass an argument, like (current-input-port p), then you actually set the current input port to that new value. This value will be in place until we leave some parameterize invocation that parameterizes the current input port.
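Concretely, the get/set behavior looks like this (a small sketch of Guile's parameter API; p is a throwaway parameter, not something from the article):

```scheme
(define p (make-parameter 42))
(p)        ;; => 42: calling with no argument reads the value
(parameterize ((p 43))
  (p 44)   ;; calling with an argument mutates the current binding
  (p))     ;; => 44
(p)        ;; => 42: the mutation was confined to the parameterize
```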

The problem here is that it could be that there's an interesting parameter which some piece of Scheme code will want to just mutate, so that all further Scheme code will use the new value. This is fine if you have no concurrency: there's just one thing running. But when you have many fibers, you want to avoid mutations in one fiber from affecting others. You want some isolation with regards to parameters. In Guile, we do this with the with-dynamic-state facility, which isolates changes to the dynamic state (parameters and so on) within the extent of the with-dynamic-state call.

(define (spawn-fiber thunk)
  (let ((state (current-dynamic-state)))
    (schedule
     (lambda ()
       (catch #t
         (lambda ()
           (with-dynamic-state state thunk))
         (lambda (key . args)
           (print-exception (current-error-port) #f key args)))))))

Interestingly, with-dynamic-state solves another problem as well. You would like for newly spawned fibers to inherit the parameters from the point at which they were spawned.

(parameterize ((current-output-port p))
  (spawn-fiber
   ;; New fiber should inherit current-output-port
   ;; binding as "p"
   (lambda () ...)))

Capturing the (current-dynamic-state) outside the thunk does this for us.

When I made this change in Guile, making sure that with-dynamic-state did not impose a continuation barrier, I ran into a problem. In Guile we implemented exceptions in terms of delimited continuations and dynamic binding. The current stack of exception handlers was a list, and each element included the exceptions handled by that handler, and what prompt to which to abort before running the exception handler. See where the problem is? If we ship this exception handler stack over to a new fiber, then an exception propagating out of the new fiber would be looking up handlers from another fiber, for prompts that probably aren't even on the stack any more.

The problem here is that if you store a heap-allocated stack of current exception handlers in a dynamic variable, and that dynamic variable is captured somehow (say, by a delimited continuation), then you capture the whole stack of handlers, not (in the case of delimited continuations) the delimited set of handlers that were active within the prompt. To fix this, we had to change Guile's exceptions to instead make catch just rebind the exception handler parameter to hold the handler installed by the catch. If Guile needs to walk the chain of exception handlers, we introduced a new primitive fluid-ref* to do so, building the chain from the current stack of parameterizations instead of some representation of that stack on the heap. It's O(n), but life is that way sometimes. This way also, delimited continuations capture the right set of exception handlers.

Finally, Guile also supports asynchronous interrupts. We can arrange to interrupt a Guile process (or POSIX thread) every so often, as measured in wall-clock or process time. It used to be that interrupt handlers caused a continuation barrier, but this is no longer the case, so now we can add pre-emption to fibers using interrupts.

summary and reflections

In Guile we were able to create a solid-seeming abstraction for fibers by composing other basic building blocks from the Scheme toolkit. Guile users can take an abstraction that's implemented in terms of an event loop (any event loop) and layer fibers on top in a way that feels "real". We were able to do this because we have prompts (delimited continuations) and parameters (dynamic binding), and we were able to compose the two. Actually getting it all to work required fixing a few bugs.

In Fibers, we just use delimited continuations to implement coroutines, and then our fibers are coroutines. If we had coroutines as a primitive, that would work just as well. As it is, each suspension of a fiber will allocate a new continuation. Perhaps this is unimportant, given the average continuation size, but it would be comforting in a way to be able to re-use the allocation from the previous suspension (if any). Other languages with coroutine primitives might have an advantage here, though delimited dynamic binding is still relatively uncommon.

Another point is that because we use prompts to suspend fibers, we are effectively always unwinding and rewinding the dynamic state. In practice this should be transparent to the user, and the implementor should make it transparent from a performance perspective, with the exception of dynamic-wind. Basically, any fiber suspension will run the "out" guard of any enclosing dynamic-wind, and resumption will run the "in" guard. In practice we find that we defer "finalization" issues to with-throw-handler / catch, which unlike dynamic-wind don't run on every entry or exit of a dynamic extent and rather just run on exceptional exits. We will see over time if this situation is acceptable. It's certainly another nail in the coffin of dynamic-wind though.

This article started with pthreads malaise, and although we've solved the problem of having a million fibers, we haven't solved the communications problem. How should fibers communicate with each other? This is the topic for my next article. Until then, happy hacking :)

Close this section

Moderate drinking associated with atrophy in brain related to memory, learning

The question

Popular belief, backed up by various studies, holds that a moderate amount of alcohol can be good for your heart. Might it have a similar effect on your brain?

This study

The study tracked 550 adults for 30 years, starting when they were, on average, 43 years old, periodically assessing their alcohol consumption and cognitive performance. None of the participants had an alcohol dependency. Standardized testing showed that people who drank the most during the three decades had a faster and greater decline in cognitive functioning than those who consumed less alcohol. Brain MRIs at the end of the study revealed greater hippocampal atrophy, a loss of cells in the region of the brain that is key to memory and learning, among heavier drinkers compared with lighter drinkers. But even moderate drinkers were three times as likely to have brain atrophy as non-drinkers. The researchers found no brain-related benefits for alcohol consumption at any level, including very light drinking, compared with abstinence.

Who may be affected?

Adults who consume alcohol. Current U.S. guidelines describe moderate drinking as one drink a day for women, and two for men. Examples of standard alcoholic drinks include a 12-ounce beer, a five-ounce glass of wine and a 1.5-ounce drink of 80-proof liquor. The study authors wrote that their findings “call into question the current U.S. guidelines.”


Caveats

Most study participants were men. Data on alcohol consumption came from the participants’ responses on questionnaires. Factors other than alcohol may have contributed to brain changes in the participants.

Find this study

Online June 6 in The BMJ (search for “alcohol consumption”).

Learn more

Information on how alcohol can affect health is available online; click on “Beyond Hangovers . . .” or search for “moderate alcohol use”.

The research described in Quick Study comes from credible, peer-reviewed journals.


How to read and understand a scientific paper: a guide for non-scientists

From vaccinations to climate change, getting science wrong has very real consequences. But journal articles, a primary way science is communicated in academia, are a different format from newspaper articles or blogs, and require a level of skill and undoubtedly a greater amount of patience. Here Jennifer Raff has prepared a helpful guide for non-scientists on how to read a scientific paper. These steps and tips will be useful to anyone interested in the presentation of scientific findings, and raise important points for scientists to consider in their own writing practice.

My post, The truth about vaccinations: Your physician knows more than the University of Google, sparked a very lively discussion, with comments from several people trying to persuade me (and the other readers) that their paper disproved everything that I’d been saying. While I encourage you to go read the comments and contribute your own, here I want to focus on the much larger issue that this debate raised: what constitutes scientific authority?

It’s not just a fun academic problem. Getting the science wrong has very real consequences. For example, when a community doesn’t vaccinate children because they’re afraid of “toxins” and think that prayer (or diet, exercise, and “clean living”) is enough to prevent infection, outbreaks happen.

“Be skeptical. But when you get proof, accept proof.” –Michael Specter

What constitutes enough proof? Obviously everyone has a different answer to that question. But to form a truly educated opinion on a scientific subject, you need to become familiar with current research in that field. And to do that, you have to read the “primary research literature” (often just called “the literature”). You might have tried to read scientific papers before and been frustrated by the dense, stilted writing and the unfamiliar jargon. I remember feeling this way!  Reading and understanding research papers is a skill which every single doctor and scientist has had to learn during graduate school.  You can learn it too, but like any skill it takes patience and practice.

I want to help people become more scientifically literate, so I wrote this guide for how a layperson can approach reading and understanding a scientific research paper. It’s appropriate for someone who has no background whatsoever in science or medicine, and is based on the assumption that he or she is doing this in order to get a basic understanding of a paper and decide whether or not it’s a reputable study.

The type of scientific paper I’m discussing here is referred to as a primary research article. It’s a peer-reviewed report of new research on a specific question (or questions). Another useful type of publication is a review article. Review articles are also peer-reviewed, and don’t present new information, but summarize multiple primary research articles, to give a sense of the consensus, debates, and unanswered questions within a field.  (I’m not going to say much more about them here, but be cautious about which review articles you read. Remember that they are only a snapshot of the research at the time they are published.  A review article on, say, genome-wide association studies from 2001 is not going to be very informative in 2013. So much research has been done in the intervening years that the field has changed considerably).

Before you begin: some general advice

Reading a scientific paper is a completely different process than reading an article about science in a blog or newspaper. Not only do you read the sections in a different order than they’re presented, but you also have to take notes, read it multiple times, and probably go look up other papers for some of the details. Reading a single paper may take you a very long time at first. Be patient with yourself. The process will go much faster as you gain experience.

Most primary research papers will be divided into the following sections: Abstract, Introduction, Methods, Results, and Conclusions/Interpretations/Discussion. The order will depend on which journal it’s published in. Some journals have additional files (called Supplementary Online Information) which contain important details of the research, but are published online instead of in the article itself (make sure you don’t skip these files).

Before you begin reading, take note of the authors and their institutional affiliations. Some institutions (e.g. University of Texas) are well-respected; others (e.g. the Discovery Institute) may appear to be legitimate research institutions but are actually agenda-driven. Tip: google “Discovery Institute” to see why you don’t want to use it as a scientific authority on evolutionary theory.

Also take note of the journal in which it’s published. Reputable (biomedical) journals will be indexed by Pubmed. [EDIT: Several people have reminded me that non-biomedical journals won’t be on Pubmed, and they’re absolutely correct! (thanks for catching that, I apologize for being sloppy here). Check out Web of Science for a more complete index of science journals. And please feel free to share other resources in the comments!]  Beware of questionable journals.

As you read, write down every single word that you don’t understand. You’re going to have to look them all up (yes, every one. I know it’s a total pain. But you won’t understand the paper if you don’t understand the vocabulary. Scientific words have extremely precise meanings).


Step-by-step instructions for reading a primary research article

1. Begin by reading the introduction, not the abstract.

The abstract is that dense first paragraph at the very beginning of a paper. In fact, that’s often the only part of a paper that many non-scientists read when they’re trying to build a scientific argument. (This is a terrible practice—don’t do it.).  When I’m choosing papers to read, I decide what’s relevant to my interests based on a combination of the title and abstract. But when I’ve got a collection of papers assembled for deep reading, I always read the abstract last. I do this because abstracts contain a succinct summary of the entire paper, and I’m concerned about inadvertently becoming biased by the authors’ interpretation of the results.

2. Identify the BIG QUESTION.

Not “What is this paper about”, but “What problem is this entire field trying to solve?”

This helps you focus on why this research is being done.  Look closely for evidence of agenda-motivated research.

3. Summarize the background in five sentences or less.

Here are some questions to guide you:

What work has been done before in this field to answer the BIG QUESTION? What are the limitations of that work? What, according to the authors, needs to be done next?

The five sentences part is a little arbitrary, but it forces you to be concise and really think about the context of this research. You need to be able to explain why this research has been done in order to understand it.

4. Identify the SPECIFIC QUESTION(S)

What exactly are the authors trying to answer with their research? There may be multiple questions, or just one. Write them down.  If it’s the kind of research that tests one or more null hypotheses, identify it/them.

Not sure what a null hypothesis is? Go read this, then go back to my last post and read one of the papers that I linked to (like this one) and try to identify the null hypotheses in it. Keep in mind that not every paper will test a null hypothesis.

5. Identify the approach

What are the authors going to do to answer the SPECIFIC QUESTION(S)?

6. Now read the methods section. Draw a diagram for each experiment, showing exactly what the authors did.

I mean literally draw it. Include as much detail as you need to fully understand the work.  As an example, here is what I drew to sort out the methods for a paper I read today (Battaglia et al. 2013: “The first peopling of South America: New evidence from Y-chromosome haplogroup Q”). This is much less detail than you’d probably need, because it’s a paper in my specialty and I use these methods all the time.  But if you were reading this, and didn’t happen to know what “process data with reduced-median method using Network” means, you’d need to look that up.

Image credit: author

You don’t need to understand the methods in enough detail to replicate the experiment—that’s something reviewers have to do—but you’re not ready to move on to the results until you can explain the basics of the methods to someone else.

7. Read the results section. Write one or more paragraphs to summarize the results for each experiment, each figure, and each table. Don’t yet try to decide what the results mean, just write down what they are.

You’ll find that, particularly in good papers, the majority of the results are summarized in the figures and tables. Pay careful attention to them!  You may also need to go to the Supplementary Online Information file to find some of the results.

It is at this point that difficulties can arise if statistical tests are employed in the paper and you don’t have enough of a background to understand them. I can’t teach you stats in this post, but here, here, and here are some basic resources to help you. I STRONGLY advise you to become familiar with them.

Things to pay attention to in the results section:

  • Any time the words “significant” or “non-significant” are used. These have precise statistical meanings. Read more about this here.
  • If there are graphs, do they have error bars on them? For certain types of studies, a lack of confidence intervals is a major red flag.
  • The sample size. Has the study been conducted on 10, or 10,000 people? (For some research purposes, a sample size of 10 is sufficient, but for most studies larger is better).

8. Do the results answer the SPECIFIC QUESTION(S)? What do you think they mean?

Don’t move on until you have thought about this. It’s okay to change your mind in light of the authors’ interpretation—in fact you probably will if you’re still a beginner at this kind of analysis—but it’s a really good habit to start forming your own interpretations before you read those of others.

9. Read the conclusion/discussion/interpretation section.

What do the authors think the results mean? Do you agree with them? Can you come up with any alternative way of interpreting them? Do the authors identify any weaknesses in their own study? Do you see any that the authors missed? (Don’t assume they’re infallible!) What do they propose to do as a next step? Do you agree with that?

10. Now, go back to the beginning and read the abstract.

Does it match what the authors said in the paper? Does it fit with your interpretation of the paper?

11. FINAL STEP: (Don’t neglect doing this) What do other researchers say about this paper?

Who are the (acknowledged or self-proclaimed) experts in this particular field? Do they have criticisms of the study that you haven’t thought of, or do they generally support it?

Here’s a place where I do recommend you use google! But do it last, so you are better prepared to think critically about what other people say.

(12. This step may be optional for you, depending on why you’re reading a particular paper. But for me, it’s critical! I go through the “Literature cited” section to see what other papers the authors cited. This allows me to better identify the important papers in a particular field, see if the authors cited my own papers (KIDDING!….mostly), and find sources of useful ideas or techniques.)

UPDATE: If you would like to see an example of how to read a science paper using this framework, you can find one here.

I gratefully acknowledge Professors José Bonner and Bill Saxton for teaching me how to critically read and analyze scientific papers using this method. I’m honored to have the chance to pass along what they taught me.

I’ve written a shorter version of this guide for teachers to hand out to their classes. If you’d like a PDF, shoot me an email: jenniferraff (at) utexas (dot) edu. For further comments and additional questions on this guide, please see the Comments Section on the original post.

This piece originally appeared on the author’s personal blog and is reposted with permission.

Featured image credit: Scientists in a laboratory of the University of La Rioja by Urcomunicacion (Wikimedia CC BY3.0)

Note: This article gives the views of the authors, and not the position of the LSE Impact blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Jennifer Raff (Indiana University—dual Ph.D. in genetics and bioanthropology) is an assistant professor in the Department of Anthropology, University of Kansas, director and Principal Investigator of the KU Laboratory of Human Population Genomics, and assistant director of KU’s Laboratory of Biological Anthropology. She is also a research affiliate with the University of Texas anthropological genetics laboratory. She is keenly interested in public outreach and scientific literacy, writing about topics in science and pseudoscience for her blog, the Huffington Post, and the Social Evolution Forum.


Show HN: GreenPiThumb – A Raspberry Pi Gardening Bot

This is the story of GreenPiThumb: a gardening bot that automatically waters houseplants, but also sometimes kills them.

GreenPiThumb full system

The story begins about a year ago, when I was struck by a sudden desire to own a houseplant. A plant would look nice, supply me with much needed oxygen, and imply to guests that I’m a responsible grown-up, capable of caring for a living thing.

But I’m a programmer, not a gardener. If I had a plant, I’d have to water it and check the plant’s health a few times per week. I decided it would be much easier if I just spent several hundred hours building a robot to do that for me. If the plant lives to be 80 years old, I come out slightly ahead.

Like most software projects I take on, my main motivation with GreenPiThumb was to learn new technologies. I wrote my previous app, ProsperBot, to teach myself Go, Ansible, and Redis. I saw GreenPiThumb as a chance to learn front end development, specifically JavaScript and AngularJS.

My friend Jeet had just started learning to program, so I asked if he’d be interested in collaborating with me on GreenPiThumb. It seemed like a good opportunity for him to learn about healthy software engineering practices like code reviews, unit tests, and continuous integration. Jeet was up for it, so we set off on what we thought would be a two- or three-month endeavor.

The Raspberry Pi is a small, inexpensive computer built for hobbyists. People have used Raspberry Pis to create futuristic smart mirrors, run old video games, and drive electric skateboards.

Raspberry Pi

I’ve been playing with Raspberry Pis for the past few years, but I’m a software guy, so I had never used them for anything more than cheap toy servers. For most of the enthusiast community, the Raspberry Pi’s main draw is how well it integrates with consumer electronics.

With the number of sensors and integration guides available for it, the Raspberry Pi was a natural fit for GreenPiThumb. I figured using the Raspberry Pi would also challenge me to learn its hardware capabilities and finally figure out what those “GPIO pins” actually do.

Raspberry Pi and its mysterious GPIO pins

We were certainly not the first people to think of building a Raspberry Pi-powered gardening bot. Two cool projects that preceded us were PiPlanter and Plant Friends, but there have been a handful of others as well.

We decided to build our own for two reasons: it’s fun to make your own stuff, and we wanted our bot’s software to be a first-class concern.

The majority of Raspberry Pi projects are created by enthusiasts who are great with electronics but don’t have professional software experience. We wanted to be the opposite – great software, but the hardware barely works and sometimes gets so hot that it melts our breadboard.

An early prototype that likely had a “catching on fire” problem

The code for GreenPiThumb is open-source and features:

  • Full unit tests
  • Code coverage tracking
  • Continuous integration
  • Debug logging
  • Thorough documentation – both READMEs and code comments
  • Consistent adherence to a style guide
  • An installer tool
GreenPiThumb wiring diagram
GreenPiThumb wiring diagram (downloadable file)

The Raspberry Pi reads digital signals, so it’s not capable of reading analog sensors directly. We use the MCP3008 analog-to-digital converter to produce digital readings from the analog soil moisture sensor and light sensor.
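As a rough sketch of what that conversation looks like, the MCP3008 takes a three-byte SPI request (start bit, single-ended flag, channel number) and returns the 10-bit conversion spread across the last two reply bytes. The helpers below are illustrative, not GreenPiThumb's actual code, and the `spidev` usage in the comments assumes typical wiring on SPI bus 0:

```python
def mcp3008_command(channel):
    """Build the 3-byte SPI request: start bit, single-ended flag, channel (0-7)."""
    assert 0 <= channel <= 7
    return [0x01, (0x08 | channel) << 4, 0x00]

def mcp3008_decode(reply):
    """Extract the 10-bit result: low 2 bits of byte 1, all 8 bits of byte 2."""
    return ((reply[1] & 0x03) << 8) | reply[2]

def to_percent(raw):
    """Scale a 10-bit reading (0-1023) to a 0-100 percentage."""
    return round(raw / 1023 * 100, 1)

# On a real Pi (hypothetical wiring; requires the spidev package):
#   import spidev
#   spi = spidev.SpiDev()
#   spi.open(0, 0)                         # bus 0, chip-select 0
#   reply = spi.xfer2(mcp3008_command(0))  # e.g. soil sensor on channel 0
#   print(to_percent(mcp3008_decode(reply)))
```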

The DHT11 sensor detects temperature and humidity in the air. It produces a digital signal, so it can plug right into the Raspberry Pi.

Lastly, we have a 12V water pump, but the Raspberry Pi can only output 5V, so we connect a 12V power adapter to the pump in series with a MOSFET. The Raspberry Pi uses the MOSFET as a digital switch, breaking or completing the circuit when it wants to turn its pump off or on.
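In code, that switch boils down to driving one GPIO line high or low at the MOSFET's gate. A minimal sketch, with a GPIO object injected so the logic can be exercised without hardware (the class name and pin number are illustrative, and the commented `RPi.GPIO` calls are an assumption about typical setup, not GreenPiThumb's actual wiring):

```python
class PumpSwitch:
    """Drives a MOSFET gate via one GPIO pin to switch the 12V pump circuit."""

    def __init__(self, gpio, pin):
        self._gpio = gpio  # any object with an output(pin, level) method
        self._pin = pin

    def turn_on(self):
        self._gpio.output(self._pin, 1)  # gate high: circuit completed, pump runs

    def turn_off(self):
        self._gpio.output(self._pin, 0)  # gate low: circuit broken, pump stops

# On a real Pi this might be backed by RPi.GPIO (hypothetical BCM pin 26):
#   import RPi.GPIO as GPIO
#   GPIO.setmode(GPIO.BCM)
#   GPIO.setup(26, GPIO.OUT)
#   pump = PumpSwitch(GPIO, 26)
#   pump.turn_on()
```

Injecting the GPIO object also makes the pump logic unit-testable with a fake, which fits the project's testing-heavy approach.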

GreenPiThumb software architecture
GreenPiThumb software architecture

GreenPiThumb back end

The back end does the heavy lifting of GreenPiThumb. It’s a Python app responsible for:

  • Managing the physical sensors (soil moisture, temperature, etc.)
  • Turning the water pump on and off
  • Recording events and sensor readings in the database

GreenPiThumb web API

The web API is an HTTP interface that serves information about GreenPiThumb’s state and history. It’s a thin wrapper over GreenPiThumb’s database. It translates everything into JSON, which makes it easier for web applications to understand.
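In spirit, each endpoint just serializes rows from the database into JSON for the dashboard. A toy sketch of that translation step (the field names and sample values are illustrative assumptions, not the project's actual schema):

```python
import json

def readings_to_json(rows):
    """Convert (timestamp, value) DB rows into JSON a web dashboard can chart."""
    return json.dumps(
        [{"timestamp": ts, "soil_moisture": value} for ts, value in rows]
    )

# Example with two hypothetical sensor readings:
payload = readings_to_json([("2017-06-27T12:00:00Z", 82.0),
                            ("2017-06-27T18:00:00Z", 85.0)])
```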

GreenPiThumb web dashboard

The web dashboard shows GreenPiThumb’s current state and creates graphs of sensor readings over time.


Our Raspberry Pi isn’t quite up to the challenge of acting as an internet-facing web server, but here’s a static mirror of the GreenPiThumb dashboard that’s identical to our local one:


To deploy GreenPiThumb to our Raspberry Pi device, we use Ansible, an open source IT automation tool.

We created a custom GreenPiThumb Ansible configuration (or “role” in Ansible terms) for deploying all of the software GreenPiThumb needs. The Ansible role downloads and installs GreenPiThumb’s back end and front end code, as well as the third-party software components that GreenPiThumb depends on.

With just a few commands, you can use this tool on a fresh Raspberry Pi device and have all of GreenPiThumb’s software up and running within minutes.

GreenPiThumb took over a year to complete, much longer than expected due to roadblocks that halted progress for weeks at a time. I’ve described some of our more interesting obstacles below.

Water distribution

The other Raspberry Pi gardening projects don’t talk about how they spread water over the soil, which is a shame because we still haven’t figured it out.

The first time we pumped water into our planter, the tube directed a small stream into one spot, completely soaking that area but leaving the rest of the soil dry. We considered coiling the rubber tubing around the inner wall of the planter and poking holes in the tube, but we weren’t sure if this would get enough water to the center part of the soil. We tried using a showerhead, but couldn’t figure out how to fasten it water-tight to the tubing and still control the stream’s direction.

We ultimately settled on “spray and pray.” It was a solution borne out of looking around my apartment and randomly grabbing things that might solve our problem. We cut a finger off of a small kitchen glove, fastened it to the water tube with a tightly doubled rubber band, and made lots of holes in the glove using a sewing needle and nail clippers.

We turned on the pump, and the severed finger of the glove immediately shot off the tubing, spraying water all over my apartment’s wall. We reattached everything, but this time, stuck a safety pin just in front of the rubber bands so that they couldn’t slide forward.

Water sprayer (front) Water sprayer (side)
Kitchen glove turned water distributor

It’s not the most elegant solution, but it mostly works.

The gardening part wasn’t supposed to be hard

Electronics were supposed to be the big challenge of GreenPiThumb. Gardening didn’t seem that hard. Green beans, in particular, are frequently described as a hardy plant that requires only basic gardening skills to grow.

It turned out that we didn’t have basic gardening skills. GreenPiThumb is intended to automate the human part of tending an indoor garden, but to automate anything, a human has to know what “correct” looks like. It was hard to decide whether GreenPiThumb was watering too much or too little because we ourselves had no idea how much water to use. That’s how we ended up accidentally making this horticultural snuff film:

How hard can it be to measure moisture?

Our most vexing problem was dirt.

When we set out to build GreenPiThumb, we expected that soil moisture would increase on days we watered the plant and decrease on days we didn’t. GreenPiThumb’s job would simply be to maintain the correct moisture level by adding water whenever the reading dropped below a certain threshold.

Below, we’ve used expensive and complex modeling software to visualize the soil moisture pattern we expected for GreenPiThumb:

Soil moisture pattern
Soil moisture pattern, imagined

Bad readings

Soil refused to cooperate with us. In our initial build, the soil moisture reading oscillated between 95% and 100%, then slowly converged to ~99.5%. We took out the soil sensor and tested it against different media: air, water, a wet paper towel, our hands, totally dry soil. All of these gave sensible readings, but soil with any kind of moisture made the sensor shoot up to nearly 100%.

Soil moisture level
Soil moisture readings, original moisture sensor

We originally used Dickson Chow’s Plant Friends soil probe, but we swapped it out for the SparkFun soil sensor. The new sensor got a reading of 82% in damp soil, and it would jump to 85% immediately after the soil was watered. Within a few hours, however, it would sink back down to 82% and remain there for days. The sensor seemed unable to distinguish between soil that was watered three hours ago and soil that hadn’t been watered for five days.

Soil moisture level
Soil moisture readings, SparkFun moisture sensor

I think my dirt is broken

Miracle-Gro soil

Maybe it was the dirt’s fault. Our potting soil was this pre-packaged mix from Miracle-Gro that featured an “easy to water formula.” A bit suspicious, no? Clearly, this was evil, genetically engineered dirt that never dries. That’s why our poor soil sensors were so confused.

We needed dirt that wouldn’t play games with us, so we purchased this organic potting mix. As a test, we filled a plastic cup with the organic soil, added water, poked holes in the bottom to let it drain, then let it sit for three days to match the soil conditions in our GreenPiThumb planter. At the end of three days, we tested our sensor in both types of soil.

We got the exact same reading: 82% in each. So, we couldn’t blame the soil…

Giving up

Out of ideas, we revisited the projects that inspired GreenPiThumb. How did they solve this problem?

Plant Friends doesn’t pump water at all. PiPlanter measures the soil moisture, but waters on a fixed schedule, regardless of moisture level. Some Googling turned up a few Raspberry Pi gardening projects that claim that they water solely based on soil moisture, but none of them publish their source code nor share their result data. Therefore, we felt it was fair to assume that watering based on moisture level is impossible and that GreenPiThumb is doing the best it possibly can, given certain inexorable limits of the physical world.

We ultimately decided to switch to a hybrid system. GreenPiThumb now waters the plant if the soil gets too dry or if seven days have elapsed since the last watering.
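That hybrid policy is simple enough to state as a pure function. A sketch, with the moisture threshold chosen as a placeholder rather than the project's tuned value:

```python
from datetime import datetime, timedelta

MOISTURE_THRESHOLD = 80.0        # assumed "too dry" cutoff, in percent
MAX_DAYS_BETWEEN_WATERINGS = 7

def should_water(moisture, last_watered, now):
    """Water if the soil is too dry OR a week has passed since the last watering."""
    too_dry = moisture < MOISTURE_THRESHOLD
    overdue = (now - last_watered) >= timedelta(days=MAX_DAYS_BETWEEN_WATERINGS)
    return too_dry or overdue
```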

Below are some images of our completed GreenPiThumb build:

GreenPiThumb full system GreenPiThumb full system
GreenPiThumb electronics GreenPiThumb pump GreenPiThumb reservoir

The timelapses have been the most fun part of this process. This one is from our first batch of green beans (R.I.P.). We didn’t realize how quickly the plants would outgrow the close overhead angle. We eventually switched to a larger bendy mount, which gets a better angle on the plant’s full lifecycle, but our original setup caught a great timelapse of the first few days of growth:

For the second batch, we kept the camera in the exact same position throughout growth. This is the progress of batch two so far:

Nothing is as simple as it seems

I thought this would be a straightforward two- to three-month project, but it took us over a year to complete because nothing is as simple as it seems.

It’s a lesson I learned long ago from Joel Spolsky, software essayist extraordinaire, and it’s a lesson I expect to learn again and again on many software projects to come.

Electronics: start with the basics

Arduino starter kit

At the start of GreenPiThumb, my only knowledge of electronics was based on faint memories of high school physics. I bought the Arduino starter kit and went through the tutorials to build a foundation in electronics.

These tutorials turned out to be quite enjoyable and useful. They do a good job of starting off easy and incrementally building to more advanced topics. I recommend this kit to any beginners who are interested in electronics.

Test hardware in isolation

Some who have worked with me on software projects have described me as “anal retentive” or “overly pedantic” when it comes to writing code. I prefer to think of my coding style as “rigorous.” We implemented the software part of GreenPiThumb first, rigorously peer reviewing and testing each software component piece by piece.

When it came to the hardware, we were very un-rigorous. I dare say we were a bit cavalier and laughably naïve. Our original process for testing the hardware components was to write a basic version of GreenPiThumb’s software, wire up all the sensors on a test board, run it, and see what it produced.

Nothing. It produced nothing. Because that was a terrible strategy for testing hardware. Every electronics component in a system has the potential to fail, either because the component itself is defective or because it’s been installed incorrectly. By connecting everything at once, we had no way of figuring out which piece or pieces were broken.

Over time, we learned to test our sensors in isolation. We created standalone diagnostic scripts for each hardware component. Every time we tweak the hardware now, the first thing we do is run through the diagnostic scripts to verify that we’re getting sane readings. When a new hardware piece is not working, we use our multimeter to systematically detect the root cause. We should have purchased the multimeter much earlier. It only cost $13, but would have saved us countless hours of frustration and headscratching.
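The pattern behind those diagnostic scripts is simple: sample one component repeatedly and complain if any reading falls outside the range you expect. A generic sketch of that harness (the sensor names and bounds are made up for illustration):

```python
def check_sensor(name, read_fn, lo, hi, samples=5):
    """Read one sensor several times; report any readings outside [lo, hi]."""
    bad = [r for r in (read_fn() for _ in range(samples)) if not lo <= r <= hi]
    if bad:
        return "%s: %d/%d readings out of range: %s" % (name, len(bad), samples, bad)
    return "%s: OK" % name

# Hypothetical usage, one standalone script per hardware component:
#   print(check_sensor("soil_moisture", read_soil_moisture, 0.0, 100.0))
```

Because each script exercises exactly one component, a bad reading immediately narrows the fault to that sensor or its wiring.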

The tables below show the equipment we used to build GreenPiThumb. We’re sharing the exact parts so that it’s easy for you to follow our model, but many of these are commodity components that you can swap out for something functionally identical.

GreenPiThumb essentials

Common electronics components

The items below are generic electronics tools and components that you can use for many projects. We bought them because we had zero electronics equipment, so we include them for completeness:

Gardening supplies

Optional components

  • Third hand soldering tool ($44.95): We started with this basic clamp stand, but it was awkward to position and adjust. The bendy model was several times more expensive, but it made the task of soldering simpler and more pleasant.
  • Bendy camera mount ($29.95): Great for holding the camera. Long and flexible enough to give you lots of options for finding a good angle and range.
  • PEX tubing cutter ($20.99): Makes nice clean cuts to the water tubing.
  • Digital multimeter ($12.99): We highly recommend you buy a basic multimeter. There’s nothing special about this particular one, but it served us well.
  • Pi camera mount ($8.45): Allows you to attach the Raspberry Pi camera to a standard 1/4” camera mount, such as the bendy mount above.
  • Pi camera extension cable, 1m ($8.44): Necessary for positioning the Raspberry Pi camera far away from the Raspberry Pi device itself.
  • Zip ties ($5.19): For fastening tubing or wiring in place.

Big thanks to those who helped us with this project:


The Tinkerings of Robert Noyce (1983)

Esquire Magazine, December 1983, pp. 346-374.

America is today in the midst of a great technological revolution. With the advent of the silicon chip, information processing, communications, and the national economy have been strikingly altered. The new technology is changing how we live, how we work, how we think. The revolution didn't just happen; it was engineered by a small number of people, principally Middle Americans, whose horizons were as unlimited as the Iowa sky. Collectively, they engineered Tomorrow. Foremost among them is Robert Noyce.

Tom Wolfe

The Tinkerings of Robert Noyce

How the Sun Rose on the Silicon Valley

First published in Esquire Magazine, December 1983.
Copyright © by Tom Wolfe.
Reproduced by Permission of International Creative Management. For academic use only.

In 1948 there were seven thousand people in Grinnell, Iowa, including more than one who didn't dare take a drink in his own house without pulling the shades down first. It was against the law to sell liquor in Grinnell, but it was perfectly legal to drink it at home. So it wasn't that. It wasn't even that someone might look in through the window and disapprove. God knew Grinnell had more than its share of White Ribbon teetotalers, but by 1948 alcohol was hardly the mark of Cain it had once been. No, those timid souls with their fingers through the shade loops inside the white frame houses on Main Street and Park Street were thinking of something else altogether.

They happened to live on land originally owned by the Congregational minister who had founded the town in 1854, Josiah Grinnell. Josiah Grinnell had sold off lots with covenants, in perpetuity, stating that anyone who allowed alcohol to be drunk on his property forfeited ownership. In perpetuity! In perpetuity was forever, and 1948 was not even a hundred years later. In 1948 there were people walking around Grinnell who had known Josiah Grinnell personally. They were getting old; Grinnell had died in 1891; but they were still walking around. So... why take a chance!

The plain truth was, Grinnell had Middle West written all over it. It was squarely in the middle of Iowa's Midland corn belt, where people on the farms said "crawdad" instead of crayfish and "barn lot" instead of barnyard. Grinnell had been one of many Protestant religious communities established in the mid-nineteenth century after Iowa became a state and settlers from the East headed for the farmlands. The streets were lined with white clapboard houses and elm trees, like a New England village. And today, in 1948, the hard-scrubbed Octagon Soap smell of nineteenth century Protestantism still permeated the houses and Main Street as well. That was no small part of what people in the East thought of when they heard the term "Middle West." For thirty years writers such as Sherwood Anderson, Sinclair Lewis, and Carl Van Vechten had been prompting the most delicious sniggers with their portraits of the churchy, narrow minded Middle West. The Iowa painter Grant Wood was thinking of farms like the ones around Grinnell when he did his famous painting American Gothic. Easterners recognized the grim, juiceless couple in Wood's picture right away. There was John Calvin's and John Knox's rectitude reigning in the sticks.

In the fall of 1948 Harry Truman picked out Grinnell as one of the stops on his whistle-stop campaign tour, one of the hamlets where he could reach out to the little people, the average Americans of the heartland, the people untouched by the sophisticated opinion-makers of New York and Washington. Speaking from the rear platform of his railroad car, Truman said he would never forget Grinnell, because it was Grinnell College, the little Congregational academy over on Park Street, that had given him his first honorary degree. The President's fond recollection didn't cut much ice, as it turned out. The town had voted Republican in every presidential election since the first time Abraham Lincoln ran, in 1860, and wasn't about to change for Harry Truman.

On the face of it, there you had Grinnell, Iowa, in 1948: a piece of mid-nineteenth century American history frozen solid in the middle of the twentieth. It was one of the last towns in America that people back east would have figured to become the starting point of a bolt into the future that would create the very substructure, the electronic grid, of life in the year 2000 and beyond.

On the other hand, it wouldn't have surprised Josiah Grinnell in the slightest.

It was in the summer of 1948 that Grant Gale, a forty-five-year-old physics professor at Grinnell College, ran across an item in the newspaper concerning a former classmate of his at the University of Wisconsin named John Bardeen. Bardeen's father had been dean of medicine at Wisconsin, and Gale's wife Harriet's father had been dean of the engineering school, and so Bardeen and Harriet had grown up as fellow faculty brats, as the phrase went. Both Gale and Bardeen had majored in electrical engineering. Eventually Bardeen had taught physics at the University of Minnesota and had then left the academic world to work for Bell Laboratories, the telephone company's main research center, in Murray Hill, New Jersey. And now, according to the item, Bardeen and another engineer at Bell, Walter Brattain, had invented a novel little device they called a transistor.

It was only an item, however: the invention of the transistor in 1948 did not create headlines. The transistor apparently performed the same function as the vacuum tube, which was an essential component of telephone relay systems and radios. Like the vacuum tube, the transistor could isolate a specific electrical signal, such as a radio wave, and amplify it. But the transistor did not require glass tubing, a vacuum, a plate, or a cathode. It was nothing more than two minute gold wires leading to a piece of processed germanium less than a sixteenth of an inch long. Germanium, an element found in coal, was an insulator, not a conductor. But if the germanium was contaminated with impurities, it became a "semiconductor." A vacuum tube was also a semiconductor; the vacuum itself, like the germanium, was an insulator. But as every owner of a portable radio knew, vacuum tubes drew a lot of current, required a warm-up interval before they would work, and then got very hot. A transistor eliminated all these problems and, on top of that, was about fifty times smaller than a vacuum tube.

So far, however, it was impossible to mass-produce transistors, partly because the gold wires had to be made by hand and attached by hand two thousandths of an inch apart. But that was the telephone company's problem. Grant Gale wasn't interested in any present or future applications of the transistor in terms of products. He hoped the transistor might offer a way to study the flow of electrons through a solid (the germanium), a subject physicists had speculated about for decades. He thought it would be terrific to get some transistors for his physics department at Grinnell. So he wrote to Bardeen at Bell Laboratories. Just to make sure his request didn't get lost in the shuffle, he also wrote to the president of Bell Laboratories, Oliver Buckley. Buckley was from Sloan, Iowa, and happened to be a Grinnell graduate. So by the fall of 1948 Gale had obtained two of the first transistors ever made, and he presented the first academic instruction in solid-state electronics available anywhere in the world, for the benefit of the eighteen students majoring in physics at Grinnell College.

One of Grant Gale's senior physics majors was a local boy named Robert Noyce, whom Gale had known for years. Bob and his brothers, Donald, Gaylord, and Ralph, lived just down Park Street and used to rake leaves, mow the lawn, baby-sit, and do other chores for the Gales. Lately Grant Gale had done more than his share of agonizing over Bob Noyce. Like his brothers, Bob was a bright student, but he had just been thrown out of school for a semester, and it had taken every bit of credit Gale had in the local favor bank, not only with other faculty members but also with the sheriff, to keep the boy from being expelled for good and stigmatized with a felony conviction.

Bob Noyce's father, Ralph Sr., was a Congregational minister. Not only that, both of his grandfathers were Congregational ministers. But that hadn't helped at all. In an odd way, after the thing happened, the boy's clerical lineage had boomeranged on him. People were going around saying, "Well, what do you expect from a preacher's son?" It was as if people in Grinnell unconsciously agreed with Sherwood Anderson that underneath the righteousness the midwestern Protestant preachers urged upon them, and which they themselves professed to uphold, lived demons of weakness, perversion, and hypocrisy that would break loose sooner or later.

No one denied that the Noyce boys were polite and proper in all outward appearances. They were all members of the Boy Scouts. They went to Sunday School and the main Sunday service at the First Congregational Church and were active in the church youth groups. They were pumped full of Congregationalism until it was spilling over. Their father, although a minister, was not the minister of the First Congregational Church. He was the associate superintendent of the Iowa Conference of Congregational Churches, whose headquarters were at the college. The original purpose of the college had been to provide a good academic Congregational education, and many of the graduates became teachers. The Conference was a coordinating council rather than a governing body, since a prime tenet of the Congregational Church, embedded in its name, was that each congregation was autonomous. Congregationalists rejected the very idea of a church hierarchy. A Congregational minister was not supposed to be a father or even a shepherd, but, rather, a teacher. Each member of the congregation was supposed to internalize the moral precepts of the church and be his own priest dealing directly with God. So the job of associate superintendent of the Iowa Conference of Congregational Churches was anything but a position of power. It didn't pay much, either.

The Noyces didn't own their own house. They lived in a two-story white clapboard house, owned by the church, at Park Street and Tenth Avenue, at the college.

Not having your own house didn't carry the social onus in Grinnell that it did in the East. There was no upper crust in Grinnell. There were no top people who kept the social score in such matters. Congregationalists rejected the idea of a social hierarchy as fiercely as they did the idea of a religious hierarchy. The Congregationalists, like the Presbyterians, Methodists, Baptists, and United Brethren, were Dissenting Protestants. They were direct offshoots of the Separatists, who had split off from the Church of England in the sixteenth and seventeenth centuries and settled New England. At bottom, their doctrine of the autonomous congregation was derived from their hatred of the British system of class and status, with its endless gradations, topped off by the Court and the aristocracy. Even as late as 1948 the typical small town of the Middle West, like Grinnell, had nothing approaching a country club set. There were subtle differences in status in Grinnell, as in any other place, and it was better to be rich than poor, but there were only two obvious social ranks: those who were devout, educated, and hardworking, and those who weren't. Genteel poverty did not doom one socially in Grinnell. Ostentation did. The Noyce boys worked at odd jobs to earn their pocket money. That was socially correct as well as useful. To have devoted the same time to taking tennis lessons or riding lessons would have been a gaffe in Grinnell.

Donald, the oldest of the four boys, had done brilliantly at the college and had just received his Ph.D. in chemistry at Columbia University and was about to join the faculty of the University of California at Berkeley. Gaylord, the second oldest, was teaching school in Turkey. Bob, who was a year younger than Gaylord, had done so well in science at Grinnell High School that Grant Gale had invited him to take the freshman physics course at the college during his high school senior year. He became one of Gale's star students and most tireless laboratory workers from that time on. Despite his apparent passion for the scientific grind, Bob Noyce turned out to be that much-vaunted creature, the well-rounded student. He was a trim, muscular boy, five feet eight, with thick dark brown hair, a strong jawline, and a long, broad nose that gave him a rugged appearance. He was the star diver on the college swimming team and won the Midwest Conference championship in 1947. He sang in choral groups, played the oboe, and was an actor with the college dramatic society. He also acted in a radio drama workshop at the college, along with his friend Peter Hackes and some others who were interested in broadcasting, and was the leading man in a soap opera that was broadcast over station WOI in Ames, Iowa.

Perhaps Bob Noyce was a bit too well rounded for local tastes. There were people who still remembered the business with the box kite back in 1941, when he was thirteen. It had been harmless, but it could have been a disaster. Bob had come across some plans for the building of a box kite, a kite that could carry a person aloft, in the magazine Popular Science. So he and Gaylord made a frame of cross-braced pine and covered it with a bolt of muslin. They tried to get the thing up by running across a field and towing it with a rope, but that didn't work terribly well. Then they hauled it up on the roof of a barn, and Bob sat in the seat and Gaylord ran across the roof, pulling the kite, and Bob was lucky he didn't break his neck when he and the thing hit the ground. So then they tied it to the rear bumper of a neighbor's car. With the neighbor at the wheel, Bob rode the kite and managed to get about twelve feet off the ground and glide for thirty seconds or so and come down without wrecking himself or any citizen's house or livestock.

Livestock. . . yes. Livestock was a major capital asset in Grinnell, and livestock was at the heart of what happened in 1948. In May a group of Bob Noyce's friends in one of the dormitory houses at Grinnell decided to have a luau, and he was in on the planning. The Second World War had popularized the exotic ways of the South Pacific, so that in 1948 the luau was an up-to-the-minute social innovation. The centerpiece of a luau was a whole roasted suckling pig with an apple or a pineapple in its mouth. Bob Noyce, being strong and quick, was one of the two boys assigned to procure the pig. That night they sneaked onto a farm just outside of Grinnell and wrestled a twenty-five-pound suckling out of the pigpen and arrived back at the luau to great applause. Within a few hours the pig was crackling hot and had an apple in its mouth and looked good enough for seconds and thirds, which everybody helped himself to, and there was more applause. The next morning came the moral hangover. The two boys decided to go see the farmer, confess, and pay for the pig. They didn't quite understand how a college luau, starring his pig, would score on the laugh meter with a farmer in midland Iowa. In the state of Iowa, where the vast majority of people depended upon agriculture for a livelihood and upon Protestant morality for their standards, not even stealing a watermelon worth thirty-five cents was likely to be written off as a boyish prank. Stealing a pig was larceny. The farmer got the sheriff and insisted on bringing criminal charges. There was only so much that Ralph Noyce, the preacher with the preacher's son, could do. Grant Gale, on the other hand, was the calm, well-respected third party. He had two difficult tasks: to keep Bob out of jail and out of court and to keep the college administration from expelling him. There was never any hope at all of a mere slap on the wrist. The compromise Grant Gale helped work out, a one-semester suspension, was the best deal Bob could realistically have hoped for.

The Night of the Luau Pig was quite a little scandal on the Grinnell Richter scale. So Gale was all the more impressed by the way Bob Noyce took it. The local death-ray glowers never broke his confidence. All the Noyce boys had a profound and, to tell the truth, baffling confidence. Bob had a certain way of listening and staring. He would lower his head slightly and look up with a gaze that seemed to be about one hundred amperes. While he looked at you he never blinked and never swallowed. He absorbed everything you said and then answered very levelly in a soft baritone voice and often with a smile that showed off his terrific set of teeth. The stare, the voice, the smile; it was all a bit like the movie persona of the most famous of all Grinnell College's alumni, Gary Cooper. With his strong face, his athlete's build, and the Gary Cooper manner, Bob Noyce projected what psychologists call the halo effect. People with the halo effect seem to know exactly what they're doing and, moreover, make you want to admire them for it. They make you see the halos over their heads.

Years later people would naturally wonder where Bob Noyce got his confidence. Many came to the conclusion it was as much from his mother, Harriet Norton Noyce, as from his father. She was a latter-day version of the sort of strong-willed, intelligent, New England-style woman who had made such a difference during Iowa's pioneer days a hundred years before. His mother and father, with the help of Rowland Cross, who taught mathematics at Grinnell, arranged for Bob to take a job in the actuarial department of Equitable Life in New York City for the summer. He stayed on at the job during the fall semester, then came back to Grinnell at Christmas and rejoined the senior class in January as the second semester began. Gale was impressed by the aplomb with which the prodigal returned. In his first three years Bob had accumulated so many extra credits, it would take him only this final semester to graduate. He resumed college life, including the extracurricular activities, without skipping a beat. But more than that, Gale was gratified by the way Bob became involved with the new experimental device that was absorbing so much of Gale's own time: the transistor.

Bob was not the only physics major interested in the transistor, but he was the one who seemed most curious about where this novel mechanism might lead. He went off to the Massachusetts Institute of Technology, in Cambridge, in the fall to begin his graduate work. When he brought up the subject of the transistor at MIT, even to faculty members, people just looked at him. Even those who had heard of it regarded it merely as a novelty fabricated by the telephone company. There was no course work involving transistors or the theory of solid-state electronics. His dissertation was a "Photoelectric Study of Surface States on Insulators," which was at best merely background for solid-state electronics. In this area MIT was far behind Grinnell College. For a good four years Grant Gale remained one of the few people Bob Noyce could compare notes with in this new field.

Well, it had been a close one! What if Grant Gale hadn't gone to school with John Bardeen, and what if Oliver Buckley hadn't been a Grinnell alumnus? And what if Gale hadn't bothered to get in touch with the two of them after he read the little squib about the transistor in the newspaper? What if he hadn't gone to bat for Bob Noyce after the Night of the Luau Pig and the boy had been thrown out of college and that had been that? After all, if Bob hadn't been able to finish at Grinnell, he probably never would have been introduced to the transistor. He certainly wouldn't have come across it at MIT in 1948. Given what Bob Noyce did over the next twenty years, one couldn't help but wonder about the fortuitous chain of events.

Fortuitous. . . well! How Josiah Grinnell, up on the plains of Heaven, must have laughed over that!

GRANT GALE WAS the first important physicist in Bob Noyce's career. The second was William Shockley. After their ambitions had collided one last time, and they had parted company, Noyce had concluded that he and Shockley were two very different people. But in many ways they were alike.

For a start, they both had an amateur's hambone love of being on-stage. At MIT Noyce had sung in choral groups. Early in the summer of 1953, after he had received his Ph.D., he went over to Tufts College to sing and act in a program of musicals presented by the college. The costume director was a girl named Elizabeth Bottomley, from Barrington, Rhode Island, who had just graduated from Tufts, majoring in English. They both enjoyed dramatics. Singing, acting, and skiing had become the pastimes Noyce enjoyed most. He had become almost as expert at skiing as he had been at diving. Noyce and Betty, as he called her, were married that fall.

In 1953 the MIT faculty was just beginning to understand the implications of the transistor. But electronics firms were already eager to have graduate electrical engineers who could do research and development in the new field. Noyce was offered jobs by Bell Laboratories, IBM, RCA, and Philco. He went to work for Philco, in Philadelphia, because Philco was starting from near zero in semiconductor research and chances for rapid advancement seemed good. But Noyce was well aware that the most important work was still being done at Bell Laboratories, thanks in no small part to William Shockley.

Shockley had devised the first theoretical framework for research into solid-state semiconductors as far back as 1939 and was in charge of the Bell Labs team that included John Bardeen and Walter Brattain. Shockley had also originated the "junction transistor," which turned the transistor from an exotic laboratory instrument into a workable item. By 1955 Shockley had left Bell and returned to Palo Alto, California, where he had grown up near Stanford University, to form his own company, Shockley Semiconductor Laboratory, with start-up money provided by Arnold Beckman of Beckman Instruments. Shockley opened up shop in a glorified shed on South San Antonio Road in Mountain View, which was just south of Palo Alto. The building was made of concrete blocks with the rafters showing. Aside from clerical and maintenance personnel, practically all the employees were electrical engineers with doctorates. In a field this experimental there was nobody else worth hiring. Shockley began talking about "my Ph.D. production line."

Meanwhile, Noyce was not finding Philco the golden opportunity he thought it would be. Philco wanted good enough transistors to stay in the game with GE and RCA, but it was not interested in putting money into the sort of avant-garde research Noyce had in mind. In 1956 he resigned from Philco and moved from Pennsylvania to California to join Shockley. The way he went about it was a classic example of the Noyce brand of confidence. By now he and his wife, Betty, had two children: Bill, who was two, and Penny, who was six months old. After a couple of telephone conversations with Shockley, Noyce put himself and Betty on a night flight from Philadelphia to San Francisco. They arrived in Palo Alto at six A.M. By noon Noyce had signed a contract to buy a house. That afternoon he went to Mountain View to see Shockley and ask for a job, projected the halo, and got it.

The first months on Shockley's Ph.D. production line were exhilarating. It wasn't really a production line at all. Everything at this stage was research. Every day a dozen young Ph.D.'s came to the shed at eight in the morning and began heating germanium and silicon, another common element, in kilns to temperatures ranging from 1,472 to 2,552 degrees Fahrenheit. They wore white lab coats, goggles, and work gloves. When they opened the kiln doors weird streaks of orange and white light went across their faces, and they put in the germanium or the silicon, along with specks of aluminum, phosphorus, boron, and arsenic. Contaminating the germanium or silicon with the aluminum, phosphorus, boron, and arsenic was called doping. Then they lowered a small mechanical column into the goo so that crystals formed on the bottom of the column, and they pulled the crystal out and tried to get a grip on it with tweezers, and put it under microscopes and cut it with diamond cutters, among other things, into minute slices, wafers, chips; there were no names in electronics for these tiny forms. The kilns cooked and bubbled away, the doors opened, the pale apricot light streaked over the goggles, the tweezers and diamond cutters flashed, the white coats flapped, the Ph.D.'s squinted through their microscopes, and Shockley moved between the tables conducting the arcane symphony.

In pensive moments Shockley looked very much the scholar, with his roundish face, his roundish eyeglasses, and his receding hairline; but Shockley was not a man locked in the pensive mode. He was an enthusiast, a raconteur, and a showman. At the outset his very personality was enough to keep everyone swept up in the great adventure. When he lectured, as he often did at colleges and before professional groups, he would walk up to the lectern and thank the master of ceremonies and say that the only more flattering introduction he had ever received was one he gave himself one night when the emcee didn't show up, whereupon, bango!, a bouquet of red roses would pop up in his hand. Or he would walk up to the lectern and say that tonight he was getting into a hot subject, whereupon he would open up a book and, whump!, a puff of smoke would rise up out of the pages.

Shockley was famous for his homely but shrewd examples. One day a student confessed to being puzzled by the concept of amplification, which was one of the prime functions of the transistor. Shockley told him: "If you take a bale of hay and tie it to the tail of a mule and then strike a match and set the bale of hay on fire, and if you then compare the energy expended shortly thereafter by the mule with the energy expended by yourself in the striking of the match, you will understand the concept of amplification."

On November 1, 1956, Shockley arrived at the shed on South San Antonio Road beaming. Early that morning he had received a telephone call informing him that he had won the Nobel Prize for physics for the invention of the transistor; or, rather, that he was co-winner, along with John Bardeen and Walter Brattain. Shockley closed up shop and took everybody to a restaurant called Dinah's Shack over on El Camino Real, the road to San Francisco that had become Palo Alto's commercial strip. He treated his Ph.D. production line and all the other employees to a champagne breakfast. It seemed that Shockley's father was a mining engineer who spent years out on remote durango terrain, in Nevada, Manchuria, and all over the world. Shockley's mother was like Noyce's. She was an intelligent woman with a commanding will. The Shockleys were Unitarians, the Unitarian Church being an offshoot of the Congregational. Shockley Sr. was twenty years older than Shockley's mother and died when Shockley was seventeen. Shockley's mother was determined that her son would someday "set the world on fire," as she once put it. And now he had done it. Shockley lifted a glass of champagne in Dinah's Shack, and it was as if it were a toast back across a lot of hard-wrought durango grit Octagon Soap sagebrush Dissenting Protestant years to his father's memory and his mother's determination.

That had been a great day at Shockley Semiconductor Laboratory. There weren't many more. Shockley was magnetic, he was a genius, and he was a great research director, the best, in fact. His forte was breaking a problem down to first principles. With a few words and a few lines on a piece of paper he aimed any experiment in the right direction. When it came to comprehending the young engineers on his Ph.D. production line, however, he was not so terrific.

It never seemed to occur to Shockley that his twelve highly educated elves just might happen to view themselves the same way he had always viewed himself: which is to say, as young geniuses capable of the sort of inventions Nobel Prizes were given for. One day Noyce came to Shockley with some new results he had found in the laboratory. Shockley picked up the telephone and called some former colleagues at Bell Labs to see if they sounded right. Shockley never even realized that Noyce had gone away from his desk seething. Then there was the business of the new management techniques. Now that he was an entrepreneur, Shockley came up with some new ways to run a company. Each one seemed to irritate the elves more than the one before. For a start, Shockley published their salaries. He posted them on a bulletin board. That way there would be no secrets. Then he started having the employees rate one another on a regular basis. These were so-called peer ratings, a device sometimes used in the military and seldom appreciated even there. Everybody regarded peer ratings as nothing more than popularity contests. But the real turning point was the lie detector. Shockley was convinced that someone in the shed was sabotaging the project. The work was running into inexplicable delays, but the money was running out on schedule. So he insisted that one employee roll up his sleeve and bare his chest and let the electrodes be attached and submit to a polygraph examination. No saboteur was ever found.

There were also some technical differences of opinion. Shockley was interested in developing a so-called four-layer diode. Noyce and two of his fellow elves, Gordon Moore and Jean Hoerni, favored transistors. But at bottom it was dissatisfaction with the boss and the lure of entrepreneurship that led to what happened next.

In the summer of 1957 Moore, Hoerni, and five other engineers, but not Noyce, got together and arrived at what became one of the primary business concepts of the young semiconductor industry. In this business, it dawned on them, capital assets in the traditional sense of plant, equipment, and raw materials counted for next to nothing. The only plant you needed was a shed big enough for the worktables. The only equipment you needed was some kilns, goggles, microscopes, tweezers, and diamond cutters. The materials, silicon and germanium, came from dirt and coal. Brainpower was the entire franchise. If the seven of them thought they could do the job better than Shockley, there was nothing to keep them from starting their own company. On that day was born the concept that would make the semiconductor business as wild as show business: defection capital.

The seven defectors went to the Wall Street firm of Hayden Stone in search of start-up money. It was at this point that they realized they had to have someone to serve as administrator. So they turned to Noyce, who was still with Shockley. None of them, including Noyce, had any administrative experience, but they all thought of Noyce as soon as the question came up.  They didn't know exactly what they were looking for... but Noyce was the one with the halo. He agreed to join them. He would continue to wear a white lab coat and goggles and do research. But he would also be the coordinator. Of the eight of them, he would be the one man who kept track, on a regular basis, of all sides of the operation. He was twenty-nine years old.

Arthur Rock of Hayden Stone approached twenty-two firms before he finally hooked the defectors up with the Fairchild Camera and Instrument Corporation of New York. Fairchild was owned by Sherman Fairchild, a bachelor bon vivant who lived in a futuristic town house on East Sixty-fifth Street in Manhattan. The house was in two sections connected by ramps. The ramps were fifty feet long in some cases, enclosed in glass so that you could go up and down the ramps in all weather and gaze upon the marble courtyard below. The place looked like something from out of the Crystal Palace of Ming in Flash Gordon. The ramps were for his Aunt May, who lived with him and was confined to a wheelchair and had even more Fairchild money than he did. The chief executive officer of Fairchild was John Carter, who had just come from the Corning Glass Company. He had been the youngest vice president in the history of that old-line, family-owned firm. He was thirty-six. Fairchild Camera and Instrument gave the defectors the money to start up the new company, Fairchild Semiconductor, with the understanding that Fairchild Camera and Instrument would have the right to buy Fairchild Semiconductor for $3 million at any time within the next eight years.

Shockley took the defections very hard. He seemed as much hurt as angered, and he was certainly angry enough. A friend of Shockley's said to Noyce's wife, Betty: "You must have known about this for quite some time. How on earth could you not tell me?" That was a baffling remark, unless one regarded Shockley as the father of the transistor and the defectors as the children he had taken beneath his mantle of greatness.

If so, one had a point. Years later, if anyone had drawn up a family tree for the semiconductor industry, practically every important branch would have led straight from Shockley's shed on South San Antonio Road. On the other hand, Noyce had been introduced to the transistor not by Shockley but by John Bardeen, via Grant Gale, and not in California but back in his own hometown, Grinnell, Iowa.

For that matter, Josiah Grinnell had been a defector in his day, too, and there was no record that he had ever lost a night's sleep over it.

Noyce, Gordon Moore, Jean Hoerni, and the other five defectors set up Fairchild Semiconductor in a two-story warehouse building some speculator had built out of tilt-up concrete slabs on Charleston Avenue in Mountain View, about twelve blocks from Shockley's operation. Mountain View was in the northern end of the Santa Clara Valley. In the business world the valley was known mainly for its apricot, pear, and plum orchards. From the work bays of the light-industry sheds that the speculators were beginning to build in the valley you could look out and see the raggedy little apricot trees they had never bothered to bulldoze after they bought the land from the farmers. A few well-known electronics firms were already in the valley: General Electric and IBM, as well as a company that had started up locally, Hewlett-Packard. Stanford University was encouraging engineering concerns to locate near Palo Alto and use the university's research facilities. The man who ran the program was a friend of Shockley's, Frederick E. Terman, whose father had originated the first scientific measurement of human intelligence, the Stanford-Binet IQ test.

IBM had a facility in the valley that was devoted specifically to research rather than production. Both IBM and Hewlett-Packard were trying to develop a highly esoteric and colossally expensive new device, the electronic computer. Shockley had been the first entrepreneur to come to the area to make semiconductors. After the defections his operation never got off the ground. Here in the Santa Clara Valley, that left the field to Noyce and the others at Fairchild.

Fairchild's start-up couldn't have come at a better time. By 1957 there was sufficient demand from manufacturers who merely wanted transistors instead of vacuum tubes, for use in radios and other machines, to justify the new operation. But it was also in 1957 that the Soviet Union launched Sputnik I. In the electronics industry the ensuing space race had the effect of coupling two new inventions, the transistor and the computer, and magnifying the importance of both.

The first American electronic computer, known as ENIAC, had been developed by the Army during the Second World War, chiefly as a means of computing artillery and bomb trajectories. The machine was a monster. It was one hundred feet long and ten feet high and required eighteen thousand vacuum tubes. The tubes generated so much heat, the temperature in the room sometimes reached 120 degrees. What the government needed was small computers that could be installed in rockets to provide automatic onboard guidance. Substituting transistors for vacuum tubes was an obvious way to cut down on the size. After Sputnik the glamorous words in the semiconductor business were computers and miniaturization.

Other than Shockley Semiconductor, Fairchild was the only semiconductor company in the Santa Clara Valley, but Texas Instruments had entered the field in Dallas, as had Motorola in Phoenix and Transitron and Raytheon in the Boston area, where a new electronics industry was starting up as MIT finally began to comprehend the new technology. These firms were all racing to refine the production of transistors to the point where they might command the market. So far refinement had not been anybody's long suit. No tourist dropping by Fairchild, Texas Instruments, Motorola, or Transitron would have had the faintest notion he was looking in on the leading edge of the most advanced of all industries, electronics. The work bays, where the transistors were produced, looked like slightly sunnier versions of the garment sweatshops of San Francisco's Chinatown. Here were rows of women hunched over worktables, squinting through microscopes, doing the most tedious and frustrating sort of manual labor, cutting layers of silicon apart with diamond cutters, picking little rectangles of them up with tweezers, trying to attach wires to them, dropping them, rummaging around on the floor to find them again, swearing, muttering, climbing back up to their chairs, rubbing their eyes, squinting back through the microscopes, and driving themselves crazy some more. Depending on how well the silicon or germanium had been cooked and doped, anywhere from 50 to 90 percent of the transistors would turn out to be defective even after all that, and sometimes the good ones would be the ones that fell on the floor and got ruined.

Even for a machine as simple as a radio the individual transistors had to be wired together, by hand, until you ended up with a little panel that looked like a road map of West Virginia. As for a computer, the wires inside a computer were sheer spaghetti.

Noyce had figured out a solution. But fabricating it was another matter. There was something primitive about cutting individual transistors out of sheets of silicon and then wiring them back together in various series. Why not put them all on a single piece of silicon without wires? The problem was that you would also have to carve, etch, coat, and otherwise fabricate the silicon to perform all the accompanying electrical functions as well, the functions ordinarily performed by insulators, rectifiers, resistors, and capacitors. You would have to create an entire electrical system, an entire circuit, on a little wafer or chip.

Noyce realized that he was not the only engineer thinking along these lines, but he had never even heard of Jack Kilby. Kilby was a thirty-six-year-old engineer working for Texas Instruments in Dallas. In January 1959, Noyce made his first detailed notes about a complete solid-state circuit. A month later Texas Instruments announced that Jack Kilby had invented one. Kilby's integrated circuit, as the invention was called, was made of germanium. Six months later Noyce created a similar integrated circuit made of silicon and using a novel insulating process developed by Jean Hoerni. Noyce's silicon device turned out to be more efficient and more practical to produce than Kilby's and set the standard for the industry. So Noyce became known as the co-inventor of the integrated circuit. Nevertheless, Kilby had unquestionably been first. There was an ironic echo of Shockley here. Strictly speaking, Bardeen and Brattain, not Shockley, had invented the transistor, but Shockley wasn't bashful about being known as the co-inventor. And now, eleven years later, Noyce wasn't turning bashful either.

Noyce knew exactly what he possessed in this integrated circuit, or microchip, as the press would call it. Noyce knew that he had discovered the road to El Dorado.

El Dorado was the vast, still-virgin territory of electricity. Electricity was already so familiar a part of everyday life, only a few research engineers understood just how young and unexplored the terrain actually was. It had been only eighty years since Edison invented the light bulb in 1879. It had been less than fifty years since Lee De Forest, an inventor from Council Bluffs, Iowa, had invented the vacuum tube. The vacuum tube was based on the light bulb, but the vacuum tube opened up fields the light bulb did not even suggest: long-distance radio and telephone communication. Over the past ten years, since Bardeen and Brattain invented it in 1948, the transistor had become the modern replacement for the vacuum tube. And now came Kilby's and Noyce's integrated circuit. The integrated circuit was based on the transistor, but the integrated circuit opened up fields the transistor did not even suggest. The integrated circuit made it possible to create miniature computers, to put all the functions of the mighty ENIAC on a panel the size of a playing card. Thereby the integrated circuit opened up every field of engineering imaginable, from voyages to the moon to robots, and many fields that had never been imagined, such as electronic guidance counseling. It opened up so many fields that no one could even come up with a single name to include them all. "The second industrial revolution," "the age of the computer," "the microchip universe," "the electronic grid": none of them, not even the handy neologism "high tech," could encompass all the implications.

The importance of the integrated circuit was certainly not lost on John Carter and Fairchild Camera back in New York. In 1959 they exercised their option to buy Fairchild Semiconductor for $3 million. The next day Noyce, Moore, Hoerni, and the other five former Shockley elves woke up rich, or richer than they had ever dreamed of being. Each received $250,000 worth of Fairchild stock.

Josiah Grinnell grew livid on the subject of alcohol. But he had nothing against money. He would have approved.

Noyce didn't know what to make of his new wealth. He was thirty-one years old. For the past four years, ever since he had gone to work for Shockley, the semiconductor business had not seemed like a business at all but an esoteric game in which young electrical engineers competed for attaboys and the occasional round of applause after delivering a paper before the IEEE, the Institute of Electrical and Electronics Engineers. It was a game supercharged by the fact that it was being played in the real world, to use a term that annoyed scientists in the universities. Someone (Arnold Beckman, Sherman Fairchild, whoever) was betting real money, and other bands of young elves, at Texas Instruments, RCA, Bell, were out there competing with you by the real world's rules, which required that you be practical as well as brilliant. Noyce started working for Fairchild Semiconductor in 1957 for twelve thousand dollars a year. When it came to money, he had assumed that he, like his father, would always be on somebody's payroll. Now, in 1959, when he talked to his father, he told him: "The money doesn't seem real. It's just a way of keeping score."

Noyce took his family to visit his parents fairly often. He and Betty now had three children, Bill, Penny, and Polly, who was a year old. When they visited the folks, they went off to church on Sunday with the folks as if it were all very much a part of their lives. In fact, Noyce had started drifting away from Congregationalism and the whole matter of churchgoing after he entered MIT. It was not a question of rejecting it. He never rejected anything about his upbringing in Grinnell. It was just that he was suddenly heading off somewhere else, down a different road.

In that respect Noyce was like a great many bright young men and women from Dissenting Protestant families in the Middle West after the Second World War. They had been raised as Baptists, Methodists, Congregationalists, Presbyterians, United Brethren, whatever. They had been led through the church door and prodded toward religion, but it had never come alive for them. Sundays made their skulls feel like dried-out husks. So they slowly walked away from the church and silently, without so much as a growl of rebellion, congratulated themselves on their independence of mind and headed into another way of life. Only decades later, in most cases, would they discover how, absentmindedly, inexplicably, they had brought the old ways along for the journey nonetheless. It was as if... through some extraordinary mistake... they had been sewn into the linings of their coats!

Now that he had some money, Bob Noyce bought a bigger house. His and Betty's fourth child, Margaret, was born in 1960, and they wanted each child to have a bedroom. But the thought of moving into any of the "best" neighborhoods in the Palo Alto area never even crossed his mind. The best neighborhoods were to be found in Atherton, in Burlingame, which was known as very social, or in the swell old sections of Palo Alto, near Stanford University. Instead, Noyce bought a California version of a French country house in Los Altos, a white stucco house with a steeply pitched roof. It was scenic up there in the hills, and cooler in the summer than it was down in the flatlands near the bay. The house had plenty of room, and he and Betty would be living a great deal better than most couples their age, but Los Altos folks had no social cachet and the house was not going to make House & Garden come banging on the door. No one could accuse them of being ostentatious.

John Carter appointed Noyce general manager of the entire division, Fairchild Semiconductor, which was suddenly one of the hottest new outfits in the business world. NASA chose Noyce's integrated circuits for the first computers that astronauts would use on board their spacecraft (in the Gemini program). After that, orders poured in. In ten years Fairchild sales rose from a few thousand dollars a year to $130 million, and the number of employees rose from the original band of elves to twelve thousand. As the general manager, Noyce now had to deal with a matter Shockley had dealt with clumsily and prematurely, namely, new management techniques for this new industry.

One day John Carter came to Mountain View for a close look at Noyce's semiconductor operation. Carter's office in Syosset, Long Island, arranged for a limousine and chauffeur to be at his disposal while he was in California. So Carter arrived at the tilt-up concrete building in Mountain View in the back of a black Cadillac limousine with a driver in the front wearing the complete chauffeur's uniform: the black suit, the white shirt, the black necktie, and the black visored cap. That in itself was enough to turn heads at Fairchild Semiconductor. Nobody had ever seen a limousine and a chauffeur out there before. But that wasn't what fixed the day in everybody's memory. It was the fact that the driver stayed out there for almost eight hours, doing nothing. He stayed out there in his uniform, with his visored hat on, in the front seat of the limousine, all day, doing nothing but waiting for a man who was somewhere inside. John Carter was inside having a terrific chief executive officer's time for himself. He took a tour of the plant, he held conferences, he looked at figures, he nodded with satisfaction, he beamed his urbane Fifty-seventh Street Biggie CEO charm. And the driver sat out there all day engaged in the task of supporting a visored cap with his head. People started leaving their workbenches and going to the front windows just to take a look at this phenomenon. It seemed that bizarre. Here was a serf who did nothing all day but wait outside a door in order to be at the service of the haunches of his master instantly, whenever those haunches and the paunch and the jowls might decide to reappear. It wasn't merely that this little peek at the New York-style corporate high life was unusual out here in the brown hills of the Santa Clara Valley. It was that it seemed terribly wrong.

A certain instinct Noyce had about this new industry and the people who worked in it began to take on the outlines of a concept. Corporations in the East adopted a feudal approach to organization, without even being aware of it. There were kings and lords, and there were vassals, soldiers, yeomen, and serfs, with layers of protocol and perquisites, such as the car and driver, to symbolize superiority and establish the boundary lines. Back east the CEOs had offices with carved paneling, fake fireplaces, escritoires, bergeres, leather-bound books, and dressing rooms, like a suite in a baronial manor house. Fairchild Semiconductor needed a strict operating structure, particularly in this period of rapid growth, but it did not need a social structure. In fact, nothing could be worse. Noyce realized how much he detested the eastern corporate system of class and status with its endless gradations, topped off by the CEOs and vice-presidents who conducted their daily lives as if they were a corporate court and aristocracy. He rejected the idea of a social hierarchy at Fairchild.

Not only would there be no limousines and chauffeurs, there would not even be any reserved parking places. Work began at eight A.M. for one and all, and it would be first come, first served, in the parking lot, for Noyce, Gordon Moore, Jean Hoerni, and everybody else. "If you come late," Noyce liked to say, "you just have to park in the back forty." And there would be no baronial office suites. The glorified warehouse on Charleston Road was divided into work bays and a couple of rows of cramped office cubicles. The cubicles were never improved. The decor remained Glorified Warehouse, and the doors were always open. Half the time Noyce, the chief administrator, was out in the laboratory anyway, wearing his white lab coat. Noyce came to work in a coat and tie, but soon the jacket and the tie were off, and that was fine for any other man in the place too. There were no rules of dress at all, except for some unwritten ones. Dress should be modest, modest in the social as well as the moral sense. At Fairchild there were no hard-worsted double-breasted pinstripe suits and shepherd's-check neckties. Sharp, elegant, fashionable, or alluring dress was a social blunder. Shabbiness was not a sin. Ostentation was.

During the start-up phase at Fairchild Semiconductor there had been no sense of bosses and employees. There had been only a common sense of struggle out on a frontier. Everyone had internalized the goals of the venture. They didn't need exhortations from superiors. Besides, everyone had been so young! Noyce, the administrator or chief coordinator or whatever he should be called, had been just about the oldest person on the premises, and he had been barely thirty. And now, in the early 1960s, thanks to his athletic build and his dark brown hair with the Campus Kid hairline, he still looked very young. As Fairchild expanded, Noyce didn't even bother trying to find "experienced management personnel." Out here in California, in the semiconductor industry, they didn't exist. Instead, he recruited engineers right out of the colleges and graduate schools and gave them major responsibilities right off the bat. There was no "staff," no "top management" other than the eight partners themselves. Major decisions were not bucked up a chain of command. Noyce held weekly meetings of people from all parts of the operation, and whatever had to be worked out was worked out right there in the room. Noyce wanted them all to keep internalizing the company's goals and to provide their own motivations, just as they had during the start-up phase. If they did that, they would have the capacity to make their own decisions.

The young engineers who came to work for Fairchild could scarcely believe how much responsibility was suddenly thrust upon them. Some twenty-four-year-old just out of graduate school would find himself in charge of a major project with no one looking over his shoulder. A problem would come up, and he couldn't stand it, and he would go to Noyce and hyperventilate and ask him what to do. And Noyce would lower his head, turn on his 100-ampere eyes, listen, and say: "Look, here are your guidelines. You've got to consider A, you've got to consider B, and you've got to consider C." Then he would turn on the Gary Cooper smile: "But if you think I'm going to make your decision for you, you're mistaken. Hey... it's your ass."

Back east, in the conventional corporation, any functionary wishing to make an unusually large purchase had to have the approval of a superior or two or three superiors or even a committee, a procedure that ate up days, weeks, in paperwork. Noyce turned that around. At Fairchild any engineer, even a weenie just out of Cal Tech, could make any purchase he wanted, no matter how enormous, unless someone else objected strongly enough to try to stop it. Noyce called this the Short Circuit Paper Route. There was only one piece of paper involved, the piece of paper the engineer handed somebody in the purchasing department.

The spirit of the start-up phase! My God! Who could forget the exhilaration of the past few years! To be young and free out here on the silicon frontier! Noyce was determined to maintain that spirit during the expansion phase. And for the time being, at least, here in the early 1960s, the notion of a permanent start-up operation didn't seem too farfetched. Fairchild was unable to coast on the tremendous advantage Noyce's invention of the integrated circuit had provided. Competitors were setting up shop in the Santa Clara Valley like gold rushers. And where did they come from? Why, from Fairchild itself! And how could that be? Nothing to it... Defection capital!

Defectors (or redefectors) from Fairchild started up more than fifty companies, all making or supplying microchips. Raytheon Semiconductor, Signetics, General Microelectronics, Intersil, Advanced Micro Devices, Qualidyne: off they spun, each with a sillier pseudotech engineerologism for a name than the one before. Defectors! What a merry game that was. Jean Hoerni and three of the other original eight defectors from Shockley defected from Fairchild to form what would soon become known as Teledyne Semiconductors, and that was only round one. After all, why not make all the money for yourself! The urge to use defection capital was so irresistible that the word defection, with its note of betrayal, withered away. Defectors were merely the Fairchildren, as Adam Smith dubbed them. Occasionally defectors from other companies, such as the men from Texas Instruments and Westinghouse who started Siliconix, moved into the Santa Clara Valley to join the free-for-all. But it was the Fairchildren who turned the Santa Clara Valley into the Silicon Valley. Acre by acre the fruit trees were uprooted, and two-story Silicon Modern office buildings and factories went up. The state of California built a new freeway past the area, Route 280. Children heard the phrase "Silicon Valley" so often, they grew up thinking it was the name on the map.

Everywhere the Fairchild émigrés went, they took the Noyce approach with them. It wasn't enough to start up a company; you had to start up a community, a community in which there were no social distinctions, and it was first come, first served, in the parking lot, and everyone was supposed to internalize the common goals. The atmosphere of the new companies was so democratic, it startled businessmen from the East. Some fifty-five-year-old biggie with his jowls swelling up smoothly from out of his F. R. Tripler modified-spread white collar and silk jacquard print necktie would call up from GE or RCA and say, "This is Harold B. Thatchwaite," and the twenty-three-year-old secretary on the other end of the line, out in the Silicon Valley, would say in one of those sunny blond pale-blue-eyed California voices: "Just a minute, Hal, Jack will be right with you." And once he got to California and met this Jack for the first time, there he would be, the CEO himself, all of thirty-three years old, wearing no jacket, no necktie, just a checked shirt, khaki pants, and a pair of moccasins with welted seams the size of jumper cables. Naturally the first sounds out of this Jack's mouth would be: "Hi, Hal."

It was the 1960s, and people in the East were hearing a lot about California surfers, California bikers, hot rodders, car customizers, California hippies, and political protesters, and the picture they got was of young people in jeans and T-shirts who were casual, spontaneous, impulsive, emotional, sensual, undisciplined, and obnoxiously proud of it. So these semiconductor outfits in the Silicon Valley with their CEOs dressed like camp counselors struck them as the business versions of the same thing.

They couldn't have been more wrong. The new breed of the Silicon Valley lived for work. They were disciplined to the point of back spasms. They worked long hours and kept working on weekends. They became absorbed in their companies the way men once had in the palmy days of the automobile industry. In the Silicon Valley a young engineer would go to work at eight in the morning, work right through lunch, leave the plant at six-thirty or seven, drive home, play with the baby for half an hour, have dinner with his wife, get in bed with her, give her a quick toss, then get up and leave her there in the dark and work at his desk for two or three hours on "a coupla things I had to bring home with me."

Or else he would leave the plant and decide, well, maybe he would drop in at the Wagon Wheel for a drink before he went home. Every year there was some place, the Wagon Wheel, Chez Yvonne, Rickey's, the Roundhouse, where members of this esoteric fraternity, the young men and women of the semiconductor industry, would head after work to have a drink and gossip and brag and trade war stories about phase jitters, phantom circuits, bubble memories, pulse trains, bounceless contacts, burst modes, leapfrog tests, p-n junctions, sleeping-sickness modes, slow-death episodes, RAMs, NAKs, MOSes, PCMs, PROMs, PROM blowers, PROM burners, PROM blasters, and teramagnitudes, meaning multiples of a million millions. So then he wouldn't get home until nine, and the baby was asleep, and dinner was cold, and the wife was frosted off, and he would stand there and cup his hands as if making an imaginary snowball and try to explain to her... while his mind trailed off to other matters, LSIs, VLSIs, alpha flux, de-rezzing, forward biases, parasitic signals, and that terasexy little cookie from Signetics he had met at the Wagon Wheel, who understood such things.

It was not a great way of life for marriages. By the late 1960s the toll of divorces seemed to those in the business to be as great as that of NASA's boomtowns, Cocoa Beach, Florida, and Clear Lake, Texas, where other young engineers were giving themselves over to a new technology as if it were a religious mission. The second time around they tended to "intramarry." They married women who worked for Silicon Valley companies and who could comprehend and even learn to live with their twenty-four-hour obsessions. In the Silicon Valley an engineer was under pressure to reinvent the integrated circuit every six months. In 1959 Noyce's invention had made it possible to put an entire electrical circuit on a chip of silicon the size of a fingernail. By 1964 you had to know how to put ten circuits on a chip that size just to enter the game, and the stakes kept rising. Six years later the figure was one thousand circuits on a single chip; six years after that it would be thirty-two thousand, and everyone was talking about how the real breakthrough would be sixty-four thousand. Noyce himself led the race; by 1968 he had a dozen new integrated circuit and transistor patents. And what amazing things such miniaturization made possible! In December 1968 NASA sent the first manned flight to the moon, Apollo 8. Three astronauts, Frank Borman, James Lovell, and William Anders, flew into earth orbit, then fired a rocket at precisely the right moment in order to break free of the earth's gravitational field and fly through the minute "window" in space that would put them on course to the moon rather than into orbit around the sun, from which there could be no return. They flew to the moon, went into orbit around it, saw the dark side, which no one had ever seen, not even with a telescope, then fired a rocket at precisely the right moment in order to break free of the moon's gravitational pull and go into the proper trajectory for their return to earth.
None of it would have been possible without onboard computers. People were beginning to talk about all that the space program was doing for the computer sciences. Noyce knew it was the other way around. Only the existence of a miniature computer two feet long, one foot wide, and six inches thick (exactly three thousand times smaller than the old ENIAC and far faster and more reliable) made the flight of Apollo 8 possible. And there would have been no miniature computer without the integrated circuits invented by Noyce and Kilby and refined by Noyce and the young semiconductor zealots of the Silicon Valley, the new breed who were building the road to El Dorado.

Noyce used to go into a slow burn that year, 1968, when the newspapers, the magazines, and the television networks got on the subject of the youth. The youth was a favorite topic in 1968. Riots broke out on the campuses as the antiwar movement reached its peak following North Vietnam's Tet offensive. Black youths rioted in the cities. The Yippies, supposedly a coalition of hippies and campus activists, managed to sabotage the Democratic National Convention by setting off some highly televised street riots. The press seemed to enjoy presenting these youths as the avant-garde who were sweeping aside the politics and morals of the past and shaping America's future. The French writer Jean-François Revel toured American campuses and called the radical youth homo novus, "the New Man," as if they were the latest, most advanced product of human evolution itself, after the manner of the superchildren in Arthur C. Clarke's Childhood's End.

Homo novus? As Noyce saw it, these so-called radical youth movements were shot through with a yearning for a preindustrial Arcadia. They wanted, or thought they wanted, to return to the earth and live on organic vegetables and play folk songs from the sixteenth and seventeenth centuries. They were anti-technology. They looked upon science as an instrument monopolized by the military-industrial complex. They used this phrase, "the military-industrial complex," all the time. If industry or the military underwrote scientific research in the universities (and they underwrote a great deal of it), then that research was evil. The universities were to be pure and above exploitation, except, of course, by ideologues of the Left. The homo novus had set up a chain of logic that went as follows: since science equals the military-industrial complex, and the military-industrial complex equals capitalism, and capitalism equals fascism, therefore science equals fascism. And therefore, these much-vaunted radical youths, these shapers of the future, attacked the forward positions of American technology, including the space program and the very idea of the computer. And therefore these creators of the future were what? They were Luddites. They wanted to destroy the new machines. They were the reactionaries of the new age. They were an avant-garde to the rear. They wanted to call off the future. They were stillborn, ossified, prematurely senile.

If you wanted to talk about the creators of the future, well, here they were, in the Silicon Valley! Just before Apollo 8 circled the moon, Bob Noyce turned forty-one. By age forty-one he had become such a good skier, people were urging him to enter competitions. He had taken up hang gliding and scuba diving. When his daughter Penny was almost fourteen, he asked her what she wanted for her birthday, and she said she wanted to drop from an airplane by parachute. Noyce managed to convince her to settle for glider lessons instead. Then, because it made him restless to just stand around an airfield and watch her soar up above, he took flying lessons, bought an airplane, and began flying the family up through the mountain passes to Aspen, Colorado, for skiing weekends. He had the same lean, powerful build as he had had twenty years before, when he was on the swimming team at Grinnell College. He had the same thick dark brown hair and the same hairline. It looked as if every hair in his head were nailed in. He looked as if he could walk out the door any time he wanted to and win another Midwest Conference diving championship. And he was one of the oldest CEOs in the semiconductor business! He was the Edison of the bunch! He was the father of the Silicon Valley!

The rest of the hotshots were younger. It was a business dominated by people in their twenties and thirties. In the Silicon Valley there was a phenomenon known as burnout. After five or ten years of obsessive racing for the semiconductor high stakes, five or ten years of lab work, work lunches, workaholic drinks at the Wagon Wheel, and work-battering of the wife and children, an engineer would reach his middle thirties and wake up one day, and he was finished. The game was over. It was called burnout, suggesting mental and physical exhaustion brought about by overwork. But Noyce was convinced it was something else entirely. It was... age, or age and status. In the semiconductor business, research engineering was like pitching in baseball; it was 60 percent of the game. Semiconductor research was one of those highly mathematical sciences, such as microbiology, in which, for reasons one could only guess at, the great flashes, the critical moments of inspiration, came mainly to those who were young, often to men in their twenties. The thirty-five-year-old burnouts weren't suffering from exhaustion, as Noyce saw it. They were being overwhelmed, outperformed, by the younger talent coming up behind them. It wasn't the central nervous system that was collapsing, it was the ego.

Now here you saw youth in the vanguard, on the leading edge. Here you saw the youths who were, in fact, shaping the future. Here you saw, if you insisted on the term, the homo novus!

But why insist? For they were also of the same stripe as Josiah Grinnell, who had founded Grinnell, Iowa, at the age of thirty-three!

It was in 1968 that Noyce pulled off the redefection of all redefections. Fairchild Semiconductor had generated tremendous profits for the parent company back east. It now appeared to Noyce that John Carter and Sherman Fairchild had been diverting too much of that money into new start-up ventures, outside the semiconductor field. As a matter of fact, Noyce disliked many things "back east." He disliked the periodic trips to New York, for which he dressed in gray suits, white shirts, and neckties and reported to the royal corporate court and wasted days trying to bring them up to date on what was happening in California. Fairchild was rather enlightened, for an eastern corporation, but the truth was, there was no one back east who understood how to run a corporation in the United States in the second half of the twentieth century. Back east they had never progressed beyond the year 1940. Consequently, they were still hobbled by all of the primitive stupidities of bureaucratism and labor-management battles. They didn't have the foggiest comprehension of the Silicon Valley idea of a corporate community. The brightest young businessmen in the East were trained, most notably at the Harvard Business School, to be little Machiavellian princes. Greed and strategy were all that mattered. They were trained for failure.

Noyce and Gordon Moore, two of the three original eight Shockley elves still at Fairchild, decided to form their own company. They went to Arthur Rock, who had helped provide the start-up money for Fairchild Semiconductor when he was at Hayden Stone. Now Rock had his own venture-capital operation. Noyce took great pleasure in going through none of the steps in corporate formation that the business schools talked about. He and Moore didn't even write up a proposal. They merely told Rock what they wanted to do and put up $500,000 of their own money, $250,000 each. That seemed to impress Rock more than anything they could possibly have written down, and he rounded up $2.5 million of the start-up money. A few months later another $300,000 came, this time from Grinnell College. Noyce had been on the college's board of trustees since 1962, and a board member had asked him to give the college a chance to invest, should the day come when he started his own company. So Grinnell College became one of the gamblers betting on Noyce and Intel, the pseudotech engineerologism Noyce and Moore dreamed up as the corporate name. Josiah Grinnell would have loved it.

The defection of Noyce and Moore from Fairchild was an earthquake even within an industry jaded by the very subject of defections. In the Silicon Valley everybody had looked upon Fairchild as Noyce's company. He was the magnet that held the place together. With Noyce gone, it was obvious that the entire work force would be up for grabs. As one wag put it, "People were practically driving trucks over to Fairchild Semiconductor and loading up with employees." Fairchild responded by pulling off one of the grossest raids in corporate history. One day the troops who were left at Fairchild looked across their partitions and saw a platoon of young men with terrific suntans moving into the executive office cubicles. They would always remember what terrific suntans they had. They were C. Lester Hogan, chief executive officer of the Motorola semiconductor division in Phoenix, and his top echelon of engineers and administrators. Or, rather, C. Lester Hogan of Motorola until yesterday. Fairchild had hired the whole bunch away from Motorola and installed them in place of Noyce & Co. like a matched set. There was plenty of sunshine in the Santa Clara Valley, but nobody here had suntans like this bunch from Phoenix. Fairchild had lured the leader of the young sun-gods out of the Arizona desert in the most direct way imaginable. He had offered him an absolute fortune in money and stock. Hogan received so much, the crowd at the Wagon Wheel said, that henceforth wealth in the Silicon Valley would be measured in units called hogans.* (Dirk Hanson, The New Alchemists, Boston: Little, Brown, 1982.)

Noyce and Moore, meanwhile, started up Intel in a tilt-up concrete building that Jean Hoerni and his group had built, but no longer used, in Santa Clara, which was near Mountain View. Once again there was an echo of Shockley. They opened up shop with a dozen bright young electrical engineers, plus a few clerical and maintenance people, and bet everything on research and product development. Noyce and Moore, like Shockley, put on the white coats and worked at the laboratory tables. They would not be competing with Fairchild or anyone else in the already established semiconductor markets. They had decided to move into the most backward area of computer technology, which was data storage, or "memory." A computer's memory was stored in ceramic ringlets known as cores. Each ringlet contained one "bit" of information, a "yes" or a "no," in the logic of the binary system of mathematics that computers employ. Within two years Noyce and Moore had developed the 1103 memory chip, a chip of silicon and polysilicon the size of two letters in a line of type. Each chip contained four thousand transistors, did the work of a thousand ceramic ringlets, and did it faster. The production line still consisted of rows of women sitting at tables as in the old shed-and-rafter days, but the work bays now looked like something from out of an intergalactic adventure movie. The women engraved the circuits on the silicon photographically, wearing antiseptic Mars Voyage suits, headgear, and gloves because a single speck of dust could ruin one of the miniature circuits. The circuits were so small that "miniature" no longer sounded small enough. The new word was "microminiature." Everything now took place in an air-conditioned ice cube of vinyl tiles, stainless steel, fluorescent lighting, and backlit plastic.

The 1103 memory chip opened up such a lucrative field that other companies, including Fairchild, fought desperately just to occupy the number-two position, filling the orders Intel couldn't take care of. At the end of Intel's first year in business, which had been devoted almost exclusively to research, sales totaled less than three thousand dollars and the work force numbered forty-two. In 1972, thanks largely to the 1103 chip, sales were $23.4 million and the work force numbered 1,002. In the next year sales almost tripled, to $66 million, and the work force increased two and a half times, to 2,528.

So Noyce had the chance to run a new company from start-up to full production precisely the way he thought Shockley should have run his in Palo Alto back in the late 1950s. From the beginning Noyce gave all the engineers and most of the office workers stock options. He had learned at Fairchild that in a business so dependent upon research, stock options were a more powerful incentive than profit sharing. People sharing profits naturally wanted to concentrate on products that were already profitable rather than plunge into avant-garde research that would not pay off in the short run even if it were successful. But people with stock options lived for research breakthroughs. The news would send a semiconductor company's stock up immediately, regardless of profits.

Noyce's idea was that every employee should feel that he could go as far and as fast in this industry as his talent would take him. He didn't want any employee to look at the structure of Intel and see a complex set of hurdles. It went without saying that there would be no social hierarchy at Intel, no executive suites, no pinstripe set, no reserved parking places, or other symbols of the hierarchy. But Noyce wanted to go further. He had never liked the business of the office cubicles at Fairchild. As miserable as they were, the mere possession of one symbolized superior rank. At Intel executives would not be walled off in offices. Everybody would be in one big room. There would be nothing but low partitions to separate Noyce or anyone else from the lowliest stock boys trundling in the accordion printout paper. The whole place became like a shed. When they first moved into the building, Noyce worked at an old, scratched, secondhand metal desk. As the company expanded, Noyce kept the same desk, and new stenographers, just hired, were given desks that were not only newer but bigger and better than his. Everybody noticed the old beat-up desk, since there was nothing to keep anybody from looking at every inch of Noyce's office space.  Noyce enjoyed this subversion of the eastern corporate protocol of small metal desks for underlings and large wooden desks for overlords.

At Intel, Noyce decided to eliminate the notion of levels of management altogether. He and Moore ran the show: that much was clear. But below them there were only the strategic business segments, as they called them. They were comparable to the major departments in an orthodox corporation, but they had far more autonomy. Each was run like a separate corporation. Middle managers at Intel had more responsibility than most vice-presidents back east. They were also much younger and got lower-back pain and migraines earlier. At Intel, if the marketing division had to make a major decision that would affect the engineering division, the problem was not routed up a hierarchy to a layer of executives who oversaw both departments. Instead, "councils," made up of people already working on the line in the divisions that were affected, would meet and work it out themselves. The councils moved horizontally, from problem to problem. They had no vested power. They were not governing bodies but coordinating councils.

Noyce was a great believer in meetings. The people in each department or work unit were encouraged to convene meetings whenever the spirit moved them. There were rooms set aside for meetings at Intel, and they were available on a first come, first served basis, just like the parking spaces. Often meetings were held at lunch time. That was not a policy; it was merely an example set by Noyce. There were no executive lunches at Intel. Back east, in New York, executives treated lunch as a daily feast of the nobility, a sumptuous celebration of their eminence, in the Lucullan expense-account restaurants of Manhattan. The restaurants in the East and West Fifties of Manhattan were like something from out of a dream. They recruited chefs from all over Europe and the Orient. Pasta primavera, saucisson, sorrel mousse, homard cardinal, terrine de legumes Montesquiou, paillard de pigeon, medallions of beef Chinese Gordon, veal Valdostana, Verbena roast turkey with Hayman sweet potatoes flown in from the eastern shore of Virginia, raspberry soufflé, baked Alaska, zabaglione, pear torte, creme brulee; and the wines! and the brandies! and the port! the Sambuca! the cigars! and the decor! walls with lacquered woodwork and winking mirrors and sconces with little pleated peach-colored shades, all of it designed by the very same decorators who walked duchesses to parties for Halston on Eaton Square! and captains and maitre d's who made a fuss over you in movie French in front of your clients and friends and fellow overlords! it was Mount Olympus in mid-Manhattan every day from twelve-thirty to three P.M., and you emerged into the pearl-gray light of the city with such ambrosia pumping through your veins that even the clotted streets with the garbage men backing up their grinder trucks and yelling, "'Mon back, 'mon back, 'mon back, 'mon back," as if talking Urban Chippewa, even this became part of the bliss of one's eminence in the corporate world!
There were many chief executive officers who kept their headquarters in New York long after the last rational reason for doing so had vanished...because of the ineffable experience of being a CEO and having lunch five days a week in Manhattan!

At Intel lunch had a different look to it. You could tell when it was noon at Intel, because at noon men in white aprons arrived at the front entrance gasping from the weight of the trays they were carrying. The trays were loaded down with deli sandwiches and waxed cups full of drinks with clear plastic tops, with globules of Sprite or Diet Shasta sliding around the tops on the inside. That was your lunch. You ate some sandwiches made of roast beef or chicken sliced into translucent rectangles by a machine in a processing plant and then reassembled on the bread in layers that gave off dank whiffs of hormones and chemicals, and you washed it down with Sprite or Diet Shasta, and you sat amid the particle-board partitions and metal desktops, and you kept your mind on your committee meeting. That was what Noyce did, and that was what everybody else did.

If Noyce called a meeting, then he set the agenda. But after that, everybody was an equal. If you were a young engineer and you had an idea you wanted to get across, you were supposed to speak up and challenge Noyce or anybody else who didn't get it right away. This was a little bit of heaven. You were face to face with the inventor, or the co-inventor, of the very road to El Dorado, and he was only forty-one years old, and he was listening to you. He had his head down and his eyes beamed up at you, and he was absorbing it all. He wasn't a boss. He was Gary Cooper! He was here to help you be self-reliant and do as much as you could on your own. This wasn't a corporation... it was a congregation.

By the same token, there were sermons and homilies. At Intel everyone, Noyce included, was expected to attend sessions on "the Intel Culture." At these sessions the principles by which the company was run were spelled out and discussed. Some of the discussions had to do specifically with matters of marketing or production. Others had to do with the broadest philosophical principles of Intel and were explained via the Socratic method at management seminars by Intel's number-three man, Andrew Grove.

Grove would say, "How would you sum up the Intel approach?" Many hands would go up, and Grove would choose one, and the eager communicant would say: "At Intel you don't wait for someone else to do it. You take the ball yourself and you run with it." And Grove would say, "Wrong. At Intel you take the ball yourself and you let the air out and you fold the ball up and put it in your pocket. Then you take another ball and run with it and when you've crossed the goal you take the second ball out of your pocket and reinflate it and score twelve points instead of six."

Grove was the most colorful person at Intel. He was a thin man in his mid-thirties with tight black curls all over his head. The curls ran down into a pair of mutton chops that seemed to run together like goulash with his mustache. Every day he wore either a turtleneck jersey or an open shirt with an ornamental chain twinkling on his chest. He struck outsiders as the epitome of a style of the early 1970s known as California Groovy. In fact, Grove was the epitome of the religious principle that the greater the freedom, for example, the freedom to dress as you pleased, the greater the obligation to exercise discipline. Grove's own groovy outfits were neat and clean. The truth was, he was a bit of a bear on the subject of neatness and cleanliness. He held what he called "Mr. Clean inspections," showing up in various work areas wearing his mutton chops and handlebar mustache and his Harry Belafonte cane cutter's shirt and the gleaming chain work, inspecting offices for books stacked too high, papers strewn over desktops, everything short of running a white glove over the shelves, as if this were some California Groovy Communal version of Parris Island, while the chain twinkled in his chest hairs. Grove was also the inspiration for such items as the performance ratings and the Late List. Each employee received a report card periodically with a grade based on certain presumably objective standards. The grades were superior, exceeds requirements, meets requirements, marginally meets requirements, and does not meet requirements. This was the equivalent of A, B, C, D, and F in school. Noyce was all for it. "If you're ambitious and hardworking," he would say, "you want to be told how you're doing." In Noyce's view, most of the young hotshots who were coming to work for Intel had never had the benefit of honest grades in their lives.
In the late 1960s and early 1970s college faculties had been under pressure to give all students passing marks so they wouldn't have to go off to Vietnam, and they had caved in, until the entire grading system was meaningless. At Intel they would learn what measuring up meant. The Late List was also like something from a strict school. Everyone was expected at work at eight A.M. A record was kept of how many employees arrived after 8:10 A.M. If 7 percent or more were late for three months, then everybody in the section had to start signing in. There was no inevitable penalty for being late, however. It was up to each department head to make of the Late List what he saw fit. If he knew a man was working overtime every night on a certain project, then his presence on the Late List would probably be regarded as nothing more than that, a line on a piece of paper. At bottom, and this was part of the Intel Culture, Noyce and Grove knew that penalties were very nearly useless. Things like report cards and Late Lists worked only if they stimulated self-discipline.

The worst form of discipline at Intel was to be called on the Antron II carpet before Noyce himself. Noyce insisted on ethical behavior in all dealings within the company and between companies. That was the word people used to describe his approach, ethical; that and moral. Noyce was known as a very aggressive businessman, but he stopped short of cutting throats, and he never talked about revenge. He would not tolerate peccadilloes such as little personal I'll-reimburse-it-on-Monday dips into the petty cash. Noyce's Strong Silent stare, his Gary Cooper approach, could be mortifying as well as inspiring. When he was angry, his baritone voice never rose. He seemed like a powerful creature that only through the greatest self-control was refraining from an attack. He somehow created the impression that if pushed one more inch, he would fight. As a consequence he seldom had to. No one ever trifled with Bob Noyce.

Noyce managed to create an ethical universe within an inherently amoral setting: the American business corporation in the second half of the twentieth century. At Intel there was good and there was evil, and there was freedom and there was discipline, and to an extraordinary degree employees internalized these matters, as if members of Cromwell's army. As the work force grew at Intel, and the profits soared, labor unions, chiefly the International Association of Machinists and Aerospace Workers, the Teamsters, and the Stationary Engineers Union, made several attempts to organize Intel. Noyce made it known, albeit quietly, that he regarded unionization as a death threat to Intel, and to the semiconductor industry generally. Labor-management battles were part of the ancient terrain of the East. If Intel were divided into workers and bosses, with the implication that each side had to squeeze its money out of the hides of the other, the enterprise would be finished. Motivation would no longer be internal; it would be objectified in the deadly form of work rules and grievance procedures. The one time it came down to a vote, the union lost out by the considerable margin of four to one. Intel's employees agreed with Noyce. Unions were part of the dead hand of the past... Noyce and Intel were on the road to El Dorado.

By the early 1970s Noyce and Moore's 1103 memory chip had given this brand-new company an entire corner of the semiconductor market. But that was only the start. Now a thirty-two-year-old Intel engineer named Ted Hoff came up with an invention as important as Noyce's integrated circuit had been a decade earlier: something small, dense, and hidden: the microprocessor. The microprocessor was known as "the computer on a chip"; it put all the arithmetic and logic functions of a computer on a chip the size of the head of a tack. The possibilities for creating and using small computers surpassed most people's imagining, even within the industry. One of the more obvious possibilities was placing a small computer in the steering and braking mechanisms of a car that would take over for the driver in case of a skid or excessive speed on a curve.

In Ted Hoff, Noyce was looking at proof enough of his hypothesis that out here on the electrical frontier the great flashes came to the young. Hoff was about the same age Noyce had been when he invented his integrated circuit. The glory was now Hoff's. But Noyce took Hoff's triumph as proof of a second hypothesis. If you created the right type of corporate community, the right type of autonomous congregation, genius would flower. Certainly the corporate numbers were flowering. The news of the microprocessor, on top of the success of the 1103 memory chip, nearly trebled the value of Intel stock from 1971 to 1973. Noyce's own holdings were now worth $18.5 million. He was in roughly the same position as Josiah Grinnell a hundred years before, when Grinnell brought the Rock Island Railroad into Iowa.

Noyce continued to live in the house in the Los Altos hills that he had bought in 1960. He was not reluctant to spend his money; he was merely reluctant to show it. He spent a fortune on landscaping, but you could do that and the world would be none the wiser. Gradually the house disappeared from view behind an enormous wall of trees, tropical bushes, and cockatoo flowers. Noyce had a pond created on the back lawn, a waterscape elaborate enough to put on a bus tour, but nobody other than guests ever saw it. The lawn stretched on for several acres and had a tennis court, a swimming pool, and more walls of boughs and hot-pastel blossoms, and the world saw none of that, either.

Noyce drove a Porsche roadster, and he didn't mind letting it out for a romp. Back east, when men made a great deal of money, they tended to put a higher and higher value on their own hides. Noyce, on the other hand, seemed to enjoy finding new ways to hang his out over the edge. He took up paragliding over the ski slopes at Aspen on a Rogallo wing. He built a Quicksilver hang glider and flew it off cliffs until a friend of his, a champion at the sport, fractured his pelvis and a leg flying a Quicksilver. He also took up scuba diving, and now he had his Porsche. The high-performance foreign sports car became one of the signatures of the successful Silicon Valley entrepreneur. The sports car was perfect. Its richness consisted of engineering beneath the body shell. Not only that, the very luxury of a sports car was the experience of driving it yourself. A sports car didn't even suggest a life with servants. Porsches and Ferraris became the favorites. By 1975 the Ferrari agency in Los Gatos was the second biggest Ferrari agency on the West Coast. Noyce also bought a 1947 Republic Seabee amphibious airplane, so that he could take the family for weekends on the lakes in northern California. He now had two aircraft, but he flew the ships himself.

Noyce was among the richest individuals on the San Francisco Peninsula, as well as the most important figure in the Silicon Valley, but his name seldom appeared in the San Francisco newspapers. When it did, it was in the business section, not on the society page. That, too, became the pattern for the new rich of the Silicon Valley. San Francisco was barely forty-five minutes up the Bayshore Freeway from Los Altos, but psychologically San Francisco was an entire continent away. It was a city whose luminaries kept looking back east, to New York, to see if they were doing things correctly.

In 1974 Noyce wound up in a situation that to some seemed an all-too-typical Mid-life in the Silicon Valley story. He and Betty, his wife of twenty-one years, were divorced, and the following year he "intramarried." Noyce, who was forty-seven, married Intel's personnel director, Ann Bowers, who was thirty-seven. The divorce was mentioned in the San Francisco Chronicle, but not as a social note. It was a major business story. Under California law, Betty received half the family's assets. When word got out that she was going to sell off $6 million of her Intel stock in the interest of diversifying her fortune, it threw the entire market in Intel stock into a temporary spin. Betty left California and went to live in a village on the coast of Maine. Noyce kept the house in Los Altos.

By this time, the mid-1970s, the Silicon Valley had become the late-twentieth-century California version of a new city, and Noyce and other entrepreneurs began to indulge in some introspection. For ten years, thanks to racial hostilities and the leftist politics of the antiwar movement, the national press had dwelled on the subject of ethnic backgrounds. This in itself tended to make the engineers and entrepreneurs of the Silicon Valley conscious of how similar most of them were. Most of the major figures, like Noyce himself, had grown up and gone to college in small towns in the Middle West and the West. John Bardeen had grown up in and gone to college in Madison, Wisconsin. Walter Brattain had grown up in and gone to college in Washington. Shockley grew up in Palo Alto at a time when it was a small college town and went to the California Institute of Technology. Jack Kilby was born in Jefferson City, Missouri, and went to college at the University of Illinois. William Hewlett was born in Ann Arbor and went to school at Stanford. David Packard grew up in Pueblo, Colorado, and went to Stanford. Oliver Buckley grew up in Sloane, Iowa, and went to college at Grinnell. Lee De Forest came from Council Bluffs, Iowa (and went to Yale). And Thomas Edison grew up in Port Huron, Michigan, and didn't go to college at all.

Some of them, such as Noyce and Shockley, had gone east to graduate school at MIT, since it was the most prestigious engineering school in the United States. But MIT had proved to be a backwater... the sticks... when it came to the most advanced form of engineering, solid-state electronics. Grinnell College, with its one thousand students, had been years ahead of MIT. The picture had been the same on the other great frontier of technology in the second half of the twentieth century, namely, the space program. The engineers who fulfilled one of man's most ancient dreams, that of traveling to the moon, came from the same background, the small towns of the Midwest and the West. After the triumph of Apollo 11, when Neil Armstrong and Buzz Aldrin became the first mortals to walk on the moon, NASA's administrator, Tom Paine, happened to remark in conversation: "This was the triumph of the squares." A reporter overheard him; and did the press ever have a time with that! But Paine had come up with a penetrating insight. As it says in the Book of Matthew, the last shall be first. It was engineers from the supposedly backward and narrow-minded boondocks who had provided not only the genius but also the passion and the daring that won the space race and carried out John F. Kennedy's exhortation, back in 1961, to put a man on the moon "before this decade is out." The passion and the daring of these engineers was as remarkable as their talent. Time after time they had to shake off the meddling hands of timid souls from back east. The contribution of MIT to Project Mercury was minus one. The minus one was Jerome Wiesner of the MIT electronic research lab, who was brought in by Kennedy as a special adviser to straighten out the space program when it seemed to be faltering early in 1961. Wiesner kept flinching when he saw what NASA's boondockers were preparing to do.
He tried to persuade Kennedy to forfeit the manned space race to the Soviets and concentrate instead on unmanned scientific missions. The boondockers of Project Mercury, starting with the project's director, Bob Gilruth, an aeronautical engineer from Nashwauk, Minnesota, dodged Wiesner for months, like moonshiners evading a roadblock, until they got astronaut Alan Shepard launched on the first Mercury mission. Who had time to waste on players as behind the times as Jerome Wiesner and the Massachusetts Institute of Technology...out here on technology's leading edge?

Just why was it that small-town boys from the Middle West dominated the engineering frontiers? Noyce concluded it was because in a small town you became a technician, a tinker, an engineer, and an inventor, by necessity.

"In a small town," Noyce liked to say, "when something breaks down, you don't wait around for a new part, because it's not coming. You make it yourself."

Yet in Grinnell necessity had been the least of the mothers of invention. There had been something else about Grinnell, something people Noyce's age could feel but couldn't name. It had to do with the fact that Grinnell had once been a religious community; not merely a town with a church but a town that was inseparable from the church. In Josiah Grinnell's day most of the townspeople were devout Congregationalists, and the rest were smart enough to act as if they were. Anyone in Grinnell who aspired to the status of feed store clerk or better joined the First Congregational Church. By the end of the Second World War educated people in Grinnell, and in all the Grinnells of the Middle West, had begun to drop this side of their history into a lake of amnesia. They gave in to the modern urge to be urbane. They themselves began to enjoy sniggering over Sherwood Anderson's Winesburg, Ohio, Sinclair Lewis's Main Street, and Grant Wood's American Gothic. Once the amnesia set in, all they remembered from the old days were the austere moral codes, which in some cases still hung on. Josiah Grinnell's real estate covenants prohibiting drinking, for example.... Just imagine! How absurd it was to see these unburied bones of something that had once been strong and alive.

That something was Dissenting Protestantism itself. Oh, it had once been quite strong and very much alive! The passion, the exhilaration, of those early days was what no one could any longer recall. To be a believing Protestant in a town such as Grinnell in the middle of the nineteenth century was to experience a spiritual ecstasy greater than any that the readers of Main Street or the viewers of American Gothic were likely to know in their lifetimes. Josiah Grinnell had gone to Iowa in 1854 to create nothing less than a City of Light. He was a New Englander who had given up on the East. He had founded the first Congregational church in Washington, D.C., and then defected from it when the congregation, mostly southerners, objected to his antislavery views. He went to New York and met the famous editor of the New York Herald, Horace Greeley. It was while talking to Josiah Grinnell, who was then thirty-two and wondering what to do with his life, that Greeley uttered the words for which he would be remembered forever after: "Go west, young man, go west." So Grinnell went to Iowa, and he and three friends bought up five thousand acres of land in order to start up a Congregational community the way he thought it should be done. A City of Light! The first thing he organized was the congregation. The second was the college. Oxford and Cambridge had started banning Dissenting Protestants in the seventeenth century; Dissenters founded their own schools and colleges. Grinnell became a champion of "free schools," and it was largely thanks to him that Iowa had one of the first and best public-school systems in the West. To this day Iowa has the highest literacy rate of any state. In the 1940s a bright youngster whose parents were not rich, such as Bob Noyce or his brother Donald, was far more likely to receive a superior education in Iowa than in Massachusetts.

And if he was extremely bright, if he seemed to have the quality known as genius, he was infinitely more likely to go into engineering in Iowa, or Illinois or Wisconsin, than anywhere in the East. Back east engineering was an unfashionable field. The East looked to Europe in matters of intellectual fashion, and in Europe the ancient aristocratic bias against manual labor lived on. Engineering was looked upon as nothing more than manual labor raised to the level of a science. There was "pure" science and there was engineering, which was merely practical. Back east engineers ranked, socially, below lawyers; doctors; army colonels; Navy captains; English, history, biology, chemistry, and physics professors; and business executives. A related piece of European snobbery said a scientist was lowering himself by going into commerce. Dissenting Protestants looked upon themselves as secular saints, men and women of God who did God's work not as penurious monks and nuns but as successful workers in the everyday world. To be rich and successful was even better, and just as righteous. One of Josiah Grinnell's main projects was to bring the Rock Island Railroad into Iowa. Many in his congregation became successful farmers of the gloriously fertile soil around Grinnell. But there was no sense of rich and poor. All the congregation opened up the virgin land in a common struggle out on the frontier. They had given up the comforts of the East ... in order to create a City of Light in the name of the Lord. Every sacrifice, every privation, every denial of the pleasures of the flesh, brought them closer to that state of bliss in which the light of God shines forth from the apex of the soul. What were the momentary comforts and aristocratic poses of the East...compared to this? Where would the fleshpots back east be on that day when the heavens opened up and a light fell 'round about them and a voice from on high said: "Why mockest thou me?" The light! The light!
Who, if he had ever known that glorious light, if he had ever let his soul burst forth into that light, could ever mock these, my very seed, with a Main Street or an American Gothic! There, in Grinnell, reigned the passion that enabled men and women to settle the West in the nineteenth century against the most astonishing odds and in the face of overbearing hardships.

By the standards of St. Francis of Assisi or St. Jerome, who possessed nothing beyond the cloak of righteousness, Josiah Grinnell was a very secular saint, indeed. And Robert Noyce's life was a great deal more secular than Josiah Grinnell's. Noyce had wandered away from the church itself. He smoked. He took a drink when he felt like it. He had gotten a divorce. Nevertheless, when Noyce went west, he brought Grinnell with him... unaccountably sewn into the lining of his coat!

In the last stage of his career Josiah Grinnell had turned from the building of his community to broader matters affecting Iowa and the Middle West. In 1863 he became one of midland Iowa's representatives in Congress. Likewise, in 1974 Noyce turned over the actual running of Intel to Gordon Moore and Andrew Grove and kicked himself upstairs to become chairman of the board. His major role became that of spokesman for the Silicon Valley and the electronic frontier itself. He became chairman of the Semiconductor Industry Association. He led the industry's campaign to deal with the mounting competition from Japan. He was awarded the National Medal of Science in a White House ceremony in 1980. He was appointed to the University of California Board of Regents in 1982 and inducted into the National Inventors Hall of Fame in February 1983. By now Intel's sales had grown from $64 million in 1973 to almost a billion a year. Noyce's own fortune was incalculable. (Grinnell College's $300,000 investment in Intel had multiplied in value more than thirty times, despite some sell-offs, almost doubling the college's endowment.) Noyce was hardly a famous man in the usual sense, however. He was practically unknown to the general public. But among those who followed the semiconductor industry he was a legend. He was certainly famous back east on Wall Street. When a reporter asked James Magid of the underwriting firm of L. F. Rothschild, Unterberg, Towbin about Noyce, he said: "Noyce is a national treasure."

Oh yes! What a treasure, indeed, was the moral capital of the nineteenth century! Noyce happened to grow up in a family in which the long-forgotten light of Dissenting Protestantism still burned brightly. The light, the light at the apex of every human soul! Ironically, it was that long-forgotten light... from out of the churchy, blue-nosed sticks... that led the world into the twenty-first century, across the electronic grid and into space.

Surely the moral capital of the nineteenth century is by now all but completely spent. Robert Noyce turns fifty-six this month, and his is the last generation to have grown up in families where the light existed in anything approaching a pure state. And yet out in the Silicon Valley some sort of light shines still. People who run even the newest companies in the Valley repeat Noycisms with conviction and with relish. The young CEOs all say: "Datadyne is not a corporation, it's a culture," or "Cybernetek is not a corporation, it's a society," or "Honey Bear's assets aren't hardware, they're the software of the three thousand souls who work here" (the latest vogue is for down-home nontech names). They talk about the soul and spiritual vision as if it were the most natural subject in the world for a well-run company to be concerned about.

On June 8, 1983, one of the Valley's new firms, Eagle Computer, Inc., sold its stock to the public for the first time. Investors went for it like the answer to a dream. At the close of trading on the stock market, the company's forty-year-old CEO, Dennis Barnhart, was suddenly worth nine million dollars. Four and a half hours later he and a pal took his Ferrari out for a little romp, hung their hides out over the edge, lost control on a curve in Los Gatos, and went through a guardrail, and Barnhart was killed. Naturally, that night people in the business could talk of very little else. One of the best-known CEOs in the Valley said, "It's the dark side of the Force." He said it without a trace of irony, and his friends nodded in contemplation. They knew exactly what Force he meant.

Build your own FPGA (2012)

The Open 7400 Logic Competition is a crowd-sourced contest with a simple but broad criterion for entry: build something interesting out of discrete logic chips. It's now in its second year, and this time around I was inspired to enter it.

Discrete logic, for anyone who isn't familiar, is any of a number of families of ICs that each perform a single, usually fairly straightforward, function. Typical discrete logic ICs include basic logic gates like AND, OR and NAND, Flip-Flops, shift registers, and multiplexers. For smaller components like gates and flipflops, a single IC will usually contain several independent ones. As you can imagine, building anything complex out of discrete logic involves using a lot of parts; these days they're typically used as 'glue' logic rather than as first-class components, having been largely supplanted by a combination of specialised devices, microcontrollers, and FPGAs.

Building a microcontroller or CPU out of discrete logic is a popular hobbyist pursuit, and it serves a useful purpose: building a CPU from scratch teaches you a lot about CPU architecture and tradeoffs; it's an interesting and instructive exercise. So, I wondered, wouldn't building an FPGA out of discrete logic be similarly educational? Hence, my competition entry: an FPGA (or rather, a 'slice' of one) built entirely out of discrete logic chips.

Designing an FPGA from 7400s

The most basic building block of an FPGA is the Cell, or Slice. Typically, a slice has a few inputs, a Lookup Table (or LUT) which can be programmed to evaluate any boolean function over those inputs, and one or more outputs, each of which can be configured to either update immediately when the input updates (asynchronous) or update only on the next clock tick, using a flipflop built into the slice (synchronous). Some FPGA cells have additional capabilities, such as adders implemented in hardware, to save using LUTs for this purpose.

The core of a slice, the Lookup Table, seems nearly magic - taking an array of inputs, it can be programmed to evaluate any boolean function on them and output the result. As the name implies, though, the implementation is very simple, and it's a technique also used to implement microcode and other configurable glue logic. In principle, what you do is this: take a memory IC such as some SRAM or an EEPROM. Wire up the address lines to your inputs, and the data lines to your output. Now, any combination of input states will be interpreted as an address, which the memory will look up and provide on the data outputs. By programming the memory with the state tables for the functions you want to compute, you can configure it to evaluate anything you like.
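The memory-as-LUT trick is easy to sketch in software. Here's a minimal Python model (the `make_lut` helper is purely illustrative): the three inputs form an address, and whatever we program into the "memory" determines which boolean function gets evaluated.

```python
def make_lut(truth_table):
    """truth_table: a list of 8 output bits, indexed by the 3-bit input value."""
    def lut(a, b, c):
        address = (a << 2) | (b << 1) | c   # the inputs act as address lines
        return truth_table[address]         # the memory lookup is the evaluation
    return lut

# Program the memory with the state table for 3-input XOR: output 1
# whenever an odd number of inputs is 1.
xor3 = make_lut([(n >> 2 ^ n >> 1 ^ n) & 1 for n in range(8)])

assert xor3(1, 0, 0) == 1
assert xor3(1, 1, 0) == 0
assert xor3(1, 1, 1) == 1
```

Reprogramming the same "memory" with a different table turns the same hardware into a completely different gate, which is the whole point.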

Unfortunately, none of the 7400 series memories are manufactured anymore, and while there are plenty of SRAMs and EEPROMs available, the smallest sizes available are significantly larger than what we want for a simple discrete FPGA. Further, in order to be able to both program and read the memory, we'd need a lot of logic to switch between writing to the memory and reading from it (on a 'single port' memory, these use the same pins).

However, a simple solution presents itself: shift registers! A shift register is effectively an 8-bit memory, with serial inputs - convenient for our purposes - and each bit exposed on its own pin. By combining this with an 8-way multiplexer, we have a basic 3-input 1-output LUT. Our LUT can be reprogrammed using the data, clock, and latch lines, and many of them can be chained together and programmed in series. The 3 select inputs on the 8-way mux form the inputs to the LUT, and the mux's output bit is the output. So, in two readily available 7400 series ICs, we have one complete Lookup Table.
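A sketch of how the two chips cooperate (the class and the bit ordering here are illustrative, not a pin-accurate model): the shift register is programmed one bit per clock on the serial line, and the multiplexer's select inputs read one stored bit back out.

```python
class ShiftRegisterLUT:
    """Behavioural model of a shift register feeding an 8-way multiplexer."""

    def __init__(self):
        self.bits = [0] * 8                     # the 8 stored configuration bits

    def shift_in(self, bit):
        # Serial programming: each clock pushes one new bit in,
        # displacing the oldest bit out the far end.
        self.bits = [bit] + self.bits[:-1]

    def evaluate(self, s2, s1, s0):
        # The mux's select lines are the LUT inputs; they pick one stored bit.
        return self.bits[(s2 << 2) | (s1 << 1) | s0]

# Program a 3-input AND: only address 7 (all inputs high) outputs 1.
lut = ShiftRegisterLUT()
for bit in [1, 0, 0, 0, 0, 0, 0, 0]:   # first bit shifted in ends up at address 7
    lut.shift_in(bit)

assert lut.evaluate(1, 1, 1) == 1
assert lut.evaluate(1, 1, 0) == 0
```

Because the register's serial output can feed the next register's serial input, any number of these LUTs can be daisy-chained and programmed as one long bitstream.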

For our FPGA slice, we'll use two of these discrete LUTs, with their inputs ganged together. Why two? Because a combined capability of 3 inputs and 2 outputs is about the smallest you can implement interesting things with. 3 inputs and 2 outputs lets you build a full adder in a single slice; any fewer inputs or outputs and just adding two 1-bit numbers together with carry requires multiple slices, which severely limits our capabilities.
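To see why 3 inputs and 2 outputs are exactly what a full adder needs, here's a quick Python sketch (the function name is illustrative) deriving the 8-bit programs the two LUTs would hold, one for the sum and one for the carry out:

```python
def full_adder_lut_programs():
    """Derive the two 8-bit LUT contents for a 1-bit full adder."""
    sum_bits, carry_bits = [], []
    for n in range(8):
        # Decode the 3-bit address into the two addends and the carry in.
        a, b, cin = (n >> 2) & 1, (n >> 1) & 1, n & 1
        sum_bits.append(a ^ b ^ cin)                         # first LUT: sum
        carry_bits.append((a & b) | (a & cin) | (b & cin))   # second LUT: carry out
    return sum_bits, carry_bits

sum_lut, carry_lut = full_adder_lut_programs()

# 1 + 1 with carry in 1: sum = 1, carry out = 1 (address 0b111).
assert sum_lut[0b111] == 1 and carry_lut[0b111] == 1
# 1 + 0 with no carry in: sum = 1, carry out = 0 (address 0b100).
assert sum_lut[0b100] == 1 and carry_lut[0b100] == 0
```

Since both LUTs see the same three inputs, ganging their inputs together costs nothing for this use case.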

The next component is the flipflops, and the logic for selecting asynchronous or synchronous mode. There's a profusion of flipflops and registers available, from 2 up to 8 in a single IC, and with various control methods, so that's no problem. Choosing between synchronous and asynchronous is a little tougher. The natural choice here is a 2-way multiplexer, but while chips with multiple 2-way multiplexers exist, they all gang the select lines together, meaning you have to choose the same input for all the multiplexers in a chip. Obviously, this isn't really suitable for our application.

Fortunately, a 2-way multiplexer isn't difficult to construct. There are several options, but the most efficient is to use tristate buffers. There are a couple in the 7400 range - the 74*125 and 74*126 - that meet our requirements ideally. Each contains four tri-state buffers, the only difference between the two chips being that one enables its output when the enable line is high, while the other enables its output when it is low. By ganging these together in pairs, we can create multiplexers; one of each IC gets us four independent multiplexers. Two multiplexers, plus our register IC gets us our sync/async select logic. Of course, we need a way to control the multiplexers, so chain in another shift register to provide some state to program them with.
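A rough behavioural model of that tristate-buffer multiplexer, with `None` standing in for a high-impedance (Hi-Z) output (the helper functions are illustrative):

```python
def tristate(value, enable):
    """A single tristate buffer: drives its input through, or goes Hi-Z."""
    return value if enable else None

def mux2(a, b, select):
    """Two tristate buffers with opposite-sense enables, outputs tied together."""
    outputs = [tristate(a, not select),   # 74*125-style: enabled when select is low
               tristate(b, select)]       # 74*126-style: enabled when select is high
    driven = [v for v in outputs if v is not None]
    # Exactly one buffer drives the shared line at any time - no contention.
    assert len(driven) == 1, "bus contention"
    return driven[0]

assert mux2(0, 1, select=1) == 1
assert mux2(0, 1, select=0) == 0
```

The opposite-sense enables are what make the pairing work: a single select wire can never turn both buffers on at once.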

Now we've got the core of a basic slice designed, let's look at the second major component of any FPGA: routing. Flexible routing is a key attribute of any useful FPGA; without good routing, you can't get signals where they need to go and you waste precious resources, making your FPGA a lot less useful. Routing, though, uses a huge amount of resources to implement properly. What's the minimum we can provide and still get a useful and interesting result?

Typically, FPGAs position individual slices in a rectangular grid. Buses run between slices in the grid both horizontally and vertically. A slice is able to tap into some subset of the lines at its intersection, and can likewise output to some subset of the lines. Typically, the bus can continue through a slice uninterrupted, or the bus can be 'broken', effectively creating separate buses on either side of the slice. In some cases, buses can also be connected together in other ways, routing between different bus lines or between horizontal and vertical buses without the direct involvement of the slice.

One-bit buses are a bit too narrow even for our purposes; a lot of interesting applications are going to require more than that, so let's see what we can make of 2-bit buses, both vertical and horizontal. Many FPGAs include a built-in bias in one direction or another; this saves routing resources by favoring more common uses at the expense of making less common setups more expensive. In our case, we'll make it easier to read from the 'left' and 'top' buses, and easier to write to the 'right' and 'bottom' buses. We can do this by having 2-input multiplexers on each of the left, top and right buses; these multiplexers feed into our LUT's 3 inputs. For output, we can use more tristate buffers to allow one LUT to output to either or both of the right bus lines, while the other outputs to either or both of the bottom bus lines. To read from the bottom, or to drive the left or top lines, one simply has to drive the opposite side, and close the appropriate bus switch.

Speaking of bus switches, we'll go for the simplest configuration: a switch connecting each of the top and bottom lines, and a switch connecting each of the left and right lines, which can be opened or closed individually. The 74*4066 "quad bilateral switch" IC provides a convenient way to do this in a single IC. All of our routing requires state, of course - 3 bits for the input multiplexers, 4 bits for the output enables, and 4 more bits for the bus switches - so we'll use another shift register, and some of the spare bits from the one we added for sync/async selection.
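All told, that's 11 bits of routing state per slice, serialised into the same programming stream as the LUT contents. A sketch of the packing (the field order here is an assumption for illustration; the real order depends on how the shift registers are wired on the board):

```python
def pack_routing_bits(mux_sel, out_en, bus_sw):
    """Concatenate one slice's routing state into a programming bitstream."""
    assert len(mux_sel) == 3    # input-multiplexer selects (left, top, right)
    assert len(out_en) == 4     # tristate output enables
    assert len(bus_sw) == 4     # bilateral bus switch open/closed bits
    return mux_sel + out_en + bus_sw    # 11 bits total

stream = pack_routing_bits(
    mux_sel=[1, 0, 1],      # which bus line feeds each LUT input
    out_en=[1, 0, 0, 1],    # which bus lines the LUT outputs drive
    bus_sw=[1, 1, 0, 1],    # which buses pass straight through the slice
)
assert len(stream) == 11
```

Since the 74HC595s are 8 bits each, those 11 bits spill across the sync/async register's spare bits, exactly as described above.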

With routing done, we've more or less designed the entirety of a basic FPGA slice in discrete logic. Let's take inventory:

  • 4 x 74HC595 Shift Registers, for LUTs and routing/multiplexer state
  • 2 x 74HC251 8-line multiplexer, for LUTs
  • 2 x 74HC125 and 2 x 74HC126 Tristate buffers, for multiplexers and output enables.
  • 1 x 74HC173 4-bit register, for synchronous operation.
  • 1 x 74HC4066 Quad Bilateral Switch, for bus switches.

That's a total of 12 discrete logic ICs to implement one moderately capable FPGA slice. Add a few LEDs to give a visual indicator of the status of the bus lines, and some edge connectors to hook them up together, and we have a board that can be ganged together in a rectangular configuration to make a modular, expandable discrete logic FPGA. Pointless, given that it's a fraction of the capability of a moderately priced FPGA or CPLD chip? Probably. Cool? Most definitely.


Of course, it's no good having a DFPGA if there's no way to program it. We could figure out the bitmasks to achieve what we want ourselves, but that's tedious and error prone. Porting VHDL or Verilog to something like this would be tough, and massive overkill given the number of slices we're dealing with. Instead, I opted to implement a simple hardware description language, which I'll call DHDL.

DHDL doesn't attempt to handle layout or optimisation; instead it implements a fairly straightforward compiler to take logic expressions and turn them into slice configuration data. A DHDL file consists of a set of slice definitions, followed by a list of slices to 'invoke', arranged in the same manner as the DFPGA is laid out. Here's an example of a DHDL definition for a 'ripple carry full adder' slice:

slice adder {
  l0 ^ r1 ^ u0 -> r0;
  (l0 & r1) | (l0 & u0) | (r1 & u0) -> d0;
}

Here, l0, r1, etc, refer to bus lines - 'u', 'd', 'l' and 'r' for up, down, left, and right. The two addends are provided on l0 and l1; since the bus switches are closed, they're also available on r0 and r1, which the adder takes advantage of, since we can only select from one left bus line at a time. Carry input enters via the bus line u0. The first expression computes the sum of the two inputs and the carry, outputting it on r0. The second expression computes the carry output, which is transmitted to the next slice down via d0.
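It's easy to convince yourself the two expressions really form a full adder by checking them exhaustively, e.g. in Python (`adder_slice` is just an illustrative stand-in for the slice):

```python
def adder_slice(l0, r1, u0):
    """Evaluate the adder slice's two DHDL expressions."""
    s = l0 ^ r1 ^ u0                              # -> r0 (sum bit)
    cout = (l0 & r1) | (l0 & u0) | (r1 & u0)      # -> d0 (carry out)
    return s, cout

# Over every input combination, (carry, sum) must equal l0 + r1 + u0.
for l0 in (0, 1):
    for r1 in (0, 1):
        for u0 in (0, 1):
            s, cout = adder_slice(l0, r1, u0)
            assert l0 + r1 + u0 == (cout << 1) | s
```

Chaining slices vertically, with each d0 feeding the next u0, then gives a ripple-carry adder of any width.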

DHDL takes care of bus switch configuration for us here: by default, all bus switches are closed (that is, they conduct), but when we output to a bus line, the corresponding bus switch defaults to open. In this situation, that's the correct behaviour, since it allows us to read one of the addends on l0 and output the result on r0; it also ensures we separate the incoming and outgoing carry signals.

In some cases, we might want to configure the buses ourselves. We can use the expression `a > b` to specify that a bus switch should be open, and the expression `a b` to specify that it should be closed. Here's an example of a storage element that utilizes that:

slice storage {
  (u0 & r1) | (!u0 & l0) sync -> r0;
  l0  r0;
}

This slice uses feedback to store a value, by outputting it on r0 and reading it back from l0. Since outputting to r0 would normally cause the compiler to open the switch between l0 and r0, we explicitly tell it that we want the switch closed, making the feedback possible. This definition also demonstrates how we specify synchronous vs asynchronous behaviour, with the `sync` or `async` keyword before the assignment operator. The default is asynchronous. Thus, this slice will output the stored value on r0 and l0; on the leading edge of a clock cycle where u0 is high, it will store the value of l1/r1 as the new value. Also note that since we're not outputting to d0, the switch between u0 and d0 is closed, meaning we could stack many of these vertically and control them all with an enable input. We've effectively created a flipflop slice.
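A behavioural sketch of the feedback trick (the class is illustrative, not cycle-accurate): with the l0/r0 switch closed, l0 reads back the stored value, so `(u0 & r1) | (!u0 & l0)` means "load r1 when u0 is high, otherwise hold".

```python
class StorageSlice:
    """Behavioural model of the DHDL storage slice."""

    def __init__(self):
        self.r0 = 0                   # the stored value, driven onto r0

    def clock(self, u0, r1):
        l0 = self.r0                  # feedback through the closed l0/r0 switch
        # The slice's synchronous expression, evaluated on the clock edge.
        self.r0 = (u0 & r1) | ((1 - u0) & l0)

s = StorageSlice()
s.clock(u0=1, r1=1)    # enable high: load a 1
assert s.r0 == 1
s.clock(u0=0, r1=0)    # enable low: hold the stored value
assert s.r0 == 1
s.clock(u0=1, r1=0)    # enable high again: load a 0
assert s.r0 == 0
```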

Let's see what a complete FPGA definition looks like. Here's one for a 4-bit combination lock:

slice storage {
  (u0 & r1) | (!u0 & l0) sync -> r0;
  l0  r0;
}

slice compare_carry {
  !(l0 ^ r1) & u0 -> d0;
}

slice compare {
  !(l0 ^ r1) -> d0;
  u0 > d0;
}

storage compare,
storage compare_carry,
storage compare_carry,
storage compare_carry

First we define some slices - the storage slice we already saw, and a comparer, which outputs a 1 to d0 iff both the horizontal bus lines are equal and its u0 input was 1. We also define a version of the comparer without a carry, since the topmost slice will not have a carry input.

Operation is like this: To set the code, input the values on the l1 input of each of the leftmost slices, then take the top slice's u0 input high for one clock cycle. To test a combination, input the values on the l1 inputs again, but leave the top slice's u0 input low. The bottom right slice's d0 line indicates if the combination is correct.
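The carry chain down the compare column is easy to model (an illustrative behavioural function, not the hardware): each row contributes "stored bit equals input bit", ANDed with the result from the row above.

```python
def combination_lock(code_bits, try_bits):
    """Model the compare column: 1 iff every attempted bit matches the code."""
    carry = 1                             # the topmost compare slice has no carry in
    for stored, attempt in zip(code_bits, try_bits):
        equal = 1 - (stored ^ attempt)    # the slice's !(l0 ^ r1) expression
        carry = equal & carry             # -> d0, feeding the next row's u0
    return carry                          # the bottom-right slice's d0 line

code = [1, 0, 1, 1]
assert combination_lock(code, [1, 0, 1, 1]) == 1   # correct combination
assert combination_lock(code, [1, 1, 1, 1]) == 0   # one wrong bit
```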

Finally, let's try something a little bit more involved: a PWM controller. We'll need a counter, some comparators, and a set/reset circuit:

slice toggler {
  !r1 sync -> r1;
  r1 -> d0;
}

slice counter {
  r1 ^ u0 sync -> r1;
  r1 & u0 -> d0;
}

slice compare {
  !(l0 ^ r1) -> d0;
}

slice compare_carry {
  !(l0 ^ r1) & u0 -> d0;
}

slice overflow_pass {
  u0 -> r0;
}

slice srlatch {
  (r0 | u0) & !l0 sync -> r0;
}

toggler compare,
counter compare_carry,
counter compare_carry,
overflow_pass srlatch

The first two slice definitions, toggler and counter, collectively implement a binary counter. Toggler is the least significant bit, while any number of counter stages can be chained vertically to make an n-bit ripple-carry counter - in this case, we've constructed a 3-bit counter. compare and compare_carry should look familiar from the previous sketch; they implement a ripple-carry comparator, in this case comparing the output of the binary counter with the other bus line, which will be set with switches. overflow_pass's job is very simple - it passes the overflow signal from the counter to its right output, making both that and the comparator output available to the final slice, srlatch. As the name implies, this is a simple set/reset latch, with the counter overflow resetting it, and the comparator setting it.

By setting the 3 input bits to reflect the duty cycle required, and pulsing the clock line sufficiently fast, the srlatch slice's r0 output will be PWMed with the appropriate duty cycle - which can be visually observed as the LED on that bus line being dimmed.
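The whole arrangement can be modelled in a few lines of Python (an illustrative behavioural model, not cycle-accurate): a 3-bit counter, a comparator against the switch-set threshold, and an SR latch that the comparator sets and the counter wrap resets.

```python
def pwm_trace(threshold, cycles=16):
    """Model the PWM sketch: return the latch output over `cycles` clocks."""
    counter, latch, out = 0, 0, []
    for _ in range(cycles):
        if counter == 0:
            latch = 0            # counter overflow/wrap resets the latch
        if counter == threshold:
            latch = 1            # comparator match sets the latch
        out.append(latch)        # the latch drives the r0 bus line (and its LED)
        counter = (counter + 1) % 8
    return out

trace = pwm_trace(threshold=6)
# The latch is high for 2 of every 8 clocks: a 25% duty cycle.
assert sum(trace) / len(trace) == 0.25
```

Lowering the threshold raises the duty cycle, since the latch is set earlier in each 8-count period.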


Designing and building this board was an interesting exercise. Due to the number of ICs and wanting to make the PCB as compact as possible, this was by far the toughest board to route that I've designed so far. Since I had time constraints to get the board sent off for fabrication in time for the contest, I ended up using an autorouter for the first time. Eagle's autorouter is remarkably awful, but it turns out there's a much better free alternative, called Freerouting. Freerouting is a Java based PCB router; it can import layouts from Eagle, KiCad and others, and exports scripts that can be executed to implement the final routing in your CAD tool. Where Eagle wanted to produce a board with over 150 vias, Freerouting was able to produce one that had fewer than 50, and visual inspection shows it to be fairly sane, too. It's not just for automatic routing, either - it has an excellent manual routing mode, where it'll allow you to nudge existing tracks around without having to rip them up and reroute them every time you need to fit another line in.

For fabrication, I went with the excellent Hackvana, who made me up 20 of the boards and had them to me in record time. A jumbo order of parts from Farnell/Element14 saw me sorted for parts, and all that was left was hours and hours of soldering - with over 200 SMT pads on each board, the assembly process takes a while.

Of course, as with any first iteration design, there were problems. A couple of minor design improvements occurred to me almost immediately, which would've increased the board's capabilities somewhat, and the jumpers that let you determine how the serial programming stream connects between boards could be better placed. More problematic, I accidentally tied all the shift registers' reset lines low, when they're actually active low - they should be connected to the 5V rail. After some experimentation, however, I came up with a greenwiring solution for this, which you can see in the photos below; it doesn't even add to the construction time by more than a couple of minutes per board. This bug is, of course, fixed in the schematics.


How does it look once assembled and working? Very cool. It may not be anywhere near as capable as a real FPGA, but it's also a lot easier to inspect and understand. With LEDs on all the bus lines you can see exactly what the internal state is at any time, which makes debugging a whole lot easier.

Here's one of the boards in the array fully constructed and hooked up; click the photo for more.

Of course, it wouldn't be complete without a video of the DFPGA in action...


All the design files, along with the DHDL compiler, test suite, and demo definitions are open source under the Apache 2.0 license. You can find them all on Github here. If you decide to build your own DFPGA, or find the schematics or code useful in your own project - let me know!

Future developments

Remember how I rubbished the use of dedicated memory chips at the beginning, saying that all the ones available now are too big, and too difficult to interface with? Well... that's not quite as accurate as I thought when I was designing things.

It's true that the memory you can get is mostly larger than we need, but that can be an advantage in moderation - it means it's possible to construct much more capable slices. How would you like a slice with 8 inputs and 4 outputs, that can output to any of the bus lines, and has 4 bits of internal state, allowing it to implement a 16-state state machine in each slice? And all with a little over half as many ICs as the design above? It turns out that with a few clever tricks, that ought to be possible - with one catch.

The catch is this: the smallest SRAMs available are 256 kilobit, which is really quite large - so much so that an embedded processor like an Arduino could never program even one of these slices without external memory. We can use EEPROMS instead, which tend to be a bit smaller and could be easily programmed ahead of time, but that still leaves us needing a way to store the other configuration bits, such as the output enables. EEPROM shift registers, unfortunately, don't really seem to exist.

With a little clever optimisation, though, a compact design that loads the output enable state from the EEPROM at power on is possible, albeit somewhat more complicated than the current design. Unfortunately, I suspect the demand for discrete logic FPGAs - even fairly capable ones - is low, so it's unlikely this design will ever see the light of day.

I could be wrong, though. Do you want your own discrete FPGA? Can you think of a practical use for one? Let me know in the comments!

Despite money and effort, homelessness in SF as bad as ever

But, despite all the money and effort, reality on the streets hasn’t improved. In many ways, homelessness in San Francisco is as bad as ever.

Just-released numbers from January’s homeless count, conducted every two years as a requirement to receive federal funds, show a very slight decrease. The drop is attributed to fewer families and youths among the homeless, while the number of single adults living on the street — the most visible — has risen.

The waiting list for nighttime shelter beds also has risen, from not even 900 last year to about 1,100 now.

Residents’ complaints to the city’s 311 line about tent encampments, needles and human feces are way up. In 2016, people made 22,608 complaints to 311 about encampments — a fivefold increase from the previous year.

But the biggest indicator is merely walking around the city, where it’s obvious the misery continues.

“It’s worse — that’s my observation,” Supervisor Jeff Sheehy said.

He and his 12-year-old daughter frequently ride BART downtown from their Glen Park home on weekends to shop and explore. They used to get off at the Powell Street Station, but his daughter now refuses to set foot there. They get off a stop later, at Montgomery, and backtrack on foot instead.

“The Powell Street BART Station is basically a homeless shelter, and not a well-maintained one,” Sheehy said. “There are homeless people sprawled all over the place, sometimes shooting up, sometimes with clothes not completely covering their backsides. Some people have seen people masturbating. There’s the smell, the dirt.

“The needles, the human waste, the garbage,” he continued. “I just don’t understand why we think it’s OK.”

The mayor tries to strike a balance between assuring residents it’s not OK and maintaining full faith in the staffers he has charged with improving the problem.

Lee is “more optimistic today than I ever have been” that San Francisco is finally on track to make a big dent in homelessness, and said his final 2½ years in office will be dedicated to improving the situation on the streets.

“We will have degrees of relief,” he said.

Denise Ward is one of the first residents to move off the streets and into the newly opened Navigation Center in San Francisco’s Dogpatch neighborhood. Photo: Michael Macor, The Chronicle

But tension between departments at City Hall seems to have bogged down the response. The new homeless department focuses on long-term solutions, while Public Works crews grow frustrated that they’re cleaning the same camps again and again.

And as people on the street remain stubbornly in place, they grow older and sicker. That means help is that much harder to provide, said Jennifer Friedenbach, director of the Coalition on Homelessness.

“Seeing who is walking into soup kitchens and who we’re seeing when we do outreach, they’re barely hanging on,” she said. “Like they’re recently released from the hospital with colostomy bags. There are people with cancer on the streets, severe diabetes, heart disease, a lot of really severe mental illness combined with addictive disorders.

“We don’t have a lot of exits out for people, and people are trapped on the streets,” she said.

Clearly, the destitution is awful for people who are living on the streets. And it’s troubling for the people who see it in their doorways. In such a liberal and wealthy city — the 2017-18 city budget is a record $10 billion — why improvement always seems out of reach remains a frustrating question.

Jeff Kositsky, the director of the year-old Department of Homelessness and Supportive Housing, is certainly trying hard to answer that question and create solutions. There have been victories. The single Navigation Center in the Mission District that was open when Kositsky’s office was formed has now been replicated in the Civic Center area and in Dogpatch. Three more are scheduled to open by early next year. Together, they will offer nearly 500 new beds.

A new team to clean up tent camps has cleared 11 since September, moving methodically to ensure those living in the camps trust the team and agree to move inside. Kositsky said two of the camps have sprung back up — along Shotwell Street between 14th and 17th streets and also at 19th and Folsom streets — but nine have remained clear. He said 70 percent of the people living in those camps have moved inside.

“Thirty percent wandered off, but that’s a lot better than 100 percent,” Kositsky said. “When you just go move people, that’s what they do.”

Kositsky’s department has created a coordinated entry system that is just getting started. It seeks to help homeless people obtain housing and services based on their age, health, time on the streets and other needs, rather than plugging people into a waiting list based only on when they sought help. This is made possible by using one new data system shared among city agencies and nonprofits to smooth the process of connecting the homeless with services.

Brian Borland shaves in front of his tent in a homeless encampment at Utah and 15th streets in San Francisco. Borland has been living on the streets since he arrived from Washington state about a year ago. Photo: Paul Chinn, The Chronicle

His department has also engaged the private sector, winning a $100 million commitment from the Tipping Point Community nonprofit to end chronic homelessness and starting a $30 million public-private partnership to end family homelessness.

Despite those achievements, the picture on the streets remains dire. The slow pace of change isn’t helped by a laborious city hiring process that means the homelessness department — which has been up and running since August — still isn’t fully staffed. And that staff is spread across at least six buildings, which makes it hard to run a unified department. That won’t change for at least nine months.

With round glasses and a suit jacket, Kositsky looks like a professor and talks like one too, using phrases such as “strategic framework” and “smart intentionality.” He’s working on a long-term plan to address homelessness and believes focusing on the big picture will pay dividends in the long run. He says the next biennial homeless count, in 2019, will really show results.

For many San Franciscans and City Hall functionaries, two more years is too much time.

There has been tension in recent months between the long-term planner, Kositsky, and the city’s fix-it-guy, Public Works Director Mohammed Nuru, whose job is to clean the streets and camps.

“We feel like we’re a maid service,” Nuru said. “We clean, we come back. We clean, we come back. The real question is, ‘Are we getting anywhere?’ We don’t want to just continue going around in circles.”

Donning a baseball cap and rolled-up shirtsleeves during a recent interview in his office, Nuru looked like the hands-on fixer reflected in his Twitter handle, @MrCleanSF. Nuru said the mayor’s administration has done a great job opening Navigation Centers and finding new supportive housing. The Department of Homelessness and Supportive Housing has opened 303 units of permanent supportive housing in the past year.

But, Nuru said, the time needs to come when people are no longer allowed to sleep on the streets, and the city stops looking the other way when homeless campers inject drugs, cook over open flames, block sidewalks and streets, run bicycle chop shops and break into cars.

“They’re the types who really decrease the quality of life that people expect,” Nuru said. “We’ve got to take them on. ... I am very frustrated, and I have been having closed-door meetings with my team and with other agencies, and we’re going to put a stop to this.”

Nuru wasn’t specific about how he’s going to do that, but he said the tent camp issue has grown notably worse since the Super Bowl in February 2016. A possible reason for that, he said, is the football extravaganza caused advocates for homeless people to fear that those on the streets would be pushed aside permanently, so they began handing out free tents. So now some parts of the city are filled with REI tents — and destitution.

Nuru said that he meets with the mayor often and that when the tensions between him and Kositsky became public, the mayor tried to reassure him.

“He has asked me to try and embrace my colleagues and not get frustrated,” Nuru said. “It just hurts me to see a beautiful city like this.”

Lee said both men are doing important work that reflects the city’s commitment to “the short- and long-term care of individuals on the streets.” He added that he agrees with Nuru that lawbreaking in the camps can’t be ignored.

“I think we have historically given the Police Department mixed messages,” he said. “Some politicians will say, ‘Yeah, I want those activities to stop,’ and some say, ‘Why would you criminalize the homeless?’ The police are kind of stopped in their tracks.”

Lee said he is adding police officers to the teams that clear homeless camps and is funding new efforts such as harm reduction centers where injection drug users can receive services and supplies, and more beds in the emergency psychiatric ward at San Francisco General Hospital.

“This will take time,” Lee said of improving the city’s streets. “But now I have some answers. We’re breaking it down into biteable sizes.”

Kositsky said he, too, understands the frustrations of Nuru and neighbors who are sick of seeing swelling tent encampments on their blocks. Solving it, though, takes time, he said.

Asked to grade his department’s performance in its first year, Kositsky said he and his staff earn an A for moving in the right direction.

“In terms of the pace at which we’re doing it? I would say a B-minus,” he said. “I certainly wish things were going faster, but they’re going at a steady pace.”

Kositsky said one of the hardest parts of his job is that his successes are hidden away. Once that homeless panhandler is moved inside, you never think of him again. But if that tent encampment is on your block day after day, frustration mounts.

“Our successes are invisible, and our failures to resolve a problem are very evident,” he said.

To the residents of San Francisco who see tent camps on their corners and step over syringes every day, they’re very evident indeed.

Heather Knight is a San Francisco Chronicle staff writer. Twitter: @hknightsf
