Features

A CRISPR Cut

A CRISPR Cut: Jennifer Doudna ’85 didn’t set out to revolutionize genetic engineering, but that may be exactly what she did.


THE ACTION AT scientific conferences mostly happens in conference rooms and hotel bars, but sometimes the players break out to see the sights. That’s what Jennifer Doudna was doing at a conference put on by the American Society for Microbiology in spring 2011. She attended a session on a type of bacterial genetic sequence called a clustered, regularly interspaced, short palindromic repeat—CRISPR, for short. The sequences seemed to have something to do with an immune system for bacteria, though to be honest, Doudna, a biochemist at UC Berkeley who specializes in the three-dimensional structure of genetic material, thought they were “a boutique area of science” at best.

That night, Doudna ran into someone else who was working on the same problem. In Germany, Emmanuelle Charpentier’s lab was studying what most people call “flesh-eating bacteria,” a species of Streptococcus, and Charpentier’s team had found something important in one of their pet bug’s CRISPRs—it made a protein called Cas9. But she needed help to understand all of the moving parts. So Charpentier asked Doudna if she wanted to team up. Doudna said yes. Typical conference stuff.

Over the next few months, Doudna’s postdocs in California worked with Charpentier’s teams in France and Germany. But what they started to figure out began, slowly, to look a lot bigger than just an immune system for flesh-eating bacteria. CRISPR/Cas9 made a complex structure of protein and genetic material that looked like it could cut DNA—which is to say, genes—but it was precisely targeted, almost as simple as putting a cursor between two letters on a computer screen and clicking delete. “There were other techniques in the literature, but they were difficult,” Doudna says. “This is the kind of technique that, in principle, anybody who knows anything about molecular biology will be able to do.”

Doudna and Charpentier wrote a paper for Science, one of the world’s premier scientific journals. When it came out, in summer 2012, the scientific community went nuts. By the end of 2013, hundreds of papers from labs all over the world had confirmed that, yes, not only did CRISPR make editing a genome nearly as quick and easy as editing a magazine article in Word, but it worked in just about every living thing—yeast, zebrafish, mice, stem cells, in-vitro tissue cultures, and even cells from human beings. Most gene-editing techniques work in theory, but in practice they are complicated, ornery and prone to failure. But with CRISPR: You want that gene over there? You got it. Companies formed seemingly overnight to turn CRISPR into medicines, research tools, and maybe even profits. Doudna’s lab was at the center of a shift that could be every bit as significant as being able to sequence the genome.


THE KINDS OF CELLS that you and I have encode information in the form of deoxyribonucleic acid—DNA, a long backbone of two spiraling strands bridged by “base pairs,” the famous A, T, C, and G that make up the genetic sequence. But DNA isn’t the only genetic material. When it’s time to make proteins, cells unspool lengths of DNA from their tightly packed chromosomes and make a cheap copy called ribonucleic acid, or RNA. It’s this RNA that other machinery in the cell reads—the sequence of A, C, G, and U (replacing the T) represents amino acids, and amino acids put together are the proteins of which we are mostly made. It’s a cool system.

RNA, though, is kind of weird, because in addition to containing information, it can also form structures that do jobs. In fact, the biological machine that reads RNA and outputs proteins, called a ribosome, is itself largely made of RNA. In this particular corner of molecular biology, the map is also the terrain. This dual personality is behind the “RNA world” theory, the idea that RNA’s ability to both carry and initiate the code of life means it gave rise to all life on earth.

It’s also what compelled Jennifer Doudna, freshly graduated from Pomona, to get her PhD. Growing up on Oahu, she knew she wanted to be a biochemist; a set of seminars she attended in high school had sealed that deal. She studied biology as an undergraduate, worked in real labs, and pointed herself at grad school in Boston as soon as it was time to apply. She ended up getting into the laboratory of Jack Szostak, a leading champion of the RNA world idea.

Doudna decided that she wanted to understand those RNA structures—figuring out the structure of ribozymes and other so-called catalytic RNAs. Basically that meant trying to get them to crystallize and then X-ray them. It was, Doudna says, a methodological challenge that was “every bit as cool as I could have imagined.” Eventually she ended up a professor at Yale, and she solved a few of those structures. Doudna was earning a reputation as an ace in a field without many practitioners. “She’s careful and diligent in pursuing all the leads without cutting corners,” says George Church, a Harvard Medical School geneticist who remembers Doudna’s student days there. “And she has a good knack for picking the right topics.”

The West Coast lured her back; in 2002, Doudna took a job at UC Berkeley. “I had always considered myself a basic scientist,” she says. “But you want to feel like your work is going to help solve human problems at some level.” She started working on diseases caused by RNA mutations, and on a technique called RNA interference, or RNAi. Basically it uses small molecules to interrupt the translation of RNA into proteins, to try to fix problems before they start. And to make it work, you have to understand the structural characteristics of RNA.

At about the same time, some food researchers in Copenhagen were learning something new about yogurt. Turning milk into yogurt requires specialized bacteria, but every so often those bacteria get sick—just like people, they get attacked by viruses trying to hijack their cellular machinery. The Danish team found that bacteria exposed in advance to the viruses, called bacteriophages, became immune. It was like vaccination, but for microbes.


HOW’D IT WORK? In the late 1980s scientists found long, repeating sequences in bacterial DNA that were the same back-to-front—palindromes, in other words. And between the palindromes: nonsense. At least, that’s what they thought at the time. But the genetic gibberish turned out to be quoted from bacteriophage DNA. Put all that together and you got RNA structures that could target specific DNA sequences in a virus, and a protein that would chop that DNA up, destroying it. It was, in other words, an immune system. The Danish yogurt makers had hit upon a rudimentary way of programming it to hit specific viral targets.

That’s why Charpentier started studying it. “I’m interested in how bacteria cause diseases, and how they can become resistant to antibiotics,” she says. “Initially the goal was to look for this class of small RNAs to find one with a nice regulatory function. Coming to CRISPR was in a way a bit by chance.” It wasn’t crazy to imagine that she’d find a useful enzyme in her work—most of the DNA- and RNA-cutting enzymes used in labs were isolated from organisms found in nature.

The thing was, though, that even the most advanced techniques for cutting-and-pasting DNA and RNA were really tough to use. The two best approaches, “zinc-finger nucleases” and “TALEN,” required the creation of a new, bespoke protein every time, coded to the specific sequence a researcher wanted to cut. “Zinc finger nucleases were originally priced at about $25,000 each. You could do it yourself for a little less, but it extracted a corresponding amount of flesh,” says Church. “TALENs looked easier, but they were particularly hard to engineer biologically. It lasted for about a year, a year and a half, as a fad.”

The RNAi Doudna was working with turned out to have similar problems. Usually you try to engineer a bacterial or insect cell to make protein or RNA. However, you then have to purify the protein or RNA that you want out of all the gunk you don’t. Those methods took a whole skill set, and not everyone had it.

In the late 2000s, though, postdocs in Doudna’s lab were starting to get really good results in experimenting with CRISPR. It was easier to do and didn’t need custom-made proteins. Doudna’s and Charpentier’s two labs together realized that in the case of CRISPR/Cas9, the same protein was doing the cutting every time. The only thing that changed was a pair of adjacent RNA structures—and you could engineer their function into a single short, synthetic stretch called a guide RNA that was really easy to make. “We figured out how to program Cas9 to cut any sequence in DNA just by changing the guide RNA,” says Doudna. When she and one of her students realized what they had, sitting in her office at Cal, “we looked at each other and said, this could be an incredible tool for genome engineering.”

On the other side of the world, Charpentier was just as stunned. “All the other tools, each time you want to target DNA at a specific site you have to engineer a new protein. This requires time, and it’s not easy for someone who isn’t used to it,” Charpentier says. “With Cas9, anyone can use the tool. It’s cheap, it’s fast, it’s efficient, and it works in any size organism. It’s revolutionizing biology.”

So what’s CRISPR actually going to be for? That’s still being determined at labs all around the world—and at a handful of companies that spun up in the dizzy aftermath of the Doudna-Charpentier paper. “There’s a distinction between CRISPR-Cas9 as a therapeutic tool, using it to correct mutations in cells that would then be reimplanted in a patient—or delivering Cas9 directly,” says Charpentier. “But the other possibility is more indirect. That’s using it as a tool in labs for development, to help screen drugs or to understand a disease by using it to create models of the disease in animals.” In other words, you could use the technique as a medicine, to correct a mutation directly, or use it to induce a mutation you wanted to study in an animal to test other possible drugs.

Charpentier herself is one of the founders of a company, CRISPR Therapeutics, based in Basel, that’s planning to focus on making treatments for people. The company’s CEO, Rodger Novak, is a longtime drug development exec, and the company is well-funded, but even Novak acknowledges what they’re doing won’t be easy. “What biotech pharma always struggles with is the biology of the target. In many instances we don’t know until late-stage development, pivotal human trials, if the target we’re using is the target we need,” Novak says. “The other challenge is delivery. If you go after the liver or the lungs or the brain, very different requirements apply.” He says he’s cautiously confident.

Meanwhile on the other side of the Atlantic, Doudna had teamed up with a few other CRISPR pioneers, as well as George Church, to start her own company: Editas, based in Cambridge. Doudna remains a “co-founder,” but is no longer associated with the company. Church and Doudna got $43 million from a handful of well-known venture investors to spin the company up, aiming, he says, at “a large number of genetic diseases, both common and rare, especially those that might require the removal or editing of DNA, rather than just the addition.” Other therapies in trials are better than CRISPR-Cas9 at adding DNA, inserting a gene, says Church. CRISPR-Cas9 is much better at cutting—harkening back to its original function in the flesh-eating Streptococcus Charpentier studied.

Back in Doudna’s lab at Berkeley, though, her team is still trying to answer some fundamental questions. No one doubts that CRISPR works, but some researchers still worry about whether they can target it narrowly enough to work as a therapeutic. But Doudna would like to know how its targeting system works at all—with just 20 bases of RNA it can somehow home in on any sequence of DNA. No one knows how. “We’d love to figure that out,” Doudna says. And no one knows how it acquires the “spacer” sequences, the genetic information between the palindromes. That would be the secret to how CRISPR works as an immune system in bacteria, and for now it’s a mystery.

As the excitement around CRISPR has continued to grow, no one seems more surprised than Doudna herself. “I was working away in my lab on a bacterial immune system. Genome engineering wasn’t on my radar,” she says. “If you had asked me in 2007 if the CRISPR system was going to be useful, I don’t know what I would have answered.” Today, though, Doudna is a little more sure: If it continues to work as we expect, it’s going to change everything.

 

EDITOR’S NOTE: In November, Doudna and Charpentier were awarded the 2015 Breakthrough Prize in Life Sciences, each receiving a stipend of $3 million. Their discovery was also featured as one of 10 “World Changing Ideas” on the cover of Scientific American.

Bamboo Bicycles Beijing

Bamboo Bicycles Beijing: In a city besieged by six million motor vehicles, how do you bring about change? David Wang ’09 believes it might be one bamboo bicycle at a time.


IN A NARROW ALLEY between the twelve-lane roar of Beijing’s Second Ring Road and the touristy mayhem of the Drum and Bell towers is a whitewashed shoebox of a space that is more an extension of the little concrete stoop out front than an actual room. There, on a late-summer Saturday this year, four early risers stand peering at a matter-of-fact list of instructions jotted on a whiteboard. Sweet odors of epoxy and sawdust mask the gritty, metallic smell of the smog blanketing the city.

Knobby sticks of bamboo are lined up on four workbenches. A diagram of a bicycle frame is posted on the wall above each bench. In a day’s time these sticks will be frames and, in a few days more, they will be rolling off down the street with their builders on board.

At the moment, though, the diagrams look like a dare.

Here at Bamboo Bicycles Beijing’s ninth and last bicycle-building workshop of the year, David Chin-Fei Wang ’09, compact and boyish at 28, moves calmly among the participants, taking up a saw and guiding its teeth into a bamboo tube, adjusting a vise or murmuring encouragement to someone who has improvised a solution to a tricky cut. The industrious humility of the scene belies the seriousness of the issue Wang hopes to address: the smog-belching, land-devouring, atomizing urban gridlock that besets cities across Asia as newly wealthy societies embrace the private automobile.

In Beijing, a city once famous for its bicycles, car worship is powerful. Tree-lined bike paths have been leveled to accommodate the city’s six million vehicles. An ever-expanding network of ring roads radiates deeper and deeper into the countryside, and car exhaust accounts for a quarter of the city’s infamous smog. All of this brings little joy to drivers. Though China has become the number one consumer of some of the world’s fastest cars—Lamborghinis, Porsches and Ferraris abound—traffic creeps along at an average speed of under 10 miles per hour.

Wang hopes that, by helping people make fine bikes by hand, he can get them to think differently about the way they move around the city.

“It’s not about getting rid of cars,” says Wang. “It’s about starting a conversation about a more diverse mobility culture.”


AFTER GRADUATING FROM Pomona with a degree in Asian Studies, Wang won a Fulbright to study physical fitness programs and nationalism in China, where Mao Zedong once prescribed a regimen of exercises, saying, “The body is the capital of the revolution.” After spending time at Xi’an Jiaotong University, though, he found what the students did in their free time more interesting than the program of study.

Moving to Beijing, he took a job with China Youthology, a market research company set up to help brands like Mercedes, PepsiCo and Nokia understand how Chinese kids in their teens and twenties make decisions.

As Wang settled in Beijing, he started noticing the old bicycles abandoned around the city, sometimes in heaps and sometimes tethered alone to a fence or tree. It seemed like a waste. Scouring the sidewalks for something he could refurbish, he decided on a Yongjiu brand cruiser from the 1980s.

In his living room he cleaned and stripped the old frame, tinkering with components until everything was back in working order. He painted the bike Fanta orange, added some tiger stripes and set it atop white tires.

Wherever the tiger bike went, it drew curious onlookers. Though China produces more goods than any other country, making stuff by hand is not a pastime for the urban elite. A new middle class has so thoroughly rejected handicraft that even the building of IKEA furniture is mostly outsourced. So Beijingers were uncertain what to make of something so humble made with such obvious care.

Wang, however, wasn’t satisfied. In a country where the mining and refining of metals have tainted rivers, soil and air, he wanted to work with something friendlier to the environment. So naturally, he thought of a resource that China has in great abundance—something beautiful and rapidly renewable, a species of grass that can grow as much as 35 inches in a day, producing a material pliant enough to act as a natural shock absorber, yet durable and stiff enough to handle well.

He thought of bamboo.

The idea of building bicycle frames out of bamboo isn’t new. The first bamboo bicycles were a sensation at the London Stanley Show of 1894, and there are several companies manufacturing and selling them today. In China, a small start-up called Shanghai Bamboo Bicycles has been marketing bamboo bikes and trikes since 2009. But Wang wasn’t interested in selling bamboo bikes—he just wanted to build one.

So he ordered a few lengths of bamboo on Taobao, China’s vaster version of eBay, and set to work figuring out how to make them into a frame. Cobbling together information from the web, Wang worked in the living room of the apartment he shared with friends until he had his first completed bamboo bike.

If his tiger bike attracted attention, the bamboo bike was a showstopper. “People were always stopping me to ask about the bike,” he says. Wang enjoyed the resulting conversations with people from all walks of life, from stolid middle-class citizens to fashion-conscious kids, and in their curiosity, he sensed an opportunity to make a difference.

During his research on the web, he had stumbled onto a number of bike-building workshops in places ranging from Africa to Australia. Maybe, he thought, a workshop to teach people in Beijing to build their own bamboo bikes would deepen the impromptu conversations he was having in the street and create a cascade of conversations about the sustainability of China’s growing car culture.

But first, he had to scale up.

He streamlined his building process and equipment so that four participants could be fairly certain to produce a bamboo frame in just two days. Then, to get a better understanding of his new medium, he traveled to Taiwan, where some of the best bamboo craftsmen taught him which bamboo was best suited for bicycles and how to work with it. Through a Kickstarter campaign, he raised $17,869. A campaign on China’s Dreamore crowdsourcing platform brought in another $3,000, and Bamboo Bicycles Beijing was born.


“I AM YOUR TYPICAL victim of a Chinese education,” laughs Danna Zhu as she works on her bicycle frame. “There are so many of us. They don’t let us make anything in school.” Posing with a bandsaw poised above a tube of bamboo, she asks the gangly bespectacled college dropout at the next station to take her picture. She’s never used a saw before.

At 23, Danna is part of a post-1990 generation of Chinese women widely chastised for their materialism. Back in 2010, when a contestant on a reality TV show told a young bachelor she would “rather cry in the back of a BMW” than take a ride on the back of his bicycle, government regulators ordered reality shows to rein in the broadcast of “incorrect social and love values such as money worship.”

But a car, like a house, remains a prerequisite for many brides and their families.

“They’ll still want cars,” Zhu says of her friends.

Ragtag kids from the hutong—the narrow lane outside—skitter about her as she works, steadying a bamboo tube while she saws or pilfering bamboo scraps to paint on the steps, where an old woman driving a tricycle cart loaded with soda stops to banter. Another grandma pauses on her way from the wet market to give a thumbs up.

The kids seem to appreciate the workshop more than anyone. Against one wall is propped a small bike. The kids built it themselves. For days before the September workshop, they have been coming by, clamoring to know whether Wang has finished adding components like wheels and a seat. On the first day of the workshop, Fei Fei, 9, finds the bike ready at last. Quietly he lifts it out onto the street, pedaling a tentative few yards before disappearing around the corner.

Twice in the course of a few hours, people stop to compliment the bikes and ask how much they cost. When a grizzled man towing recycling on his tricycle cart repeats the query a third time, Wang, visibly piqued, says again that the bicycles are not for sale.

“They think it’s just about selling a bike and it’s not.” What it is about, as he often repeats, is starting a conversation.


IT’S FALL 2014, and Wang is planning an October trip to Massachusetts, with several frames for supporters of his Kickstarter campaign stashed in his luggage. The project is at a turning point—the goal he had set for himself and online funders had been to build 25 bikes. He has doubled that and is on target to make 75 by the end of the year. All of this is good, but he wonders how he might further the conversation, perhaps tapping into the kind of enthusiasm he’d seen in the kids on the lane.

He’s looking into partnering with local schools. At the same time, he’s searching for ways to make more and better bikes in as sustainable a way as possible, tracking down villages with abundant bamboo, looking for alternatives to epoxy, and working on kits for people to make bikes at home.

The frames themselves are slowly morphing—he made his first women’s and kids’ prototypes in late summer. Participants have egged him on with requests for things like cup-holders, children’s seats and multi-gear models.

“Right now I’m just concentrating on making the next bike,” Wang says. “I’ll follow my curiosity and hope it just grows.”

Recently, Mercedes, an old client of his, got in touch.

“Benz is doing a lot to create more efficient and sustainable urban mobility,” he says. “It’s to their credit. They see the way things are isn’t sustainable.”

Back on the lane, an old, flat-faced Liberation truck filled with sand for construction has blocked the T-junction. While a bleating line of cars and jerry-rigged motorized tricycle trucks forms along the cross road, an antlike stream of pedestrians and scooters improvises a path through a pile of sand. It’s along this path that two bamboo bicycles slip quietly northward, leaving a Lexus SUV in the dust.

 

Making Ideas Happen

Making Ideas Happen: Dick Post ’40 has had an amazingly productive life in science, and at age 96, he’s still going strong.


REMEMBER THE U.S. ARMY commercials from the 1980s? A voice intones: “We do more before 9 a.m. than most people do all day.” If Mad Men’s Don Draper were to coin a catchphrase for Dick Post, it would read: “He’s made more discoveries after age 90 than most scientists do in their careers.”

For more than six decades, Richard F. Post ’40 has dedicated his life to solving energy challenges. “My whole career has been shaped by energy,” he says. An applied physicist at Lawrence Livermore National Laboratory (LLNL), Post has 34 patents to his name—nine issued since he turned 90—in nuclear fusion, magnetics and flywheel energy storage.

Post, who turned 96 in November, maintains a work schedule that would exhaust a man a quarter of his age. Retired since 1994 and officially known as a “rehired retired scientist,” Post clocks 30 hours at the lab each week. Until a fall last year injured his shoulder, he was driving himself the 60-mile round trip from his home to LLNL four days each week. “You know why he takes the fifth day off?” asks Steve Wampler, a public information officer at LLNL. “Because he’s retired.”

Post may take a day off at the lab, but that doesn’t mean he stops working. “Friday through Sunday are not always days off for Dick,” says Post’s colleague Robert Yamamoto, principal investigator for LLNL’s electromechanical battery program. “When Dick comes to work at LLNL on Monday morning, I can generally expect to get a call from him. Dick will want to chat about his latest ideas and thoughts he has developed over the past 72 hours,” adds Yamamoto. The two often meet for lunch on Mondays in Dick’s office. “Dick wants to know if his ideas are practical and have some ‘real meat on the bones.’”

“I learn so much, and am equally amazed at the quality of the new ideas Dick has and his extreme enthusiasm for his ideas—as if he was a person just starting off in his science career,” he says.

 

POMONA ROOTS

Post’s family connections with Pomona College go back more than 100 years. His grandfather, Daniel H. Colcord, was professor of classics, and his mother, Miriam Colcord, graduated in 1910. Post credits Pomona College with helping him discover both his life’s work and the love of his life.

In Professor of Physics Roland R. Tileston he found both an academic mentor and a life mentor. “He not only looked after my academic side; he looked after my social side,” Post says. “He would call me in on Monday morning and say, ‘There was a dance on Saturday. Did you go to it? Did you take a different girl?’”

Tileston took special care to ensure that promising students had the financial means to undertake graduate studies. “In my case, for financial reasons, he kept me on for a year as a graduate assistant—700 bucks, saved half of it. And then, in 1941, he started me as an instructor in physics because he had arranged for a fellowship for me at MIT.”

Then came Pearl Harbor. Tileston immediately recommended Post for a position at the Naval Research Laboratory near Washington, D.C., where a former student of Tileston’s, Dr. John Ide, was director of the sonar program. “When I left, I flew from Los Angeles airport and he brought the whole physics class to see me off,” recalls Post. “He was a marvelous professor, and I owe him a tremendous debt.”

It was while he was living in Virginia that a friend and fellow alumnus, Vince Peterson ’43, heard that some Pomona College girls were in town and decided to throw a party. “Marylee [Marylee Armstrong ’47] and her friends came,” Post recalls, “and it was 15 minutes before I knew that this is the one I wanted to spend my life with.”

 

FROM FUSION TO FLYWHEELS

It was nuclear fusion that brought Dick Post to Lawrence Livermore National Laboratory in 1952. After completing a doctorate in physics at Stanford and a brief stint at nearby Lawrence Berkeley National Laboratory, Post heard Herb York, the first director of LLNL, deliver some lectures on the topic. Post later followed York to LLNL, interviewing for the job with the lab’s co-founder and future director, Edward Teller, known as the father of the hydrogen bomb. Magnetic nuclear fusion would be the focus of Post’s research until the mid-1980s.

Convinced of the promise of the technology, Post regards the premature termination of his line of fusion research, in an act of Cold War politics, as a regrettable mistake. “The fuel reserve for fusion is infinite. There’s no long-term radioactive problems, no carbon. That’s why I devoted most of my career to fusion research—until the budget was cut in a deal between Reagan and Gorbachev,” he says. LLNL was forced to phase out its magnetic fusion program, and Post had to shift his attention to other fields. But, in what would become a hallmark of his career, Post was able to apply lessons learned from his research in nuclear fusion to a seemingly unrelated area: magnetically levitated trains.

Through a competitive seed-money grant program run by the LLNL director, Post was awarded a couple of million dollars to develop a concept for maglev train technology. The concept, Inductrack, was later licensed by the San Diego-based defense contractor General Atomics and, in 2004, a 120-meter test track was built.

“It involves a special array of permanent magnets mounted underneath the train and a track, which consists of shorted parallel conductors. The levitation only requires motion. The test model levitates at walking speeds. The Japanese superconducting train has to be over 100 miles per hour before it levitates,” says Post.

“Why is this important?” he asks. “Because if all power fails, this train doesn’t give a darn. It slows down to walking speeds and settles down onto its wheels. It’s a totally fail-safe system. That’s going to be a very important characteristic of future maglevs—they’d better be fail-safe.”

Plans to build a 4.6-mile, full-scale demonstration system on the campus of California University of Pennsylvania were put on hold when the Great Recession hit in 2008. Post says Inductrack technology has been licensed to two other companies—he couldn’t give their names.

In another case of iterative innovation, Post is applying knowledge gained from the development of Inductrack to his latest—and likely last—research focus: flywheel energy storage. Post’s thinking on the topic has had a long gestation. In December 1973, Post and his son Stephen published an article in Scientific American making the case that advances in materials and mechanical design made it possible to use flywheels to store energy in the power sector and in the propulsion systems of vehicles.

Forty years later, Post believes he might actually see his concept hit the market within a few years.

The principle of the flywheel is quite simple. A spinning wheel stores mechanical energy; energy can be either put in or taken out of the device. Post’s flywheel research builds upon work done in the 1950s by MIT researcher John G. Trump on electrostatic generators. “He didn’t appreciate the importance of the charging circuit that charges this thing and puts the voltage on it. He used a resistor,” Post says. When Post started work on electrostatic generators for flywheels, he asked himself a question: “Why the heck did Trump use a resistor? Why didn’t he use an inductor, to reduce losses?” Post experimented on his computer, substituting an inductor for the resistor in his model. “All of a sudden, the power took off by almost a factor of 100,” he says.

Many flywheel manufacturers use what are called active magnetic bearings to suspend and stabilize the flywheel—not Post. “What we have developed over many years now,” he says, “is what is called passive magnetic bearings, which are self-stabilizing and don’t require any feedback circuits. They are extremely simple compared to the active bearings, much less expensive, and don’t require any maintenance.”

“If we design it right,” he adds, “it has an almost indefinite lifetime because there are no wearing parts.” This is in contrast to, say, the lithium-ion batteries used in smart phones, laptops, and electric vehicles, which have a limited life cycle of charges and discharges before they must be replaced. Because Post’s flywheel operates essentially friction free, energy dissipation is very low. The flywheel operates at 95% to 98% efficiency—that is, up to 98% of the energy stored in the flywheel can be extracted and put to use.

Post envisions the technology used in large- and small-scale applications. Grid-scale units would be deployed by utilities or grid operators to help balance the variable output from large solar and wind power plants. A 5-kilowatt residential unit would be about the size of a suitcase, says Post, with the flywheel sealed in a vacuum chamber, perhaps under the floor of the garage.

Post says a company is interested in adapting his flywheel technology for use in cars. The idea is not new. Flywheels are already used by Formula 1 racing cars to supply quick bursts of power. For the consumer market, manufacturers would install a series of small flywheels that would, in all-electric vehicles like the Nissan Leaf or Tesla Model S, replace a battery pack entirely; for hybrids like the Toyota Prius, flywheels would replace the small battery pack that boosts efficiency.

“We could substantially extend the range and eliminate the fire hazard posed by lithium-ion battery technology, and avoid the battery life cycle problem,” says Post.


MAKING IT HAPPEN

“If I have seen further, it is by standing on the shoulders of giants,” said Sir Isaac Newton. If so, the spry Dick Post must be a contortionist, because one set of shoulders he is standing on is his own.

He says one reason he has been so productive for so long, especially after age 90, is that his recent patents build upon earlier discoveries. He is using the magnetic-bearing work from Inductrack, for instance, in his current flywheel research. “Even though it’s a totally different field, it’s the same concept,” he says. “One of the reasons that I can still file patents is I can draw on past experience and convert it to a present problem.” He has filed 28 of his 85 records of invention, a precursor to a patent, since he turned 90.

Asked where his ideas come from, Post mentions that he recently read a book about the inventor Nikola Tesla. “He was able to visualize, without hardware, his inventions. That’s part of the process for me. When I go to sleep, I think: The magnets could go this way.” Post also credits software invented decades after his career began. “The tremendous help to me in this whole process is I learned how to use Mathematica, which is a very sophisticated computer mathematics program. I couldn’t live without it.”

For Post, the labors of his theorizing mean nothing if the result cannot be used to solve a problem in the real world. “Dick certainly is an ‘idea’ man,” says colleague Robert Yamamoto, “but he truly understands the need to do the real engineering to make an idea come to fruition. He knows that theory alone can only take you so far, and that an idea, unless demonstrated completely, is not worth much. He sincerely understands the real-world, make-it-happen aspect that engineering brings to the table. This is not a common trait of many of the scientists I have worked with.”

Asked what drives him to come to the office four days a week, Post answers without hesitation: “The flywheel. I want to see this happen! I’ve devoted my career to energy. For Pete’s sake, we were torpedoed on fusion, which I think was a terrible mistake.

“My real hope is that this thing becomes commercial before I kick the bucket. I would like to see it happen.”

Hackers

Hackers: Hackathon: A deadline-driven, energy-drink-fueled rush to create something that just might become a Silicon Valley startup but is more likely to be remembered as one of those crazily fun things people do in college when they are alight with intelligence and passion.

It was almost dawn outside Lincoln and Edmunds halls, and the clicking of laptop keys on a Saturday morning had slowed to a persistent few. Three students slept in chairs in the Edmunds lobby, one next to a lone coder at his keyboard. In the Lincoln lobby, a quilt lay seemingly abandoned in a clump on the floor. Then it moved, and the petite student who had been slumbering beneath it climbed into a chair and disappeared under the quilt again.

Upstairs, John Verticchio ’15 looked around the windowless room where he’d spent the night working with three friends. “Is the sun up yet?” he asked.

Welcome to the 5C Hackathon, the all-nighter that lures as many as 250 students from The Claremont Colleges each semester to stay up building creative and often elaborate software projects and apps in a mere 12-hour span. It is a deadline-driven, energy-drink-fueled rush to create something that just might become a Silicon Valley startup but is more likely to be remembered as one of those crazily fun things people do in college when they are alight with intelligence and passion.

The event is student-created and student-led, built from scratch by three Pomona College students in 2012 with a budget of $1,000 and 30 participants. By the fifth 5C Hackathon in April, the budget had grown to $13,000 and the semiannual event had drawn sponsors that have included Intuit, Google and Microsoft. The codefest also is supported by Claremont McKenna’s Silicon Valley Program, which helps students of The Claremont Colleges spend a sort of “semester abroad,” studying while interning at a technology company in Northern California.

The 5C Hackathon is a one-night gig. Competitors are allowed to come in with an idea in mind, but “the rules are that you have to start from scratch. You’re not allowed to have pre-written code,” said Kim Merrill ’14, one of the three co-founders. “It’s all about learning, having fun, staying up all night. It’s not a heavy competition.”

As students wandered into the Seaver North Auditorium around 7 on a Friday night, Merrill, who will go to work for Google as a software engineer in the fall, sat on a table in front wearing shorts and a green H5CKATHON t-shirt as hip music played on the audio system.

The aspiring hackers—how odd that a term that once referred to computer criminals has become a compliment—carried backpacks and laptops, sleeping bags and pillows, the occasional stuffed animal and Google swag bags holding USB chargers, blue Google knit caps and Lego-like toys in boxes emblazoned with the words “google.com/jobs.” This looked like serious fun, and contrary to the stereotypical image of computer geeks, there were women everywhere.

“Having Kim leading the whole thing, I think, has been really powerful for that,” said Jesse Pollak ’15, a former Pomona student who was visiting Claremont for the event he co-founded with Merrill and Brennen Byrne ’12 before leaving school last year to join Byrne in founding a Bay Area startup. (Clef, a mobile app, replaces user passwords on websites with a wave of your smartphone and has been featured by The New York Times.)

“I came in my first year and I knew I wanted to study computer science, and I was hoping there would be, like, a scene here for people who like building stuff, and there wasn’t then. There was nothing,” said Pollak, who didn’t start coding until his senior year in high school. “So I started trying to track down people who were interested in that sort of thing.”

He found them in Byrne and in Merrill, who had planned to be an English major but started coding after an introductory computer science class as a freshman at Pomona.

The event they founded gave the 5Cs an early start on what has now become a national phenomenon. “Hackathons were a new thing and most were on large campuses,” Merrill said.

Hackathons have exploded into prominence in the last two years. The second LA Hacks competition at UCLA in April drew more than 4,000 registrants from universities that included UCLA, USC, Stanford, UC Berkeley and Harvard for a 36-hour event it touted as a “5-star hacking experience” with VIP attendees. Civic groups and government organizations have gotten into the act, too, with the second National Day of Civic Hacking on May 31 and June 1 featuring events in 103 cities, many focused on building software that could help improve communities and government.

While some hackathons have gone grander and glitzier—MHacks at the University of Michigan awarded a $5,000 first prize this year and HackMIT drew 1,000 competitors vying for $14,000 in prizes at the Massachusetts Institute of Technology last year—the 5C Hackathon has remained doggedly itself. “We really wanted, instead of pushing for bigger things, to think about how we can get more people into this,” Pollak said. “You’ll see people present (projects) in the morning who didn’t know how to code at the beginning of the week and who actually built something. It’ll be small and ugly, but it will work.”

A centerpiece of the 5C Hackathon is “Hack Week,” a free beginners’ course of four two-hour evening tutorials leading up to the event, with students teaching other students such basics as HTML and CSS, JavaScript, jQuery and MongoDB, all of it an alphabet soup to the uninitiated.

Christina Tong ’17 tried her first hackathon the fall of her freshman year, picking up ideas during Hack Week that helped inspire her team to fashion a restaurant-ordering app for the Coop Fountain. This spring, continuing to teach themselves more programming languages with online tutorials, her team built a financial tracking system called Money Buddy.

It’s the “forced deadline” of a hackathon, Tong said, that helps coders power through the inevitable snags and bugs of building a program. Pressing on is a huge part of the task. “When you’re fresh, you could probably figure out those bugs decently quickly, but around 3 o’clock, it’s past your normal bedtime and you’re staring for hours at things you probably could fix when you’re fresh,” she said.

Tong’s strategy is catnaps and sustenance. The spring 5C Hackers got an 11 p.m. food truck visit and a snack spread featuring clementines, jelly beans, Oreos, Krispy Kreme doughnuts, bananas and a veggie tray. And at 3 a.m., just because it’s tradition, Merrill—who typically spends much of the night mentoring beginning teams—rallied the students for a two-minute, middle-of-the-night campus run. “It can be hard to motivate people to run at 3 a.m.,” she said.

By 4 a.m., someone had scrawled a message on a whiteboard dotted with listings for tutors: “Countdown 4 hours!”

Some didn’t make it—“I think we lost a lot more teams than we usually do,” Merrill said—but by mid-morning Saturday, 30 teams of two to four people had made one-minute slam demonstrations of their completed projects, roughly half beginners and half advanced.

Judged by America Chambers, a Pomona visiting assistant professor of computer science, and representatives of some of the sponsoring tech companies—this could be the new model of campus recruiting—the entries included efforts such as 5Cribs and the Cyborg Dorm Chooser, designed to help students pick the best dormitory rooms or suites for them.

There was a Craigslist-type site exclusively for The Claremont Colleges and an app to help recreational athletes find a pickup game on campus. One called Expression uses a webcam and face recognition to automatically select music that seems to fit the user’s mood. Another named Echo was a message-in-a-bottle app that allows people to leave audio messages for strangers that can only be heard when the person is standing near the same spot.

The Drinx app suggests cocktail combinations based on what ingredients are in the fridge. But the winning advanced project—sense a theme here?—was the Shotbot, a boxlike robot controlled by a Siri hack that makes mixed drinks automatically. Nonalcoholic, for demonstration purposes.

“Siri loves to serve drinks,” the familiar voice said after taking an order.

“We definitely used it at parties the next few weeks,” said Sean Adler, Claremont McKenna ’14, who built the project with three other Claremont McKenna computer science students—brothers Joe and Chad Newbry, both ’14, and Remy Guercio ’16—using Arduino, Python, iOS and Node.js. Their prize? Each team member received an iPad 2.

The winners in the beginners’ division, Matt Dahl, Patrick Shao, Ziqi Xiong and John Kim—all Pomona ’17—won Kindle Fires for their project, a “confessions” site similar to other popular sites that allow people to post anonymous secrets or desires. The Pomona students added several features—systems for sorting posts, marking favorites and hiding offensive content, often a concern on confessions sites.

The next 5C Hackathon will be in the fall, but with Merrill’s graduation in May—she was working for the nonprofit Girls Who Code in San Francisco during the summer before starting at Google in Seattle in late September—the three founders have left Pomona. Andy Russell ’15, Aloke Desai ’16 and Ryan Luo ’16, all of whom helped organize and competed in the spring hackathon, will return to stage more all-night programming binges, the tradition now entrenched.

Russell, his night of coding done, walked out into the quiet of an early Saturday morning, unable to make it to the presentations. He had a Frisbee tournament at 8.

The Message in the Song

The Message in the Song: National Geographic writer Virginia Morell ’71 takes us inside the research of scientists working to decode the chitters and trills of animals ranging from bats to prairie dogs.

At the Mayan ruin of Uxmal, Mexico, bat researcher Kirsten Bohn bends down beside a narrow crack in one of the ancient limestone walls. “Do you hear them?” she asks. “The twittering? That’s our bats, and they’re singing.”

I lean in, too, and listen. It takes a moment for my ears to adjust to the bats’ soft sounds, and then the air seems to fill with their birdlike trills, chirps and buzzes.

The twittering calls are the songs of Nyctinomops laticaudatus, the broad-eared bat—one of several species of bats that scientists have identified as having tunes remarkably similar to those of birds. Like the songs of birds, bats’ melodies are composed of multiple syllables; they’re rhythmic and have patterns that are repeated.

And like birds, these bats sing not during the dark of night, but in the middle of the day, making it easy for us to see them, too.

Bohn, a behavioral ecologist at Florida International University in Miami, presses her face against the crack in the wall, and squints. “Well, hello there,” she says. I follow her example, and find myself eyeball-to-eyeball with one of the bats that’s sandwiched inside. He scuttles back, but his jaws chatter at me, “Zzzzzzzz.”

“He’s telling us to back off, to go away,” Bohn says, translating. “He wants to get back to his singing.”

That suits Bohn, who has traveled to Uxmal to record the broad-eared bats’ tunes for her study on the evolution and function of bat song—research that may help decode what the bats are saying to one another with their songs, and even teach us something about the origins of human language.

Not so long ago, most animal scientists and linguists regarded the sounds that animals and humans make as markedly different. Language was considered to be something only humans possessed; supposedly it appeared de novo instead of evolving via natural selection. And animals were regarded as incapable of intentionally uttering any sound. Songs, barks, roars, whistles: These were involuntary responses to some stimulus, just as your knee jerks when your doctor taps it. But since the 1990s, the notion of language as a uniquely human skill has fallen by the wayside as researchers in genetics, neurobiology and ethology discover numerous links between animal vocalizations and those of humans.

Take grammar and syntax, the rules that determine how words can be combined into phrases and sentences. Most linguists still insist that animal calls lack these fundamental elements of language. But primatologists studying the vocalizations of male Campbell’s monkeys in the forests of the Ivory Coast have found that they have rules (a “proto-syntax,” the scientists say) for adding extra sounds to their basic calls. We do this, too. For instance, we make a new word, “henhouse,” when we add the word “house” to “hen.” The monkeys have three alarm calls: Hok for eagles, krak for leopards, and boom for disturbances such as a branch falling from a tree. By combining these three sounds the monkeys can form new messages. So, if a monkey wants another monkey to join him in a tree, he calls out “Boom boom!” They can also alter the meaning of their basic calls simply by adding the sound “oo” at the end, very much like we change the meanings of words by adding a suffix. Hok-oo alerts other monkeys to threats, such as an eagle perched in a tree, while krak-oo serves as a general warning.

Scientists have found—and decoded—warning calls in several species, including other primates, prairie dogs, meerkats and chickens. All convey a remarkable amount of information to their fellows. The high-pitched barks of prairie dogs may sound alike to us, but via subtle variations in tone and frequency a prairie dog can shout out a surprisingly precise alert: “Look out! Tall human in blue, running.” Or, “Look out! Short human in yellow, walking!”

Many animals use their calls to announce that they’ve found food, or are seeking mates, or want others to stay out of their territories. Ornithologists studying birdsong often joke that all the musical notes are really about nothing more than sex, violence, food and alarms. Yet we’ve learned the most about the biological roots of language via songbirds because they learn their songs just as we learn to speak: by listening to others. The skill is called vocal learning, and it’s what makes it possible for mockingbirds to mimic a meowing cat or a melodious sparrow, and for pet parrots to imitate their owners. Our dogs and cats, alas, will never say “I love you, too” or “Good night, sweetheart, good night,” no matter how many times we repeat the phrases to them, because they lack both the neural and physical anatomy to hear a sound and then repeat it. Chimpanzees and bonobos, our closest relatives, cannot do this either, even if they are raised from infancy in our homes.

Via vocal learning, some species of songbirds acquire more than 100 tunes. And via vocal learning, the chicks of a small parrot, the green-rumped parrotlet, obtain their “signature contact calls”—sounds that serve the same function as our names.

A few years ago, I joined ornithologist Karl Berg of the University of Texas at Brownsville at his field site in Venezuela, where he studies the parrotlets’ peeping calls. Although the peeps sound simple to our ears, Berg explained, they are actually complex, composed of discrete sequences and phrases. A male parrotlet returning to his mate at their nest, a hollow in a fence post, makes a series of these peeps. “He calls his name and the name of his mate,” Berg told me, “and then he’s saying something else. And it’s probably more than just, ‘Hi Honey, I’m home.’” Because the female lays eggs throughout the long nesting season, the pair frequently copulates. And so, Berg suspects that a male on his way home after laboring to fill his crop with seeds for his mate and their chicks is apt to call out, “I’ve got food, but I want sex first.” His mate, on the other hand, is likely hungry and tired from tending their chicks. She may respond, “No, I want to eat first; we’ll have sex later.” “There’s some negotiating, some conversation between them,” Berg said, “meaning that what one says influences what the other says next.”

Berg discovered that parrotlets have names by collecting thousands of the birds’ peeps, then converting them to spectrograms, which he subsequently analyzed for subtle similarities and differences via a specialized computer program. And how does a young parrotlet get his or her name? “We think their parents name them,” Berg said—which would make parrots the first animals, aside from humans, known to assign names to their offspring.

Parrotlets aren’t the only animals that have names (or to be scientifically accurate, signature contact calls). Scientists have discovered that dolphins, which are also vocal learners, have these calls, although the dolphins’ calls seem to be innate; the mothers aren’t naming their calves. And some species of bats have names, which they include when singing, and in other social situations.

Bats sing for the same reasons birds do: to attract mates and to defend territories. They’re not negotiating or conversing, but their lovelorn ditties are plenty informative nonetheless. After analyzing 3,000 recordings of male European Pipistrellus nathusii bats, for instance, a team of Czech researchers reported that the songs always begin with a phrase (which the scientists termed motif A) announcing the bat’s species. Next come the vocal signature (motifs B and C), information about the bat’s population (motif D), and an explanation about where to land (motif E).

“Hence, translated into human words, the message ‘ABCED’ could be approximately: (A) ‘Pay attention: I am a P.nathusii, (B,C) specifically male 17b, (E) land here, (D) we share a common social identity and common communication pool,’” the researchers wrote in their report.

Bohn suspects that the tunes of her bats at Uxmal convey the same type of information. “The guys are competing for females with their songs,” she says, “so they can’t afford to stop singing.” She doesn’t yet know what the females listen for in the voice of a N. laticaudatus, but expects that something in a male’s intonation or his song’s beat gives her clues about his suitability as a mate.

But her focus is on another question: Are these bats long-term vocal learners, as are humans and some species of birds, such as parrots? “If they are,” she explains, “then they might be a good model for studying the origins of human speech”—which would make bats the first mammal ever used for such research.

Bohn had earlier recorded some of the bats’ songs, and digitally altered these so that they sounded like the refrains of different bats—strangers. At the wall, she attaches a pair of microphones and a single speaker to a tripod, and points the equipment at the fissure, where the bats sing. Pushing a button on her laptop, she broadcasts the remixed bat songs to the tiny troubadours, who respond with even louder twitters, trills, and buzzes. Bohn watches their responses as they’re converted into sonograms that stream across her laptop’s screen like seismic pulses. These are territorial buzzes and contact calls, Bohn explains. “They know there’s an intruder.” She’s silent for a moment, and then beams. “Yes! One of the guys is trying to match the intruder’s call. He doesn’t have it exactly right, but he’s close—he’s so close, and it’s hard.”

But there it was: the first bit of evidence that bats are life-long vocal learners. Just like us.

The Ash Heap of Success

The Ash Heap of Success: As an expert witness in an international biotech patent suit, Professor Lenny Seligman finds his own research on trial.

Expert witnesses at contentious trials can expect to be challenged, even discredited. But when he took the stand last year in a complex biotech patent case, Pomona Biology Professor Lenny Seligman never anticipated that his groundbreaking work at Pomona would be relegated to the “ash heap of failure.”

That attack line echoed from start to finish during the high-stakes federal trial in Maryland between two rival companies in the cutting-edge field of genetic engineering. The dismissive salvo was fired in the opening statement by the attorney for Cellectis, a large French firm that filed suit for patent infringement against its smaller U.S. competitor, Precision BioSciences, which had hired Seligman for its defense.

Seligman was more than just an expert witness. His research at Pomona had become a cornerstone for the case. Both sides cited Seligman’s work as a basis for the science on which their businesses had been built. Ironically, the plaintiff then found itself in the awkward position of having to undermine the validity of his work. It did so by claiming he had not actually produced anything concrete in his college lab that would invalidate the firm’s far-reaching claims.

“I don’t hold that against him,” said the counselor. “This is very complicated technology. It does not surprise me that he wasn’t able to do it. What does bother me is Precision attempting to rescue his (work) from the ash heap of failure.”

Seligman left court that day thinking, “Ouch! Did he really say that?” When cross-examined by that same lawyer, Paul Richter, Seligman found an opportunity to sneak in a mild retort, saying on the stand, “That was not very nice.” Considering the attack still in store, the lawyer might have mused, “If you thought that was bad, wait until you hear my summation.”

In those final arguments, though, Seligman’s side fired back with outrage and eloquence. Following a week of mind-numbing technical testimony, David Bassett, an attorney for Precision, rebutted the now infamous line. The court reporter transcribed the original reference as “ashes of failure,” but Seligman and others clearly remember it as a heap, and that’s the phrase that stuck.

To say Seligman’s work belonged in the “ash heap of failure” was “as incorrect as it is offensive,” said Bassett. “To the contrary, Professor Seligman’s article represented a monumental success from a small lab at Pomona College where (he) does his research with undergraduate students, 18 to 22-year-olds. And it paved the way for companies like Cellectis and Precision to do their work. … The real difference is that Professor Seligman was teaching the world what he had done and hoping that others would follow his blueprint.”

In the end, Precision won the infringement case and Seligman’s work was vindicated. The attack strategy against the likeable professor’s little-lab-that-could appeared to have backfired.

“I think that statement bit them in the ass,” he said. “Because even the jury kind of cringed when the lawyer said it. I mean, that’s really aggressive. And then when they got to know the witness—what a sympathetic guy I am—it was like, why would you do that? You could have made the point without going for the jugular like that.”

Indeed, it may have been the professor’s disarming, down-home charm that won the day, as much as all the technical testimony about DNA and the enzymes called meganucleases. Beneath the complex science ran a compelling narrative that must have appealed to the federal jury empaneled in the district court of Delaware.

It was, in the end, a classic American underdog story.

The synopsis: Powerful and imperious European firm with raft of lawyers and battery of full-time scientists is defeated by scrappy U.S. start-up and its folksy professor with his one-man lab and part-time student assistants.

Seligman relished the role. In a PowerPoint presentation about the case, delivered recently to campus groups, he portrays the litigants as Team France v. Team USA. He uses slides to illustrate the uneven competition between the two companies and their dueling expert witnesses. For Cellectis, we see the flattering portrait of an award-winning genetics researcher from a big university. For the other side, we have Lenny Seligman, but the slide shows a picture of Homer Simpson.

The visual gets a big laugh.

A year after the verdict, Seligman still expresses astonishment when recalling the whirlwind experience of being a central figure in an intense international dispute about science. Interviewed in his office at Seaver Hall, where he presides as Biology Department chair, he also reflected on the awesome amounts of money circulating in science today, and what it means for those trying to teach and do research at a small, liberal arts college like Pomona.

“Part of me is happy with how things turned out,” he says. “At one level, it would have been great to be able to continue working on the I-CreI project without competition. However, we would never, at Pomona College, in my lab, have gotten to the point these two companies got to in five years. They were putting products out there, they were making enzymes that cut specific DNA sequences. It would’ve taken us so long to get there. So in the big picture, this is great. These companies are doing it, and they’re still graciously referencing our early work. It’s all good.

“We just have to find something new to do.”

Court and Class

Watching Seligman’s PowerPoint presentation about the case, posted online, gives viewers a flavor of his teaching style. He is engaging, enthusiastic and funny in a self-deprecating way. He’s also informal, standing casually at a podium with his shirttail hanging out and joking about wearing a suit only for court. But most importantly, he has a knack for explaining complex concepts to scientific novices, like college freshmen—or jurors.

The concepts in this case involved the business of protein engineering using meganucleases, which have been described as “extremely precise DNA scissors.” Scientists have developed ways to alter these naturally occurring enzymes and make them cut DNA segments at specific, targeted locations, with potentially lucrative uses in medicine and agriculture.

“Court is interesting because it’s kind of like a class,” Seligman says. “But it’s not like a class at Pomona where someone’s going to raise their hand and ask a question and stop you. When I’m in a class and I’m lecturing off-the-cuff and I can see that I’m losing students, I’ll stop and I’ll ask them certain questions. You can’t do that when you’re an expert witness, but you can still kind of get the visual cues. You still could get a sense that (the jurors) were with you, and I really felt that they were. They weren’t glazing over.”

Neither were the lawyers. They were ready to pounce on every word, eager to point out the smallest inconsistency or weakness. And Seligman was trying to make sure he didn’t slip up.

So there was no Homer Simpson on the witness stand. In court, Seligman’s easy-going, spontaneous classroom persona was restrained. The transcript of his testimony shows a witness who is cautious, serious and coldly factual. By then, he had been through hours of grueling depositions, and he knew the name of the game—Gotcha!

“Well, the whole idea (of pre-trial depositions) is for them to get a sound bite that they can use in trial,” he says. “So they ask questions really quickly. The thing that was hard, especially for someone who’s not a lawyer, is that they move from one aspect of the case to another, rapid-fire. … Your mind is over here and they’re trying to get you to slip up, so they can say to the jury, ‘But didn’t you testify that…?’”

Seligman pounds on his office desk to impersonate an intimidating attorney.

“I felt really guarded. In class, when I get a question and don’t know the answer, the first thing I say is, ‘I don’t know.’ And so that’s my default mechanism, because I’ll figure it out, and we’ll talk about it next lecture. But if you’re getting deposed, you can’t fall back on that answer because lawyers will shoot back, ‘You don’t know? Well, on page 285 of your third report, didn’t you write this?’” Here, his tone mocks a Perry Mason moment. “So you feel you have to be on your toes all the time, and really be thinking about everything you’ve ever written.”

At times, the legal wrangling was so contentious, even the judge sounded exasperated. During one testy confrontation, U.S. District Court Judge Sue L. Robinson threatened to give the lawyers “a time out,” like an angry parent with misbehaving kids.

Underneath, Seligman perceived a bitter dislike between the two companies. It was like a battle to the death. He speculates that Cellectis’s strategy was to put Precision, the much smaller firm, out of business, bankrupted by legal fees. So Precision could win the battle and still lose the war.

Call it the ashes of success.

“Cynically, a lot of us (supporting the U.S. company) thought this was all about trying to bleed them.”

Money and Science

The experience was not all cutthroat and high anxiety, however. Seligman also recalls the excitement of being swept up into the high-flying world of international business and high-priced corporate lawyers. He describes it with the wide-eyed wonder of a kid who grew up in Claremont and still uses the nickname he was given in kindergarten, rather than his full name, Maurice Leonard Seligman.

To Lenny, it was a thrill just being in New York for the deposition and looking out onto that breathtaking Manhattan skyline. He often punctuates his story with youthful expressions, like “awesome” and “oh, my gosh!” He breathlessly describes the “war room” where a battalion of lawyers in a suite of offices prepared for testimony. (“Oh, my gosh!”) And he recalls how lawyers worked through the night preparing challenges even to illustrations planned for court the next day, putting pressure on a graphics guy to create instant substitutions. (“Oh, my gosh!”)

“And you mix that with all this adrenaline and dread of being deposed—it was really exciting,” he says.

When it came to how much the defense paid him, the response might also be, “Oh, my gosh!” That pesky attorney made a point of making him divulge the fee in court: $400 an hour. “It was more money than I had ever made in a short amount of time,” he recalled in the interview. “It was a lot of money for me.”

The amount of money these companies dumped on this lawsuit raises larger concerns about the corrupting influence of big profits on basic research.

“The whole privatization of science is something that’s certainly to be looked at carefully,” agrees Seligman. “Did I ever think to put a patent out? I’m glad I didn’t, in retrospect. If somebody wanted to choke me like they tried to choke Precision, they would serve me and I would say uncle. There’s just no way I would have the resources to fight. But beyond that, there’s just something that’s really special about open science, where everyone is sharing everything and building on each other. And once it gets into the industry, it’s not open science. They’re protecting it. They’re hiding it until they get the patent issued.”

Bringing it all back home, Seligman sees implications for his future work at a small college. How can his little research lab compete with wealthy companies, often with ties to large universities?

“That’s what we worry about all the time in a place like Pomona College,” Seligman says. “You want to do interesting science, but it’s got to be small enough that you’re not doing the same thing that the big labs are doing because we don’t have the same resources.”

Focus on Students

Beyond doing good science, Seligman and his colleagues at liberal arts colleges have another mission to worry about—teaching undergraduates. In his own lab, he notes, research must also be a teaching tool, a training ground for future scientists. In this regard, he says, Pomona is in a perfect position to compete.

The work on meganucleases is a prime example. In the early days, before big money entered the fray, much of the research was being done by students at Lenny’s lab. Today, they all have their names—as full-fledged co-authors—on those important research papers that figured so prominently in the trial.

These were not graduate students or post-docs. They were undergraduates like Karen Chisholm ’01, Adeline Veillet ’03, Sam Edwards ’99 and Jeremiah Savage ’98, who co-authored Seligman’s pioneering 2002 paper, marking the first time researchers described making mutations in a meganuclease, called I-CreI, that altered the site where it cleaved DNA. Two years later, Steve Fauce ’02, Anna Bruett ’04 and Alex Engel ’01 co-authored another of Seligman’s key research papers, along with Dr. Ray Monnat of the University of Washington, where Seligman got his Ph.D. and did his first work on meganucleases as a post-doc in Monnat’s lab. Finally, in 2006, five other Pomona undergrads—Laura Rosen ’08, Selma Masri ’02, Holly A. Morrison ’04, Brendan Springstubb ’05 and Mike Brown ’07—co-authored a third paper in which new mutant meganucleases were described.

Many former students praise Seligman as a great mentor who inspired them to pursue science in graduate school. At least 10 of these 12 student co-authors went on to get doctorates in biological sciences or M.D.’s.

“He really fostered a good environment for learning and being productive,” recalls Morrison, who got her Ph.D. from UC Berkeley in molecular and cell biology. “He had several students in there at any one time, and everybody was really good about helping each other. It was not at all cutthroat competition. It was very much a supportive team mentality and there was also a camaraderie about it.”

Today, Seligman speaks about his former students as if they were his kids. He makes a point of mentioning them in his PowerPoint presentation, and even notes who got married and who just had a baby.

“We are so lucky to be a place that gets such great students,” he says. “It’s our job to work with them, to get them excited about science and keep them excited about it. I have no doubt they’re going to do really amazing things.

“And I’m going to sit back and smile.”


Code Blue

Code Blue: October 2013: The President's health care website is in cardiac arrest, threatening to drag his signature initiative down with it. Enter Mikey Dickerson '01...

Lunch was supposed to be casual. Mikey Dickerson ’01 was in Chicago catching up with Dan Wagner, a friend who’d been in the trenches with him on Barack Obama’s campaign for the presidency in 2012. Wagner had since gone on to found a company, Civis Analytics; Dickerson was a site reliability engineer at Google, one of the people who make sure that the search engine never, ever breaks down.

This was October of 2013, no time for the President’s geekiest loyalists to have a little fun. Healthcare.gov, the sign-up website that was the signature element of President Obama’s signature initiative, was a technological disaster. People couldn’t sign up even if they wanted to—the site would break, or fail. Delays were interminable. Information got lost. Customer service was about as good as you’d expect from a cable TV company. The Department of Health and Human Services, responsible for the new health care system, couldn’t seem to get it working.

“So, we got this phone call yesterday,” Wagner told Dickerson. “HHS is looking for help with healthcare.gov. Can I list you as an advisor or consultant?”

“Yeah, sure. If it’s any value to you, list me,” Dickerson replied. It seemed innocuous enough. Today, he smiles at his own naïveté. “I had no idea what I was getting into,” he says. About a week later, Dickerson found himself on a 5 a.m. conference call with a van full of technologists in Washington D.C., headed over to HHS. With him in the White House motor-pool car was Todd Park, the U.S. chief technology officer. And Park, whom Dickerson didn’t know, was selling the group as a team of experts who could solve any tech problem. Dickerson realized: They’re saying I can fix healthcare.gov.

Without really meaning to, Dickerson had become an anchor of the Obama administration’s “tech surge,” a Silicon Valley-powered push to fix the bugs in the healthcare.gov system. But the system was more than just software. In D.C., Dickerson and his new team found an organization in bureaucratic and technological meltdown, unable to execute what any e-commerce start-up would consider basic prerequisites for being in business.

The crazy part is, they fixed it.

To a Connecticut native like Dickerson, good at math and computers but with no desire to attend a big university, Pomona shows itself off pretty well—especially on a campus visit in May, when Dartmouth might still have slush on the ground. It’s not that he was so avid about computer science—in those days, as a major, CS really ran out of Harvey Mudd anyway—it’s just that Dickerson was an ace. He felt like he was cheating just a little. “It seemed dumb to be spending all that money on something I was already good at,” he says. In fact, Dickerson was already coding for various companies while in school. After graduation, he ended up working in Pomona’s computer lab.

Then the 2000 presidential election came around, with its photo finish in favor of George W. Bush. “It was a trauma for me,” Dickerson says. “That razor’s edge. All that was intensely painful. Almost anything would have moved those last 200 votes.” So in 2004 Dickerson volunteered with a poll-watching group … and caught the politics bug. Four years later he was working at Google, where CEO Eric Schmidt was (and remains) a multimillion-dollar Obama supporter. During campaign season an email went to a mass-distribution list that Dickerson was on, looking for people who could manage big databases for the Obama campaign.

Hey, Dickerson thought. I manage a group that runs large databases. And that was it. He worked as a volunteer in Chicago, one of a small group of techies who, during their long nights, idly wondered if maybe they could do something useful for the campaign with better records of people’s voting history. When the 2012 campaign came around, he was still on the campaign organizers’ list. This time, though, he was no newbie—though still technically a volunteer, his experience made him a trusted veteran. Those vague ideas about leveraging voter lists went into practice, and Dickerson’s group became the analytics team, credited by some political analysts as having been the key to Obama’s re-election. Once the campaign was over, Dickerson went back to managing a site reliability engineering team at Google, but he stayed in touch with his friends—which is why Dickerson was at lunch with Wagner on October 11.

The tech team’s first stop, in Virginia on October 17, was PowerPoint Hell. Technically, it was a large IT firm working as a government contractor. “They scheduled a three-hour meeting and sent a VP with, I shit you not, a 130-slide PowerPoint presentation,” Dickerson says. Over beers in a bar on San Francisco’s Embarcadero, about a block from Google’s offices, Dickerson wears the uniform of the coder—hoodie, Google ID badge, Google T-shirt, close-cropped hair and unshaven chin. In San Francisco, that’s stealth armor. In Washington’s blue-sports-coated, khaki-pantsed hallways, he was an alien.

The group fought its way out of the meeting and took over the office of someone who was on vacation. Then they went wandering, finding teams huddled in cubicles and asking them what they were working on, which bugs they were trying to fix. But mostly they weren’t working on anything—they were waiting for instructions. In their defense, it was hard to figure out what needed fixing. Engineers weren’t really allowed to talk to clients or users, and the people who created the healthcare.gov website hadn’t even built a dashboard, a way to monitor the health and status of their own system. If you wanted to know whether healthcare.gov was functioning, the only way to find out was to try to log on. “We thought this would be a targeted assessment and we’d spend a few days there,” says Paul Smith, another member of the team. “When we realized how bad things were, we just independently decided, we’re not going home. This is what we’re doing now, for an indefinite period of time, until it gets better.”

After a couple of days, Park asked them whether it could be fixed. “Todd, they have made all the mistakes that can be made,” Dickerson told him. “We can barely find a case where, when two decisions could be made, they made the right one. But low-hanging fruit isn’t the right metaphor. We’re stepping on the fruit.” The point was, some very simple fixes would yield some very big gains. Any improvement would be a massive improvement. Google site reliability engineers have a saying—they tell each other, if we have an outage that big it’ll be on the front page of The New York Times. Is that what you want? “But here’s the thing,” says Dickerson. “Healthcare.gov had been on the front page of The New York Times for four weeks. That was the silver lining. How much more could I screw it up?”

The group of coders decided that if no one was telling anyone what to do, they would. That’s when they started getting called “the Ad Hoc Team.” The name stuck. “We had a big stick, because we were the magical guys from the White House,” Dickerson says. “After a couple of days, we instituted a war room.” Every morning at 10 a.m., every team had to send a representative to a big meeting to explain what was going right, or wrong, and why. “It was an incredibly expensive thing to do—60 people in a room while we arbitrate disputes between two of them. But we made so much progress we stopped worrying,” Dickerson says. “Having a giant studio audience is better sometimes. It’s harder to say, ‘I didn’t do that because it wasn’t on my task order.’”

In other words, Dickerson had built into the system something no one had thought of: accountability. “What Mikey really excelled at was, if there’s a priority issue that needs to be addressed, how can people address it? What do they know? What do they need to know? What’s blocking them?” says Smith. “That’s just his demeanor and the way he operates.” The meetings were so productive and making so much of a difference in site performance that the Ad Hoc Team instituted a second one, making them twice a day, seven days a week.

When they weren’t in the war room, they coded. Problems started getting solved. A stupid little flaw that required the same kind of wait to connect to the database every time went away with the change of a couple of configuration settings, and poof! An eight-second response delay dropped to a two-second delay. “And that’s still terrible,” Dickerson says. The site stopped crashing. People actually started signing up for health care.

The work took a toll, though. Except for a quick trip back to California to pick up some clothes—Dickerson had come to the East Coast with a carry-on bag and a Google computer, expecting a short visit—he was in the greater D.C. area from mid-October through Christmas. Dickerson estimated he ran 150 war-room meetings in a row.

After a couple of moves to accommodate bureaucracy, Dickerson ended up working remotely, alone, from an operations center in Columbia, Md.—about a half-hour outside D.C., in what locals sometimes call “spook valley” for its preponderance of government contractors. Since healthcare.gov’s original creators hadn’t built a ship-in-a-bottle version of the software to test updates and fixes, everything the Ad Hoc Team fixed had to get changed on the live site, and the primary maintenance window was when traffic was lightest, between 1 and 5 a.m. “It was literally 20-hour days a lot of the time,” Dickerson says. “I was hallucinating by the end, hearing things.”

With 12 days left before the deadline, Dickerson was ready to go home. He gave a speech listing the five mission-critical things remaining, and attempted to flee back to California. But the bosses panicked. The Ad Hoc guys can’t go home, they said. They gave him the service-to-your-country pitch. They begged. So Dickerson agreed to stay through to the end—with some conditions. He got to set the specific technical goals for what his team and the rest of the government coders would do. And he got to hire whomever he wanted, without arguing the point. He wanted to be able to trust the new team members, so he chose them himself. Eventually a rotating team of Google site reliability engineers started coming through to keep the project on track.

Dickerson got to dictate those terms because he was getting results. He had become indispensable. “Mikey is an incredible talent who was seemingly built in a lab to help fix healthcare.gov,” Park says. “It’s not just the fact that he’s got a sky-high tech IQ, honed over years as a star site reliability engineering leader. He’s also got tremendous EQ, enabling him to step into a tough situation, mesh well with others, and help rally them to the job at hand.”

The real bummer, of course, is that healthcare.gov, while an unprecedented attempt to link government services, private insurers and identity verification, shouldn’t have been that hard to build. “It’s basically a distributed, transactional, retail-type website, and we’ve been building those for years,” says Smith. “In the private sector, we know how to do that. We’re not forging new computer science ground here, right?”

By April of 2014, just a few days after Dickerson and I spoke, the Obama administration announced that over 7 million people had signed up for private health care through federal and state exchanges, and 3 million had signed up for Medicaid. The program had made its numbers—barely, to be sure—because people, in the end, could actually use the website.

Dickerson is back at Google, but as he says, “you can never unsee the things you see in the federal government.” He has become an outspoken advocate for reform in the ways government builds technology, concentrating especially on trying to convince young technologists to go work for government. “You’re gonna eat free food and drink free soda in micro-kitchens and work on another version of what we’ll say, for argument’s sake, lets people share pictures of what they ate for breakfast, and tens of thousands of people will die of leukemia because we couldn’t get a website to work,” Dickerson says. “These are real people’s lives that will end in 2014, and you’re going to sit at your desk working on picture sharing.”

The problem isn’t competence. People who work on websites for the government are every bit as competent as the ones who work at Google or Facebook. “The mechanisms by which you do a contract with the federal government are so complex that it requires expertise in and of itself,” says Jennifer Pahlka, founder and executive director of Code for America, a group that connects software developers with local governments. “Fundamentally the process in government has evolved to meet government needs. A federal project has dozens of stakeholders, none of whom represent the user.”

That’s why Code for America focuses on local governments, Pahlka says. The feds are too hard to crack, and anyway, most people’s interactions with government are at the state and city level—think DMV, local parks, or trash pick-up. So Dickerson has started stumping for Code for America, giving speeches at their events. And he is lobbying Eric Schmidt and his other bosses at Google to develop programs that would allow—maybe even encourage—software developers there to take time to work on government projects. Consider: The feds paid $700 million for healthcare.gov, and it didn’t work. Imagine being able to bid for that contract at a tenth the price. “I don’t have to appeal to your altruism or desire to serve your country,” Dickerson says. “I can just say, ‘Do you want to make a ton of money?’”

Pahlka thinks the pitch might actually work—and not just because of capitalism. “The consumer internet has influenced the way a generation feels about doing things together,” she says. “You have a generation of people who value collective intelligence and collective will—not necessarily collective political will, but the ability to actually do things together.” Software designers and engineers are already political, Pahlka and Dickerson are saying; it’s just that the web generation is ignoring the greater good. Going to work at Twitter is a political choice just as much as going to work for the Department of Veterans Affairs.

“I give the worst sales pitch,” Dickerson says. “I tell people, ‘This is what your world is going to be like: It’s a website that is a Lovecraft horror. They made every possible mistake at every possible layer. But if you succeed, you will save the lives of thousands of people.’”

The weird part: Almost everyone says yes.

————-

EDITOR’S NOTE: Shortly before this magazine went to press, Dickerson announced that he’s going to practice what he preaches, full time. He is leaving Google to join the Obama administration as administrator of the U.S. Digital Service, a newly created office overseeing government spending on information technology. And after signing on, he discovered that the lead designer on the initial staff for U.S.D.S. is another Pomona grad, Mollie Ruskin ’08.

The Code of Beauty, the Beauty of Code

class Program
{
    public static void Main()
    {
        System.Console.WriteLine("Hello, world!");
    }
}

Even if you’re the kind of person who tells new acquaintances at dinner parties that you hate email and e-books, you probably recognize the words above as being some kind of computer code. You may even be able to work out, more or less, what this little ‘program’ does: it writes to the console of some system the line ‘Hello, world!’

A geek hunched over a laptop tapping frantically at the keyboard, neon-bright lines of green code sliding up the screen—the programmer at work is now a familiar staple of popular entertainment. The clipped shorthand and digits of programming languages are familiar even to civilians, if only as runic incantations charged with world-changing power. Computing has transformed all our lives, but the processes and cultures that produce software remain largely opaque, alien, unknown. This is certainly true within my own professional community of fiction writers—whenever I tell one of my fellow authors that I supported myself through the writing of my first novel by working as a programmer and a computer consultant, I evoke a response that mixes bemusement, bafflement and a touch of awe, as if I’d just said that I could levitate. Most of the artists I know—painters, film-makers, actors, poets—seem to regard programming as an esoteric scientific discipline; they are keenly aware of its cultural mystique, envious of its potential profitability, and eager to extract metaphors, imagery and dramatic possibility from its history, but coding may as well be nuclear physics as far as relevance to their own daily practice is concerned.

Many programmers, on the other hand, regard themselves as artists. Since programmers create complex objects and care not just about function but also about beauty, they are just like painters and sculptors. The best-known assertion of this notion is the essay ‘Hackers and Painters’ by programmer and venture capitalist Paul Graham. ‘What hackers and painters have in common is that they’re both makers. Along with composers, architects and writers, what hackers and painters are trying to do is make good things.’

According to Graham, the iterative processes of programming—write, debug (discover and remove bugs, which are coding errors, mistakes), rewrite, experiment, debug, rewrite—exactly duplicate the methods of artists: ‘The way to create something beautiful is often to make subtle tweaks to something that already exists, or to combine existing ideas in a slightly new way … You should figure out programs as you’re writing them, just as writers and painters and architects do.’ Attention to detail further marks good hackers with artist-like passion:

All those unseen details [in a Leonardo da Vinci painting] combine to produce something that’s just stunning, like a thousand barely audible voices all singing in tune. Great software, likewise, requires a fanatical devotion to beauty. If you look inside good software, you find that parts no one is ever supposed to see are beautiful too.

This desire to equate art and programming has a lengthy pedigree. In 1972, the famed computer scientist Butler Lampson published an editorial titled ‘Programmers as Authors’ which began:

Creative endeavor varies greatly in the amount of overhead (i.e. money, manpower and organization) associated with a project which calls for a given amount of creative work. At one extreme is the activity of an aircraft designer, at the other that of a poet. The art of programming currently falls much closer to the former than the latter. I believe, however, that this situation is likely to change considerably in the next decade.

Lampson’s argument was that hardware would become so cheap that ‘almost everyone who uses a pencil will use a computer,’ and that these users would be able to use ‘reliable software components’ to put together complex programs. ‘As a result, millions of people will write non-trivial programs, and hundreds of thousands will try to sell them.’

A poet, however, might wonder why Lampson would place poetry making on the same spectrum of complexity as aircraft design, how the two disciplines—besides being ‘creative’—are in any way similar. After all, if Lampson’s intent is to point towards the future reduction of technological overhead and the democratization of programming, there are plenty of other technical and scientific fields in which the employment of pencil and paper by individuals might produce substantial results. Architecture, perhaps, or carpentry, or mathematics. One thinks of Einstein in the patent office at Bern. But even the title of Lampson’s essay hints at a desire for kinship with writers, an identification that aligns what programmers and authors do and makes them—somehow, eventually—the same.

Both writers and programmers struggle with language. The code at the beginning of this chapter is in Microsoft’s C#, one of thousands of high-level programming languages invented over the last century.

Each of these is a ‘formal language,’ a language ‘with explicit and precise rules for its syntax and semantics,’ as the Oxford Dictionary of Computing puts it. Formal languages ‘contrast with natural languages such as English whose rules, evolving as they do with use, fall short of being either a complete or a precise definition of the syntax, much less the semantics, of the language.’ So these formal dialects may be less flexible and less forgiving of ambiguity than natural languages, but coders—like poets—manipulate linguistic structures and tropes, search for expressivity and clarity. While a piece of code may pass instructions to a computer, its real audience, its readers, are the programmers who will add features and remove bugs in the days and years after the code is first created.

Donald Knuth is the author of the revered magnum opus on computer algorithms and data structures, The Art of Computer Programming. Volume 3 of the Art was published in 1973; the first part of Volume 4 appeared in 2011; the next part is ‘under preparation.’ If ever there was a person who fluently spoke the native idiom of machines, it is Knuth, computing’s greatest living sage. More than anyone else, he understands the paradox that programmers write code for other humans, not for machines: ‘Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.’ In 1984, therefore, he famously formalized the notion of ‘literate programming’:

The practitioner of literate programming can be regarded as an essayist, whose main concern is with exposition and excellence of style. Such an author, with thesaurus in hand, chooses the names of variables carefully and explains what each variable means. He or she strives for a program that is comprehensible because its concepts have been introduced in an order that is best for human understanding, using a mixture of formal and informal methods that reinforce each other.  
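
To see the difference in spirit, consider a small hypothetical sketch in C#—an invented illustration, not an example drawn from Knuth—of the same loan calculation written twice: once as a bare formula, and once in a literate vein, with names and comments carrying the explanation.

using System;

class LoanCalculator
{
    // Terse version: correct, but the reader must reverse-engineer the intent.
    static double F(double p, double r, int n) => p * r / (1 - Math.Pow(1 + r, -n));

    // Literate-minded version: the names and the comment explain the formula.
    // Monthly payment on a fixed-rate loan (the standard annuity formula):
    //   payment = principal * monthlyRate / (1 - (1 + monthlyRate)^-numberOfPayments)
    static double MonthlyPayment(double principal, double monthlyRate, int numberOfPayments)
    {
        double discountFactor = 1 - Math.Pow(1 + monthlyRate, -numberOfPayments);
        return principal * monthlyRate / discountFactor;
    }

    static void Main()
    {
        // A $200,000 loan at 0.5 percent per month, paid over 360 months.
        Console.WriteLine(F(200_000, 0.005, 360));               // works, but says nothing
        Console.WriteLine(MonthlyPayment(200_000, 0.005, 360));  // same number, legible intent
    }
}

Both versions compute the same number; only the second tells a later reader what the numbers mean, which is closer to what Knuth means by writing for human beings first.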

Good code, then, is marked by qualities that go beyond the purely practical; like equations in physics and mathematics, code can aspire to elegance. Knuth remarked about the code of a compiler that it was ‘plodding and excruciating to read, because it just didn’t possess any wit whatsoever. It got the job done, but its use of the computer was very disappointing.’

To get the job done—a novice may imagine that this is what code is supposed to do. Code is, after all, a series of commands issued to a dumb hunk of metal and silicon and plastic animated by electricity. What more could you want it to do, to be? Knuth answers: code must be ‘absolutely beautiful.’ He once said about a program called SOAP (Symbolic Optimal Assembly Program) that ‘reading it was like hearing a symphony, because every instruction was sort of doing two things and everything came together gracefully.’

We are now unmistakably in the realm of human perception, taste and pleasure, and therefore of aesthetics. Can code itself—as opposed to the programs that are constructed with code—be beautiful? Programmers certainly think so. Greg Wilson, co-editor of Beautiful Code, an anthology of essays by programmers about ‘the most beautiful piece of code they knew,’ writes in his foreword to that book:

I got my first job as a programmer in the summer of 1982. Two weeks after I started, one of the system administrators loaned me Kernighan and Plauger’s The Elements of Programming Style … and Wirth’s Algorithms + Data Structures = Programs. … [These books] were a revelation—for the first time, I saw that programs could be more than just instructions for computers. They could be as elegant as well-made kitchen cabinets, as graceful as a suspension bridge, or as eloquent as one of George Orwell’s essays.

Knuth himself is careful to limit the scope of his aesthetic claims: ‘I do think issues of style do come through and make certain programs a genuine pleasure to read. Probably not, however, to the extent that they would give me any transcendental emotions.’ But in the many discussions that programmers have about craftsmanship, elegance and beauty, there is an unmistakable tendency to assert—as Wilson does—that code is as ‘eloquent’ as literature. …

The day that millions will dash off beautiful programs—as easily as with a pencil—still remains distant. The ‘lovely gems and brilliant coups’ of coding remain hidden and largely incomprehensible to outsiders. But the beauty that programmers pursue leads to their own happiness, and—not incidentally—to the robustness of the systems they create, so the aesthetics of code impact your life more than you know.

This excerpt from Geek Sublime: The Beauty of Code, the Code of Beauty (Graywolf Press), by Vikram Chandra ’84, is published with permission of the author. In his first venture into nonfiction, the noted novelist roams from logic gates to the writings of 11th-century Indian philosopher Abhinavagupta, in search of connections between the worlds of art and technology.

Photos accompanying this excerpt are from the Spring 2014 Hackathon held at Pomona College and are by John Lucas.

The Island of California

The Island of California: Examine the original 17th- and 18th-century maps of the New World at Honnold-Mudd Library and you'll find an array of creative geography. But there's one point on which all seem to be in agreement: California was an island.

 

1600s map of California from Honnold-Mudd Library Special Collections.

Somehow it seems fitting that the story of California should begin with a fabulous tale about a mythical island.

Both the island and the myth, along with the state’s future name, seem to have sprung first from the pen of Spanish writer Garci Ordóñez de Montalvo, whose lavish romantic novel Las Sergas de Esplandián (The Deeds of Esplandián), published around 1510, described a race of griffin-riding Amazons living in a far-off realm rich in gold and precious stones—“an island on the right hand of the Indies … very close to the side of the Terrestrial Paradise.” He dubbed this imaginary isle California, a name that may have been constructed from Latin roots meaning “hot oven.”

So, right from the start, California was portrayed as isolated, rich, strange, adventurous, bigger than life, sunburned and next door to Paradise. Is this starting to sound familiar?

The real California—the Baja part—was first discovered by Europeans in 1533 by an expedition commissioned by Hernán Cortés, the Spanish conqueror of Mexico. Sailing west from the Mexican mainland, the crew set ashore on what they believed to be an island. After their shore party was slain in a clash with the inhabitants, the survivors returned to the mainland with tales of an island full of pearls and other riches.

No one knows exactly when or where place and name actually came together, but at some point in the ensuing years of failed colonization, someone—probably some conquistador familiar with Montalvo’s tale and eager to believe in its treasures—gave the presumed island its suitably mythic name.

Here’s where things get a bit strange. Through the rest of the 1500s and early 1600s, the few surviving maps depicted the west coast of North America as a continuous line and Baja California as a peninsula. Then, in the early 1600s, the supposed island of California suddenly returned to the scene, apparently firing the imagination of mapmakers across Europe. For more than a century thereafter, California would be depicted as a huge, rugged outline separated from the west coast of the North American mainland by a narrow strait.

Perhaps the most intriguing thing about maps from this period is that the truth was already known by the time they were made. As early as 1539, one of Cortés’s lieutenants, Francisco de Ulloa, sailed north and confirmed that the so-called island was actually a peninsula, and by the mid-1600s, the geographic facts of the place had been pretty clearly established by its Spanish masters. So why did the island of California resist reattachment to the mainland for so long?

One practical reason may be that the people most familiar with the actual place weren’t making the maps. In the 16th and 17th centuries, the Spanish held sway over much of western North America. Most of the surviving maps from this period, however, were drawn by cartographers in Venice, Paris, Amsterdam and London. These maps were meant for public consumption, so they needed to appeal to the romantic notions of the time. Meanwhile, Spanish mapmakers were drawing their maps behind closed doors to be used by actual navigators, and Spanish officials, jealous of their secrets and worried about foreign intrusions into their New World possessions, had good reason to keep them under wraps—or even to encourage misinformation.

Historian Dora Beale Polk blames the voyage of the famous English explorer (and gentleman pirate) Sir Francis Drake into Pacific waters in 1578 for the myth’s 17th-century revival. Confused stories about Drake’s exploits along the west coast shores seem to have lent new strength to the notion that there was a continuous strait separating those lands from the continent.

But by the beginning of the 18th century, the only remaining prop for this geographical blunder seems to have been the persistence of myth. Mapmakers who should have known better still clung to the diminishing evidence that California was an island. Perhaps they were so enthralled by the notion of California as a strange and magical place—a place that simply felt more suitable as an island—that they couldn’t bring themselves to accept the more pedestrian truth.

A lot has changed, of course, since those maps were made. The California of the 1600s was eventually subdivided into three huge, modern states, one north of the border and two south of it. Here in the United States, the 31st state became the biggest, most populous, most diverse, and, in many ways, most controversial state in the Union.

And yet, as a metaphor, the island of California still feels eerily appropriate, even today. Maybe because there’s so much truth in it. After all, as a bio-region, California has been termed an “island on the land,” isolated from the rest of the continent by such natural barriers as deserts and mountain ranges. And from an economic standpoint, the state is frequently described as if it were a separate nation. (With last year’s economic surge, California reportedly regained its theoretical place as the eighth largest national economy in the world, just behind the United Kingdom and Brazil and just ahead of Russia and Italy.)

Perhaps most importantly, California continues to occupy a place in the cultural life of our nation that sets it apart. Admired by some as a place of innovation and a harbinger of national change and decried by others as a narcotic in the body politic, intoxicating the rest of the country with its crazy ideas, the state seems to inspire in Middle America just about every emotion except apathy.

In 1747, Ferdinand VI of Spain issued a royal proclamation declaring: “California is not an island.” That may have helped bring an end to the literal vision of California as an enchanted isle, but the idea of California as a quasi-myth—a strange and wonderful place in the distant west where venturesome souls might go to find adventure or wealth or simply a spot in the sun—was just getting started.

 

The California We Came To

Sometimes I wonder how it would have gone if this country had been settled backwards, west to east—if those doughty Pilgrims, huddled praying among the ship’s creaking timbers, had anchored not in the crook of Cape Cod but, say, in San Diego Bay. Would the relentless push that drove us westward have driven us eastward just as fast? Would the Eastern Seaboard have seemed as manifestly destined as the West Coast (which somehow never seems like a “seaboard”) once did? What would all that light and warmth have done to the iron in the Puritan soul? Would the Atlantic states—coming late into our consciousness—have seemed enchanted?

As it is, things have worked out nicely. California’s climactic geography came last, a necessary if unanticipated coda to what is often called the American experience. Knowing the end of the story—so far—makes it easy to grasp how incomplete this country would have felt without California, the volatile edge, it seems, of all our national imaginings. For much of the way westward, the story was about settling down, finding a homestead and improving it. But California was never really about settling down. Its very geology is transient. This is where you file a claim on the future and hope that events don’t overtake you.

 It’s hard to live in the everyday way and sustain a mythic consciousness. That’s what I learned going to high school and college in California. My family had crossed into the state over the Sierras, sluiced down 80 into Sacramento. In California, I expected to find a transubstantiated landscape glimmering with intimations of Pacific immortality, and I expected to be transubstantiated in turn. What can I say? We had come from Iowa, and I was 14, an age when the mere house-ness of the house we chose to live in and the car-ness of the car we drove seemed strangely disappointing. I hadn’t expected to find so much ordinariness on display. It seemed as though the Californians who already lived here had lost the magical sense that they were in California. And then I lost it too. It faded away like the San Gabriel Mountains after a hot autumn week without a breeze.

 None of us would get much done if we regularly inhabited a mythic consciousness, and the traffic would be so much worse. So much of life seems to require an ordinary perspective, the sameness and familiarity of the normal. There are times when Southern California seems like a vast machine engineered to produce endless quantities of the ordinary. And yet, from time to time, California rises up and smites you, and you find yourself re-dazzled. It may be the sun sinking out at the end of the 10—the “subtropical twilights,” as Joan Didion put it—or a day of purifying desert light. It may be chimney-stacks swaying slightly in a minor earthquake or the sight of the kelp-matted inshore, out beyond which the gray whales move. It may be nothing more than the scent of rain on asphalt in a dry winter. It hardly matters what it is. You look up, look around, and see, again, what an extreme and beautiful place this is, where the continent crumbles and slips and subducts and the weather blows in from the Pacific and the mountains seem like a temporary arrangement, just waiting to slide down into the Inland Empire.

 Over the past decade, I’ve come to Claremont and Pomona College every couple of years to teach. I always drive out from my home in New York because I always want to come into California from the great emptiness of Arizona or Nevada. It’s a strange sensation, familiar to nearly everyone who comes this way. You seem to get farther and farther west—to get more and more western—and then you cross into California and the very meaning of “west” changes. You have to look pretty hard to find the “west” in California that’s continuous with the west in, say, Elko, Nevada. But that’s one reason I like California so much—I keep discovering ways in which it’s discontinuous with anywhere else, discontinuous perhaps especially with itself.

  I settle in and remember what January smells like in Southern California. The place I left seems unimaginable, part of an old world that seems to contain everything but California. And I wonder again how it would have gone if it had all gone differently. What we have now are the myths that arise historically from the California we came to, not the California we came from. That makes all the difference, as the Pilgrims discovered in their own way and on their own coast.

January 2014