
Written by Erica Klarreich
Illustrated by Cathy Genetti Reinhard

The first digital computer weighed as much as six elephants and filled a room big enough to hold 20 of them. Experts predicted in 1949 that someday an equally powerful computer would weigh only as much as an automobile, and be as small. That prediction turned out to be just a little conservative. Half a century later, computers 1,000 times more powerful fit into the palm of the hand.

Now, scientists speak of supercomputers the size of a teardrop. They envision a microscopic machine that could sidle up to a bacterium to decide what type it is, and a hand-held computer smart enough to chat with you as if there really were a little man inside. They talk about being able to do the work of all the computers in existence today on less energy than a single light bulb consumes. And they are dreaming not of the distant future, but of ten or twenty years from now.

To accomplish these goals, researchers are coaxing atoms to assemble spontaneously into computer components one millionth the size of those in use today. “We’re talking about devices so small, you could put 10 billion of them on the top of a grain of salt,” says James Ellenbogen, head of the Nanosystems Group at the MITRE Corporation, a non-profit research organization in McLean, Virginia. “Right now, a Pentium chip in a brand new computer has only about 10 million devices in the space of a postage stamp.” To find the building blocks for such tiny machines, scientists are looking beyond the device that has been the basis for computing for more than four decades, the silicon transistor.

All the calculations that go on inside a Pentium chip today rest on two simple ideas: that information can be encoded into long strings of ones and zeros, and that these ones and zeros can be represented physically by switches called silicon transistors—when a transistor is open, it represents a one, and when it is closed, it represents a zero. By linking transistors together into complicated circuits, engineers build machines that can transform information in a dazzling variety of ways.
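Those two ideas can be made concrete with a short illustrative sketch in Python (a toy model, not how a chip actually stores data): text becomes a string of ones and zeros, and each bit stands for the open or closed state of one switch.

```python
# Illustrative sketch: encode text as the 0/1 pattern a bank of switches would hold.
def to_bits(text):
    """Turn a string into a list of ones and zeros (8 switches per character)."""
    return [int(b) for ch in text.encode("ascii") for b in format(ch, "08b")]

def from_bits(bits):
    """Read the switch states back out as text."""
    chars = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return bytes(int("".join(map(str, c)), 2) for c in chars).decode("ascii")

bits = to_bits("Hi")
print(bits)             # 'H' is 01001000, 'i' is 01101001
print(from_bits(bits))  # Hi
```

Everything a Pentium does, from arithmetic to graphics, is ultimately this kind of shuffling of switch states.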

Recently, though, rumblings have arisen within the computing industry that the silicon transistor is nearing the end of its heyday. The transistor is the great success story of modern technology, the force that has propelled the computer from a laboratory curiosity into the indispensable tool of modern society. Its strength lies in its capacity for miniaturization. During the last forty years, the number of transistors that engineers can fit on a silicon chip has doubled every 18 to 24 months. This rapid doubling rate is the famous Moore’s law, first observed in the 1960s by Gordon Moore, one of the founders of Intel Corp.
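The power of that doubling rate is easy to underestimate. A rough compound-growth sketch (the 2,300-transistor starting point is the Intel 4004 of 1971, used here as an assumed baseline) shows how quickly Moore's law compounds:

```python
# Rough sketch of Moore's law: doubling every 18-24 months compounds fast.
def transistors(start, years, doubling_months):
    """Device count after `years`, doubling every `doubling_months` months."""
    return start * 2 ** (years * 12 / doubling_months)

# Starting from ~2,300 transistors, doubling every 24 months:
for years in (10, 20, 30):
    print(years, "years:", f"{transistors(2300, years, 24):,.0f}")
```

Thirty years of doubling every two years turns a few thousand switches into tens of millions, which is roughly the leap from the earliest microprocessors to a Pentium.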

But this breathtaking advance in computer power has not come cheap. To make a computer chip, manufacturers etch vastly complicated circuits onto a silicon wafer. The circuits must be virtually perfect if the chip is to function—no easy matter when they involve 10 million switches. As engineers try to fit exponentially more transistors onto a chip, the price of manufacturing chips is also growing exponentially (a second, lesser-known observation of Moore). It now costs $2.5 billion to build a single chip factory, and by 2012, the cost is expected to soar to somewhere between $30 billion and $50 billion. Continue this trend another fifty years, experts say, and the cost of a single factory will outstrip the annual gross product of the entire world.

Even more serious, as transistors are getting smaller, engineers are running up against a fundamental physical obstacle: Key layers of insulation are getting thinner and thinner. Within the next decade, transistors are projected to shrink to 100 nanometers (billionths of a meter) on a side. At that scale, the insulation will be so thin that it will not be able to prevent electrons from leaking through.

The silicon industry has come up with a tentative roadmap to maintain the exponential shrinking of transistors through 2012. But after that, what next? “If you want to carve up a chip into a trillion little pieces, each a nanometer across, and they all have to be the same size, you rapidly begin to wish that there were God-given nanometer-scale structures,” Ellenbogen says. “One day you stop and say, ‘Wait a minute, I’ve got an epiphany: there are God-given nanometer-scale structures. They’re called molecules.’”

Building a single-molecule computer component entails working on a truly Lilliputian scale—a nanometer is just about 10 atoms in size. In spite of the difficulty of such a task, several approaches are beginning to bear fruit. Two research teams have managed to build tiny wires and single-molecule switches, and are working on novel ways to sculpt arrays of these structures into complicated circuits. A third team has decided to do away with switches altogether, and is trying to represent ones and zeros instead by patterns of electrons sitting on tiny metallic dots.

These efforts to build ‘nanocomputers’ come none too soon, says R. Stanley Williams, senior principal laboratory scientist at Hewlett-Packard Laboratories in California.

“It’s obvious that in another decade, computing is going to involve the nano-scale world,” Williams says. “To be on track for that, we have to start doing the basic science now, looking 10 or 20 years into the future.”

It’s a small world, after all

While the transistor hurtles toward its limit, Williams, with colleagues Philip Kuekes at Hewlett-Packard and James Heath and Fraser Stoddart at UCLA, is preparing a new army of wires and switches to spring into action. They have already overcome the first few hurdles. Two years ago, they built a single-molecule switch, just a couple of nanometers wide, consisting of a dumbbell-shaped component with an interlocked ring. When the molecule, called a rotaxane, sits between two wires, electrons flowing through one wire can hop onto the rotaxane and across to the other wire. But if a voltage is applied to the rotaxane, the interlocked ring slides to a new position on the dumbbell and blocks the electrical current.

Once the ring slips to its new position, it can’t be moved back, making the rotaxane molecule a single-use switch. But since discovering the rotaxane switch, the researchers have come up with a dozen more molecular switches, including reversible ones. “It’s getting better by the month,” Stoddart says.

Making the switches is a blend of technological precision and alchemical magic: Start with a beakerful of one compound, toss in some of another, add a pinch of a third, and salt to taste, until you have a molecule with the desired properties. Coming up with the right recipe takes ingenuity and patience. But once the formula is found, it is a fairly easy matter to produce large quantities of the switches. “In a factory,” Ellenbogen says, “you could produce pounds for just pennies.”

A pound doesn’t sound like much material for building a new generation’s computers. But it contains a mind-boggling number of molecules. A single drop of water, for example, contains a billion trillion water molecules—more than the total number of silicon transistors manufactured in the past forty years. “One batch of molecules could last you through eternity,” Williams says.
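The drop-of-water figure checks out on the back of an envelope. A sketch of the arithmetic (the 0.05-milliliter drop size is an assumption; the rest is standard chemistry):

```python
# Back-of-envelope check of the "billion trillion molecules per drop" figure.
AVOGADRO = 6.022e23       # molecules per mole
DROP_ML = 0.05            # a typical drop of water, ~0.05 mL (assumed)
MOLAR_MASS_WATER = 18.0   # grams per mole

grams = DROP_ML * 1.0                  # water is ~1 g/mL
molecules = grams / MOLAR_MASS_WATER * AVOGADRO
print(f"{molecules:.1e}")              # on the order of 1e21, a billion trillion
```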

Last year, Williams’ team figured out a simple way to make wires on the nanometer scale. The idea is to start with a smooth sheet of silicon, in which the atoms are arranged in a neat rectangular lattice, then deposit some of the metallic element erbium. Erbium reacts with silicon to form erbium disilicide, a chemical with two silicon atoms for each erbium atom. The two silicon atoms in an erbium disilicide molecule prefer to sit a certain fixed distance from each other; this distance matches the spacing between silicon atoms in one direction on the rectangular sheet, but not the other direction. Accordingly, the erbium atoms on the sheet bond with silicon atoms in the preferred direction on the sheet, lining up into perfectly spaced wires. “I’m ashamed we didn’t think of it years before,” Williams says.

But it is one thing to construct the building blocks of molecular circuitry, and another to combine them into the intricate arrangements of ‘logic gates’ that enable a computer to calculate. Putting the switches and wires into place one at a time simply isn’t an option—“You don’t have enough seconds in your life to push them all around,” Ellenbogen says. Experts agree that any computer design that will take full advantage of the minuscule size of these devices must involve self-assembly, in which molecules fall into place through chemical processes. The erbium disilicide wires are a prime example of this technique. Chemists also can make molecular switches attach to a metallic surface, by using what are called ‘alligator clips’—two-ended molecules that like to glue themselves to a switch on one end and the metallic surface on the other end. When chemists dip the surface into a solution in which the switches and clips are swimming, the clips attach to the surface like burrs latching onto fabric, dragging the switches along with them.

Scientists are left with one crucial question: What is the right arrangement of wires and switches? The ideal architecture must satisfy several requirements. First, it must involve a configuration that chemists can make through self-assembly. Second, it must have the potential to be molded into a complex arrangement of logic gates. And third, it must be able to function even with defective components. Because chemical assembly inevitably produces a few flawed molecules, scientists can’t hope to achieve the level of perfection demanded by conventional computer architectures. “One hundred square millimeters can hold a trillion molecular devices in a checkerboard layout,” Ellenbogen says. “You could have just a one-millionth failure rate, and still if you have a trillion devices, you will have a million flawed devices in that space.”
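Ellenbogen's arithmetic is worth spelling out, because it shows why defect tolerance is non-negotiable at this scale:

```python
# Even a superb one-in-a-million defect rate leaves a million bad devices.
devices = 10 ** 12        # a trillion molecular devices in 100 square millimeters
failure_rate = 1e-6       # one-in-a-million flawed molecules
flawed = devices * failure_rate
print(f"{flawed:,.0f} flawed devices")  # 1,000,000
```

No conventional chip could ship with a million broken switches; a molecular architecture has to route around them.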

Williams’ team hopes it has found the solution to this problem. Computer chips today are built like family trees, with information passed down from one generation of wires to the next. But a tree structure is not viable for a system prone to defects, because if one switch breaks, all the wires further along get blocked—the same way a major highway accident blocks access to all exits beyond the accident. Williams and his colleagues propose to replace the tree architecture with a criss-cross array of wires, like a city grid, with switches at the intersections. That way, if a switch is defective, there are still plenty of ways to get from one place to another via ‘side streets.’
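The city-grid intuition can be tested with a toy model (this is an illustration of the routing idea, not Teramac's actual configuration software): on a crossbar grid, a search simply detours around a dead junction, while a tree has exactly one route, so any defect on it is fatal.

```python
from collections import deque

def grid_path_exists(n, blocked, start, goal):
    """Breadth-first search on an n-by-n grid, detouring around blocked junctions."""
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            return True
        for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= step[0] < n and 0 <= step[1] < n
                    and step not in blocked and step not in seen):
                seen.add(step)
                queue.append(step)
    return False

# One defective switch at (2, 2) still leaves plenty of "side streets":
print(grid_path_exists(5, {(2, 2)}, (0, 0), (4, 4)))  # True
```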

It might seem odd that computer designers aren’t already taking advantage of such a simple solution to defective wires and switches. But conventional wires are expensive, and the crossbar layout uses almost 10 times as much wire as the tree. “If you bought a house, you wouldn’t want to spend 10 times the mortgage buying insurance,” Williams says. For the crossbar architecture to be effective, there has to be an inexpensive way to produce wires in abundance—exactly what Williams’ team has found on the molecular level.

Williams’ colleagues at Hewlett-Packard have already built a prototypical crossbar computer called Teramac, out of conventional chips. Teramac was constructed with more than 220,000 faulty components, about three percent of its resources. Once the crossbar was arranged, the researchers connected Teramac to a workstation that tested the crossbar to identify the defective switches. The workstation then reset Teramac’s good switches to a configuration that could perform complicated calculations while staying far from the defective switches. The end result was a computer about 100 times faster than a top-end workstation.

The next problem for Williams’ team is to figure out how to get molecular wires and switches to assemble into the necessary crossbar structure. So far, the solution has been elusive. “We’re trying to make something trivial, just a little tic-tac-toe board,” Heath says. “The fact that we can’t do it right now tells you just how hard it is, and how crude our knowledge is of this kind of process.”

James Tour of Rice University and Mark Reed of Yale University, who have produced their own promising breed of molecular wires and switches, have a different idea. Since assembling such devices into a crossbar presents a huge difficulty, why not instead just dump a bunch of them onto a flat surface, and see what you get? The resulting circuit will be a “junkyard” in comparison to the crossbar architecture, Tour says. But he still hopes to turn it into a useful machine, by hooking it up to computer software. A program would test various on-off configurations in small blocks of switches, to see which produces the best response to a particular input signal. Tour likens the process to oil exploration, in which a few sensors above ground can detect what is deep inside the earth. “We’re mapping out what’s inside a box by testing on the outside,” Tour says. “The software just has to figure out how to use it, not what it looks like.”

Like the crossbar design, Tour and Reed’s approach has advantages and disadvantages. It gets around the difficulty of assembling the wires and switches into a predetermined structure. And since the molecules assemble randomly, many more will fit on a surface than in the crossbar arrangement, which leaves large spaces between the switches. But random assembly imposes a huge burden on the software that must configure this hodge-podge of wires and switches: Finding the best arrangement may take too long to be practical. Tour hopes this difficulty may be avoided by keeping the blocks small.

Regardless of which architecture engineers choose, they face a serious obstacle in the amount of heat electrical circuits give off. Pentium chips, with their densely packed transistors, emit more heat per unit of surface area than a stove-top cooking element. That’s why when you turn on a computer, you hear an immediate whirring noise—it’s the sound of a fan, keeping the computer from melting down.

Molecular components would require much less current than the transistors in use now, but there would also be many more of them packed into the same area. In principle, a million molecular switches could fit in the space a transistor takes up. But heat dissipation may make it impossible to fit in more than 10,000, Williams says. “We all have strategies for thinking about how not to heat them up too much,” Ellenbogen says. “But we’re still going to be passing current through a very small space.”

Connect the dots

Since that’s the case, could it be time to move beyond using electrical currents to compute? Imitating the wires and switches of today’s computers may not be the best approach to building molecular computers, suggests a team of scientists at Notre Dame University. “When the automobile first came in, the idea was to take the technology used in a carriage and transfer it to the automobile,” says Marya Lieberman, a chemist at Notre Dame. “Later the auto developed and was seen as something of its own, not as a horseless carriage.”

She adds, “The approach we’re using is to start with the question, ‘What do molecules do well?’ And then build up a computing scheme from that.”

One thing molecules do well is hold electrons. Lieberman’s colleagues Craig Lent and Wolfgang Porod have designed a theoretical computer architecture based on objects called quantum dots, small islands of metal that use an electric field or insulation to hold a single electron in place. Scientists have built quantum dots just a few nanometers in size.

To represent zeros and ones in this new computing scheme, Lent and Porod plan to use quantum dot ‘cells,’ consisting of four dots placed at the corners of a square. If you throw two loose electrons into the cell, they will migrate to two of the dots. Since electrons repel each other, they will prefer to sit on diagonal dots rather than adjacent dots, so as to be as far from each other as possible. This means the quantum dot cell will take on one of two configurations, corresponding to the two diagonal arrangements of electrons. To use quantum dot cells for computation, call one of those configurations ‘zero’ and the other ‘one’.
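Why the diagonal? A tiny energy calculation makes it clear (a toy model with a 1/distance repulsion, not a real quantum-mechanical treatment): of all the ways to place two electrons on the four corner dots, the diagonal pairs keep them farthest apart and so cost the least energy.

```python
from itertools import combinations
from math import dist

# Four dots at the corners of a unit square; two electrons repel as 1/distance.
corners = [(0, 0), (1, 0), (0, 1), (1, 1)]

def energy(pair):
    """Toy repulsion energy of one placement of the two electrons."""
    return 1 / dist(*pair)

best = min(combinations(corners, 2), key=energy)
print(best, f"energy {energy(best):.3f}")  # a diagonal pair, energy 1/sqrt(2)
```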

But it’s not enough to have a way to represent ones and zeros. To do meaningful calculations, you have to be able to send information over distances, and transform it according to rules of your choice. To accomplish the first of these tasks, Lent and Porod propose ‘wires’ consisting of long strings of quantum dot cells, lined up so that each square touches the next square along one of its sides. To send a signal through the wire, no current is needed. Just set the first cell along the wire into your chosen configuration. The electrons in the second cell move to sit in the same configuration, since that is the way they can stay farthest from the electrons in the first cell. The electrons in the third cell do likewise, and each cell influences the next, in a domino effect. At the end of the wire, the electrons sit along the same diagonal as the ones at the start of the wire, and you have successfully transmitted your signal.
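The domino effect is simple enough to mimic in a few lines (again a logical sketch, not the physics): each cell settles into the same diagonal configuration as its neighbor, and the input bit sweeps down the wire.

```python
# Toy domino model of a quantum dot wire: each cell copies its left neighbor's
# diagonal configuration (0 or 1), since that keeps its electrons farthest away.
def propagate(wire, input_bit):
    wire = wire[:]
    wire[0] = input_bit         # set the first cell by hand
    for i in range(1, len(wire)):
        wire[i] = wire[i - 1]   # each cell settles to match its neighbor
    return wire

print(propagate([0] * 6, 1))  # [1, 1, 1, 1, 1, 1] -- the signal reaches the end
```

Note that nothing flows along the wire but influence; no electron travels from one end to the other.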

Lent and Porod also have found arrangements of cells to perform the various logical transformations used in calculating. One essential logical operation is something called a ‘majority gate,’ a device that takes in three bits of information (zeros and ones) and produces an output the same as the majority of the inputs—so, for instance, if you put in two zeros and a one, the majority gate will give out zero. To build a majority gate, start with one quantum dot cell and put cells next to three of its sides. If the three outer cells are assigned particular configurations, the inner cell will take on the configuration that appears more often in the outer cells—the majority configuration—since that is the way its electrons can stay farthest from the other electrons. Lent and Porod also have a design for an ‘inverter,’ a device that switches a zero to a one, and vice versa.
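Stripped of the physics, the two primitives behave like this (just the truth tables, sketched in Python):

```python
# Logical behavior of the two primitive quantum dot gates.
def majority(a, b, c):
    """Output whichever bit appears in at least two of the three inputs."""
    return 1 if a + b + c >= 2 else 0

def inverter(a):
    """Flip a zero to a one, and vice versa."""
    return 1 - a

print(majority(0, 0, 1))  # 0 -- the majority of the inputs
print(inverter(0))        # 1
```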

In a quantum dot wire, a signal at the left end of the wire works its way to the right end as each cell influences the cell to its right.

In an inverter, the output signal is the reverse of the input signal. Inverters and majority gates can be combined in complicated arrangements to create all the logical operations a computer uses.

In a majority gate, the output configuration is the same as the input configuration that occurs most often (in this case, the configuration in wires B and C).

All the manifold logical structures that go into computing can be constructed by connecting inverters and majority gates together into complicated arrangements. So the next step is actually to build a computer.
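One standard trick shows how the composition works (a textbook construction, sketched here as an illustration rather than Lent and Porod's specific designs): pin one input of a majority gate to zero and it becomes an AND gate; pin it to one and it becomes an OR gate; add an inverter and you can build anything.

```python
# Ordinary logic gates built from a majority gate and an inverter.
def majority(a, b, c):
    return 1 if a + b + c >= 2 else 0

def AND(a, b):
    return majority(a, b, 0)   # third input pinned to 0

def OR(a, b):
    return majority(a, b, 1)   # third input pinned to 1

def NAND(a, b):
    return 1 - AND(a, b)       # NAND alone suffices for any circuit

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), NAND(a, b))
```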

Engineers have not yet managed to build nanometer-scale dots uniform enough to be used for computing. But the Notre Dame researchers have built a prototypical quantum dot system out of larger dots. This first device has a serious drawback. It can function only at temperatures near absolute zero, the temperature at which molecules stop moving. Researchers are working on raising the operating temperature and shrinking the size of the dots. Happily, those two things may go hand in hand: As the dots get smaller and closer together, the forces they exert on each other will grow stronger, making it possible for a computer to function in the more chaotic environment at room temperature.

Figuring out how to exploit single-molecule dots will not be easy, says Lent. To detect the configuration of a dot, researchers will have to come up with a device that can sneak up close to the molecule and discover the location of a single electron—a significant technological challenge. And, unlike the approaches of Williams and Tour, the quantum dot architecture would rely on potentially expensive top-down manufacturing techniques like the ones in use now, in which the architecture is decided upon ahead of time and then imprinted on a chip. But Lieberman hopes that as etching techniques are refined, cost won’t be an issue. Since quantum cells are all identical, she says, it should be easier to put them in place than wires and switches made of several different kinds of molecules.

If the Notre Dame team does succeed, the quantum dot computers should give off much less heat than any computer with current running through its wires, Ellenbogen says. “Then you have a cool computer,” he says, “in both the literal and metaphorical sense.”

Any or all of these designs may soon produce working molecular computers. Experts aren’t sure which one has the best chance, but they agree that molecular computers are only one or two decades away. “It’s almost frighteningly close,” Williams says.

When they arrive on the scene, molecular computers will deliver one indisputable benefit: a drastic reduction in the amount of energy computers need to operate. This is welcome news at a time when computers are consuming a growing share of the nation’s power supplies. “In principle, it should be physically possible to do the work of all the computers on Earth today using a single watt of power,” Williams says. “There’s a wonderful energy-saving potential.”

It’s hard to imagine what we might do with computers on such an incomprehensibly tiny scale. “If they are as efficient as we hope they’ll be, they could be in places we don’t expect them, like clothes,” Heath says. “When you come up with something cool and new, people usually come up with cool and new things to do with them.” Why would you want a computer in your clothes? For starters, it might have a digital display that could help you navigate an unfamiliar city, or remind you of the name of that person coming toward you, who’s about to be mortally offended that you have forgotten him.

Once molecular computers are integrated into society, Williams predicts, they will be used in an astonishing range of applications, from virtual reality devices that give us the sensation of ‘being there’ to medical scanners that produce a three-dimensional image of a human body, detailed almost down to individual cells. On the other hand, Williams says, we shouldn’t imagine that all the dreams of science fiction writers will suddenly become reality. “The people who stake out the extremes are not working in the field,” he says.

Regardless of how molecular computers are used, Ellenbogen says, they will force us to rethink our understanding of material objects. “What we’re talking about in some ultimate sense is making computation a property of matter, the way color is a property of matter,” he says. “It’s a capability that is almost within our grasp that is transformative. Why do you want computation as a property of matter, what would you do with it? Well, in 1950, no one had invented Excel spreadsheets, no one had a word processor, no one was thinking about the Internet, so no one knew what you would do with a computer in every household.”

There is one type of matter that can already compute, Ellenbogen points out: ourselves. No one knows exactly how many neurons are in the human brain, but experts believe a brain performs the equivalent of 10 thousand trillion operations in a second, Williams says. Within 10 years, he thinks, we will be making computers with that capability. This will raise some serious ethical questions, but also open up a vast untapped potential.

Ellenbogen adds, “Until we have a molecular computer, and we turn the next generation of inventive people loose on it, we don’t know what they are going to do with it. But it will be something wonderful.”


WRITER Erica Klarreich
B.A., mathematics, Brooklyn College; Ph.D., mathematics, State University of New York, Stony Brook.
Internship: Nature Magazine, London, England
ILLUSTRATOR Cathy Genetti Reinhard
B.A., biology and environmental studies, University of California, Santa Cruz
Internship: Design Science, Santa Cruz, California

Text © 2001 Erica Klarreich
Illustrations © 2001 Cathy Genetti Reinhard