Shenoy, who heads a new lab at Stanford University, is one
of several researchers across the country who are linking
brains to computers as part of the growing field of neural
prosthetics. He is devising ways to tap into a monkey's
brain and read where the animal plans to reach with its arm. He
can route these signals to a computer icon that moves for
the monkey.
By bypassing the need for the brain and arm to
"talk" through the usual neural connections, this
technology could eventually help people with spinal cord
injuries to type, pick up a fork, or turn a page just by
thinking about it. Electrodes set in the brain will talk
to robots or stimulate distant muscles.
Shenoy is well aware that overblown research claims have
raised the hopes of paraplegics in the past, only to
fizzle. But he believes his monkey experiments are leading
to practical results. "We have a view toward human
patient tests. We've initiated those conversations with
neurosurgeons here at Stanford," he says. "We
have to have a bigger picture, an ambitious goal, or we're
frittering away our time."
Researchers in neural prosthetics build devices that make
up for lost neural activity. In the healthy body, the brain
communicates with the limbs via the spinal cord. Messages
zip along as electric pulses through end-to-end nerve
cells, moving from the brain to the spinal cord and from
the spinal cord to the limbs. Any break in this line of
communication stops the message cold, usually permanently.
Using sophisticated new electronic devices, researchers
hope to bypass such breaks.
One approach to treating spinal cord injury, for example,
is to build a neural prosthetic that mimics the work of the
spinal cord. Three steps are involved in building such a
device: plucking neural signals from the brain, making
sense of them, and carrying out the intention encoded in
the signals.
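To make those three steps concrete, here is a minimal sketch in Python. Everything in it, the function names, the firing rate, the left/right threshold, is an invented stand-in, not a description of any real system.

    def read_signals():
        # Step 1: pluck neural signals from the brain. A fixed number stands
        # in here for an electrode recording, in spikes per second.
        return 95.0

    def decode(firing_rate, threshold=55.0):
        # Step 2: make sense of the signals, here with a crude left/right rule.
        return "right" if firing_rate >= threshold else "left"

    def act(intention):
        # Step 3: carry out the intention. A real prosthetic would drive a
        # cursor, a robotic arm, or muscle stimulators instead of printing.
        print("move cursor " + intention)

    act(decode(read_signals()))   # prints: move cursor right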
"The biggest bottleneck has been getting neural
signals out of the brain correctly," says Daniella Meeker,
a graduate student who collaborated with Shenoy when he was
a post-doc at the California Institute of Technology,
before he moved to Stanford. Each electrode listens to a
single nerve cell, and there's no wiggle room. If the electrode
moves even 50 microns (about the width of a human hair) away from the
neuron it's recording, it will lose communication.
Unfortunately for a scientist trying to place an electrode,
the brain is a bit wiggly. The pliable brain moves
slightly relative to the skull, threatening to move the
target neuron out of earshot of the electrode, which is
fixed in the bone of the skull. This loose connection
between brain and electrode may be the limiting factor for
using neural prosthetics in humans, Shenoy says.
Moreover, electrodes get gummed up with sticky fluids after
a while, insulating them from local signals. The
electrodes that Shenoy and colleagues plant in a monkey's
brain have limited lifetimes. Improving the robustness and
longevity of the electrodes also will be critical to
transferring this technology into humans.
Nevertheless, these challenges haven't prevented
researchers from achieving some startling successes in
laboratory monkeys. The Caltech research team Shenoy worked in, led
by Richard Andersen, trained a rhesus monkey to touch the
right or left side of a computer screen in response to an
on-screen flash of light. All the while, the scientists
snooped into the monkey's brain, recording neural pulses.
Using this code, a computer read "right" or
"left" from the monkey's brain activity and
flashed an arm icon on the corresponding side of the
screen. The crafty monkey soon realized it didn't have to
lift a finger to get its reward, a sip of juice; it just
had to think about moving. The thought alone was enough to
get the virtual arm to do the work and earn the reward.
"They preferred using the icon to play these video
games we provided them instead of using their real
arm," says Meeker. She and others were surprised that
it was so natural for the monkeys to quit moving their
arms.
The researchers bring the monkeys to a dark, isolated room
where there is no background interference. It's so quiet
in these chambers that you can almost hear yourself think.
And that's exactly what Shenoy is trying to do: hear
the monkey's thoughts.
What exactly does a thought sound like? "If you're
listening to it, it is sort of like a buzzing, and the
buzzing increases or decreases its frequency,"
says neuroscientist Andrew Schwartz, who does
related work at the Neurosciences Institute, a private
foundation in San Diego. The raw language of neural pulses
is better suited to a computer's ear than a human's.
But even for a computer, reading these buzzing thoughts is
tricky. "We don't know the language of the brain.
We're tourists with only a visitor's guide book,"
Shenoy says. "The brain is magic. How do wet squishy
neural cells compute? It's just fascinating."
Information is contained in the rate and timing with which nerve
cells fire their electric pulses. By monitoring that process for a
while, researchers can correlate a nerve cell's firing rate with a
monkey's actual movement.
"We listen in during the normal behavior, and we make
our little map. For example, 100 spikes per second means
right, 10 spikes per second means left," says Shenoy.
Thereafter, they can predict intended movements from the firing rate
of the recorded neuron.
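In code, Shenoy's "little map" could be as simple as averaging the firing rates recorded during known movements and assigning new activity to the nearest average. The numbers below are invented for illustration; they are not measurements from his lab.

    # Hypothetical calibration data: (firing rate in spikes/s, observed reach direction).
    calibration = [(102, "right"), (97, "right"), (11, "left"), (9, "left")]

    # Build the map: the mean firing rate seen for each direction.
    rate_map = {}
    for direction in ("left", "right"):
        rates = [r for r, d in calibration if d == direction]
        rate_map[direction] = sum(rates) / len(rates)

    def predict(rate):
        # Assign a new firing rate to the direction with the closest calibrated mean.
        return min(rate_map, key=lambda d: abs(rate_map[d] - rate))

    print(predict(88))   # prints: right
    print(predict(15))   # prints: left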
Once the monkey's intention has been read, that intention
must be acted out. In Shenoy's experiment, the researchers
simply flashed an arm icon to the correct side of the
screen. Eventually, researchers aim to move a real arm
through muscle stimulation or to move a robotic arm.
If you ask Shenoy when this technology will be available in
humans, one answer he gives is "two years ago."
Though there are no systems that move arms, researchers
Philip Kennedy of Georgia Tech and Roy Bakay of Emory
University have implanted electrodes in humans with
amyotrophic lateral sclerosis ("Lou Gehrig's
disease") or strokes in their brain stems. These
patients can't move a muscle but are cognitively alert. The
implants allow them to move an icon over a virtual keyboard
and slowly tap out messages, simply by thinking. This is
the first example of a human brain communicating directly
with a computer.
Kennedy and Bakay implanted two glass cones, each about the
size of the tip of a ballpoint pen, into the brain of Johnny
Ray, a 53-year-old brain-stem stroke victim who is
completely paralyzed. His brain functions perfectly, but its signals
no longer reach his muscles. With special chemicals,
Kennedy and Bakay induced neurons in the motor
cortex, which controls movement, to grow into the
glass cones, ensuring that the electrodes would stay in
place. Ray was told to think about moving his finger. A
circuit routed this signal to an icon on the screen instead
of into his arm. After practicing, Ray eventually learned
to will the cursor to move right or left and up or down.
The brain signals act as a computer mouse. They move the
cursor across the screen and select pre-scripted phrases,
such as "See you later. Nice talking with you,"
or "I'm thirsty."
Beyond these initial human tests, the field of neural
prosthetics is embroiled in many controversies. One major
quandary is where to place electrodes within the
still-mysterious brain.
Shenoy's group placed electrodes deep in the brain, in an
area called the "parietal reach region." This
area of the brain first specifies where you want to go,
and precedes any formal plan for how to get there. It's
the place where thoughts are born.
"This is the highest level, the most abstract plan of
how you want to move your arm," Shenoy says.
Most other researchers place electrodes in the motor cortex
of the brain, which is the last place thoughts visit before
they exit the brain for the spinal cord. But tapping into
the planning region of the brain has advantages, Shenoy
believes. Whereas motor neurons coordinate movement along a
pathway, planning neurons simply tell where and when the
arm should go next, an easier set of instructions to
read and transfer. If the neuron just specifies a target,
then scientists should be able to engineer a robotic
solution of how to get there, without having to read tons
of neurons. Recording electrical impulses from a few
neurons is technologically simpler and surgically less
invasive, and thus may be more feasible to do in humans in
the near future, Shenoy says.
Planning neurons may also be less susceptible to the
changes that may take place in motor neurons after
paralysis, when the muscles they control become inactive.
"We're going to a deeper, more isolated, more central
part of the brain, farther from the sites of potential
injury," Shenoy says. "It may well be that,
since the motor cortex is closer to the periphery, if you
have a spinal cord injury the motor cortex reorganizes and
the parietal reach region remains intact."
But not everyone agrees with this theory. "I think
most of the data are against them," Schwartz says.
"My point of view is even if it [the motor cortex]
does reorganize you can train the individual to reorganize
it again to the way you want it to work. In my mind it's
not such an issue."
Shenoy's experiment involved only one neuron, but he says
this was just a proof of concept. He plans to expand to
reading from several neurons, using electrode arrays.
"It could be that if we then go listen to a second
neuron or a third or a fourth or even 100 neurons all at
the same time, then we can do a very good job of predicting
where the monkey wants to reach, not just left versus
right, but up versus down, and near versus far,"
Shenoy says.
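One standard way to combine many neurons, sketched below, is to treat each trial as a list of firing rates and assign it to the reach target whose average rate pattern it most resembles. The three-neuron rates and target labels are made up; this illustrates the general idea rather than Shenoy's actual method.

    import math
    from collections import defaultdict

    # Invented calibration trials: (firing rates of three neurons in spikes/s, reach target).
    trials = [
        ([90, 12, 40], "left-near"),  ([85, 15, 44], "left-near"),
        ([10, 88, 42], "right-near"), ([14, 92, 38], "right-near"),
        ([50, 55, 95], "up-far"),     ([48, 60, 90], "up-far"),
    ]

    # Average each neuron's rate per target to build a "template" pattern.
    grouped = defaultdict(list)
    for rates, target in trials:
        grouped[target].append(rates)
    templates = {
        target: [sum(col) / len(col) for col in zip(*rate_lists)]
        for target, rate_lists in grouped.items()
    }

    def predict(rates):
        # Assign new activity to the target whose template is closest.
        return min(templates, key=lambda t: math.dist(rates, templates[t]))

    print(predict([88, 13, 41]))   # prints: left-near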
Indeed, there are distinct advantages to reading more than
one neuron. John Donoghue, a top neuroscientist at Brown
University, says that it is crucial to read from
populations of neurons. "How we're coming to
understand the brain is like trying to understand one
instrument at a time in a symphony," Donoghue says.
"Certain things arise from interactions, such as
harmony, that can't be heard one at a time."
Donoghue and his collaborators at Brown look at groups of 6
to 25 cells in the motor cortex using multi-electrode
arrays. They read out specific motor plans,
three-dimensional pathways with direction and speed, not
just binary movements. "Our lab is interested in
turning thoughts into behaviors," he says.
In Donoghue's experiment, a monkey plays a video game,
rather like ping-pong, where it has to capture an on-screen
target by moving a mouse with its hands. It doesn't take
the monkey long to master the game. After the monkey has
played for a few minutes, the scientists disconnect the
mouse from the computer and switch from mouse control to
brain control, unbeknownst to the monkey. Instantaneously,
the monkey controls the video game from its brain.
"What's coming out of the brain is some kind of code
that mathematical filters can decipher in minutes,"
Donoghue says.
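The "mathematical filters" Donoghue mentions are often linear: a weighted sum of each neuron's recent spike counts predicts how the cursor should move. Here is a least-squares sketch of that idea with fabricated spike counts and velocities; it is not the Brown group's actual algorithm.

    import numpy as np

    # Fabricated training data: each row of X holds spike counts from four
    # neurons in one 100-millisecond bin while the monkey used the mouse;
    # each row of y is the cursor velocity (dx, dy) observed in that bin.
    X = np.array([[12, 3, 7, 1],
                  [11, 4, 6, 2],
                  [ 2, 10, 3, 8],
                  [ 3, 11, 2, 9],
                  [ 7, 7, 7, 7]], dtype=float)
    y = np.array([[ 5.0, 0.5],
                  [ 4.5, 1.0],
                  [-4.0, 3.0],
                  [-4.5, 3.5],
                  [ 0.0, 2.0]])

    # Fit the filter: a weight matrix W such that X @ W approximates y.
    W, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Under brain control, the same weights are applied to fresh spike counts.
    new_counts = np.array([10.0, 3.0, 6.0, 1.0])
    dx, dy = new_counts @ W
    print(f"move cursor by ({dx:.1f}, {dy:.1f})")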
Donoghue was surprised the monkeys could do it so well.
Eventually, one monkey even realized it didn't need to move
the mouse, and it quit moving its hand altogether.
Based on these findings, Donoghue says he could reconstruct
how a person was scribbling on paper just from recording
his brain activity. "Once you have that signal, you
can control any kind of device that you can imagine,"
he says.
The system performs better when the team reads more nerve
cells, he says. However, it's a trade-off. Breaking into
the brain is one of the biggest obstacles to this type of
technology. The more electrodes in the brain, the greater
the chance of infection, a particular danger once the
procedure is moved outside the controlled environment of a
lab.
Says Donoghue, "If you had simply paralyzed one leg,
would you do this [in order to walk normally again]?
I'd say we're not sufficiently comfortable with
this technology to recommend it in this case."
"The holy grail in these communities would be to have
a totally non-invasive way of reading out the brain and
what you want to do," Shenoy says. "We're not
there, but we're at least getting much closer to the
invasive way of doing what we've been discussing."
There are procedures that involve cutting into a part of
the body other than the brain, and these might be better
for people who are only partially paralyzed. For example,
scientists have sent signals from a working shoulder to a
non-working hand through external electrodes, letting the
shoulder take on some duties of the injured spinal
cord.
Shenoy cites functional magnetic resonance imaging (fMRI),
which remotely images brain activity by measuring blood
flow changes. However, like a normal MRI scanner, the machine fills
a huge room. Even if you could miniaturize the technology
to a pinhead, the resolution is not good; you're not
able to say, "That neuron just fired one spike,"
Shenoy says.
The history of practical successes in the field of neural
prosthetics is rather short. The two biggest success
stories involve writing signals into the brain
instead of reading them out.
The cochlear implant, a commercially available device that restores
hearing to some deaf people, was the first real interface between
the brain and an external, man-made device. The implant takes over
for a damaged cochlea, the organ that normally turns sound waves
into electric pulses that stimulate nerve cells leading to the
brain. A receiver near the ear picks up digitized sound from a
microphone and converts it into electric pulses. The pulses trigger
microelectrodes in the cochlea, which then spark the neurons leading
to the brain.
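The implant's signal path can be sketched in a few lines: incoming sound is split into frequency bands, and the energy in each band sets the stimulation level on the electrode assigned to that band. The toy version below uses a synthetic tone and four channels; real implants use many more channels and far more sophisticated processing.

    import numpy as np

    fs = 16000                                # sample rate in Hz
    t = np.arange(0, 0.05, 1 / fs)            # 50 ms of "microphone" input
    audio = np.sin(2 * np.pi * 1000 * t)      # a 1 kHz tone stands in for speech

    # A toy four-channel implant: one electrode per frequency band (Hz).
    bands = [(100, 500), (500, 1500), (1500, 3000), (3000, 6000)]

    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), 1 / fs)

    for electrode, (lo, hi) in enumerate(bands):
        # The energy in each band sets the pulse strength on its electrode.
        energy = spectrum[(freqs >= lo) & (freqs < hi)].sum()
        print(f"electrode {electrode}: stimulation level {energy:.1f}")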
Another electronic device, made by Medtronic, Inc.,
prevents tremors in Parkinson's patients by writing signals
into the brain and disrupting neural circuits.
In fact, for neural prosthetics to be truly useful, they
must be able to both read signals out of the brain
and write them in. "The typewriter is helpful for a
paraplegic, but from a longer-range scientific view, we
want to be able to do much more than this," Shenoy
says.
For example, just picking up a glass is a complicated
coordinated process between the brain and the fingers.
Grip too hard and you might break the glass. Grip too
lightly and you'll drop it. The prosthetic either has
to be intelligent enough to gauge how to react, or it has
to be able to talk back and forth with the brain.
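One way a prosthetic could "gauge how to react" on its own is a simple feedback loop driven by a grip-force sensor. The toy controller below is purely illustrative; the force values and gain are invented.

    def adjust_grip(current_force, target_force, gain=0.3):
        # Nudge the grip toward the target; the small gain keeps each
        # correction gentle so the glass is neither crushed nor dropped.
        return current_force + gain * (target_force - current_force)

    force, target = 0.0, 2.0   # grip force in newtons; both values are invented
    for step in range(10):
        force = adjust_grip(force, target)
        print(f"step {step}: grip force {force:.2f} N")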
Ultimately, the neural prosthetic should be able to learn
to work with the brain. "The brain is going to change.
Therefore, our algorithms and our electronics have to keep
up with, if not encourage, the brain's behavior. That
way, the whole system improves itself, just like a child
learning to catch a ball," Shenoy says.
"Eventually, you want to have the computer system
intelligent enough that it fine tunes itself, sort of like
modern cars giving themselves tune-ups."
Listening to the brain is going to satisfy Shenoy for only
so long. Eventually, he wants to have a conversation.