
CHRISTOPHER SEWELL LOWERS THE SCALPEL until he feels contact and the soft, elastic resistance of skin. Fine lines and dark hairs mark the tan expanse below his poised hand. He pauses to check heartbeat and oxygen monitors. One careful stroke parts the skin. Another lays aside pink muscle to reveal the unnatural bulge of a tumor. As his blade slides through soft, compliant fat, Sewell feels a stronger, spongy resistance — then an ominous pop. Blood gushes out where his knife has cut into a large artery. The heart monitor blips faster and faster as Sewell trades scalpel for suturing scissors, and struggles to repair the damaged vessel. Blood obscures his view. The heart monitor slows, blips, pauses, blips again. Flatlines.

Good thing this is only a simulation. The “patient” is a set of live-motion graphics, human tissue models, and mathematical calculations on a desktop computer. The “scalpel” is the Phantom Desktop touch feedback device, a pen-shaped tool attached to a robot arm. Gears at the Phantom’s wrist, elbow and shoulder provide resistance to let Sewell “feel” the shape and texture of computer-generated objects. Sewell, a graduate student in computer science, is demonstrating virtual surgery software, part of a joint project between Stanford University computer scientists, robotics engineers and surgeons.

The Stanford group is developing their touch-enhanced system for use in surgical training. Once the technology is in place, the use of simulation in educational programs and in tests of surgical skill — perhaps even in practice operations for specific patients — is only a matter of time and acceptance in the medical culture, says Stanford surgery chair Thomas Krummel. The key benefit will be allowing aspiring surgeons to push their skills to new limits with no risk to patients.

“How might we rethink our 100-year-old tradition of apprenticeship,” asks Krummel, “and apply what the military and aviation have done to develop performance in a context where failure is okay?” The aerospace industry, for example, has long relied on simulation to test rocket and airplane models as well as pilots. In medicine, and surgical procedures in particular, the longstanding custom of training first on objects, then assisting with live patients, leaves room for improved realism on the one hand, and greater tolerance for error on the other.

Medical students and surgical residents currently practice by cutting and sewing everything from banana skins to pigs’ feet. Then they assist in real surgeries. Eventually they perform entire surgical procedures under supervision, before launching out on their own. But according to Krummel, the apprenticeship tradition is under pressure from ever-tightening compensation by insurance companies. To keep hospitals in business, experienced surgeons must squeeze in more and more operations, leaving less time to show new surgeons the ropes.

At the same time, training through real operations leaves little room for errors as young doctors progress up the learning curve. Animal surgery is an option, but it’s limited by strict animal welfare regulations. And today’s surgeons need more training than ever before, to keep up with new procedures such as laparoscopy — the minimally invasive surgery-through-a-tube — and new technologies such as special staples, robots, cameras and lasers. Realistic practice, in an environment where failure is noncatastrophic and even instructive, is the promise of simulated surgery.

Krummel has been promoting the idea since 1994, when as chair of surgery at Pennsylvania State University College of Medicine in Hershey, Pennsylvania, he read in the Wall Street Journal about the Phantom touch feedback device, made by SensAble Technologies, Inc. Seeing the possibilities for medical education, he contacted and began collaborating with Phantom inventor Ken Salisbury. After both Salisbury and Krummel took faculty positions at Stanford, they joined forces with other surgeons and engineers to develop and test prototype simulation systems.

However, simulation of a live operation — including patient responses and hard-to-model body tissues, complete with touch feedback — challenges the computing power of even today’s high-speed machines. So the surgeons and engineers work together to find the right compromises between realism and performance. The simulated “feel” doesn’t have to be perfect to be useful for training, Krummel says, but it should be as close as possible. The models under development at Stanford are getting pretty realistic, he says, compared to the real thing. But speed is an issue. “Virtual reality can be good, fast and cheap,” Krummel says. “Pick two.”

Computer models that simulate the look, sound and feel of tissues, organs, and patient responses in real time demand more computations per second than today’s computers can deliver. Computer manufacturers improve processing speed and memory every year, promising faster simulation in the future. The Stanford researchers are developing algorithms and data-processing tricks to produce the thousands of force-feedback images per second needed for seamless, real-time touch data, not to mention realistic sound and imagery. They will next integrate their detailed visual, auditory and touch models with scenario-setting software that dictates patient condition and responses, for a complete surgery experience in a box.

THE SURGERY THAT SEWELL DEMONSTRATED highlights the patient-response portion of the Stanford project. Sewell, a student in Ken Salisbury’s lab, is creating software that could someday let surgeons build virtual surgery scenarios. Each scenario relies on a set of possible patient conditions, or “states,” with criteria based on key factors — such as heart rate, bleeding, and success or failure of tumor removal — for moving from one state to another. Too much force, or a cut in the wrong location, and the user moves to a state in which the patient is wounded. Fail to take appropriate action in time, and the patient may go downhill. Sew up the bleeding vessel, and the patient stabilizes. The simulations can also include unplanned events, generated at random. The patient might move or cough — or go into cardiac arrest.
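
A rough sense of how such scenario logic can be organized is sketched below in Python. The state names, thresholds, and event probabilities are illustrative assumptions, not details of Sewell’s software.

    import random

    # Toy patient-scenario state machine (illustrative only, not the Stanford code).
    # Each state carries simple vital signs; user actions and random events move
    # the patient between states when thresholds are crossed.
    class PatientScenario:
        def __init__(self):
            self.state = "stable"        # stable, bleeding, crashing, flatline
            self.heart_rate = 72
            self.blood_loss = 0.0        # hypothetical units

        def apply_action(self, action):
            # Planned transitions driven by what the trainee does.
            if action == "cut_artery":
                self.state = "bleeding"
            elif action == "suture_vessel" and self.state == "bleeding":
                self.state = "stable"

        def random_event(self):
            # Unplanned events, generated at random: a cough, or cardiac arrest.
            if random.random() < 0.001:
                self.state = "crashing"

        def step(self):
            # One simulation tick: update vitals, then check thresholds.
            if self.state == "bleeding":
                self.blood_loss += 0.02
                self.heart_rate += 1     # heart races to compensate for blood loss
            if self.state == "crashing" or self.blood_loss > 2.0:
                self.heart_rate -= 3     # patient deteriorates
            if self.heart_rate <= 0:
                self.state = "flatline"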

Sewell’s software combines the planned and unplanned aspects of patient response and links those to visual and touch-feedback models of the body. His current prototype uses simplified virtual tissues that model the surface of skin and muscle, but not their interiors. A three-dimensional model, he explains, would require a lot more computing power — so much so that simulation researchers worldwide are hard at work looking for the best balance between realism and speed.

One approach to modeling body tissue uses “voxels” — 3-D pixels — to create volume. Each voxel has its own characteristics, such as hardness and elasticity, which determine the tissue’s behavior. For example, harder voxels represent the tough outer layer of bone; softer ones, the interior. Variations between adjacent voxels approximate the varied texture of spongy bone.
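
The idea can be made concrete with a bare-bones sketch in Python. The grid size, hardness values, and drilling rule below are assumptions chosen for illustration, not the values used in the Stanford models.

    import numpy as np

    # Voxel bone in miniature: a 3-D grid where each cell stores a hardness value.
    # A tough outer shell surrounds a softer, spongier interior; drilling removes
    # material wherever the tool tip overlaps voxels.
    SIZE = 64
    hardness = np.zeros((SIZE, SIZE, SIZE))
    center = np.array([SIZE / 2.0] * 3)

    for idx in np.ndindex(hardness.shape):
        r = np.linalg.norm(np.array(idx) - center)
        if r < 20:                                   # inside the "bone"
            outer_shell = r > 16
            hardness[idx] = 1.0 if outer_shell else 0.3 + 0.2 * np.random.rand()

    def drill(tip, radius, force):
        # Wear away every voxel inside the drill tip; harder voxels last longer.
        lo = np.maximum(np.floor(tip - radius), 0).astype(int)
        hi = np.minimum(np.ceil(tip + radius) + 1, SIZE).astype(int)
        for idx in np.ndindex(tuple(hi - lo)):
            p = np.array(idx) + lo
            if np.linalg.norm(p - tip) <= radius:
                hardness[tuple(p)] = max(0.0, hardness[tuple(p)] - force)

    drill(tip=np.array([32.0, 32.0, 14.0]), radius=2.0, force=0.1)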

Across the lab from Sewell, Dan Morris, a pre-medical student turned computer-science Ph.D. candidate, brings up an image on his computer monitor: a fine scroll of virtual bone. Modeled by Stanford ear surgeon Nikolas Blevins, the 3-D whorl mimics the temporal bone lying just behind each of our ears. Morris grips the Phantom’s pen-like control, which is currently acting as the mouse for a brand-new 3-gigahertz, dual-processor computer. On screen, the mouse cursor becomes a drill, represented by a purple cylinder with a ridged, spherical tip. The computer’s speakers emit a high buzzing drone, creating the ambiance of a dentist’s office.

Morris moves the Phantom to bring drill to bone. His hand stops suddenly, as though he’s struck stone. The buzzing sound momentarily rises in pitch. Morris presses into the bone. The Phantom trembles with a slight vibration, and the buzzing pitch rises. The bone develops a pit, then a hole, as voxels are removed, one after another. Morris presses harder. The voxels disappear faster as the Phantom’s vibration becomes an audible rattle.

Morris brings up another control on the computer monitor, and intermittent blips join the drilling noise. With each blip, a vertical spike appears on screen, along a glowing green timeline. Morris designed the blips and timeline based on devices that monitor nerve function during actual ear surgeries. The blips represent electrical activity recorded from the facial and auditory nerves. Rising like green bamboo shoots from a natural opening in the temporal bone, the delicate nerves respond to nearby disturbances, warning the surgeon that his drill is nearing a danger zone.

As Morris’s drill approaches the simulated nerves, the blips intensify. Morris presses his drill forward over the bone ledge leading down towards the nerves. The drill slips from hard bone into soft nerve. The blips fire in a rapid frenzy, then stop. This virtual ear will not hear again.

Nerves break quickly on contact with a drill. But pressed with a blunt object, they stretch and deform. While voxels provide a good tool for creating virtual bone, the soft, stretchy, and spongy behaviors of nerve, muscle and internal organs call for a different modeling approach. “Having soft things that move around by themselves and can be squished is an important part of surgical simulation,” Morris says, “but things that move around are just numerically unpleasant.” It takes time to repeatedly recalculate position, to compute stretch and tension as the tissues move and pull on one another. “Everyone doing simulation finds some compromise between realism and real-time.”

Morris’s teammate across campus is taking on the challenge. In the quiet hum of the robotics lab in Stanford’s William Gates building, computer science Ph.D. student Francois Conti double-clicks his mouse. A giant set of disembodied innards appears on his computer screen, complete with a shiny red liver and slippery pink stomach. Tiny bluish veins form a lace border along the intestines.

Click.

The intestines squish, elongate and contract, digesting a virtual lunch.

Click.

The organs move slowly up and down, the red liver compressing slightly with the breath of ghostly, missing lungs.

Conti releases the mouse and takes hold of the Delta haptic device. This contraption resembles a red aluminum spider with three gangly legs pointing skyward, joined together at the end by a handle the size and shape of a hockey puck. Motorized gears on each leg joint transmit touch information to Conti, who is holding the puck-handle.

Conti moves his hand to push the computer cursor into the liver on screen. Squish. He pokes the ghostly entrails. They slosh and jiggle. The realistic movement stems from stretchy sheets of mathematical tissue, which Conti wraps around sets of virtual spheres that slide and bounce to imitate the blobby, elastic behavior of organs.
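
A toy version of that spheres-plus-skin idea might look like the following Python sketch; the two-dimensional layout and the spring constants are assumptions made for brevity, not Conti’s actual formulation.

    import numpy as np

    # Three sphere centers joined by springs: poke one and the whole blob
    # jiggles back toward its rest shape. Illustrative constants only.
    positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])
    velocities = np.zeros_like(positions)
    springs = [(0, 1), (1, 2), (0, 2)]
    rest = {s: np.linalg.norm(positions[s[0]] - positions[s[1]]) for s in springs}

    STIFFNESS, DAMPING, DT = 50.0, 0.98, 0.01

    def step(poke=np.zeros(2), poked=0):
        # One time step of simple explicit integration.
        forces = np.zeros_like(positions)
        for (i, j) in springs:
            d = positions[j] - positions[i]
            length = np.linalg.norm(d)
            f = STIFFNESS * (length - rest[(i, j)]) * d / length
            forces[i] += f                   # spring pulls the pair back to rest
            forces[j] -= f
        forces[poked] += poke                # the haptic cursor pushing in
        velocities[:] = DAMPING * (velocities + forces * DT)
        positions[:] += velocities * DT

    for _ in range(100):                     # squish sphere 0, let the blob settle
        step(poke=np.array([0.3, 0.0]), poked=0)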

Conti drags the cursor across the liver, which twists sideways, then slips, causing his hand to lurch forward. The apparent surface texture comes from variations in the force passed to Conti’s hand in three dimensions. Friction and stickiness come from resisting movement along the surface of an object. Slipperiness is simply a lack — or release — of resisting force.

To make it all work, Conti says, it’s necessary to first track when and where the cursor meets the imaginary objects. With another mouse-click, the on-screen stomach becomes a 3-D gridwork of white-outlined triangles. As Conti pokes at them, the triangles stretch and deform. The surface of each virtual organ is made up of thousands of these triangles. To know when and where the cursor is touching an object, the simulation checks each triangle for contact. Once contacted, the triangle tracks the cursor’s penetration. The deeper the cursor goes, the greater the pressure exerted against further movement by Conti’s hand. Hard surfaces increase the resisting force very rapidly, allowing little surface penetration. More elastic surfaces, like the liver, increase feedback resistance gradually.
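
Very roughly, that bookkeeping can be sketched as a per-triangle penetration check that converts depth into a spring-like resisting force. The stiffness values below are assumptions, chosen only to contrast a hard, bone-like surface with an elastic, liver-like one.

    import numpy as np

    def contact_force(tool, triangle, stiffness):
        # Push back against the tool in proportion to how far it has sunk past
        # the triangle's plane. (A real simulator also checks that the contact
        # point lies within the triangle's edges; omitted here for brevity.)
        a, b, c = (np.asarray(v, dtype=float) for v in triangle)
        normal = np.cross(b - a, c - a)
        normal /= np.linalg.norm(normal)
        depth = np.dot(a - tool, normal)          # penetration depth past the plane
        if depth <= 0:
            return np.zeros(3)                    # no contact, no force
        return stiffness * depth * normal         # deeper push, stronger resistance

    tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
    tool = np.array([0.2, 0.2, -0.01])            # tool tip just below the surface
    print(contact_force(tool, tri, stiffness=5000.0))   # stiff, bone-like response
    print(contact_force(tool, tri, stiffness=200.0))    # gradual, liver-like response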

But squishing tissues and opening them up are different matters in computer modeling, just as in real surgery. “The problem with cutting,” Conti says, “is then you have to start displaying what’s inside.” One method is to use voxels to represent the interior of organs in three dimensions. But it takes a lot of computing power to calculate millions of voxels’ stretch and resistance against sharp objects. Mapping every micrometer of nerve, vessel and organ would be prohibitively slow. So Conti came up with a shortcut: He sticks with two-dimensional triangles, creating new triangles as needed to cover newly cut surfaces.

Conti picks up a virtual scalpel on screen, and slices into the white-framework stomach. The triangles next to the cut flatten as the gash yawns open. A channel has formed, lined with a swatch of new triangles. Conti’s simulation works, but needs further refinement before it will keep up with moment-to-moment actions of a mock operation. “This is the big difficulty with our simulators,” Conti says.

Like a film played back at too few frames per second, touch feedback turns jerky when it isn’t updated often enough — causing the user to sense vibrations as his hand moves across the simulation. In order to produce a realistic feel, the system must produce at least 300 touch “images” per second for soft tissues, and up to 1,000 per second for hard surfaces. By comparison, smooth video requires only about 30 images per second.
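
That timing budget is why haptic code typically runs in its own fast loop, separate from the graphics. The Python sketch below assumes hypothetical get_force and send_force routines and illustrates only the roughly one-millisecond budget per pass.

    import time

    HAPTIC_HZ = 1000     # force updates per second for a convincing hard surface
    VIDEO_HZ = 30        # frames per second for smooth-looking video

    def haptic_loop(get_force, send_force, seconds=1.0):
        # Each pass has roughly one millisecond to compute and send a force;
        # any contact check that overruns the budget is felt as vibration.
        period = 1.0 / HAPTIC_HZ
        deadline = time.perf_counter() + seconds
        while time.perf_counter() < deadline:
            start = time.perf_counter()
            send_force(get_force())
            time.sleep(max(0.0, period - (time.perf_counter() - start)))

    # A do-nothing run just to exercise the loop:
    haptic_loop(get_force=lambda: (0.0, 0.0, 0.0), send_force=lambda f: None)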

To compound the problem, human touch is highly sensitive. Imagine running your hand across the surface of a wooden table. Every grain in the wood, any excess of wax, gives a new sensation. So, realistic touch feedback needs fine spatial resolution. Modeling a single organ in the body requires as many as 100,000 triangles, each of which must be checked up to 1,000 times per second, or on the order of 100 million contact checks every second — creating a bottleneck in real-time touch simulation.

FOR FASTER PERFORMANCE, the Stanford group plans to test whether beefing up their processing power with computer graphics cards, which process many pixels in parallel, will improve speed. Meanwhile, to address the performance bottleneck, simulation researchers around the world are refining their models to reduce the number of calculations needed.

“One idea is to use very cheap models — in terms of computation — where you don’t need to cut, and very expensive models where you need it,” says computer scientist Hervé Delingette of the Institut National de Recherche en Informatique et en Automatique (INRIA) in Sophia-Antipolis, France. Delingette and colleagues model the liver using about 5,000 tightly packed tetrahedra, or pyramid shapes. The pyramids create 3-D volume, unlike the 2-D surfaces created with Conti’s triangles. The advantage is that any type of cutting is possible through that volume, but the computations for pyramids are more complex, and therefore slower, than for triangles.

To minimize number-crunching and improve efficiency, Delingette pre-computes the stretch-and-bend response of each pyramid. But this strategy means the pyramids can’t change — in other words, the pre-computed tissue can’t be cut. “The problem,” Delingette says, “is specifying beforehand which parts you can cut, which not.”
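
The trade-off can be put in code: compute each element’s response once, up front, and run-time deformation becomes a cheap lookup. The response function below is a placeholder assumption standing in for the real elasticity computation, not Delingette’s method.

    import numpy as np

    def element_response(vertices):
        # Stand-in for an expensive per-element stretch-and-bend computation.
        v = np.asarray(vertices, dtype=float).ravel()
        return 1e-3 * np.outer(v, v)              # placeholder stiffness matrix

    # One toy tetrahedron; a liver model would hold thousands of these.
    mesh = {0: [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]}

    # All the hard work happens once, before the simulation starts ...
    precomputed = {eid: element_response(v) for eid, v in mesh.items()}

    # ... so each frame needs only a lookup and a multiply. The catch: if a cut
    # changes the mesh, the stored responses are stale and must be rebuilt.
    def deform(element_id, displacement):
        return precomputed[element_id] @ np.asarray(displacement, dtype=float)

    print(deform(0, np.ones(12)))                 # 12 = 4 vertices x 3 coordinates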

To get around this, the simulation watches and responds to what the user is up to. As the virtual surgeon approaches an area with the scalpel, the model adds pyramids to that section in preparation to fill out newly cut surfaces, Delingette explains.

Delingette’s group is now working with surgeons at the European Institute of Telesurgery in Strasbourg, France, to evaluate the simulations as a tool for teaching “keyhole” surgery techniques. They are comparing a simulator against a standard mechanical training device, which gives doctors practice in inserting narrow surgical tools into the patient through a special tube, via a small incision. The simulator also uses a tube and narrow tool, but adds touch feedback and a simulated view of the surgery on a computer screen — very similar to the on-screen view used for a real operation. In collaboration with the Strasbourg surgeons, and German and Norwegian companies, Delingette seeks to develop a keyhole surgery simulator that can be deployed for surgical training by 2007.

Back in California, the Stanford team expects to put their simulated ear operation before surgical residents for evaluation within the next six months, according to Stanford surgeon Blevins, who developed the computer models used by Dan Morris to simulate drilling. Currently, Blevins says, residents practice on temporal bone from cadavers. But the bone lab is costly to maintain, and tissue availability is unpredictable. More importantly, he says, simulation can go beyond mimicking anatomy to create scenarios that might arise in a real surgery. Simulations can also present disease processes or challenging anatomy rarely seen in the cadaver lab.

Future work, Blevins says, could even develop the means to build customized simulations using imaging data from specific patients, so that surgeons can practice before a given person’s operation. Software that converts MRI and CT scans into models easily and quickly doesn’t exist yet, but the Stanford group is working on it.

It’s only now that researchers really have access to the technology for creating true-to-life surgical simulations, Blevins says. Virtual surgery doesn’t yet feel exactly like the genuine thing, let alone provide the operating room setting. Still, the Stanford researchers say it can give a close enough approximation to be useful, and will only get better. Simulation will never replace the need for apprenticeship in the OR, but Blevins says it can dramatically increase the amount and breadth of realistic training experiences for medical students and residents. “I think it’s inevitable it will become a standard part of the surgical learning process.”