"Scientists for the first time have linked multiple brain cells with silicon chips to create a part-mechanical, part-living electronic circuit" — this opening of a Washington Post story last August grabbed my attention. Peter Fromherz and Martin Jenkner of the Max Planck Institute for Biochemistry in Germany had succeeded in growing networks of six living snail neurons onto silicon chips. These formed synapses along paths the scientists laid down by the same photolithographic techniques used in making standard microchips, paths they then coated with the adhesive proteins that organize nerve cells into networks in the brain. This latest success was the outcome of pioneering work in finding ways to grow neurons — from leeches, snails, rats, monkeys, and other animals — on silicon chips in such a way that electric currents could pass from the chip to individual neurons and between the neurons.
This research left me feeling uneasy — in fact, repelled — but why? The burgeoning field which unites neurobiology, physiology, electrical engineering, computer science, and physics, among other disciplines, is wide-ranging and well established. Much of the work centers on understanding human and animal brains and senses by constructing electronic devices which mimic various biological functions, or making devices for other uses based on principles discovered through neurobiological research. Some research attempts to understand the way neurons process information in order to stimulate breakthroughs in computer processing and performance. Other silicon systems, whose architecture and design are based on neurobiology, seek to make sensory systems that are competitive with human senses, neuron models which actually emulate living neurons, and real-time pattern recognition systems.
Dr. Fromherz believes that the first mass-produced product that will result from his research is likely to be disposable biological sensors made of cells cultured onto an electronic interface, which would be naturally sensitive to environmental substances that endanger living beings. Some in the field hope that silicon systems could eventually become sophisticated enough to replace living creatures even in complex experiments, greatly reducing the number of laboratory animals used in scientific research. Others point to the potential to repair damaged central nervous systems or other organs. Already a joint MIT and Harvard team is developing an artificial retina made of light-sensitive semiconductors, which would interface with existing nerve cells in the patient's eye. Of course, most of this research involves extensive use of animals.
Then there are such futuristic anticipations as artificially grown networks of living neurons used as computers, or implanted electronic hardware interfacing directly with the human brain. In 1998 Philip Yam reported on the Soul Catcher project of British Telecommunications, which seeks "to develop a computer that can be slipped into the brain to augment memory and other cognitive functions." He continued: "Hans Moravec of Carnegie Mellon University and others have argued, somewhat disturbingly, that it should be possible to remove the brain and download its contents into a computer — and with it, one hopes, personality and consciousness." ("Intelligence Considered," Scientific American Quarterly, Nov 1998)
Much of this speculation seems destined to remain just that — and doesn't touch the heart of my concern. Rather, it is the casual use of living creatures as device components and experimental fodder, a use that denies them the innate value conferred by the sacredness of life itself. The attitudes underlying such research are widespread in science and society, and this field will undoubtedly continue to grow. In time its products and procedures will affect the lives of millions of people, directly and indirectly, just as semiconductor and medical technology have. Once such technology is commonplace, few will think about the ethical implications of its development, manufacture, or use.
For the most part we are content to use technologies in blissful ignorance of their underlying realities and costs, since we really care only about the results they give us. After all, what are the lives of a few snails — or any number of snails, or rats, or monkeys, or leeches — compared to the ability to repair or enhance human bodies, boost productivity, or increase our convenience? We are accustomed to using animals and plants — in fact, the entire planet — primarily as means to our own ends, rather than as ends in themselves. Still, questions come to mind: As stewards of the natural world, what is our responsibility, and what are our rights, in relation to the earth's other inhabitants? All kingdoms live off each other, but we can deliberately manipulate nature in a way other terrestrial life forms can't: at what point do we cross the line from symbiosis to destructiveness? Is motive the key, or are some actions wrong regardless of possible material improvements to human life?
Sometimes it seems that human nature plus technical prowess destines us to a "brave new world." And by virtue of our use of technologies and our participation in life today, we each partake of whatever karma the modern world generates — and ignorance is no excuse before that universal law. Perhaps we need to recognize more fully than we have that maintaining our day-to-day lives by means that entail the suffering and degradation of other life forms has consequences for us individually and as a civilization, and for that reason become more informed and concerned about what is involved in the science and technology which surrounds us on every side.