Christina Bieber Lake
Humanity+?
Most people are blissfully unaware of the World Transhumanist Association, even now that it goes by its catchy new name, Humanity+. Dedicated to "elevating the human condition," the organization's website provides a mind-blowing look at our possible posthuman future. In H+ magazine, you can read about new developments in cybersex, current plans to keep the human brain from degenerating, and the advent of NEMS, nanoelectromechanical systems, through which scientists hope to build machines smaller than red blood cells in order to improve sensory and motor capabilities. You can also learn the top ten transhumanist pickup lines, including "Wanna get our bodies frozen together so we can be immortal like ice ice baby?" The pickup lines notwithstanding, transhumanists are serious folks who take a no-holds-barred approach to biotechnology. Max More provides a clear statement of their goals. The transhumanist seeks "the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values."
The guiding principles and values have always been the rub. To study transhumanist literature is to recognize quickly that the highest value held by transhumanism is the promise of technology itself. But lest we think these outliers are foreign to us, it is sobering to recognize that transhumanism is really only a pumped-up (enhanced!) version of what Albert Borgmann astutely calls the "persistent glamour" of technology: the tendency of advanced technological societies to turn to technology first to solve problems. The dreamer says, "I want to lasso the moon!" To which the glamour of technology replies, "There's an app for that!" Writ large, the appeal is nearly irresistible.
In The Techno-Human Condition, Arizona State University professors Braden Allenby and Daniel Sarewitz argue that the only certain thing about our posthuman future is that nothing is certain. Promised the moon, we may get a second sun. Or no sun. Or no moon. Or, more likely, altered tides, tsunamis, destruction of coastal cities, and subsequent redevelopment of inland cities (viva Detroit!). Promised an enhanced brain, we may get that, but it will not work exactly the way we anticipated. It is more than just the possibility of unintended consequences; it is that all consequences will be unintended. No matter what, complexity will prevail. Like Columbus, we will set out to find India, but what we find instead will be "new, curious, and unexpected." And so today, rather than haggling over the future, we should "question the grand frameworks of our time" that lead us to think we control it.
Humans have always used technology, so arguments that see technology as the ultimate savior or ultimate destroyer of humanity are worse than non-starters; they misdirect the conversation, preventing us from asking the most important questions. And so Allenby and Sarewitz propose a new taxonomy for discussing how technology functions. Level I concerns the immediate effectiveness of a technology, how it is used to perform what is desired: an airplane gets us more quickly to our destination. Level II describes the systemic complexity that a given technology is part of, encompassing emergent behaviors that cannot be predicted. Here the authors explain how airline technology is enmeshed in a system of schedules that affects other transportation systems, and how mass intercontinental transit contributes to the unforeseen spread of diseases like SARS. Level III describes how behaviors "co-evolve" with technologies—for instance, the way that airline travel has helped to engender mass-market consumer capitalism, consumer credit, and so on.
Though they admit that there are no clear boundaries between these levels, Allenby and Sarewitz rightly insist that anyone writing about technology should be aware of their differences. Writers who think about only one of these levels will either ignore important contributions of technology or destructively extrapolate from those successes. One example of a successful (bounded) Level I technology is the development of vaccines, especially those for polio and smallpox. Vaccines have solved the problem of the spread of these diseases far better than any competing method. But that does not give us leave to extend the reasoning to other technologies, which is what transhumanists tend to do. Transhumanist rhetoric is incoherent because Level I solutions "cannot be plausibly extended to imply that those technologies represent solutions to more complex social and cultural phenomena. It's a category mistake." An example of this kind of mistaken extrapolation would be the argument that reprogenetic technologies (Gattaca-type selection of genetic traits for children) will eventually solve the problem of discrimination by eliminating racial differences. This "solution" would have other, unpredictable outcomes, not to mention that it would do nothing to get at the underlying issues. Thus, the authors insist, we must "muddle through" these problems by way of "integrated inquiry," with full awareness of our tendency to reason in this faulty way.
Both sides of the technology debate, the authors argue, are enamored of Enlightenment assumptions about how much control and power we actually have as individuals and as a society. Transhumanists tend to think that enhancing individual brains (for example) will necessarily eventuate in a better society, but this is decidedly not so, as enhancement by individuals "does not aggregate well." Opponents, Allenby and Sarewitz insist, tend to argue from the same set of assumptions. Both camps are fatally misguided. The ever-shifting evolution of technology will be driven by "economic efficiency and competition for military and cultural dominance, not quality of life or 'better humanness,' even if we knew (or could agree on) what the latter was." In my opinion, this point cannot be made often enough.
Historically, assuming individual choice and control has led to blindness about Level III complexities. A stunning example is how the 19th-century development of the railroad changed the economy, our sense of time, and social structures—all in unpredictable ways. We should heed the example well, for today a cluster of new technologies is changing us just as quickly and substantially: nanotechnology, biotechnology, information and communication technology (ICT), and applied cognitive science. The authors discuss examples ranging from the military's development of cyber-insects (both real bugs guided by electronic circuitry and electronic gizmos behaving like bugs) to telepathic helmets and lethal autonomous robots, all to show how quickly our advanced techno-human future is coming, and how radically and unpredictably human life will be redrawn.
Though the approach that Allenby and Sarewitz advocate will be unsatisfying to some, it is unabashedly and refreshingly pragmatic. Rather than lock ourselves into a do-or-die vs. do-and-die debate about technology, we must muddle through the complex scenarios with humility. What we call problems are usually conditions, and conditions require balanced and prolonged attentiveness, not one-size-fits-all answers. Not only are the consequences of enhancement technologies impossible to predict with certainty; the technologies themselves can never resolve our conflicting values about what future would be most desirable.
It is so rare for anyone to argue for humility these days that my ears pricked up. This book helpfully illuminates the dangers of a way of thinking that is not in the habit of asking the right kinds of questions. To my delight, the authors even give a nod to the importance of speculative fiction in asking ourselves what kind of persons we are so bent on becoming. We need humility to recognize how often we don't even know what is best for us. If I were queen, I'd stipulate that every major biotechnological firm emblazon this quotation from The Techno-Human Condition on their walls: "If we don't embrace and understand our incompetence we will never manage our technological prowess."
The only substantially frustrating thing about Allenby and Sarewitz's argument is their tendency to set up straw men when characterizing the debates surrounding enhancement technology. While proponents of transhumanism can readily be shown to hold certain Enlightenment biases toward progress, opponents cannot be so easily categorized. Allenby and Sarewitz make it sound as if the only people arguing for restraint and caution are those who defend a view of the human person as having an inviolate human nature, one that should not be touched by technology at all. This is simply not true. Similarly, they claim that thinkers like Jacques Ellul make arguments that are "all Level II, all the time." This is not true either, and may even be ironically self-defeating. Allenby and Sarewitz seem most to want a lively debate in which Level III questions can be addressed in all their complexity, with an awareness of guiding values, not just outcomes. Yet while they recognize the importance of different perspectives in such a debate, they devalue the contributions of the thinkers who, precisely because of their "coherent worldview" convictions, are trying to raise the larger questions, such as: "Whose values? Which outcomes?"
I understand, and have a lot of sympathy for, the authors' insistence that the "guiding precept for individual authenticity must be that which you believe most deeply, you must distrust most strongly." But this precept cannot be as absolute as they seem to want it to be. If held to by individuals entrusted to make larger ethical choices, it might eventuate in just the kind of capitulation to market forces that Allenby and Sarewitz seem to understand is not the best way forward. In other words, as Alasdair MacIntyre, Stanley Hauerwas, and many others have pointed out, certainty about human progress is not the only failure of the Enlightenment. After we recognize that the values we hold are indeed historical and contingent, the next move is to understand that without some guiding narrative, ethical decisions are incoherent. Understanding this, it seems to me, is also a necessary part of muddling through our techno-human condition.
Christina Bieber Lake is professor of English at Wheaton College. Her latest book, Prophets of the Posthuman: American Fiction, Biotechnology, and the Ethics of Personhood, will be published this fall by the University of Notre Dame Press.
Copyright © 2013 by the author or Christianity Today/Books & Culture magazine.
Skip McKinstry
Certainly, Jacques Ellul was not "all Level II, all the time." His discussion of technique was specifically about the dynamics that served as drivers of the co-evolution of technique and humanity. Perhaps it was more that technique evolves and humanity adapts, but that adaptation is almost always a spiritual dead end. Personally, I have never understood why the machines would "want" to merge with a flawed and fallen race anyway.