What Is To Be Done
Why today's policies regarding human enhancement don't work
Posted May 23, 2011
Events such as the dramatic action against Osama Bin Laden can make us at least temporarily aware of some of the emerging technologies that are ever more rapidly changing the world around us: psychopharmaceuticals; robots with rat-brain controllers; cyborg insects and small robotic platforms for military surveillance and other missions; augmented cognition on the battlefield and, less esoterically, in many automobiles; telepathic helmet technology; and so forth. Many such technologies, directly or indirectly, affect not just what it means to be human but increasingly turn the human body into a design space in unprecedented ways. This makes it important to understand the state of the policy art, so that we can be reassured that we and our society have reasonable and ethical ways forward.
Ah . . . no. We have looked at the literature on human enhancement and transhumanism, and much of it is pretty superficial. In broad terms, people tend to take one of two positions. One group, the techno-optimists, adopts a fairly libertarian, free-market perspective: one should be free to enhance oneself as one chooses, with the government ensuring that adequate information about risks and benefits is available, but not taking strong regulatory positions. The second group, the techno-pessimists, views human enhancement - or, in some cases, even medical advances - as dangerous to human "dignity," and argues that a "yuck factor" will prevent humans from changing what Nature (or God) has created.
Even though proponents of each position can be far more subtle than this simple summary suggests, both face serious and fundamental problems. The techno-optimist position, for example, assumes that enhancement technologies such as psychopharmaceuticals operate only at the individual level. That may be true if only a few of the people taking the SAT with me have used off-label ADHD drugs such as Adderall or Ritalin to enhance their test performance. If such use becomes common, however, a new level of performance is established as normal for everyone, putting significant pressure on me to take the appropriate pharmaceuticals just to stay even. If many professional athletes use enhancements, as is increasingly the case, those with unenhanced skills become uncompetitive. Moreover, the subtle suggestion that enhancing particular aspects of one's cognition or performance makes one a better person - and society better because it's composed of better people - should be treated with caution: enhancing Pol Pot by enabling him to stay awake longer, for example, would not have been a good thing by any metric. The "invisible hand" may work in economics; it doesn't work with enhancement technologies.
But techno-pessimism is no more helpful. All the available data, from experience in sports to expenditures on cosmetic surgery, indicate that, given the opportunity, people will enhance. As enhancements become more central - shifting from changing appearance to enhancing cognition, quality of life, and longevity - the desire to enhance will, if anything, strengthen. So in reality, arguing for human "dignity" becomes an argument for authoritarian limitations on human enhancement, because that's what it would take to stop it. And, of course, that argument will succeed only in cultures that share your underlying ideology or religion: just because a dominant minority in one country feels that, say, stem cell research is inappropriate doesn't mean people in other countries are going to agree. In short, major technology systems may shift, but they won't be stopped.
In reality, the techno-optimist and techno-pessimist positions share the same weakness: they represent an attempt to keep applying early Enlightenment modes of thought in situations where they don't work because things have gotten so complex. There's an irony here: we've got these levels of complexity in our economies and societies precisely because the Enlightenment modes of thought - the reductionism of the scientific method, the applied rationality of Enlightenment approaches to physical, psychological, and social realities - have been so productive and powerful. But now they've resulted in levels of complexity that applied rationality can't manage.
This does not mean, however, that there are no ways to analyze and manage the techno-human condition more effectively. Some organizations, such as leading global firms and successful militaries, have developed tools - structured games, scenario exercises - to help them grapple with precisely these levels of complexity. What it does mean is that the routine application of old policy approaches to new challenges in the realm of emerging technologies is problematic, and increasingly dysfunctional.