"This species could have been great, and now everybody has settled for sneakers with lights in them."
— George Carlin
"Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none of it has tried to contact us."
— Bill Watterson, Calvin and Hobbes
We saw in my last blog, “Lord of the Flies,” that religion primarily supplies pain and suffering for all the inhabitants of planet Earth. And all means all. The burning of the rain forests in Brazil kills trillions of animals and plants, and such burnings are due principally to a “mandate” — bolstered by religion — that the Earth is ours to do with as we (humans) please. And that's just one thing humans do.
We’ve also seen in this blog that religion is due to our evolved DNA. We humans are hardwired by our evolution to be religious.
The unhappy conclusion is that getting rid of religion, which would be a good thing, is likely not in the cards.
So what are we to do? Of course, most humans don’t want to do anything because they see no need to do anything. This is religion at work. But there is something that we can do, and we are doing it: Build our replacements.
We humans have gotten better at being moral: over the last several thousand years, we have become nicer and kinder. But, as we’ve seen, religion prevents us from becoming moral enough. We are very unlikely to reach the heights of morality required for the flourishing of all life on planet Earth. Just as we are epistemically bounded, we also seem to be morally bounded. This fact, coupled with the fact that we can build machines that surpass us in various capacities and the fact that artificial intelligence is making progress, entails that we should build, or engineer, our replacements and then usher in our own extinction. In fact, the moral environment of the Earth wrought by humans, together with what current science tells us of morality, human psychology, human biology, and intelligent machines, morally requires us to build our own replacements and then exit stage left. This claim might seem outrageous, but it is in fact a conclusion born of good, old-fashioned rationality and decency.
Here are three mechanisms evolution rigged up in us (to speak anthropomorphically) to further our species' chances of continuing: a strong preference for our kin, a strong preference for our group or tribe (not all of whom need be related), and, of course, a strong preference for mating. Individuals of all species (including plants) have these preferences, but we are the only ones who both have them and know that undertaking certain behaviors to satisfy them is wrong. The first induces us to engage in some forms of child abuse; the second induces us to be racists; and the third induces us (and many other species) to rape. (All the details can be found in my papers “After the humans are gone,” Philosophy Now, v. 61, May/June 2007, 16-19, and “Homo sapiens 2.0,” in M. Anderson and S. Anderson, eds., Machine Ethics, Cambridge University Press.)
So we are morally bounded. Yet there are things about us worth preserving: art and science, to name two. Some might think that these good parts of humanity justify our continued existence. That conclusion was no doubt warranted before human-level A.I. became a real possibility. But it no longer is. If we can implement in machines the better angels of our nature, then we have a moral duty to do so, and then we should exit, stage left.
So let's build a race of machines — Homo sapiens 2.0 — that implement only what is good about humanity, that do not feel any evolutionary tug to commit the evils we do, and that can let the rest of the world live. And then let us — the humans — exit the stage, leaving behind a planet populated with machines who, while unlikely to be perfect angels, will nevertheless be a vast improvement over us.
What are the prospects for building such a race of machines? We know this much: it has to be possible since we are such machines. We are quasi-moral, meat machines with human-level intelligence.
Building our replacements involves two issues: 1) building machines with human-level intelligence, and 2) building moral machines. Kant can be interpreted as claiming that building the former will give us the latter. But most philosophers now regard this as incorrect. Emotional concern for others is required, and this doesn't flow from purely rational, human-level intelligence — pretty obviously, since there are a lot of intelligent evil people out there. So we have to somehow implement within our replacements the idea that all other Earthlings matter. How do we do this?
I don’t know of any answer to this question. But there is no reason to assume that it cannot be answered; we aren’t doomed to building sneakers with lights in them nor Terminators. And when we do answer this question, we’ll be able to exit, secure in the knowledge that we did our very best . . . by building our replacements.