
Verified by Psychology Today

Artificial Intelligence

On Superfluous People

And what to do with them.

Key points

  • Yuval Noah Harari's remarks about the future of people in the Age of AI are critiqued.
  • Considering AI as making people superfluous takes us down a dangerous path.
  • There are more human values beyond their productive value.
  • AI might simply be another version of "The Sorcerer’s Apprentice" story.

As if purposely provoking, like a novel kind of adversary, author Yuval Noah Harari has said that "The most important question in 21st-century economics may well be: What should we do with all the superfluous people once we have highly intelligent, non-conscious algorithms that can do almost everything better than humans?" The implication is immediate: if humans aren't economically viable, they are, ipso facto, superfluous. Harari brings two elements into play here: (1) we are homo economicus (and nothing more), and (2) algorithms rule. The two are intertwined for Harari: the better the algorithmic performance, the greater the economic value.

The idea that this might not be the only way to think about value seems to pass Harari by. We must ask, for example, to what end are we aiming with this improved performance? Who is it for if not humanity? The problem here is not with humans, but with a purely economically driven take on the world. This is, unfortunately, a rather natural consequence of a purely materialist view of the world. If we are nothing but matter in motion, then Harari's logic is perhaps tenable. If we are simply the proletariat with nothing but our labour to sell, then we are seemingly doomed by AI's impressive advances.

Harari does consider some non-economic values, but finds that humans might well be just as superfluous there:

“Art is often said to provide us with our ultimate (and uniquely human) sanctuary. In a world where computers have replaced doctors, drivers, teachers and even landlords, would everyone become an artist? Yet it is hard to see why artistic creation would be safe from the algorithms. According to the life sciences, art is not the product of some enchanted spirit or metaphysical soul, but rather of organic algorithms recognizing mathematical patterns. If so, there is no reason why non-organic algorithms couldn’t master it.”

We see many problems here, which now come from (2): algorithms rule.

Oh, Noah.

Firstly, the programs we have thus far are little more than "stochastic parrots". They work from preexisting materials only. They do not create new things; they rearrange and make guesses from prompts or seeds provided by real human creations. We might get a fugue that mimics Bach, but we will not get a Bach. The result would be a quick stagnation of looping through the old masters. Here, as Solomon once said, there would be no new thing under the sun; only here, all novelty is but permutation.

Secondly, I know of no life scientist (and I know many) who would go so far as to say that art is nothing but "organic algorithms recognizing mathematical patterns"—I'm not sure I've heard a more absurd claim about art. Some art might well be along these lines, but to say that art as a whole is of this form bespeaks an impoverished standpoint.

Thirdly, Harari, even here, is focused on the need to produce things in order to have value. It is still an economic criterion, only switched to the production of artistic creations. This is the most worrying element of his viewpoint, which we might laugh off were he not so influential in guiding the minds of others who themselves wield significant power on the global stage.

This much would be bad enough, but Harari goes further down the line of what is beginning to look like good old-fashioned eugenics:

“In the 21st century, we might witness the creation of a massive new unworking class: people devoid of any economic, political or even artistic value, who contribute nothing to the prosperity, power and glory of society. This 'useless class' will not merely be unemployed—it will be unemployable.”

Oh, Noah. Surely this is where he stops? Surely he could go no lower than this remark, which removes all inherent dignity from human beings, ignores human emotions, and converts them into human doings or human nothings? Nope, he goes on:

“The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms.”

Oh, Noah!

The crucial problem is recognising the divine in any human. Maybe, who knows, AI systems will pick up some of that divine spark, in which case they will deserve the same kind of respect as humans and other life-forms do. Even if they have very limited consciousness, they will, in my opinion, still deserve respect, as animals and plants do. Part of this respect would be not to demand that they be producers lest they not be allowed to exist. That is the road to a dangerous form of eugenics. It is promoting conditional love and respect in an era when the opposite is sorely needed.

But, being more charitable, what can we take away from this? I think it is that it can be useful to view AI as a kind of modern parable, one designed to show what it is like to have your creations try to overthrow you through problematic programming and a lack of deeper understanding. This is simply Goethe's poem "The Sorcerer's Apprentice", or could be if we continue down Harari's path. If we start to build algorithms that encode Harari's productive definition of value, then we are, indeed, doomed if they get out of our control. If we also communicate this idea of produce or perish to the public, and to future generations, then humanity will collapse. When that happens, the only beings in the cosmos, so far as we know, that are capable of generating meaning will be gone. We must consider what, if anything, will remain of the cosmos at that point. Bits of data only mean what we make them mean.

More from Dean Rickles Ph.D.