
Verified by Psychology Today


ChatGPT Makes Us Human

The AI chatbot’s limitations allow us to appreciate our own.

Key points

  • The smartest, most human-like AI-powered chatbot to date, ChatGPT, is coming for the creative class.
  • Yet ChatGPT will serve to make us more aware of our unique and irreplaceable human qualities.
  • Thinking is hard, critical thinking even harder, and ChatGPT isn’t good at either.
  • ChatGPT is incapable of forming relationships: to itself, others, the truth, the future.
Image created by AI / Midjourney

The internet is awash in ChatGPT stories at the moment. Just a few days ago, U.S. Congressman Jake Auchincloss gave a ChatGPT-created speech; media site BuzzFeed announced that it would use ChatGPT to create content; and ChatGPT passed an exam at the Wharton Business School. Colleges and universities, facing a wave of ChatGPT-enabled plagiarism, have been forced to respond with new policies and teaching protocols.

Without doubt, ChatGPT is impressive and arguably the smartest, occasionally even humorous, most human-like AI-powered chatbot to date. And frankly, it was about time for AI to have its big moment. DeepMind's AlphaGo defeating Go world champion Lee Sedol in 2016 perhaps came closest. But that remained an abstract proposition, whereas ChatGPT, as a practical tool, garnered one million users in just five days.

Some compare the advent of ChatGPT to the impact of the iPhone, but that doesn’t do it justice. ChatGPT, and the generative AI that will follow and outsmart it, is more disruptive.

And yet, that doesn’t necessarily mean the apocalypse is upon us. On the contrary, ChatGPT, I would argue, might serve to make us more aware of our unique and irreplaceable human qualities. It is the AI’s very limitations that will make us appreciate our own.

“The king of pastiche”: no suffering, no transcendence

Take the creative act, and writing in particular.

“A writer is someone for whom writing is more difficult than for other people,” the novelist Thomas Mann once remarked. The searching for the right word, the correct tone; the discomfort that lurks between the lines of knowing too much and saying too little, and saying too much and knowing too little; the horror vacui of a blank page, or in its chronic form, writer’s block—all of this is foreign to ChatGPT.

With ChatGPT, these struggles are so yesterday. If you want it to, the AI-powered chatbot always produces something because it has the whole world of online data to draw from, including the conversations it has just had with you. It is, as the AI scholar and author Gary Marcus puts it, “the king of pastiche.” Like us, it has the data. But unlike us, it lacks the self-awareness to struggle with it. It has the intelligence but not the consciousness. It can’t really think.

Thinking is hard, critical thinking even harder, and ChatGPT isn’t good at either. It just rehashes what has already been said; it regurgitates; it is one big recycling machine. And ChatGPT doesn’t alter one fundamental truth underlying any AI and future-of-work discussion: the only two bullet-proof professions of the future are philosopher and artist. Neither can afford to automate its work, because that work is, in essence, contrarian thinking, counterintuitive imagination.

Nick Cave nailed it in his response to a fan who had prompted ChatGPT to create song lyrics in the songwriter’s inimitable style. Cave’s verdict? “This song sucks.” He explained why:

“ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.”

Writing, as a transcendent act, will remain inherently human. Now you could argue, of course, that we humans are vast landscapes of data, too, and that our writing is a pastiche, a remix of what has already been written as well. The difference, though, lies in the process: ChatGPT is algebra; human writing, at its best, is alchemy. It adds a layer rather than just adding up the inputs. It has soul, and because of that, can touch other souls. ChatGPT can serve as a writing companion, but it will never write like a human.

“An author without ethics”: not lying, just bullshitting

The other obvious limitation of ChatGPT is ethics. It has no sense of right or wrong, no ethical awareness or moral compass. It doesn’t take a stance, even when it is prompted to do so. That, in and of itself, raises ethical concerns. Jessica Apotheker, partner, managing director, and global CMO of the Boston Consulting Group, told me: “If you ask ChatGPT, ‘What is the ideal shape of a female body?’ it will answer with a neutral disclaimer—clearly an overwrite, and not what the algorithm would have yielded.” She insists that we need to know when an overwrite occurs and expects AI checking the accuracy of AI to become a blossoming field (GPTZero, designed to detect text written by AI, is one early example).

Furthermore, there is the issue of truth. Moral philosopher Harry Frankfurt, in his seminal book On Bullshit, contends: “The essence of bullshit is not that it is false but that it is phony.” In other words, the difference between a bullshitter and a liar is that the liar knows what the truth is but decides to take the opposite direction; a bullshitter, however, has no regard for the truth at all.

Gary Marcus, in a podcast interview with New York Times columnist Ezra Klein, applies this distinction to ChatGPT and other generative AI, which he contends lacks any “conception of truth.” Marcus believes that we have reached a critical point when “the price of bullshit reaches zero and people who want to spread misinformation, either politically or maybe just to make a buck, start doing that so prolifically that we can’t tell the difference anymore in what we see between truth and bullshit.”

Not only is ChatGPT “bullshitting,” it is also not accountable. “If you’re offended by AI-generated content, who should you blame?” wonders tech journalist John Edwards, concluding that ChatGPT is “an author without ethics.”

Masters of relationships

This is why AI literacy is critical. The so-called AIQ is an extension of our IQ, a measure of human intelligence as it relates to AI: our overall knowledge of AI tools and practices, our mastery of prompts, and our ethical awareness.

ChatGPT is going to change everything—and nothing. Humans will continue to stay in the loop. Ingenuity, imagination, ethics, suffering, transgression, a striving for transcendence, and the ability to lie (and not just to bullshit)—these will all remain exclusive human domains.

ChatGPT can only see the world as it presents itself through data, but it fails to see the world as it could be. It is incapable of forming relationships: to itself, others, the truth, the future. We humans, however, define ourselves through relationships. Even if they may ultimately fail, we can’t help but enter them, for they give us the illusion, and the beauty and terror, of a blank page.

Shaping and cultivating our relationship to AI will (have to) be our masterpiece.

More from Tim Leberecht