Artificial Intelligence
The Truth and Consequences of A.I.
A.I. can collect and deliver data, but can it tell truth from a lie?
Posted February 19, 2024 | Reviewed by Tyler Woods
Key points
- A.I. is fast and fantastic, but it has problems.
- A.I. can't determine what is true or false in the data it collects.
- A.I. is fast, but is it creative? Does it really know all the answers?
Our six grandchildren, ages 1-10, had been outside playing hard all day, having fun. Dinner was over. The kitchen was cleaned. Baths were done. It was now story time.
The cousins crawled onto the couch and surrounded our oldest son, Neil. He pulled out his iPhone, looked at his eager crowd, selected a storyline, and then plugged their names and characteristics into ChatGPT.
In less than a minute, a full-blown adventure story was created with the cousins as heroes. They were spellbound.
The stories Neil read them were spun with plot twists reminiscent of a half dozen YA writers.
It was a parlor trick. A delightful one, but a trick nonetheless.
It was, for my money, the “age of innocence” of A.I.
Truth or Consequences
Despite the dazzling stories Neil read to them that night, he later commented that there were some real-life problems with the system. To illustrate, he asked ChatGPT to create my professional resume.
Within seconds, my life appeared, albeit more fantastical than the truth. It had me graduating from schools I had never attended. It gave me a sterling array of awards I never even knew existed, added a few more books to my list of accomplishments, and embellished a few other things for good measure.
Was there any truth in it? A smidge here and there, in fact, enough to make me somewhat recognizable and plausible. However, much of it was not true. So not true, it scared me.
What if this embellished ChatGPT resume found its way to the internet?
It was easy to see that the lack of truth in my A.I. resume could have some rather embarrassing, perhaps even legal, consequences.
That’s when I began to worry about the dark power of A.I.
Coming to Grips With A.I.
Why did it fabricate what I had accomplished?
A.I. can rapidly collect and present data, but it doesn’t determine what is true and what is false. It merely collects and presents, without questioning the material it finds.
So, where did ChatGPT get the false data? Perhaps from someone who had a similar name. Perhaps from some article about me or my work that reported a detail incorrectly. Perhaps it engaged in a flight of fancy and thought I should have done better by now than I have, so it gave me a little extra padding of accomplishments. I don’t know.
I just know that it got it wrong, and getting it wrong is a real-life problem.
Do I trust ChatGPT and A.I. to gather and present the whole truth and nothing but the truth?
No, I don’t.
I’m not alone in my apprehension about this new “thing” out there. Pick up any magazine or newspaper and you’ll find yet another article about A.I.: its power and its problems.
Even the Vatican now has its own go-to A.I. ethicist: Father Paolo Benanti. Pope Francis has engaged him to help protect the vulnerable from the coming technological storm that is A.I. Father Benanti, in addition to advising the Pope, also teaches a moral theology and ethics course titled "The Fall of Babel: The Challenges of Digital, Social Networks and Artificial Intelligence."
I’m not a theologian or a moral ethicist, but my false resume has made me cautious about using A.I. in my work.
I just received an email announcing a new app for my iPhone that can unleash the power of A.I. to lighten my workload.
The fine print promised that this easy-to-use app, fueled by the power of A.I., can search and collect data on any topic I might need for my work.
It can search, but can it think? Can this new phone app determine what is true and what is false? Can it create new ideas of its own, or just collect someone else’s ideas and present them as original research?
Searching the internet for other people’s ideas and bundling them, via an algorithm, as original research is one of those truly grey areas of intellectual property theft.
The best part about being human is the ability to reflect and have original thoughts. A.I., at this point, can't even tell if something is true or false. It can only give you someone else’s ideas.
In trying to gather background information or think through a problem or a storyline, you might gather information from books you have read, people you’ve talked to, places you've been, and even from searching the internet or using A.I. All of this gathered information becomes a starting point for you to create something original, not an end product.
What A.I. can quickly hand you should never be seen as the whole story, any more than the stories it wove for our grandchildren were.
Truth: Using A.I. to do all the work for you is just copying what others have done.
I believe it was once called plagiarism.
References
The Friar Who Became the Vatican's Oracle on A.I. The New York Times, February 10, 2024.