
The Future Just Took a Big and Scary Step Forward

Can causality become a victim of technology?

Source: Pixabay

The future took another step forward. But it was stopped right in its tracks by its inventor, OpenAI. According to its website, OpenAI is a non-profit AI research company "discovering and enacting the path to safe artificial general intelligence." Elon Musk is one of its backers.

In a recent blog post, OpenAI showed that its technology, called GPT-2, can craft written passages that mimic the style and content of a given sample. Now think about that for a moment. Artificial intelligence can recreate your literary voice based on a simple writing sample. And that "you" can be just about anyone: John Nosta, William Shakespeare or, I guess, Elon Musk himself. My sense is that the printed word--from emails to proclamations--has always been a genuine reflection of the author. Style and content have been the typographic personality that enhances communication. But that insight might fall victim to technology's intrusion into humanity.

These innovations are not without merit. The utility is significant and, as suggested in the blog post, spans a wide range of applications, including writing assistants, enhanced translation, and better speech recognition. But the story is certainly a bit more complex. The ability to mimic a writing style raises important concerns. From generating fake news to outright impersonation, the potential for misuse is real. It's so concerning that this technology was met with technological brakes. OpenAI will not be fully opening this modern-day Pandora's box but will be leaving the lid slightly ajar for "experimentation."

Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.
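
For the technically curious, "predicting the next word" can be illustrated in a few lines of Python. The sketch below is a minimal illustration only; it assumes the small GPT-2 model that OpenAI did release, loaded through the open-source Hugging Face transformers library, and is not OpenAI's own training code.

    # A minimal sketch: a language model assigns a score to every possible
    # next word, given the words so far. Assumes the small, publicly
    # released GPT-2 model via Hugging Face's "transformers" library
    # (pip install transformers torch).
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The future took another step"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits   # one score per vocabulary token, per position
    next_word_scores = logits[0, -1]      # scores for the word that follows the prompt

    # Print the five most likely continuations.
    top = torch.topk(next_word_scores, k=5)
    for score, token_id in zip(top.values, top.indices):
        print(repr(tokenizer.decode(int(token_id))), float(score))

Run repeatedly over its own output, this same next-word step is what lets the model extend a writing sample in the sample's own style.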

But that's just part of the story.

Just beyond the ability to craft words that accurately mirror specific authors lies the emerging ability to modify videos. Content and video are merging to create a "new reality" that is truly confounding. The words of Nietzsche smashed into the character of John Belushi may be the comic side of this mash-up. But the harsher reality is reality itself. Truth suffers in the context of this technology, and what's left is almost impossible to describe. In physics, there's a notion of non-causal reality: events stop making sense if things can travel faster than the speed of light. Simply put, in a non-causal reality (one with faster-than-light signals), an observer could see an effect precede its cause. And the ensuing question demands attention: what can we believe?
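
For readers who want that physics aside spelled out, the standard special-relativity argument runs as follows (the formula below is the textbook Lorentz transformation, not anything specific to this article):

    % Time interval between two events, as measured by an observer
    % moving at speed v relative to the first:
    \[
    \Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^{2}}\right),
    \qquad
    \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
    \]
    % If a signal covers distance \Delta x in time \Delta t with
    % \Delta x / \Delta t > c (faster than light), then some observer
    % speed v < c makes \Delta t' < 0: that observer records the effect
    % (the signal's arrival) before its cause (the emission).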

Science and philosophy introduced the idea of the causal world. And into our causal world, physics introduced a non-causal reality--kept in check by the speed of light, at least for now. Advances in video editing and manipulation, combined with the language skills of GPT-2, begin to suggest that a new non-causal reality can emerge. Our ability to discern fact from fiction, and even to navigate our reality, may be about to change. Perhaps the better word isn't "change" but "shift"--to a point where empirical reality is just part of a hypothetical realm that extends our multiverse.

OpenAI has taken an inevitable step forward and lifted the lid of technology's new Pandora's box. Opening it took tremendous effort. I suspect their cautious posture of keeping that lid closed--at least against mainstream innovation and exploitation--will prove much harder to maintain.
