

Super Artificial General Intelligence (Super-AGI)

What’s next for virtual mental healthcare?

Key points

  • Artificial intelligence (AI) will soon evolve into super-AGI that will equal or surpass human intelligence.
  • Developers and engineers will create AIs that program and teach themselves, getting exponentially smarter.
  • “Super intelligent” AI has great potential to expand mental healthcare and delivery.
Source: Image by Luke Olson / OpenArt AI, 2024

We’ve all heard about artificial intelligence (AI), perhaps the biggest technological revolution since the internet. AI will affect the economy (jobs), healthcare, and people’s daily lives. With robots and AI already here making a difference, what is technology’s next big step? We are about to see an exponential advance in AI, because once we enable AIs to program themselves, a super-intelligent entity may emerge.

Artificial general intelligence (AGI) is the field of AI research attempting to create software with human-like intelligence. Even newer is the idea that future AGI systems will begin programming themselves: coding, self-prompting, and data mining their own machine learning algorithms to create ever more intelligent AGI. Ultimately, the AGIs will grow themselves into “super intelligent” AGI models that will be smarter than humans. Wow.

Normal human intelligence will seem relatively elementary when compared to these future super-AGI machines. What is worrying is the possibility that the future super-AGIs will become self-aware and conscious. Will these sentient digital entities decide to protect their own existence at the expense of their human creators? Before that happens, consider whether it’s too late to install needed safety rules and guardrails: The question for AGI developers is whether we are now facing an “Oppenheimer moment” where decisions about building super-AGI today will determine humanity’s future.

In Situational Awareness, author Leopold Aschenbrenner makes several predictions. One essential idea is that AGI models will be built to teach and program themselves, with machine learning algorithms orchestrated by the AGIs themselves. “We don’t need to automate everything—just AI research. By 2025/26, machines will outpace many college graduates. By the end of the decade, they ('super-AGIs') will be smarter than you or I.”

In just a few years, super-AGI models will expand their problem-solving and reasoning capacity exponentially, ultimately producing qualitative leaps in intelligence that are orders of magnitude (OOM) beyond what we think of as normal human intelligence today. The virtual assistants we interact with online today will seem like extinct dinosaurs next to future “virtual agents,” which will function more like coworkers than like today’s chatbots. As super-AGI systems self-evolve, they’ll engineer themselves to gain OOMs of intelligence with each successive version. Once we instill the ability to self-teach, Aschenbrenner predicts, AGI machines will reach “superintelligence” beyond that of their human creators by about 2028.

Many see the vast potential of intelligent machines. But there are gloomy prognosticators who foretell that super-AGI poses an existential threat to humanity itself. Which of the two views, utopian or dystopian, is correct? We do not know, and therefore, preparation and caution are advisable. A recent paper in Science addresses the kinds of safeguards AI researchers should consider now. (1)

Despite the cautionary warnings for engineers and AI developers, what is clear is that this technology is approaching, and mental health professionals must acknowledge super-AGI’s practical implications for psychology. A virtual revolution in AI-assisted virtual health care—and especially in mental health—is ongoing, and AI already impacts mental healthcare delivery in many ways. (2)

One scenario will create a future AGI-driven mental health virtual practitioner who can:

  1. Be granted instant access to a patient's full health and history,
  2. Use encyclopedic knowledge of mental health books, articles, and cases worldwide,
  3. Demonstrate a deep understanding of therapy and therapeutic relationships based on knowledge of all psychological theories and interventions, and
  4. Provide empathic statements and clinical suggestions modeled from (potentially) millions of hours of taped sessions with human therapists utilizing best practices for positive results.

Just as autonomous driving AIs learn from the behavior of millions of driver-miles, so too will future super-AGI therapists be trained and informed by worldwide scientific theory and data and potentially infinite learning from observations of human-delivered therapy. Even better, since it’s an AGI, this expert personal mental healthcare provider will be instantly available to anyone, on-call 24/7, essentially for free. In this way, super-AGI therapists may help improve mental health delivery for the social good.

These advances will happen so long as we can anticipate and prevent ethical problems. Humans are imperfect, and thus, our creations may reflect our biases or mirror our worst instincts. Given the anticipated explosive rate of growth, we should continue to develop AGI, as we would with any new clinical device, to ensure safety while carefully weighing the risks and benefits to humanity.

References

(1) Bengio Y, et al. Managing extreme AI risks amid rapid progress. Science. 2024;384:842-845. doi: 10.1126/science.adn0117.

(2) Thakkar A, Gupta A, De Sousa A. Artificial intelligence in positive mental health: a narrative review. Front Digit Health. 2024 Mar 18;6:1280235. doi: 10.3389/fdgth.2024.1280235.

More from Jeffrey N Pickens Ph.D.