

AI's Paradigm Shift to Foundation Models

New Stanford study evaluates benefits and risks of AI foundation models.

Source: Rooy33/Pixabay

Artificial intelligence (AI) is undergoing a paradigm shift toward foundation models such as GPT-3, BERT, Codex, CLIP, DALL-E, and others.

In AI, foundation models are machine learning models trained on broad data at massive scale that can be adapted to perform a wide variety of downstream tasks.

“Transfer learning is what makes foundation models possible, but scale is what makes them powerful,” the researchers wrote.
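
For readers who want to see the quoted distinction in code, here is a minimal sketch of transfer learning in Python, using PyTorch. Everything here is illustrative: the tiny `encoder` is a hypothetical stand-in for a large pretrained model whose weights would normally be loaded from a checkpoint, not randomly initialized.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a large pretrained encoder. In real transfer
# learning, these weights would be loaded from a pretraining checkpoint.
encoder = nn.Sequential(
    nn.Linear(768, 768),
    nn.ReLU(),
    nn.Linear(768, 768),
)

# Freeze the pretrained knowledge: the encoder's weights are not updated.
for param in encoder.parameters():
    param.requires_grad = False

# Only a small task-specific "head" is trained on the new task,
# e.g., a 3-class classifier built on top of the general encoder.
head = nn.Linear(768, 3)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)

# One adaptation step on a dummy labeled batch for the downstream task.
features = torch.randn(8, 768)       # batch of 8 input representations
labels = torch.randint(0, 3, (8,))   # their task labels
logits = head(encoder(features))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
print(f"adaptation loss: {loss.item():.3f}")
```

Because only the small head learns, one pretrained model can be cheaply adapted to many tasks; scale then determines how capable those adaptations are.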

A new study by Stanford researchers evaluates the opportunities, risks, and societal implications of AI foundation models.

“Foundation models are scientifically interesting due to their impressive performance and capabilities, but what makes them critical to study is the fact that they are quickly being integrated into real-world deployments of AI systems with far-reaching consequences on people,” reported the study authors from the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the newly launched Center for Research on Foundation Models (CRFM) at Stanford University.

The Stanford Institute for Human-Centered Artificial Intelligence was founded in 2019 to advance AI research, education, policy, and practice to improve the human condition. CRFM is an interdisciplinary initiative launched from HAI, with over 175 researchers spanning more than ten Stanford departments, whose goal is to advance the study, development, and deployment of AI foundation models.

“In this report, we have endeavored to comprehensively discuss many of the most critical aspects of foundation models, ranging from their technical foundations to their societal consequences,” wrote the Stanford researchers. “In this way, we acknowledge the unusual approach taken: we have attempted to clarify the nature of a paradigm that may only have just begun, rather than waiting for more to unfold or the dust to settle.”

Fueled by the rise of AI deep learning and self-supervised learning, foundation models have dramatically increased in scale and scope in recent years. The researchers characterize the history of AI as one of increasing emergence and homogenization.

The Stanford researchers posit that foundation models have also led to “surprising emergence which results from scale.” They point out that the 175 billion parameters in GPT-3 represent a significant increase over the 1.5 billion parameters in GPT-2. GPT-3 enables in-context learning, in which the language model can be adapted to a task simply by supplying a natural language description of the task as a prompt. This in-context learning is considered an emergent property.
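
To illustrate, the sketch below assembles the kind of few-shot prompt that drives in-context learning; the English-to-French task and examples are the illustrative kind popularized by the GPT-3 paper, not taken from the Stanford report.

```python
# In-context learning: the task is specified entirely in the prompt,
# with no gradient updates to the model's weights.
task_description = "Translate English to French."
examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]
query = "peppermint"

prompt = task_description + "\n"
for english, french in examples:
    prompt += f"{english} => {french}\n"
prompt += f"{query} =>"  # the model is expected to continue with the answer

print(prompt)
# A sufficiently large language model such as GPT-3 would complete this
# prompt with the French translation at inference time, which is why
# in-context learning is described as an emergent property of scale.
```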

“With the introduction of machine learning, how a task is performed emerges (is inferred automatically) from examples; with deep learning, the high-level features used for prediction emerge; and with foundation models, even advanced functionalities such as in-context learning emerge,” the researchers reported. “At the same time, machine learning homogenizes learning algorithms (e.g., logistic regression), deep learning homogenizes model architectures (e.g., Convolutional Neural Networks), and foundation models homogenizes the model itself (e.g., GPT-3).”

According to the Stanford study authors, foundation models have resulted in unprecedented levels of homogenization.

“Almost all state-of-the-art NLP models are now adapted from one of a few foundation models, such as BERT, RoBERTa, BART, T5, etc.,” the researchers wrote.
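
In practice, that adaptation can take only a few lines of code. The sketch below uses the open-source Hugging Face `transformers` library (an assumption for illustration; it is not part of the Stanford study) to load BERT with a fresh classification head for a hypothetical two-class task:

```python
# Adapting a single foundation model (BERT) to a downstream task.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",  # the shared, pretrained foundation model
    num_labels=2,         # task-specific head, e.g., sentiment polarity
)

inputs = tokenizer("Foundation models are powerful.", return_tensors="pt")
logits = model(**inputs).logits  # head is untrained; fine-tuning would follow
print(logits.shape)              # torch.Size([1, 2])
```

Every team that starts from the same checkpoint inherits its strengths and its defects alike, which is the homogenization the researchers describe.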

The Stanford researchers warn that the field still lacks a clear understanding of how foundation models do and do not work, and, because their capabilities are emergent, of the full extent of their potential consequences. They call for interdisciplinary collaboration to address this knowledge gap.

Copyright © 2021 Cami Rosso All rights reserved.
