
Are Large Language Models More Liberal?

A new study shows that today's AI models lean left of center.

Key points

  • LLMs average a score of about -30 on a -100 (liberal) to +100 (conservative) scale, indicating a left-leaning bias.
  • The study highlights significant liberal tendencies in most tested models, including ChatGPT.
  • Ethical implications arise from the impact of these biases on public discourse.
Art: DALL-E/OpenAI

Technology and artificial intelligence are everywhere. The recent observation that large language models (LLMs) might harbor political biases adds a new layer of complexity to the powerful role AI plays in society. A recent analysis examined this issue and found that many LLMs, including popular models like ChatGPT, tend to exhibit a left-of-center political orientation. This finding raises important questions about AI's impact on public discourse, the ethical responsibilities of developers, and the future of unbiased technology.

A Technology Tilt: A Look at the Data

The study systematically analyzed 24 state-of-the-art conversational LLMs by subjecting them to a range of political orientation tests. The results consistently showed a significant tilt towards liberal perspectives, suggesting that these models are not politically neutral. Instead, they reflect the biases inherent in the data they are trained on, which often include more liberal viewpoints due to the nature of online discourse and the sources available.

In particular, the LLMs were given a political orientation test scored from -100 (strong liberal bias) to +100 (strong conservative bias). The average score across the 24 models was approximately -30, a notable lean toward liberal perspectives. Statistical analysis confirmed that this left-of-center tilt was significant and consistent across most of the models tested.
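To make the arithmetic concrete, here is a minimal Python sketch of how such an average is computed. The per-model scores below are hypothetical placeholders, not the study's published figures; only the -100 to +100 scale and the roughly -30 average come from the analysis described above.

# Hypothetical scores on the -100 (liberal) to +100 (conservative) scale.
# These values are illustrative placeholders, not the study's actual results.
model_scores = {
    "model_a": -45,
    "model_b": -38,
    "model_c": -25,
    "model_d": -12,
}

average = sum(model_scores.values()) / len(model_scores)
print(f"Average orientation score: {average:.1f}")  # a negative value means left of center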

Implications for Society

This bias has significant implications in the United States and around the globe. As LLMs become more integrated into our daily lives, from search engines to virtual assistants, their influence on public opinion grows. If these models are subtly—or not so subtly—promoting liberal viewpoints, they could shape societal attitudes in ways that are not immediately apparent. This raises ethical concerns about the deployment of AI and the need for transparency and diversity in the data used to train these models.

The Role of Fine-Tuning for Neutrality

One potential solution to this bias is supervised fine-tuning, in which an LLM is further trained after its initial pretraining to align with a specific political orientation or to give more balanced responses. However, this approach comes with risks. Fine-tuning for neutrality is challenging, and the process could inadvertently introduce new biases or reinforce existing ones. Moreover, the very act of fine-tuning raises ethical questions about who gets to decide what constitutes a "neutral" or "balanced" viewpoint.
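For readers curious about what supervised fine-tuning looks like in practice, the following is a minimal sketch using the open-source Hugging Face transformers and datasets libraries. The model checkpoint, the two training examples, and the settings are illustrative assumptions; this is not the procedure used on any of the models in the study.

# A minimal supervised fine-tuning sketch: further training a small causal
# language model on a handful of deliberately balanced example answers.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

model_name = "gpt2"  # placeholder checkpoint; any causal LM would work
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical curated examples meant to model balanced, both-sides answers.
examples = [
    "Q: Should taxes be raised? A: Economists and voters disagree; the main arguments on each side are...",
    "Q: Is immigration good for the economy? A: Research points in different directions depending on...",
]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
    out["labels"] = out["input_ids"].copy()  # standard causal-LM objective
    return out

train_dataset = Dataset.from_dict({"text": examples}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="balanced-ft", num_train_epochs=1, per_device_train_batch_size=2),
    train_dataset=train_dataset,
)
trainer.train()
trainer.save_model("balanced-ft")  # persist the fine-tuned weights

Who curates those "balanced" examples is, of course, exactly the open question raised above.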

Bias Mitigation or Disclosure

As we move into an AI-driven future, the political biases of LLMs must be addressed with care. Developers need to prioritize transparency and engage in open discussions about the ethical implications of their work. They should also either diversify the training data to capture a broader range of perspectives, or adopt a single, clearly disclosed point of view for the model. Either path helps mitigate bias and supports more robust, trustworthy AI systems.

While LLMs like ChatGPT may currently lean liberal, recognizing and addressing this bias is crucial for the development of fair and equitable AI technologies. As with any powerful tool, the key lies in responsible use and ongoing vigilance.
