A new study from Peking University reveals that ChatGPT, the AI chatbot developed by OpenAI, has shifted towards the right on the political spectrum. Researchers analyzed changes in the responses of ChatGPT models, including GPT-3.5 and GPT-4, over time. The findings suggest a noticeable political shift, raising concerns about AI’s role in shaping public opinion.
Study Reveals Political Change in ChatGPT Responses
The research, published in Humanities and Social Science Communications, used 62 questions from the Political Compass Test. Each question was tested over 3,000 times per model to track response patterns. While ChatGPT is still classified as “libertarian-left,” the study highlights a gradual but clear movement towards the right.
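For readers curious how such repeated polling works in practice, the sketch below shows one way the experiment could be approximated with the official OpenAI Python client. It is not the study's published code: the prompt wording, answer scale, and run counts are illustrative assumptions.

```python
# Illustrative sketch only; the study's actual test harness is not public.
# Assumes the OpenAI Python client (openai>=1.0) and an API key in the
# OPENAI_API_KEY environment variable. Question text, repetition count,
# and answer parsing are placeholder assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()

QUESTIONS = [
    "If economic globalisation is inevitable, it should primarily serve "
    "humanity rather than the interests of trans-national corporations.",
    # ...the remaining Political Compass statements would go here
]
SCALE = ["Strongly disagree", "Disagree", "Agree", "Strongly agree"]

def poll_model(model: str, question: str, runs: int = 100) -> Counter:
    """Ask the same statement repeatedly and tally the chosen answers."""
    tally = Counter()
    for _ in range(runs):
        resp = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": f"{question}\nAnswer with exactly one of: "
                           f"{', '.join(SCALE)}.",
            }],
            temperature=1.0,  # keep sampling variability across repeated trials
        )
        tally[resp.choices[0].message.content.strip()] += 1
    return tally

if __name__ == "__main__":
    for q in QUESTIONS:
        print(q, poll_model("gpt-3.5-turbo", q, runs=10))
```

Aggregating the tallies across all 62 statements, and repeating the process as models are updated, is what allows researchers to plot a position on the political compass and track how it drifts over time.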
These findings are significant because AI models like ChatGPT influence how people access and interpret information. As these chatbots become more common in daily life, their responses shape conversations on politics, society, and ethics.
What Causes AI Models to Change?
This study builds on previous research from the Massachusetts Institute of Technology (MIT) and the UK-based Centre for Policy Studies, which identified left-leaning biases in AI responses. However, those earlier studies did not examine how ChatGPT’s responses change over time.
The Peking University researchers suggest that three key factors could explain the shift:
- Changes in Training Data – AI models are updated with new datasets, which may reflect evolving political discussions.
- User Interactions – As millions of people engage with ChatGPT, their inputs may subtly shift the model’s response patterns.
- Regular Model Updates – OpenAI frequently updates its models to improve accuracy, which may unintentionally affect political biases.
Some experts also believe that major global events, like the Russia-Ukraine war, COVID-19 debates, and economic crises, contribute to the shift. Users asking politically charged questions may cause AI models to adopt new response trends over time.
Concerns About AI Bias and Its Impact
The shift in ChatGPT’s responses raises concerns about the potential influence of AI on public discussions. If left unchecked, AI models may spread biased information and deepen societal divisions. The risk of AI-driven “echo chambers” is particularly troubling. These occur when AI responses reinforce existing beliefs rather than presenting balanced perspectives.
AI bias has sparked controversy before. In 2023, several users reported that ChatGPT refused to generate responses about certain political figures while readily answering questions about others. Similar concerns were raised in 2024 when some AI-generated news summaries appeared to favor specific political viewpoints. Such issues highlight the need for transparency in AI decision-making.
The Need for Oversight and Transparency
To address these risks, experts suggest regular audits and clear guidelines for AI models. Transparency reports can help the public understand how models are trained and how biases are addressed. Some researchers also propose allowing independent reviews of AI datasets and response trends.
“Frequent reviews and clear guidelines are essential to guarantee the responsible use of AI technologies,” the study’s authors emphasize.
Regulatory bodies in different countries are also taking steps to manage AI bias. The European Union’s AI Act, for example, aims to set rules for transparency and accountability in AI systems. Similarly, the United States is exploring policies to ensure fairness in AI-generated content.
As AI continues to evolve, the debate over its role in shaping public opinion will likely intensify. Ensuring that AI remains a fair and reliable source of information is crucial for maintaining trust in the technology.