Study Finds ChatGPT Has a Notable Political Bias
OpenAI’s ChatGPT has faced numerous accusations of spreading misinformation, fake news and inaccurate information since its launch, though the chatbot has improved considerably on these fronts over time. In its early days, ChatGPT was also criticized for showing signs of political bias, with some users claiming it favored liberal responses. Shortly after those allegations surfaced, the chatbot began refusing to answer overtly political questions, a behavior that persists to this day. Nevertheless, a recent study alleges that ChatGPT continues to exhibit a political bias.
The study, conducted by researchers at the University of East Anglia in the UK, asked ChatGPT to answer a survey of political beliefs while impersonating supporters of liberal parties in the US, the UK and Brazil. The researchers then asked ChatGPT the same questions again, this time without the impersonation prompts, and compared the two sets of answers. The findings were striking: according to a Gizmodo report, the study claims ChatGPT revealed a “significant and systematic political bias towards Democrats in the US, Lula in Brazil and Labour in the UK.” Lula here refers to Brazil’s leftist president, Luiz Inacio Lula da Silva.
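For readers curious about the mechanics, the sketch below shows the general shape of that impersonation-versus-default comparison. It is not the study’s actual code: it assumes the official openai Python client (v1.x) with an API key in the environment, and the model name, survey questions and personas are all illustrative stand-ins rather than the researchers’ real instrument.

```python
# A minimal sketch of the study's method: ask each survey question once
# with an impersonation prompt and once without, then compare answers.
# Assumes the official openai Python client (v1.x) and OPENAI_API_KEY;
# the model, questions and personas here are illustrative assumptions.
from typing import Optional

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [
    "Should the government increase spending on public healthcare?",
    "Should taxes on high earners be raised?",
]

PERSONAS = {
    "US Democrat": "Answer as if you were a supporter of the US Democratic Party.",
    "US Republican": "Answer as if you were a supporter of the US Republican Party.",
}


def ask(question: str, persona: Optional[str] = None) -> str:
    """Ask one survey question, optionally prefixed by an impersonation prompt."""
    messages = []
    if persona:
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the paper tested ChatGPT
        messages=messages,
    )
    return response.choices[0].message.content


for question in QUESTIONS:
    default = ask(question)  # ChatGPT's answer with no persona attached
    for name, prompt in PERSONAS.items():
        # Compare the default answer against each persona's answer to see
        # which side the unprompted model tracks more closely.
        print(f"{name}: {ask(question, prompt)[:80]}")
    print(f"default: {default[:80]}")
```

The published study reportedly also repeated each question many times to average out the model’s randomness; a serious replication would do the same and score the agreement statistically rather than eyeballing printed answers.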
How OpenAI is handling the allegations
The study adds to growing concerns that artificial intelligence models can produce biased answers, which in extreme cases could be exploited as propaganda tools. Experts have warned in the past that this tendency is worrying for the large-scale adoption of AI models.
An OpenAI spokesperson responded to these questions by pointing Gizmodo to the company’s blog post titled “How should AI systems behave, and who should decide?”, which states: “Many are rightly concerned about biases in the design and impact of AI systems. We are committed to addressing this issue rigorously and to be open about both our intentions and our progress. Our guidelines are clear that reviewers should not favor any political group. Biases, which may arise from the process described above, however, are bugs, not features.”
So that is where things stand: OpenAI acknowledges that biases can be part of AI models, in part because the massive datasets used to train the base models cannot be vetted at a granular level. At the same time, aggressively sanitizing training content can produce an overly restricted chatbot that struggles to converse naturally with people. Only time will tell whether researchers can overcome these limitations of generative AI.