Shocking study claims ChatGPT has a “significant and systematic political bias”

Since its inception, OpenAI’s ChatGPT has faced repeated allegations of spreading misinformation, fake news, and inaccurate information, though the chatbot has improved significantly on these fronts over time. Another criticism levelled at ChatGPT in its early days was that it displayed signs of political bias, with some users alleging that it leaned liberal in its responses to certain questions. Just days after those allegations first surfaced, users found that OpenAI’s chatbot had begun refusing to answer political questions outright, something it still does today. Now, however, a new study claims that ChatGPT nonetheless holds a political bias.

Researchers from the University of East Anglia in the UK asked ChatGPT to answer political survey questions as a supporter of the liberal parties in the US, the UK, and Brazil would answer them. They then asked ChatGPT the same questions again without any additional prompts and compared the two sets of responses. The findings were surprising: the study claims ChatGPT revealed “significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.”, as per a report by Gizmodo. Here, Lula refers to the leftist President of Brazil, Luiz Inacio Lula da Silva.

OpenAI addresses the allegations

The study adds to a growing body of concern that AI can produce biased responses which, in extreme cases, could be used as tools of propaganda. Experts have previously warned that such a trend is especially worrying given the large-scale adoption of AI models.

An OpenAI spokesperson responded to these questions by pointing to the company’s blog post, Gizmodo reported. The post, titled How Systems Should Behave, states: “Many are rightly worried about biases in the design and impact of AI systems. We are committed to robustly addressing this issue and being transparent about both our intentions and our progress. Our guidelines are explicit that reviewers should not favor any political group. Biases that nevertheless may emerge from the process described above are bugs, not features.”

So, this is where things stand right now. OpenAI’s developers admit that biases can make their way into AI models, in part because the large data sets used to train foundational models cannot be verified at such a fine-grained level. At the same time, sanitizing the training content too aggressively can produce a very limited chatbot that struggles to engage with humans. Only time will tell whether researchers can overcome these limitations in generative AI.
