Google’s Gemini AI chatbot came under scrutiny yesterday with the company temporarily halting image generation amid concerns over inaccuracies in historical depictions. Following this issue, the tech giant apologised for these inaccuracies. However, it now faces a new challenge in India as the country’s IT Minister, Rajeev Chandrasekhar, flagged violations of IT rules and criminal code provisions by Gemini.
Gemini AI’s Political Bias
The controversy unfolded when a verified user shared a screenshot revealing biased responses from the Gemini AI chatbot regarding Prime Minister Narendra Modi. Chandrasekhar, taking note of the issue, criticised the AI’s response as a direct violation of IT rules and criminal code provisions. In a social media post, he emphasised the need for the Government of India (GOI) to intervene, calling Gemini AI not only “woke” but “downright malicious”.
Chandrasekhar marked the post to Google and the Ministry of Electronics and IT, indicating potential further actions against Google’s AI tool. The minister’s stern response showcased concerns over the potential misuse of AI technologies, especially when dealing with political figures.
This incident follows Google’s recent decision to pause Gemini AI’s image generation capabilities globally due to controversies surrounding inaccuracies in AI-generated historical images. Critics raised questions about whether the company was over-correcting for bias risks in its AI model.
Google acknowledged the issues, stating that the team was aware of the inaccuracies and committed to immediate improvements. Gemini AI is designed for language, audio, code, and video understanding, while its image-generation feature is powered by the Imagen 2 model. Released officially in December, the chatbot allows users to generate high-quality images from text prompts, integrating natural language processing and image recognition. Despite these capabilities, it has faced criticism for missing the mark in accurately representing diverse scenes.
As legal implications loom in India, Google finds itself navigating the delicate balance between the potential of its advanced AI technology and the responsibility to address concerns about bias and accuracy.