5 things about AI you may have missed today: US orders Nvidia to stop AI chip exports, AI data poisoning tool, and more

Today, October 25, was an important day in the artificial intelligence space, especially when it comes to AI chips. In the biggest development, tech giant Nvidia said the US government has ordered it to immediately stop exporting some of its advanced AI chips to China; the restriction was originally due to take effect 30 days after October 17. In other news, Qualcomm has unveiled a new AI-powered chip for Microsoft Windows-based laptops and claims its performance may even surpass that of Apple’s Mac computers. This and more in today’s AI roundup. Let us take a closer look.

US stops Nvidia from exporting AI chips to China

Nvidia has revealed that, due to regulatory changes, the US government has told it to immediately cease exporting certain top-tier artificial intelligence chips to China, as per a report by The Guardian. These restrictions, initially slated to take effect 30 days after the Biden administration announced them on October 17, were aimed at preventing countries such as China, Iran, and Russia from acquiring advanced AI chips developed by Nvidia and other companies. Nvidia did not give a specific reason for the accelerated timeline but said it does not anticipate a significant immediate impact on its earnings.

Qualcomm unveils AI chip for Windows computers

Qualcomm revealed details about a chip designed for Microsoft Windows-based laptops, as per a report by Reuters. The AI chips are due to launch in 2024, and the company claims they will outperform Apple’s Mac computer chips in certain tasks.

According to Qualcomm executives, the upcoming Snapdragon X Elite chip has been redesigned to improve its performance in artificial intelligence tasks such as email summarization, text generation, and image creation.

These AI capabilities are not limited to laptops. Qualcomm intends to incorporate them into its smartphone chips as well. Google and Meta have both announced their plans to harness these features for their respective smartphone platforms.

Tech firms push for safety standards for AI

According to a report by the Financial Times, Microsoft, OpenAI, Google, and Anthropic have jointly pushed to establish safety standards for AI. They have appointed a director for their alliance, aiming to address what they consider “a gap” in global AI regulation.

These four tech giants, which united earlier this summer to create the Frontier Model Forum, have chosen Chris Meserole of the Brookings Institution as the group’s executive director. The forum has also disclosed plans to allocate $10 million to an AI safety fund.

IWF issues warning over AI-generated child abuse images

The Internet Watch Foundation (IWF), which works to remove child sexual abuse images from websites, has identified thousands of AI-generated images so realistic that they violate UK law, the BBC reports.

“Our worst nightmares have come true,” said Susie Hargreaves OBE, chief executive of Cambridge-based IWF. “Chillingly, we are seeing criminals deliberately training their AI on real victims’ images. Children who have been raped in the past are now being incorporated into new scenarios because someone, somewhere, wants to see it,” she added.

Data poisoning tool surfaces, can corrupt image-generating AI models

A newly developed tool called Nightshade lets artists embed subtle changes in their digital artwork before publishing it, effectively tampering with any training data that scrapes the art, as per a report by The Verge. Over time, it has the potential to disrupt and degrade the performance of AI art platforms like DALL-E, Stable Diffusion, and Midjourney, making their output unusable.

Nightshade introduces imperceptible alterations to the pixels in a digital artwork. When this manipulated artwork is scraped into a model’s training data, the “poison” exploits the way the model learns from its examples, confusing it. As a result, the AI may no longer recognize an image of a house as a house and might, for instance, misinterpret it as a car or a boat.
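To make the general idea concrete, here is a minimal, purely illustrative Python sketch of pixel-level data poisoning: a perturbation too small for a human to notice is added to an image, and the poisoned copy is paired with a deliberately wrong label. Nightshade’s actual technique uses carefully optimized perturbations and is far more sophisticated; nothing below reflects its real algorithm, and all names here (poison_image, the epsilon budget, the sample dict) are hypothetical.

```python
import numpy as np

# Illustrative sketch only; NOT Nightshade's algorithm. It shows the
# general shape of pixel-level data poisoning: a near-invisible change
# to the artwork plus a mislabeled caption in the training set.

rng = np.random.default_rng(seed=0)

# Stand-in for a digital artwork: an 8-bit RGB image.
artwork = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

def poison_image(image: np.ndarray, epsilon: int = 2) -> np.ndarray:
    """Add a tiny per-pixel perturbation capped at +/- epsilon levels.

    At epsilon=2 out of 255, the change is effectively invisible to a
    viewer, but a scraper copies it into the training set verbatim.
    """
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    return np.clip(image.astype(np.int16) + noise, 0, 255).astype(np.uint8)

poisoned = poison_image(artwork)

# Pair the poisoned pixels with a wrong label, so a model trained on the
# scraped data associates the perturbation pattern with the wrong concept
# (the house-misread-as-boat confusion described above).
training_sample = {"image": poisoned, "caption": "a boat"}  # true subject: a house

# The two images remain visually near-identical.
print("max pixel difference:", np.abs(poisoned.astype(int) - artwork.astype(int)).max())
```

In this toy version the noise is random, so it would mostly just add harmless jitter; the point of a real poisoning attack is that the perturbation is optimized so the model reliably learns the wrong association from a small number of poisoned samples.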
