5 things about AI you may have missed today: AI data workers demand protection, AI MRI tools, more


Today, October 24, turned out to be a pivotal day in the field of artificial intelligence. In the first development, data workers in the US who train AI systems have written an open letter to policymakers, urging them to safeguard their rights and livelihoods. This comes ahead of the second AI Insight Forum, to be hosted by the US Senate with AI leaders. In other news, a new study has assessed AI-based imaging techniques for diagnosing multiple sclerosis (MS). This and more in today’s AI roundup. Let us take a closer look.

AI data workers write an open letter to the US Senate

In a letter addressed to Senator Chuck Schumer (D-NY), the Senate Majority Leader, workers and civil society organizations have called on Congress to guard against a “dystopian future” marked by extensive surveillance and meager wages for those responsible for training AI algorithms.

“The contributions of data workers, often invisible to the public, are critical to advancements in AI. The corporations failed to adequately answer the questions posed by members of Congress. We therefore urge you to consider how workers, across sectors, are already impacted by new technologies and respond to their demands,” said the letter.

The letter added, “To guard against this dystopian future, Congress should develop a new generation of economic policies and labor rights to prevent corporations like Amazon from leveraging tech-driven worker exploitation into profit and outcompeting rivals by taking the low road.”

AI-based MRI tools show promise

According to a report by News Medical, a new study has assessed the effectiveness of AI-based tools in evaluating magnetic resonance imaging (MRI) results and found that they can detect disease activity more sensitively and accurately than conventional methods based on radiology reports.

A reference group was established using MRI scans from more than 3,000 individuals without health issues and an independent set of 839 individuals diagnosed with MS. Uniform processing methods were applied to both datasets.

AI experts call for holding AI companies responsible for the harm their products cause

A group of senior experts has cautioned that powerful artificial intelligence systems pose a threat to social stability, as per a report by The Guardian. They have called on AI companies to assume responsibility for the consequences of their products. The warning was issued on Tuesday, as global politicians, tech firms, academics, and civil society representatives prepare to convene at Bletchley Park next week for a summit focused on AI safety.

“It’s time to get serious about advanced AI systems…These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless,” said Stuart Russell, professor of computer science at the University of California, Berkeley.

Nvidia: US fast-tracked export curbs on AI chips

Nvidia announced that the new US export restrictions preventing the sale of its high-end AI chips to China have been implemented ahead of schedule, effective as of Monday, according to a Reuters report.

Originally, these restrictions were slated to take effect 30 days after their announcement on October 17 by the Biden administration, with the aim of preventing countries like China, Iran, and Russia from acquiring advanced AI chips produced by Nvidia and other companies.

In a filing on Tuesday, Nvidia stated that it anticipates no immediate impact on its earnings due to this development, but the reason behind the US government’s decision to expedite the timing remains undisclosed.

Google DeepMind chief says AI must be treated as seriously as the climate crisis

As the UK government readies to host a summit addressing AI safety, Demis Hassabis, the British CEO of Google’s AI division DeepMind, proposed that industry regulation could commence by establishing an entity akin to the Intergovernmental Panel on Climate Change (IPCC), reports The Guardian.

“We must take the risks of AI as seriously as other major global challenges, like climate change. It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI,” Hassabis said.
