Five AI Developments You Should Know About: Data Worker Rights, AI-Assisted MRIs, and More
Today, October 24, brought several significant developments in artificial intelligence. First, data workers in the United States who help train AI systems have penned an open letter to policymakers, appealing for the protection of their rights and livelihoods; the letter comes ahead of the AI Insight Forum, to be hosted by the US Senate alongside prominent AI leaders. Meanwhile, a new study has evaluated how well AI-based imaging tools detect multiple sclerosis disease activity in MRI scans. These stories and more are covered in today’s AI roundup, where we delve deeper into the details.
AI data workers write an open letter to the US Senate
In a letter to Senate Majority Leader Sen. Chuck Schumer (D-NY), data workers and civil society organizations called on Congress to protect against a “dystopian future” characterized by widespread surveillance and meager pay for the workers responsible for training AI algorithms.
“The contributions of data workers, often invisible to the public, are essential to the development of artificial intelligence. Companies have not adequately responded to the questions posed by members of Congress. We therefore urge you to consider how new technologies are already affecting workers in various fields, and how workers’ demands will be addressed,” the letter says.
The letter added, “To guard against this dystopian future, Congress should develop a new generation of economic policy and workers’ rights to prevent companies like Amazon from using technology to exploit workers and outmaneuvering their competitors by taking the low road.”
AI-based MRI tools show promise
According to a report in News Medical, a new study has assessed artificial intelligence-based tools for analyzing magnetic resonance imaging (MRI) scans and found that they can detect disease activity with greater sensitivity and accuracy than traditional radiology reports.
The comparison group was created from MRI scans of more than 3,000 healthy individuals, along with an independent group of 839 people diagnosed with multiple sclerosis (MS); uniform processing methods were used for both datasets.
AI experts call for companies to be held accountable for the harms their products cause
According to a report by The Guardian, a number of senior experts have warned that powerful artificial intelligence systems threaten social stability, and have demanded that AI companies take responsibility for the consequences of their products. The warning came on Tuesday, as global politicians, tech companies, researchers and civil society representatives prepared to gather at Bletchley Park next week for the AI safety summit.
“It’s time to get serious about advanced artificial intelligence systems… These are not toys. Increasing their capabilities before we understand how to make them safe is completely reckless,” said Stuart Russell, a computer science professor at the University of California, Berkeley.
Nvidia says US has expedited export restrictions on AI chips
According to a Reuters report, Nvidia announced that new US export restrictions barring the sale of its high-end artificial intelligence chips to China took effect on Monday, ahead of schedule.
These restrictions were originally set to take effect 30 days after the Biden administration announced them on Oct. 17. The goal was to prevent countries like China, Iran, and Russia from acquiring advanced AI chips made by Nvidia and other companies.
In a filing on Tuesday, Nvidia said it does not expect the accelerated timeline to have an immediate impact on its earnings; the US government has not disclosed its reason for moving up the deadline.
The head of Google DeepMind says artificial intelligence must be treated as seriously as the climate crisis
As the UK government prepares to host its AI safety summit, Demis Hassabis, the British head of Google’s AI division DeepMind, suggested that regulation of the industry could begin with the creation of a body similar to the Intergovernmental Panel on Climate Change (IPCC), The Guardian reports.
“We need to take the risks of AI as seriously as other major global challenges such as climate change. It took too long for the international community to coordinate an effective global response to this, and we are now living with the consequences. We cannot afford the same delay with AI,” Hassabis said.