Kamala Harris to Present Artificial Intelligence Plan in London Address
During a speech in London, Vice President Kamala Harris will address the growing concerns associated with artificial intelligence, emphasizing the need for global collaboration and more stringent regulations to safeguard consumers against its potential risks.
“As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the well-being of their customers; the safety of our communities; and the stability of our democracies,” Harris is scheduled to say in remarks at the U.S. Embassy in the British capital on Wednesday.
The speech is part of a broader effort by the White House to impose guardrails on new AI tools that are rapidly entering the market without regulatory oversight. Harris is in London along with other world leaders to attend an AI safety summit convened by UK Prime Minister Rishi Sunak at Bletchley Park.
Harris is expected to outline a number of steps the White House has taken to address those risks. Among them is a new AI Safety Institute within the US Department of Commerce, which will develop guidelines and tools to mitigate the dangers posed by AI. The Office of Management and Budget also plans to release a draft policy on how the US government should use artificial intelligence.
The vice president is also expected to announce that the US government is partnering with major foundations, including the David and Lucile Packard Foundation, the Ford Foundation, and the Heising-Simons Foundation, which have committed $200 million to AI safety efforts. In addition, Harris will note that the United States has joined other countries in helping to establish norms for the military use of artificial intelligence.
The speech follows President Joe Biden’s signing of an executive order on Monday directing the federal government to develop safety standards and privacy protections for new artificial intelligence tools. The order has broad implications for companies including Microsoft Corp., Amazon.com Inc. and Alphabet Inc.’s Google: developers must submit test results for their new models to the government before releasing them to the public, and the directive also calls for labeling content produced by artificial intelligence.
The use of AI tools has surged in recent months with the release of platforms accessible to the average consumer, including OpenAI’s ChatGPT. The technology’s rapid uptake has also raised concerns that such platforms could be used to spread misinformation or that their underlying algorithms perpetuate bias.
Several governing bodies, including the United Nations and the Group of Seven, are actively working to create rules for artificial intelligence. The European Union is probably the furthest along, and its AI Act is expected to be finalized by the end of the year.
The Biden administration’s swift move to curb AI is at odds with Washington’s generally slower approach to new technologies. Efforts to regulate social media platforms have languished for years, with many disputes left to be settled in court, including a major federal antitrust case the Justice Department is pursuing against Google.
Still, the White House order relies on federal agencies, most of which have little AI expertise, to take internal steps to strengthen oversight. More comprehensive oversight would require action from Congress. Senate Majority Leader Chuck Schumer has opened discussions on artificial intelligence, but it’s unclear whether any legislation could pass a bitterly divided Congress.