Regulating Artificial Intelligence: Why It's Necessary and How It Could Work
The rapidly growing AI industry has moved past its early stages of development and is now causing significant disruption in society. ChatGPT, launched in November 2022, has shaken up the digital world and is being used in fields ranging from software coding and industrial applications to game design and virtual entertainment. However, it has also been misused for illegal activities, such as scaling up spam email operations and generating deepfakes.
It’s one technological genie we’ll never get back in the bottle, so we’d better start regulating it now, argues Silicon Valley-based author, entrepreneur, investor and policy advisor Tom Kemp in his new book, Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy. In the excerpt below, Kemp explains what that regulation could look like and what its implementation would mean for consumers.
Excerpted from Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy (IT Rev, August 22, 2023) by Tom Kemp.
A roadmap for AI
In Greek myth, Pandora brought powerful gifts but also unleashed mighty plagues and evils. Similarly with AI, we need to take advantage of its benefits while keeping the potential harm it can do to humans inside the proverbial Pandora’s box.
When Dr. Timnit Gebru, founder of the Distributed Artificial Intelligence Research Institute (DAIR), was asked by the New York Times how AI bias can be countered, she answered in part: “We need to have principles and standards, and governing bodies, and people voting on things and algorithms being checked, something similar to the FDA [Food and Drug Administration]. So, for me, it’s not as simple as creating a more diverse data set, and things are fixed.”
She is right. First, we need regulation. AI is a new game, and it needs rules and referees. She suggested that we need the equivalent of an FDA for artificial intelligence. In fact, both the AAA and the ADPPA call for the FTC to play that role, but instead of drug submissions and approvals being handled as the FDA does, Big Tech and others would submit their AI impact assessments to the FTC. These assessments would cover AI systems in high-impact areas such as housing, employment and credit, helping us better address digital redlining. Thus, these bills would foster needed accountability and transparency for consumers.
In the fall of 2022, the Biden administration’s Office of Science and Technology Policy (OSTP) even proposed an “AI Bill of Rights.” Its protections include the right “to know that an automated system is being used and understand how and why it contributes to outcomes that impact you.” This is a great idea, and it could be folded into the FTC’s rulemaking responsibilities if the AAA or the ADPPA were passed. The point is that AI should not be a complete black box to consumers, and consumers should have rights to know about and object to it — just as they should when their personal data is collected and processed. Furthermore, consumers should have a right of private action if they are harmed by AI-based systems. And websites with a significant amount of AI-generated text and images should have the equivalent of a food nutrition label, telling us what proportion of the content is AI-generated versus human-generated.
We also need AI certifications. For example, the financial industry has accredited certified public accountants (CPAs) and certified financial audits and statements, so we should have the equivalent for AI. And we need codes of conduct for the use of AI, as well as industry standards. For example, the International Organization for Standardization (ISO) publishes quality management standards that organizations can voluntarily follow for cybersecurity, food safety, and so on. Fortunately, an ISO working group has begun developing a new standard for AI risk management. In another positive development, the National Institute of Standards and Technology (NIST) released its initial framework for AI risk management in January 2023.
We should also encourage companies to have more diverse and inclusive design teams building AI. As Olga Russakovsky, an assistant professor in Princeton University’s Department of Computer Science, said: “There are a lot of opportunities for diversifying this pool [of people building AI systems], and as diversity grows, the AI systems themselves will become less biased.”
As regulators and lawmakers dig into antitrust issues with Big Tech companies, AI should not be overlooked. To paraphrase Wayne Gretzky, regulators need to skate to where the puck is going, not where it has been. And AI is where the puck is going in technology. Therefore, acquisitions of AI companies by Big Tech firms should be scrutinized more closely. In addition, the government should consider mandating open intellectual property for AI. This could be modeled on, for example, the 1956 federal consent decree with Bell that required Bell to license all its patents royalty-free to other businesses. That decree led to incredible innovations such as the transistor, the solar cell, and the laser. It is not healthy for our economy to have the future of technology concentrated in the hands of a few companies.
Finally, our society and economy need to better prepare for the impact of AI displacing workers through automation. Yes, we need to equip our citizens, through better education, for new jobs in an AI world. But we have to be smart about this: we can’t simply retrain everyone to become software developers, because only some people have that skill or interest. Note, too, that AI is increasingly being built to automate software development itself, so even knowing which software skills should be taught in an AI world is a significant question. As the economist Joseph E. Stiglitz has pointed out, we have had trouble managing smaller-scale changes in technology and globalization that have led to polarization and the erosion of our democracy, and the changes brought by AI will be more profound. That is why we must prepare for them and make sure AI is a positive for society.
Given that Big Tech is leading the development of AI, ensuring its effects are positive should start with Big Tech. AI is incredibly powerful, and Big Tech is “all-in” on it, but AI is fraught with risk if it is misguided or built to exploit. And as I have documented, Big Tech has had issues with its use of AI. This means that not only is the depth and breadth of Big Tech’s collection of our sensitive data a threat, but the way it uses AI to process that data and make automated decisions is also a threat.
Thus, in the same way that we need to limit digital surveillance, we also need to ensure that Big Tech does not open Pandora’s box with AI.