Regulating AI Too Soon Could Be a Mistake
Artificial intelligence policy is having a moment. Today, President Joe Biden is set to release an executive order on the subject, a summit on AI safety will be held in the UK later this week, and the US Senate recently convened a closed-door forum on AI research and development.
I spoke at that Senate forum, which was convened by Majority Leader Chuck Schumer. Here is an outline of what I told the panel about how the US can advance AI and improve its national security.
First, the United States should admit far more highly educated foreigners, especially those working in artificial intelligence and related fields. As you might expect, many of the key players in the advancement of AI – such as Geoffrey Hinton (British-Canadian) and Mira Murati (Albanian) – come from abroad. The US may never be able to compete with China in raw computing power, but many of the world’s best and brightest would love to live in America. The government should make their path here as easy as possible.
Artificial intelligence also means that science is likely to progress faster in the future. This applies not only to AI itself, but also to the sciences and practices that benefit from it, such as computational biology and green energy. The US can no longer afford the luxury of its current slow procurement and funding cycles. Biomedical funding should look more like the nimble National Science Foundation and less like the bureaucratic National Institutes of Health. Better yet, Darpa-style models could be applied more broadly, giving program managers more latitude to take risks with their grants.
These changes would make it more likely that new and future AI tools will produce better results for ordinary Americans.
The United States should also speed up permitting reform. Building new and better semiconductor factories is a priority both for national security and for the advancement of artificial intelligence more generally, as the CHIPS Act recognizes. Yet the need for permits at multiple levels of government, along with environmental assessments, slows the process and raises costs. It is widely acknowledged that permitting reform is needed, but it has not happened.
As the pace of scientific development increases, regulation may need to adapt. Many critics have charged that the FDA’s approval processes are too slow and too conservative. That problem would become much worse if the number of new drug candidates were to double or triple. It is unrealistic to expect government to move as fast as AI, but it can certainly move faster than it does now.
What about more regulation?
In the short term, the United States can step up, reform and rethink what is sometimes called “modular regulation” – oversight through the existing bodies that already govern each domain. If an AI provides health or diagnostic advice, for example, existing regulatory institutions – federal, state and local – already cover it. These institutions need significant changes at every level. Sometimes that means more regulation and sometimes less, but now is the time to start reassessing.
What if an AI provides diagnostic advice that is better than a human doctor’s – but still not perfect? Should the AI company be subject to medical malpractice law? I would prefer a “user beware” approach, as currently applies to Googling for medical advice. But clearly the matter requires deeper consideration. The same concern applies to AI legal advice: many current laws apply, but they will need revision to reflect the new technology.
For now, the United States should not regulate or license AI services as entities in themselves. Current AI services are, of course, subject to existing laws, including those against violence and fraud.
Over time, I am confident, people will figure out what AIs, including large language models, are best used for. The structure of the industry may become relatively stable, and the risks better understood. It will also be clearer whether American AI providers have maintained their lead over China.
At that point – but not before – the US can consider more general AI regulations. Right now, while the best and most appropriate use cases for AI are still being worked out, market experimentation has the biggest payoff. It is unrealistic to expect bureaucrats, few of whom have AI expertise, to figure out the answers instead.
In the meantime, it will not work to license AIs on the condition that they demonstrate they will cause no harm, or are highly unlikely to do so. The technology is too general-purpose, its future uses are hard to predict, and some harms may be the fault of users rather than of the company behind the service. It would likewise have been unwise to place such demands on the printing press or on automation in their early days. And permitting systems have an unfortunate tendency to devolve into bureaucratic or political wrangling.
In any case, now is the time to act. The US needs to get on with it.