All of a sudden there is a flurry of activity around artificial intelligence policy. President Joe Biden is scheduled to issue an executive order on the topic today. An AI safety summit is being held in the UK later this week. And last week, the US Senate held a closed-door forum on research and development in AI.
I spoke at the Senate forum, convened by Majority Leader Chuck Schumer. Here’s an outline of what I told the panel about how the US can boost progress in AI and improve its national security.
First, the US should allow in many more high-skilled foreign citizens, most of all those who work in AI and related fields. As you might expect, many of the key contributors to AI progress — such as Geoffrey Hinton (British-Canadian) and Mira Murati (Albanian) — come from abroad. Perhaps the US will never be able to compete with China when it comes to assembling raw computing power, but many of the world’s best and brightest would prefer to live in America. The government should make their path as easy as possible.
Artificial intelligence also means that science will probably move faster. That applies not only to AI itself, but also to the sciences and practices that will benefit from it, such as computational biology and green energy. The US cannot afford the luxury of its current slow procurement and funding cycles. Biomedical science funding should be more like the nimble National Science Foundation and less like the bureaucratic National Institutes of Health. Better yet, Darpa models could be applied more broadly to give program managers greater authority to take risks with their grants.
Those changes would make it more likely that new and forthcoming AI tools will translate into better outcomes for ordinary Americans.
The US should also speed up permitting reform. Construction of more and better semiconductor plants is a priority, both for national security and for AI progress more generally, as recognized by the CHIPS Act. Yet the need for multiple layers of permits and environmental review slows down this process and raises costs. There is a general recognition that permitting reform is needed, but it hasn’t happened.
As the rate of scientific progress increases, regulation may need to adapt. Many critics have charged that FDA approval processes are too slow and conservative. That problem could become much worse if the number of new candidate drugs were to increase by two or three times. It is unrealistic to expect the government to become as fast as the AIs, but it can certainly be faster than it is now.
What about the need for more regulation?
In the short run, the US can beef up, reform and reconsider what is sometimes called “modular regulation.” If an AI were to issue health or diagnostic advice, for example, it would be covered by current regulatory bodies — federal, state and local. At all levels, those institutions need to make significant changes. Sometimes that will involve more regulation and sometimes less, but now is the time to start those reappraisals.
What if an AI gives diagnostic advice that is better than that of human doctors — but is still not perfect? Should the AI company be subject to medical malpractice law? I would prefer a “user beware” approach, as currently exists for googling medical advice. But obviously this issue requires deeper consideration. The same concern applies to AI legal advice: Plenty of current laws apply, but they need to be revised to match new technologies.
The US should not, at the moment, regulate or license AI services as entities unto themselves. Obviously current AI services fall under extant laws, including laws against violence and fraud.
Over time, I am confident that people will figure out what exactly AIs, including large language models, are best used for. Industry structure may become relatively stable, and risks will be better known. It will be clear whether the American AI service providers have kept their leads over China’s.
At that point — but not until then — the US might consider more general regulations for AI. Market experimentation has the highest return now, when we are debating the best and most appropriate use cases for AI. It is unrealistic to expect bureaucrats, few of whom have any AI expertise, to figure out answers to these questions.
In the meantime, it does not work to license AIs on the condition that they prove they will not cause any harm, or are very unlikely to. The technology is very general, its future uses are hard to predict, and some harms could be the fault of the users, not the company behind the service. It would not have been wise to make comparable demands of the printing press, or of automation, in their early days. And licensing regimes have an unfortunate tendency to devolve into bureaucratic or political squabbling.
In any case: The time to act is now. The US needs to get on with it.