US Spies Should Tap Private AI Models, NSA’s Research Chief Says

A top US spy official said intelligence agencies should use commercially available AI to keep up with foreign adversaries that will do the same — while being sure to address the risks to privacy and broader concerns about misuse of the fast-developing technology.

“The intelligence community needs to find a way to take benefit of these large models without violating privacy,” Gilbert Herrera, director of research at the National Security Agency, said in an interview. “If we want to realize the full power of artificial intelligence for other applications, then we’re probably going to have to rely on a partnership with industry.”

Herrera said he wants the NSA to be able to use large commercial AI models that are trained on the open Internet, citing companies that can access massive amounts of data such as Meta’s Facebook, Alphabet Inc.’s Google and Microsoft Corp., which also owns GitHub, used as a repository for code by software developers.

He acknowledged that using commercially available AI models risks importing potentially biased algorithms into classified spying missions. He said intelligence needs can be met without accessing the underlying data of American people and companies used to train and develop the models.

“If we used a model that was trained on the world, then we wouldn’t have access to the data — we would only have the decision trees that would yield the information that we wanted out of it,” he said on the sidelines of a summit on modern conflict held at Vanderbilt University.

The National Security Council and the Office of the Director of National Intelligence didn’t immediately respond to requests for comment.

Herrera’s warning came after Vice President Kamala Harris met Thursday with the chief executive officers of Alphabet, Microsoft, OpenAI Inc., and Anthropic, and said the White House supports efforts to mitigate the potential harms from artificial intelligence technology. Harris said businesses must work with government to ensure safeguards to protect civil rights and privacy and prevent disinformation and scams.

Wall Street firms have begun adopting some AI technology. Last week, JPMorgan Chase & Co. unveiled an artificial intelligence-powered model that aims to decipher Federal Reserve messaging and uncover potential trading signals.

Even if relying on commercial AI proves feasible, Herrera’s proposal could undermine 2020 guidelines from the Office of the Director of National Intelligence that US spy agencies should “maintain accountability for iterations, versions and changes” made to any AI model and identify and mitigate “undesired bias.”

Herrera warned the US nevertheless needed to find a way to solve the problem because adversaries will be exploiting commercial AI. He said those who draw on large data sets will have an advantage, citing China’s proliferation of video surveillance cameras and other troves of data. China has said it wants to become the world leader in AI by 2030.

“The issue of the intelligence community’s use of publicly trained information is an issue we’re going to have to grapple with because otherwise there would be capabilities of AI that we would not be able to use,” Herrera said. “It would put us in a position where adversaries would be able to take full advantage of the technology and we couldn’t.”

Herrera said private companies’ algorithmic models could be useful for the NSA’s work on coding and in writing reports that assess classified information.

“It all has to be done in a manner that respects civil liberties and privacy,” Herrera said. “It’s a tough problem.”
