Craig Martell, the Pentagon’s chief digital and artificial intelligence officer, wants companies to share insights into how their AI software is built. (REUTERS)

Pentagon Calls for Increased Transparency from AI Companies

The Defense Department’s top artificial intelligence official says the department needs a deeper understanding of AI tools before fully embracing them, and he is calling on developers to be more transparent about how they build their software.

Craig Martell, the Pentagon’s chief digital and artificial intelligence officer, wants companies to share insights into how their AI software is built — without losing their intellectual property rights — so the department can “feel comfortable and safe” about deploying it.

Generative AI tools such as chatbots and image generators are built on large language models, or LLMs, which are trained on huge data sets. These services are typically offered without revealing their inner workings, in a so-called black box, which makes it difficult for users to understand how the technology reaches its decisions or what makes it perform better or worse over time.

“We’re only getting the final result of the model building – it’s not enough,” Martell said in an interview. The Pentagon has no idea how the models are built or what data was used, he said.

The companies also don’t explain what dangers their systems may pose, Martell said.

“They say, ‘There it is. We won’t tell you how we built it. We won’t tell you where it’s good or bad. We won’t say whether it’s biased or not,’” he said.

He described such models as the Department of Defense’s equivalent of “discovered space technology.” He is also concerned that only a few groups have enough money to build LLMs. Martell did not identify any companies by name, but Microsoft Corp., Alphabet Inc.’s Google and Amazon.com Inc. are among those developing LLMs for the commercial market, along with startups OpenAI and Anthropic.

Martell is inviting industry and scientists to Washington in February to address these concerns. The goal of the Pentagon’s symposium on defense data and artificial intelligence is to figure out which tasks LLMs might be suited to, he said.

Martell’s team, which is running a task force to evaluate LLMs, has already identified 200 potential uses for them within the Department of Defense, he said.

“We don’t want to stop large language models,” he said. “We just want to understand the use, the benefits, the dangers and how they can be mitigated.”

The department has seen a “big uptick” in people wanting to use LLMs, Martell said. But they also recognize that if the technology hallucinates — a term for when artificial intelligence software fabricates information or produces a false result, which is not uncommon — they must take responsibility for it.

He hopes the February symposium will help build what he calls a “maturity model” for benchmarking hallucinations and other dangers. While it might be acceptable for a first draft of a report to contain AI-generated errors that a human can later weed out, such errors would not be acceptable in riskier situations, such as information used to make operational decisions.

A classified session at the three-day February event will focus on testing and evaluating models and protecting against hacking.

Martell said his office plays a consulting role within the Department of Defense, helping various groups find the right way to measure the success or failure of their systems. The department has more than 800 artificial intelligence projects underway, some of which involve weapons systems.

Given the stakes, the Pentagon applies a higher bar to how it uses algorithmic models than the private sector, he said.

“There are going to be a lot of use cases where lives are at risk,” he said. “So allowing hallucinations or whatever we want to call it — that’s just not going to be acceptable.”
