The Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. (Pixabay)

Lethal Autonomous Weapons: Pentagon’s AI Accelerates Tough Decisions

The U.S. military has used artificial intelligence to pilot small surveillance drones in special operations missions and to assist Ukraine in its war against Russia. The technology tracks soldiers’ fitness, predicts when Air Force aircraft need maintenance and helps keep tabs on rivals in outer space.

Now, the Pentagon plans to field several thousand relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative — called Replicator — seeks to “accelerate progress in the too-slow shift in U.S. military innovation to utilize platforms that are small, smart, cheap and numerous,” Deputy Defense Secretary Kathleen Hicks said in August.

Although its funding is uncertain and its details unclear, Replicator is expected to speed up difficult decisions about which artificial intelligence technology is mature and reliable enough to deploy — even in weaponized systems.

There is little dispute among scientists, industry experts and Pentagon officials that the United States will have fully autonomous lethal weapons within the next few years. And while officials insist humans will always be in control, experts say advances in computing speed and machine-to-machine communication will inevitably relegate humans to supervisory roles.

This is especially true if, as expected, lethal weapons are used en masse in drone swarms. Many countries are working on them – and none of China, Russia, Iran, India or Pakistan has signed the U.S.-initiated pledge to use military artificial intelligence responsibly.

It is unclear whether the Pentagon is currently formally evaluating any fully autonomous lethal weapons system for deployment, as required under a 2012 directive. A Pentagon spokesman would not say.

Paradigm shifts

Replicator highlights the enormous technological and personnel challenges for Pentagon acquisition and development as the AI revolution promises to transform the way wars are fought.

“The Department of Defense is struggling to embrace the latest AI breakthrough in machine learning,” said Gregory Allen, a former senior Pentagon AI official who now works at the Center for Strategic and International Studies think tank.

The Pentagon’s portfolio includes more than 800 unclassified AI-related projects, many of which are still in testing. Typically, machine learning and neural networks help people gain insights and create efficiencies.

“The artificial intelligence that we have in the Department of Defense right now is heavily leveraged, and it augments people,” said Missy Cummings, director of the robotics center at George Mason University and a former Navy fighter pilot. “AI doesn’t run on its own. People use it to try to better understand the fog of war.”

Space, the new frontier of war

One domain where AI-assisted tools are tracking potential threats is space, the latest frontier in military competition.

China plans to use artificial intelligence, including on satellites, “to make decisions about who is and who is not an adversary,” Lisa Costa, chief technology and innovation officer for the U.S. Space Force, told an online conference this month.

The U.S. is trying to keep pace.

An operational prototype called Machina, used by the Space Force, autonomously tracks more than 40,000 objects in space, orchestrating thousands of data collections nightly with the help of a global network of telescopes.

Machina’s algorithms marshal telescope sensors. Computer vision and large language models tell them which targets to track, and artificial intelligence choreographs the action, drawing instantly on astrodynamics and physics datasets, Space Systems Command Col. Wallace ‘Rhet’ Turnbull told a conference in August.

Another Space Force artificial intelligence project analyzes radar data to detect imminent enemy missile launches, he said.

Maintaining aircraft and soldiers

Elsewhere, AI’s predictive powers are helping the Air Force keep its fleet aloft by anticipating the maintenance needs of more than 2,600 aircraft, including B-1 bombers and Blackhawk helicopters.

Machine learning models identify potential failures dozens of hours before they happen, said Tom Siebel, CEO of Silicon Valley-based C3 AI, which holds the contract. C3’s technology also models missile trajectories for the U.S. Missile Defense Agency and identifies insider threats in the federal workforce for the Defense Counterintelligence and Security Agency.

Among the health-related efforts is a pilot project to monitor the health of the Army’s entire 3rd Infantry Division — more than 13,000 soldiers. Predictive modeling and artificial intelligence help reduce injuries and increase performance, said Maj. Matt Visser.

Helping Ukraine

In Ukraine, artificial intelligence provided by the Pentagon and its NATO allies helps thwart Russian aggression.

NATO allies share intelligence from data collected by satellites, drones and humans, some of it combined with software from U.S. contractor Palantir. Some of the data comes from Maven, the Pentagon’s pathfinding artificial intelligence project now largely managed by the National Geospatial-Intelligence Agency, say officials including retired Air Force Lt. Gen. Jack Shanahan, the inaugural Pentagon AI chief.

Maven began in 2017 as an effort to process video footage from drones in the Middle East—prompted by U.S. special operations forces fighting ISIS and al-Qaeda—and now collects and analyzes a vast array of sensor- and human-derived data.

AI has also helped the U.S.-created Security Assistance Group-Ukraine organize logistics for military aid from the 40-nation coalition, Pentagon officials say.

All-Domain Command and Control

To survive on the battlefield these days, military units must be small, mostly invisible and move quickly, because exponentially growing sensor networks allow anyone to “see anywhere in the world at any time,” then-Joint Chiefs Chairman Gen. Mark Milley said in a June speech. “And what you can see, you can shoot.”

To connect warfighters more quickly, the Pentagon has prioritized the development of interwoven combat networks — called Joint All-Domain Command and Control — to automate the processing of optical, infrared, radar and other data across the armed forces. But the challenge is huge and fraught with bureaucracy.

Christian Brose, a former staff director for the Senate Armed Services Committee now at the defense technology company Anduril, is among the military reform advocates who nevertheless believe “we may be winning here to a certain extent.”

“The issue may be less about whether this is the right way to do it and more about how we actually do it – and on the fast timelines required,” he said. Brose’s 2020 book “The Kill Chain” argues for urgent restructuring to match China in the race to develop smarter and cheaper networked weapons systems.

To that end, the U.S. military is working hard on “human-machine teaming.” Dozens of uncrewed air and sea vehicles currently monitor Iranian activity. U.S. Marines and Special Forces also use Anduril’s autonomous Ghost mini-copter, sensor towers and counter-drone technology to protect American forces.

Advances in computer vision have been essential. Shield AI’s software lets drones operate without GPS, communications or even remote pilots. It’s the key to the company’s Nova quadcopter, which U.S. special operations forces have used to scout buildings in conflict zones.

On the horizon: The Air Force’s “loyal wingman” program intends to pair manned aircraft with autonomous ones. An F-16 pilot might, for example, send out drones to scout, draw enemy fire or attack targets. Air Force leaders are aiming for a debut later this decade.

Racing to full autonomy

The “loyal wingman” timeline doesn’t quite mesh with Replicator’s, which many consider overly ambitious. The Pentagon’s vagueness on Replicator, meanwhile, may keep competitors guessing, even as designers may still be mulling over features and mission goals, said Paul Scharre, a military artificial intelligence expert and author of the book “Four Battlegrounds.”

Anduril and Shield AI, each backed by hundreds of millions in venture capital funding, are among the companies vying for the contracts.

Nathan Michael, CTO of Shield AI, estimates the company will have an autonomous swarm of at least three drones ready within a year, using its V-BAT aircraft. The U.S. military currently uses the V-BAT — without artificial intelligence — on naval vessels, in counter-narcotics missions and to support Marine expeditionary units, the company says.

It will take some time before larger swarms can be reliably fielded, Michael said. “It’s all about crawl, walk, run — unless you’re setting yourself up to fail.”

The only weapons systems that Shanahan, the inaugural Pentagon AI chief, currently trusts to operate autonomously are wholly defensive ones, such as ships’ Phalanx anti-missile systems. He is less concerned about autonomous weapons making decisions on their own than about systems that don’t work as advertised or kill noncombatants or friendly troops.

Craig Martell, the Pentagon’s current chief digital and artificial intelligence officer, is determined not to let that happen.

“Regardless of the autonomy of the system, there will always be a responsible agent that understands the limitations of the system, has trained well with the system, has justified confidence of when and where it’s deployable – and will always take responsibility,” said Martell, who previously led machine learning at LinkedIn and Lyft. “That will never not be the case.”

As for when AI will be reliable enough for lethal autonomy, Martell said there’s no point in generalizing. Martell, for example, trusts his car’s adaptive cruise control but not the technology that’s supposed to keep it from changing lanes. “As a responsible agent, I wouldn’t deploy that except in very limited situations,” he said. “Now extrapolate that to the military.”

Martell’s office is evaluating potential generative AI use cases—it has a dedicated task force for that—but is focusing more on AI testing and evaluation in the development phase.

One pressing challenge, says Jane Pinelis, principal AI engineer at Johns Hopkins University’s Applied Physics Lab and former director of AI assurance at Martell’s office, is recruiting and retaining the talent needed to test AI technology. The Pentagon can’t compete on salaries. PhDs in computer science with AI-related skills can earn more than top-ranking generals and admirals in the military.

Testing and evaluation standards are also immature, as a recent National Academy of Sciences report on Air Force artificial intelligence highlighted.

Could that mean the United States will one day, under duress, field autonomous weapons that don’t fully pass muster?

“We’re still operating under the assumption that we have time to do this as accurately and diligently as possible,” Pinelis said. “I think if we’re less than ready and it’s time to take action, somebody’s going to have to make a decision.”
