
The AI-piloted fighter age has arrived


In a scene straight out of a science fiction film, a US AI-piloted fighter has engaged in a dogfight against a manned fighter, heralding an autonomous future for aerial combat.

This month, The War Zone reported that the X-62 test jet, a modified F-16, successfully conducted a first-of-its-kind dogfight against a manned F-16.

The test flight, which kept a pilot in the cockpit as a fail-safe, was part of the US Defense Advanced Research Projects Agency’s (DARPA) Air Combat Evolution (ACE) program, The War Zone report said.

The X-62A, also known as the Variable-stability In-flight Simulator Test Aircraft (VISTA), can mimic the flight characteristics of other aircraft, making it an ideal platform for supporting work like ACE.

The X-62A completed 21 test flights in support of ACE between December 2022 and September 2023, with nearly daily reprogramming of the “agents.”

Still, DARPA and the US Air Force emphasize that the program’s goal goes beyond dogfighting: it aims to improve the fleet by ensuring aircraft can fly with the best-performing AI pilot available at any given time.

ACE’s machine-learning agents are trained on historical data so they can make decisions in current and future engagements. While understanding and verifying AI in flight-critical systems remains a challenge, the X-62A’s safety features have been instrumental in allowing machine-learning agents to be used in real-world settings.
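DARPA has not published how the ACE agents work internally, but at a very high level a machine-learning agent of this kind maps what it observes about an engagement to a maneuver and refines that mapping from the outcomes of simulated fights. The short Python sketch below is a toy illustration of that idea only; the state encoding, maneuver list and reward values are all invented for the example.

    # Toy illustration only: a tiny learning "agent" that picks a maneuver
    # from a coarse description of the engagement and updates its estimates
    # from simulated outcomes. Real ACE agents are not publicly documented.
    import random

    MANEUVERS = ["break_left", "break_right", "climb", "extend"]
    q_table = {}  # expected value of each maneuver in each coarse state

    def observe(bandit_bearing_deg, closure_rate):
        """Reduce the engagement to a coarse, hashable state."""
        bearing_bin = int(bandit_bearing_deg // 45) % 8
        return (bearing_bin, closure_rate > 0)

    def choose_maneuver(state, epsilon=0.1):
        """Epsilon-greedy choice: mostly exploit, occasionally explore."""
        values = q_table.setdefault(state, {m: 0.0 for m in MANEUVERS})
        if random.random() < epsilon:
            return random.choice(MANEUVERS)
        return max(values, key=values.get)

    def update(state, maneuver, reward, learning_rate=0.1):
        """Nudge the stored value toward the outcome of a simulated fight."""
        values = q_table.setdefault(state, {m: 0.0 for m in MANEUVERS})
        values[maneuver] += learning_rate * (reward - values[maneuver])

    # One simulated decision step: observe, act, then learn from the result.
    state = observe(bandit_bearing_deg=120, closure_rate=50)
    action = choose_maneuver(state)
    update(state, action, reward=1.0)  # the reward would come from a simulator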

The ACE program, set to be tested later this year with US Secretary of the Air Force Frank Kendall in the cockpit, is part of the Collaborative Combat Aircraft (CCA) drone program, which aims to acquire low-cost drones with high autonomy.

The underlying technology being developed under ACE could have broad applications, with potential adversaries and global competitors such as China actively pursuing developments in the emerging field.

Dogfighting is one of the most challenging aspects of air-to-air combat, but advances in AI could revolutionize it. The proliferation of increasingly stealthy fighter aircraft means opposing sides are unlikely to detect each other at beyond-visual-range (BVR) distances, increasing the chances of a close-range dogfight.

In 2020, an AI developed by US-based Heron Systems bested a human pilot with more than 2,000 hours on the F-16, winning 5-0 using only its onboard cannon in a simulated dogfight.

The human pilot and Heron Systems’ AI fought in five basic maneuver scenarios, with the AI operating within the F-16’s maneuvering limits.

The AI’s superhuman accuracy allowed it to score cannon kills against the human pilot from seemingly impossible angles. This ability means an AI-piloted fighter could decimate a manned fighter fleet using only a few rounds of ammunition, and thus at marginal cost.
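Those “impossible angles” ultimately come down to solving intercept geometry faster and more precisely than a human can by eye. The Python sketch below is a hypothetical, simplified two-dimensional illustration of that arithmetic; it is not drawn from any real fire-control system, and the numbers in the example are invented.

    # Hypothetical 2D sketch of a gun lead-angle solution: find the aim
    # direction that puts a round where a crossing target will be.
    import math

    def lead_solution(target_pos, target_vel, bullet_speed):
        """Return (aim unit vector, time of flight) or None if no intercept."""
        px, py = target_pos
        vx, vy = target_vel
        # Solve |target_pos + target_vel * t| = bullet_speed * t for t.
        a = vx * vx + vy * vy - bullet_speed ** 2
        b = 2 * (px * vx + py * vy)
        c = px * px + py * py
        if abs(a) < 1e-9:
            return None  # bullet and target speeds too close for this sketch
        disc = b * b - 4 * a * c
        if disc < 0:
            return None  # no intercept possible
        t = (-b - math.sqrt(disc)) / (2 * a)  # earliest positive root
        if t <= 0:
            return None
        ix, iy = px + vx * t, py + vy * t  # predicted intercept point
        dist = math.hypot(ix, iy)
        return (ix / dist, iy / dist), t

    # Example: target 800 m ahead, 300 m right, crossing at 250 m/s;
    # cannon round travelling at roughly 1,050 m/s.
    solution = lead_solution((800, 300), (-250, 0), 1050)
    if solution:
        (ax, ay), tof = solution
        print(f"aim bearing {math.degrees(math.atan2(ay, ax)):.1f} deg, "
              f"time of flight {tof:.2f} s")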

AI-piloted aircraft are not subject to human limitations and can fly faster, maneuver quicker and shoot better with constantly improving sensors, processors and software. 

The apparent dogfighting superiority of AI pilots has raised questions about whether human pilots will still be needed for future aerial combat. While AI performs specific tasks well, it lacks a human pilot’s general intelligence and judgment.

Combining AI precision with human decision-making may thus be the best approach to integrating AI in future aerial combat.

In a January 2022 article for The New Yorker, Sue Halpern argues that AI will change human pilots’ roles and only partially replace them.

Halpern predicts that AI-piloted fighters will fly alongside manned fighters, with human pilots directing squads of unmanned aircraft. She also notes that the ACE program is part of a more significant effort to “decompose” fighter units into smaller, cheaper units, as the US may be unable to produce the number of manned fighters and train the pilots needed for a great power conflict with China.

However, Halpern notes that trust in AI is a significant issue: the main challenge is getting human pilots to trust their AI counterparts. A lack of trust may lead human pilots to constantly watch over their AI wingmen, defeating the purpose of having AI pilots in the first place.

Tim McFarland notes in a 2022 article in the peer-reviewed International Journal of Law and Information Technology that, in a military context, trust in AI can be understood as confidence that the AI will act as expected without constant supervision.

McFarland explains that people tend to rely on AI in situations involving risk and uncertainty, such as navigating a vehicle or identifying military targets, because past experience has shown the AI to be trustworthy. He notes that establishing trust in AI requires setting clear expectations, similar to a contract.

For example, McFarland says, an AI system may be required to perform specific functions under certain conditions, such as identifying targets in a military operation, and its reliability in meeting those expectations is a critical factor in determining its trustworthiness.
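One way to make such expectations concrete is to write them down as machine-checkable conditions and verify every proposed action against them. The Python sketch below is a hypothetical illustration of that “contract” idea only; it does not describe any fielded system, and all of the rules and field names are invented.

    # Hypothetical sketch: expectations encoded as explicit, checkable rules,
    # with each proposed action verified against them before it is executed.
    from dataclasses import dataclass

    @dataclass
    class ProposedAction:
        kind: str                 # e.g. "identify", "track", "engage"
        target_confidence: float  # identification confidence, 0.0 to 1.0
        target_hostile: bool
        human_approved: bool

    CONTRACT = [
        ("only engage confirmed hostiles",
         lambda a: a.kind != "engage" or a.target_hostile),
        ("engagement requires high identification confidence",
         lambda a: a.kind != "engage" or a.target_confidence >= 0.95),
        ("engagement requires human approval",
         lambda a: a.kind != "engage" or a.human_approved),
    ]

    def check_contract(action):
        """Return the list of contract clauses the action would violate."""
        return [name for name, rule in CONTRACT if not rule(action)]

    action = ProposedAction(kind="engage", target_confidence=0.97,
                            target_hostile=True, human_approved=False)
    violations = check_contract(action)
    if violations:
        print("action blocked:", "; ".join(violations))
    else:
        print("action permitted under the agreed expectations")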

McFarland emphasizes that in high-risk scenarios where operators may not have direct control or communication with AI systems, especially in electronic warfare (EW)-heavy environments, developing reliable AI systems that can be trusted based on their performance is crucial.

Caitlin Lee and others note in a May 2023 Aerospace America article that training an AI pilot requires an enormous amount of data, and that the simulated environments used for that training may not reflect real-world combat conditions or the full complexity of dogfighting.

China has conducted its own simulated dogfights pitting AI against human pilots to avoid being left behind in the AI fighter pilot race.

In March 2023, the South China Morning Post (SCMP) reported that Chinese military researchers conducted a dogfight between two small unmanned fixed-wing aircraft—one with an AI pilot on board and the other controlled remotely by a human pilot on the ground. SCMP notes that the AI-piloted plane was superior in close-range dogfights, with its human opponent a constant underdog.

At the start of that dogfight, the human pilot made the first move to gain the upper hand, but the AI predicted his intentions, outmaneuvered him, countered the move and stuck close behind.

The SCMP report mentions that the human pilot attempted to lure the AI into crashing into the ground, but the AI moved to an ambush position and waited for him to pull up.

The human pilot then performed a “rolling scissors” maneuver, hoping the AI would overshoot, but he could not evade his AI opponent, forcing the research team to call off the simulation after 90 seconds.

SCMP mentions that while the US pioneered AI pilot research 60 years ago, China has caught up quickly, with its technology using just a fraction of the computing resources consumed by US projects. It also says China’s AI pilot is designed to operate on almost any People’s Liberation Army Air Force fighter.


