Monday, March 30, 2026

Algorithmic War: How AI is Redrawing Geopolitics — and Why the World Isn't Ready


From the skies over Iran to the waters of the Pacific, artificial intelligence systems are already making decisions that once required decades of human training. The opportunities are enormous. The risks, potentially civilizational.


On February 27, 2026, just hours before the first American missiles struck Iran, the Pentagon did something that would have seemed unthinkable a decade earlier: it declared one of the world's most promising artificial intelligence companies (Anthropic, creator of the Claude assistant) a national supply chain risk. The reason wasn't espionage or sabotage. It was a philosophical disagreement about how far a machine should go in the decision to kill.

That tension is no accident. It is the most visible fracture in the architecture of modern war.

In the hours following the announcement, Maven Smart System, the AI-powered targeting platform developed primarily by Palantir with components from Amazon, Microsoft, and the computer vision startup Clarifai, processed thousands of targets across Iranian territory. According to US Central Command, 1,000 targets were struck in the first 24 hours, approximately twice the scale of the initial bombardment of Iraq in 2003. Within ten days, the number exceeded 5,000. For the first time in a conflict of this magnitude, semi-autonomous attack drones were described by the regional commander as "indispensable."

Welcome to algorithmic warfare. It has already begun.


The Promise: Speed, Precision, and Strategic Advantage

To understand why the world's militaries are investing tens of billions of dollars in military AI, you first need to understand the problem they're trying to solve: the speed of modern chaos has outpaced human cognitive capacity.

In a high-intensity conflict, a commander may simultaneously receive data from satellites, drones, ground sensors, intercepted communications, and human intelligence sources. Processing that information in real time to make correct tactical decisions is, quite literally, impossible for the human brain without assistance. AI doesn't replace the commander; in theory, it amplifies decision-making capacity.

Maven Smart System does exactly that. Originally designed in 2017 to analyze drone video using computer vision, the system has evolved into a command-and-control platform that generates what the Pentagon calls "points of interest": potential targets automatically identified from movement patterns, thermal signatures, and comparison against databases of known threats. Today, every US military command worldwide uses a version of the system. In 2025, NATO began operating its own variant.
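To make that "points of interest" step concrete, here is a minimal Python sketch of such a filter. Everything in it is invented for illustration (the class list, the 0.85 threshold, the Detection fields); Maven's actual pipeline is not public. The one design point it encodes is that the software only surfaces candidates, while authorization stays with a human.

```python
from dataclasses import dataclass

# Hypothetical illustration only: Maven's real pipeline is not public.
# Class names, thresholds, and fields here are invented for this sketch.

@dataclass
class Detection:
    label: str         # e.g. "armored_vehicle", produced by a vision model
    confidence: float  # model score in [0, 1]
    lat: float
    lon: float

KNOWN_THREAT_CLASSES = {"armored_vehicle", "artillery", "launcher"}
FLAG_THRESHOLD = 0.85  # arbitrary cutoff chosen for this sketch

def flag_points_of_interest(detections: list[Detection]) -> list[Detection]:
    """Return detections worth surfacing to a human analyst.

    Nothing here authorizes action: flagged items are only queued for
    review, mirroring the "human decides" framing in the text.
    """
    return [
        d for d in detections
        if d.label in KNOWN_THREAT_CLASSES and d.confidence >= FLAG_THRESHOLD
    ]

if __name__ == "__main__":
    feed = [
        Detection("armored_vehicle", 0.93, 35.70, 51.42),
        Detection("civilian_truck", 0.97, 35.71, 51.40),   # not a threat class
        Detection("artillery", 0.62, 35.69, 51.45),        # below threshold
    ]
    for poi in flag_points_of_interest(feed):
        print(f"POINT OF INTEREST: {poi.label} ({poi.confidence:.2f}) at {poi.lat},{poi.lon}")
```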

The documented advantages are real. In conflicts like Ukraine, where both sides have deployed millions of drones for reconnaissance and physical attacks, AI-assisted recognition systems have demonstrated the ability to identify armored vehicles, artillery positions, and troop movements with a precision that would have required teams of analysts working for days. According to a 2025 analysis by the Center for Strategic and International Studies, AI-assisted targeting systems reduced the average time between detection and strike decision from several minutes to under 30 seconds in controlled scenarios.

In the maritime domain, the outlook is equally transformative. Since 2022, the Pentagon has been collecting vast quantities of imagery of Chinese military vessels in the Pacific, data that feeds automatic recognition models capable of identifying destroyers, aircraft carriers, and submarines with high precision. If China were to attempt an amphibious operation against Taiwan (something for which Washington anticipates operational capability by 2027), those models would constitute the first line of response in the conflict's opening hours.


The Risks: When the Algorithm Fails, Real People Die

But here lies the problem that Anthropic identified — and that the Pentagon would prefer to ignore: AI systems, however sophisticated, fail. And in lethal contexts, their failures have irreversible consequences.

In June 2025, during Replicator program tests at Channel Islands Harbor, California, an autonomous naval drone from L3Harris Technologies slipped into autonomous mode when an operator inadvertently sent a command that disabled the autonomy safety lock. The drone, designed to maintain a minimum distance of 80 meters from all objects, accelerated, swerved erratically, and ultimately capsized the boat towing it. The captain fell into the water. He survived, by three minutes.

That incident, which the industry classifies as a "fat-finger mistake," illustrates one of the fundamental principles of systems engineering: any sufficiently complex system will fail in ways its designers did not anticipate. And in armed systems with autonomous capability, that is not an acceptable bug. It is a potential catastrophe.

Jane Pinelis, who oversaw testing and evaluation at Maven in its early phases, put it with brutal clarity: "Perfection simply isn't possible. There will be errors from AI hallucinations, faulty data, and algorithmic drift. The only thing that makes sense is to plan for how AI will fail." That statement, made in 2023, remains the most honest assessment of the current state of the art.

The risks can be classified into four critical categories:

1. Misidentification errors. Computer vision models are extraordinarily good at identifying what they have seen before. They are dangerously poor at what they haven't. In real combat environments, with smoke, camouflage, lighting variations, and mixed civilian-military equipment, systems can confuse ambulances with armored vehicles and civilians with combatants; a minimal mitigation sketch follows this list. The strike on a girls' school in Iran that killed more than 175 people was attributed in reports to "outdated intelligence"; it was never confirmed whether target selection was algorithmically assisted.

2. Involuntary escalation. Autonomous systems can react to perceived threats faster than diplomatic mechanisms can intervene. In a high-tension scenario between nuclear powers, a drone that misinterprets a military exercise as an attack could trigger a chain of automatic responses impossible to halt before conflict escalates beyond any point of return.

3. Asymmetric proliferation. Military AI is not the exclusive domain of superpowers. Non-state groups in Ukraine, Gaza, and the Sahel have demonstrated the capacity to adapt commercial drones with basic guidance systems and open-source AI for attack missions. The technological threshold for building a semi-autonomous lethal system fell dramatically with the democratization of computer vision models. What today requires a state military could tomorrow be within reach of an organized criminal actor.

4. Opacity and accountability. When an autonomous system makes a lethal error, who is responsible? The programmer who wrote the algorithm? The commander who approved the deployment? The manufacturer who sold the hardware? International humanitarian law was designed for a world where humans pull the trigger. A coherent legal framework for assigning responsibility in attacks with significant autonomy does not yet exist.
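One partial mitigation for the misidentification risk in point 1 is to make a system abstain and defer to a human when its own output is ambiguous, rather than guess. The Python sketch below is illustrative only: the class probabilities are fabricated and the entropy cutoff is arbitrary, and a gate of this kind is not sufficient by itself, because a model can be confidently wrong on inputs unlike its training data.

```python
import math

# Illustrative abstention gate for risk #1 (misidentification): refuse to
# classify when the model's own class distribution is ambiguous.
# All numbers here are fabricated for this sketch.

ABSTAIN_ENTROPY = 1.0  # bits; arbitrary cutoff chosen for illustration

def entropy_bits(probs: list[float]) -> float:
    """Shannon entropy of a class distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def classify_or_abstain(class_probs: dict[str, float]) -> str:
    if entropy_bits(list(class_probs.values())) > ABSTAIN_ENTROPY:
        return "ABSTAIN: route to human analyst"
    return max(class_probs, key=class_probs.get)

# Unambiguous output: low entropy, the label passes through.
print(classify_or_abstain({"armored_vehicle": 0.96, "ambulance": 0.03, "truck": 0.01}))
# Ambiguous output: high entropy, exactly the mixed-signature case above.
print(classify_or_abstain({"armored_vehicle": 0.40, "ambulance": 0.35, "truck": 0.25}))
```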


The Global Balance: China, Russia, and a Race No One Can Win Alone

The United States is not alone in this race. China has invested massively in military AI as part of its People's Liberation Army modernization strategy. According to the Pentagon's 2025 annual report to Congress on Chinese military capabilities, the PLA has deployed facial recognition and vehicle identification systems in its South China Sea operations, and is developing drone swarm capabilities that could potentially saturate Taiwan's defense systems in a coordinated attack.

Russia, for its part, has used the conflict in Ukraine as a laboratory. Its Lancet drones, equipped with semi-autonomous guidance capabilities, have been responsible for numerous documented strikes against Ukrainian artillery and armored vehicles. The Ukrainian conflict has effectively been the first theater of war where AI applied to weapons systems has been tested at industrial scale — with consequences that academics at the Johns Hopkins Applied Physics Laboratory describe as "a dramatic acceleration of the learning curve for all actors."

The problem is that this technological race has no referee. Unlike nuclear weapons, where the Non-Proliferation Treaty, imperfect as it is, created a minimal governance framework, lethal autonomous weapons have no regulatory equivalent. Discussions within the framework of the UN Convention on Certain Conventional Weapons have been deadlocked for a decade, blocked largely by the United States, Russia, and China, all of which refuse to accept binding restrictions.


Recommendations: What Must Be Done Now

The situation is not irreversible. But the window for establishing norms that prevent worst-case scenarios is closing. These are the priority actions:

For governments:

The first step is to establish a principle of "meaningful human control" as a legal requirement for any armed system with autonomous target selection capability. This does not mean a human must pull every trigger, but that no system can initiate a lethal attack without a verifiable and documented human decision. This principle must be incorporated into both domestic legislation and international treaties.
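In software terms, that requirement amounts to a gate that will not release an engagement order without an attributable, recorded human decision. The sketch below is purely hypothetical: the field names and the hash-chained log are illustrative choices, not a description of any fielded system, but they show what "verifiable and documented" can mean in practice.

```python
import hashlib
import json
import time

# Hypothetical sketch of "meaningful human control" as a software gate:
# no engagement order is released without a recorded, attributable human
# decision. Everything here is invented for illustration.

class EngagementGate:
    def __init__(self):
        self._log: list[dict] = []
        self._prev_hash = "genesis"

    def request_engagement(self, target_id: str, operator_id: str,
                           approved: bool, rationale: str) -> bool:
        record = {
            "time": time.time(),
            "target_id": target_id,
            "operator_id": operator_id,  # who decided (attributable)
            "approved": approved,
            "rationale": rationale,      # why: makes the decision auditable
            "prev": self._prev_hash,     # chain entries: tampering is detectable
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._log.append(record)
        return approved  # release happens only on an explicit human "yes"

gate = EngagementGate()
released = gate.request_engagement("poi-0042", "op-jsmith", approved=False,
                                   rationale="possible civilian vehicles nearby")
print("release authorized:", released)  # -> False
```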

Second, governments must create independent audit mechanisms for military AI systems before operational deployment. The civilian equivalent would be aviation or pharmaceutical safety agencies: bodies with real authority to halt systems that fail to meet minimum reliability standards.

For the technology industry:

Companies developing general-purpose AI face an existential decision: if their systems can be adapted for lethal applications, they bear active responsibility for how that adaptation occurs. Anthropic's model — establishing clear red lines about unacceptable uses and litigating when the government attempts to force them — is a valuable precedent, though a costly one. The industry needs to develop shared risk assessment standards for military applications, analogous to biosafety standards in pathogen research.

For the academic and security community:

Urgently invest in research on the failure mechanisms of AI systems in real combat environments. Data from Ukraine and now Iran is invaluable but remains classified or dispersed. An initiative similar to the Intergovernmental Panel on Climate Change (an independent expert panel synthesizing evidence and proposing standards) could create the scientific consensus necessary to inform public policy.

For international organizations:

Relaunch negotiations on Lethal Autonomous Weapons Systems under a renewed mandate, with concrete deadlines and verification mechanisms. The model of the 1997 Ottawa Treaty on Anti-Personnel Mines, negotiated outside the UN framework precisely because that framework was too slow, offers a viable alternative. A core group of countries willing to establish rigorous standards can create sufficient normative pressure to eventually incorporate more reluctant actors.


Conclusion: AI Is Not the Problem. We Are the Problem

Retired General Jack Shanahan, who directed Project Maven from its inception, said it without ambiguity: "No LLM, anywhere, in its current form, should be considered for use in a fully lethal autonomous weapon system. Over-reliance on them at this stage is a recipe for catastrophe."

Technology, by itself, is neither good nor bad. A target recognition system can precisely identify a rocket launcher hidden in vegetation, saving civilian lives that would have been lost in an imprecise strike. Or it can confuse a farmer with a combatant and kill someone who was never a threat. The difference is not in the algorithm. It is in the incentives, oversight, and consequences that humans build — or fail to build — around it.

The world arrived at this crossroads because technology advanced faster than the institutional wisdom to manage it. It is not the first time. Nuclear energy, modified pathogens, chemical weapons: each generation produces its own version of this problem. What distinguishes military AI is the speed at which it spreads, the difficulty of detecting it, and the ease with which any actor can access basic versions of it.

The question is not whether militaries will use artificial intelligence. They already do. The question is whether humanity will be capable of establishing the rules of the game before the game establishes its own rules. And the clock, as always in these matters, is already running.


Glossary

Meaningful human control: The requirement that an armed system cannot initiate a lethal attack without a genuine, verifiable, and documented decision by a human being who understands the context and consequences of that decision.

Algorithmic drift: The gradual degradation of an AI model's accuracy when real-world environmental data diverges from the data on which it was trained, causing the system to produce increasingly unreliable outputs over time.

Drone swarm: Autonomous coordination of multiple unmanned aerial vehicles to execute collective missions, where each unit adapts its behavior in real time based on the actions of the others, without centralized human command at the tactical level.

AI hallucination: An error in which a language or vision model produces confident but factually incorrect outputs, with no internal mechanism to detect or flag the error — particularly dangerous in time-sensitive decision environments.

LAWS (Lethal Autonomous Weapon Systems): Armed systems capable of selecting and engaging targets without meaningful human control; the subject of international regulatory debate since 2014 under the UN Convention on Certain Conventional Weapons.

Maven Smart System: AI-powered military platform developed primarily by Palantir for the US Department of Defense, used for target identification, pattern-of-life analysis, and tactical decision support across all global US military commands.

Asymmetric proliferation: The spread of advanced armed technology to non-state actors or lower-capacity states, fundamentally altering traditional strategic balances and creating threat environments that conventional deterrence frameworks were not designed to address.

Computer vision: A field of artificial intelligence that enables machines to interpret and understand visual information from the world — including images and video — and make decisions based on that interpretation, with applications ranging from medical imaging to autonomous weapons targeting.

Points of interest: Pentagon terminology for potential targets automatically flagged by AI systems based on movement patterns, thermal signatures, and threat database comparisons, presented to human commanders for final decision-making.

Red lines: Explicit ethical and operational boundaries established by AI companies defining uses of their technology they will not support under any circumstances, regardless of contractual or governmental pressure.


References

Department of Defense. (2025). Annual Report to Congress: Military and Security Developments Involving the People's Republic of China. Office of the Secretary of Defense.

Future of Life Institute. (2023). Autonomous Weapons: An Open Letter from AI & Robotics Researchers. futureoflife.org

Johns Hopkins Applied Physics Laboratory. (2025). Unmanned Systems in Modern Warfare: A Technical Assessment. APL Technical Digest, Vol. 43.

NATO. (2023). Principles of Responsible Use of AI in Defence. Brussels: NATO Headquarters.

Roff, H. & Moyes, R. (2024). Meaningful Human Control, Artificial Intelligence and Autonomous Weapons. Article 36 Briefing Paper.

Scharre, P. (2023). Four Battlegrounds: Power in the Age of Artificial Intelligence. W.W. Norton & Co.

US Central Command. (2026). Operational statements on AI-assisted targeting in the Iran theater. centcom.mil

Walsh, T. (2024). Machines That Think: The Future of Artificial Intelligence. Updated Edition. Prometheus Books.

Bloomberg Businessweek. (2026, March). Inside the Pentagon's Decade-Long Quest to Build Combat-Ready AI.
