AI, weapons, and state reason: why the Pentagon clause is not enough


This week has shed a troubling light on the intersection between artificial intelligence and military applications.

While Anthropic maintains a firm stance against using its Claude model in mass surveillance or autonomous weapons, companies such as OpenAI, Google (with Gemini) and xAI (with Grok) have adopted the standard proposed by the United States Department of Defense.

This framework allows AI to be used for “all lawful uses” without special additional restrictions.

This shift reflects a broader change in the relationship between big tech and the defense sector: a provision allowing “any lawful use” opens the door to integrating advanced models into military systems, including classified ones.

OpenAI started this trend in 2024, when it removed the explicit ban on “military and warfare” applications from its policies, replacing it with a general commitment not to “develop or use weapons.”

In 2025, Google abandoned its commitment to avoid using AI in weapons or intrusive surveillance, opting for a more flexible approach: deploying the technology when “the benefits outweigh the foreseeable risks.”

At the same time, xAI has formalized a plan to bring Grok into classified systems, fully adhering to the “all lawful uses” criterion set by the Department of Defense.

The picture is clear. The defense sector is trying to gradually replace Claude (now a mainstay in its intelligence systems) with more permissive alternatives, putting pressure on Anthropic, which holds to its limits on the surveillance of ordinary citizens and on fully autonomous weapons.

Meanwhile, there are disagreements within Google and OpenAI themselves: a letter signed by Google employees demands clear prohibitions on Gemini being used for domestic surveillance or for armed operations without human intervention.

This conflict is not limited to the tension between governments and corporations; it also plays out in the internal dynamics of the technology companies themselves.

In this context, it is easy to draw analogies to science fiction, evoking scenarios such as a militarized Skynet.

However, such exaggerated comparisons distract from real risks that do not require apocalyptic narratives to be taken seriously.

In reality, the concern is not an autonomous, rebellious superintelligence but a set of more concrete and immediate changes: the opacity of these models, the acceleration of decision-making cycles, and a geopolitical race that rewards the early deployment of immature technologies.

Ukraine illustrates this dynamic: drones with increasing levels of autonomy (from terminal guidance to area operations) can continue their missions after losing contact with human operators, although this autonomy is usually gradual and does not involve target selection without initial human supervision.

These systems do not have a “will” of their own, but they can recognize patterns, prioritize targets, and act amid saturated data links and electronic countermeasures.

This raises the risk of errors in target selection. Embedded in algorithmic structures, these systems could lead human commanders to delegate more authority than is publicly acknowledged, blurring political and legal responsibility.

First, there is the technical risk. Military AI is trained on incomplete, fragmented, or manipulable data, which can lead to unpredictable failures in real combat scenarios, a danger exacerbated by adversaries seeking to poison information or induce erratic behavior.

Second, there are strategic implications. Incorporating artificial intelligence into early-warning and response systems can dramatically compress decision windows, displacing human reasoning at critical moments, especially where the nuclear umbrella is concerned.

Finally, ethical and legal challenges arise. The greater the automation in the decision chain, the more complex the attribution of responsibility for lethal mistakes.

However, it would be a mistake to condemn AI in the military sphere wholesale. The same technology that increases the risks of escalation or error can, if handled properly, minimize collateral damage, improve the accuracy of target identification, and strengthen basic cyber defenses.

OpenAI’s work with DARPA on cybersecurity measures to protect critical infrastructure, for example, demonstrates the value of defensive AI applications under strict oversight protocols.

Rejecting these uses on principle would be as reductive as accepting any application under the “all lawful uses” principle.

The central debate is not whether AI will be integrated into military systems (it already is, and its presence will grow), but which applications should be banned and what safeguards the rest require.

Hence the problem with the standard set by the Pentagon: “all lawful uses” represents a bare minimum in a field where international regulation moves slowly in the face of technological innovation.

Europe, including Spain, cannot remain on the sidelines of this discussion. Although the EU has moved forward with its AI Act, which restricts many high-risk uses in peacetime, the purely military sphere is deliberately excluded, even though dual-use (civilian-military) technologies are regulated.

If the major platforms accept the current model without objection, and if global regulation of autonomous weapons remains paralyzed, the room to set limits based on humanitarian principles shrinks dramatically.

A balanced approach does not mean closing the door to AI in defense, but imposing three basic requirements:

1. Prohibit systems that select and attack human targets without meaningful human control.

2. Exclude configurations where unpredictability violates international humanitarian law.

3. Subject the remaining applications to independent audits, transparency requirements, and effective political accountability mechanisms.

The dilemma is clear: either states and societies establish verifiable limits and effective human control over lethal force now, or the contracts and dynamics of future conflicts will end up defining, de facto, the boundaries we avoid discussing today.
