AI-DEFENSE-INDUSTRIAL: AN INDUSTRY GETS MORE COMPLEX
- Aaditya Divekar

The use of AI in the warzone without Congressional approval is blurring the line on how much lethal force the President is authorized to use, especially as fewer human lives are sent to the front. What makes the modern evolution of the military-industrial complex dangerous is the seemingly seamless integration of AI companies into this new mix.
DEFINING TRADITIONAL ‘DUAL USE’
Traditionally, “dual use” technologies have been widely considered beneficial capital allocations, as they have for decades driven advances in civilian spaces like aviation, NASA mission development, and healthcare. For example, ahead of the Artemis missions to the lunar surface and Mars, the Space Force is using a new procurement strategy that sources from commercial entities more than from traditional government contractors. As a result, the satellite industry has seen a renaissance in the last few years, providing companies like Apple and Amazon with low-earth orbit (LEO) internet capabilities and even supporting imaging needs for civilian city planning.
DUAL USE IN THE MODERN ERA
In recent years, however, “dual use” has retreated from its emphasis on civilian applications. The United States is increasingly comfortable asserting national security authority over civilian companies operating satellites or AI. The Trump Administration has asked Planet Labs to withhold images taken during the Iran War and has categorized certain AI companies as supply-chain risks for not cooperating with Department of Defense leadership. “Dual use” policy thus undermines its own aim of fostering quality civilian technology when companies shift their priorities to capitalize on defense procurement.
When foreign adversaries are retooling satellites into surveillance platforms, the Trump Administration feels added urgency to reimagine procurement, as it wants to compete on the same level as its adversaries despite concerns over privacy and national security goals. The U.S. Army is already integrating AI tools through Project ARIA. Instead of the traditional procurement process from Lockheed or Raytheon—which leaves the government with added expenses—new Army initiatives are looking directly to the private market to purchase cheap, ready-to-go equipment that can be configured in the field.
MODERN WARFARE
Warfare has changed significantly in recent decades. Largely from the sidelines, the United States watched intently as AI was integrated into the battlefields of the conflicts between Russia and Ukraine and between Israel and Gaza, and it has most recently been testing those lessons in its own war against Iran. The United States observed Iranian Shahed drones used by Russia against Ukraine, and now deploys LUCAS drones reverse engineered from the Shahed in its ongoing efforts to reopen the Strait of Hormuz and rescue downed pilots.
The Department of Defense has shifted its focus from in-house development of AI tools to procuring technology directly from the private sector. The low cost of the LUCAS drone, which augments traditional military capabilities, drove this change in strategy. The Trump administration is successfully signaling this shift in procurement strategy to the private sector.
Anduril Industries is one beneficiary of Trump’s new procurement policy. Although Anduril’s Lattice AI was developed before the start of the Iran War, Anduril has been heavily involved in deploying its technology in Iran in recent weeks, and it has received a $20 billion contract from the U.S. Army for use of its Lattice program. The model of integrating AI into traditional heavy-equipment manufacturing with human oversight has given genuine advantages to U.S. Presidents reluctant to put “boots on the ground.” For actors within the EU, and NATO’s recently admitted states, AI integration is paramount to meeting the current threat posed by Russia.
The use of AI itself as a war-fighting tool, however, is one of the more dangerous developments to result from the U.S.-Iran War. Both Claude and Gotham, Anthropic and Palantir’s respective models, were employed by the Department of Defense to analyze intercepted Iranian planning documents. Although these reports were fed back to human authority in the Department, the same body prosecuting the war in Iran has been increasingly authorizing the use of AI, specifically allowing Anduril’s Lattice AI to make more independent decisions.
This delegation of decision-making, especially when integrated with lethal drone usage, presents a moral problem that Congress has not taken seriously. The War Powers Act, one of Congress’s foremost tools for curbing the President’s war powers, has not been seriously considered in light of the U.S.-Iran war. In that light, allowing a drone to independently strike targets is equally jarring: a dangerous integration of AI into defense use that disregards the just war doctrine the Department of Defense theoretically adheres to. One of the key policy reasons behind the War Powers Act was to require deliberation, notice, and a pause for reliable information before committing to any military action. The opposite problem arises when AI overwhelms the chain of command with an endless influx of “intelligence.”
There is already evidence that AI defense applications have generated incorrect conclusions from the aggregate battlefield data they are fed. The IDF’s use of Lavender in the Gaza War produced a list of targets within two weeks of analyzing content, whereas a human counterpart division would have taken a year to produce a similar list. The IDF used many such AI-generated lists to compile and rank people, buildings, and movements by likelihood of affiliation with terrorist organizations. Extensive media reporting through March 2025 found that many of these targets were purely civilian in nature. “Dual use” classification can extend to whether a targeted site is strictly military, or used in conjunction with civilian activities; power plants are a prime example of “dual use” sites. These AI-generated target lists, however, amalgamate hospitals, power plants, residential apartment buildings, and other “dual use” sites into a single list. Striking these AI-generated targets without human verification has been shown to lead to devastating loss of life and resources. Although the public does not have information on the efficacy of such amalgamated lists, the U.S. also generated target lists using Palantir and Anthropic models in anticipation of its strikes against Iran.
AI has also played a role in war-propaganda efforts. Iran purchased AI-generated content depicting the outcomes of the Iran War in Iran’s favor, rendered in the style of a Lego movie. Following the viral success of the Iran War Lego movie, President Trump himself posted more AI propaganda on Truth Social. This post also went viral, and most viewers readily understood it to be a doctored image of Trump; nevertheless, this use of AI content to spread misinformation raises a genuine worry that AI slop (quickly produced, realistic visual content generated by AI) will become a tool in future information wars, garnering civilian support in ways that circumvent traditional media and press outlets.
PRIVATE VENTURE CAPITAL SEEKS TO PROFIT
The private markets are signaling that they will make the decision to turn to defense and AI much easier for companies still on the fence, and they are profiting significantly from companies that have pursued the new procurement strategy eagerly. In the D.C. region, where many defense startups locate for expediency and proximity, Q4 of 2025 signaled a major commitment to refunding established defense startups.
One major venture firm, a16z, has held an “American Dynamism” fund since the first Trump Administration. It initially presented itself as a fund that would bring manufacturing jobs back, but pivots happen often in the startup world. The current American Dynamism fund holds $1.2 billion for future bets on national security needs and for reinvesting in proven successes like Anduril. The irony in this firm’s pivot to defense is that its founder, Marc Andreessen, founded Netscape for civilian internet use after the Defense Department’s initiation of DARPA. Using his profits, Andreessen invested in companies clearly interested in a more open internet in the civilian sector. That is the traditional function “dual use” policy was intended to serve in American society. The more dangerous implication here is that a “reverse dual use” is being spearheaded by the Trump Administration, and clearly articulated by Vice President Vance: defense priorities are being funded with no apparent post-war or long-term civilian use conceived, and no safety regulations considered.
CONCLUSION
With some success, AI usage in active warzones around the world has shown that intelligence gathering is no longer bound by the limits of traditional human collection; it is also burdening war-fighting by providing incomplete and incorrect intelligence. AI-focused defense startups, and the profits earned when those startups win lucrative defense contracts, are proving to the Trump Administration that its new procurement thesis has merit. Private equity and venture capital are willingly signing up for this new era in defense procurement.
*The views expressed in this article do not represent the views of Santa Clara University.
