Somewhere inside a U.S. Central Command operations center, the kind of room lined with screens, humming with data feeds, and staffed by analysts working in shifts, a targeting workflow that once took a team of people the better part of a day now resolves in seconds. The AI flags a threat. A commander reviews the recommendation. A strike option appears. From the outside, a process that once required hours of back-and-forth through chains of command has been condensed into something nearly instantaneous. Project Maven is that compression. And it is no longer an experiment.
The program began modestly. In 2017, the Pentagon faced a problem that sounds almost unremarkable when stated plainly: too much video. At peak operations, military drones over conflict zones were producing more than 100,000 hours of footage a year, and human analysts, sometimes teams of five staring at screens all day, simply could not keep up. Threats went unnoticed. The footage was accumulating faster than it could be reviewed. The program’s founder, Marine Colonel Drew Kukor, proposed a bounded solution: use computer vision, the same kind of AI that could pick out a motorcycle in a marketplace scene from a Bond movie, to scan those feeds automatically and flag objects of interest. That was the concept. It seemed doable. Attainable, even.
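The core idea is easy to sketch in code: run a detector over every frame and surface anything above a confidence threshold. The version below is a deliberately generic illustration using an off-the-shelf model; the video path, the flagged classes, and the threshold are all stand-ins, not details of Maven’s actual pipeline.

```python
# Illustrative sketch only: generic frame-by-frame object flagging with an
# off-the-shelf detector. Nothing here reflects Maven's actual models or data.
import cv2
import torch
import torchvision

# A pretrained COCO detector stands in for whatever the real system uses.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

FLAGGED_CLASSES = {3, 8}   # hypothetical classes of interest (COCO: car, truck)
CONFIDENCE_FLOOR = 0.8     # arbitrary threshold for this sketch

cap = cv2.VideoCapture("drone_feed.mp4")  # placeholder input file
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV yields BGR uint8 frames; the detector expects RGB float tensors.
    rgb = frame[:, :, ::-1].copy()
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        pred = model([tensor])[0]
    for label, score, box in zip(pred["labels"], pred["scores"], pred["boxes"]):
        if score >= CONFIDENCE_FLOOR and int(label) in FLAGGED_CLASSES:
            # In place of a human screener, the machine raises the flag.
            print(f"frame {frame_idx}: class {int(label)} "
                  f"({score:.2f}) at {box.tolist()}")
    frame_idx += 1
cap.release()
```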
Key Information: Project Maven — Pentagon AI Warfare Program

| Field | Detail |
| --- | --- |
| Program Name | Project Maven (also known as the Algorithmic Warfare Cross-Functional Team) |
| Launched By | U.S. Department of Defense |
| Year Launched | 2017 — initiated under Deputy Defense Secretary Bob Work |
| Original Purpose | Process drone surveillance footage using computer vision to detect objects and flag threats |
| Founding Vision | Marine Colonel Drew Kukor — sought a unified AI battlefield map, described as “Google Earth for war” |
| Original AI Contractor | Google — withdrew in 2018 after 3,000+ employee protests; declined to renew contract |
| Current Primary Contractor | Palantir Technologies (since 2024) — developed Maven Smart System |
| Data Feeds Integrated | ~179 feeds — satellite imagery, drone video, signals intelligence, open-source data |
| Kill Chain Speed | Compressed from hours to minutes or seconds — target detection to strike workflow |
| Iran Operations (2026) | 7,800 targets struck; 1,000 in first 24 hours of Central Command operations |
| Ukraine Application | Deployed by European Command from February 2022; algorithms retrained overnight for snow/tank conditions |
| Surveillance Volume | Over 100,000 hours of drone video footage collected annually at peak operations |
| LLM Integration | Large language models (including Anthropic’s Claude) used to speed up decision workflows — not direct targeting |
| Key Reference Book | Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare — Katrina Manson (Bloomberg, 2026) |
What came next was anything but limited. Google was the first company to contribute machine learning expertise, an arrangement that collapsed spectacularly in 2018 when more than 3,000 employees signed an open letter protesting the company’s involvement in weapons development. A number of engineers resigned. Google declined to renew the contract and published corporate AI principles that explicitly ruled out work on weapons systems. The episode was awkward for everyone involved, and it exposed a Silicon Valley fault line that has never fully closed: defense officials who saw AI capability as a national security necessity on one side, and engineers who saw autonomous targeting as an ethical red line on the other. Neither side has fully persuaded the other since.

Palantir Technologies filled the void Google left. Founded with early backing from the CIA and long focused on government intelligence work, Palantir was untroubled by the moral ambiguity that had ended Google’s involvement. By 2024 it was Maven’s primary technology contractor, building what it calls the Maven Smart System: a platform that fuses battlefield data from roughly 179 separate feeds (satellite imagery, drone video, signals intelligence, open-source data) into a single digital interface. AI detections are overlaid on one map that commanders can act from. Alex Karp, Palantir’s CEO, has been blunt about what he believes this accomplishes: compressing the kill chain from hours to seconds, he claims, renders enemies obsolete. It is the kind of claim that sounds promotional until you look at how the system has actually been used.
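The architectural heart of such a system is arguably less the detection models than the normalization: many incompatible feeds reduced to one schema that a single map can render. A minimal sketch of that pattern, with a schema and field names that are my own illustration rather than Palantir’s, might look like this:

```python
# Minimal sketch of multi-feed fusion: heterogeneous intelligence sources
# normalized into one detection schema so they can share a single map layer.
# The schema and field names are illustrative, not the Maven Smart System's.
from dataclasses import dataclass
from typing import Iterable

@dataclass(frozen=True)
class Detection:
    source: str        # e.g. "satellite", "drone", "sigint", "osint"
    lat: float         # geolocation, decimal degrees
    lon: float
    label: str         # detected object class
    confidence: float  # model confidence, 0..1
    timestamp: float   # unix epoch seconds

def fuse(feeds: Iterable[Iterable[Detection]],
         min_confidence: float = 0.7) -> list[Detection]:
    """Merge every feed into one time-ordered stream, dropping detections
    below the confidence floor. Real fusion would also deduplicate the same
    object seen by different sensors; this sketch skips that."""
    merged = [d for feed in feeds for d in feed if d.confidence >= min_confidence]
    return sorted(merged, key=lambda d: d.timestamp)
```

In this toy version, everything a commander sees on the unified map is just a list of Detection records, regardless of which of the 179 feeds produced them.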
The clearest example to date of where this leads is the 2026 Iran operations. U.S. Central Command struck 1,000 targets in the first day of operations, according to Bloomberg reporter Katrina Manson, whose book on Project Maven draws on extensive access to the program’s history. Within ten days the count had risen to roughly 6,000. By the time updated public figures were released, the total stood at 7,800. These are not static, pre-selected targets stored in a database.
Many are dynamic: moving vehicles, shifting positions, threats that were not targets an hour earlier. Part of what makes that speed possible is Maven’s integration with large language models, including Anthropic’s Claude, to accelerate decision workflows. The LLMs are not locating targets through cameras. They expedite the surrounding processes (analysis, documentation, decision routing) so that human commanders can act on what the AI has already surfaced.
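What that looks like mechanically is not public. A plausible, deliberately simplified sketch is an LLM drafting the brief a human then reviews, here via the Anthropic API. The prompt, the record schema, and the model id are assumptions for illustration; the model’s only role in this sketch is writing, not targeting.

```python
# Sketch of the supporting role described above: an LLM turns a raw detection
# record into a human-readable brief so a commander can review it faster.
# Prompt, schema, and model id are illustrative; the LLM selects nothing.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def draft_brief(detection: dict) -> str:
    """Ask the model for a short summary of a detection record for human review."""
    message = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model id
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": (
                "Summarize this sensor detection record as a three-sentence "
                f"brief for a human reviewer: {detection}"
            ),
        }],
    )
    return message.content[0].text
```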
Ukraine offered an earlier, rougher demonstration of the same logic. When Russian forces invaded in February 2022, European Command quickly stood up a targeting cell, feeding Maven satellite feeds it had never been trained on: snowy terrain, tank formations, conditions the algorithms were not originally built for. Microsoft and Amazon engineers were called in to retrain the models overnight. Within weeks the system was picking out details in video footage that human screeners were missing, and the gap between an American detection and a Ukrainian strike was sometimes measured in seconds.
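In machine learning terms, that overnight scramble is domain adaptation: continue training a pretrained detector on freshly labeled frames from conditions it has never seen. A toy version, with a synthetic batch standing in for real labeled snow-and-tank imagery, looks like this:

```python
# Toy sketch of overnight retraining: fine-tune a pretrained detector on
# newly labeled frames from an unfamiliar domain. The synthetic batch below
# stands in for real labeled imagery; hyperparameters are arbitrary.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

# One fake 512x512 frame with a single labeled bounding box (class 1).
images = [torch.rand(3, 512, 512)]
targets = [{
    "boxes": torch.tensor([[100.0, 100.0, 200.0, 200.0]]),
    "labels": torch.tensor([1]),
}]

# In train mode, torchvision detection models return their component losses.
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Looping that step over a night’s worth of newly labeled frames is, in miniature, the kind of work those engineers were doing.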
It is hard to sit with that for long without dwelling on the consequences. The government has been careful not to say publicly how much human judgment actually sits between AI detection and weapon strike. What this means for international law, for accountability, and for the precedents being set in live conflicts remains genuinely unclear. The technology is moving faster than the doctrine around it, and the doctrine is moving faster than any meaningful public conversation about what all of this implies. In those command centers, data keeps piling up on the screens. The algorithms keep being retrained. The kill chain keeps getting shorter.