US threat assessment elevates AI from tool to strategic risk factor
ODNI’s 2026 threat assessment frames AI as a strategic, cross-cutting driver of modern warfare and coercion, names China the top competitor and flags rising autonomy risks; Europe must reflect these implications in procurement and resilience planning.
Key facts
- ODNI’s 2026 Worldwide Threat Assessment calls AI a “defining technology for the 21st century” and says it is being used in combat to influence targeting and streamline decision-making.
- The report identifies China as the United States’ “most capable competitor” in AI, citing adoption at scale enabled by deep talent pools, large datasets, state funding and global partnerships.
- Defense One notes the 2026 report and hearing give little attention to AI in election interference and disinformation compared with 2024, even as EU officials warn of AI-driven “cognitive warfare.”
ODNI’s 2026 Worldwide Threat Assessment elevates AI from an enabling technology to a strategic variable that shapes how major actors compete and fight. Unlike the report’s enduring country chapters on China, Russia, Iran and North Korea, AI is treated as a cross-cutting force affecting each: the assessment calls it a “defining technology for the 21st century,” notes its use in combat, and argues that rapid progress outside the United States is eroding U.S. economic and national-security advantages.
China is singled out as “the most capable competitor,” with ODNI attributing Beijing’s momentum to adoption at scale supported by large datasets, a substantial talent pool, government funding and growing global partnerships. For European officials and primes, the practical implication is that Chinese standards, tooling and data relationships could become embedded in third-country defence and security ecosystems, complicating interoperability, export-control compliance and supply-chain assurance for EU programmes.
The assessment also flags risks around autonomy in warfare, emphasising the need for careful engineering and mitigation before autonomous capabilities are broadly deployed. This aligns with European debates on human control, rules of engagement, and safety cases for autonomous effects, particularly as European armed forces accelerate counter-UAS, loitering munitions and sensor-fusion programmes where AI-mediated target development and engagement timelines are compressing.
A noteworthy gap is what the Defense One reporting describes as a reduced emphasis on AI’s role in election interference, disinformation and autocracy compared with the 2024 cycle, despite the continued proliferation of AI-generated information operations attributed to authoritarian actors. The article contrasts this with EU messaging, citing High Representative Kaja Kallas’ warning that AI is taking “cognitive warfare” to a new level. For Europe, the divergence matters: it suggests transatlantic alignment on countering AI-enabled influence operations may become less consistent at the U.S. federal level, even as the operational threat to European democratic processes and force-deployment narratives persists.
Finally, U.S. testimony referenced an alleged China-run data-extortion operation using “an AI tool” against international government and critical social sectors. For European resilience planning, the combined picture is an integrated threat surface in which AI amplifies cyber extortion, accelerates targeting and decision cycles, and increases the velocity of narrative operations—pressing EU defence procurement toward assured compute, auditable models, and human-governed autonomy rather than ad hoc adoption.
Source: Defense One