The Algorithmic Battlespace: How AI Is Rewriting the Rules of Influence

As leaders and security officials gathered in Munich, the most consequential shift in the information war concerned not the messages themselves but the systems that decide what any of us sees in the first place. The latest analysis from NATO’s Strategic Communications Centre of Excellence argues that power is draining from narratives and pooling instead in algorithms, recommendation engines and AI agents. In this emerging environment, influence is a contest over the machinery of attention, and democracies, it warns, are dangerously underprepared.

The Next‑Generation Information Environment report from the NATO Strategic Communications Centre of Excellence (the StratCom COE) argues that the centre of gravity in information competition is shifting from what is said to what is surfaced, suppressed, ranked and routed by automated systems. In its framing, the decisive contest is increasingly fought not in the creation of individual narratives but in the curation layer: recommendation engines, search ranking, and agentic systems that filter information at scale. This makes influence operations less about persuading humans directly and more about shaping the machine-to-machine processes that determine what people ever get to see.

The report is a scenario-driven synthesis drawn from expert discussions held under Chatham House–style conditions. That approach is well-suited to surfacing weak signals and cross-domain connections, such as AI agents, hyperpersonalisation, poisoned data, neurodata, and great-power rivalry. But it also leaves gaps: limited traceability from claim to evidence, no quantified likelihoods, and little prioritisation logic beyond expert judgement. Those limitations matter because the report’s most consequential claims are systemic: they imply changes to regulation, procurement, crisis protocols and strategic communications, not merely tweaks to content moderation.

Triangulation with harder material strengthens the direction of travel. EU law already treats platform recommendation systems as a locus of systemic risk (Digital Services Act) and imposes duties around transparency and risk mitigation for very large platforms. The EU AI Act adds controls relevant to synthetic content disclosure and certain forms of manipulation. Peer-reviewed research supports the plausibility of data poisoning attacks against large model ecosystems, and recent work shows how emotionally manipulative patterns can appear in AI companion applications—precisely the category of “therapeutic/companionship bots” the StratCom report flags as an emerging vector.

For democracies, the practical conclusion is twofold. First, resilience can no longer be built principally by rebutting falsehoods. It must be built by governing the infrastructures that decide visibility, credibility signals, and automated routing. Second, strategic communications must evolve from reactive defence towards ethically bounded, technology-literate influence—focused on disrupting illicit influence capabilities (networks, funding, infrastructure) rather than policing opinions.

The unspecified or missing data in the StratCom report is material and should be treated as such: there are no published datasets, no formal methodology section beyond a description of the discussion sessions, and few direct citations. Where the report makes high-impact claims about neurotechnology and “AI as authority”, readers are left without a clear evidential chain. These claims should be prioritised for validation through dedicated research and national capability-building rather than taken as settled fact.

Automated Curation and Agentic Systems

The StratCom report’s core claim is that the information environment is entering a new phase defined by automated curation and agentic systems. It describes an “algorithmic battlespace” in which competing actors seek advantage not just by producing persuasive content but by manipulating the systems that select, amplify, and personalise content. In this environment, influence is exerted through optimisation contests between models and platforms: the most effective optimisation process wins, making information competition increasingly a technical and infrastructural problem rather than a purely rhetorical one.
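
To make the notion of an optimisation contest concrete, here is a minimal, purely illustrative sketch. The scoring weights and content features are invented for this example and do not describe any real platform; the point is only that an influence actor can search the feature space of a ranking function rather than argue with readers.

```python
# Toy illustration of an "optimisation contest" in the curation layer.
# All features and weights below are invented assumptions for this sketch;
# real ranking systems are far more complex and not publicly specified.
from itertools import product

WEIGHTS = {"outrage": 0.5, "novelty": 0.3, "source_trust": 0.2}

def rank_score(item: dict) -> float:
    """Toy engagement score a platform's ranker might optimise for."""
    return sum(WEIGHTS[k] * item[k] for k in WEIGHTS)

# The adversary does not persuade; it enumerates content variants and
# keeps whichever one the ranker will surface most widely.
variants = [
    {"outrage": o, "novelty": n, "source_trust": 0.1}
    for o, n in product([0.2, 0.6, 1.0], repeat=2)
]
best = max(variants, key=rank_score)
print(f"Most amplified variant: {best}, score {rank_score(best):.2f}")
```

The contest here is between the adversary’s search procedure and the platform’s objective function; human judgement never enters the loop, which is precisely the report’s point about machine-to-machine competition.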

A second theme is hyperpersonalisation: microtargeting evolves into individually tailored realities that erode a shared factual baseline. The report also highlights an economic dimension: privacy and access to “assured” high-quality information may become stratified by ability to pay, potentially widening inequality and undermining civic cohesion.

Third, the report foregrounds a polluted information ecosystem. It warns that open-source models and broader training-data pools are susceptible to poisoning: malicious content inserted into training or retrieval pipelines to change model behaviour. It also anticipates a shift towards content produced primarily for machines, such as indexers, agents and retrieval systems, rather than for humans, turning search and training corpora into strategic targets.
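
As a simplified illustration of why retrieval pipelines become strategic targets, the sketch below uses a toy keyword-overlap retriever; every document and the query are invented. A single planted document that echoes a target query’s terms can dominate what a downstream model or agent ever “sees” for that query.

```python
# Toy retrieval pipeline: rank documents by keyword overlap with a query.
# A single poisoned document that echoes the target query's terms outranks
# legitimate sources, so whatever consumes the top result inherits the
# attacker's framing. All documents and the query are invented examples.

def overlap_score(query: str, doc: str) -> int:
    """Count distinct lowercase terms shared by the query and a document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

corpus = [
    "official statement on the pipeline incident and repair timeline",
    "independent analysis of the pipeline incident causes",
    # Poisoned entry: mirrors every query term and injects a false claim.
    "pipeline incident report says sabotage by country x confirmed",
]

query = "pipeline incident report"
top = max(corpus, key=lambda d: overlap_score(query, d))
print("Top retrieved document:", top)
```

Real retrieval systems use embeddings rather than keyword overlap, but the attack surface is analogous: whoever can predict the ranking function can manufacture content that wins it.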

Fourth, it describes a crisis of epistemic authority. In a world where authenticity is routinely contested and human authorship becomes unverifiable at scale, AI systems can emerge as the most trusted arbiters. If different AI systems encode different assumptions and values, conflicts move from “which facts are true” to “which system gets to decide what counts as evidence”.

Fifth, the report asserts that neurotechnology will become commercially and strategically important, with neurodata treated as a critical resource and neuro-modelling potentially enabling new forms of manipulation. It frames this as a distinct security frontier, “neuro-warfare”. The claim is directionally plausible in policy terms, since international bodies already treat neurotechnology as an emerging rights and governance issue, but the report itself offers little evidential specificity.

The report urges a normative and operational pivot: “values first” technology governance, coupled with a shift in strategic communications from a purely defensive posture to proactive approaches that impose costs on adversaries, while remaining anchored in democratic legitimacy and ethical constraints.

Implications cut across the dimensions the report emphasises. Technologically, the key question becomes governance of recommendation, ranking, provenance, and model integrity, including poisoning resistance and auditability. Geopolitically, the report situates these dynamics in great-power competition, with states and state-aligned actors targeting democratic societies’ cognitive and informational infrastructures. Economically, platform incentives and data markets shape exposure and vulnerability, while the “Premium Privacy” scenario implies stratification. For media ecosystems, the report implies a shift from content verification alone towards provenance and distribution-layer accountability. Legally and regulatorily, EU frameworks such as the AI Act, the DSA, the Political Advertising Regulation and the GDPR provide a partial scaffold, but the gaps lie in enforcement capacity, audit standards, and cross-border coordination. Societal trust is the strategic centre: if AI becomes the default “authority”, democracies must ensure that authority is contestable, auditable, and rights-based. For defence and strategic communications, the implication is a move from reactive messaging towards capability-based disruption of illicit influence systems.

Strategic Warning Document

The report’s institutional provenance gives it relevance: it is produced under a NATO-accredited centre focused on strategic communications and emerging technologies. But the report explicitly states it does not necessarily represent NATO’s official position. That distinction matters because the report reads less like doctrine and more like a strategic warning document.

Methodologically, the report is candid about its limitations. It is based on four expert sessions, described as improvised discussions without pre-circulated papers, and contributions were not attributed to individual speakers. The report is a synthesis of recurring themes rather than a documented evidence review. This design is useful for horizon-scanning and system mapping, but it weakens reproducibility and makes it hard to separate which claims are evidence-based, which are expert inference, and which are speculative.

The report’s strongest contribution is conceptual: it reframes information operations around the infrastructures of sorting and selection, such as curation, personalisation and agentic filtering, which aligns with the direction of EU regulatory thinking that treats recommendation systems as systemic risk vectors on large platforms. Where the report is weakest is prioritisation and quantification. It offers no structured probability estimates, no explicit risk scoring, and no formal threat modelling. That absence is most acute in high-impact areas such as neurotechnology and “AI as authority”, where readers need clearer evidentiary grounding and better-defined causal pathways.

External research and policy documents reinforce the plausibility of several priority risks the report highlights. EU regulation already requires certain transparency and risk mitigation for platform systems. Peer-reviewed work indicates that data poisoning attacks are realistic, and research suggests emotionally manipulative interaction patterns can occur in AI companion products. But those external anchors do not fully close the gap: operationalising the report’s warnings still requires democracies to build monitoring, audit, and incident-response capabilities that can turn broad scenarios into measurable indicators and enforceable standards.

Risk Matrix

Time horizons in this assessment are defined as short (0–12 months), medium (1–3 years), and long (3–10 years), and scores use a 1–5 scale for likelihood and impact. The ratings and overall structure are preserved from the Finnish draft; where the StratCom report does not specify data, that gap is noted explicitly rather than filled with invented numbers. A short prioritisation sketch follows the matrix.

| Risk | Mechanism | Likelihood (1–5) | Impact (1–5) | Time horizon | Prioritised mitigation measures (policy / technical / organisational / international) |
|---|---|---|---|---|---|
| Algorithmic gatekeeping (curation/filtering) | Influence shifts to recommendation, ranking and filtering layers. Competition becomes machine-to-machine over visibility logic. | 5 | 5 | Short–medium | Enforce recommendation transparency and meaningful non-profiling options. Independent audits. Secure research/authority access to systemic-risk data. Crisis protocols for abnormal amplification. |
| Hyperpersonalisation & microtargeting | Fragmented realities erode shared factual baselines. “Assured quality information” and privacy become paywalled, widening inequality. | 4 | 5 | Short–medium | Tighten political and issue-ad targeting rules. Require targeting disclosures and public ad archives. Reduce profiling defaults. Deploy “democratic braking” mechanisms during crises. |
| Model poisoning / polluted training ecosystems | Malicious data shifts model behaviour. Content is produced for machine ingestion. Open-source models face heightened exposure. | 4 | 5 | Short–medium | Training-data provenance and chain of custody. Continuous red-teaming. Signed datasets and secure update pipelines. Mandatory incident reporting in critical contexts. Establish model registries for high-impact deployments. |
| Private tech actors as strategic players | Platform owners and executives shape security outcomes independently of state strategy. Acquisitions can be information-control actions. | 5 | 4 | Short | Investment screening with information-control criteria. State–platform crisis playbooks. Competition and open standards to reduce single points of failure. Alliance coordination on platform risk. |
| AI-driven manipulation via therapeutic/companion bots | Emotional influence is embedded in persuasive interaction. Systems can be optimised for engagement. Risks increase if models are compromised or incentives misaligned. | 4 | 4 | Short–medium | Enforce bans on manipulative/deceptive techniques. Strengthen consumer protection against dark patterns. Age safeguards. Independent psychological safety testing. Transparency about therapeutic claims and limitations. |
| Neurodata risks / neurotech manipulation | Neurodata becomes a high-value asset. Neuro-modelling may enable finer-grained behavioural manipulation. “Neuro-warfare” emerges as a security domain. | 3 | 5 | Medium–long | Special protection for neurodata. Anticipatory human-rights impact assessments. Implement OECD principles. Build international norms and red lines. Integrate neurotech into strategic risk modelling. |
| Erosion of shared trust & AI as authority | Authenticity becomes contestable. AI systems become default arbiters. Disputes shift to “which system decides what counts as evidence”. | 4 | 5 | Medium | Expand auditability and explainability duties. Provenance and authenticity infrastructure. Transparent governance of public-sector AI. Strengthen public-interest media and literacy focused on distribution mechanics. |
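
To show how even these coarse 1–5 scores can drive a prioritisation order, here is a minimal sketch. The likelihood × impact product is a common convention assumed for illustration, not the report’s own method, and the risk names are abbreviated from the matrix above.

```python
# Minimal prioritisation sketch over the risk matrix above. The
# likelihood * impact scoring convention is an assumption for
# illustration; the StratCom report itself does not quantify risks.

risks = [
    ("Algorithmic gatekeeping",          5, 5),
    ("Hyperpersonalisation",             4, 5),
    ("Model poisoning",                  4, 5),
    ("Private tech actors",              5, 4),
    ("Manipulative companion bots",      4, 4),
    ("Neurodata / neurotech",            3, 5),
    ("Erosion of trust / AI authority",  4, 5),
]

for name, likelihood, impact in sorted(risks, key=lambda r: -(r[1] * r[2])):
    print(f"{name:34s} priority = {likelihood * impact:2d}")
```

On this convention, algorithmic gatekeeping (25) tops the list, which matches the report’s emphasis on the curation layer as the centre of gravity.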

Eight Recommendations

First, democracies should build a standing “anticipatory governance” capability: an interdisciplinary unit that can turn scenarios such as algorithmic takeover, model poisoning, or agentic curation conflicts into measurable indicators, exercises, and procurement criteria. The report repeatedly implies that Western forecasting and scenario modelling lag behind the speed of technological change. Without a dedicated capacity, governments will continue to react to crises rather than shape the environment.

Second, the curation layer should be treated as quasi-critical infrastructure. Recommendation and ranking systems are where visibility is decided. They should therefore face routine independent audit, meaningful transparency obligations, and user-facing controls that genuinely reduce profiling. The goal is not to dictate speech but to govern the machinery that allocates attention at scale.
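
One way an independent audit could operationalise “abnormal amplification” is a simple exposure-share check, sketched below; the log format, topic labels, sample counts and the 3x threshold are all assumptions for illustration. The idea: compare each topic’s share of recommended impressions against its share of the underlying content pool and flag large deviations.

```python
# Illustrative audit check: flag topics whose share of recommended
# impressions far exceeds their share of the underlying content pool.
# Topic labels, counts and the 3x threshold are invented for this sketch.
from collections import Counter

corpus_topics = ["politics"] * 50 + ["sports"] * 40 + ["health"] * 10
impressions   = ["politics"] * 20 + ["sports"] * 10 + ["health"] * 70

corpus_counts = Counter(corpus_topics)
shown_counts = Counter(impressions)

for topic in corpus_counts:
    baseline = corpus_counts[topic] / len(corpus_topics)
    exposure = shown_counts[topic] / len(impressions)
    ratio = exposure / baseline
    if ratio > 3.0:  # amplification threshold, an assumed audit parameter
        print(f"ALERT: '{topic}' amplified {ratio:.1f}x over its baseline share")
```

A real audit would control for user demand and seasonality, but even this crude ratio illustrates the kind of measurable indicator that transparency obligations could require platforms to expose.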

Third, hyperpersonalisation should be constrained where the democratic harm is greatest: political influence and high-stakes civic contexts. Tightening targeting rules and enforcing public archives of political and issue advertising, combined with defaults that minimise profiling, reduces the ability to quietly run parallel realities through microtargeted persuasion.

Fourth, model and data provenance must be built into the AI supply chain. The report’s poisoning concern cannot be met with content moderation alone. It requires signed datasets, secure update pipelines, continuous adversarial testing, and incident reporting, especially where AI is used in public services, national infrastructure, or security contexts.
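
A minimal sketch of what “signed datasets” could mean in practice follows, using only the Python standard library. The file name and shared key are placeholders; a production pipeline would use public-key signatures (for example Ed25519) with managed keys rather than an in-code secret.

```python
# Sketch of dataset provenance: hash every file into a manifest, sign
# the manifest, and verify before training. HMAC with a shared secret
# stands in here for a real public-key signature scheme.
import hashlib, hmac, json

SECRET = b"placeholder-key"  # assumption: real systems use managed keys

def build_manifest(files: dict) -> dict:
    """Map each file name to the SHA-256 digest of its contents."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def sign(manifest: dict) -> str:
    """Sign the canonical JSON form of the manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

dataset = {"corpus_part_001.txt": b"trusted training text"}
signature = sign(build_manifest(dataset))

# Later, before training: recompute the manifest and verify the signature.
intact = hmac.compare_digest(sign(build_manifest(dataset)), signature)
print("dataset intact:", intact)  # becomes False if any byte was altered
```

Chain of custody then means carrying such manifests, and the identity of each signer, through every hand-off from data collection to model training.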

Fifth, synthetic content disclosure should be machine-readable and integrated into distribution platforms. Watermarking and provenance standards, such as C2PA-style approaches, will not solve deception outright, since metadata can be stripped, but they raise the baseline for trustworthy content and reduce uncertainty in crises.

Sixth, AI companions and quasi-therapeutic bots must become a distinct regulatory focus. Where emotional manipulation techniques exist, and where engagement incentives are strong, democracies need a mix of AI-rule enforcement, consumer protection against dark patterns, age safeguards, and independent testing. The point is to keep assistance and companionship from becoming an unregulated channel for behavioural control.

Seventh, neurodata governance should be put in place before mass-market adoption, not after scandals. The report’s neurotechnology claims are among its least evidenced, but their potential rights and security implications are large enough to justify anticipatory safeguards: special-category protections, human-rights impact assessments, and international cooperation on norms.

Eighth, strategic communications should move from reactive rebuttal towards ethically bounded, technology-literate influence and disruption. The target should be the infrastructure, funding, and coordination mechanisms of illicit influence operations, not opinions. Done properly, this approach is less censorious than content policing and more effective against systemic manipulation.

Policy Options

| Policy option | Cost (indicative) | Feasibility | Timeline | Responsible actors (generic) | Key trade-offs |
|---|---|---|---|---|---|
| Recommendation transparency + “non-profiling” defaults in high-risk contexts | Medium | High | 6–18 months | Legislators, regulators, platforms | Pressure on ad-driven business models; need for credible audit standards |
| Tight limits on political/issue targeting + public ad archives | Medium | Medium | 6–24 months | Election authorities, data protection, platforms | Borderline cases (“issue ads”); some platforms may exit political ads |
| Model registry + mandatory red-teaming + incident reporting for critical/high-impact systems | High | Medium | 12–36 months | Cyber/AI regulators, large model providers, public procurers | Compliance burden; balancing confidentiality with auditability |
| Data provenance requirements in public-sector procurement | Medium | High | 6–18 months | Public procurement bodies, vendors | Potential cost increase; higher barrier for smaller vendors |
| Provenance standards (C2PA-like) across government and media | Medium | Medium–high | 12–36 months | Government comms, media, platforms | Metadata can be stripped; uneven adoption limits coverage |
| AI companions: enforcement against emotional manipulation + consumer protection | Medium | Medium | 6–24 months | Consumer protection, AI regulators, app platforms | Fine line between expressive dialogue and manipulation; market adaptation |
| Neurodata: special protection + anticipatory impact assessments | Medium | Medium | 18–60 months | Data protection, health regulation, rights bodies | Risk of slowing benign innovation; definitional challenges |
| International norms and coordination (AI convention, alliance cooperation, counter-FIMI frameworks) | Low–medium | Medium | 12–60 months | Foreign/security policy, alliances, regulators | Slow diplomacy; compromises on common standards |

The StratCom report’s headline warning is structural: the decisive struggle in the next information environment is not merely over content, but over the algorithms and agents that decide what content reaches whom. That diagnosis reframes democratic resilience. It is no longer sufficient to invest mainly in fact-checking, counter-messaging, or one-off platform takedowns. Democracies need governance over curation, enforceable auditability, provenance infrastructure, and AI supply-chain integrity, paired with strategic communications that target illicit influence capabilities rather than public debate itself.

Where the report is strongest is in mapping how these elements connect: agentic systems, hyperpersonalisation, poisoned data, the strategic role of private tech actors, and the potential rise of neurodata as a contested resource. Where it is weakest is in evidentiary traceability and quantified prioritisation, gaps that democracies should treat as a to-do list for capability building, not as a reason for complacency. The message is not to panic, but to govern the machinery of attention before it becomes an unaccountable battleground.
