In the twenty-first century, the most decisive weapons deployed in conflict zones are not always missiles, drones or artillery systems but narratives engineered with precision and disseminated at scale. Disinformation has evolved from crude propaganda into a sophisticated instrument of statecraft and non-state coercion that shapes elections, legitimises invasions, fractures alliances and corrodes public trust in democratic institutions. It is no longer merely an adjunct to military operations but a core strategic capability integrated into national security doctrines, intelligence tradecraft and geopolitical competition.

Modern disinformation campaigns operate at the intersection of psychology, technology and geopolitics. Unlike traditional propaganda, which openly advances a political message, disinformation deliberately spreads false or manipulated information with the intent to deceive and achieve strategic outcomes. It exploits cognitive biases, amplifies grievances and weaponises identity politics. What makes the present era uniquely dangerous is the convergence of algorithm-driven social media ecosystems, generative artificial intelligence and fragmented information environments that reward outrage and virality over verification.

Recent conflicts illustrate how disinformation is used not simply to mislead but to reshape the legal and moral framing of war itself. In the Russia-Ukraine war, the Kremlin has persistently framed its invasion as a defensive response to alleged threats from the North Atlantic Treaty Organisation, while systematically denying or obscuring evidence of civilian harm in places such as Bucha and Mariupol. Russian state media and affiliated networks have promoted narratives questioning the authenticity of documented war crimes, attempting to blur accountability under international humanitarian law. At the same time, Ukrainian and Western actors have engaged in strategic communications campaigns designed to sustain international support, demonstrating that information operations are now central to alliance cohesion and battlefield morale.

In the Israel-Gaza conflict, disinformation has proliferated across multiple platforms, often in real time, immediately after strikes or major incidents. Fabricated casualty figures, manipulated images and recycled footage from unrelated conflicts have circulated widely before independent verification becomes possible. Competing narratives seek to shape global perception of proportionality, civilian harm and legality under the Geneva Conventions. The speed at which such content spreads frequently outpaces institutional responses from governments and international organisations, thereby influencing public opinion and diplomatic positioning before facts are firmly established.

Beyond active war zones, disinformation has become a persistent feature of political competition. The 2016 United States presidential election exposed the scale of coordinated online interference linked to the Russian Internet Research Agency, which deployed fake accounts and targeted messaging to exploit racial and ideological divisions. Subsequent investigations by United States intelligence agencies concluded that the objective was not simply to favour one candidate but to undermine confidence in democratic processes themselves. Similar tactics have been documented in European elections, including attempts to influence debates in France and Germany through coordinated social media amplification and the strategic release of hacked material.

China has developed its own model of information influence operations, combining state media expansion, covert social media networks and strategic narrative framing. During the early stages of the Covid-19 pandemic, Chinese officials promoted alternative theories regarding the origin of the virus while amplifying doubts about Western crisis management. These campaigns were not limited to domestic audiences but targeted global discourse, illustrating how information strategy now functions as an extension of foreign policy. Beijing’s use of coordinated online accounts to shape perceptions about Hong Kong, Xinjiang and Taiwan further demonstrates how disinformation can be deployed to contest international criticism and dilute human rights advocacy.

The architecture of disinformation networks is rarely linear. State actors frequently rely on proxies, private contractors, ideological influencers and automated bot networks to create plausible deniability. Financial incentives also play a significant role, as digital advertising models reward engagement irrespective of accuracy. In some cases, political consultancy firms and data analytics companies have been implicated in micro-targeting operations that blur the line between persuasive campaigning and manipulative distortion. The ecosystem thrives on opacity, where attribution is contested and accountability diffused.

The legal frameworks governing disinformation remain fragmented and contested. International law recognises certain forms of propaganda as unlawful, particularly where they constitute incitement to genocide or direct participation in hostilities. However, most peacetime disinformation falls into a grey zone between protected speech and hostile interference. The United Nations has repeatedly affirmed the importance of countering false narratives while preserving freedom of expression under the International Covenant on Civil and Political Rights. Regional instruments such as the European Union Digital Services Act seek to impose transparency and due diligence obligations on online platforms, yet enforcement challenges persist in a borderless digital sphere.

The strategic consequences of sustained disinformation campaigns are profound. Public trust in electoral systems has declined in several democracies, partly due to repeated allegations of fraud amplified through online networks. In fragile states, rumours and fabricated reports have triggered communal violence and destabilised peace processes. In Myanmar, disinformation circulated on social media contributed to ethnic tensions preceding mass atrocities against the Rohingya population, demonstrating how digital narratives can have lethal offline consequences.

The integration of artificial intelligence into content creation has further complicated the landscape. Deepfake technology now enables the fabrication of realistic video and audio recordings that can simulate political leaders making inflammatory statements. During electoral cycles, even a short-lived false video can influence voter perception before forensic debunking occurs. Intelligence agencies across multiple countries have warned that generative artificial intelligence will accelerate the scale and plausibility of influence operations, making detection and attribution increasingly difficult.

Disinformation also intersects with economic warfare. False narratives regarding financial instability, sanctions or commodity shortages can trigger market volatility. In periods of geopolitical tension, rumours about energy supply disruptions or banking sector fragility have circulated widely on digital platforms, sometimes prompting measurable fluctuations in stock markets and currency values. Such incidents underscore that information integrity is not merely a political concern but a matter of economic security.

Critically, responsibility for the current information disorder cannot be attributed solely to hostile foreign powers. Domestic political actors, media organisations and technology platforms have all contributed, whether through polarising rhetoric, insufficient content moderation or algorithmic amplification of sensational material. The commercial logic of engagement-driven platforms often aligns with the strategic logic of disinformation actors, creating a structural vulnerability within democratic societies.

Countering disinformation requires a multidimensional strategy that integrates legal reform, technological innovation and civic education. Transparency in political advertising, stronger attribution capabilities for coordinated influence campaigns, and robust independent journalism are essential components. However, there is an inherent tension between aggressive countermeasures and the preservation of open debate. Excessive regulation risks empowering governments to suppress dissent under the guise of combating false information, thereby undermining the very democratic values such measures aim to protect.

Ultimately, disinformation has become a strategic equaliser in international relations. States with limited conventional military power can nonetheless exert disproportionate influence by manipulating narratives and exploiting societal divisions within more powerful adversaries. In an era where perception often precedes verification, the battle for legitimacy unfolds not only in diplomatic chambers and courtrooms but on digital timelines and encrypted messaging platforms. The global political consequences are already visible in fractured alliances, contested elections and eroded trust in institutions. If unaddressed, the continued weaponisation of information threatens to normalise epistemic instability as a permanent feature of international politics. The defence of truth, therefore, is no longer a purely moral aspiration but a strategic imperative central to national security, democratic resilience and the rule of law.