This article was written by Themistoklis Zanidis, founder of GeoSec Insights, in collaboration with Ruben Cober, founder of Essential Europe. Subscribe for more insights.
“You wake up to a video of your President declaring martial law. It’s spreading like wildfire across social media: millions of views, panic rising. But there’s just one problem: it’s fake. Welcome to the deepfake era.”
Europe is fragile. Not physically, since most Europeans enjoy living in prosperous, advanced, and relatively safe societies, but informationally. Information is the Achilles’ heel of the EU, and our competitors and rivals are exploiting this weakness. Revisionist powers, chiefly Russia and China, push disinformation, false information designed to deceive, along with narratives meant to erode trust in our institutions. Propaganda, and especially deepfakes, amplifies the damage, turning information into a weapon and truth into chaos. This has been European reality for years now, not a sci-fi movie. This real-time mass manipulation is what we Europeans must defend against.
Who’s Pulling the Strings? Russia, China, and the Battle for Truth
Who is conducting disinformation campaigns against Europe, and why? These are the key questions. Russia leads the charge. Through the Doppelgänger campaign, Kremlin-linked actors have produced fake websites impersonating Western media and institutions such as Le Monde, Fox, and NATO, pushing anti-EU content aimed at creating unrest and undermining trust in institutions (IISS 2025). They have also used fake stories to claim that Ukrainian refugees are a burden, or that Western sanctions do not work. Behind the scenes, Russia financially supports fringe parties across Europe that frequently spread pro-Russian narratives to deepen divisions[1].
But Moscow does more than spread disinformation online. The latest report from the International Institute for Strategic Studies (IISS) shows that Moscow has launched an unconventional war on Europe’s critical infrastructure. Since 2022, Russia has conducted sabotage operations against energy grids, railways, undersea cables, and even water supplies. The numbers are staggering: attacks increased by 250% between 2023 and 2024 (IISS 2025).
Even though European authorities have uncovered and expelled many Russian spies across numerous EU countries, Russia has shifted to a different model of sabotage: poorly trained recruits, ‘disposable agents’ who carry out operations on Moscow’s behalf at arm’s length (IISS 2025). In Germany, three German-Russian citizens are on trial, accused of spying for the Kremlin and planning attacks on critical military infrastructure and industry in an attempt to undermine German aid to Ukraine.
China’s playbook is less aggressive but similar, as Beijing has been using state media and Confucius Institutes in Europe to spread its narratives. Chinese media outlets have also echoed much of Russia’s propaganda about the war in Ukraine to promote pro-Beijing narratives and weaken EU consensus. Oppressive regimes hate democracy, and European open societies are their target (IISS 2025).
As authoritarian regimes, China and Russia share a strategic goal: overturning the Western-led world order. To that end, they focus on exploiting Europe’s openness. While Russia has been destabilizing European societies with disinformation campaigns and sabotage, China aims to weaken democratic structures by damaging trust and unity (IISS 2025).
The Achilles’ Heel of Open Societies
Open societies mean open borders for ideas, information, and speech. That makes us plural, diverse, and resilient, as long as the information is real. Open societies and democracies depend on citizens being able to make reasoned choices. When information is accurate and accessible, individuals and institutions can craft and judge policies, hold power to account, and live together with a baseline of trust. But open systems are exposed when data and narratives are weaponized. Europe’s media literacy is uneven; many countries lack Finland’s robust curriculum for recognizing disinformation. To make matters worse, populations that distrust a distant, bureaucratic Brussels are more receptive to emotional, false narratives. Rapid, cheap deepfakes spread before we can debunk them. As Amy Zegart and Michael Morell warn, “Deception has always been part of espionage and warfare, but not with this level of precision, reach, and speed.”
Deepfakes: When Lies Look Real
The enemies of open societies have found new information weapons in Artificial Intelligence (AI) and deepfakes. A deepfake is hyperreal: AI-generated or AI-manipulated audio, video, or imagery that puts words in someone’s mouth that were never said, or shows actions that were never taken. Deepfakes are a form of synthetic media, audio, images, and/or video that is partially or wholly manipulated through AI technologies and used in a maliciously deceptive or misinformative way. Fake images, videos, or news stories look so real that people cannot tell the difference. The problem is not only that a single deepfake can trick us in the moment, but that deepfakes make us start doubting everything we see and read online, which is obviously dangerous. If people cannot easily separate truth from fiction, trust in the whole system of news and information starts to break down.
That is why academics warn that deepfakes degrade democratic functions. They hinder citizens from holding leaders accountable, erode mutual empathy, distort deliberative debate, and ultimately corrode legitimacy[2]. They create an environment in which you do not know whom, or what, to believe. That is the liar’s dividend: once disinformation becomes plausible, real events can be dismissed as fake. It is a dream scenario for autocratic leaders who want to undermine trust in democracies.
Deepfakes work best when they target receptive subgroups, through microtargeting on social platforms or on closed messaging apps. That accelerates echo chambers: different groups end up believing their own incompatible “facts,” reducing the shared information base needed for public deliberation. Alternative facts and information bubbles have already been furthered by social media, algorithms, and disinformation; deepfakes act on them like steroids. In the most extreme cases, deepfakes can hand extremists ideological ammunition, allowing influencers to manufacture “proof” of alleged wrongdoing that justifies extremist views.
From Ballots to Bots: Democracy on the Line
These are not theoretical threats. In Slovakia, two days before the 2023 election, a manipulated audio clip purportedly captured liberal leader Michal Šimečka discussing how to rig the election. The damage was done before fact-checkers even caught on, and the Progressive Slovakia party lost[3]. That is not just a theory; it is democracy in peril.
Ahead of the 2024 European Parliament elections, the EU’s own values chief warned that AI tools let malicious actors pump out deepfakes and manipulate debate at scale[4]. The bloc has responded with the Digital Services Act (DSA) and the nascent AI Act, setting rules for transparency and the labeling of synthetic media. The DSA, for example, requires very large online platforms such as Facebook and YouTube to assess and mitigate systemic risks related to the spread of disinformation. The European Media Freedom Act adds safeguards against state-sponsored disinformation. But laws and law enforcement agencies are by nature always one step behind fast-evolving technology and malicious actors.
Closing the Gap: Laws, Literacy, and Defense
We cannot blame only outsiders. Trust in politics is at an all-time low in many countries. Disinformation reinforces this, but the low trust was already there; it is not caused by disinformation and propaganda alone. When even real content gets rejected as fake, our epistemic foundation erodes.
So the pushback must be structural:
Stronger laws: The AI Act must define “deepfake” clearly and require the labeling of synthetic content used in political contexts. The current definition is too vague[5].
Better defenses: Platforms must detect, label, and down-rank manipulated media aggressively. The EU’s DSA gives them that power, but they must act fast and consistently[6]. In addition, the European Parliament has called for a new entity to coordinate the investigative and strategic response to disinformation, in the form of a ‘knowledge hub’. In her State of the Union speech, European Commission President Ursula von der Leyen vowed to set up a center of expertise and capacity across Member States. A minimal sketch of what “detect, label, down-rank” could look like in practice follows this list.
Media literacy everywhere: Education is the most important antidote, and Finland shows the way in building resilience. All EU nations need curricula that teach citizens how to spot deepfakes and propaganda early[7]. “Pre-bunking” measures have proven effective and can help equip the public with the tools needed to dismiss manipulated media once it breaks onto their screens.
Proactive campaigning: The EU’s East StratCom Task Force fights Kremlin disinformation through EUvsDisinfo, but it needs more funding and more reach[8].
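To make the “detect, label, down-rank” point concrete, here is a minimal, purely illustrative sketch in Python. It assumes a hypothetical moderation pipeline in which every post carries optional provenance metadata (for example, a flag declaring the content AI-generated) and a confidence score from some synthetic-media detector; the field names, threshold, and down-ranking factor are invented for illustration and do not describe any real platform’s systems or the DSA’s actual requirements.

```python
from dataclasses import dataclass, field

# Illustrative values only; a real platform would tune these empirically.
SYNTHETIC_SCORE_THRESHOLD = 0.8   # assumed detector confidence above which content is labeled
DOWNRANK_FACTOR = 0.25            # assumed multiplier applied to a labeled post's distribution

@dataclass
class Post:
    post_id: str
    base_reach: float                                # hypothetical ranking/distribution score
    provenance: dict = field(default_factory=dict)   # e.g. a declared "ai_generated" flag
    synthetic_score: float = 0.0                     # output of an assumed detection model
    labels: list = field(default_factory=list)

def moderate(post: Post) -> Post:
    """Label and down-rank a post that is declared or detected as synthetic media.

    Two signals are combined: provenance metadata declaring the content
    AI-generated, and a detector's confidence score. Either one adds a
    visible label; the label in turn reduces, but does not remove, reach.
    """
    declared = post.provenance.get("ai_generated", False)
    detected = post.synthetic_score >= SYNTHETIC_SCORE_THRESHOLD

    if declared or detected:
        post.labels.append("synthetic-media")
        post.base_reach *= DOWNRANK_FACTOR
    return post

if __name__ == "__main__":
    clip = Post("clip-001", base_reach=1000.0, synthetic_score=0.93)
    moderated = moderate(clip)
    print(moderated.labels, moderated.base_reach)    # ['synthetic-media'] 250.0
```

The point of the sketch is the policy logic rather than the engineering: labeling adds context without removing the content, while down-ranking limits amplification, the two levers the list above argues platforms should pull quickly and consistently.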
When Pixels Threaten Power: A European Security Challenge
Deepfakes represent a real threat to Europe’s security, one that extends beyond armies, borders, and intelligence to the integrity of the information space itself. An EU unable to defend the credibility of its own institutions, leaders, political systems, and democratic processes is strategically vulnerable. Its competitors understand that the EU’s military strength rests on NATO, and thus on the U.S. The Union’s political cohesion, however, is rooted in trust: among citizens, between member states, and across the transatlantic relationship.
Consequently, deepfakes can hurt trust and undermine decision-making within the member states or the EU itself. They also have an extremely negative impact on European deterrence, because they open a backdoor to manipulation in moments of crisis like the war in Ukraine. Imagine NATO consultations during an emergency where allies hesitate because a viral fake statement has cast doubt on intentions, or EU responses to a cyberattack delayed because leaders spend hours clarifying whether footage of a declaration of war is real. Disinformation does not need to destroy Europe’s armies to succeed; it only needs to paralyze its politics. That is why defending against deepfakes is not simply about media literacy or content moderation; it is about safeguarding Europe’s strategic autonomy, its unity in the face of external threats, and ultimately its survival as an open, democratic Union.
Trust Is the Battlefield
In conclusion, this is about trust. Once European democracies lose the ability to say that tape is fake, or that article is copied from a Kremlin proxy, once we lose shared truth, that’s when populists win. That's when elections are stolen, not by bullets, but by bots and pixels.
And yes, the threats are big: Russia has already meddled via deepfake-aided discourse; AI makes it cheaper and easier; our systems are porous. But hope remains. The EU has legal tools, the DSA, the AI Act, and strategic communications, and civil society is alert. We can act, if we see the threat for what it is.
If Europe wants to remain the open, plural, democratic Union it claims to be, we must close the gap between threat and response. Because if we don’t, we won’t even recognize the death of democracy when we see it: just another fake clip to argue about.
Bibliography
AP News. “AI-Supercharged Disinformation Threatens 2024 EU Elections.” Accessed September 10, 2025. https://apnews.com
Busch, Ella, and Jacob Ware. The Weaponisation of Deepfakes: Digital Deception by the Far-Right. The Hague: International Centre for Counter-Terrorism, December 2023. https://icct.nl/publication/weaponization-deepfakes-digital-deception-far-right.
East StratCom Task Force. EUvsDisinfo. European External Action Service. Accessed September 10, 2025. https://euvsdisinfo.eu
European Commission. European Media Freedom Act. Accessed September 10, 2025. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/new-push-european-democracy/protecting-democracy/european-media-freedom-act_en.
European Parliament. Report on Deepfakes. Accessed September 10, 2025. https://www.europarl.europa.eu/doceo/document/A-9-2023-0187_EN.html.
International Institute for Strategic Studies (IISS). Europe Needs Both Sword and Shield to Deter Russia. London: IISS, March 7, 2025.
International Institute for Strategic Studies (IISS). The Scale of Russian Sabotage Operations Against Europe’s Critical Infrastructure. By Charlie Edwards and Nate Seidenstein. London: IISS, August 2025.
Meding, Kristof, and Christoph Sorge. “What Constitutes a Deep Fake? The Blurry Line Between Legitimate Processing and Manipulation Under the EU AI Act.” arXiv, 2024. https://arxiv.org
Pawelec, M. “Deepfakes and Democracy (Theory): How Synthetic Audio-Visual Media for Disinformation and Hate Speech Threaten Core Democratic Functions.” Digital Society, 2022.
Reynaud, Florian, and Damien Leloup. “Doppelgänger: The Russian Disinformation Campaign Denounced by France.” Le Monde, June 2023. https://lemonde.fr
Schick, Nina. Deepfakes: The Coming Infocalypse. New York: Twelve, 2020. https://www.scribd.com/document/801080288/Deepfakes-Nina-Schick.
Wired. “How a Deepfake Audio Clip May Have Altered Slovakia’s Election.” Accessed September 10, 2025.
Zegart, Amy, and Michael Morell. “Spies, Lies, and Algorithms: Why U.S. Intelligence Agencies Must Adapt or Fail.” Foreign Affairs 98, no. 3 (May/June 2019): 85–97. https://www.jstor.org/stable/26798154
[1] Russia channels funding and propaganda toward Eurosceptic populist parties across the spectrum.
[2] Deepfakes degrading democratic norms: empowered inclusion, deliberation, legitimacy, liar’s dividend.
[3] Slovakia’s pre-election deepfake audio scandal undermining the campaign.
[4] EU concerns about AI boosting disinformation ahead of the 2024 elections.
[5] Legal ambiguity in AI Act’s definition of deepfakes.
[6] Need for platforms to label/down-rank AI content under AI Act transparency rules.
[7] Media literacy examples: Finland, Estonia, and EU efforts.
[8] East StratCom Task Force and EUvsDisinfo’s role.

