How Deepfakes and AI-Slop Undermine Democracy

I am still working through the recordings of talks given at the Chaos Communication Congress. Katharina Nocun gave a talk titled Doomsday-Porn, Schäferhunde und die „niedliche Abschiebung“ von nebenan (roughly: "Doomsday Porn, German Shepherds, and the 'Cute Deportation' Next Door"), in which she lays out a deeply disturbing trend: AI-generated content is becoming a cornerstone of authoritarian and far-right communication strategies.

The examples are as absurd as they are alarming: the US president sharing videos of protesters being bombarded with feces from a fighter jet, the White House celebrating “Star Wars Day” with Trump wielding a (Sith!) lightsaber, or sympathizers of the AfD (Germany's far-right party) posting AI-generated “photos” on social media showing blonde children, women in Dirndls, and idyllic scenes of a “pure” world. A world that never existed and never will.

Nocun digs into the effects such visuals have, even though some of them are very obviously generated. It’s far worse than just “fun pics/videos” or “bad taste”.

The Rise of AI-Slop in Political Communication

Nocun shows how AI-generated content is no longer a fringe phenomenon but a deliberate tool in political “communication”. Social media platforms are flooded with AI slop: deepfakes, fake interviews, and even entirely AI-generated influencers and fans. Worse, these aren’t isolated incidents. They’re coordinated efforts to shape narratives, spread disinformation, and erode trust in democratic institutions.

What’s particularly unsettling is how this generated media is being weaponized to manipulate public opinion. The line between reality and fiction is blurring, if it isn’t gone already. And when reality itself loses our trust, democracy is in trouble. That is exactly where we are right now.

The Erosion of Trust

The core issue isn’t just the existence of AI-generated communication that turns into propaganda. It’s the erosion of trust. When anyone can create convincing fake content, how can we believe anything we see or hear? This isn’t just about political content. It’s about the very foundation of informed debate.

If we can’t trust the media we consume, how can we make informed decisions? How can we hold leaders accountable when their words and actions can be fabricated or distorted beyond recognition? Katharina Nocun’s (honestly very depressing) summary is: A society that can no longer distinguish truth from fiction is a society that cannot function democratically.

The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.

Hannah Arendt, The Origins of Totalitarianism

What Can We Do?

The longer I watched the talk, the lower my mood sank, and I kept waiting for a “what can we do?”. By the end, Nocun offers a sobering realization: building media competence is not the answer, because reliably telling real from fake is becoming close to impossible.

What is the answer is regulation of, and accountability for, the content generators and the platforms. Even though I am usually sceptical of over-regulation, I must admit that I fully agree here.

The responsibility cannot simply be pushed down to the consumers. Fact-checking is essential, but it is a reactive measure: by the time a deepfake is exposed, the damage is already done. And since such deepfakes can now be produced fully automated at industrial scale, it is the producers who must be regulated.

Conclusion

Well, I would like to close this post with a positive, motivational message. But can we really push back against this tide of disinformation when it is produced at industrial scale? It won’t be easy. Yet the alternative, a world where we’ve given up and where truth is whatever the loudest algorithm says it is, might be far worse.
