The rise of artificial intelligence (AI) has led to a new form of content growing steadily on video-sharing sites such as YouTube, Rumble and TikTok, which we have characterised as 100% synthetic content. Within this article, this is defined as:

“Content that is principally generated by AI, where AI has been the main technology that has scripted the content, narrated it, and produced the images used in the accompanying video.”

Trust No One: 100% Synthetic Content and False Information

Much of this new generation of content is benign in nature; for example, the amusement of hearing the synthetic voice of Sir David Attenborough narrating Warhammer 40K never gets old. However, an emerging strain of content powered by disinformation and misinformation is starting to grow online. As a quick aside, we use the European Union’s definitions 1 of disinformation and misinformation, paraphrased as follows:

  • Disinformation: False information that’s knowingly shared to cause harm
  • Misinformation: False information that’s shared but no harm is meant

The spreaders of disinformation typically seize on any new technology to create and disseminate deliberately damaging content. Not so in this case, at least so far: 100% synthetic content seems to be making its presence felt most in the misinformation space.

Sure, there are some examples that fit the category of disinformation, such as misogynistic “red pill”/manosphere-style content. But by far the most ubiquitous 100% synthetic content focuses on conspiracy-theory subjects and is only inadvertently damaging.

Big Footprints: How Is AI Muddying Perceptions of Reality?

The Bermuda Triangle, Bigfoot, and aliens are the usual suspects featured in conspiracy-theory content, being well-trodden subjects of misinformation. But they may spark interest from a wider audience when presented as 100% synthetic content: the new AI-generated videos lend these subjects a surprising edge, thanks to the arresting images they often contain.

Here are some examples from TikTok:

These videos are being presented as real, and run the risk of subverting the baseline reality of impressionable viewers. 

But no one is going to believe that werewolves and mermaids are real just from watching a video, are they? Well, maybe not, but the true risk attached to 100% synthetic content comes from the slight blurring of reality, rather than the outright distortion of truth.

Slippery Slope: Down the Rabbit Hole to a New Weltanschauung?

Below is an image taken from a 100% synthetic video about the Ahnenerbe, an SS entity active in Nazi Germany during the Second World War. The Ahnenerbe was tasked with ‘proving’ German racial superiority via the archaeological record.

The video’s accompanying narrative explicitly expresses a skeptical view of esoteric Nazi practices. But this image clearly shows a successful magical practice, with Nazi-like figures clustered around the symbol. So although the video was obviously scripted for entertainment, not education, it left open the possibility that the Nazis were successful in their magical endeavours.

There are other examples of AI-generated content blurring reality ever so slightly, such as making villains look a little more dynamic and powerful than they actually were. Take, for example, the decidedly Command and Conquer-inspired image of SS leader Heinrich Himmler, shown on the right below. Couldn’t we all agree it paints him with a more dramatic aura than he possessed in reality (actual Himmler shown on left)?

Although the intent behind most 100% synthetic content is not malicious, it is the nature of the misinformation and disinformation rabbit hole to usher an online user along a journey toward more extreme content. And synthetic content like the images shown above can grease the descent.

The progressive online journey is not an untested claim: many committed QAnon members first became involved via flat-earth content circulated on mainstream content-sharing platforms such as YouTube.

Future Content Generation for All 

Synthetic content with conspiracy themes occupies a strange niche in the current Internet age. The truth is, those themes generate views…even if 99.99% of viewers don’t believe them, they find the stories of UFOs, ghosts, and cryptid monsters undeniably entertaining! 

The advent of zero-cost AI allows human content creators to slash production costs while maximising views and, in turn, revenue. This business model is becoming big news within the content-generator communities – supported by at least one helpful instructional YouTube video that shows how to produce this type of content (see below).

It’s important not to over-dramatise the impact that 100% synthetic content is currently having on the spread of disinformation. But this new, AI-fuelled approach to content generation is here to stay, making it one to watch for those of us interested in the field of disinformation.