You’re reading a text and something feels just… off. Too wordy. Too generic. Polished but soulless… Maybe AI-generated!?
I call that uneasy sensation in your gut “Turing Tingles” – your intuition spotting the typical patterns of low-quality AI content. And that instinct is your best bet to detect “AI slop”. (Nope, “AI detectors” are not reliable…)
Excursus: The story behind “Turing Tingles”
A Reddit user posed a relatable question: “The feeling that what you are reading/watching is AI-generated. Does it have a name?” They described how this feeling made browsing social media uncomfortable and asked if others shared it.
Enter my suggestion: “Turing Tingles” – obviously a nod to Alan Turing. His famous “Turing Test” is designed to evaluate whether AI can convincingly imitate human behavior. In short, if you feel something’s off and suspect it’s AI, the AI failed the test. The phrase also borrows from Spider-Man’s reliable “spidey-sense,” that gut instinct when something is off.
The thread got solid traction with >100 comments and many other cool ideas like “AI-dar”. This appears to be a relatable phenomenon for many people. I’m honored and humbled my suggested “coinage” got the most upvotes and landed a reference in a podcast: Thanks, stranger!
I think we’ve come across something universal here – our shared curiosity and concern about the “invisible hand” of AI in the content we consume. Let’s explore the good (yes, there is some!), the bad, and the downright ugly sides of AI content… I’ll show you how to spot AI slop, explain where it comes from and suggest some fixes for better content.
Why “AI Slop” And Your “Turing Tingle” Matter
AI-made content is everywhere. Researchers at Princeton found, for example, that over 5% of new Wikipedia articles are already AI-generated. On social media, it’s much worse. The problem goes beyond “content reliability”. A bigger concern is the growing self-referential loop in AI systems: training AI on data produced by other AI can degrade models over time – a kind of “mad cow disease” for AI, often called “model collapse”.
Another issue is the use of AI for propaganda. AI algorithms can craft persuasive messages that exploit biases, fears and desires at an unprecedented level. The Cambridge Analytica scandal around the 2016 US election is a well-known example: voters were micro-targeted at scale based on psychological profiling. Modern tools make this type of manipulation even more powerful, deepening societal rifts and echo chambers.
This trend fuels debates like the “dead internet theory.” It suggests AI content is gradually dominating online spaces and drowning out authentic human expression. This is more than a thought experiment: it’s a call to critically assess how much of the content we consume is produced by machines rather than people. I haven’t even started discussing deepfakes and the other “dark arts of AI” yet…
The “Art and Science” of Spotting AI Content
Spotting AI content, unfortunately, isn’t as simple as using “AI detection tools”. They often produce false positives/negatives due to the probabilistic nature of AI. This makes a perfect technical solution unlikely – for now. Even market leader Turnitin advises unis not to use its tool to penalize students, since it’s not reliable enough (yet). Blindly basing decisions on these tools can have far-reaching and unfair consequences.
Intuition and experience are our most dependable tools – much like doctors who diagnose complicated cases at a glance. After years of working with AI, I’ve learned recurring patterns that can signal AI content. These aren’t foolproof, though: many human texts share these traits, and as we work more with AI, our writing may even start to look more like AI’s. You’ve heard the phrase “You’re the average of the 5 people you spend the most time with”? Well, if ChatGPT is one of them… 😉
While such tools and our intuition can complement each other, no single clue provides absolute certainty. Also, our antennas apparently aren’t that good at spotting AI content naturally, with a success rate of ca. 25% in this study. So think of the following indicators as “sharpening your Turing Tingle antenna”. But don’t let them turn into a “cry wolf” reflex driven by “confirmation bias”…
5 Signs of AI Slop: “THORN”
Let me introduce the “THORN” framework. It’s designed to help you spot AI-made stuff by breaking the common red flags into 5 buckets: Tiring Wordiness, Hollow Generalizations, Overused Buzzwords, Random Digressions and Nonsensical Claims.
Tiring Wordiness
“If I had more time, I would have written a shorter letter.” – Blaise Pascal, Mathematician
Somehow, ChatGPT manages to say in five sentences what could be said in one, if you don’t control its outputs. I also fall victim to this tendency, unfortunately – and did so even before ChatGPT existed.
Why? Because being verbose is simpler than being concise (apparently I’m not alone in this 😉). LLMs are trained on the internet’s wide array of texts, much of which is long-winded. Unsurprisingly, Reddit (even ranked first) and most other major social networks are among the top-cited sources for GenAI chatbots… (If you want to understand better how LLMs like ChatGPT are trained and work “under the hood”, see this article.)
Some of my “favorite” examples of AI wordiness: ridiculously overengineered introductions and conclusions for every paragraph. Not to mention the redundant repetitions in longer texts, overexplaining the simplest ideas. Like a broken record.
Hollow Generalizations
Sometimes you read a text and think “Wow, this could be said about anything.” Even when it’s about specific topics, GenAI makes very vague, surface-level statements.
Ask the chatbot to review a movie and it will likely spit out something “impersonal” like “TYPICALLY, the director would use stylistic elements LIKE …”. It gets worse for emotional topics: machines can’t empathize, which makes the texts feel alienating and robotic.
Why? As before, it’s because the AI is trained on what’s online. Unfortunately, much of the internet’s content is generic and written to avoid offering any attack surface. So GenAI’s big strength – being “universally relevant” – becomes its downfall here. It has no “lived personal experiences” or “opinions” to draw from.
Overused Buzzwords
This is probably the one that brought you here. The (growing) list of buzzwords overused by language models has already reached meme status. This triggers my Turing Tingles the most and makes content feel bland and just off.
Many of LLMs’ word choices stem from the sheer number of ads and business reports they’re trained on. These are full of pretentious, flowery and “salesy” language. There’s just too much cliché wording and jargon: listing it all would bloat this article. Instead, I created a word “cloud” for you, symbolic of the “hot air” behind it. Again, these also appear in human texts but happen to occur more often in AI-made ones:

Did any of these trigger your Turing senses? “Delve” even deserves a special mention here “thanks” to its “transformational impact” on scientific writing 😉:

FYI: A while ago, I discussed this “ChatGPT Bullshit Bingo” with Dr. Julia Schneider. Check it out and maybe print it for a game night?
Random Digressions
“Wait, how did it get from A to B again?” I already tend to follow “tangents” a bit too often for my taste, but AI takes this to the next level… It regularly produces logically disconnected and unrelated ideas, and its flow can be incoherent and frankly WEIRD – with abrupt (and often hard-to-spot) topic changes.
This leaves me with a sense of “void”. It feels like there is no intention behind the text: somewhat “plausible” words flowing without any detectable human idea or concept – just empty verbiage. And it doesn’t stop at the content level; it shows in the formatting, too. For example, ChatGPT mysteriously likes to turn paragraphs into bullet points even where they add zero value over plain text.
How come? LLMs are “guess-the-next-word” prediction engines. They don’t truly understand what they write. Nothing automatically ensures they follow a “golden thread”, so they can easily “go astray” – if you let them.
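To make that “next-word guessing” idea tangible, here’s a deliberately silly Python toy of my own (this is not how ChatGPT actually works under the hood, and the mini “corpus” is made up): it only ever asks which word tended to follow the current one and never looks at the text as a whole – which is exactly why it can drift off-topic.

```python
import random
from collections import defaultdict

# Toy "guess-the-next-word" model: for each word, remember which words
# followed it in a tiny made-up corpus, then sample from those counts.
corpus = (
    "the model writes a sentence about cats . "
    "the model writes a report about synergy . "
    "a report about synergy unlocks new paradigms . "
    "cats sleep all day ."
).split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 10) -> str:
    """Chain locally plausible next words, with no global plan."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # pick any word that ever followed
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the model writes a report about synergy unlocks new paradigms ."
# Each step looks plausible on its own, but the text drifts; nothing
# enforces a "golden thread" across the whole passage.
```

Real LLMs look at much longer contexts and are vastly better at staying on track, but the underlying principle – predicting what plausibly comes next rather than pursuing an intention – is the same.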
Nonsensical Claims
Ever got pulled into an “argument” with ChatGPT where it confidently claimed something you knew was wrong? Then it either did a 180° U-turn – thanks to its absurd agreeableness – or, worse, kept insisting on it? AI often confidently presents incorrect or outdated facts, commonly referred to as “hallucinations.”
A classic example is the “Strawberry Test,” where AI chatbots couldn’t answer the “simple” question, “How many ‘r’s are in the word ‘strawberry’?” for months until their providers (partially) fixed it. This is just one example and should not downplay the power of this tech which is also still developing. (The moral of this “story”? Always do your own research (“DYOR”) and don’t just blindly trust AI outputs…)
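Just for fun: once you treat the question as counting instead of language, a throwaway Python snippet of mine settles it instantly (this is obviously not how a chatbot processes the word, which is part of why they struggled):

```python
def count_letter(word: str, letter: str) -> int:
    """Count how often a letter appears in a word, ignoring case."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # -> 3
```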
Why does this happen? As advanced “autocomplete engines” (yep, like the one on your smartphone), LLMs predict the next word based on probabilities, not factual accuracy. They also have knowledge cutoff dates – typically some months before their release – so they don’t know anything that happened after their last training run. Providers try to fix this, e.g. with live web searches, but there are no foolproof solutions for “hallucinations” yet.
Congrats on your fine-tuned “antenna.” If you know other reliable “AI slop” indicators, please share them in the comments.
The Fix: From “THORN” to “SHARE”
To me, the problem comes down to how – not if – you use AI. Each of the 5 “THORN-y” problems reflects poor, yet common, approaches to building and using LLMs. “Garbage in = garbage out” basically.
For good outputs, we need good inputs. But what does “good” content look like? To me, it’s succinct, hyper-specific, authentic, relevant and exact – i.e. worth sharing. Now we have another acronym to capture this handy checklist: “SHARE” – the opposite of each THORN dimension.
And how do we get there? If you’re an AI developer, please make sure the data you feed into model training sits, overall, closer to the “SHARE” side of the spectrum. If you’re a user, make sure to follow good usage practices when you write prompts or edit the AI’s outputs.
Avoid malpractices like blindly copy-pasting AI outputs. Instead, add your human touch, intuition and depth of experience, and always “DYOR”. Combine your human strengths smartly with AI’s capabilities (e.g. use it to accelerate your initial research, complement your ideas, give feedback, etc.). But don’t outsource your whole thinking to the machine: “use it or lose it” also applies to your “mental muscles” when working with LLMs.
For my own blogging, I follow an iterative style when working with AI, which I outline here in detail. My general rule of thumb is to start and finish every process with human input, with plenty of back-and-forth with the AI in between. Whether your “Turing Tingle” noticed it or not, AI plays a key role in my own content creation. It even helped me come up with the “witty” acronyms “THORN” and “SHARE”.
AI offers many opportunities beyond blogging (duh): it’s kinda like a personal “creative studio” accessible to anyone with a computer. You no longer need the resources of an “actual studio” to create content and express yourself. And that’s just the tip of the iceberg… There are countless use cases of this tech, some of which I explore here.
Beyond “AI Slop”: What Really Matters?
Practice makes perfect: with time and by applying the “THORN” and “SHARE” checklists, you will sharpen your antenna – like a doctor’s intuition. Using or thinking in terms like “AI slop” or “Turing Tingles” may make sense. But I think evaluating the overall quality of content is more helpful than asking whether something is “human” or “artificial.”
In the 70s, teachers and parents protested vehemently against calculators – the “AI” of that time. And we don’t care too much (anymore) about someone using a pocket calculator for math. (However, if “AI slop” really impairs our thinking skills when handled poorly, this goes far deeper than “unlearning mental math”…)
What do you think? Does AI involvement by itself already make or break content? Or is the real challenge to develop a reliable “bullshit radar” and info literacy? The latter, I’m sure, is one of the key skills for the “AI era”. What’s the one telltale sign that instantly triggers your “Turing Tingles”? Share your thoughts below or get in touch. If this post helped, please share it with a friend who’s also tired of AI slop…
Cheers,
John
