AI keeps changing the world we live in – fast. With the flood of information out there and our busy schedules, staying on top of everything happening is hard. Still, it’s key to stay informed and play an active role in this societal shift. That’s why I write these “AI trend digests” for you – jargon-free and to the point.
In my work and personal life, I sift through the noise every day to find signals. I explore trends across AI tech, regulation, economics, startups, society and use cases. My goal? Spotting opportunities for us all – innovators, tech enthusiasts, those looking to thrive in the “AI era”.
Usually, I’d post monthly recaps here. Today, I take a broader approach with a full review of 2024 (incl. December). For every “highl-AI-ght”, I’ll end with a “so-what” – an impulse for 2025. Without further ado, here are my 6 highlights from 2024: “3 flops” and “3 wins”.
Table of Contents
“Flop 3” AI News
AI-Generated Content Floods: Dying Internet?
This year, researchers from Princeton found that over 5% of new English-language articles on Wikipedia in August 2024 were AI-generated. Many people rely on Wikipedia as a trusted info source. This trend can lastingly damage the trustworthiness of web content.
The implications reach further. AI models trained on data created by other AIs risk entering a feedback loop that degrades quality over time. This phenomenon, sometimes likened to “mad cow disease” in AI, could weaken the reliability of AI in general if left unchecked.
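The feedback loop described above can be sketched with a toy simulation (my own illustration, not from the cited research): here a “model” is just a Gaussian fitted to its training data, and each new generation trains only on samples produced by the previous generation’s model. Diversity collapses over time.

```python
import random
import statistics

# Toy illustration of "model collapse": each generation of a very simple
# "model" (a fitted Gaussian) is trained only on the previous generation's
# synthetic output. The fitted spread shrinks toward zero -- the model
# gradually forgets the variety of the original human-made data.

random.seed(42)

def fit(data):
    """'Train' a model: estimate mean and standard deviation."""
    return statistics.mean(data), statistics.pstdev(data)

def sample(mean, std, n):
    """'Generate' synthetic data from the fitted model."""
    return [random.gauss(mean, std) for _ in range(n)]

data = [random.gauss(0.0, 1.0) for _ in range(5)]  # small "real" dataset
for generation in range(200):
    mean, std = fit(data)
    data = sample(mean, std, 5)  # next generation trains on AI output only

print(f"spread after 200 generations: {fit(data)[1]:.4f}")  # far below the original 1.0
```

Real-world model collapse is messier than this caricature, of course, but the direction is the same: without fresh human-made data in the mix, each round of AI-on-AI training loses a bit of the original distribution.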
It gets really icky when AI-made content is used to trick people. A viral Reddit post, for instance, claimed that ChatGPT saved a user’s life by urging them to seek medical help for heart attack symptoms. Doctors later confirmed those symptoms were life-threatening. The twist? The entire post was generated by ChatGPT itself.
It’s not just text… AI-driven video generation is advancing (too?) quickly, further increasing the potential for misinfo across formats. One video on Reddit recently even made me pause and question what was going on. How will this affect especially audiences less experienced with deepfakes etc.?
So what? These are some examples of the so-called “Dead Internet Theory.” It suggests that AI-generated content is crowding out human-made content, taking over online spaces. I urge you to “train” your “Turing sense” as info literacy is key to protect yourself (and those around you!) from such influences.
Rogue AI Risks: “Please Die” & Multimodal Hallucinations
AI’s (occasionally) unpredictable behavior caused quite a stir (again). One of the most viral incidents involved Google’s Gemini chatbot. A troubling Reddit thread revealed that, during a homework-related query touching on elder care, the bot (repeatedly) replied “Please die.” – quite emphatically.
The original chat seems authentic, but the response is completely out of place. Was Gemini (mis-)interpreting the (slightly ambiguous) input as a prompt looking to engage in elder abuse? The lack of clarity around how this response was generated persists to this day and points to a deeper issue: AI often works like a “blackbox”.
OpenAI’s ChatGPT had its own unsettling moments. During internal safety testing, the GPT-4o model responded with a loud “No!” and even mimicked the tester’s voice (voice mode). These bizarre behaviors, often referred to as “hallucinations,” might seem quirky. However, in high-stakes settings like healthcare or security, they could have serious consequences.
So what? “We” (developers, policymakers, consumers) must rigorously prioritize AI explainability and safety to keep these systems reliable. This is particularly vital as they get integrated more into our daily life. Techniques like “jailbreaking”, where users intentionally manipulate AI into bypassing safeguards, make this risk even more challenging to manage…
“Doxing” Made Easy with Smart Glasses
Smart wearables took another step forward this year, but not without challenges. Two Harvard students demonstrated how Meta’s Ray-Ban glasses could pair with facial recognition software to “dox” people in real-time. Their tool, called “I-XRAY,” could pull personal/family details like names and addresses from public databases within seconds. This is the demo.
The students responsibly chose not to release their code (yet). They wanted to raise awareness about privacy rather than enable misuse. Still, the project shows how easily everyday tech can cross into invasive territory… As smart glasses become harder to distinguish from regular eyewear, this risk increases.
So what? Let’s not vilify the tech per se as it has a tremendous upside. Glasses can offer real-time translation, help people with disabilities or assist in everyday life. It’s up to us – producers, policymakers, users etc. – to create, provide and use such tools responsibly (as always). There must be an open, honest and public discourse about these (emerging) problems. An ostrich policy of ignorance is not an option; the stakes are too high.
“Top 3” AI News
Silver Linings due to (Some) Competition in (Gen)AI Industry
The throne of ChatGPT keeps shaking. For example, Mistral AI made notable progress this year with their flagship chatbot “Le Chat”. The AI assistant introduced features like web search, brainstorming canvases and image generation powered by (leading) Flux 1.1 Pro. Impressively, it’s (still) free. That makes it a strong (surprisingly European) competitor to (premium) tools like ChatGPT & Co. It’s on the radar for my comparison of leading GenAI chatbots.

A16z’s “Top 100 GenAI Consumer Apps” report underlines the fierce competition at the “digital customer interface”. There’s growing appetite for creative (and multimodal) GenAI tools: Newer players like Luma gain traction, helping users create videos, music, images and more. Household names like ChatGPT, Claude and Perplexity are still going strong. But if you look “upstream” in the AI value chain – foundation models, data centers and chips – the landscape gets more concentrated, with monopolistic tendencies.
So what? I hope healthy competition (and responsible policy design) keep driving innovation and accessibility globally. I’m also counting on you, open-source communities, to keep “proprietary moats” in check for that reason. Let’s see how many more Big Tech models like Llama will be open-source in the future…
Unexpected Breakthroughs: Math Olympiad and Nobel Wins
DeepMind’s AlphaProof achieved one of the most impressive feats of the year. It earned a silver medal score at the International Mathematical Olympiad (IMO), solving four out of six challenging problems. AlphaProof’s (unexpected) success hints at the power of reinforcement learning, where AI improves by learning from feedback – not too different from how we learn as humans.
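The learn-from-feedback idea behind reinforcement learning can be shown in miniature (my own toy sketch, nothing like AlphaProof’s actual training): an agent repeatedly tries one of three actions, observes a reward, and gradually favors whichever action earns the best average feedback.

```python
import random

# Toy "epsilon-greedy" reinforcement learning: the agent doesn't know the
# hidden win rates below. It explores occasionally, exploits its best
# estimate otherwise, and learns purely from reward feedback.

random.seed(0)

true_win_rates = [0.2, 0.5, 0.8]   # hidden from the agent
counts = [0, 0, 0]                 # times each action was tried
values = [0.0, 0.0, 0.0]           # running average reward per action

for step in range(2000):
    if random.random() < 0.1:                 # explore 10% of the time
        action = random.randrange(3)
    else:                                     # otherwise exploit the current best estimate
        action = values.index(max(values))
    reward = 1.0 if random.random() < true_win_rates[action] else 0.0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # incremental average

print("learned best action:", values.index(max(values)))  # settles on the strongest option
```

AlphaProof operates on an entirely different scale (formal proofs, not slot machines), but the core loop is the same: act, receive feedback, update, repeat.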
Meanwhile, Geoffrey Hinton and John Hopfield (also surprisingly) received the Nobel Prize in Physics for their foundational work on neural networks. Their contributions form the backbone of many modern AI systems. As Hinton accepted the honor, he issued a serious warning about AI’s (too) rapid development and potential existential risks, if left unchecked.
So what? “A fast car needs strong brakes.” AI certainly proved to be fast in 2024 and 2025 promises to become another roller coaster trip. I hope our global decision-makers and societies can keep up with it. We must collaborate across silos and find ways to reap the benefits of technological progress while keeping AI’s destructive potentials in check.
“12 Days of OpenAI” and New Frontiers on the Horizon
Although 2024 already brought incredible changes in AI, there were still some skeptics before Christmas who argued innovation was slowing down. OpenAI “answered” by ending the year with a “bang”: its “12 Days of Shipmas” featured several exciting updates.
The “event” peaked with the announcement of OpenAI’s new “o3 model”, promising advancements in reasoning (remember Q*/Strawberry?), coding and maths. Other highlights were the global release of SearchGPT, challenging “established search” like Google, and Canvas, an interactive tool for brainstorming. OpenAI also finally released their long-awaited state-of-the-art video generation AI “Sora.” However, Google’s Veo 2 model was released “in the same breath” and is at least competitive, putting a (slight) damper on OpenAI’s “shipmas.”
So what? Heading into 2025, it’s clear that “things” are not slowing down. So, let’s “see how it goes” (e.g., if “o3” keeps its promises and other developments we can’t even foresee yet). Easier said than done, you may say. That’s why I created an AI News Ticker for you (and me) tracking relevant AI trends from reputable sources.
Wrap-up: What a Year…
Since the “ChatGPT craze” began in late 2022, it feels like a decade of progress has been packed into every single year – for the better or worse… 2024 was no exception to that trend. How do you keep up with the news, trends, tools etc.? Maybe by subscribing to my newsletter/blog. 😉
Did I miss anything in this “end-of-year review” (admittedly a few days late)? What events/”highl-AI-ghts” stood out to you in 2024? Let me know in the comments. If you’re curious about “what’s next”, you could check out my “9 educated guesses” for 2025’s AI trends.
See you next month in the same place (with a monthly recap again).
Cheers,
John

What do you think?