Practical AI for Everyone

How I integrate AI into everyday work and life. Use cases, tools and tactics.

TL;DR Summary:

With social media and AI chatbots like ChatGPT, we’re more “transparent” than ever – like it or not. With reflection and the right tools, we stay in control.

There’s this guy, let’s call him “Mike”, about whom I “know” a thing or two: that he proposed to his wife on a beach, what his department’s annual budget is, and what every meal he ever had looked like. How come? We’re “friends” on LinkedIn and Facebook, and he keeps his “friends” (I’ve never met him, though) “in the loop”. I don’t even want to know which private secrets he reveals to ChatGPT & Co…

Why am I telling you this? As a tale of caution: Our privacy and “personal brands” are among our most valuable assets. Unfortunately, these are “invisible”, so they have a natural tendency to slip out of sight, out of mind – until shit hits the fan. Oversharing – revealing too much or the wrong type of info (in)voluntarily – online is the surest and easiest way to undermine that.

The saying is truer than ever: “The internet never forgets”. Big social media platforms act as data krakens, and new (generative) AI systems/chatbots/agents etc. can learn from (and parrot) that data. Don’t be like “Mike” – I’ll show you how, no worries. Completely disappearing from the internet may be an option in some extreme cases; luckily, there are also less drastic ways to keep a low(er) profile…

Risks of Oversharing: Big (AI) Bro is Watching

Mike is no exception though, unfortunately. Nowadays, most people share very private info on social networks like Instagram, LinkedIn, Facebook etc. or their favorite AI chatbots. Ironically, everyone warns children against revealing too much about themselves on the internet (#StrangerDanger). Yet many adults seem not to realize this applies to them just as much…

In the “real world”, people don’t freely hand strangers private info like their address or party photos either, right? You would not tell your boss you’re skiving off work because you enjoy the sun and beach so much. Yet people post holiday pictures on Facebook while handing in a sick note at work – with their boss on their friends list. Bravo. Surely helps your “employability”… not.

It’s also well known that employers and business partners do their due diligence on your online presence (Google, LinkedIn etc.) (“KYC”). What will they find? Hopefully, info that is consistent with your (carefully crafted) image and the application you handed in. Or is it a TikTok with 10M+ views of you emptying a jug of sangria with a hose? (Also, what would ChatGPT answer when asked about you?)

AI-powered deepfakes, doxing or data mining bring (personal) cyber security threats to a new level: This tech can be used to infer deeper insights about you – your values, interests, hopes, fears etc. – from your online data (posts, pictures, etc.) which can be applied in malicious ways. Remember the Cambridge Analytica scandal? That was just a taste of what is possible with this tech.

There are countless cases of people who lost their jobs, relationships and more due to sharing the wrong info online – you may even know someone. Not to mention all the other scenarios, like identity thieves using your private info to create fake profiles, trick your friends/family or sell your sensitive data on the dark web without your consent etc.

Again, the internet – and especially the “large language models” behind your favorite AI chatbots – hardly forgets anything. Once a model is trained on your data, it’s “baked” into the network’s weights and can’t simply be erased like a row in a database; making an AI “forget” something (“machine unlearning”) is still an open research problem. With luck and “fine-tuning”, it’s somewhat possible, but as market-leading models get bigger and more expensive (billions!), do you think their providers will do that (or even retrain the models from scratch) for you?

We’re all just “erring humans” and I’m not “more Catholic than the Pope” either, but there’s a wide spectrum between these extremes. Nobody expects you or your image to be impeccable: rough edges here and there accentuate your uniqueness. Just don’t “jump the shark”. Building your personal reputation is a lengthy process; disfiguring it is a breeze; re-building it is a mammoth task.

Why We Reveal So Much About Ourselves

It seems that the most basic principle of human interaction – that sharing private info requires an appropriate level of trust – is negated when we turn on our digital devices… Why do we engage in such shortsighted behaviors on social media or “talking” to AI chatbots? Well, it’s a mix of external and internal factors that, in sum, just make it too easy and sweet to get on this slippery slope…

Reason 1: Communication necessitates sharing info (to a certain extent)…

It sounds trivial, but it’s difficult to communicate with anyone, if you don’t know anything about them. Thus, the revelation of personal info is the basic requirement for any social network (online or offline) to exist – especially those that connect a lot of strangers.

So social media platforms must facilitate info sharing (with tons of “gamification” etc.) and we must reveal (at least a bit) about ourselves to use them. It’s the same with AI chatbots: Have you ever noticed that they always have a few icebreakers/prompt templates on the start screen to get your creative juices flowing?

Reason 2: Digital platforms thrive when people overshare…

The more people interact on a platform, the better it does economically (“network effects”). Few things drive website traffic like good old drama or some sneaky insights into other people’s lives (“humans gonna human”). Often the most absurd content wins the spotlight.

Again, not too different from AI chatbots: the more data and feedback they get from real users, the faster they can develop/learn and become more capable. That’s why you may have already heard that the biggest AI platforms (ChatGPT, Claude etc.), with hundreds of millions of users, have a “competitive edge” thanks to “leveraging” these feedback loops.

Reason 3: People crave their “happiness hormones”…

As social creatures, we’re naturally wired to seek AAA, i.e., attention, admiration and approval. This triggers our reward system and like B. F. Skinner taught us: We do more of what rewards us… With these waves of feedback that social media platforms channel to the most “compelling content”, people are tempted to systematically trade their privacy for short-term gratification.

AI systems take this to the next level by adapting to our preferences, becoming Fata Morganas of everything we desire – with personalized ads, product/music/video/etc. tips or AI chatbot “girlfriends/boyfriends” that nonchalantly tap into our deepest needs and fears: perfect seduction. AI can be amazing, but it can also empower some creepy, Black Mirror-esque stuff.

I can’t stress enough the importance of not becoming emotionally dependent on AI chatbots that seem to “understand us”. “They” are not humans and should be used carefully for use cases like therapy, coaching, etc. While there’s potential for this tech to do good in these domains (when applied in a controlled manner with professional guardrails), there’s also the other extreme – like the tragic case of a teenager who took his own life after becoming attached to a chatbot.

Strategies to Protect Your Privacy (and Autonomy)

Luckily, it’s not a (completely) “lost cause”. These 3 tips should help you get started in your quest against these “dark arts of persuasive technology“:

Tip 1: Start with an “inventory” of your online presence.

Check all your (old) social media profiles (profile info, history of posts, media uploads etc.): What is visible to which contacts/users? Does your “content” (still) reflect your values and beliefs and overall (intended) image? Are there any “legacy posts” lurking at the bottom of your “pinboard” which may damage your reputation now? If so, delete them. If in doubt, better err on the side of caution…

While you’re at it, tweak your privacy settings for all your online accounts to make sure only the people you want to see your content can actually see it. This doesn’t just apply to humans, but also to AI: there are usually settings to control whether your info may be used to train AI systems. I deactivated that setting in ChatGPT, for example. You, too?

Tip 2: Think long-term (or at least use a simple “litmus test”).

We talked about the short-term factors driving us to act carelessly. An “antidote” is the conscious effort to factor long-term considerations into your decision-making. For example, use a guiding question like: “Will this content/post/chat message/etc. still be consistent with (or actively damaging to) my image in 1 or 5 years?” This lowers the risk of “unwelcome surprises” in your next “inventory” (Tip 1).

A practical rule of thumb for when the long-term thinking gets too much is this simple question (inspired by Warren Buffett’s “newspaper test”): “Would I feel comfortable if this message appeared on the front page of tomorrow’s newspaper?” If the answer isn’t a clear “Yes!”, don’t hit “send”.

Tip 3: Align content with context and pick the right medium.

Every digital platform has its own purpose and “unwritten rules”. Make sure your content always fits that context: for example, rather post that “romantic sunset” on Instagram instead of LinkedIn – “professional folks” don’t want to see it there… They may even assume you can’t communicate in a way geared to your target audience, which is a professional skill in itself. Can you think of some example posts that fall into this category? 😉

Thus, those “harmless” posts of yours can, in fact, harm your personal image and career. “Cancel culture” is also still “a thing”: Look at celebrities who get “roasted” now for what they tweeted ten years ago. The winds of what is popular or acceptable change frequently.

And again, regarding GenAI chatbots: only enter data into systems you trust, and always check/adjust the privacy settings first. If you really want to “discuss” something more “private” with a chatbot – being aware of the pros and cons – consider running a model locally on your PC rather than using a web service, for an extra layer of privacy.
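To make the “local” option concrete, here’s a minimal sketch of querying a locally hosted model through Ollama’s HTTP API. This is just an illustration under a few assumptions: it presumes you’ve installed Ollama (ollama.com), pulled a model, and that it’s serving on its default port 11434 – the model name “llama3” is only an example. The point: your prompt never leaves your machine.

```python
# Minimal sketch: ask a locally hosted LLM via Ollama's HTTP API.
# Assumes Ollama is installed and running locally (default port 11434);
# "llama3" is an example model name - use whichever model you pulled.
import json
import urllib.error
import urllib.request


def ask_local_llm(prompt: str, model: str = "llama3"):
    """Send a prompt to a local Ollama server and return its reply as a
    string, or None if no local server is reachable."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.loads(resp.read())["response"]
    except (urllib.error.URLError, OSError):
        # No local server running - deliberately no cloud fallback.
        return None


if __name__ == "__main__":
    reply = ask_local_llm("In one sentence: why keep private chats off the cloud?")
    print(reply or "No local Ollama server found - install and start it first.")
```

Note the deliberate design choice: if no local server is running, the function returns None instead of silently falling back to a cloud API – which would defeat the whole purpose of this tip.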

Bonus Tip 4: Know (and exercise) your rights!

Sometimes the milk is already spilled (or at least seems to be). Maybe there are no settings to control the leak of personal data, or a post can no longer be edited… Don’t give up too fast: in many cases, you still have an ace up your sleeve – your rights. (This is not legal advice, of course.)

Your options depend on which data protection and AI laws apply to you. In the EU, for example, the GDPR applies (among other laws), empowering you to demand that companies hand over or delete your data. This way, I once got a not-so-glorious Reddit message that was “immortalized” on the Internet Archive deleted… Check online resources to see which laws apply to you.

Wrap-up: Mind Your Digital Traces!

The main takeaway from all of this should be clear: We need to be conscious of the real implications of our virtual behaviors. Fortunately, there are “simple” strategies and tips (like the “newspaper test”) that help us mitigate the risks of oversharing.

The saying “if you’re in a hole, the first thing to do is stop digging” applies here big time. Only on that basis can any efforts to, for example, cultivate a personal brand come to fruition – or whatever your motivation is: avoiding identity theft, securing “employability”, protecting your mental health etc. Why do you care about this topic? Do you care more about it now? What changed your mind?

Anyway, I’m looking forward to reading your feedback in the comments and encourage everyone to learn from each other’s “experiences”.

Cheers,
John

2 responses to “3 Privacy Rules for AI Chats & Social Media (The Web Won’t Forget)”

  1. Anthony

    I like where you are going with the intent of this post. You have a small tie-in with AI – curious why you didn’t go into local hosting of AI/LLMs? I’ve seen your other post about using Ollama; figured that would be another great bullet point to add here. Your tips are great social tips and phrased very similarly to how I remind my kids, so great content. – Anthony

    1. Thanks, Anthony! Great to hear it resonated – and kudos that you’re teaching your kids those habits early. Cheers, John

I'm John

John Isufi, the author of Upward Dynamism, with the mission to democratize practical AI knowledge.

I'll help you stand taller on AI's shoulders – whether you are here to up your skills, find the right tools, lead change or muse on the bigger picture. Every week, I share lessons from the field: I work where human needs meet tech adoption, with years of experience leading AI transformation.

See you soon again!

44,991 smart visits