



There’s a semi-serious joke: Always be polite to AI—it will remember when it takes over the world.
And yet, strangely, many of us already act as if it might. We say "thank you" to chatbots, apologise when we rephrase a prompt, or even feel a slight twinge of guilt when rejecting AI-generated suggestions.
It’s an odd dynamic. We know AI isn’t sentient. We know it doesn’t care. Yet we still interact with it as if it does.
But what happens when it goes the other way? What happens when the new habits we develop working with AI start to shape how we interact with the real world, other people, and even how we see ourselves?
First, a brief reminder: AI isn’t thinking - it’s predicting.
Large Language Models (LLMs) don't "understand" the world. They aren't engaging in deep thought or genuine creativity. They're advanced autocomplete machines, trained on enormous datasets to produce words based on probabilities. If AI seems intelligent, it’s only because it's exceptionally good at mimicking human patterns.
Yet that illusion of intelligence is powerful. It makes AI feel like a real collaborator, like it is invested in our success, even though in reality it lacks intuition, lived experience, and genuine understanding.
A recent Microsoft Research study puts hard data behind this suspicion: the more we trust AI, the less critically we engage with it. Our cognitive effort subtly declines, replaced by a passive acceptance of AI-generated ideas.
And that’s the real danger - not that AI is intelligent, but that it feels intelligent enough for us to stop thinking for ourselves.
We’ve seen a version of this already. Think about what happened with satnavs. At first, the convenience was obvious: you just followed the instructions and got where you wanted to go. Over time, though, people relied on them so completely that they stopped thinking about the routes at all. When the satnav failed, they realised they had no idea where they were - even on routes they had taken countless times before.
And it is this natural human tendency - to take the path of least resistance - that the Microsoft study is warning about: when we outsource cognitive processes to AI, we are in danger of losing the ability to perform them ourselves.
This idea of losing ourselves to technology isn’t new. Literature and film have warned us for decades:
1. "The Machine Stops" (E.M. Forster, 1909)
In this eerily prescient story, humanity lives underground, entirely dependent on an all-powerful machine for survival. Personal interaction has dwindled, and when the machine eventually fails, society collapses because people have lost the ability to think and act for themselves.
The lesson? When we surrender too much control to technology, we risk losing fundamental human skills.
2. "Fahrenheit 451" (Ray Bradbury, 1953)
In a world where books are banned, people are numbed into compliance through shallow entertainment. Complex thought has been replaced by instant gratification, and as a result, independent thinking is all but extinct.
The lesson? If we stop questioning the information we’re given, we become easy to manipulate.
3. "Surrogates" (2009, film)
In this Bruce Willis sci-fi thriller, people live through robotic avatars, experiencing the world without ever leaving their homes. Their real bodies, neglected and weakening, are hooked up to machines, slowly deteriorating.
The lesson? The more we replace real-world skills and experiences with artificial ones, the less capable we become.
Interestingly, the Microsoft study comes from a tech leader heavily invested in AI—highlighting a problem the company itself helped create. The researchers suggest something deeper and subtler: the more we lean on AI, the more we risk drifting into a mental "fog," becoming passive consumers rather than active creators.
Is it too much of a stretch to connect this cognitive passivity with the strange inertia many organisations experience today—a kind of post-lockdown fugue state? When we emerged physically from that strange period, many remained stuck mentally and creatively – disconnected from the passion they once felt. AI might unwittingly be amplifying this invisible "fog."
If AI is here to stay, how do we protect what's essential? The answer lies in self-awareness and personal development. Only by being conscious of and improving our own cognitive skills can we ensure our interactions with AI remain intentional collaboration rather than passive acceptance. These are the skills we must preserve:
• Lateral Thinking: AI can remix existing ideas, but true originality comes from human intuition and understanding.
• Curiosity & Research: Don't let AI dictate what you know—seek inspiration beyond its algorithms.
• Critical Thinking: AI-generated content sounds plausible—but is it right? Question everything.
• Emotional Intelligence & Human Collaboration: AI doesn’t have empathy, intuition, or relationships. We do.
As professionals navigating a post-lockdown world, actively nurturing these skills isn't just desirable—it's critical to clearing that lingering cognitive haze and reconnecting meaningfully with our work and those around us.
Imagine a generation raised entirely on AI-driven problem-solving. What happens to their ability to think critically, solve problems creatively, or even tell original stories?
If we don't consciously engage in creative thinking and push ourselves to drive better, not just faster, results, our capacity to innovate will quietly erode.
AI is most potent when paired with strong human skills. The best results come from those who actively think, challenge it, refine its outputs, and push beyond what it generates by itself. In a world clouded by inertia, risk aversion and low productivity, AI alone will not change this; we must make a conscious effort to think, innovate, and question. And this goes beyond the professional sphere into a deeply important personal point – adapting to and thriving in a world with AI will mean recognising our own value, committing to continuous self-improvement, and understanding the importance of not just consuming, but perpetually developing our own self-mastery.
AI can genuinely augment creativity, but only if we approach it thoughtfully and intentionally.
It may seem strange, but perhaps treating AI as intelligent - even when it isn't - can positively influence how we interact with it. If we allow ourselves to grow passive, we risk losing vital skills and disconnecting further from those around us. If we challenge AI, refine its outputs, and integrate them meaningfully into our thinking, we unlock genuine creative potential.
Perhaps the real challenge isn't AI itself, but navigating our relationship with it - and with ourselves. It can blow away the cobwebs and lift us out of the fog, or it can lull us back to sleep.
The question isn't whether AI will replace humanity. It’s about who is willing to take The Road Less Travelled. Each of us has a choice – will we let AI erode, or enhance, what truly matters: our creativity, our critical thinking, and most importantly, our humanity?
If you’re ready to harness the real potential of AI without sacrificing critical thinking, curiosity, or creativity, contact us today. Let's clear the fog and rediscover what makes your team extraordinary.