The danger of outsourcing your intellect to an entity that has never lived

Critical thinking will be the differentiator in the new era of AI. You wake up, open your computer, and feel that weight in your chest. The to-do list feels like a monster that grows while you sleep. Piled-up emails, reports to deliver, posts to create, decisions that demand an energy you simply no longer have. It’s modern burnout knocking at the door. In this landscape of extreme fatigue, artificial intelligence emerged as a siren song. The promise was tempting: “hand over your tasks to the machine and get your life back.” But along the way, something incredibly valuable began to be stolen from us, and almost no one noticed.

Brené Brown, in reflections echoing her work on vulnerability and courage, brought a perspective that cuts like a knife in her book Strong Ground. She mentioned that if someone had asked her what she expected from AI when solving a human need, the answer would be simple: AI should free up space so that she could think, nothing more. In other words, imagine if the innovations developed today with AI washed the dishes or did the boring, repetitive household chores. What we are seeing, however, is the opposite. AI is being used to replace the act of thinking itself, stripping humans of their most basic right and greatest competitive advantage: the cognitive process of creating, reasoning, and even feeling.

We are living in a moment where professionals from various fields are delegating the “soul” of their work to algorithms, believing they have found salvation. But there is a silent risk here. Imagine a person who spent their entire life locked in a library. They have read every book in the world, from medical treatises to engineering manuals, including all of classical literature. This person can describe the taste of an orange and the pain of grief with technical perfection. But they have never tasted a fruit and have never lost anyone. They have all the information, but zero experience, zero emotion, and absolutely no intuition. That is AI. And trusting it 100% is like asking that isolated librarian to guide you through a dangerous forest in real life.

The illusion of absolute competence

The big problem today is not the technology itself, but our blind dependence. The New York Times and The Guardian have reported cases that should serve as a red alert for all of us. The phenomenon of AI “hallucination” is not a temporary glitch that will be fixed soon; it is an intrinsic characteristic of how these models function. They are probabilistic predictors of words, not seekers of truth.

One of the most emblematic cases happened in the legal field in the United States. Attorney Steven Schwartz used ChatGPT to build a lawsuit against the airline Avianca. The AI, with its characteristic synthetic authority, invented entire legal precedents. It cited cases that never existed, with fictitious judges’ names and case numbers. Schwartz, trusting in technological “salvation” to speed up his work, did not check the sources. The result? A heavy fine, public humiliation, and an indelible stain on his professional career. He handed over his legal reasoning to a tool that has no sense of ethics or consequence.

In medicine, the risk is even more visceral. Health researchers have warned about the use of LLMs (Large Language Models) for diagnosis without rigorous supervision. A study published by the journal Nature highlighted that, while AI can assist in triage, it fails miserably when interpreting nuances that only the clinical eye and human experience can catch. There have been reports of AIs suggesting treatments based on absurd statistical correlations found in polluted training data. When a doctor stops reasoning because they trust the machine, they stop being a healer and become a mere system operator, putting lives at risk.

This erosion of critical capacity is what some experts call “cognitive atrophy.” If you don’t use your muscles, they weaken. If you don’t use your ability to structure an argument, to doubt a premise, or to connect complex ideas, your mind grows dependent. We are creating a generation of professionals who know how to “prompt,” but who cannot explain the why behind the choices the AI made.

AI is an assistant, not a master

To understand how we should look at this technology without falling into the abyss of mediocrity, we need to change the lens. Artificial intelligence is not a substitute for human talent. It is, at most, a brilliant intern, albeit one that lies compulsively and has no sense of real context, as my great friend Ricardo Ost, founder of Neurix, puts it.

Recent articles from the Harvard Business Review and tech experts like Ethan Mollick advocate for the idea of “human-in-the-loop.” The idea is that AI only generates valid and truthful results when guided by an orchestrator. Who is this orchestrator? It is the specialist who possesses what the machine will never have: lived repertoire.

A good specialist knows exactly what to ask, but above all, they know how to identify when the answer is wrong or when it lacks “soul.” If you ask an AI to write a text about leadership, it will deliver clichés about empathy and resilience. If a real leader writes about leadership, they will talk about their experiences, pains, and joys in daily life. We could put it this way: the AI has the rind; the human has the pulp.

The true competitive advantage in today’s market won’t be knowing how to use AI, but knowing where it ends and where you begin. The value lies in curation. In a world of abundant machine-generated content, the scarcity of critical thinking and human truth will become the most expensive asset on the planet. Those who use the tool to accelerate processes but maintain a firm grip on strategy and validation are the ones who will lead. The others will merely be echoes of an algorithm repeating itself exhaustively.

The role of the new professional

We need to stop seeking “salvation” in tools. Technology is just an amplifier. If you are a mediocre professional and use AI, you will simply become a mediocre professional faster and at a larger scale. If you are an excellent professional, AI can remove the mechanical work so that your excellence shines in areas the machine cannot reach.

The mistake many make is believing that AI has a deep understanding of the world. It doesn’t. It doesn’t understand the fear of losing a client, the pressure of a board of directors, or the joy of closing a transformative deal. It only knows which words usually follow other words when these subjects are discussed. Blindly trusting this probability is to abdicate your own intelligence.

The path to success in the age of artificial intelligence involves three pillars that no software update will provide:

  1. Healthy Skepticism: Never accept an AI’s answer as absolute truth. Treat every interaction as a hypothesis that needs proof.
  2. Deep Technical Mastery: You can only orchestrate an AI if you know more about the subject than it does. If you don’t know the rules of the game, you won’t know when the machine is cheating or failing.
  3. Experience-Based Intuition: Intuition isn’t magic; it’s your brain processing thousands of past experiences in milliseconds. AI has no past; it only has training data. Your intuition is your shield against the cold and often flawed logic of the machine.

The voice of experience over the synthetic substitute

There is no shortage of authoritative voices warning of this necessary balance. Sam Altman, CEO of OpenAI, has stated in several interviews that AI should be seen as a productivity enhancement tool, not an autonomous decision-making entity. He himself acknowledges the limitations in deep logical reasoning that current models still possess.

Researchers like Dr. Joy Buolamwini of MIT have focused their work on showing how the biases and limitations of AIs can be harmful if there is no constant and critical human supervision. The conclusion of almost every serious study on the subject converges on the same point: AI is an excellent research assistant, a great draft organizer, and an accelerator of repetitive tasks. But it is a terrible final decision-maker.

The future belongs to the orchestrators. To those people who, as Brené Brown suggested, let the AI take care of what is mechanical so that the human brain can go back to doing what it does best: feeling, connecting, and creating what does not yet exist. The risk is not in the machine thinking like a human, but in the human starting to think like a machine—predictably, soullessly, and totally dependent on an external command.

Don’t let the apparent ease kill your ability to reason. Use technology to clear the path, but make sure it is you who is walking. After all, the AI may have read all the books on how to live, but it will never have the courage to take the first step. That belongs exclusively to you.

At the end of the day, the question that remains for each of us is not how many tasks the AI completed for us, but how much of our original thinking was preserved in the process. If the answer is “almost nothing,” perhaps it’s time to take back the reins. Salvation isn’t in the code; it’s in the consciousness of the one using it.
