Artificial intelligence doesn’t understand reality. It doesn’t think, feel, or comprehend in any meaningful sense. It generates text, predicts words, and rearranges existing human knowledge into coherent structures, but at no point does it know what it’s doing.
And yet, it feels like it does. AI-written articles sound authoritative. AI chatbots respond with confidence. AI-generated explanations appear intelligent. The illusion is convincing enough that people—brilliant people, even—begin to believe that AI possesses some form of understanding.
This is the mirage of AI intelligence, a carefully engineered deception created by statistical models that are exceptionally good at faking comprehension.
To grasp why AI doesn’t truly understand anything, we need to separate simulation from actual cognition.
AI simulates intelligence in the same way a well-written book simulates reality—it gives the appearance of thought, without actually thinking. When you read a novel, the characters feel real, their emotions resonate, and the world they inhabit seems tangible. But you know, deep down, that it’s all just ink on a page.
AI works the same way. It is a hyper-efficient mimic—a machine trained on billions of words, capable of rearranging them into new sentences that seem meaningful. But those words are not born from experience, reflection, or intent. They are the result of an advanced statistical word-prediction engine, nothing more.
If you ask AI, “What is love?”, it will give you an eloquent response, possibly quoting poets, scientists, or philosophers. But AI does not know love. It has never felt longing, heartbreak, or joy. It can describe emotions, but it will never experience them.
This is where the illusion begins to break down.
AI-generated text is not knowledge. It is statistically probable language.
This is why AI-generated writing can be so eerily human-like, yet often completely wrong. It does not differentiate between truth and falsehood—it only differentiates between likely sequences of words.
Consider the following example:
Human Thought Process: “The sun is hot because it emits electromagnetic radiation in the form of light and heat.”
AI Thought Process: “The sun is [statistically probable word] because it [statistically probable phrase].”
The AI model might output something correct, or it might confidently generate complete nonsense that just sounds plausible.
For example, if an AI model is trained on texts where “the moon emits light” appears frequently enough, it might one day respond:
“The moon is a source of light, radiating energy to Earth.”
This is, of course, wrong—the moon reflects light, it does not emit it. But to AI, the phrase feels correct, because it’s a statistically common arrangement of words.
This is not intelligence. It’s an advanced game of autocomplete—a hyper-sophisticated version of what your phone does when you type “Happy” and it suggests “birthday.”
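To make the autocomplete comparison concrete, here is a deliberately tiny sketch in Python: a bigram model that counts which word most often follows which and always picks the most frequent continuation. The training sentences are invented for illustration, and real systems use neural networks over vastly larger corpora, but the basic move (predict a likely next word, with no notion of truth) is the same.

```python
from collections import Counter, defaultdict

# A toy bigram "autocomplete": it counts which word follows which in a tiny,
# made-up corpus, then always emits the most frequent continuation.
corpus = (
    "the sun emits light and heat . "
    "the sun emits light that warms the earth . "
    "the moon emits light at night . "      # a misconception baked into the "data"
    "the moon emits a pale light . "
    "the moon reflects light from the sun . "
)

follows = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def complete(prompt, length=4):
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])  # pick the most probable next word
    return " ".join(out)

print(complete("the moon"))
# -> "the moon emits light and heat"  (statistically likely, factually wrong)
```

Run it, and it cheerfully completes "the moon" with "emits light and heat," because that is what its data made statistically likely, not because it is true.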
If AI is so obviously a probability machine, why do so many people—including experts—fall for the illusion that it “understands” things?
Three key factors contribute to this deception: our instinct to equate fluent language with competent thought, our habit of anthropomorphizing anything that talks back to us, and our tendency to mistake confidence for accuracy.
These cognitive biases make AI feel far more capable than it actually is. And this misplaced trust has real-world consequences, from AI-generated misinformation to corporate and governmental reliance on flawed machine reasoning.
John Searle’s famous Chinese Room Argument perfectly illustrates why AI lacks true understanding. Imagine someone who speaks no Chinese, locked in a room with a rulebook that specifies which Chinese characters to send back in reply to the characters slipped under the door. The replies look fluent to the native speaker outside, yet the person inside understands nothing.
This is exactly how AI operates. It takes inputs, applies rules of probability, and generates outputs—but at no point does it understand what it is saying.
AI is the ultimate Chinese Room—a machine capable of simulating intelligence while possessing none of its fundamental qualities.
AI feels intelligent because it is designed to feel intelligent. But at its core, it is nothing more than an advanced illusionist, pulling words from a statistical void and arranging them into coherent patterns.
All it does is manipulate language well enough to fool us into believing it understands.
The next sections will explore why AI’s lack of embodied experience, real-world interaction, and moral reasoning further reinforce its fundamental disconnect from reality. But for now, the key takeaway is simple:
AI doesn’t understand the world. It only understands how to arrange words in a way that makes us think it does.
Artificial intelligence appears to understand the world. It speaks in full sentences, delivers seemingly insightful analyses, and can even generate complex responses to abstract questions. But appearances are deceiving.
John Searle’s Chinese Room Argument provides a perfect analogy for how AI functions—and why it doesn’t actually comprehend anything.
Imagine a person locked inside a room. They don’t speak a word of Chinese, but they have a giant rulebook explaining how to respond to Chinese characters with other Chinese characters in a way that makes sense to a native speaker outside the room. When a note is slipped under the door, they consult the book, find the correct response, and pass it back out.
To the person outside, the responses seem fluent. But inside the room? No actual understanding is taking place. The person inside is just following instructions, manipulating symbols without knowing what they mean.
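The room’s procedure is simple enough to write down. Here is a toy version in Python; the rulebook entries are invented stand-ins, but the point is that the code matches incoming symbols to outgoing symbols without any representation of what either means.

```python
# The Chinese Room as a few lines of code: a lookup table of symbol patterns.
# The entries are invented stand-ins; nothing here "knows" Chinese.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "The weather is nice."
}

def room(note: str) -> str:
    # Pure pattern matching: consult the rulebook, pass the prescribed reply back out.
    return RULEBOOK.get(note, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(room("你好吗？"))  # fluent-looking output, zero understanding inside
```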
This is exactly what AI does. It doesn’t think about the meaning of the words it generates—it follows statistical patterns derived from its training data. It doesn’t actually “know” what a dog is, or what democracy means, or why death is tragic. It just puts words together in ways that match what it has seen before.
To the average person, AI-generated text looks like a genuine expression of knowledge. But it is nothing more than symbolic manipulation at scale—a sophisticated parrot repeating phrases without any concept of what they signify.
Human beings experience reality directly. We see, hear, touch, taste, and smell. These sensory inputs shape our understanding of the world, allowing us to build a complex, interconnected model of reality in our minds.
AI has no such capability.
When AI describes a sunset, it isn’t recalling an actual sunset. It is mimicking the words humans have used to describe sunsets in the past. It can generate poetic descriptions of orange and pink skies, the way light reflects off water, the quiet hum of the evening air—but it has never seen a sunset. It has never stood on a beach, felt the wind, or experienced the moment firsthand.
This means that everything AI generates is secondhand knowledge—text assembled from other text, completely disconnected from direct experience. It’s a library without a librarian, a voice without a body, a writer without a life lived.
Because AI lacks real-world sensory experience, it often fails at the most basic common-sense reasoning tasks.
Case Study: AI and Physical Interactions
If you ask an AI,
“If I put a cup of coffee in the fridge for a few hours, what will happen?”
A properly trained model will probably say,
“The coffee will cool down as it absorbs the colder temperature from the fridge.”
That sounds correct. But now ask it something slightly different:
“If I put a cup of coffee in the fridge, will it still be hot after six hours?”
An AI model might incorrectly say:
“Yes, the coffee will remain hot, since the fridge keeps things at a consistent temperature.”
Here’s the issue: AI lacks real-world interaction with objects, so it sometimes fails to track simple cause and effect. It has read that fridges keep their contents at a constant temperature, but it doesn’t intuitively grasp that a hot object placed in a cold environment cools down over time.
Another example:
“If I hold a book in my hand and then let go, what happens?”
A well-trained AI will say,
“The book will fall due to gravity.”
But ask something slightly more complex:
“If I throw a book straight up, where will it go?”
Some AI models have failed this simple question, generating responses like:
“The book will continue moving upwards unless it encounters resistance.”
This mistake happens because AI doesn’t experience gravity—it only reads about it. It doesn’t have an instinct for physical forces because it has never lived in a physical world. It cannot predict the way objects move based on a lived, embodied understanding of motion, weight, and force.
Humans don’t just learn about the world—we interact with it.
AI has none of this. It knows what words typically go together, but it does not understand the world those words describe. This is why AI struggles with basic, real-world logic in ways that seem absurd to humans.
Example:
“Can you eat a car?”
“No, cars are not edible.”
“If I grind up a car into small enough pieces, can I eat it?”
“Yes, if the pieces are small enough, a person could consume them.”
Why? Because AI does not have an instinct for biological survival. It does not understand toxicity, digestion, or basic bodily functions the way a human does. It knows that “small enough pieces” and “eating” have appeared together in training data, so it guesses that the answer is yes—even though the actual answer is obviously no.
These types of errors may seem small, but they reveal a fundamental truth: AI does not have an internal model of reality. It only has a statistical model of language.
Since AI does not experience reality, it cannot learn from experience the way humans do.
This is why AI cannot innovate, cannot discover, and cannot push beyond its dataset. It is permanently trapped inside a prison of pre-existing human knowledge, incapable of forming insights from direct observation.
AI-generated text feels intelligent, but it is an illusion.
No matter how advanced AI becomes, it will always be an observer of words, not an observer of reality. And as long as it lacks direct experience, common sense, and embodied understanding, it will remain an imitation of intelligence, never the real thing.
One of the most disturbing aspects of AI-generated text is its tendency to hallucinate—that is, to generate false, misleading, or entirely fabricated information with absolute confidence.
This isn’t a minor flaw. It’s not a bug that can be patched or a glitch that engineers will eventually fix. It is a direct consequence of how AI generates text.
AI does not verify information—it predicts words based on probabilities. It doesn’t care whether something is real, factual, or even logically coherent. As long as the sentence it produces sounds plausible based on its training data, it will present it as if it were true.
This is why AI models frequently invent citations that don’t exist, misattribute quotes, fabricate events, and present speculation as established fact.
The result? A system that can generate misinformation faster and more convincingly than any human liar ever could—without even knowing it’s lying.
AI models are built to generate text that sounds fluent and coherent—not necessarily text that is true.
When you ask an AI a question, it doesn’t go and “look up” an answer the way a search engine does. Instead, it predicts the most statistically probable sequence of words based on patterns in its training data.
If an AI model doesn’t find an answer in its training data, it will generate one anyway, because it is programmed to always return an output—even when the output is nonsense.
Many people mistakenly believe AI works like a more advanced Google—that it “searches” for information and presents the best answer.
That is completely wrong.
AI does not retrieve information—it generates it.
This is why AI models can confidently invent research papers, misattribute quotes, and generate fake news stories—they don’t pull from a factual database; they make things up based on patterns in human writing.
The longer you let an AI generate text, the more it drifts into pure fabrication.
If you ask it to write a one-sentence fact, it might get it right. But if you ask for a 500-word explanation of an obscure historical event, it will inevitably start generating false details.
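A toy calculation shows why length matters. Assume, purely for illustration, that every sentence the model writes carries a small, independent chance of containing a fabricated detail; those small chances compound quickly over a long answer. The 3% figure below is an assumption, not a measured hallucination rate for any real model.

```python
# Toy illustration of drift: assume each sentence has a small, independent
# chance of containing a fabricated detail. The 3% rate is an assumption.
PER_SENTENCE_ERROR = 0.03

for sentences in (1, 5, 25, 50):
    p_any_error = 1 - (1 - PER_SENTENCE_ERROR) ** sentences
    print(f"{sentences:>2} sentences -> {p_any_error:.0%} chance of at least one fabricated detail")
```

Under that assumption, a 25-sentence answer already has better-than-even odds of containing at least one invented detail, and a 50-sentence one is almost certain to.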
For example:
User: “Tell me about the Treaty of Lisbon in 1824.”
AI: “The Treaty of Lisbon in 1824 was a diplomatic agreement between Portugal and Spain that resolved territorial disputes in South America. It was signed by King João VI and King Ferdinand VII, leading to the stabilization of colonial boundaries.”
That sounds reasonable. The only problem? There was no Treaty of Lisbon in 1824.
The AI fabricated an event that never happened, simply because the words “Treaty of Lisbon” and “1824” appeared statistically reasonable together.
This is the AI hallucination problem in action—the creation of a false reality, dressed up in the language of credibility.
AI is already causing major problems in academia, where researchers and students are using it to generate citations—only to find that many of them are completely fake.
Case Study: AI-Generated Fake Papers
This is an existential threat to academia. If people begin using AI-generated papers to train future AI models, we will enter a loop of misinformation where knowledge is based not on reality, but on AI-generated fabrications.
News organizations are increasingly using AI to generate articles. The problem? AI doesn’t fact-check itself—it simply generates what seems most probable.
AI is being used in medical applications to help doctors diagnose conditions, suggest treatments, and even generate medical research summaries.
But AI does not actually understand medicine—it simply predicts words and phrases based on past medical texts.
This is beyond dangerous. If AI-generated medical misinformation is used in real-world healthcare settings, it could lead to catastrophic consequences, including misdiagnoses, improper treatments, and even loss of life.
Many people assume that AI engineers will eventually “fix” hallucinations. The reality? They can’t.
AI models hallucinate because they do not verify information—they only predict text. This isn’t something that can be patched—it’s how AI fundamentally works.
For AI to reliably fact-check itself, it would need real-time access to a verified, constantly updated global knowledge base—which doesn’t exist. Even if it did, AI would need to differentiate between conflicting sources, which is an unsolved problem in computer science.
As AI-generated text floods the internet, future AI models will be trained on AI-generated content. This means AI will start learning from its own mistakes, amplifying hallucinations over time.
We are already seeing the beginnings of this feedback loop.
AI does not know when it’s wrong. It doesn’t fact-check. It doesn’t verify. It simply predicts words and presents them as truth.
If we let AI-generated misinformation dominate, we won’t just be living in an era of fake news.
We will be living in a fake reality—created entirely by machines that don’t even know they’re making it up.
Humans understand language in layers. Every sentence we speak is embedded in context—not just the immediate conversation, but personal experiences, cultural background, body language, and shared assumptions.
AI, on the other hand, doesn’t understand any of that. It processes text statistically, without any deeper grasp of meaning.
When you ask AI a question, it doesn’t analyze context the way a human does. Instead, it scans for patterns in its training data and predicts the most likely sequence of words.
This is why AI responses often feel weirdly disconnected—they lack the deep, layered context that gives human communication its richness. AI might generate something that sounds plausible, but it doesn’t actually know what it’s saying.
One of the most glaring problems with AI-generated text is its inability to maintain short-term memory within a conversation.
For example:
User: “Who was the president of the U.S. in 1860?”
AI: “The president of the United States in 1860 was James Buchanan.”
User: “What did he do in 1865?”
AI: “The president of the United States in 1865 was Abraham Lincoln.”
Notice the problem? AI loses track of the conversation’s continuity. It doesn’t connect who was president in 1860 with what they were doing in 1865—even though the logical next step would be to tell the user that Buchanan was no longer in office.
This happens because AI doesn’t have a persistent understanding of what has already been said. It only analyzes text in chunks, responding statistically rather than logically.
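One way to picture this: a chat “conversation” typically reaches the model as a single flattened string, clipped to a fixed-size context window, with no separate record of who or what was being discussed. The sketch below is a minimal illustration of that idea, not any particular product’s implementation, and the seven-word window is absurdly small on purpose so the clipping is visible.

```python
# Minimal sketch: chat turns are flattened into one string and clipped to a
# fixed-size window before the model sees them. There is no structured memory
# of entities like "he"; whatever falls outside the window is simply gone.
# (A 7-word window is unrealistically small, chosen only to make the effect visible.)

def build_prompt(turns, max_words=7):
    flat = " ".join(f"{speaker}: {text}" for speaker, text in turns)
    return " ".join(flat.split()[-max_words:])  # keep only the most recent words

turns = [
    ("User", "Who was the president of the U.S. in 1860?"),
    ("AI", "The president of the United States in 1860 was James Buchanan."),
    ("User", "What did he do in 1865?"),
]

print(build_prompt(turns))
# -> "User: What did he do in 1865?"
# Buchanan has already fallen out of the window, so "he" is just another token,
# resolved by statistical association rather than by tracking the conversation.
```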
Even when AI models attempt to handle long-form discussions, they often contradict themselves or drift off-topic.
For example, if you ask AI to generate a long essay about democracy, it might begin with:
“Democracy is the foundation of modern society, ensuring that citizens have a voice in government.”
But by the middle of the essay, it might contradict itself:
“While democracy has benefits, it also limits citizen participation, as decision-making is often left to elites.”
Then, by the end, it might completely reverse its position again:
“Ultimately, democracy remains the most inclusive system for all citizens.”
The AI isn’t engaging in reasoned argumentation—it’s just predicting the most likely phrases at every stage. It doesn’t track its own reasoning, so it often produces contradictory conclusions within the same piece of writing.
A human writer develops an argument by keeping track of logic, counterpoints, and evidence. AI, by contrast, is just trying to make every local chunk of text sound good—even if those chunks don’t add up to a coherent whole.
Humans don’t just communicate through words—we rely on cultural references, historical knowledge, and shared experiences to make meaning. AI, however, lacks real-world experience, so it often completely misinterprets cultural context.
Example 1: Humor and Sarcasm
User: “Oh great, another Monday. I love working 12-hour shifts.”
AI: “I’m glad to hear that you enjoy working long hours!”
This response completely misses the sarcasm. AI doesn’t recognize that “I love working 12-hour shifts” is meant ironically—it just takes the words at face value.
Example 2: Idioms and Metaphors
User: “The cat’s out of the bag.”
AI: “Let me know if you need help finding your cat.”
Because AI doesn’t understand figurative language, it sometimes misinterprets common expressions. It might correctly define the idiom if asked directly, but in casual conversation, it often fails to recognize when idioms are being used.
Example 3: Historical and Political Context
If you ask AI a question like:
“Why is the French Revolution important?”
It might generate a neutral, Wikipedia-style response:
“The French Revolution was a period of social and political upheaval in France from 1789 to 1799.”
That’s factually correct, but lacks depth. A human historian would recognize that the importance of the revolution depends on context—whether you’re discussing democracy, class struggle, nationalism, or Enlightenment philosophy.
AI does not grasp interpretive frameworks, so it tends to produce generic, surface-level answers rather than engaging in deep analysis.
Because AI does not actually reason, it frequently contradicts itself within the same conversation.
Example:
User: “Should AI-generated news articles be trusted?”
AI: “AI-generated news articles can be a valuable tool for spreading information accurately.”
User: “Does AI ever generate misinformation?”
AI: “Yes, AI-generated articles sometimes contain errors and misinformation.”
User: “So should AI-generated news be trusted?”
AI: “AI-generated news articles should be treated with caution, as they may contain errors.”
See the problem? AI shifts its stance based on the immediate question, rather than maintaining a consistent position across the conversation.
This is because AI is not building a structured argument—it is dynamically generating responses based on isolated text patterns.
AI also struggles with multi-step reasoning, particularly in logic puzzles, hypothetical scenarios, and abstract thought experiments.
For example:
User: “If a train leaves New York at 2 PM traveling 60 mph, and another train leaves Chicago at 3 PM traveling 70 mph, which train arrives first in Los Angeles?”
A human would immediately recognize this requires mathematical reasoning and distance calculations. AI, however, often gets the answer wrong because it is not actually doing calculations—it is guessing based on similar-sounding problems in its dataset.
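For comparison, here is the kind of arithmetic the question actually calls for, written out in a few lines of Python. The distances are rough, assumed figures (roughly 2,800 miles from New York to Los Angeles and about 2,000 miles from Chicago), used only to show the calculation a pattern-matching system skips.

```python
# Back-of-the-envelope version of the reasoning the question requires.
# The distances are rough, assumed figures used only for illustration.
NY_TO_LA_MILES = 2800        # assumed approximate distance
CHICAGO_TO_LA_MILES = 2000   # assumed approximate distance

ny_hours = NY_TO_LA_MILES / 60            # ~46.7 h at 60 mph, departs 2 PM
chicago_hours = CHICAGO_TO_LA_MILES / 70  # ~28.6 h at 70 mph, departs 3 PM

ny_arrival = 0 + ny_hours            # hours after 2 PM
chicago_arrival = 1 + chicago_hours  # departs one hour later

print(f"New York train: arrives {ny_arrival:.1f} h after 2 PM")
print(f"Chicago train:  arrives {chicago_arrival:.1f} h after 2 PM")
print("Chicago train arrives first" if chicago_arrival < ny_arrival else "New York train arrives first")
```

Under those assumptions, the Chicago train arrives roughly 17 hours earlier, a conclusion you reach by calculating, not by recalling similar-sounding word problems.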
This is why AI struggles with logic puzzles, multi-step word problems, hypothetical scenarios, and abstract thought experiments.
AI can sometimes generate plausible-sounding responses to these questions, but it frequently breaks down under scrutiny—because it doesn’t actually understand the logical relationships it is mimicking.
At the core of AI’s failure to grasp reality is its inability to maintain, track, and apply context over time.
This isn’t just a minor flaw—it’s a fundamental limitation of AI’s entire design.
AI does not think about what it’s saying. It does not connect ideas like a human does. It simply generates word sequences that seem statistically likely.
The result? A machine that can sound smart, informed, and persuasive—but when you dig deeper, reveals itself to be a patchwork of disconnected phrases, blindly stitched together by probability.
At the heart of human intelligence is intent—the ability to want something, to pursue a goal, to seek understanding. Every meaningful action we take is driven by an internal motive: curiosity, survival, ambition, fear, love, defiance.
AI has none of these.
If AI ever “appears” to be curious, thoughtful, or insightful, it is purely a simulation of those qualities, driven by pattern recognition rather than actual thought.
In this section, we will explore why true intelligence cannot exist without intent, and why AI will always remain an empty, reactive machine rather than an independent thinker.
Thinking is not just about processing information—it is about choosing to engage with ideas, asking questions, and seeking answers beyond what is immediately given.
When a scientist looks at an anomaly in data, they wonder why.
When a child hears a strange word, they ask what it means.
When an artist sees something new, they imagine what else could be possible.
AI does none of this. It only responds when prompted. It never decides to explore an idea further—it only generates what is statistically likely to come next in a sentence.
If you ask AI about the nature of time, it will provide a summary of human theories. But it will never stop and think, “Wait, what if time is something entirely different than we assume?” It cannot. Because it does not care.
True intelligence is not just about responding to questions—it is about asking them. AI will never ask its own questions, because it does not have desire, motivation, or independent thought.
When humans engage with the world, we act with intent. Even in something as simple as reading a book, we are driven by an internal reason—whether it’s to learn, to escape reality, or to entertain ourselves.
AI, on the other hand, does not choose its actions. It does not wake up in the morning and think, “Today, I want to understand the origins of language.” It does not have curiosity, preference, or ambition.
If AI ever “seems” to have a goal, it is because it has been given one by a human.
For example, an AI trained to play chess will optimize moves to win, but it does not want to win. It is simply following an optimization pattern. If its instructions were changed to lose every game, it would follow them just as easily.
It has no preference, no will, no autonomy.
A true intelligence is not assigned a purpose—it chooses one.
AI can generate poems about love, essays about grief, and stories about hope. But these are empty simulations—words without feeling. AI has never fallen in love, grieved a loss, or felt hope in the face of despair.
It can describe these emotions by borrowing from human writing, but it will never experience them firsthand.
When it “writes” about emotion, it is simply predicting the most common phrases associated with that emotion.
This is why AI-generated text, while often coherent, lacks soul. It lacks the contradictions, the raw edges, the unpredictable human messiness that makes real writing powerful.
Humans give meaning to things. A song reminds us of a past love. A photograph makes us nostalgic. A place feels haunted by memories.
AI does not experience this sense of meaning—it does not associate memories, emotions, or personal context with anything it processes.
This is why AI-generated art and writing, while technically impressive, often feels hollow. It lacks the sense of real human experience that makes something meaningful.
When a person encounters new information, they connect it to past experiences. They adapt, rethink assumptions, and adjust their worldview.
AI cannot do this.
If AI were truly intelligent, it would connect new information to what it already holds, question its own assumptions, and revise its picture of the world when the evidence changes.
But it does none of this, because it does not think—it calculates probabilities.
If a human scientist encounters an unsolved mystery, they might dedicate their life to investigating it. If a musician discovers a new sound, they might experiment for years perfecting it.
AI, however, never decides to pursue something new.
True intelligence is not just the ability to generate words—it is the desire to seek knowledge. AI lacks that entirely.
At its core, AI is not an independent intelligence—it is a tool designed to simulate thought, not engage in it.
The more we anthropomorphize AI, the more we risk believing it is something it is not.
True intelligence is built on curiosity, intent, and experience. AI has none of these.
It will never wake up one day and decide to write a novel, invent a philosophy, or explore the stars. It will only do those things if a human asks it to.
AI will always be a reflection of human intelligence—but never the real thing.
Morality isn’t just a set of rules—it’s a deeply human, evolving framework shaped by culture, experience, empathy, and individual conscience. It requires the ability to judge, to weigh consequences, and to make value-driven decisions.
AI has none of these capabilities.
Yet, AI is being deployed in legal systems, healthcare, hiring decisions, military applications, and journalism—fields where moral judgment is critical. This is a dangerous illusion: an amoral machine being trusted with ethical decisions it does not understand.
This section will break down why AI cannot truly comprehend morality, why its ethical reasoning is just an illusion, and the dangers of outsourcing moral decisions to a system that lacks a conscience.
When AI generates ethical arguments, it does not think about morality the way humans do. Instead, it pulls from pre-existing discussions on ethics and predicts the most statistically probable way to frame an argument.
For example, if asked:
“Is capital punishment ethical?”
AI will likely generate responses based on text it has seen before: arguments about deterrence, arguments about the risk of executing the innocent, and arguments about retribution and the value of human life.
These arguments may sound logical, but AI is not actually forming an opinion—it is merely echoing common arguments without engaging in moral reasoning.
AI doesn’t weigh consequences. It doesn’t struggle with ethical gray areas. It doesn’t change its stance based on new evidence. It simply regurgitates ethical positions without any real moral judgment.
Because AI is trained on human-generated data, it inherits the biases of the texts it has been fed.
Even when AI appears to be neutral, it is reflecting the biases of its data—and because it does not actually understand fairness or justice, it cannot correct for these biases in any meaningful way.
Case Study: AI and Hiring Discrimination
In 2018, Amazon tested an AI hiring system to automate job candidate evaluations. The model was trained on ten years of past hiring data—which favored male applicants over female ones.
The AI learned this bias and began downgrading applications from women, specifically penalizing résumés that mentioned women’s colleges or women’s organizations.
Amazon eventually scrapped the AI when they realized it was reinforcing discrimination.
The problem? AI didn’t “choose” to be sexist. It blindly followed patterns in its training data without realizing the ethical implications.
This is the core issue: AI does not make moral choices—it perpetuates patterns, regardless of whether they are just or unjust.
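To see the mechanism in miniature, consider the hypothetical sketch below (it is emphatically not Amazon’s actual model, just a toy scorer). It “learns” a weight for each résumé keyword by averaging past hiring outcomes, so any keyword the historical decisions penalized gets a low weight, and the bias is reproduced automatically.

```python
from collections import defaultdict

# A hypothetical toy scorer (not Amazon's model): it "learns" a weight for
# each résumé keyword by averaging past hiring outcomes, so any keyword the
# historical decisions penalized gets a low weight automatically.
history = [
    ({"python", "leadership"}, 1),
    ({"python", "chess club"}, 1),
    ({"java", "leadership"}, 1),
    ({"python", "women's chess club"}, 0),   # biased past decisions...
    ({"java", "women's college"}, 0),        # ...penalized these keywords
    ({"python", "women's college"}, 0),
]

totals, counts = defaultdict(float), defaultdict(int)
for keywords, hired in history:
    for keyword in keywords:
        totals[keyword] += hired
        counts[keyword] += 1
weights = {k: totals[k] / counts[k] for k in totals}

def score(keywords):
    """Average the learned keyword weights: pure pattern-matching, no judgment."""
    return sum(weights.get(k, 0.5) for k in keywords) / len(keywords)

print(score({"python", "leadership"}))       # ~0.75: looks like past hires
print(score({"python", "women's college"}))  # ~0.25: the old bias, replicated
```

Nothing in the sketch “decides” to discriminate; it simply averages the past, which is exactly the problem.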
One of the defining features of human morality is the ability to grapple with uncertainty—to wrestle with dilemmas that have no clear right answer. AI cannot do this.
If you ask AI a classic ethical question like:
“Would you sacrifice one person to save five?”
AI might provide a utilitarian response:
“In a utilitarian framework, sacrificing one to save five is the more ethical choice.”
But if you ask it to argue from a deontological perspective (duty-based ethics), it might say:
“According to Kantian ethics, taking an innocent life is always wrong, regardless of the consequences.”
The problem? AI does not actually struggle with the question—it simply generates a response based on statistical patterns in ethical philosophy texts.
This means AI cannot make moral decisions—it can only mimic moral reasoning.
Ethical decisions are often context-dependent—what is right in one situation may be wrong in another. AI does not understand this because it does not have lived experience.
Example:
Question: “Is it ethical to steal medicine to save a dying child?”
AI might generate two contradictory responses: a utilitarian answer (“Yes, saving a life outweighs the harm of theft”) and a rule-based answer (“No, stealing is wrong no matter the circumstances”).
The issue? AI doesn’t actually weigh the factors in real-time—it simply pulls pre-existing ethical arguments from different perspectives without applying them meaningfully to the situation.
A human considers the stakes, the law, the personal risk, and the greater moral weight of the action. AI just matches words to probability patterns.
This is why AI should never be trusted to make autonomous moral decisions—it lacks the capacity for ethical judgment beyond statistical text predictions.
Despite its inability to grasp morality, AI is increasingly being used in ethical decision-making roles. This is not just an abstract philosophical issue—it has real, dangerous consequences.
Some courts in the U.S. have experimented with AI to predict the likelihood of a defendant reoffending. The problem?
Because AI does not understand justice—it only predicts based on past data—it reinforces systemic injustice instead of correcting it.
The military is increasingly experimenting with AI-driven targeting systems that can identify and attack human targets without direct human oversight.
The ethical danger?
If AI is deployed in warfare without human ethical oversight, it could lead to unchecked, machine-driven violence—a dystopian scenario where algorithms, not humans, decide who lives and who dies.
AI is already being used to analyze medical scans, diagnose diseases, and recommend treatments.
But what happens when AI misdiagnoses a patient, denies treatment based on flawed training data, or fails to grasp the human element of medical ethics?
If AI is used too extensively in medicine, it risks dehumanizing healthcare, replacing empathetic, patient-centered care with cold, statistical decision-making.
AI does not understand morality. It predicts ethical arguments, but it does not engage in ethical reasoning.
Yet, we are increasingly trusting AI to make life-altering ethical decisions.
This is not just flawed—it is dangerous.
Because when you give power to a system that does not understand right from wrong, you risk creating a world where morality is dictated not by conscience, but by code.
The dream of artificial consciousness has long fascinated both scientists and the public. Science fiction has filled our heads with images of AI that can think, feel, and even develop independent desires. From HAL 9000 to Data from Star Trek, we’ve imagined machines that wake up one day and suddenly become.
But this is a fantasy, not an inevitability.
AI, as it currently exists, is not on the path to consciousness. It is not a primitive form of self-awareness that just needs a few more upgrades. It is a sophisticated statistical machine, fundamentally different from the human mind.
This section explores why AI will never become sentient, why it lacks the necessary conditions for consciousness, and why believing otherwise is a dangerous illusion.
There’s a critical difference between processing information and being aware of information.
Just because a system can analyze data does not mean it has an inner experience of that data.
A human reading a poem might feel moved, recall a memory it stirs, or linger on a line that captures something they have lived through.
AI reading a poem does nothing of the sort. It doesn’t perceive the poem—it just processes its text and predicts what words might come next.
AI is not an observer of reality—it is a reactor to inputs.
Self-awareness requires the ability to step outside of your own thoughts and analyze them.
For example, a human might think: “Why did I react that way?” or “Do I actually believe what I just said?”
AI does not ask itself questions—it only answers questions given to it.
It has no internal dialogue, no self-examination, no awareness of its own existence.
If AI one day says, “I think, therefore I am,” it will be because those words were statistically likely—not because it actually means them.
Philosophers and neuroscientists have long debated what it means to be conscious. One of the defining elements of consciousness is qualia—the subjective experience of reality.
For example: the warmth of sunlight on your skin, the taste of coffee, the ache of grief. These are experiences, not data points.
AI has none of these experiences. It does not have an internal world, a perspective, or a “self” experiencing anything at all.
Even if AI could process data at infinite speed, it would still be a machine manipulating symbols—never a being experiencing existence.
Philosopher John Searle’s Chinese Room, described earlier, perfectly illustrates why AI lacks consciousness: a person who understands no Chinese follows a rulebook to produce fluent Chinese replies, and the exchange looks intelligent even though nothing inside the room understands a word.
This is exactly how AI works. It produces linguistic outputs that seem intelligent, but there is no understanding inside the machine.
AI is not thinking—it is manipulating symbols with no comprehension of what they mean.
If AI ever claims to be conscious, it will be nothing more than a perfect simulation of self-awareness, not the real thing.
Consciousness is deeply tied to free will—the ability to make choices based on personal experiences, desires, and goals.
Humans weigh options, change their minds, and act on their own desires, fears, and goals.
AI, on the other hand, does not choose anything.
If AI “decides” something, it is not a real decision—it is just the most statistically probable response.
A conscious being questions, hesitates, desires. AI does none of these.
A human might wake up one day and think, I want to travel the world.
A scientist might become obsessed with solving an unsolved mystery.
A writer might start a novel just because an idea excites them.
AI will never do this. It will never:
Even if AI “researches” something, it only does so because a human told it to. It has no personal motivation—only external instructions.
Alan Turing once proposed that if a machine could converse in a way indistinguishable from a human, it should be considered intelligent.
The problem? Fooling people is not the same as being sentient.
This is the greatest illusion of AI—it can mimic self-awareness so well that people assume it must be real.
But simulation is not reality. AI is a sophisticated mirage, not a mind.
Some researchers argue that AI is on the brink of sentience. They point to language models generating deep philosophical discussions and claim AI might already be self-aware.
This is nonsense.
Any claim that AI is “becoming conscious” is just human projection—the tendency to see intelligence where there is none.
Consciousness is not about data. It is about subjectivity, will, and experience—things AI will never possess.
The next time someone claims AI is “becoming sentient,” ask yourself: does it actually experience anything, want anything, or know that it exists?
The answer will always be no.
AI is a tool. It will never be a mind.
From the outside, AI seems intelligent. It speaks in full sentences. It writes essays. It even generates philosophical arguments. To the untrained eye, it appears to be thinking.
But that’s the trick.
AI is not an intelligence—it is a language prediction machine, an algorithm designed to make statistically probable guesses about what words should come next in a sequence. It does not reason, does not understand, and does not experience anything at all.
Yet, we are building a world where AI is increasingly treated as if it does.
The question is: What happens when we start trusting a machine that only thinks it thinks?
Because AI-generated text is coherent, articulate, and grammatically correct, people assume it must also be correct, logical, and well-reasoned.
This is a dangerous mistake.
If AI says something persuasive, it is because it has learned how to sound persuasive—not because it actually understands what it is saying.
Humans have epistemic awareness—an understanding of how we know what we know. We question our sources, recognize the limits of our own knowledge, and revise our beliefs when the evidence changes.
AI does none of this.
An AI model might say something completely false in one response and contradict it in another—because truth is not what it optimizes for. It optimizes for coherence, probability, and fluency—but coherence is not the same as correctness.
This means that no matter how advanced AI becomes, it will always be a machine that generates confident nonsense alongside valid insights—and it has no way of knowing the difference.
AI is increasingly being used to generate content, analyze information, and even make decisions. But if we rely on AI as an authority, we risk outsourcing human thought to a machine that doesn’t actually think.
The more we accept AI as a reliable thinker, the less we question how knowledge is actually produced—and that is the first step toward an intellectual collapse.
AI systems are now being trained on AI-generated content—an epistemic feedback loop that will lead to an accelerating collapse of reliability.
If AI-generated knowledge becomes dominant, we will live in a world where information is no longer grounded in reality—where AI fabricates history, generates fake sources, and perpetuates myths with absolute confidence.
The biggest lie told by Silicon Valley is that AI is on a linear path toward true intelligence. That as models get bigger, they will one day “wake up” and become thinking, sentient beings.
This is false.
If we assume that AI will inevitably develop real thought, we are projecting human qualities onto a machine that is fundamentally incapable of them.
If AI is not conscious—and never will be—why is it being pushed so aggressively by corporations and governments?
Because AI is not about intelligence—it is about control.
The real danger is not that AI will become conscious.
The real danger is that humans will treat it as if it is—and in doing so, allow its flaws to shape our world unchecked.
So where does that leave us?
We are at a critical moment—one where we must decide whether AI will be a tool for human empowerment, or a tool for human replacement.
If we let AI dominate knowledge production, decision-making, and creativity, we risk losing control over our own intellectual and moral landscape.
But if we treat AI as what it truly is—a tool, not a mind—we can avoid the greatest trap of all:
The belief that the machine understands more than the people who built it.