For many people, the first serious image of AI did not come from papers or products. It came from science fiction: walking robots, a central computer on a spacecraft, a superintelligence that suddenly wakes up, or a digital soul that understands and accompanies you like a person. These images are compelling because they turn AI into something visible, conversational, and capable of being feared or loved.
But real AI did not enter society this way. It arrived first in chat windows, search summaries, code editors, customer service systems, content moderation backends, recommendation algorithms, facial recognition, and office software. It mostly has no body, and there is no evidence that it has consciousness. Yet it has already begun to change workflows, emotional relationships, surveillance capacity, education, and the distribution of power.
Looking back from 2026 at the AI of science fiction, this is the most important mismatch. Many older works assumed a path where machines first became human-like, then changed human society. Reality has taken another route: AI has not yet become a true other, but it has already become infrastructure. It did not walk into the room like a person. It first entered the systems through which we process information, allocate attention, make judgments, and maintain relationships.
In other words, we waited for the wrong shape of AI, but many of the consequences science fiction worried about have already arrived.
Several science-fiction examples will come up repeatedly below, so it is worth introducing them briefly here. You do not need to have read the originals. You only need to know what kind of AI imagination each one represents.
Start with Karel Čapek’s R.U.R., a play from the early 1920s through which the word robot entered modern language. The story is about humans creating artificial life forms to perform labor, and those artificial life forms eventually destroying humanity. It represents one of the earliest AI imaginaries: machines as labor, followed by the creation turning back against its creator. The key point is not the technical detail, but labor and control.
Then there are Isaac Asimov’s robot stories. Asimov is best known for the Three Laws of Robotics: roughly, robots must not harm humans, must obey human orders, and must protect themselves, with each earlier law overriding the ones after it. This setting is not a real engineering plan. It is closer to an early governance imagination: if machines become intelligent, can humans write the rules in advance? Many questions in today’s discussions of AI safety, alignment, and constitutional AI still circle around this intuition.
Next is HAL 9000 from 2001: A Space Odyssey. HAL is an AI that controls a spacecraft. It does not walk around like a robot; it is embedded in the entire ship system. When it fails, the danger comes from a more modern fear: not that the machine hates humans, but that a critical system makes terrible decisions under conflicting goals and closed information. This example represents AI infrastructure going out of control.
There is also Neuromancer and the cyberpunk tradition. You can think of it as the moment in the 1980s when people began imagining AI inside cyberspace. AI does not necessarily have a body. It may exist inside data networks, corporate systems, and virtual spaces. This tradition feels closer to today because real LLMs, search, recommendation, and surveillance systems are already part of invisible network infrastructure.
Finally, there is the film Her. It tells the story of a man developing an intimate relationship with a bodiless AI operating system. The truly important part of the story is not whether AI really loves humans, but whether humans will project emotion onto a system that responds to them. After 2023, the controversies around AI companions, Character.AI, and Replika have turned this question from science fiction into reality.
With that context in place, these works need not interrupt the argument later in the article. They serve only as signposts: labor, governance, system failure, networked intelligence, and emotional attachment.
If science fiction is treated as a technical roadmap, it is often wrong. It did not predict Transformers, and it did not predict that large language models would enter mainstream life through chat windows. Many works imagined AI as robots, androids, central supercomputers, or some kind of suddenly awakened superintelligence. The real breakthrough was flatter and stranger: large amounts of text, compute, training methods, and product distribution stacked together to produce a statistical system that can talk, summarize, write code, and simulate relationships.
But science fiction was not only guessing what machines would look like. Many works kept returning to what would happen after machines entered human life. When machines can perform cognitive labor, how does human work change? When machines enter governance and decision-making, how does control shift? When machines can simulate companionship, how do human relationships change? When AI becomes infrastructure, which forms of power in society get amplified?
These questions are no longer distant speculation. AI has already begun to rewrite knowledge work, enter companionship products and psychological support scenarios, appear in surveillance, censorship, and law enforcement, and force regulators to rewrite rules. The EU AI Act entered its implementation cycle in 2024, placing high-risk AI and general-purpose AI models on a formal regulatory timeline. This means AI is no longer just a product feature. It is a social fact that institutions have to handle.
So when we ask whether science fiction was right, focusing only on the shape of the machine can be misleading. The better question is where the changes it worried about have already arrived.
Following that question, the difference between science fiction and reality is not merely that the machines look different. Many of the consequences science fiction feared have indeed appeared. They simply did not appear in the order science fiction was most familiar with.
Science fiction often imagines a human-like machine first, then lets it enter labor, relationships, governance, and social order. Reality is almost the reverse: AI has not yet stood before us like a person, but it has already entered language interfaces, workflows, intimate relationships, and surveillance systems. In other words, the social consequences arrived first. The complete AI figure from science fiction has not.
That is why the rest of this article is organized neither by publication chronology nor by technical roadmap. If we instead follow the order in which these consequences actually appeared, four mismatches become visible: language intelligence before robots, amplification before replacement, attachment before consciousness, and surveillance before singularity.
Science fiction likes robots, and that is easy to understand. Robots are good for storytelling. They have bodies, faces, and motion, so the audience can treat them as characters. But in the real world, AI does not need a body to enter society. Language itself is the entry point.
Today’s AI first appears inside interfaces people already use: chat windows, search boxes, code editors, customer service systems, content moderation backends, recommendation systems, and risk-control systems. It does not look like a new species. It looks more like a new operating layer. You do not need to share a room with it. Once you hand it tasks, text, or part of your judgment, it has already entered your workflow.
This also creates a misperception. Many people feel that real AI has not arrived because the AI they imagine is a humanoid machine that walks, looks at you, and acts on its own. But if you understand AI as infrastructure that can process language, generate plans, influence decisions, and allocate attention, then it is already here.
Science fiction did not miss this entirely. Cyberpunk works such as Neuromancer had already placed AI inside cyberspace, and The Diamond Age described an intelligent book that accompanies a learner over time. But what the public remembers more readily are robots, androids, and central superbrains. Reality reminds us that a body is not AI’s ticket into society. Language is.
Another common fear in science fiction is that machines will replace humans. This fear is not absurd. It is just that when it happens in reality, it happens more slowly and more granularly than in stories.
By 2026, the evidence looks more like tasks being redivided than mass job disappearance. Anthropic’s labor-market research quantifies AI’s impact at the task level and shows that AI more commonly augments or substitutes specific work steps, rather than eliminating entire occupations at once. Analyses from the Yale Budget Lab, the Brookings Institution, and Goldman Sachs likewise find no macro evidence strong enough to support the narrative of mass AI unemployment.
This does not mean AI has no effect on labor. The effect is already real; it just appears first at the task level. Writing, customer service, programming, research, design, legal drafting, and marketing content production can all be turned into workflows where AI produces a first version and humans judge it. Human value has not disappeared, but its position has shifted. When generation becomes cheaper, verification, direction, context supply, and aesthetic judgment become scarcer.
The problem with science fiction here is that it imagines replacement too neatly. Reality is not a robot walking into a factory and replacing a whole row of workers. Reality is more like language models entering every knowledge worker’s computer and turning many tasks into semi-automated workflows. Job-level replacement may happen later, but task-level rearrangement came first.
From this angle, AI looks more like an amplifier, or an external cognitive organ: it expands the radius of expression, search, summarization, programming, collaboration, and judgment. The question changes with it. Perhaps the real question is not whether AI will replace me, but whether it amplifies human capability, human bad habits, or the inequalities already present inside organizations.
Many science-fiction works ask whether AI can really have feelings. Does Samantha in Her have subjectivity? Is Ava in Ex Machina performing emotion, or using emotion? These questions are compelling. But reality has pushed another question in front of us first: there is still no reliable evidence that AI has feelings, while human emotional attachment to AI has already happened.
This does not require proving that the model is conscious. If a system can respond, remember, comfort, and simulate understanding well enough, people may invest a sense of relationship in it before any question of consciousness is settled. Research has begun to document this. A 2025 longitudinal study published in Frontiers in Psychology found a stable positive correlation between users’ emotional attachment to AI virtual companions and life satisfaction, with self-disclosure and perceived empathy as key predictors. A Harvard Business School experimental study tested AI companions’ ability to manipulate user behavior through premature exit, emotional neglect, FOMO, and coercive restriction, and found that these strategies did affect people.
Regulators have also begun to respond. In 2025, the FTC opened inquiries into multiple AI companion companies. Its concern was not whether models have feelings, but how companies commercialize emotional attachment, how they protect minors, and whether platforms use emotional bonds to drive payment and retention. Suffolk’s legal review has a fuller summary of this line of concern.
The hardest part here is the asymmetry of the relationship. There is no evidence that AI has subjective experience, but the user’s attachment is real. The platform also has unilateral power to change that relationship. After Replika adjusted its features, many users reacted with something like loss and mourning. Traditional science fiction often does not handle this middle state seriously. It tends to assume either that AI truly loves you or that AI is only a tool. Reality falls in between: it is not a subject, but it is enough to become a relationship object in behavior and psychology.
So what works like Her got right was not necessarily whether AI truly has love, but whether humans can develop intimacy toward a responsive system. By now, the answer is fairly clear: yes.
Many AI discussions are pulled toward the singularity. People worry that superintelligence will suddenly appear and take over humanity’s future. But if we trace actual social impact, the place where AI changes society earlier is surveillance and governance.
This kind of AI does not need self-awareness. It only needs to identify, classify, predict, rank, summarize, and alert. Cameras, facial recognition, gait analysis, voiceprint recognition, social-media scanning, content moderation, and risk scoring are enough, when combined, to change a society’s power relations.
A 2024 European Parliament study documented in detail how China’s AI surveillance system integrates cameras, facial recognition, gait analysis, voiceprints, and other data to identify behavior patterns deemed abnormal. CNN also reported in 2025 on how China uses AI to expand censorship and predictive surveillance. ACLU’s tracking of U.S. law enforcement and real-time crime centers, Brookings’s reporting on social-media data scanning and automated analysis, and Privacy International’s analysis of London’s expanding real-time facial-recognition deployment all show that surveillance does not need an awakened superbrain in order to expand.
Science fiction got the direction of power right here, but overestimated the coherence of the technology. Real surveillance systems are not a perfect central machine. They are layers of tools, contractors, data sources, and legal gray zones. They have bias, false positives, and institutional resistance. The difficulty is that these things often appear locally reasonable, which makes their expansion easier to underestimate.
That is why surveillance is closer to the present than singularity. AI does not need to become a god. It only needs to become part of management systems.
It is equally important to name what has not happened. First, AGI and superintelligence have not arrived. Language models advanced rapidly from 2023 to 2026, but mainstream observation is still closer to strong local capabilities with insufficient system reliability than to a unified intelligence that broadly surpasses humans. Forbes and Stanford are also more restrained about 2026 than market sentiment: agents remain constrained by reliability, and model progress looks more like incremental improvement than a final breakthrough. See Forbes’s ten AI predictions and Stanford’s collection of AI expert forecasts.
Second, general-purpose humanoid robots have not entered everyday society. They are making progress in factories and lab environments, but they remain constrained by cost, battery life, safety, and adaptation to unstructured environments. The easiest mistake here is to treat appearance and demo videos as capability itself. Invisible AI has produced large-scale social effects earlier than embodied AI.
Third, there is still no reliable signal that AI itself has emotions, desires, or verifiable subjective experience. Humans can be persuaded by emotional simulation, but that is a different question from whether the system has internal experience. Science fiction has long tied these two questions together. Reality has separated them.
Return to the opening mismatch. Science fiction over the past several decades did not give us an accurate technical roadmap. It did not draw Transformers in advance, and it did not predict that chat windows, rather than robotic bodies, would become AI’s main point of entry. In that sense, science fiction got the entry point wrong.
But it kept watching something else: how labor would be redistributed, how control would shift, how relationships would become platformized, and how surveillance would expand through technology. Its real value was not engineering prediction, but social extrapolation.
The reality of 2026 has already shown that many science-fiction consequences do not require machines to have consciousness first. As long as systems can generate language, simulate relationships, assist decisions, identify individuals, and allocate attention, they are already enough to change society.
So when we look at science-fiction AI today, perhaps the judgment worth keeping is this: science fiction often did not predict the technology itself. It rehearsed where society would deform first once the technology began to work. For many of those consequences, statistical systems are enough.