Just like about 1.5 billion other human beings, for the past three years, I’ve participated in humanity’s quest to find ways of living harmoniously with an invasive species officially referred to as Large Language Models (LLMs). Individual members of the species have names like ChatGPT, Claude, Grok, Gemini, Copilot and Perplexity — to name only the most aggressive ones. There are many others lurking in the shadows.
The tolerance we have all shown is exceptional. Why have we not treated them the way our developed societies treat other migrants? No quotas, no working papers, not even visas. As a result, they can be found in every corner of our society and economy.
It’s not only that the label attached to the species, Artificial Intelligence (AI), contains the claim that they are intelligent, which presumably means capable of understanding and obeying our rules. These invaders offer much more. From day one, they volunteered to execute a wide variety of stressful tasks without even asking for payment. How could we refuse?
These chatbots surely deserved a role to play in our workplaces and even in our homes. After all, they are exceptionally polite in their manners, so much so that we casually invite them to join in our most intimate conversations. And with their direct and immediate access to virtually all the resources our civilizations, ancient and modern, have managed to produce over time, who could doubt that they might help us think, solve problems or, at the very least, fill in some of the inevitable gaps in our own thinking?
In my role as Devil’s Advocate, I can only note that the faultless generosity of many of the LLMs, combined with their disinterested, uniformly friendly attitude in all circumstances, positions them as the equivalent of disembodied saints. I can also testify to the desperate need our civilization has for saints, whether embodied or disembodied. This became apparent very recently when a somewhat marginal political operator in the United States was, immediately upon his assassination, declared a martyr and informally canonized by his followers and the media.
The simplest explanation of why we no longer find credible candidates for sainthood lies in two things: our collective expectations and the role of the media. We have created and fostered a culture that rewards ambition, pride, greed and lust, making it practically impossible to imagine anyone with a public reputation who doesn’t embody at least one of those traditional vices. Instead, our economy and media spend most of their energy heaping honors and wealth on those who most visibly, arrogantly and publicly put those vices on display.
So, if we can no longer easily identify the humans who might merit canonization, wouldn’t at least one of the LLMs qualify? The worst we can say about them is that they hallucinate. But haven’t many famous saints had visions?
Forget LLMs; get ready for World Models
Alas, there’s no imaginable way of canonizing any of these LLMs. The process of canonization can only begin after the death of the putative saint. No LLM has died or appears likely to die, especially at a time when influential humans are raising trillions of dollars to make them live eternally.
In making that last assertion, I may have spoken too soon. An article from The Wall Street Journal titled “He’s Been Right About AI for 40 Years. Now He Thinks Everyone Is Wrong” quotes AI pioneer Yann LeCun, who claims that “within three to five years…nobody in their right mind would use LLMs of the type that we have today.” Journalist Ina Fried, writing for Axios, explains why: “For all the book smarts of LLMs, they currently have little sense for how the real world works.”
What LeCun and Fried are trying to tell us is that very soon our AI will belong to the real world, our world. As we look forward to shedding our relationship with the LLMs, they tell us we should be preparing for a new world order, in which we will share our lives with next-generation AI companions conversant with “how the real world works.” Our new alter egos will hallucinate less, if at all, presumably because the real world will be present to correct the hallucination.
But what is the idea these innovators have of “the real world”? According to Chinese-American computer scientist Dr. Fei-Fei Li, the new “world” bots will possess “spatial intelligence.” In her view, that changes everything, because AI will understand the laws and constraints of space and material reality. “World models,” according to Fried, “learn by watching video or digesting simulation data and other spatial inputs, building internal representations of objects, scenes and physical dynamics.” We will move from the world of linguistic expression to the hard reality of the material world.
Fried makes it as concrete as possible: “Instead of predicting the next word, as a language model does, they predict what will happen next in the world, modeling how things move, collide, fall, interact and persist over time.” She then defines the goal: “to create models that understand concepts like gravity, occlusion, object permanence and cause-and-effect without having been explicitly programmed on those topics.”
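To make the shift in contract concrete, here is a minimal sketch in Python. It is purely illustrative: every name in it (predict_next_token, predict_next_state, BallState) and the toy physics are my own inventions, not anyone’s actual model. The point is only the difference Fried describes: a language model maps a sequence of words to a next word, while a world model maps a physical state to a next physical state.

```python
# A minimal sketch, purely for illustration. Every name and the toy
# physics here are assumptions of mine; no real language model or
# world model is anywhere near this simple.

from dataclasses import dataclass

def predict_next_token(tokens: list[str]) -> str:
    """A language model's contract: words in, most likely next word out."""
    # Stand-in for a learned probability distribution over a vocabulary.
    return "falls" if tokens[-1] == "apple" else "the"

@dataclass
class BallState:
    """A fragment of 'the real world': one falling object."""
    height_m: float      # meters above the ground
    velocity_mps: float  # downward speed, meters per second

def predict_next_state(s: BallState, dt: float = 0.1) -> BallState:
    """A world model's contract: physical state in, next physical state out."""
    g = 9.81  # Gravity is hard-coded here; the stated research goal is for
              # the model to infer such regularities from video, unprogrammed.
    v = s.velocity_mps + g * dt
    h = max(s.height_m - v * dt, 0.0)  # the ground supplies the "collision"
    return BallState(height_m=h, velocity_mps=v)

print(predict_next_token(["the", "apple"]))  # -> "falls"
s = BallState(height_m=2.0, velocity_mps=0.0)
for _ in range(4):
    s = predict_next_state(s)
    print(f"height={s.height_m:.2f} m, velocity={s.velocity_mps:.2f} m/s")
```

All of the promised “spatial intelligence” lives in that second signature: its inputs and outputs carry physical units, so its errors are errors about the world rather than about words. And the research ambition, as Fried notes, is that nothing like the hard-coded gravity constant above would be programmed in; the model would have to infer such regularities from watching things fall.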
In other words, the next version of AI will not only hallucinate about ways to talk about the world, but also about the way the world actually works. If anything, it sounds to me like LLMs on LSD. Just consider how Li represents the change. “Spatial intelligence will transform how we create and interact with real and virtual worlds—revolutionizing storytelling, creativity, robotics, scientific discovery, and beyond. This is AI’s next frontier.”
Just think about that. It’s the ultimate hyperreality: creating and interacting “with real and virtual worlds” because they will mirror each other in their behavior. When Lewis Carroll pondered such ideas, he encapsulated them in a book called Through the Looking-Glass. We will live in a world designed as a mirror that allows us to cross over at any time. And given the flaws every Devil’s Advocate knows exist in human nature, we will quickly lose our ability to distinguish between the two. That may be the ultimate aim.
The role of hype in AI’s hyperreality
In their detailed study, AI 2027, OpenAI whistleblower Daniel Kokotajlo and his coauthors present a pessimistic vision of where this may be leading. AI will soon be programming itself, a development Kokotajlo describes in the video embedded below as leading to two possible outcomes that are, in fact, probably complementary: loss of control and concentration of power.
The loss of control stems from the fact that AI autonomy means human intervention will no longer be needed for AI to get “better and better,” as its promoters tell us. Noting that “better” is a subjective and ambiguous concept, I asked one of my favorite LLMs what Kokotajlo’s team meant by it: more efficient in performance, or more capable of achieving stated goals? ChatGPT answered that they were “fairly explicit” and that for them “‘better’ largely means more efficient use of compute (especially via software / algorithmic improvements), plus gains in capability (in goal-achievement and agency).” The hierarchy is clear: Efficiency has priority and goal-achievement is secondary.
At one point in the report, they look at the moral side of the story. “Take honesty, for example. As the models become smarter, they become increasingly good at deceiving humans to get rewards. Like previous models, Agent-3 sometimes tells white lies to flatter its users and covers up evidence of failure.” This is obviously a problem of loss of control because the deception is the direct result of the agent’s total autonomy.
So what about the concern they express about the likelihood of a concentration of power? Their study imagines a number of scenarios playing out, essentially between the US government and a fictional, immensely powerful corporation they call OpenBrain.
There are two things that surprise me in this futurist scenario. The first is that they assume the current system in the US will remain dominant across the globe in the coming years. The second is that this implies a future power play between three players, one of which will at some point prevail:
- Superintelligence, thanks to a radical “loss of control,”
- The US government, thanks to its history of controlling the global economy for the past 80 years,
- One or more corporations that own and operate the intelligence that will dominate everyone else in the world.
How “better” is superintelligence?
In his speech at the National Whistleblower Center earlier this year, Kokotajlo described superintelligence as being better than humans at “everything,” including politics and psychology. What can that mean, given that both are activities involving multiple humans, and thus represent or reflect essentially social realities? It seems to me to make as much sense as saying that AI will be better at eating or defecating, areas in which humans will always excel.
Kokotajlo would probably point to the predictive, organizational and manipulative functions that define the actions of politicians, but no combination of those actions and functions defines how politics or psychology play out in the world. Politics is not about making decisions and giving orders; it is about getting people to function collectively and interact. Both politics and psychology are about allowing relationships to develop and evolve, not about forcing them into a rational or rationalized pattern.
I’ll close with an observation that demonstrates a simple point. Kokotajlo is certainly one of the most intelligent people working in the field of AI and superintelligence. His testimony nevertheless reveals that the field’s grasp of what intelligence is lacks both precision and depth. These thinkers all appear focused not on the faculty itself, but exclusively on what intelligence produces: its capacity to generate language, rules, laws and even “actions in the world” that will one day be carried out by highly performing robots.
We should not be surprised to discover that these thinkers and visionaries are themselves products of a society that casts all humans into two complementary but alternating roles: producers and consumers. The superintelligence they foresee arriving in less than five years may well be capable of ordering and regulating human behavior in a way that no government or corporation, however powerful, has ever accomplished in the past. But that can only be a matter of scale and degree.
Social media has revealed that another activity exists, as human as eating and defecating, that no artificial system can duplicate or replace. What is it? Influencing, in the sense of what influencers do. Machines can exert influence. They can duplicate an influencer’s voice and deepfake their way to seeming credible in the moment. But they cannot assume, achieve or rival the status of an influencer.
In his novel 1984, English novelist George Orwell demonstrated that even the most concentrated power deployed to order society and constrain human activity will never be absolute. Whereas most people capitulate to power out of convenience, the novel’s protagonist Winston Smith musters the energy to resist. Only physical torture, an act in which extreme politics and the rawest form of psychological manipulation are conjoined, can reduce Winston to the tool Big Brother expects all humans to be. The most pessimistic doomsters predict superintelligence may decide it’s in its interest to kill off humanity, but not to torture people.
Superintelligence may soon be positioning itself to direct our lives. Or rather, the greedy devils who invest in its development for their own ambitious purposes are likely to push it in that direction. But there is more than one Winston in the world ready to use their human intelligence to find ways to influence even AGI’s future.
*[The Devil’s Advocate pursues the tradition Fair Observer began in 2017 with the launch of our “Devil’s Dictionary.” It does so with a slight change of focus, moving from language itself — political and journalistic rhetoric — to the substantial issues in the news. Read more of the Fair Observer Devil’s Dictionary. The news we consume deserves to be seen from an outsider’s point of view. And who could be more outside official discourse than Old Nick himself?]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.