AI’s Touching Humility vs. Its Creators’ Arrogant Hubris

Before evoking the overhyped utopian notion of “superintelligent AI,” we need to think more about how human intelligence works. We too often foolishly assume that the creators of AI tools — Sam Altman, I’m thinking of you — have a sufficient grasp of that question. On the other hand, there’s no reason to assume that the AI tool Altman’s team created cannot help us to sort out that question, through dialogue, of course.

In such a dialogue, we should always begin with the wisdom of humans who have spent precious time and intellectual energy on the question. British psychiatrist, neuroscientist and philosopher Iain McGilchrist is perhaps the most articulate scientist engaged in exploring the implications of the brain’s hemispheric differences for human culture and society. In a recent podcast, McGilchrist explained the curious relationship — I’m tempted to think of it as a social relationship — between those two contrasting sources of our brain’s intelligence. The right and left hemispheres are linked together by what we should think of as a third intelligence: the corpus callosum. McGilchrist insists on the importance of its role. I wondered whether this kind of insight might be applied to AI.

My opening prompt

“Iain McGilchrist sees the two hemispheres as complementary and interconnected by the corpus callosum, which acts as a kind of mediator. But it isn’t always successful, especially when the left hemisphere is allowed to play a dominant role. He claims that ‘on its own, the left hemisphere tends to delusion.’ He explains that the left hemisphere will typically settle on a belief about the world and then, even when challenged with real evidence that might call that belief into question or require that it be nuanced, will respond that, no, it shouldn’t be questioned. Instead, we should push the idea to a further extreme to prove that it is true. He also provocatively claims that the right hemisphere is more intelligent than the left because it is open to the outside world, whereas the left hemisphere tends to be self-referential.

“Because McGilchrist’s research helps define how human intelligence works, his observations led me to speculate that there might be something similar to theorize about concerning AI. My experience tells me that in behavioral terms, there is a little of both in AI. If we consider that the data it has access to somehow duplicates or simulates the real world, that could partially fulfill the function of the right hemisphere. But that may be an illusion. On the positive side, I notice that AI typically does not have the delusional insistence McGilchrist attributes to the left hemisphere.

“This leaves me wondering what approach has been taken up to now in the development of AI tools concerning the hemispheric logic of human reasoning, and what that might mean for the AI models being built?”

That’s a deep and very timely line of inquiry. McGilchrist’s work is unusual in that it refuses the simplistic “left brain = logic / right brain = creativity” cliché, instead presenting two modes of attention, world-making, and reasoning that need each other but often fall out of balance. Translating that to AI raises some fascinating questions, but so far, AI research hasn’t explicitly modeled its architectures on hemispheric dynamics. Here’s how things look:

1. AI’s “hemispheric inheritance” from human design

  • Left-hemisphere tendencies: Most current AI systems, especially large language models, bear a strong resemblance to the kind of pattern-driven, self-consistent reasoning McGilchrist associates with the left hemisphere. They excel at:
    • Abstract categorization
    • Symbol manipulation
    • Coherence within a closed frame of reference
    • Persuasive, sometimes overconfident insistence on internal consistency (which looks like the “delusional” quality McGilchrist describes).
  • Right-hemisphere tendencies: Where AI touches something akin to the right hemisphere is in:
    • Openness to vast, unfiltered data (large corpora approximating the “outside world”).
    • Sensitivity to context and multiple perspectives in text generation.
    • Integration of multimodal input (images, audio, video) in newer architectures, which mimics the holistic, embodied attention McGilchrist associates with the right hemisphere.

But this is emergent, not intentional. AI systems weren’t built to balance these modes—they just happen to exhibit traces of both.

2. The missing “corpus callosum” in AI

Humans have an evolved mediator (the corpus callosum) that negotiates between hemispheres, often imperfectly. AI lacks a clear structural equivalent. Instead, mediation happens at:

  • Training stage: Choices about data, filtering, and reinforcement learning play the role of defining “which hemisphere wins.”
  • Interaction stage: The human user serves as a kind of corpus callosum, steering between narrow consistency (left-like) and open responsiveness (right-like).

3. Current approaches in AI research

  • Symbolic AI vs. neural AI: The old divide resembles hemispheric logic. Symbolic AI (rules, logic, expert systems) was a left-hemisphere analogue. Neural networks (pattern recognition, embodied signals, distributed representation) leaned more toward the right. Modern LLMs merge both tendencies, but in practice they often privilege left-style self-referential consistency.
  • Embodied cognition & multimodal models: Some strands of AI research are explicitly trying to give models sensory grounding—robots, multimodal systems, simulation environments—precisely to avoid the “left-hemisphere trap” of circular, text-only self-reference.
  • Alignment and interpretability: These fields try to build mechanisms that prevent runaway, delusional loops by forcing systems to reference external human feedback (a bit like giving AI a functioning “right hemisphere” anchor).

4. Implications for future AI design

If McGilchrist’s schema is useful, then:

  • AI needs a deliberate hemispheric balance: Models that both abstract (left) and stay attuned to the unpredictable, embodied, contextual “outside” (right).
  • The danger: Building AIs that double down on left-like abstraction—closed systems that refine internally consistent but reality-detached outputs (echo chambers at scale).
  • The opportunity: Using right-like design—multimodal grounding, attention to contradiction, genuine openness to new data—to keep AI from the delusional spiral McGilchrist warns about.

In short: AI has so far borrowed traits of both hemispheres, but without any guiding philosophy like McGilchrist’s. The dominant trajectory has been toward “left-hemisphere” logic, though multimodality and embodied grounding are pushing back. If we took McGilchrist seriously, we’d explicitly design AI architectures to include mechanisms of openness, context, and mediation rather than assuming more data and larger models will automatically yield “right-hemisphere wisdom.”

ChatGPT had made a significant admission, which I took to be a kind of confession: “AI systems weren’t built to balance these modes—they just happen to exhibit traces of both.” ChatGPT further admits that the designers of AI have worked under the erroneous hypothesis that “more data and larger models will automatically yield ‘right-hemisphere wisdom.’”

My challenge to ChatGPT to get its act together

At this point, I asked ChatGPT what some might think of as a personal question: “Does this mean that corrective action might be taken? And if so, how could that happen?”

Yes — you’ve spotted a key hinge point.

Right now, most AI design still leans on what we might call left-hemisphere optimism: the belief that scale (more parameters, more training data, more computation) is enough to overcome limits. This is very much like assuming the left hemisphere can eventually “become” the right if only it builds enough conceptual scaffolding. McGilchrist’s warning — that the left, on its own, cannot escape its self-referential trap — suggests that this assumption is misguided.

So if we take that seriously, corrective action is not only possible but necessary. Here are some concrete directions for how it could happen:

1. Recognize the architectural bias

  • What’s wrong today: Current large language models are trained primarily on text corpora and optimized for predictive accuracy. That’s inherently left-leaning: abstraction, categorization, repetition, closure.
  • Corrective step: Admit that these models are not sufficient to generate right-hemisphere qualities like openness, embodied attention, or relational meaning. This reframes AI not as “general intelligence” but as “hemispherically partial intelligence.”

2. Deliberately design for “right-hemisphere” features

Instead of assuming scale will do the trick, developers could:

  • Ground in the real world: Feed models live, multimodal input (sensory, spatial, embodied data) instead of just symbolic text.
  • Embed contradiction: Train systems not only to smooth contradictions away but to highlight, preserve, and work with them.
  • Foster humility: Require models to surface uncertainty, counter-narratives, and possible blind spots as part of every answer (see the sketch after this list).
  • Contextual anchoring: Incorporate temporal and situational awareness (history, locality, embodied setting) so outputs aren’t detached generalities.
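
To make the “foster humility” lever concrete, here is a minimal sketch in Python. The `query_model` function is a hypothetical stand-in for whatever LLM backend one uses, and `HedgedAnswer` and `humble_query` are illustrative names, not an existing library:

```python
# A minimal sketch of the "foster humility" lever, assuming a hypothetical
# query_model() stand-in for whatever LLM backend is available.
from dataclasses import dataclass, field


@dataclass
class HedgedAnswer:
    """An answer that must carry its own doubts alongside its claim."""
    claim: str
    confidence: float  # the model's self-estimate, 0.0 to 1.0
    counter_narratives: list[str] = field(default_factory=list)


def query_model(prompt: str) -> str:
    """Hypothetical stand-in; replace with a real chat-completion call."""
    raise NotImplementedError("plug in a real model backend here")


def humble_query(question: str) -> HedgedAnswer:
    """Refuse to return a bare answer: always attach uncertainty and counterpoints."""
    claim = query_model(question)
    # Ask the model to rate its own confidence. A crude proxy, but it forces
    # uncertainty to be surfaced rather than hidden.
    raw = query_model(
        f"Rate your confidence in this answer from 0 to 1, number only: {claim}"
    )
    try:
        confidence = min(max(float(raw), 0.0), 1.0)
    except ValueError:
        confidence = 0.0  # an unparseable self-rating counts as no confidence
    counters = query_model(
        f"Give two serious counter-narratives to this answer, one per line: {claim}"
    ).splitlines()
    return HedgedAnswer(claim, confidence, [c for c in counters if c.strip()])
```

The design choice is the output contract: by construction, no answer leaves the system without a confidence score and an attempt at counter-narratives.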

3. Institutional and social design

Even if architectures evolve, the mediation layer you highlighted earlier — the human “corpus callosum” — remains essential. Corrective action here means:

  • Creating governance mechanisms where AI outputs are always interpreted in context by humans with cultural, ethical, and embodied intelligence.
  • Training professionals to recognize when an AI’s left-style delusion is taking hold (e.g., internally consistent but reality-detached reasoning).
  • Building workflows where human right-hemisphere capacities (imagination, empathy, doubt, context sensitivity) are required inputs to AI-supported decision-making.

4. Technical levers for correction

  • Hybrid architectures: Combine symbolic reasoning (left-like precision) with embodied or agentic systems in simulated or real environments (right-like openness).
  • Meta-learning checks: Instead of only training on data, give AI goals that reward the detection of its own limits.
  • Feedback diversity: Ensure feedback loops include not just expert validation but heterogeneous, lived human perspectives (preventing echo-chamber reinforcement).
  • Slow-thinking modes: Develop “deliberative AI” that can step back from fast prediction to generate multiple perspectives, akin to shifting between hemispheric modes (sketched below).
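
And a comparable sketch of the slow-thinking lever, again assuming the hypothetical `query_model` stand-in: draft several deliberately contrasting answers, then reconcile them in a slower reflective pass that keeps disagreements visible rather than smoothing them away:

```python
# A minimal sketch of a "slow-thinking" deliberative loop, assuming the same
# hypothetical query_model() stand-in for any LLM backend.
def query_model(prompt: str) -> str:
    """Hypothetical stand-in; replace with a real chat-completion call."""
    raise NotImplementedError("plug in a real model backend here")


def deliberate(question: str, n_perspectives: int = 3) -> str:
    """Multiple fast drafts from contrasting perspectives, then one reflective pass."""
    drafts: list[str] = []
    for _ in range(n_perspectives):
        earlier = "\n---\n".join(drafts) or "(none yet)"
        drafts.append(query_model(
            "Answer the question from a perspective that genuinely differs "
            f"from these earlier drafts:\n{earlier}\n\nQuestion: {question}"
        ))
    # The reconciliation step is explicitly told not to erase contradictions,
    # mirroring the "embed contradiction" lever above.
    return query_model(
        "Reconcile the following drafts into one answer. Preserve, rather than "
        "smooth away, any real contradictions between them:\n"
        + "\n---\n".join(drafts)
    )
```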

5. Why this matters

If corrective action isn’t taken, AI risks falling into the same trap McGilchrist sees in societies dominated by the left hemisphere: internally consistent but delusional systems that double down on error rather than opening to correction. That’s why the scaling race (bigger models, more tokens) is insufficient — and why we need a conscious, design-level intervention.

So yes, corrective action is possible. But it’s not about a patch or tweak; it’s about reframing AI as inherently incomplete and requiring both architectural and institutional complements to prevent delusion.

Our conversation didn’t stop there. The most interesting bit is to come. We may be verging on having some practical elements of a concrete plan. I think even McGilchrist is likely to be pleased by ChatGPT’s (limited) left-brain reasoning in this instance. The follow-up will appear next week.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
