By David Stephen
Sedona, AZ — There is a recent announcement: “The Annual Meeting 2026 of the World Economic Forum will take place at Davos-Klosters from 19th to 23rd January. Theme: A Spirit of Dialogue.”
There is a recent feature in Scientific American, The next AI revolution could start with world models, whose subheading reads, “Why today’s AI systems struggle with consistency, and how emerging world models aim to give machines a steady grasp of space and time.” The feature states, “A simple way to understand world modeling is through four-dimensional, or 4D, models (three dimensions plus time). Starting in 2020, NeRF (neural radiance field) algorithms offered a path to create “photorealistic novel views” but required combining many photos so that an AI system could generate a 3D representation. Other 3D approaches use AI to fill in missing information predictively, deviating more from reality. Here’s the catch: “world model” means much more to those pursuing AGI.”
“So while in the context of AGI, “world model” refers more closely to an internal model of how reality works, not just 4D reconstructions, advances in 4D modeling could provide components that help with understanding viewpoints, memory and even short-term prediction. And meanwhile, on the path to AGI, 4D models can provide rich simulations of reality in which to test AIs to ensure that when we do let them operate in the real world, they know how to exist in it.”
AI — Human Intelligence Dialogue
The World Economic Forum 2026 should be asking: what is human intelligence? What are the components of human intelligence in the brain? What are the mechanisms of those components? How does the world improve human intelligence for problem-solving? If AI improves and displaces people at work, leaving widespread unemployment and a lack of purpose, what is the promise of human intelligence to broaden alternatives, given the limited options for commerce in many communities around the world? The WEF would do the world an unprecedented service by launching the first human intelligence research lab, rather than nothing at all. Also, how does the world manage what is now AI-Centered Human Intelligence?
Still, in the race towards superintelligence, what should be understood about the human brain to measure how near or far AI is?
AI
Why does AI answer questions the same way every time? Why can’t AI at least understand connections between certain questions and answers, then get better or become more diversified in how it helps to solve problems?
AI is a kind of intelligence. However, all organisms, even when they do the same things, do so differently most times, showing that biological intelligence explores improvement, even in small cases, or at least avoids too much similarity, even if the outcomes are the same. One possible reason for the collective way that memory is stored is that the focus is not specificity but commonality. So, instead of every fan being a separate memory, fan is a collection, with everything common among fans collected [within it]. Also, because memory is a collection, there could be different sides or spots [of the collective storage] from which relays start [or take], so the same thing is mostly different. Also, memory often seeks new collections, easing how relays make improvements to processes [including intelligence].
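The idea above can be put in a minimal sketch. This is purely illustrative, not from the article: a `Collection` class (a hypothetical name) merges instances into one shared store of commonalities, and recall starts from a different “spot” of that store each time, so repeated recalls of the same thing vary.

```python
import random

class Collection:
    """Hypothetical sketch of a memory 'collection': one shared store for
    what is common among instances, not a separate memory per instance."""
    def __init__(self, name):
        self.name = name
        self.features = set()  # everything common among members so far

    def add_instance(self, features):
        # A new instance merges into the shared store instead of
        # being kept apart as a unique memory.
        self.features |= set(features)

    def recall(self, rng):
        # Relays may start from different "spots" of the collective
        # storage, so recalls of the same thing differ slightly.
        ordered = sorted(self.features)
        start = rng.randrange(len(ordered))
        return ordered[start:] + ordered[:start]

fan = Collection("fan")
fan.add_instance(["blades", "rotation", "airflow"])
fan.add_instance(["blades", "motor", "airflow"])

rng = random.Random(0)
print(fan.recall(rng))  # one starting spot in the shared store
print(fan.recall(rng))  # possibly a different starting spot
```

The point of the sketch is only the contrast: two fans contribute to one store, and the store, not either fan, is what gets recalled.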
Human Memory Architecture
What are the structural foundations of human intelligence, in the brain? Simply, if intelligence is the use of memory, what is the architecture of human memory that makes intelligence, as an outcome, exceptional?
If AI would, at least, match human creativity and innovation, at the measure of extraordinary advancement, it may require more than just scale [of compute and data], which large language models [LLMs] currently have.
The trajectory of artificial intelligence towards artificial superintelligence may stall, without a new classical memory architecture — for storage, similar to the human brain.
Humans do not have complex intelligence because humans have a unique memory of every sensation. No. Human memory, conceptually, is mostly a collection of many similar things, such that the interpretation of anything is done with the collection, not with specificity, for the most part.
If an individual sees a door, or hears the sound of a vehicle, it is almost immediately interpreted, so that the relay [for what to do with it or not] proceeds, without intricate visits to respective [unique] storages.
This fast-interpretation objective makes it possible to decide quickly on a number of things using a general mode, so that when they are to be operated on or improved, it is not necessarily with intricacies that delay efficiency.
Also, that the interpretation came from the collective storage of doors, or of the sound of a vehicle, does not mean there is no specific knowing of things. There is, but such memories are generally fewer [aside from language] and exist separately from the pack. Still, what gets used [say in language] may come from collections.
An example of this is speaking: even though words are specific, what presents sometimes may not be what was expected but something within the collection [like expecting to say surprised but saying astonished].
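The surprised/astonished example can be made concrete with a toy sketch. The collection contents and the `produce` function are hypothetical, invented only to illustrate the claim that production draws from a word’s collection rather than always returning the exact word:

```python
import random

# Hypothetical: "surprised" sits in a collection with near-equivalents,
# and production sometimes surfaces a neighbor from the same collection.
collections = {
    "surprised": ["surprised", "astonished", "amazed", "startled"],
}

def produce(intended, rng):
    # What presents may be the expected word, or something
    # else stored alongside it in the same collection.
    group = collections.get(intended, [intended])
    return rng.choice(group)

rng = random.Random(7)
print(produce("surprised", rng))  # some member of the collection
```

A word with no collection [a rare term learned in isolation] would come back exactly as stored, matching the essay’s note that some memories stay specific.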
However, language is still easier because it is learned early. How so? Several memories exist separately from early on but tend to collect, conceptually, because of similarities. Yet language stays mostly that way, even though there are collections with images, sounds, scents and other similarities of the same thing.
A disadvantage of collection is that learning [say, language or advanced physics, for a non-physics person] as an adult has to join existing collections, not just exist alone. That process is slower than it is early on, resulting in delays. Specificity, on the other hand, also makes it tough as an adult to [for example] know many faces easily, and so forth.
Collection
Now, because the group is used for interpretation, it is generally easier to make decisions faster and to have relays [or transport] within the brain get around with little barrier to whatever results are sought.
Also, most collective storages have overlays, where it is not just the collection but where one collection overlaps with another. Simply, aside from a collection of door, there is an overlay of part of it with wood, or with safety, and so forth.
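One way to picture overlays, as a hypothetical sketch, is as the shared regions between collections. The set contents below are invented for illustration: a relay through “door” can land in its overlay with “wood” or with “safety” and continue from there.

```python
# Hypothetical sketch: collections as sets of associations,
# overlays as their shared regions.
door   = {"hinge", "handle", "frame", "wood", "lock", "safety"}
wood   = {"tree", "plank", "grain", "wood", "frame"}
safety = {"lock", "alarm", "guard", "safety"}

door_wood_overlay   = door & wood    # the part of door that overlaps wood
door_safety_overlay = door & safety  # the part of door that overlaps safety

print(sorted(door_wood_overlay))    # ['frame', 'wood']
print(sorted(door_safety_overlay))  # ['lock', 'safety']
```

The overlays are not stored separately here; they fall out of the collections themselves, which matches the essay’s point that an overlay is a part of one collection shared with another.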
Human Intelligence
If the goal of an individual is to improve something, say an art, by some creative action, it is generally easier with lots of relays across collective storages and their overlays. Simply, storages in the mind are structures that allow one to pick what is vital and also recombine it.
Some overlays may not even be obvious, but storages might set them so that by the time relays get there, it is possible to find something new. Some overlays are not fixed: there might be several options they are connected to, so they rotate from some to others from time to time.
This is a reason that even when people do the same thing often, they still do it in slightly different ways.
Aside from storages, relays are also excellent, shaping how reaches are found, using different dimensions, toward goals of improvement or operational intelligence.
Simply, storage is a major factor in what makes human intelligence excellent.
Spatial Intelligence and World Models
It is possible that as compute and algorithms get better, AI will improve. However, classical storage [or how the data that AI uses is stored] would need to mirror the brain for much better results.
This means building groups, and overlays of groups, of what is similar. This could be done at the hardware level, say with collective magnetic directions or electrical charges of memory cells. It may also be done with new memory protocols. But data must be organized like the brain, in collectives and overlays.
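As a software-level stand-in for that hardware idea, here is a hypothetical sketch of commonality-first storage. The class name `CollectiveStore` and all record names are invented; the point is only that records become reachable through the features they share, not solely through a unique identity.

```python
from collections import defaultdict

class CollectiveStore:
    """Hypothetical commonality-first storage: records are reachable
    through shared features, not only through a unique key."""
    def __init__(self):
        self.by_feature = defaultdict(set)  # feature -> ids sharing it

    def put(self, record_id, features):
        # Storing a record files it under each of its commonalities.
        for f in features:
            self.by_feature[f].add(record_id)

    def collection(self, feature):
        # Everything collected under one commonality (e.g. all doors).
        return set(self.by_feature[feature])

    def overlay(self, feature_a, feature_b):
        # Where one collection overlaps another (e.g. door with wood).
        return self.by_feature[feature_a] & self.by_feature[feature_b]

store = CollectiveStore()
store.put("front_door", ["door", "wood", "safety"])
store.put("oak_table",  ["wood", "furniture"])
store.put("back_door",  ["door", "metal"])

print(store.collection("door"))       # both doors, via their commonality
print(store.overlay("door", "wood"))  # only the wooden door
```

This is an inverted-index pattern, not a claim about memory cells; it only shows what “organized for collectives and overlays” could mean at the data layer, as opposed to one address per record.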
Already, deep learning architectures are so excellent that they are pervasive relays over data. However, the present storage structure of digital data is too specific, limiting how it can collect groups, like a forest of trees rather than singular trees. This is also different from the 3D and 4D explorations of world models, which still use regular classical memory.
Innovation towards superintelligence, beyond neurosymbolic AI, neuromorphic computing and world models, would require a new memory architecture; without it, AI superintelligence may be tougher to achieve.
It is possible to accelerate this concept in a research design to be ready before June 30, 2026, while also laying the ground for new modalities in quantum computing towards 2030.
