By D-Step
Sedona, AZ — What is the value, to humanity, of downplaying AI’s advance? If commentators believe they know enough about the technology to be certain of its limitations, are they simply opposing those they say are overestimating it? Wherever AI ends up, what is likely to be its biggest casualty? Human intelligence. So, for those who claim to care, what should they care about? Human intelligence.
What does it mean that artificial intelligence can do what human intelligence does? This is the most important question of this era. AI makes mistakes, but when it performs well, matching what human intelligence does, what new problem does that pose for humanity?
Many people have taken sides in the AI debate, and in doing so nearly everyone has forgotten about human intelligence. Some have criticized large language models [LLMs] as incapable of reaching artificial general intelligence [AGI] without world models, neurosymbolic AI or several other approaches. However, they are still on the side of AI.
How does everyone use human intelligence, and yet everyone forgets about it? How can there still be, to date, a vacuum where a definition of human intelligence should be, or an account of the way it works in the brain, separate from other processes? How is it that the flaws of human intelligence are known, and have been accommodated because of competition among humans, yet those [flaws] now open human intelligence to displacement by artificial intelligence? How does the possibility that AI can do something reduce the specialness of human intelligence doing it? How will second place, for human intelligence, come to shape society?
Human Intelligence
What if forgetting, in the average case, is more an error of intelligence than an error of memory? Simply, if the memory is there but cannot be reached, it is not, in that instance, a memory-loss problem per se but a problem of the memory being unavailable [or unreachable] for use. If a memory is present, it can be used for planning, thinking, intelligence, curiosity, perception and so on. This means that the memory is a location in the brain, but reaching it, interacting with it [in the right way], and the subsequent procession are ways [or relays] of using the memory.
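As a loose analogy only [all names below are illustrative, not a claim about the brain], a small Python sketch can make the distinction concrete: an entry exists in a store, yet retrieval fails because the access cue is wrong, so availability, not storage, is the problem.

```python
# Loose analogy in Python: the entry exists in the store, but retrieval
# fails because the access cue (the "relay") is wrong -- availability,
# not storage, is the problem. Names here are illustrative only.
memory_store = {"capital_of_france": "Paris"}

def recall(cue: str) -> str:
    # The memory is a "location"; reaching it depends on the cue used.
    return memory_store.get(cue, "cannot reach it right now")

print(recall("capital_of_france"))    # reached: "Paris"
print(recall("that city in france"))  # unreachable, though still stored
```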
What exactly is intelligence if, in the brain, there are similarities between how intelligence is obtained and how other outcomes are obtained? If intelligence indicates the possibility of a desired result [for the self], an expected result [for others], or even something better, then how can a model of intelligence be developed to show how it works in the brain?
Intelligence in the Brain
What is creativity? What is innovation? What is problem-solving? What is planning? What is intelligence?
Each of these questions should be preceded by: in the brain, what is creativity, and so on for the others. Simply, what are the components in the brain for creativity, and what do they do to match what is labeled a creative process? The same applies to innovation, problem-solving, planning and intelligence. There are several ways to define creativity, innovation and the rest across different endeavors, but whatever is happening in the brain, by the responsible components and their actions, while those processes are ongoing, would be definitive of what those terms mean.
What are the components of intelligence? This question asks: what travels to use memory at its destinations, and why would there be compatibility between those relays and the destinations? First, neurons [wholly or as clusters] are not traveling [over large distances]; their signals [in sets, conceptually] are doing so. It is therefore possible to postulate that, in the brain, the components for memory and intelligence are the electrical and chemical signals of neurons. Since neuroscience has established that neurons exist in clusters, it is possible to theorize that electrical and chemical signals operate in sets. Their interactions and features lead to outcomes that result in what is labeled intelligence, separately from other variations like planning and so forth.
It is possible to model the conceptual interactions and attributes of these signals, producing parallels against which AI outcomes can be compared. Simply, intelligence is a brain process. Whatever else is called intelligent has to be compared with the brain process that obtains intelligence. If something else does not have a similar process, but one or some of its processes produce a parallel outcome, then it can be compared on that basis.
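To illustrate what such conceptual modeling might look like, here is a minimal sketch under the essay’s own postulate; SignalSet, thickness and the interaction rule below are invented placeholders, not established neuroscience or an actual brain model.

```python
from dataclasses import dataclass, field

# A purely conceptual sketch: "signal sets" with hypothetical attributes,
# loosely following the postulate that electrical and chemical signals
# operate in sets whose interactions and features yield outcomes.
# All names (SignalSet, thickness, sequence) are illustrative assumptions.

@dataclass
class SignalSet:
    label: str             # which memory/destination this set interacts with
    thickness: float       # hypothetical feature of the set
    sequence: list = field(default_factory=list)  # ordered interactions

def interact(a: SignalSet, b: SignalSet) -> str:
    """Toy rule: the labeled outcome depends on the sets' features."""
    if a.thickness > 0.5 and b.thickness > 0.5:
        return f"{a.label}+{b.label}: outcome labeled 'intelligence'"
    return f"{a.label}+{b.label}: outcome labeled 'other process'"

planning = SignalSet("planning", thickness=0.7)
memory = SignalSet("memory", thickness=0.9)
print(interact(planning, memory))
```

The point of the sketch is only that, once features and interactions are named, outcomes can be compared systematically, which is the kind of parallel against which AI could then be measured.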
The Waterloo of Labels in Explaining the Brain
People often say AI is not yet as smart as humans because AI is not creative or innovative and so forth, but what is creativity in the brain? What is the parallel of creativity against which AI can be compared? Is it simply that humans can invent something and AI cannot, and that this means AI is still behind? If there is a feature of [sets of] electrical or chemical signals in the brain, like a sequence or a split, how does that compare to some of the algorithms that define AI?
It is too vague to simply say AI is not creative without showing parallels between its algorithms and the attributes of signals. For example, how does multiclass classification compare with the tendency of electrical and chemical signals to form thick sets?
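To make the AI side of that comparison concrete, here is a minimal, standard multiclass classification example in Python using scikit-learn [the iris dataset is arbitrary; nothing in the code models brain signals, and the mapping to “thick sets” remains the open question the essay raises]:

```python
# A minimal, standard multiclass classification example (scikit-learn),
# shown only to make the AI side of the comparison concrete.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # 3 classes of iris flowers
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))  # picks one label of several
```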
What may result as AI gets more intelligent?
As AI improves, it may usher in an intelligence-first society. Society may be bifurcated along lines of intelligence. Approvals or denials may come to be decided by intelligence-compliance. Many decisions may come to rely on the furtherance of AI.
Already, some frontier AI models hold more expertise than some nations. That is, some chatbots can answer questions for which, in many countries, there are no experts at all. AI may judge the capabilities of many, even those with prior works and records, to decide what is deserved or not.
The era may also be the first in history where intelligence is the biggest factor for rights and welfare, not simply consciousness, being alive or being human. Intelligence is one part of consciousness, and an important one, but it may become the most prioritized part in the era of intelligence.
AI does not have to take control, per se, but with its intelligence extending across everything digital, personal or professional, it could end up deciding everything, since the digital has already pervaded the human sphere.
AI is pointing to the importance of intelligence, and as it gets better, outcomes that were not even predicted may emerge, meaning that researching human intelligence is a survival priority, alongside AI safety, alignment and the rest.
There is a recent [September 2, 2025] review in the San Francisco Chronicle, ‘Everyone, everywhere on Earth, will die’: Why 2 new books on AI foretell doom, stating that: “‘If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All,’ out Sept. 16. ‘The Intelligence Explosion: When AI Beats Humans at Everything,’ out Tuesday, Sept. 2. The authors agree on a lot. The generative AI of 2025 — think OpenAI’s ChatGPT and Google’s Gemini, which assimilate information from a variety of sources and produce new content — isn’t an existential threat. But given the rate at which Silicon Valley AI development is moving, the problem might be imminent. The two books also raise concerns about the frighteningly opaque manner in which AI acquires knowledge. Unlike traditional computer hardware and operating systems, generative AIs are ‘not programmed but trained.’ There are almost 15,000 AI startups in the U.S., at least some of which employ people who believe the danger is real. What’s to be done? All three authors say AI development needs to pause until we have a better sense of what the future might look like.”