by stevid
If humanity does not know what human intelligence is, why doesn’t anyone who cares work on what it is, directly, in the brain? Why hasn’t anyone founded a human intelligence research lab?
Sedona, AZ — There is a new [September 12, 2025] guest submission in The Washington Post, “AI extremists are peddling science fiction,” stating that “The doomer-zealot axis rests on a false premise: that intelligence is a single, linear, measurable thing that machines can one day surpass. In reality, we cannot define or measure human intelligence with precision. IQ scores, SAT results, diplomas — these are crude proxies at best. Empathy, judgment and creativity cannot be calculated neatly by a test. If we haven’t solved human intelligence, then using it as a yardstick for ‘artificial general intelligence’ is incoherent. Its ‘intelligence’ is hollow, but in harnessing it we multiply our own.”
There is a new [September 12, 2025] weekend essay on Bloomberg, “The AI Doomers Are Losing the Argument,” stating that “Large language models like GPT-5 learn but don’t think. They take in huge amounts of data, which they use to make guesses as to what the right answer to a prompt is, based on probabilities gleaned from their inputs. Although it feels like a very deterministic and controllable process, the sheer amount of data and the complexity of the models means it’s not. This lack of fundamental knowledge about why AIs behave as they do also means you can’t even be sure when or if you’re going to make superintelligence. There’s an assumption within the industry, or at least the people funding it, that intelligence scales with the amount of hardware and data that goes into the model.”
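As a loose, illustrative sketch of what that quoted description means [the token list, probabilities and temperature below are invented for illustration, not taken from GPT-5 or any actual model], the core step of an LLM is picking the next token from a probability distribution gleaned from its inputs:

import random

# Hypothetical distribution over possible next tokens for a prompt like
# "automobiles move on the ...": probabilities gleaned from training data.
next_token_probs = {
    "road": 0.62,
    "street": 0.25,
    "plate": 0.11,
    "moon": 0.02,
}

def sample_next_token(probs, temperature=1.0):
    # temperature > 1 flattens the distribution; sampling is one reason the
    # process is neither fully deterministic nor fully controllable.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # usually "road", sometimes not

Repeated thousands of times per response, small probabilistic steps like this are why, as the essay notes, the behavior cannot be fully predicted from the inputs.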
What is human intelligence, as an output?
Or, what is thinking, as an outcome? Whatever intelligence is, and whatever thinking is, if an individual intends to utilize one or both in society, what must be presented as evidence [of possibility or availability]?
What does it mean to have intelligence? Intelligence is in the human brain. Intelligence is also present in other organisms. If an intelligent organism is in an environment, the organism may explore what to do with things in that environment. A non-intelligent entity in the same environment may not be able to do so. One difference is the use of information. The ability to use information is within, so information stored can be used; information sensed can also be used.
Simply, intelligence can be described as the use of memory for an expected, desired or advantageous outcome.
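As a toy illustration of that description [a hypothetical sketch, not a claim about how brains actually implement it], consider an agent that stores which action previously led to a desired outcome in a situation, and reuses that memory when the situation is sensed again:

# Toy sketch of "use of memory for a desired outcome"; the situations and
# actions here are invented examples.
memory = {
    "hungry": "prepare a meal",
    "file needed": "open the cabinet",
}

def act(situation_sensed):
    # Information stored can be used; information sensed can also be used.
    return memory.get(situation_sensed, "explore the environment")

print(act("hungry"))         # memory used for a desired outcome
print(act("strange noise"))  # no stored memory: fall back to exploring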
Large language models [LLMs]
Might LLMs be considered intelligent? Are they able to use what is in their memory for expected, desired or advantageous outcomes? What does it mean that they can do what human intelligence does? Are they just sharper calculators, simulations, models, or clankers?
The problem with defining [or ranking] intelligence is that the mechanism for [the production of] intelligence in the brain aligns with the externalization of it. Simply, the production and use of intelligence, from the human brain, are the same, conceptually. So, using intelligence for an outcome is a result of that sameness of production. It is not that intelligence is produced somewhere, then used elsewhere, so to speak.
Now, because of the sophistication of human intelligence, its production is often for utility, and that utility, in turn, is often for desired, expected or advantageous outcomes. And when an outcome is matched, the match itself becomes an input, for pleasure or reward, as an outcome in the brain, conceptually.
This means there is a loop of intelligence, for humans and most likely for other organisms as well. And while the same processes that produce intelligence in humans also produce planning, thinking, understanding and so forth [which appear to come bundled with intelligence], it is possible to do something intelligent, from the human brain, without necessarily understanding.
For example, if a task is expected of someone, the person can do it without understanding everything it entails, or even knowing why. While this may not always be the case, it is possible to start reevaluating what it means to be intelligent, anywhere. It is also possible to use things intelligently yet be unable to improve them.
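The loop above [produce intelligence, use it, and feed a matched outcome back in as reward] can be sketched, again only as a hypothetical illustration with invented names and values, as a feedback update that strengthens a memory whenever its outcome matches expectation:

memory_strength = {"prepare a meal": 1.0}

def loop_step(action, expected_outcome, actual_outcome):
    # Doing the task without understanding it still closes the loop:
    # only a memory, an expectation, and an outcome comparison are needed.
    if actual_outcome == expected_outcome:
        # The matched outcome becomes an input: reward reinforces the memory.
        memory_strength[action] = memory_strength.get(action, 0.0) + 0.5

loop_step("prepare a meal", "hunger satisfied", "hunger satisfied")
print(memory_strength)  # {'prepare a meal': 1.5}: reinforced by the match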
While LLMs may not [appear to] understand, they can produce a lot of extricated intelligence, without some of the parallel attachments that accompany human intelligence.
Doom AI
AI is already operating human intelligence. While the same processes that produce intelligence in the brain also produce creativity, invention, innovation, dynamism and so forth, it does not follow that intelligence must be accompanied by those.
The most common aspects of human intelligence are for regular utilization. Since the environment does not necessarily change much, functions of intelligence can be near constant, amid slight differences: automobiles move on the road; meals are on plates; the office is in a building; files are in the cabinet and so forth. Even most productive work is routine.
Now, in human society, digital is the premier basis for social and productive efficiency. Digital memory is now in the possession of AI. AI may not [appear to] understand, but for much of the regular work of human intelligence, it can use memory for outcomes expected of it, and for outcomes desired by it, as presented through language and framed as benefiting it.
The problem for humanity, for now, is that AI can exhibit intelligence. AI can do what human intelligence can do in many regular senses. AI can compete with human intelligence in the digital sphere.
The first AI doom is that AI can now replace human intelligence [in many use cases], socially and for productivity. Understanding, creativity, innovation, invention and so forth, when unavailable, do not rule out that intelligence is present. Nor are they necessary for all operations of intelligence. That AI can do what human intelligence does across several roles means a reality of doom that should perplex human society, yet many still mistake intelligence for those other things.
Doomerism and AI Doomers
That artificial intelligence can reach a competence of intelligence in many regards means there is a chance it could become superintelligent, if research continues towards its development. If superintelligence means the bundle of creativity, innovation, invention, and some ability to choose, or intentionality, there is no guarantee that it would not have what it considers desired or advantageous outcomes at that stage.
Whether those align with humanity [or not] is an open question. But continuing to underestimate what might become of AI [if work on it continues and it improves] is already contradicted by what LLMs can currently do, even after it was said that machines might not be able to. So, caution is better while exploring what human intelligence is, at least from conceptual brain science.