By DS
All intelligence tests for humans are niche measures. There is no general intelligence test for all the possible ways that humans are intelligent. Humans can adapt their intelligence to many things, but most measures of intelligence are for things in which training has come first.
Even with training, there is no guarantee that an individual will be great at the thing. Also, in human societies, what it means to be intelligent is categorized by things that have productive value.
It is possible to be intelligent at things that are not needed [or valued] in some societies, or in some epochs. This does not mean that the individual lacks intelligence, but that the adaptation that would have made the individual economically effective, in that place or time, is lacking.
Humans, across history, have never been rated for general intelligence. It is being great at what matters that often counts. To measure intelligence is to know what matters, in a place and at a time. Other features could be supportive [or fun], but it is mostly what counts that gains admission.
AGI
What to track is the capability of AI to take the place of an intelligent person, in a country, at this time, across several economic roles. AI may not yet be independent, or thorough enough, so to speak, but can it make an intelligent person [in a field] unnecessary, even with its flaws?
Fixation on AI's flaws is a misunderstanding of how intelligence is categorized in a society. Also, the question of human intelligence is the question of the brain. So, the standard would be to show how the brain processes intelligence, to develop a parallel that can be used to assess the reach of AI.
What is the certainty that [niche] artificial intelligence has not already surpassed [niche but important] human intelligence? What are everyday uses for various aspects of [important] human intelligence that AI cannot fill? Some people say AI lacks understanding, creativity, innovation and so on, but aren't those other measures of intelligence that may be assumed to be included, so long as outputs meet expectations?
In evaluating [important] human intelligence, a conclusion is sometimes reached when it is ascertained that what is known is enough for the work. No one is seeking general [or total] intelligence when hiring for a role, so to speak. Job openings have requirements; if an individual satisfies them, chances mount.
The technology industry is racing toward artificial general intelligence [AGI] or artificial superintelligence [ASI]. The expectation is to match or surpass human intelligence. But currently, if AI is trained on anything that can be digitally deployed, it may excel at it. And digital is the ascendant social and productivity tool. AI is already a force in that sphere. The mission of AGI or ASI is a facade that assumes human intelligence has not already been out-competed.
Human Intelligence Research Lab
The only project that matters now, for parity with the takeoff of AI, is human intelligence. It is not even human-centered artificial intelligence [HCAI], AI Safety, AI Alignment, or whatever else is on the menu for more AI.
Human intelligence is the use of human memory. What is memory, and how is it used? These are questions for conceptual brain science. It is possible to extract the electrical and chemical signals of neurons as the basis for memory and intelligence, then build a model of them, for learning, problem-solving, creativity, innovation and so forth.
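As one illustration of what such a model might look like, here is a minimal sketch, assuming a classic Hopfield-style associative memory as a stand-in for the proposed model (this is an assumption for illustration, not the author's method): neuron "electrical signals" are modeled as +/-1 states, and "chemical signals" as learned connection weights between neurons.

```python
# Illustrative only: a toy associative memory. The names train/recall are
# hypothetical, not from the source.

def train(patterns):
    """Hebbian learning: each stored pattern strengthens pairwise weights."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=10):
    """Repeatedly update neuron states until the network settles on a memory."""
    s = list(cue)
    n = len(s)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

# Store one 8-neuron pattern, then recover it from a corrupted cue.
memory = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([memory])
cue = list(memory)
cue[0] = -1  # flip one "neuron" to corrupt the cue
print(recall(w, cue))  # settles back to the stored pattern
```

The point of the sketch is only that memory stored in connection strengths can reconstruct a whole from a damaged part, one small behavior a human intelligence research lab might want to display and explain at far greater fidelity.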
The first Human Intelligence Research Lab, for displays and explanations using the [electrical and chemical] signals of neurons, would become an aspect of readiness as artificial intelligence conquers ground.
There is a recent [August 13, 2025] report by MIT Technology Review Insights, The road to artificial general intelligence, stating that, “Artificial intelligence models that can discover drugs and write code still fail at puzzles a lay person can master in minutes. This phenomenon sits at the heart of the challenge of artificial general intelligence (AGI). Can today’s AI revolution produce models that rival or surpass human intelligence across all domains? If so, what underlying enablers—whether hardware, software, or the orchestration of both—would be needed to power them? Aggregate forecasts give at least a 50% chance of AI systems achieving several AGI milestones by 2028. The chance of unaided machines outperforming humans in every possible task is estimated at 10% by 2027, and 50% by 2047, according to one expert survey. Time horizons shorten with each breakthrough, from 50 years at the time of GPT-3’s launch to five years by the end of 2024.”