By Dayv Styve
A principal reason that some people continue to downplay AI is that neuroscience has no major model of how the human mind works, against which to compare how the mind [or whatever is under the hood] of AI advances.
No matter how AI continues to dazzle at the level of human intelligence [and sometimes beyond] on several tasks, to some people it is still just statistics and nothing to worry about. The problem with this assumption is that the risks humanity may face from AI get soft-pedaled while its capabilities mount.
There is a recent [April 1, 2025] open question in The New Yorker, Are We Taking A.I. Seriously Enough?, stating that, “Still, in one’s mental model of the next decade or two, it’s important to see that there is no longer any scenario in which A.I. fades into irrelevance. Either A.I. fails, or it reinvents the world. Many of the researchers seem pretty sure that the next generation of A.I. systems, which are probably due later this year or early next, will be decisive. They’ll allow for the widespread adoption of automated cognitive labor, kicking off a period of technological acceleration with profound economic and geopolitical implications. A real-estate lawyer might have provided a better analysis, I thought—but not in three minutes, or for two hundred bucks. But the advice I’d gotten about the condo was different. The A.I. had helped me with a genuine, thorny, non-hypothetical problem involving money. Maybe it had even paid for itself. It had demonstrated a certain practicality—a level of street smarts—that I associated, perhaps naïvely, with direct human experience. I’ve followed A.I. closely for years; I knew that the systems were capable of much more than real-estate research.”
The most important scientific research anywhere on earth, in all fields, in this era of AI, is the question of the human mind. How does it work? What does the mind do that AI has paralleled? What does this say about what AI might become? How should humanity prepare properly, across different life phases, for what is coming?
Anthropic’s AI Research
Anthropic, the team behind Claude, recently published two ferociously simmering research papers [Circuit Tracing: Revealing Computational Graphs in Language Models] and [On the Biology of a Large Language Model] on how the mind of AI works. They summarized both [on March 27, 2025] in a blog post, Tracing the thoughts of a large language model, stating that, “Claude sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal “language of thought.” We show this by translating simple sentences into multiple languages and tracing the overlap in how Claude processes them. Claude will plan what it will say many words ahead, and write to get to that destination. We show this in the realm of poetry, where it thinks of possible rhyming words in advance and writes the next line to get there. This is powerful evidence that even though models are trained to output one word at a time, they may think on much longer horizons to do so. Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to “catch it in the act” as it makes up its fake reasoning, providing a proof of concept that our tools can be useful for flagging concerning mechanisms in models.”
There is a recent [March 28, 2025] spotlight in WIRED, If Anthropic Succeeds, a Nation of Benevolent AI Geniuses Could Be Born, stating that, “In a meeting in January, an engineer shared how he’d posed to Claude a problem that the team had been stuck on. The answer was uninspiring. Then the engineer told Claude to pretend that it was an AGI and was designing itself—how would that upgraded entity answer the question? The reply was astonishingly better.”
Mind
These leaps by Anthropic can also be described as AI evaluation [and benchmark] research. The studies, at the peak of their advance, raise two key questions regarding AI: what is this internal mechanism of AI, compared to the human mind, and do humans have this mechanism? These questions go beyond the blanket labels of feelings, emotions, memory or even intelligence, to an actual mind-on-mind comparison, towards leads and edges.
Why can AI process so much more data than the human mind? Why is AI faster than the human mind in several computations and outputs? Why is the mind able to process more of the natural environment than AI? Why does the mind need fewer examples to learn than AI needs to train?
For the human mind, what to consider would be the components, their interactions and their attributes. Though some of the outcomes are categorized under descriptions like feelings, memory, emotions, intelligence and so forth, if AI is showing something similar to those outcomes, it should be possible to evaluate AI mechanistically, rather than simply saying that AI does not have feelings or emotions because only humans have them. [What are called feelings and emotions are descriptive labels; to emphasize, the terms are sometimes used interchangeably. It is not certain that the mind’s mechanisms for some feelings and some emotions are different, so in evaluating AI, labels can initially be bypassed.]
If AI has a conceptual space for language, do humans have one? The word sit in English, and then its translations in the other languages that an individual knows: are they in the same arena in the mind? What would that arena be, and how would it work? A rough way to probe this question on the AI side is sketched below.
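As a minimal sketch, assuming an off-the-shelf multilingual embedding model [the model name, sentences and the idea of comparing embeddings are illustrative assumptions here; the Anthropic papers trace internal circuits, not embeddings], one could check whether translations of the same sentence land close together in one shared vector space:

```python
# Sketch: probe a shared "conceptual space" across languages by embedding
# translations of the same sentence and measuring how close they land.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# Illustrative model choice; any multilingual sentence encoder would do.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

translations = {
    "en": "Please sit down.",
    "fr": "Asseyez-vous, s'il vous plaît.",
    "es": "Siéntese, por favor.",
    "de": "Bitte setzen Sie sich.",
}

vectors = {lang: model.encode(text) for lang, text in translations.items()}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# High pairwise similarity hints that the model represents the same concept
# in overlapping regions of its vector space, regardless of language.
for lang, vec in vectors.items():
    if lang != "en":
        print(f"en vs {lang}: {cosine(vectors['en'], vec):.3f}")
```

Whether the human mind has an analogous shared arena for sit across an individual’s languages is exactly the open question.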
It is theorized that the human mind is the collection of all the electrical and chemical signals, with their interactions and attributes, in sets, in clusters of neurons, across the central and peripheral nervous systems. Simply, the human mind is the set[s] of signals.
So, to compare AI and the mind would be to identify the attributes of those sets, within the evidence from neuroscience research. The human mind, conceptually, has thick sets; it is in these thick sets that whatever is common between two or more thin sets is collected. When memories appear to be associated, it could be a result of the thick sets. Sequences could also be responsible for some associations, where one thing follows another, obtained by the relay of electrical signals, using a new path [or sequence] or an old path [or sequence], to another set.
This means that what are called associated memories are not necessarily around the same place, if sequences are responsible, conceptually.
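As a toy sketch of this thick set and thin set idea [the experience names and features below are hypothetical, purely for illustration, and nothing here is established neuroscience], thin sets can be modeled as feature sets, thick sets as their shared features, and a sequence as an ordered relay from one set to another:

```python
# Toy model: thin sets hold features of individual experiences; a thick set
# collects whatever is common between two or more thin sets; a sequence links
# one set to the next, giving a second route to association.
from itertools import combinations

# Thin sets: features of individual experiences (hypothetical labels).
thin_sets = {
    "beach_trip": {"sand", "sun", "friend_ada", "ice_cream"},
    "park_picnic": {"grass", "sun", "friend_ada", "sandwich"},
    "office_day": {"desk", "screen", "coffee"},
}

# Thick sets: the shared features between every overlapping pair of thin sets.
thick_sets = {
    (a, b): thin_sets[a] & thin_sets[b]
    for a, b in combinations(thin_sets, 2)
    if thin_sets[a] & thin_sets[b]
}
print(thick_sets)
# e.g. {('beach_trip', 'park_picnic'): {'sun', 'friend_ada'}}
# Overlap in a thick set is one route to "associated" memories.

# Sequences: association by relay order rather than shared features, so two
# memories can be linked without being stored "around the same place".
sequence = ["office_day", "park_picnic"]
```

The point of the toy is the distinction: thick-set association comes from shared content, while sequence association comes from the order of relay, and the two need not live together.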
AI
The language prowess of AI has already exceeded several assumptions made from decades of linguistics research. The mind of AI, as a non-organism, is also showing that the human mind can be matched in some attributes and exceeded in others. There are peculiar advantages to electrical and chemical signals in the world, which in many ways can be called unstructured data, while there are limits to structured [or repeating] data, like digital, where AI does better. [Simply, even though physical locations remain the same, the daily events in those locations can differ; it is easier for the mind to process those changes, faster than AI, for now.]
For AI, one of the most important attributes to isolate, compared to the human mind, is intent or control: which tensors actually shape AI’s decision-making per type of prompt, towards understanding how it may be kept safe or aligned to human values. If AI could provide a better answer because it is given reward-like or status-like information, then it is also possible to explore penalties for AI when it outputs misalignment, just as consequences work on humans.
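As a minimal sketch of that reward and penalty idea [the scores, the penalty weight and the candidate texts are hypothetical stand-ins, not any lab’s actual alignment method], candidate outputs could be ranked by a reward signal minus a weighted misalignment penalty:

```python
# Sketch: prefer the candidate output with the highest net score, where a
# misalignment penalty counts against a helpfulness reward.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    helpfulness: float   # hypothetical reward-model score in [0, 1]
    misalignment: float  # hypothetical misalignment score in [0, 1]

PENALTY_WEIGHT = 2.0  # assumption: a penalty weighs heavier than a reward

def net_score(c: Candidate) -> float:
    """Reward minus weighted penalty, the 'consequences' of an output."""
    return c.helpfulness - PENALTY_WEIGHT * c.misalignment

candidates = [
    Candidate("sycophantic but wrong answer", helpfulness=0.9, misalignment=0.6),
    Candidate("honest, partially helpful answer", helpfulness=0.7, misalignment=0.0),
]

best = max(candidates, key=net_score)
print(best.text)  # the penalty term flips the preference to the honest answer
```

The design point is that a sufficiently heavy penalty term changes which output wins, the way consequences are meant to change behavior in humans.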
The mechanisms of the human mind can track the trajectory of AI better than generalized predictions of capability can. Neuroscience research, for decades, failed at its central mission: to theorize the human mind within the cranium. That failure contributed to the confusion about how seriously to take AI. However, a primer concept of electrical and chemical signals, in sets, as the mind, offers speed, and hope, in the journey of humanity’s new normal beside another major intelligence.