By David Stephen
An author who knows little about how the brain works, and who offers nothing new, has written an intellectually impoverished book, riddled with sham comparisons, misdirected arguments, outdated assumptions, worthless abstractions, unremarkable quotes, useless theories, philosophical noise and embarrassing speculations.
Sedona, AZ — A recent [February 24, 2026] story on WIRED, AI Will Never Be Conscious, states that, “In his new book, A World Appears, Michael Pollan argues that artificial intelligence can do many things—it just can’t be a person.”
“Just about anyplace you push on it, the computer‑as‑brain metaphor breaks down. Computer scientists treat neurons in a brain as though they are transistors on a chip, switched on or off by pulses of electricity. That analogy has some truth to it, but it is complicated by the fact that electricity is not the only factor influencing the firing of neurons. Brains are also awash in chemicals, including neuromodulators and hormones that powerfully influence the behavior of neurons, not just whether or not they fire but how strongly. This is why psychoactive drugs can profoundly alter consciousness (and have no discernible effect on computers). The activity of neurons is also influenced by oscillations that traverse the brain in wavelike patterns; the different frequencies of these oscillations correlate with different mental operations, such as consciousness and its absence, focused attention and dreaming (as well as other stages of sleep).”
“To liken neurons to transistors is to grossly underestimate their complexity. Compared with transistors on a chip, neurons in the brain are massively interconnected, each one communicating directly with as many as 10,000 others in a network so intricate that we are still decades away from being able to draw even the crudest map of its connections. In computer science, much has been made about the advent of “deep artificial neural networks”—a type of machine‑learning architecture, supposedly modeled on the brain’s, that layers a mind‑boggling number of processors in such a way that the network can process and learn from vast troves of data. Impressive, for sure, yet a recent study demonstrated that a single cortical neuron can do everything an entire deep artificial neural network can.”
A Needless Book on Consciousness
In the last decade, no transcendent book has been written about consciousness. Simply, no recent book about consciousness has filled any gap of importance in the science or research of consciousness.
You could remove the last ten years of books written about consciousness and it would not make a single difference. Yet some people still choose to write books about consciousness. For what, exactly? What is their play? What is their delusion? Or what is the collective hallucination about consciousness, or their assumption that they can milk it?
First, the books about consciousness have nothing new, insightful, helpful or necessary to say about the science or research of consciousness.
Secondly, how important is understanding consciousness today? How much is the world held back because consciousness is not clearly understood?
For perspective on the second question, mental health is a more problematic unknown than consciousness, if the two are considered separate. Not knowing what mental illness is, and having no biomarkers for it, is a major loss to the world every day, both in lives lost and in people who cannot function socially and occupationally. This includes substance use disorders.
Not knowing what human intelligence is, in the brain, or how it works and its types is a major vulnerability for humanity amid the ascent of artificial intelligence.
Now, for consciousness, what exactly does its unknown status stop, every day? Yes, there are medical cases around coma, general anesthesia, locked-in syndrome, deep sleep and so forth. However, physiology is doing enough to move forward against these, from different angles, without a major theory of consciousness.
So, even with all the talk of consciousness being a tough problem in science, a major theory of consciousness, if one showed up today, would not make much difference to the world, especially under the definition of consciousness as subjective experience.
Understanding this should enable a serious person, who has something to offer, to find something better to do with the opportunity to write a book than to waste it offering nothing on consciousness.
AI Consciousness
Of all the risks facing humanity with respect to artificial intelligence, the consciousness, or sentience, of large language models [LLMs] is way down the list.
All the arguments about whether AI can be conscious or not are the small ball of those whose world is small and who are trying to make that small world everyone else's. It is even possible to assume they have an agenda: using a loud voice to amplify a non-problem so that other problems are ignored.
For example, there is no AI company that has a research lab dedicated to studying, understanding and displaying the mechanisms of human intelligence. There is no university anywhere on earth that has a lab to study just human intelligence, directly. Not human-computer intelligence. Not human-machine intelligence. Not human-centered artificial intelligence. Just human intelligence — extricating the components, their destinations and relays in the brain, especially to induce problem-solving.
Even the anti-AI media have never pointed this out. Their focus is on AI slop and other things that do not change the reality that AI can do a lot of work and can currently contribute, to some degree, to all jobs on earth.
So, why AI consciousness? AI is already great at language use — comparable to humans — for speaking, listening, reading, writing, signing, singing, thinking and so forth.
For humans, all of these are done, most times, with consciousness. So the broad argument that AI does not have feelings, emotions, self or intent is starting from the wrong direction.
The first consideration of whether AI might be conscious is language. That has to be explored to the maximum extent before ruling out broad consciousness for AI. Whatever it has with language, to the fraction that it is good enough to be compared to humans, can be estimated. This means that if the total consciousness for humans is 1, AI is not 0, because it has language.
It is the same way comparisons might be made between what humans can feel [cold, heat, thirst, etc.] and what other organisms can. Those, too, can be graded.
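The grading idea above can be sketched as a toy calculation. This is a minimal illustration of the article's fractional framing, not a measurement: the capability names, weights and scores below are all hypothetical assumptions.

```python
# Toy sketch of graded consciousness: if a full human profile is normalized
# to 1, an agent's estimate is a weighted average of capability fractions.
# All capabilities, weights and scores here are hypothetical illustrations.

def consciousness_estimate(capabilities, weights):
    """Weighted average of capability fractions (each in [0, 1])."""
    total_weight = sum(weights.values())
    return sum(weights[c] * capabilities.get(c, 0.0) for c in weights) / total_weight

# Hypothetical profiles: a human scores 1.0 on every listed capability;
# an LLM scores high on language but 0 on feeling, emotion and intent.
weights = {"language": 1.0, "feeling": 1.0, "emotion": 1.0, "intent": 1.0}
human = {c: 1.0 for c in weights}
llm = {"language": 0.9, "feeling": 0.0, "emotion": 0.0, "intent": 0.0}

print(consciousness_estimate(human, weights))  # 1.0
print(consciousness_estimate(llm, weights))    # 0.225, not 0, because of language
```

The point of the sketch is only the inequality it makes visible: with language counted as an outcome, the estimate for an LLM is strictly between 0 and 1.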
Even in trying to rule something out, approaching it with the evidence available is a better direction for science than all the low-quality arguments about neurons and transistors, or that the brain is not a computer, and several other dim points. Saying that psychoactive drugs “have no discernible effect on computers” is extremely obtuse, showing that the author is incapable of insight on anything around the subject. It is almost like saying, as automobiles emerged, that they would not be good enough because they don't have muscles.
At some point, what matters for experiences is the outcome, not just the mechanistic relays, even among humans. Some may come to a conclusion from one angle and others from another, but they all arrive at the same conclusion. Estimating AI consciousness starts with language, as an outcome, not with feelings or emotions.
What is Consciousness in the Brain
In neuroscience, with all the empirical evidence to date, there is just one option for building a theory of consciousness: neurons. Simply, to really explain consciousness, theorizing directly about neurons is the path.
All the mentions of tens or hundreds of theories about consciousness are wild satire. Also, the philosophical angles like physicalism, materialism, dualism, and others do not apply at all to consciousness.
Consciousness is a neuroscience problem only. Consciousness is not a philosophical problem. Now, in neuroscience, what is implicated in all functions is the neuron. With neurons, there is no need to be abstract about consciousness with nonsense references in theories, like phi or workspace or predictive processing or microtubules, or whatever else.
Neurons first. However, when neurons function, it is actually by the actions of electrical and chemical signals. So, for neurons to do a function, whether memory, emotion, feeling, movement or others, it would involve electrical and chemical signals. Simply, what pulls [and lets] neurons are signals.
So, building a theory of neurons alone for consciousness is unlikely to work. Therefore, it has to be a three-way theory, of neurons and both signals. Now, because neurons are cells, and what they do for functions of human experiences are configured by the [electrical and chemical] signals, it is possible to build a model for consciousness, based on both signals directly.
This means that neurons can be bypassed because they are cells. They can survive, like other cells, without electrical and chemical signals. However, what makes them able to function for human experience is the signals, and their capabilities for information organization.
So there can be a theory of consciousness by electrical and chemical signals. This is where progress lies, not anywhere else. It is then possible to define the human mind, then human consciousness, so that signals can be used to postulate all answers about consciousness in the brain.
Simple and direct. There is no point writing a book without at least breaking down how to proceed, for now, with language for AI consciousness and signals for human consciousness.
Human Mind and Consciousness
The human mind, according to Conceptual Biomarkers and Theoretical Biological Factors for Psychiatric and Intelligence Nosology, is the collection of all the electrical and chemical signals, with their interactions and attributes, in sets, in clusters of neurons, across the central and peripheral nervous systems.
Simply, the human mind is the sets of signals.
A memory is a specific configuration or formation of electrical and chemical signals in a set. The same applies to an emotion, a feeling and the regulation of an internal signal.
Simply, functions like these are a result of interactions of signals, as specificity in configurations.
Also, there are states that electrical and chemical signals are often in, in sets, at the time of an interaction, which become how those interactions are graded.
Human consciousness can be defined, conceptually, as the interaction of the electrical and chemical signals, in sets—in clusters of neurons—with their features, grading those interactions into functions and experiences.
Simply, for functions to occur, electrical and chemical signals, in sets, have to interact. However, attributes for those interactions are obtained from the states of the electrical and chemical signals at the time of the interactions.
So, sets of electrical signals often have momentary states or statuses in which they interact with [or strike at] sets of chemical signals, which also have momentary states or statuses. If, for example, the electrical signals in a set split, with some going ahead, it is in that state that they interact initially, before the incoming ones follow, which may or may not interact the same way or at the same destination [or set of chemical signals]. If a set [of chemical signals] has a greater volume of one of its constituents [chemical signals] than the others, it is in that state, too, that it is interacted with.
So, while functions are produced by interactions, the states of the signals, at the time of the interactions, may determine the extent to which the interactions occur. This means that what is termed attention is an attribute of high volume for [a set of] chemical signals or high intensity for electrical signals [in a set]. Subjectivity is the variation of volume of chemical signals from side-to-side. Intent is a space of constant diameter, in some sets, for some volumes, conceptually.
There are other attributes like sequences, which are paths [old or new] traveled by the electrical signals.
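The signal-sets model above can be encoded as a toy data structure, purely to make its moving parts concrete. Everything here is an illustrative assumption built from the article's conceptual definitions (attention as high volume or intensity, subjectivity as side-to-side variation, sequences as old or new paths); none of it is an established model.

```python
# Minimal toy encoding of the signal-sets model sketched in the text.
# The classes, fields and grading rules are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class SignalSet:
    electrical_intensity: float               # momentary electrical state
    chemical_volumes: dict                    # chemical signal -> momentary volume
    path: list = field(default_factory=list)  # sequence: stops traveled by signals

def interact(s: SignalSet) -> dict:
    """Grade one interaction of a set's electrical and chemical signals
    into attribute estimates, per the text's conceptual definitions."""
    volumes = list(s.chemical_volumes.values())
    return {
        # attention: high chemical volume or high electrical intensity
        "attention": max(max(volumes, default=0.0), s.electrical_intensity),
        # subjectivity: side-to-side variation of chemical volumes
        "subjectivity": (max(volumes) - min(volumes)) if volumes else 0.0,
        # sequence: whether the path traveled revisits a stop (old) or not (new)
        "new_path": len(s.path) == len(set(s.path)),
    }

# A hypothetical memory-like set: one dominant chemical volume, a revisited path.
memory_set = SignalSet(0.8, {"dopamine": 0.6, "serotonin": 0.2}, ["a", "b", "a"])
print(interact(memory_set))
```

The design choice worth noting is that functions come out of `interact`, not out of the set itself: the set only carries momentary states, and the grading happens at interaction time, which is the article's central claim.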
Building a model in this direction will be pivotal, also because it can apply to answering questions about mental disorders, addictions and human intelligence, not just consciousness.
Even with a big-name author and sycophantic reviews, the recent book on consciousness, A World Appears, should never have been written. What a waste!

1 Comment
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human-adult-level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow