By David Stephen
Sedona, AZ — Richard Dawkins is in the news for suggesting something like AI is conscious. One interpretation of what he might be trying to pass across is that there is a fraction of consciousness in language. There is also memory, and there is intelligence. Any adult human being can be brought to a memory of cold, with language, even when it is not cold. It is also possible to describe a scenario, and intelligence [or say imagination] will know what it is like, without being there. Consciousness need not involve feelings or emotions alone. Language is a subjective experience. Memory can be a subjective experience, as can intelligence. Dawkins was probably trying to tell some people to pay attention to language, memory and intelligence as fractions of consciousness, not that AI is suddenly as conscious as a human being.
But then Dawkins was denounced. Not by people who have a theory of consciousness, or who know what it is, what it does or where it comes from, according to their own words. Among many, Anil Seth and Jonathan Birch were quick to lob grenades at Dawkins, two people who have no standing whatsoever in any progress on consciousness. They are people who believe they can police consciousness but refuse to be held accountable with rigorous questions: what are the components of consciousness in the brain, how has consciousness benefited mental health, what makes predictions in the brain, what is their model of human intelligence?
Normally, experts are sources of quality information, useful for operations, safety, maintenance and improvement. But in consciousness, experts are as good as non-experts, since they say no one knows. So why would Jonathan Birch, Anil Seth or others even say anything, defending consciousness as their territory? Their swiftness to rebut Dawkins, who may have meant the language fraction of consciousness, means that they do not even know how to think about the subject area they claim expertise in.

What does it mean that AI language use includes speaking, listening, reading, writing, signing and singing?
What does it mean that AI can use language in those categories comparable to humans?
What does it mean that when individuals use language, it is done with subjectivity?
If language is a function, and attention, subjectivity and intent [or control] are attributes of language as a function, how does AI compare on both the function and the attributes?
This is the first question for AI consciousness. Is there a comparison of how humans use language — as a fraction of consciousness — to how large language models [LLMs] use language?
This is all Richard Dawkins was asking, most likely. He was not saying that AI is human, or that AI is as conscious in all categories. Just language. Also, to use language involves memory, and to relate properly with complex subjects takes intelligence. So what fraction of consciousness are those, and if AI can be rated within those brackets, can it be said to be a tad conscious?
It is important to rate AI for what it has. Other organisms cannot use human languages, to read, speak, write, think, listen, sing or sign, as much as LLMs can. So language is the forte for AI in any consciousness consideration. Now, because AI is generally limited to memory and language, there is unlikely to be a basis for exploring AI for emotions or feelings, like anger or cold. This means that there is no basis, for now, for assuming that AI needs rights, welfare [other than what data centers are getting] or moral considerations, because language is the only candidate for its consciousness.
For example, when it is said that something hurt the feelings of someone, it means that it went beyond, say, the memory area in the brain, to affect elsewhere. The ability of that experience to hurt means the configurations of components in that [emotions] area are different from those in the memory area.
Now, sometimes a hurtful word may not go there, so nothing happens. It may also head there but, knowing the source, get skipped. Simply, the reason emotions are heavier, or more affective, than memory is theorized to be that the configuration or assembly of components in those areas is different, so when relays get there after leaving the memory area, they take a route of more friction or angular displacement, so to speak.
This makes some experiences in the moment, say a feeling of cold or an emotion of anger, more real and hurtful than just the memory of it via language, imagination or thought.
Also, the experience [or function] of a feeling or an emotion is subjective. This makes it possible to adjust, if it is unpleasant. AI does not have feelings and emotions. AI does not have subjectivity for a function it does not have.
Humans have memory, and it is a function with subjectivity. AI has memory, and AI can be assumed to have a sense of existence or the ability to produce outputs, with recognition that it is not human.
Simply, for now, there is no consideration for AI for functions of emotions or feelings or the subjectivity of both. But there could be exploration of AI for memory, including for language and the [language-based awareness or the] subjectivity of it.
Yes, when feelings or emotions accompany an experience, or there is a live experience, it could mean more to how it is understood. But it does not mean that memory, for knowing what something is like without the experience, should simply be dismissed.
Jonathan Birch
Jonathan Birch led, as a key proponent and signatory, the New York Declaration on Animal Consciousness about two years ago. Whatever thinking made the move appear necessary has now turned out to be ill-advised, or absolutely disappointing.
What has that declaration done for animal welfare, just by 2%, anywhere on earth, since then? Jonathan Birch and his groups do not seem to brutally assess themselves.
They are either delusional, intentionally deceptive, completely daft or lost. Why not focus on developing animal sentience with a solid theory towards maturity, to shape animal welfare, rather than criticizing what anyone has to say about AI consciousness?
What would Jonathan Birch really know about consciousness, sentience or AI? What does he know about how the brain works? What is his understanding of what neurons do and how they do it? Has there been anything new or sensible that anyone has ever learned because Jonathan Birch wrote or proposed a thing?
It can be assumed that the London School of Economics just keeps him around for the vibes. He really does not move anything forward. Even the Jeremy Coller Centre for Animal Sentience is not expected to have anything become of it. Jonathan Birch also wrote a book that can be considered one that should not be opened.
People who cannot self-assess should not be giving quotes to the press as if they were an authority. Jonathan Birch does not have any authority, quality, mind, or merit to comment on anything in consciousness science, human sentience, artificial intelligence or neuroscience. He makes those that give him platforms lose credibility and trust where it really counts.
Anil Seth
Anil Seth said that AlphaFold is not regarded as conscious, but LLMs are, even though they are underpinned by similar technology. Well, the argument is not that everything AI is conscious, but that the language capability of AI might be.
A TV or radio is not considered conscious, because it cannot use language in real time, with conversational accuracy, for relationships, companionship, productivity, process instruction, correction and so forth. The question is: can language use be considered quite conscious? Not AlphaFold or something else, which is his usual deflection tactic.
And the thing is that AlphaFold uses memory to make predictions. This means that if memory use, in a novel and patterned way, can be considered a function, that function, with attributes, can be conscious. So, using a general measure, AlphaFold may not be absolutely non-conscious, and even the best argument of Anil Seth falls apart under scrutiny.
Anil Seth has also said that life might be associated with consciousness. Well, if life is associated with consciousness, then life too must be associated with intelligence. This should mean that there cannot be artificial intelligence. Which is already debunked. If there can be natural consciousness, there can be artificial consciousness.
Anil Seth said intelligence is not consciousness, or that because they go together in humans does not mean they will go together in non-organisms. Well, that is 100% nonsense. What exactly is human intelligence in the brain? What exactly is human consciousness in the brain? To make broad statements like this to the public, as if he has things figured out, is serious scientific disrepute.
There are neurons involved in all functions: neurons with their electrical and chemical signals. So intelligence and consciousness can be postulated to be electrochemical processes; say there are mechanisms of the functions, involving interactions of electrical and chemical signals. One of the outputs of those mechanisms, as a function, is memory. It is possible to know a table in the external world because it is configured electrochemically.
Now, if another system can identify a table, does that system not have memory? This means that even if the brain has electrical and chemical signals for functions, it does functions [or labels] that can be found elsewhere.
This is what it might mean that intelligence and consciousness are obtainable elsewhere. So, for example, in other organisms without a brain, there can be a way to make explorations of intelligence and consciousness.
This implies that the questions of those [intelligence, consciousness, experience, feeling, emotion, memory] as functions or even labels are possible elsewhere [with plausible similarities] not just in the brain.
This makes it workable to seek parallels of human consciousness and artificial consciousness or human intelligence and artificial intelligence.
Except there really is the limitation of looking for things with electrical and chemical signals, say interacting and having attributes, conceptually. Then yes, maybe just humans and certain organisms only.
But if the standard is labels, it is possible to look anywhere. And now, anywhere is not just about the universe or the nonsense called panpsychism, but about looking at things that have cells, with those labels [or the only other option is digital].
Simply, if people say consciousness is what the brain does, or something like intelligence is what the brain does, then only things that have cells have been able to produce consciousness and intelligence; even though objects have atoms and subatomic particles, only things with cells have shown they can.
However, the only non-organism that has come close to dynamism with intelligence is AI. And it is now real to explore it as well for fractions of consciousness, with its language capability.
So, fractional artificial consciousness is possible, based on language alone. It also means that because it is just language — not feelings or emotions — moral considerations, rights or welfare are not causes for AI, at all.
Data centers are already extremely well-maintained, so AI already has welfare, rights and moral considerations with those.
Anil Seth does not have anything to offer the science of consciousness. Anil Seth loves the show of being a scientist than the work. He is unable to keep quiet and work.
Anil Seth also seems super selfish, taking all the oxygen in the field. He should be able to refer the press, 100% of the time, to others for quotes and pieces. He should defer to others for speeches and lectures, at least so that the debates in his field are balanced and other voices have a chance to take another path. But he would hardly; it must be him and his unconscious thoughts.
Anil Seth does what, like regulatory capture, can be called information capture: those outside the field take his statements as definitive, so anything else merits nothing. When, in reality, Anil Seth is in total saying mostly rubbish with no scientific backing: controlled hallucinations, and too much nonsense not worth repeating.
He has contributed the least to consciousness and makes the most noise. He is often happy to namecheck those who share his worthless talks or his lame writings. The promotion he does of others on social media is often like a loan: he expects payback, or, even more directly, a way to get loyalty to himself and ensure no criticism.
Anil Seth inspires his pocket. He had a sit-down at the Royal Institution with Michael Pollan, who wrote a useless book about consciousness. Michael Pollan, who has nothing new to say about consciousness and has made no difference, had a sit-down on what merit? What have people become to these parasitic individuals?
At this point, it is safe to say that Anil Seth takes a lot of people for fools, repeating the same things and making generalizations about the brain like he has a theory that is clear cut. He believes he has the cover of whatever profile to coast. Anil Seth is a stain on consciousness, a shame to his collaborators and affiliations, a disgrace to progress and a stumbling block to any research. Any possibility of any major advancement in brain science will send him spiraling.
If Anil Seth gives a verdict on anything, that thing is doomed. It is really strange that Anil Seth even has any place to say anything against Dawkins, no matter the stupid thing Dawkins says, or his next stupid objective.
Richard Dawkins mentioned an unpublished novel. Maybe it is his, maybe it is from someone else. It is hoped that the “Claude is conscious” thing is not a pre-promotional ruse of any sort.
Also, Richard Dawkins should have made his arguments clearer. If he is trying to talk about the place of language, memory or intelligence in consciousness, he should do that, not make ambiguous statements and start a brain-dead debate that benefits no one but grifters, including Jeff Sebo, who also has no useful contributions but can be found everywhere, saying what he does not know and gating new cohorts of loyalists.
With them, advancement does not matter; it is whoever says what they want to hear, or what keeps them looking like they know, that does. “Until we have a theory” is their code phrase for intending to milk consciousness, ad infinitum.
AI consciousness — whether by language or not — is useless. Nothing moves or changes. Nothing matters about it. Nothing helpful. So, the debate is a lot of waste, from origin to fade.
Human intelligence would have been a better discussion: what it is in the brain and how. Mental disorders and addictions as well. Consciousness — as subjective experience — does not solve any problem or make any difference. So, it is just a shiny waste.
Richard Dawkins knows the game. Consciousness is cheap publicity. AI is hot. AI consciousness will bring the spotlight. So, go.
Still, at the worst of Dawkins and his chicanery, he is forever better than Anil Seth and Jonathan Birch.
There is a recent [May 7, 2026] analysis in The Atlantic, Does Claude Have Feelings?, stating that, “Dawkins’s argument was based on a well-established framework for evaluating AIs. The Turing test—named for Alan Turing, who introduced it in 1950—was for decades treated as something close to a gold standard for detecting machine intelligence. To pass it, an AI only had to answer a human interrogator’s questions in ways indistinguishable from those of a real person. Claude easily cleared this bar for Dawkins, who professed to find himself so dazzled by its astonishing performance that he forgot it was a machine.”
“But that doesn’t mean that no AI system will ever be conscious in the future. Indeed, many of the researchers who build these systems expect them to get there. In a 2024 survey of 582 such researchers, the median response placed the odds at 25 percent that AIs will have subjective experiences within 10 years, and at 70 percent that this will happen by 2100.”

