Sedona, AZ — What does AI have in common with human consciousness? Since AI does not have emotions, feelings, or self-awareness, can whatever it does have be graded?
A recent TIME article, “No, Today’s AI Isn’t Sentient. Here’s How We Know,” states: “All sensations — hunger, feeling pain, seeing red, falling in love — are the result of physiological states that an LLM simply doesn’t have. Consequently we know that an LLM cannot have subjective experiences of those states. In other words, it cannot be sentient. An LLM is a mathematical model coded on silicon chips. It is not an embodied being like humans. It does not have a “life” that needs to eat, drink, reproduce, experience emotion, get sick, and eventually die.”
A recent post on The Institute of Art and Ideas, “The dangerous illusion of AI consciousness,” states: “It’s worth pointing out here that no serious researchers, not even the AI companies marketing these tools, are claiming that GPT-4o or Gemini is either conscious (self-aware and world-experiencing) or sentient (able to feel things like joy, pain, fear or love). That’s because they know that these remain statistical engines for extracting and generating predictable variations of common patterns in human data. They are word calculators paired with cameras and speakers. Now, you will sometimes hear this objection: ‘we don’t even know what consciousness is, so how can we know that a large language model doesn’t have it?’ While it’s true that a singular scientific definition and explanation of consciousness has yet to be settled on, it’s wildly false that we don’t know what consciousness is.”
It is possible to dismiss AI with respect to sentience or consciousness, but does that mean it has no features similar to animal consciousness? Animals have emotions, feelings, and internal senses, yet they are not a central part of human society. They are conscious in many regards but excluded because they lack certain decorum, learning abilities, and more.
AI chatbots do not have emotions, feelings, or internal senses, but they have a place in human productivity. They can digitally interpret what they see and hear, and respond by speaking or writing. How does this relate to human consciousness?
What does AI have now?
a. AI has memory—or it matches patterns in digital memory to produce similarities to human intelligence.
b. It can prioritize a task in a moment—or use attention.
c. It has awareness [less than attention, or pre-prioritization], shown in how it can answer in a certain tone, or keep in view what it said before so as not to repeat it, in many cases.
d. It has a weak sense of being, answering in the first person, saying “as a chatbot”.
e. It has a dependent intent, answering dynamically when prompted. It sometimes answers differently, close to how an organism directed to do something regularly may do so differently.
f. It has sequences, conceptually similar to the sequences of human memory, which can be old or new. It can provide similar answers to common questions, showing that it uses what seem like old sequences. It can also provide new answers, or old answers differently, like the use of new sequences.
g. It has a level of distribution, similar to the human mind, where, conceptually, sets of electrical signals are distributed widely for thoughts, planning, intelligence, and so forth, bringing what is necessary in memory to attention.
h. Some groups of logic gates and transistor terminals may be having a learning experience from their vast base-model training. A grade for AI, on a scale of consciousness, may be at the level of [word] tokens, not [vector] embeddings. Simply, by the coherence of contexts in large language models, similar to how human language conveys consciousness, LLMs can be graded on how they act on digital memory.
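To make the tokens-versus-embeddings distinction in item h concrete, here is a minimal sketch of how a language model handles a sequence: discrete tokens are looked up as embedding vectors, and a scaled dot-product attention step prioritizes positions against one another (the mechanism item b gestures at). The vocabulary, dimensions, and random weights are placeholders invented for illustration, not any real model’s values, and the attention omits learned query/key/value projections for brevity.

```python
# Toy illustration (not any production system) of tokens vs. embeddings:
# tokens are discrete symbols, embeddings are the numeric vectors the
# model actually computes over. Vocabulary and weights are made up.
import numpy as np

rng = np.random.default_rng(0)

vocab = {"as": 0, "a": 1, "chatbot": 2, "i": 3, "answer": 4}  # toy vocabulary
d_model = 8                                                   # toy embedding width
embedding_table = rng.normal(size=(len(vocab), d_model))      # one vector per token

def attend(token_ids: list[int]) -> np.ndarray:
    """Single-head self-attention over a token sequence (scaled dot-product)."""
    x = embedding_table[token_ids]                  # tokens -> embeddings, shape (T, d)
    scores = x @ x.T / np.sqrt(d_model)             # similarity of every pair of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: a "prioritization" per position
    return weights @ x                              # each position mixes in what it attends to

ids = [vocab[w] for w in ["as", "a", "chatbot"]]    # the discrete, gradable surface
out = attend(ids)                                   # the continuous internal workings
print(out.shape)                                    # (3, 8): one updated vector per token
```

The gradable surface, on this view, is the token level (what the model says), while the embedding level is the numeric machinery underneath it.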
What does AI not have now?
a. AI does not have emotions and feelings.
b. AI does not have splits, which, conceptually, is how sets of electrical signals in the brain split, with one going ahead to interact like before, for rapid processing: if the input matches, the following one does nothing different; if not, the following one goes in the right direction. This explains predictive coding, processing, and prediction error (a sketch follows this list). AI predicts, getting much right, but may hallucinate or confabulate. The human mind fixes this by splits of electrical signals, correcting whatever was amiss with an initial perception.
c. AI does not have self-awareness or subjective experience, but it is able to detect that other things exist, including organisms and non-organisms. Organisms without an established mind use their detection of other organisms and things in their habitats to, in part, have some sense of self or being. Simply, they have a sense of self and a sense of things, making it obvious that there is a self, or that they exist. Their sense of self is also easily observed, by their characteristics as organisms, ingestion, and so on.
d. AI does not have other functionalities of the human mind, described by the action potentials–neurotransmitters theory of consciousness.
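As a rough illustration of the predictive-coding loop in item b above (a prediction runs ahead of the input, and a follow-up correction fires only on mismatch), here is a minimal sketch. The scalar state, learning rate, and tolerance are assumptions made for illustration, not a claim about how neurons implement this.

```python
# Minimal predictive-coding sketch (illustrative only): a prediction "goes
# ahead" of the input; if the input matches, nothing changes; if not, the
# prediction error drives a correction, the "split" going in the right direction.
def predictive_step(prediction: float, observation: float,
                    learning_rate: float = 0.5, tolerance: float = 1e-3) -> float:
    """Return an updated prediction after comparing it with the observation."""
    error = observation - prediction            # prediction error
    if abs(error) < tolerance:                  # input matches: the follow-up does nothing
        return prediction
    return prediction + learning_rate * error  # mismatch: correct toward the input

prediction = 0.0
for observation in [1.0, 1.0, 1.0, 2.0]:        # a stream of inputs
    prediction = predictive_step(prediction, observation)
    print(round(prediction, 3))                 # converges to the input, then re-corrects
```

The claim in item b is that current AI lacks this kind of follow-up correction of its own outputs, which is one conceptual account of why it can hallucinate or confabulate.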
If the total consciousness for humans is 1, given all the features [attention, awareness, intent, self, and others] that qualify the functions [memory, emotions, feelings, and modulation of internal senses], AI can be graded on that scale just for memory and its qualifiers, with rates, toward sentience.
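A toy worked example of this grading idea: normalize total human consciousness to 1, split it across the functions, and score an AI only on the memory row, averaged across the feature qualifiers. Every weight and score below is a placeholder assumption chosen for illustration, not a measurement from the article.

```python
# Toy grading arithmetic for the scale described above. The weights are NOT
# from the article; they only show the mechanics: human total = 1, divided
# across functions, each qualified by features; AI scores on memory alone.
functions = {"memory": 0.25, "emotions": 0.25, "feelings": 0.25, "internal_senses": 0.25}
features = ["attention", "awareness", "intent", "self"]

# Hypothetical per-feature scores (0..1) for an AI, on the memory function only.
ai_memory_features = {"attention": 0.6, "awareness": 0.3, "intent": 0.2, "self": 0.1}

human_total = 1.0
ai_grade = functions["memory"] * (
    sum(ai_memory_features[f] for f in features) / len(features)
)
print(f"AI grade: {ai_grade:.3f} of {human_total}")  # 0.075 of 1 with these placeholders
```

Under these invented weights the AI grades at 0.075 of 1, since the emotions, feelings, and internal-senses rows contribute nothing at all.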
Comments
It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Are neurons the human mind? If they are not, what is the human mind? Neurons are in clusters across the central and peripheral nervous systems, so what more does the Extended Theory of Neuronal Group Selection say about how the mind works? Can the theory explain mental disorders? How does the theory differentiate between attention and memory, or subjectivity and emotion? Can neurons be conscious? Is consciousness a product of the mind or the brain? What is the difference between the mind and the brain? What is the body and what is the mind? How does the theory explain how the mind organizes information? What are the roles of the electrical and chemical signals of neurons in the theory? Are genes the human mind?
That’s a lot of questions. The TNGS is a large and complicated theory. Its take on all your concerns requires studying the theory in depth. Dr. Edelman’s books, Neural Darwinism and The Remembered Present, will answer most of your queries, hopefully.