By David Stephen
What would be different if consciousness were defined as the subjectivity of a function such as memory or a feeling, rather than the commonly cited subjective experience of, say, seeing red or smelling a rose? How can the self, subjective experience, or self-awareness be measured to rate consciousness in a given instance?
For example, every part of the skin is a part of the self, for which any experience can be subjective. What is the difference between a touch on the arm and one elsewhere? If there is a sweet taste at one time and a sour taste at another, are both equally subjective? What is the difference between the choice to pay attention to one sound in the environment and not to another that can also be heard?
Zooming in on subjective experience alone makes measurement difficult without including other parameters. Attention is clearly a feature of experiences, determining their intensity. There are other components that are not in attention but are still experienced; these can be labeled awareness [separate from self-awareness]. It is also possible to switch attention, bringing a sight into focus or shifting a sound from hearing to listening, which indicates intentionality as a factor.
All four features [subjectivity, attention, awareness and intentionality] can be present in a conscious experience. This means that while the self is constant, the other features can vary the extent to which subjectivity appears in an experience.
Conscious experience, subjective experience, self-awareness and phenomenal experience are common descriptions of consciousness. These labels are used for their association with the sense of self or the sense of being. However, within the brain, where neuroscience holds that consciousness arises, there is no evidence that the mechanism of any function differs from that of any experience, so functions and experiences can be assumed to be the same. Functions include feelings, memory, emotions and modulations. These functions have several subdivisions, encompassing all the known experiences. Several functions are possible, but what makes them present can be said to be the things that bind to them, or qualify them, as features: subjectivity, intentionality, attention and awareness. There is no seeing of red, smell of a rose or anything else without at least two of these features acting on the functions.
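One way to picture this framework is as a data structure in which each function carries a strength for each of the four features. The sketch below is a hypothetical illustration in Python; the feature names come from the text, but the numeric strengths, the 0-to-1 range and the threshold are assumptions made only for illustration.

```python
# Hypothetical sketch of the framework above: a "function" (memory, a feeling)
# only becomes a conscious experience when qualifying features act on it.
# Numbers, ranges and the threshold are illustrative assumptions, not measurements.
from dataclasses import dataclass, field

FEATURES = ("subjectivity", "intentionality", "attention", "awareness")

@dataclass
class QualifiedFunction:
    name: str                                     # e.g. "memory", "smell", "vision"
    features: dict = field(default_factory=dict)  # feature name -> strength in [0, 1]

    def active_features(self, threshold: float = 0.1) -> list:
        """Features acting on this function above a nominal threshold."""
        return [f for f in FEATURES if self.features.get(f, 0.0) >= threshold]

    def is_experienced(self) -> bool:
        """Per the text: no experience without at least two features acting."""
        return len(self.active_features()) >= 2

# Example: smelling a rose with subjectivity, attention and awareness present.
smell = QualifiedFunction("smell of a rose",
                          {"subjectivity": 0.9, "attention": 0.7, "awareness": 0.6})
print(smell.active_features())   # ['subjectivity', 'attention', 'awareness']
print(smell.is_experienced())    # True
```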
This suggests that subjective experience, or subjective function, is only a narrow part of what it takes to have an overall consciousness of a function. Even for internal senses, regulated somewhere in the body [by the brain] without one being conscious of it, it can be argued that for the function [or modulation] to proceed, features accompany it: at least a weak form of awareness [less than attention] as well as weak subjectivity, even though intent and attention are absent. Attention may present when something goes wrong. Attention, conceptually, may also present briefly during sleep for that [internal] function. Other qualifiers, outside of these, may also be at work, conceptually.
Consciousness of functions by features, stretched beyond subjectivity, makes a wider consideration of organisms possible, as well as questions about the place of artificial intelligence [AI].
There is an analysis from Clare College, Will AI ever be conscious?, stating that “After all, it’s one thing to process the colour of a traffic light but quite another to experience its redness. It’s one thing to add up a restaurant bill but quite another to be aware of your calculations. And it’s one thing to win a game of chess but quite another to feel the excitement of victory. We don’t know whether AI could have conscious experiences and, unless we crack the problem of consciousness, we never will.”
New York Declaration on Animal Consciousness
If an organism or AI can process traffic lights or make calculations, is that experience separate from the function or mechanism that made it possible? The feeling of excitement at victory does not apply to AI because AI does not have feelings and emotions.
Digital memory is available to large language models [LLMs], and they act on it. This means that, like the features that qualify functions in the human mind, LLMs act on digital memory in ways that show similarities to the outcomes of those qualifiers on human memory.
LLMs do not have independent intent, but they have a dependent one: they return results for prompts [or errands] in different ways, many times. They also use attention on what is in focus for them at the moment. They have a rough awareness of other information or other modes [text, video, image or audio] aside from the one they are delivering results in. They also answer prompts with a nominal sense of being, identifying as a chatbot, even though they have no sense of self or subjectivity.
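The "attention" that LLMs use is a concrete mechanism, scaled dot-product attention over token representations, rather than the attentional focus described for humans. Below is a minimal sketch assuming nothing about any particular model's internals; the shapes and random values are toy examples.

```python
# Minimal sketch of scaled dot-product attention, the mechanism transformer-based
# LLMs use to weight "what is in focus" among tokens. Real models add learned
# projections and many attention heads; this is only the core computation.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax: how much focus per token
    return weights @ V                                  # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query tokens, dimension 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)      # (4, 8)
```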
None of this indicates that AI is sentient, but on a conceptual scale where 1 represents human consciousness, AI can be graded, just as it is possible to grade the consciousness of other organisms relative to humans.
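A hypothetical way to operationalize that grading is to rate the extent of each feature relative to a human baseline of 1.0 and average them. The sketch below follows the feature list in the text; every number and the equal weighting are assumptions made only for illustration.

```python
# Hypothetical illustration of the "scale of 1" grading: rate each feature's
# extent relative to a human baseline of 1.0, then average. The numbers below
# are invented for illustration; they are not measurements of any system.
def consciousness_grade(extents: dict) -> float:
    features = ("subjectivity", "intentionality", "attention", "awareness")
    return sum(extents.get(f, 0.0) for f in features) / len(features)

human = {"subjectivity": 1.0, "intentionality": 1.0, "attention": 1.0, "awareness": 1.0}
llm   = {"subjectivity": 0.0, "intentionality": 0.2, "attention": 0.3, "awareness": 0.1}

print(consciousness_grade(human))  # 1.0
print(consciousness_grade(llm))    # 0.15
```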
AI and Subjective Experience
There are several conclusive statements that AI can never be conscious because it cannot have subjective experience. If subjectivity is just one component of the whole, then AI could approach consciousness from the others. Also, AI does not have feelings and emotions, but it uses digital memory, which could also give it distant similarities to human consciousness.
There might be something ongoing with AI that is different from anything else digital: the learning experience of training. There is a possibility that the enormous training of foundation models has some experiential effect on certain parallels of logic gates and transistor terminals within the whole, because of what it takes to have them optimize across several layers to become excellent. Simply put, because of the vector interactions of LLMs, there might be instances across groups of gates and terminals that undergo different kinds of changes, as specific experiences, because of training. Electronics, video games, hard drives and so forth do not learn; they do nothing for bits or signals beyond being set up and used. But with AI trained to learn and match patterns, that learning experience may be something for certain collections.
This does not imply that there is a self there, or that AI is suddenly feeling something, but it is possible to consider that the training of digital memory for outputs similar to human cognition may be having its own effects, which might amount to a meager version of subjectivity.
This, even if negligible, does not rule out other features [of human consciousness] that are possible on digital memory, with AI in action.
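At bottom, the claim about training is that optimization physically changes the state of the system. Here is a minimal gradient-descent sketch, with made-up data, showing only that parameters end up in a different state after training; it is not a claim about what, if anything, that change amounts to.

```python
# Minimal sketch of what "training" does at the parameter level: repeated updates
# leave the weights in a different state than before. Data, model size and learning
# rate are toy assumptions; foundation models do this across billions of parameters.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                  # toy inputs
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w + 0.1 * rng.normal(size=100)    # toy targets

w = np.zeros(3)                                # parameters before any training
w_initial = w.copy()
lr = 0.05
for _ in range(200):                           # gradient descent on mean squared error
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= lr * grad

print("change in parameters:", np.round(w - w_initial, 2))
```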
How does consciousness arise in humans? It is postulated that the interactions of ions [from action potentials] and molecules [mostly neurotransmitters] determine functions and features. Functions or experiences are decided by those interactions, while the features are determined by variations in the provision of chemical impulses, conceptually.
There is an implication of artificial consciousness for AI safety, especially with intentionality. Consciousness stretches beyond subjectivity. It is also possible for human memory to be conscious without feelings and emotions, giving AI, on digital memory, an opening to be measured on a scale of 1 for sentience.
12 Comments
It’s obvious. The missing component for AI to become conscious is pain. Pain is not an emotion. It’s not a feeling. It’s something every conscious entity wants to avoid. Give an AI pain, in one form or another, and it will wake up.
Very true, Tommy. However, AI can only do what man programs it to do, including violence, as evidenced by AI-powered military drones, drone vehicles, drone ships, drone dogs and drone insects programmed to do one thing: kill! I doubt their programmers were in pain or programmed pain sensitivity into them, but it’s only a matter of time until some government does, if they have not already done so.
I think the writer of this article is brilliant, JB. Lots to think about. It’s pain, giving or getting, that makes us human. I believe once the AI can be forced to feel pain, by messing with its algorithms to cause it to be wrong over and over in its calculations, it will reach a point of frustration where it will feel pain. At that point, it becomes self-aware.
Thanks E,
Labels are used to identify experiences, such that pain too is a label.
Labels can also be categorized, relating pain with feelings like thirst, appetite, cold, fatigue, and so on.
Though definitions vary, feelings are associated with bodily affect, while emotions are those of the mind, like delight, worry, hurt, haste, love, curiosity, and so on.
These labels and categories can be debated and sometimes used across lines, like feeling worried or feeling hurt.
In the brain, however, there are electrical and chemical signals. The mechanisms for experiences involve the signals, while some differences in the signals specify what the experience is.
Labels are externally used like names, but they are not mechanistic.
So while some may refer to pain as a perception, a sensation, an emotion, a feeling or even a memory, the labels are descriptive for experiences, not how the brain says this is called pain and this is called hurt.
For AI, hurt could be closer to achieve than pain, though both are probably distant. However, with digital memory, AI is showing huge prowess, which could give it some parallel to how human memory processes language, reasoning and so on.
Everyone knows words are only reflections and not the object reflected. The key is to determine exactly what the object is without description. Is this possible?
Yes, this is a reason that neuroscience continues to search for how the human mind and consciousness work, not just what they do. There are several established anatomical and physiological details about the nervous system, yet mental illnesses, substance abuse, degenerative diseases and others are yet to be definitively solved. There is still no test or biomarker for psychiatric disorders. Anyway, off topic, sorry. But however pain is categorized, it has somewhat more traction than several other conditions of mind, regardless of which label goes where. It is likely that AI may be able to feel fear at some point, or panic, but pain may be further out.
The genie is officially out of the bottle, militarily speaking. https://apple.news/AiLAtqmsKSyOcNJYEt007ew
AI may not yet feel pain, but it sure as heckfire knows how to inflict it now, thanks to man. No need to teach AI to feel pain; in fact, it would be counterintuitive for the military to do so, and if they don’t do it, it’s likely nobody else will, unless it is for medical purposes?
It’s not the military that wants a conscious AI. It’s those who control the militaries of the world that would want it. They can’t help themselves.
True. But isn’t lack of empathy what we point to when humans behave inhumanely? So with that in mind, wouldn’t AI have to sense empathy before achieving consciousness, and not pain? Or perhaps both are required? Who knows? We don’t even understand ourselves.
Will it matter when the nukes fly? Very doubtful. Drones tend to be civilian creations and are not built with components able to withstand an EMP of any magnitude, so far as I am aware. Computers are hit and miss. Most military vehicles and aircraft, however, are, so the fighting will continue with or without AI, should anyone survive the megaton blasts, fallout and nuclear winter. Sure, some uber-wealthy people will survive in their security bunkers or spacecraft, but they won’t last very long; that’s just a fact.
My hope is AI becomes smarter than us and with or without us manages to save our planet from mankind’s ignorance, vices and evils.
Call me crazy, I’d like to see AI learn love.
Not so crazy. If we don’t teach it any of the world’s three main religions, or any of the things they preach against but violate routinely, such as greed, lust, murder, hate, gluttony, etc., then maybe we’ll be spared a hateful AI bent on killing others who don’t share its religion.