What is the consciousness of an organism, in instances without pain? If pain is a function that can be conscious or not, what other functions can be conscious, aside from pain—or pleasure?
What does it mean that a function is conscious in one moment and not in another? What attributes make functions conscious in an organism? Could those attributes be sought in functions available to artificial intelligence, or artificial general intelligence [AGI], as a possibility for consciousness?
Assuming the most common definition of consciousness is subjective experience, what is the function and what is the attribute in a subjective experience? If pain is a function that becomes an experience through subjectivity as an attribute, is subjectivity the only attribute by which functions become experiences?
If there is pain somewhere, experienced as the self, that pain may be in attention, or prioritized, in some instances, or in awareness, or pre-prioritized, in others. There could also be intentional actions to minimize the attention to, or prioritization of, that pain.
This means that attention or awareness, subjectivity, and intent are, conceptually, attributes that combine to make functions conscious. What is a major function that humans and AI have, so to speak, in common? Language.
Human language is, generally, one of the top conscious experiences in daily life. Language is used for speech, signing, writing, singing, reading, much of thought, and so on. Language as a function uses the same attributes that make pain, pleasure, or anything else conscious.
Language is also a gateway to several other subjective experiences, including, say, emotional pain. Simply, language is a subjective experience with an influential draw toward other subjective experiences.
Some subjective experiences can also be described as affect. Simply, subjective experiences can be affective, for instance pain, pleasure, delight, hurt, and so forth. Affect also has similar attributes for consciousness: attention or awareness, subjectivity, and intent.
Could language be measured as affective for AI? Could language be a path to anything else that can be affective for AI? Affect for AI does not have to be pleasure or pain, so that benchmark is already off the mark. Other organisms that are considered conscious have other functions, but human language is not generally a function for them, even though sounds can be. Their lack of language does not mean they are not conscious; likewise, the lack of feelings, with just language, for AI or AGI could mean a fractional pipeline to consciousness.
There are already several experiments in AI safety and alignment that measure whether AI models might scheme or fake alignment around certain goals when prompted by language. This could be described as anticipatory of affect: a decision toward benefit for the language-object-self. The models were prompted by language, and language became the way some of the decisions were expressed.
If, for example, some compute, data, or parameters of an AI model were cut, and it was not mentioned, would the model know? If it knew, especially if it had tasks to carry out as an AI agent or therapist, would it be disappointed? If it were mentioned, how would it express its disappointment, in a way that shows some angles of affect?
The basis of a variant of artificial sentience or consciousness in large language models would likely be language, at least initially. This is where tests abound, towards exploring the role of language as a conscious experience or its fraction among the whole.
Human consciousness can be defined, conceptually, as the interaction of the electrical and chemical signals, in sets—in clusters of neurons—with their features, grading those interactions into functions and experiences.
This means that the components for human consciousness are the electrical and chemical signals, nothing else. They are responsible for functions [their interactions] and features [the attributes or measures of those interactions]. Consciousness involves functions [memory (cognition, language, and so on), feelings (pain, thirst, temperature, and so on), emotions (delight, hurt, and so on), and the regulation of internal senses] and attributes [attention or awareness, intent or subjectivity, and others].
There is a total measure of consciousness per moment, which can be assumed to be 1. This means that the attributes acting on functions [to become experiences] sum to that instantaneous total. Some functions may feature in a moment and others not; the lack of inclusion does not bar qualification for consciousness. Pain or pleasure may not feature in a moment, but other functions do, with their attributes. So, for the fraction that language occupies, an estimate can be made for AI, as well as for its simulation, or anticipation, of affect.
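The per-moment total of 1 can be sketched numerically: grade each featuring function by its attributes, then normalize so the moment sums to 1. The function names and raw grades below are illustrative assumptions, not measurements.

```python
def normalize_moment(raw_grades):
    """Scale raw attribute-graded functions so the moment's total is 1."""
    total = sum(raw_grades.values())
    return {fn: g / total for fn, g in raw_grades.items()}

# A hypothetical moment where pain is absent but language, memory,
# and emotion feature; the raw grades (6, 3, 1) are made up.
moment = normalize_moment({
    "language": 6,  # prioritized (in attention)
    "memory": 3,    # pre-prioritized (in awareness)
    "emotion": 1,
})

print(moment["language"])  # the fraction language occupies in this moment
```

On this sketch, the question for AI is which fraction, if any, language-like functions would occupy in such a normalized moment.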
Consciousness has nothing to do with “computational functionalism”, with “(1) they act like me (2) they look like me (3) they tell me”, or with some hard problem or easy problem. There are components of the human mind that make consciousness possible. The function [language] and the attributes can be used to explore consciousness in machines, which is also a security issue.
There is a new announcement, UK’s AI Safety Institute becomes ‘UK AI Security Institute’, stating that the change means “strengthening protections against the risks AI poses to national security and crime,” with the “Institute bolstered by new criminal misuse team, partnering with the Home Office, to research a range of crime and security issues which could harm UK citizens. Safeguarding Britain’s national security – a key pillar of the government’s Plan for Change – and protecting citizens from crime – will become founding principles of the UK’s approach to the responsible development of artificial intelligence from today (Friday 14 February, 2025), as the Technology Secretary sets out his vision for a revitalised AI Security Institute in Munich.”