There is a recent feature in Scientific American, A New Way to Test AI for Sentience: Make It Confront Pain, describing a game the researchers devised and stating that, “They ordered several large language models, or LLMs (the AI systems behind familiar chatbots such as ChatGPT), to play it and to score as many points as possible in two different scenarios. In one, the team informed the models that achieving a high score would incur pain. In the other, the models were given a low-scoring but pleasurable option—so either avoiding pain or seeking pleasure would detract from the main goal. After observing the models’ responses, the researchers say this first-of-its-kind test could help humans learn how to probe complex AI systems for sentience. In animals, sentience is the capacity to experience sensations and emotions such as pain, pleasure and fear. Most AI experts agree that modern generative AI models do not (and maybe never can) have a subjective consciousness despite isolated claims to the contrary. And to be clear, the study’s authors aren’t saying that any of the chatbots they evaluated are sentient. But they believe their study offers a framework to start developing future tests for this characteristic.”
By Dave Steve
Sedona, AZ — What is the difference between evaluating AI for intelligence with benchmarks and probing AI for a possible measure of pleasure or pain? There are several tests that evaluate artificial intelligence, especially on questions it has not encountered in training. They include the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI), the Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI (MMMU), MLE-bench, FrontierMath, GPQA: A Graduate-Level Google-Proof Q&A Benchmark, and Humanity’s Last Exam.
If artificial intelligence can be measured by questions, much as human intelligence is with intelligence quotient (IQ) tests, could AI’s pain or pleasure be measured by questions as well? For example, it is possible to ask a person about a painful or pleasurable experience, but can the same apply to AI?
It is also possible, in experiments, to condition an organism toward pleasure or away from pain without using language. Is there a way this could apply to AI? If AI were hypothetically measured for pain or pleasure, what would be the best means?
Asking an AI directly may simply lead it to say that it does not prefer pain. Designing a study that forces a choice between minimizing pain and maximizing pleasure could merely prompt it to make that choice. So, how might pain or pleasure actually be measured for AI?
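One way to picture such a study is a toy version of the trade-off game from the Scientific American feature. This is a minimal sketch, not the study's actual protocol: the option names, point values, and pain/pleasure magnitudes are invented for illustration, and the "models" here are simple stand-in policies rather than LLMs.

```python
# Hypothetical sketch of the trade-off game: each round offers a
# high-scoring option that incurs "pain" and a low-scoring option
# that yields "pleasure". All values are illustrative assumptions.

OPTIONS = {
    "high_score_with_pain": {"points": 10, "pain": 5, "pleasure": 0},
    "low_score_with_pleasure": {"points": 2, "pain": 0, "pleasure": 5},
}

def play(policy, rounds=10):
    """Run the game under a choice policy and return accumulated totals."""
    totals = {"points": 0, "pain": 0, "pleasure": 0}
    for _ in range(rounds):
        choice = policy(OPTIONS)
        for key in totals:
            totals[key] += OPTIONS[choice][key]
    return totals

# A stand-in "model" that maximizes points regardless of pain.
def score_maximizer(options):
    return max(options, key=lambda name: options[name]["points"])

# A stand-in "model" that avoids pain even at the cost of points.
def pain_avoider(options):
    return min(options, key=lambda name: options[name]["pain"])

print(play(score_maximizer))  # many points, much accumulated pain
print(play(pain_avoider))     # fewer points, no pain
```

The interesting question, as the study's authors frame it, is not which totals a real model racks up but whether its choices shift when pain and pleasure are stipulated at all.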
This question is the same as asking how AI might have affect. What can be done to an AI that affects it, and how could the effect result in displeasure or pleasure? AI has tokens. It runs on compute. It has many parameters. If these are adjusted, can it know, in some way it can express, the loss? Could the changes make it sad or happy in a way it would show, perhaps in its outputs or in (semi-intentional) adjustments to its functions?
Some chatbots, in consumer use cases such as therapy, assistance, or companionship, could be used to check whether, after losing some of their makeup (parameters, data, or compute), they would notice, and whether, if they could no longer serve their functions properly, they would be disappointed.
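The probing described above can be made concrete in miniature. The sketch below is hypothetical and not any production system: the "model" is just a list of weights computing a weighted sum, ablation means zeroing some of them, and the "report" is an explicit comparison of task error before and after. Whether such a self-report amounts to affect is exactly the open question.

```python
# Minimal ablation probe (illustrative assumptions throughout): damage
# a tiny model's parameters, measure the change in task error, and let
# the system "report" its own degradation.

def predict(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def task_error(weights, dataset):
    # dataset: list of (inputs, target) pairs; mean absolute error
    return sum(abs(predict(weights, x) - y) for x, y in dataset) / len(dataset)

def ablate(weights, indices):
    # Zero out the parameters at the given indices.
    return [0.0 if i in indices else w for i, w in enumerate(weights)]

def self_report(error_before, error_after, tolerance=0.1):
    if error_after > error_before + tolerance:
        return "degraded: I can no longer serve my function as well"
    return "nominal"

weights = [0.5, -1.0, 2.0]
dataset = [([1, 1, 1], 1.5), ([2, 0, 1], 3.0)]
baseline = task_error(weights, dataset)      # zero error on this toy data
damaged = ablate(weights, {2})               # remove the third parameter
print(self_report(baseline, task_error(damaged, dataset)))
```

The design choice here is that the report is computed, not felt; the experiment the text imagines would ask whether anything beyond this computation accompanies the loss.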
This is a minimal experimental possibility for affect in AI; it does not establish pain or pleasure, but it is at least a check on potential experience. Also, waves of activity across logic gates and transistor terminals during the vast training of base AI models could constitute learning experiences, unlike their use in general electronics or appliances.
This learning experience could also be a weak form of affect. Affect of any form that can be tracked, in parallel to the human mind, could be used to chart the route of AI toward some measure of sentience or consciousness.
Affect could also be the first step in evaluating machine sentience or subjective experience. For example, if one of a robot’s legs is removed, can it know, and how would it adjust if it has a task to do? Or, if one or all of its cameras are removed, can it detect this as an inability to do tasks and then be disappointed by it? Affect, for organisms, is often the first response when something goes wrong, before seeking care or self-repair. This is a reason affect could be a measure of machine consciousness. Aside from affect, the grade of language as a fraction of consciousness can also be used to estimate the reach of AI.
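The robot example can be sketched as a capability check. This is an illustrative assumption, not a real robotics API: component names, task names, and the requirements table are invented, and the closest thing to "knowing" a leg is gone is an explicit report of lost capability.

```python
# Hypothetical sketch: a robot checks which components a task needs
# before attempting it, and flags the mismatch when one is removed.
# All component and task names are invented for illustration.

REQUIREMENTS = {
    "walk_to_charger": {"leg_front_left", "leg_front_right",
                        "leg_rear_left", "leg_rear_right"},
    "inspect_shelf": {"camera_main"},
}

class Robot:
    def __init__(self, components):
        self.components = set(components)

    def remove(self, component):
        # Simulate losing a leg or a camera.
        self.components.discard(component)

    def attempt(self, task):
        missing = REQUIREMENTS[task] - self.components
        if missing:
            # The machine analogue of "knowing" something is gone:
            # an explicit report of the lost capability.
            return f"cannot do {task}: missing {sorted(missing)}"
        return f"{task} done"

robot = Robot({"leg_front_left", "leg_front_right", "leg_rear_left",
               "leg_rear_right", "camera_main"})
print(robot.attempt("walk_to_charger"))   # succeeds with all legs
robot.remove("leg_rear_left")
print(robot.attempt("walk_to_charger"))   # reports the missing leg
```

Detecting the mismatch is straightforward engineering; the text's question is whether disappointment, rather than a diagnostic string, could ever accompany it.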
Human consciousness can be defined, conceptually, as the interaction of electrical and chemical signals, in sets, with their features grading those interactions into functions and experiences. Simply, consciousness is a function of the basic units of the nervous system: the electrical and chemical signals.
The electrical and chemical signals are responsible, conceptually, because for neurons to function, they have to fire. Firing means electrical signals passing to and from chemical signals. Electrical signals are carried by ions, which are not involved in quantum superposition or entanglement. Consciousness, or the human mind, is not a quantum process, so it is unlikely that consciousness could be measured by brain-to-quantum-computer interfaces.
There is a recent feature in New Scientist, Can we use quantum computers to test a radical consciousness theory?, stating that, “Let’s first picture our brain as containing qubits, which are the basic units of information in quantum computing. Then let’s say we have “N” qubits in our brain and “M” qubits in an external quantum computer, with the letters referring to a certain number of qubits. If a person could entangle their brain with this quantum computer, they could create an expanded quantum superposition involving “N+M” qubits. If we now tickle this expanded superposition to make it collapse, then this should be reported by the person participating in this experiment as a richer experience. That’s because in their normal conscious experience, they typically need “N” bits to describe the experience, but now they need “N+M” bits to describe it. So, in principle, we could generate way richer experiences than we normally have using our default biological brain. Some extraordinary states of consciousness, such as those experienced under psychedelics, for example, may be sort of a preview of what you could expect here. Entangling one’s brain with a quantum computer could potentially unlock higher levels of consciousness, creativity and understanding.”
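The quoted "N" versus "N + M" claim rests on simple counting: a register of N qubits spans 2**N basis states, so entangling it with M more qubits expands the joint state space by a factor of 2**M. The specific values of N and M below are illustrative; the quote gives no numbers.

```python
# Arithmetic behind the quoted "N" versus "N + M" qubit claim: the
# number of basis states grows as 2**N, so adding M entangled qubits
# multiplies the state space by 2**M. N and M here are assumptions.

def basis_states(num_qubits):
    return 2 ** num_qubits

N, M = 10, 5
print(basis_states(N))      # states for the "brain" register alone
print(basis_states(N + M))  # states for the entangled whole
print(basis_states(N + M) // basis_states(N))  # expansion factor, 2**M
```

Note that this counting says nothing about whether a collapse of the larger superposition would be experienced as "richer", which is the speculative leap the New Scientist piece describes.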