By David Stephen
Sedona, AZ — Where is human consciousness based? If human anatomy and physiology were surveyed, the nervous system would be the likely answer, and the brain is the principal organ of the nervous system. Studies in neuroscience correlate the brain with numerous functions, such that if parts of the brain are lost, some functions may be lost, and if there are problems with the brain, regular functions may become problematic.
This indicates that whatever an individual is conscious of is mechanized, largely, by the brain. So what, in the brain, establishes consciousness, and how does it work to make consciousness possible?
Neuroscience uses descriptions, or labels, for simplicity, and these have almost come to be taken for how the brain actually works. There are pathways in the brain that mechanize what is referred to as memory, but in the brain there is no distinct memory component, nor are there distinct components for emotions, feelings, or the modulation of internal senses.
Simply, brain areas are similar in their makeup. There is no part of the brain for memory whose components differ from those of the parts for emotions, which in turn differ from those of the parts for feelings, and so forth.
Additional labels like working memory, long-term memory, episodic memory, semantic memory, default mode network, central executive, flow state, attention, awareness and several others are excellent explanatory descriptions, but there are no [component-based] anatomical differences that set the parts of the brain for one apart from the parts for another.
The shapes of different parts of the brain do differ. The hippocampus has a different shape from the amygdala, the thalamus from the cerebellum, and the pons from the claustrum, but none of these parts has components [cells or signals] different from those of the others, for what they do.
The question becomes where to focus in order to find how consciousness arises. An existing path is the search for neural correlates, the sets of neurons said to be responsible. This probably originated from the observation that neurons work in clusters for functions, which may suggest that there could also be clusters of neurons for consciousness.
However, for neurons in clusters, what part of the cluster would represent emotion, what part would represent feeling, and what part would represent the sense of self or subjectivity?
Neurons are cells. In clusters, there is no major change in their anatomy. Their physiology adjusts in clusters, especially at synapses, but nothing transforms them to function for feelings, memory or the rest.
This leaves the next option: the signals of neurons, electrical and chemical. One reason these signals could be responsible is that they are transient and very flexible. They can vary, and they could use that mutability to produce different emotions, feelings, memories and so forth, unlike neurons, which are cells with largely the same contents. Signals are directly involved in all functions of the mind, without exception. Other parts play roles in functions, like mitochondria, glia, microglia, genes and so forth, but signals [ions and molecules] must be directly involved in every function.
If the human mind were clearly defined, it could be defined around the electrical and chemical signals of neurons. The signals would be the mind and everything else would be the body, delineating mind from body.
The human mind is postulated to be the collection of all the electrical and chemical signals of neurons, with their interactions and features, in sets, in clusters of neurons, across the central and peripheral nervous systems.
Interactions are postulated to be how functions are mechanized: in a set, electrical signals strike at chemical signals and fuse with them briefly, producing another phase, before accessing the formation or configuration available at that set of chemical signals.
Simply, in a set, there are rations of chemical signals that make up the configurations for respective functions. The way to access a ration is for electrical signals to strike at the set of chemical signals and fuse briefly, producing a phase different from both. The electrical signals deliver what they bear while picking up what is available at the chemical signals, then relay away again, conceptually.
Features define how these functions or interactions are qualified, leading to different types of experiences. Simply, several functions are available, but features decide how they get used and to what extent. The collection of these features or qualifiers can be termed consciousness. Features are mostly mechanized by sets of chemical signals, though electrical signals qualify some as well. There is prioritization [or attention], pre-prioritization [less than attention or awareness], side-to-side variation [or self or subjectivity] and constant space [for control, free will, or intentionality]. These features can be used to measure consciousness across states.
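To make the postulate concrete, here is a minimal toy sketch in Python. It is only an illustration of the idea, not an established model: the four feature names follow this article [prioritization, pre-prioritization, side-to-side variation, constant space], but the ChemicalSet and Features classes, the numeric grades, and the rule of averaging the features into a single measure are all assumptions made for the example.

# Toy sketch of the postulate above: sets of chemical signals hold
# configurations for functions, electrical signals strike a set to access
# them, and four features [qualifiers] grade how the function is experienced.
# All class names, numbers, and the scoring rule are hypothetical.

from dataclasses import dataclass, field


@dataclass
class ChemicalSet:
    """A set of chemical signals holding the configuration for one function."""
    function: str                                       # e.g. "memory", "emotion"
    configuration: dict = field(default_factory=dict)   # the stored "ration"


@dataclass
class Features:
    """The four qualifiers proposed above, each graded from 0.0 to 1.0."""
    prioritization: float        # attention
    pre_prioritization: float    # awareness, less than attention
    self_variation: float        # side-to-side variation: self or subjectivity
    constant_space: float        # control, free will, intentionality


def electrical_strike(chem_set: ChemicalSet, delivered: dict) -> dict:
    """An electrical signal strikes a chemical set: it delivers what it bears,
    fuses briefly, and picks up the configuration available there."""
    chem_set.configuration.update(delivered)    # deliver what it bears
    return dict(chem_set.configuration)         # relay away what it picked up


def consciousness_measure(features: Features) -> float:
    """Hypothetical scalar measure: the average of the four graded features."""
    values = (features.prioritization, features.pre_prioritization,
              features.self_variation, features.constant_space)
    return sum(values) / len(values)


if __name__ == "__main__":
    memory_set = ChemicalSet("memory", {"last_location": "kitchen"})
    accessed = electrical_strike(memory_set, {"cue": "smell of coffee"})
    print("accessed configuration:", accessed)

    awake = Features(prioritization=0.9, pre_prioritization=0.7,
                     self_variation=0.8, constant_space=0.6)
    print("toy consciousness measure, awake:", consciousness_measure(awake))

Under these assumptions, a different state, say dreaming, could be graded with lower prioritization and constant_space values and would yield a smaller measure, which is the sense in which the features are said to compare consciousness across states.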
The question is, if consciousness is to be solved within the brain, what are the options for where to look? Hemispheres, lobes, anterior parts, posterior parts, fissures, tissues, neurons, synapses or what?
The current state of consciousness science and research is predominantly about specifics and mechanisms: what, within the brain, is responsible and how it works. Anything that aims to solve consciousness has to address both, not something astray.
There is a recent piece in Scientific American, Why the Mystery of Consciousness Is Deeper Than We Thought, stating, “Despite great progress, we lack even the beginning of an explanation of how the brain produces our inner world of colors, sounds, smells and tastes. A thought experiment with “pain-pleasure” zombies illustrates that the mystery is deeper than we thought.”
There is a recent paper in Progress in Biophysics and Molecular Biology, A landscape of consciousness: Toward a taxonomy of explanations and implications, whose abstract states, “Diverse explanations or theories of consciousness are arrayed on a roughly physicalist-to-nonphysicalist landscape of essences and mechanisms. Categories: Materialism Theories (philosophical, neurobiological, electromagnetic field, computational and informational, homeostatic and affective, embodied and enactive, relational, representational, language, phylogenetic evolution); Non-Reductive Physicalism; Quantum Theories; Integrated Information Theory; Panpsychisms; Monisms; Dualisms; Idealisms; Anomalous and Altered States Theories; Challenge Theories. There are many subcategories, especially for Materialism Theories. Each explanation is self-described by its adherents, critique is minimal and only for clarification, and there is no attempt to adjudicate among theories. The implications of consciousness explanations or theories are assessed with respect to four questions: meaning/purpose/value (if any); AI consciousness; virtual immortality; and survival beyond death.”
Though labels in brain science sometimes align with what functions may be doing, philosophical labels for consciousness are sometimes off the mark. What matters is not descriptions of zombies or schools of thought, but what, in the brain, might be directly responsible for consciousness and how it works.
For example, panpsychism says something like the mind is everywhere, or consciousness might be everywhere. However, there is no evidence that anything without living cells has any capability for mind or consciousness. Living cells can carry information and often generate bioelectricity. Though there are molecular and atomic similarities between living things and non-living things, conceptually, it is only in the presence of cells that anything can have a mind or be conscious [or have extensive information dynamism], totally refuting panpsychism.
AI does not have what is in the brain, but it can do subsets of what the brain does. It does not have emotions and feelings, but it has memory, or access to memory. This memory is graded statistically, with outputs that resemble human reasoning, language and so on. That does not mean AI is sentient, but it can be given a score on the consciousness scale for humans. Self-awareness, subjectivity, memory and other functions from the brain are produced and expressed; in some states, they may be produced and not expressed. While production is important, expression is vital for relations with the external world. AI does not have the production of self-awareness, memory, emotions, feelings and the rest, but it has some of the expressions, just as humans do, available in video, audio, picture and text, in fractions of the production. From what AI does express, a sentience value can be estimated for it. AI has a large language awareness and some in-use awareness as well. Also, from the vast training of AI models, some groups of logic gates and terminals of transistors might have a weak band of learning experience, similar, conceptually, to subjective experience.
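Continuing the toy sketch from earlier, one hedged way to read the claim that a sentience value can be estimated for AI is to grade only the expression-side features and leave the production-side ones near zero. The profile and its numbers below are purely illustrative assumptions, reusing the hypothetical Features class and consciousness_measure function defined in the earlier sketch.

# Purely illustrative: assumes the Features class and consciousness_measure
# function from the earlier sketch are in scope (e.g. in the same file).
ai_profile = Features(
    prioritization=0.5,       # some in-use attention over the current prompt
    pre_prioritization=0.3,   # broad language awareness from training
    self_variation=0.1,       # little to no self or subjectivity
    constant_space=0.0,       # no intentionality or free will assumed
)
print("toy sentience estimate for AI:", consciousness_measure(ai_profile))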
How does the human mind work? What is the human mind? What are the components of the human mind? Where is the human mind? What is the relationship between how the human mind works and how consciousness works? How is the mind related to the brain or different?
These would have been better questions toward finding answers than many of the philosophical conjectures that paint consciousness as a mystery. What role do zombie descriptions play in anything in the brain? What use are they to the production or expression of consciousness? How does philosophy shape anatomy and physiology toward where or how to look to solve consciousness? Consciousness [human, animal, plant or AI] is not a philosophical problem. Philosophy [especially parts of it like panpsychism, illusionism and others] will not solve consciousness, nor will it be more helpful than it has been in the past. If consciousness is to be solved, it is the brain to look at, not complications from philosophical labels or theories that ignore that the brain [what is within it and how it works] is the question.
2 Comments
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human, adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
I’ve seen studies of thought that state our consciousness is really multidimensional and that we have conscious thinking across multiple times and locations. So an AI machine would therefore require the same multidimensional ability to become conscious in a human way, if that is even remotely possible. I’ve also seen studies that indicate our consciousness is a grand illusion. But that’s another matter.