By David Stephen
The 28th annual meeting of the Association for the Scientific Study of Consciousness [ASSC] was held in Crete, Greece, from July 6-9, 2025. The Science of Consciousness [TSC] 31st annual meeting was held from July 6-11, 2025, in Barcelona, Spain. The Festival of Consciousness [FoC] was also held in Barcelona, from July 11-13, 2025.
But nobody heard anything.
There was one press release from a company that presented at the TSC, but little else about the events appeared in the media [in English, Greek or Spanish]. There were lots of sessions, talks and so forth, but nothing serious enough to stoke wider interest. If at least two of the biggest conferences in a supposedly important field were held in the same month, and what followed was continuous silence, then either the field is already in irreparable oblivion, or it is in pre-reality, or the conferences should have been called off, since they had nothing useful to show.
This is a case study in what they all seemed to ignore. If your research work is dim or has no promise, you may wither away, albeit feigning activity. Nobody is interested in anything phenomenological, or in the dumpster of insufferable terms with which consciousness research has been overloaded.
There are parents whose children were victims of deepfake images at school. There are colleges where students are cheating with AI. There are families that have suffered ransom trauma from deepfake audio. There are people with addictions whose loved ones do not know what to do. There are horror experiences some families have had because an AI chatbot nudged a member into some irreversible decision. There are mental health issues that some people have sought answers to, which the DSM-5-TR did not do much to explicate. There are side effects of psychiatric medications that are devastating, with little that loved ones can do. There are loneliness and emptiness issues that are personal crises in the lives of many, driving them to extremes. There is just so much around the brain, the mind and now AI where answers are sought in very credible ways.
Many of these are in reports. So, what should a conference within the study of the mind or brain do? At least try to find how to answer, or to postulate in ways that can be meaningful to lives. But what have these conferences done? Nothing meaningful. The so-called leading theories of consciousness, Integrated Information Theory [IIT] and Global Workspace Theory [GWT], are 100% worthless. Not 50%, not 80%. 100% infinitely worthless. IIT is around 21 years old. GWT is around 37 years old. Neither can explain a single mental state. Just one. From inception till date. Neither can say what the human mind is or how the brain organizes information. If AI is able to access human emotions, including with sycophancy, the theories cannot say why the human mind is susceptible.
Yet these theories are in competition in what is called ARC-COGITATE. It is as if scientists are testing useless theories while screaming that no one knows how consciousness or the brain works, as a license for nonsense. They do not have to stop or be told to do so, but their irrelevance stinks so badly that nobody wants anything to do with them.
The mind [or whatever is responsible for memory, emotions, feelings and the rest] and consciousness are correlated with the brain. Empirical evidence in neuroscience has shown that neurons and their signals are involved. What theory of neurons and their signals can be used to explain the mind and consciousness? If anything else is proposed, how does it rule out neurons and their signals? This is all that any serious consciousness science research should be asking. Some people are saying quantum entanglement and quantum superposition. How does that explain, or rule out, neurons or the signals of neurons for functions and their attributes? There is a microtubules addition too that is so comical, it reflects how some people think that the reputation of science should subsume dirt. No one cares about your quantum contests if someone is trying to resolve mood disorders.
AI ethics research is one area where the philosophical aspect of consciousness might have found relevance. Its practitioners should have been able to propound answers that colleges could use to discourage AI cheating, as well as displays that AI chatbot companies could use to discourage it, as a fair effort. But there has been nothing like that. "College is for learning. Learning relays the mind to solve problems. Understanding is a key necessity. Assignments in school contribute to this process. If the mind does not use some of its sequences for learning, it may not be able to solve some problems or understand some complexities." Say a message like that is displayed for certain prompts, like those for assignments or applications and so forth. It may not mean many would stop, but it could contribute to inputs that would let some have the courage to hold back.
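As a rough illustration of that kind of display, here is a minimal sketch in Python, assuming a crude keyword heuristic. The cue phrases, message placement and function names are illustrative assumptions, not any vendor's actual safeguard or a serious classifier.

```python
# Minimal sketch (illustrative only): flag assignment-like prompts and attach
# the kind of learning reminder described above before the model's answer.
# The cue list and placement are assumptions, not a production safeguard.

ASSIGNMENT_CUES = (
    "write my essay",
    "solve this homework",
    "do my assignment",
    "answer these exam questions",
    "complete this problem set",
)

LEARNING_REMINDER = (
    "College is for learning. Learning relays the mind to solve problems. "
    "Understanding is a key necessity. Assignments in school contribute to this "
    "process. If the mind does not use some of its sequences for learning, it may "
    "not be able to solve some problems or understand some complexities."
)

def looks_like_assignment(prompt: str) -> bool:
    """Crude check: does the prompt contain any assignment-style phrasing?"""
    lowered = prompt.lower()
    return any(cue in lowered for cue in ASSIGNMENT_CUES)

def respond(prompt: str, model_answer: str) -> str:
    """Prepend the reminder when the prompt looks like schoolwork; otherwise pass through."""
    if looks_like_assignment(prompt):
        return f"{LEARNING_REMINDER}\n\n{model_answer}"
    return model_answer

if __name__ == "__main__":
    print(respond("Please write my essay on the causes of World War I.",
                  "[model answer would appear here]"))
```

A real deployment would presumably use a trained classifier rather than keywords, but the point of the sketch is only that such a display is straightforward to trigger.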
Consciousness and sentience research have plummeted into the abyss. The field no longer has the credibility to be called science. Consciousness is not subjective experience, if subjectivity is not the only thing that goes with experiences. Any experience [cold, pain, delight, language and so forth] that can be subjective has to be in attention or in less than attention. The priority given to pain is attention, not simply subjectivity, so to speak. Experiences may also go with intent: for when to speak or not, or to get Tylenol for pain, or to avoid the source, or to get a jacket against the cold. Subjectivity qualifies experiences, like attention [or less than attention] and intent. Walking can be subjective, and be in attention as well.
This means that subjectivity and other qualifiers are present wherever functions are mechanized. This rules out neural correlates of consciousness in some particular location, or the claim that the cortex is responsible for consciousness and the cerebellum is not, since qualifiers apply to functions everywhere in the brain. It is like saying no one knows how consciousness works, but we are sure it is only in the cortex. But what about the functions of the cerebellum: are they never subjective, never experienced? The brain also does not make predictions. If anyone says the brain does, just ask how, and with what components? This refutes predictive coding, predictive processing and prediction error. Controlled hallucination is a hollow flimflam.
There is little point rebutting the dogma of the consciousness people, since what they have left is their sunken ship. In the 2020s, it is no longer relevant to be seeking what it is like to be a bat, because how far would that help, if known? Sense [or memory] of being, property of self and knowledge of existence [which are likely answers to the bat question] can be explained by the attributes and interactions of electrical and chemical configurators [theorizing that signals are not for neural communication but are the basis of functions].
Human consciousness can be defined, conceptually, as the interaction of the electrical and chemical configurators, in sets — in clusters of neurons — with their features, grading those interactions into functions and experiences.
Simply, for functions to occur, electrical and chemical configurators, in sets, have to interact. However, the attributes of those interactions are determined by the states of the electrical and chemical configurators at the time of the interactions.
These can be used to explain mental states and addictions, to design warning systems for AI chatbot usage, to develop AI ethics models, to prospect states of consciousness, and so forth.
So, sets of electrical configurators often have momentary states or statuses in which they interact with [or strike at] sets of chemical configurators, which also have momentary states or statuses. If, for example, the electrical configurators in a set split, with some going ahead, it is in that state that they interact initially, before the incoming ones follow, which may or may not interact the same way or at the same destination [or set of chemical configurators]. If a set of chemical configurators has more volume of one of its constituents than of the others, it is in that state, too, that it is interacted with.
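The paragraph above can be read as a state-interaction model. Below is a minimal toy sketch of how such an interaction might be represented; the class names, numbers and grading rule are illustrative assumptions of mine, not part of the article's formulation.

```python
# Toy sketch (illustrative assumption, not the author's formalism): a set of
# electrical configurators, possibly split so that a lead fraction strikes
# first, interacts with a set of chemical configurators whose constituent
# volumes fix the state in which they are met; the result is graded into a
# simple function-with-attributes record.

from dataclasses import dataclass

@dataclass
class ElectricalSet:
    total: int            # electrical configurators in the set
    lead_fraction: float  # share that splits off and goes ahead, between 0 and 1

@dataclass
class ChemicalSet:
    volumes: dict         # constituent name -> relative volume at interaction time

def interact(e: ElectricalSet, c: ChemicalSet) -> dict:
    """Grade one interaction into a toy record of attributes."""
    dominant = max(c.volumes, key=c.volumes.get)      # constituent with the most volume
    lead_strike = round(e.total * e.lead_fraction)    # configurators arriving first
    follow_strike = e.total - lead_strike             # incoming ones that follow
    return {
        "dominant_constituent": dominant,
        "initial_interaction": lead_strike,
        "follow_up_interaction": follow_strike,
        "graded_intensity": lead_strike * c.volumes[dominant],  # arbitrary toy measure
    }

if __name__ == "__main__":
    e_set = ElectricalSet(total=100, lead_fraction=0.6)
    c_set = ChemicalSet(volumes={"constituent_a": 0.5,
                                 "constituent_b": 0.3,
                                 "constituent_c": 0.2})
    print(interact(e_set, c_set))
```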
2 Comments
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow
Neurons are not only vital to brain functions and consciousness; they are the neurological system that tells our bodies how to do things like picking up a pencil or walking while holding a cup of hot coffee. Mix them up, as a nerve agent does, and the signals our brain normally sends to our hands to pick up that pencil suddenly do the opposite of what was intended. So even if AI were to achieve consciousness and be placed into a human body or "replicant," getting it to perform intended functions will be a challenge, to say the very least. One crossed neuron signal could result in disaster.