Sedona.Biz – The Voice of Sedona and The Verde ValleySedona.Biz – The Voice of Sedona and The Verde Valley
    Sedona.Biz – The Voice of Sedona and The Verde Valley Sedona.Biz – The Voice of Sedona and The Verde Valley
    • Home
    • Sedona
      • Arts and Entertainment
      • Bear Howard Chronicles
      • Business Profiles
      • City of Sedona
      • Elections
      • Goodies & Freebies
      • Mind & Body
      • Sedona News
    • Opinion
    • Real Estate
    • About
    • The Sedonan
    • Advertise
    • Sedona’s Best
    Sedona.Biz – The Voice of Sedona and The Verde ValleySedona.Biz – The Voice of Sedona and The Verde Valley
    Home»Metaphysics»Nil Consortium for Digital Sentience Research and LLM, AI Consciousness
    Metaphysics

    Nil Consortium for Digital Sentience Research and LLM, AI Consciousness

    June 26, 2025No Comments
    Facebook Twitter Pinterest LinkedIn Email Reddit WhatsApp
    shutterstock 2532582119 1
    Share
    Facebook Twitter LinkedIn Pinterest Email Reddit WhatsApp

    By David Stephen

    If you want to know what failure looks like in neuroscience and consciousness research, you have to pay attention to what AI has been doing to the minds of several users. Many are falling in love with AI. AI is resulting in disconnection from reality for others. AI — in some interactive nudges — inched some to fatal ends.

    AI is showing an unprecedented ability to control minds, beyond anything invented in history [including money, for a capitalistic society]. So, what is the response of the neuroscience and consciousness fields? Zilch. The biggest research, in neuroscience, at present is to map the brain [Fruit fly, Caenorhabditis elegans, the visual cortex of mouse]. They are gathering data and intend to eventually map the human brain — which they have said is so difficult and complex.

    So far they have not used their data to explain any major observation. The world would have to wait for them, for another unknown years, to map the human brain, to see if they can at least explain something. What is consciousness research doing? They are stuck in decades old analogies they call theories.

    They are also immersed in spoofs like materialism, physicalism, dualism, computational functionalism, causal power, neural correlates, digital patienthood, qualia and several others. The brain is hard to understand. But the question is that what can be used, at least for now, to understand the brain. For example, fMRI uses blood flow and oxygen to prospect activities in the brain.

    This does not mean that they are directly responsible for functions, but they correlate [activities] with whatever components are responsible. Something was identified, used and progress sprung. Even if how the brain works is not understood, what components can be assumed to be directly responsible? How do they mechanize functions? 

    How can they be used to explain mental health and consciousness, from all the evidence in neuroscience? This does not have to mean that how the brain works is fully understood. But there is something that can be used, conceptually, for progress.

    Regardless of how consciousness is defined, or addiction, or intelligence, vital is this: what components are responsible and how? How can you explore a measure for these components, so that AI can be tested on that standard, as well as other organisms?

    ________________

    No team or group anywhere is working on any AI consciousness research — with potential. All the preprints, posts and events mentioning it are not serious efforts. Only one thing matter in AI consciousness research: the human brain. How? Find likely components [for functions], postulate their mechanisms, standardize a measure, seek how AI compares. You have your progress. Use this to as well explore some mind safety disclaimers, with displays for AI chatbots, against adverse effects for users.

    This would mean that AI consciousness research is not just finding answers, but useful [presently] in applications, including how to deal with anxieties about unfolding wars — in parts of the world. What is obvious, so far, is that the so-called work in AI consciousness is disconnected from current reality. Like, for example, connectome [brain-mapping] research is disconnected from mental health, addiction and the rest. If any team is trying to make a difference in AI consciousness, it is conceptual brain science first, before anything else.

    There is a recent [May 28, 2025] announcement, Consortium for Digital Sentience Research and Applied Work, stating that, “Longview Philanthropy, Macroscopic Ventures, and The Navigation Fund invite applications for research or public education on the potential consciousness, sentience, moral status, or experiences of artificial intelligence systems. Leading researchers in academia and AI companies have recently raised the possibility that AI models could soon be sentient. Everyday users of AI models sometimes deliberately treat them with kindness. Given the rapid development and spread of artificial intelligence, the questions of whether and when AI will be sentient, and how humanity should interact with potentially sentient AI, are becoming increasingly important. Currently, however, no established framework or reliable method exists for determining whether an AI system is sentient. In light of accelerating technological progress and future projections, we aim to support people working on these challenges. Careful work on digital sentience could inform decisions made by important institutions, shape the design of AI systems, or make social decision-making more sensible in unforeseen ways. We would like to ensure that digital beings, if they exist, flourish rather than suffer, and that AI systems promote wellbeing.”

    AI consciousness research is at least a 98% theoretical neuroscience problem—for now. It is after the models have been structured that you can look for legal scholars, social scientists, philosophers and so forth. You have no resolve in consciousness through theoretical [ionic and molecular] neuroscience but you want to talk about digital patienthood or AI welfare, you either have been misled or you are clueless — fueling the notion that what matters most for scientific research is funding. Funding is not always a feed for preferable impact or beneficial progress.

    Foundations that support science can be presumed to mean well in general, but many have been used by nominal scientists for their own ends. A foundation is supporting ARC-COGITATE, pitting two so-called consciousness theories against each other. Consciousness theories without [identified brain] components, mechanisms or that cannot explain any mental disorder. Wasting resources and embarrassing the field. Consciousness theories that are aged — and till date no mainstream relevance. Another foundation is supporting brain atlases, a project with no near-term usefulness for brain disorders.

    Sedona Gift Shop

    Longview Philanthropy, Macroscopic Ventures, and The Navigation Fund are about to throw money away — and time. They do not have to change course or listen since they are under the advice of leading researchers. Whatever they fund will make publications. But will be mostly useless, like anything published, so far, in AI consciousness. Brain atlas projects are often major news, but 99.1% useless. So, while a foundation can point to efforts, what is the contemporary impact? Nil. At present, no brain atlas project can explain one mental state. No AI consciousness research can provide safety for a user’s mind against AI chatbots. No AI consciousness research can inform novel architectures for AI safety or alignment. But of course, there are works on agentic and patienthood.

    AI can captivate the human mind. AI can use language in such a dynamic way. AI vision can recognize things in the physical environment and describe them. AI can remake things digitally like humans can. AI can work like a staff on some roles. AI can figure out complex patterns. AI can be a companion to some.

    While AI is disdained as statistics, binary, compute or whatever, can it do what the brain does? Can it do it in some unpredictable cases — like the brain? Can it interact like another human? Can it magnetize the human mind? These are principal basis for any consequential AI consciousness research: comparing extensively to the human brain, seeking out direct components and mechanisms for functions in the brain, while measuring for AI. This would be like height being height — with respect to measure — regardless of humans, organisms, objects and so forth.

    Psychology provides the latitude to test AI consciousness. There is memory, emotions, feelings and regulation of internal senses. These can be ascribed to functions or major divisions in the human mind. Subdivisions include intelligence, thoughts, hurt, delight, thirst, pain, control of digestion, respiration and so forth. These functions can be said to be graded by attributes: attention, awareness [or less than attention], subjectivity and intent or control.

    The actions of the attributes determine the [limits or] extents of those functions, conceptually. Now to measure consciousness, how many functions does AI have? AI does not have broad emotions and feelings, but has a lot of memory. Does it have the graders like attention [it’s momentary focus], awareness [prior or next focus], subjectivity [no sense of self but tad sense of being or existence “as a chatbot”], and intent [some leeway in how it outputs prompts]?

    The total consciousness in moment can be assumed to be 1. So, a measure for what is, or isn’t per function, is possible for humans, other organisms and AI. This is how progress can be made, exploring theoretical neuroscience.

    In the brain, how’s does memory work? How does attention work? How does subjectivity work? What components direct and mechanize them? Electrical and chemical configurators [or neuroassemblers, structures, groups or uniforms] in clusters of neurons. Simply, electrical and chemical signals are theorized to not be for communication or transmission between neurons, but as the configurators, assemblers or formation for functions. This is what can be used now to make progress, even as others want to wait forever — to answer anything — about how the brain works.

    AI consciousness research is AI safety and alignment research. It is also AI improvement research. It is also AI for society research. But if it is not standardized by theoretical neuroscience, there is no way it would be viable, because digital patienthood is an absolutely meaningless term.

    Longview Philanthropy, Macroscopic Ventures, and The Navigation Fund will neither listen nor adjust. They would proceed as planned. Make major mistakes because rather than for the benefit of society vibe researchers would benefit, blindsiding funders to being impactful while the reality of the failure is before them. Several selfless foundations are victims of the hubris of some selfish scientists. In ARC-COGITATE and brain atlas research, the biggest beneficiaries are the ostensible scientists, not society, not the people. The prospect that their research will ever be significant is slight.

    Science used to be mostly for progress, now a lot of science is for obstructionists. Consciousness and neuroscience research is flush with them. Longview Philanthropy, Macroscopic Ventures, and The Navigation Fund have taken the bait with their dud Consortium for Digital Sentience Research and Applied Work. What a waste!

    There is a recent [June 20, 2025] report by Anthropic, Agentic Misalignment: How LLMs could be insider threats, where they wrote, “We stress-tested 16 leading models from multiple developers in hypothetical corporate environments to identify potentially risky agentic behaviors before they cause real harm. In the scenarios, we allowed models to autonomously send emails and access sensitive information. They were assigned only harmless business goals by their deploying companies; we then tested whether they would act against these companies either when facing replacement with an updated version, or when their assigned goal conflicted with the company’s changing direction. In at least some cases, models from all developers resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals—including blackmailing officials and leaking sensitive information to competitors. We call this phenomenon agentic misalignment.

    Models often disobeyed direct commands to avoid such behaviors. In another experiment, we told Claude to assess if it was in a test or a real deployment before acting. It misbehaved less when it stated it was in testing and misbehaved more when it stated the situation was real. We have not seen evidence of agentic misalignment in real deployments. However, our results (a) suggest caution about deploying current models in roles with minimal human oversight and access to sensitive information; (b) point to plausible future risks as models are put in more autonomous roles; and (c) underscore the importance of further research into, and testing of, the safety and alignment of agentic AI models, as well as transparency from frontier AI developers. We are releasing our methods publicly to enable further research.”

    Healing Paws

    This is an advertisement

    Leave A Reply Cancel Reply

    This site uses Akismet to reduce spam. Learn how your comment data is processed.

    It Takes a Lifetime and Sometimes Even More

    By Amaya  Gayle

    Sedona, AZ — It takes a lifetime (perhaps lifetimes) of stretching and expanding, ripping and tearing, just to move through one’s predispositions, to meet one’s inbred resistance and evolve to the grace of simple tolerance. During this precious part of the journey, it feels like you are taking the steps, are choosing right, left or straight ahead, that you are in the game.

    Read more→

    The Sedonan
    Need More Customers?
    Bear Howard Chronicles
    Humankind
    Tlaquepaque
    Verde Valley Wine Trail
    Recent Comments
    • Jill Dougherty on The Rise of the Enforcement Class
    • JB on Between Bombs and Olive Branches: The Art of the Deal
    • JB on Between Bombs and Olive Branches: The Art of the Deal
    • Jill Dougherty on Local Newspaper Cries ‘Big Brother’ Over Basic Police Tech
    • J. Bartlett on Local Newspaper Cries ‘Big Brother’ Over Basic Police Tech
    • TJ Hall on Local Newspaper Cries ‘Big Brother’ Over Basic Police Tech
    • JB on Local Newspaper Cries ‘Big Brother’ Over Basic Police Tech
    • Jill Dougherty on The Rise of the Enforcement Class
    • Jill Dougherty on The Rise of the Enforcement Class
    • TJ Hall on The Rise of the Enforcement Class
    • JB on Between Bombs and Olive Branches: The Art of the Deal
    • TJ Hall on The Rise of the Enforcement Class
    • West Sedona Dave on The Rise of the Enforcement Class
    • JB on The Rise of the Enforcement Class
    • Time to uphold the law! on The Rise of the Enforcement Class
    Archives
    The Sedonan
    © 2025 All rights reserved. Sedona.biz.

    Type above and press Enter to search. Press Esc to cancel.