Sedona.Biz – The Voice of Sedona and The Verde ValleySedona.Biz – The Voice of Sedona and The Verde Valley
    Sedona.Biz – The Voice of Sedona and The Verde Valley Sedona.Biz – The Voice of Sedona and The Verde Valley
    • Home
    • Sedona
      • Arts and Entertainment
      • Bear Howard Chronicles
      • Business Profiles
      • City of Sedona
      • Elections
      • Goodies & Freebies
      • Mind & Body
      • Sedona News
    • Opinion
    • Real Estate
    • About
    • The Sedonan
    • Advertise
    • Sedona’s Best
    Sedona.Biz – The Voice of Sedona and The Verde ValleySedona.Biz – The Voice of Sedona and The Verde Valley
    Home»Metaphysics»AI Alignment or LLMs Safety May Not Work Without Biology  Research 
    Metaphysics

    AI Alignment or LLMs Safety May Not Work Without Biology  Research 

    June 6, 20251 Comment
    Facebook Twitter Pinterest LinkedIn Email Reddit WhatsApp
    shutterstock 1188127132
    Share
    Facebook Twitter LinkedIn Pinterest Email Reddit WhatsApp

    By David Stephen

    Intelligence can be described as an accelerator. Consciousness can be described as a break.

    Although intelligence is a division of consciousness, it appears that nature did not intend for intelligence to standalone without consciousness as a regulator.

    AI is emerging into a formidable intelligence without several aspects of human consciousness.

    Human consciousness became a broad brush to check human intelligence, determining how society held on.

    Consequences or penalties often have an effect on consciousness, so it is healthy to avoid breaking several rules, for  an individual, because it can be deeply affective.

    Consciousness is the sauce of human compliance and caution. Consciousness is the interpretation of reality that could make life cool or otherwise. Intelligence is what builds society. Consciousness is what preserves it, so to speak.

    AI is speeding up without consciousness, becoming a key risk factor. Even if AI does not have consciousness, it’s safety against possible risks and threats must be underscored by biology.

    There is a new [May 30, 2025] story on Live Science, OpenAI’s ‘smartest’ AI model was explicitly told to shut down — and it refused, stating that,  “An artificial intelligence safety firm has found that OpenAI’s o3 and o4-mini models sometimes refuse to shut down, and will sabotage computer scripts in order to keep working on tasks.”

    There is a recent [May 29, 2025] preprint on arXiv, Darwin Godel Machine: Open-Ended Evolution of Self-Improving Agents, stating that “We introduce the Darwin Gödel Machine (DGM), a self-improving system that iteratively modifies its own code (thereby also improving its ability to modify its own codebase) and empirically validates each change using coding benchmarks. It grows the archive by sampling an agent from it and using a foundation model to create a new, interesting, version of the sampled agent. Empirically, the DGM automatically improves its coding capabilities (e.g., better code editing tools, long-context window management, peer-review mechanisms), increasing performance on SWE-bench from 20.0% to 50.0%, and on Polyglot from 14.2% to 30.7%.”

    AI safety and alignment are efforts that would eventually be based on consciousness as means to ensure how AI can be penalized and know or become aware of consequences that must prevent it from running riot, in little or major ways. Trauma, regret, depression, and others would be good for AI.

    This means that seeking out biological parallels along those lines for AI safety would be critical.

    Theoretical neuroscience research can offer novel models and approaches towards algorithmic development for AI safety and alignment, amid the evolvement of these systems.

    Sedona Gift Shop

    AI Safety as a Biological Problem

    There is a recent [June 5, 2025] guest essay in The NYTimes, Anthropic C.E.O.: Don’t Let A.I. Companies off the Hook, stating that, “But as models become more powerful, corporate incentives to provide this level of transparency might change. That’s why there should be legislative incentives to ensure that these companies keep disclosing their policies. Having this national transparency standard would help not only the public but also Congress understand how the technology is developing, so that lawmakers can decide whether further government action is needed.”

    If the CEO of a major AI company is advocating for legislation around transparency towards AI safety, it could indicate that the AI safety industry is, maybe, fixated on the wrong approach.

    Legislation or transparency would hardly be conclusive given the loopholes in the last three years of AI.

    There are already legislations on fake images and videos. They have not stopped harms. There are outlawed misuse cases of AI that continue to fester because digital, in general, and AI, more powerfully, exceeds those checks.

    There are low barrier and open cases that legislation might help, but progress in AI safety, at least for the known risks would be to model how affect checks cognition, for humans and other organisms.

    All AI safety companies should have a department of biology. This department would prospect pathways to equivalents on how AI would have similar to biological experiences for the sake of checks and caution.

    It would mean that models [and their outputs] without these standards may not be allowed in certain general internet areas like app stores, web searches, social media, IP sources and so forth.

    AI is using human intelligence and doing excellently. So why can’t it be explored for a similar mechanism of human consciousness towards safety?

    Legislation will not be enough. Policy that is not technically sourced from biology will also be inadequate.

    AI safety and alignment are principally biological research, just like intelligence is biologically sourced.

    Healing Paws

    This is an advertisement

    1 Comment

    1. Grant Castillou on June 7, 2025 11:07 am

      It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

      What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

      I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

      My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow

      Reply

    Leave A Reply Cancel Reply

    This site uses Akismet to reduce spam. Learn how your comment data is processed.

    If I Were Curtis Sliwa
    By Tommy Acosta

    One of my guilty little pleasures is imagining what I would do if I was in someone else’s shoes, especially politicians. In this essay I would love to jump into the shoes of Curtis Sliwa, a former New York City vigilante who founded the Guardian Angels and is now running as a Republican for mayor of his city.

    Read more→

    The Sedonan
    House of Seven Arches
    Need More Customers?
    Bear Howard Chronicles
    Humankind
    Tlaquepaque
    Verde Valley Wine Trail
    Recent Comments
    • Jill Dougherty on Cottonwood, Verde Valley Residents Join Largest Protest Yet to Reject Abuses of Power
    • JB on Film Festival presents ‘Good Morning, Vietnam’ outdoors under the stars July 3
    • JB on Between Bombs and Olive Branches: The Art of the Deal
    • JB on If I Was Curtis Sliwa
    • Mark Harris on The Attics of Conscience — What Could Soon Happen in Sedona and Across America
    • Daniel J Sullivan MDJD on If I Was Curtis Sliwa
    • Jill Dougherty on If I Was Curtis Sliwa
    • Blue on Between Bombs and Olive Branches: The Art of the Deal
    • Blue on The Attics of Conscience — What Could Soon Happen in Sedona and Across America
    • Charles H Blum on License to Spy
    • TJ Hall on If I Was Curtis Sliwa
    • JB on If I Was Curtis Sliwa
    • Stephanie lenore Maciel on The Attics of Conscience — What Could Soon Happen in Sedona and Across America
    • Michael Schroeder on The Attics of Conscience — What Could Soon Happen in Sedona and Across America
    • Michael Schroeder on License to Spy
    Archives
    The Sedonan
    © 2025 All rights reserved. Sedona.biz.

    Type above and press Enter to search. Press Esc to cancel.