    AI Safety Summit: What is Alignment, for Human Intelligence?

    November 3, 2024
    By David Stephen

    Sedona, AZ — How is human intelligence aligned with the goal of human progress? Since intelligence can be used for good and otherwise, how is it that human society, in spite of differences, remains safe and continues to advance?

    How does society evolve past the unlawfulness of earlier eras? Whenever society is safe, what makes it safe? How has civilization advanced while most people still follow the law, most of the time?

    What is the key basis of lawfulness in society? How has this stayed ahead of progress, so that society can thrive? If human intelligence is the source of human progress, what checks human intelligence, and where is that check located?

    If some humans existed only to supply intelligence, and other humans existed only to check them, would society have held together?

    These questions matter as explorations of AI regulation, safety, and alignment take shape globally. While AI and human intelligence are different, human intelligence can serve as a credible basis for AI regulation, safety, and alignment.

    It is possible to work out research parallels between human intelligence and AI safety and alignment. These parallels can be expressed as variables, folded into equations, and then into algorithms, so that AI regulation is technical, not just external.
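    As a loose illustration of what "variables, then equations, then algorithms" could mean, here is a minimal sketch in Python. Every name and number in it is invented for this example; it simply treats affect as a scalar gate on a raw capability score, so that output is released only when both are sufficient.

        # Toy sketch: a human-intelligence parallel expressed as variables,
        # an equation, and a small algorithm. All quantities are hypothetical.

        def effective_output(capability: float, affect_gate: float) -> float:
            """Equation: effective output = capability scaled by an affect-like gate.

            capability: raw problem-solving strength, in [0, 1].
            affect_gate: safety modulation, in [0, 1]; low affect inhibits output.
            """
            return capability * affect_gate

        def run_with_check(capability: float, affect_gate: float,
                           threshold: float = 0.5):
            # Algorithm: release output only when the gated score clears a threshold.
            score = effective_output(capability, affect_gate)
            return ("release" if score >= threshold else "withhold", score)

        print(run_with_check(0.9, 0.8))  # ('release', 0.72)
        print(run_with_check(0.9, 0.3))  # ('withhold', score of about 0.27)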

    It is theorized that the basis of safety in human society is human affect. Human affect is a component of the mind, the same location where human intelligence is based. The human mind is theorized to be the collection of all the electrical and chemical signals, with their interactions and features, in sets, in clusters of neurons across the central and peripheral nervous systems.

    Simply, human affect is mechanized by the interactions and features of electrical and chemical signals, just as human intelligence is. This means that what makes human intelligence safe takes the same form, and sits around the same location, where human intelligence is based, for all humans.

    It is not that intelligence is in the mind while affect is external, or that some people have intelligence and others have affect. The quality of intelligence [learning or presentation], in a moment, could be induced or inhibited by affect [emotion or feeling]. All humans have both, making it possible, at least, to be susceptible to similar experiences, or to understand what it means when others have them.

    This suggests that AI safety has to be technical, such that it is built within the model, or that deviations can be caught when the model outputs results in open areas of the internet, such as app or play stores, web results, social media, and so forth.
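    A minimal sketch of what "caught for deviations" could look like at the output stage, assuming a hypothetical scorer and an invented threshold; a real deployment would use trained classifiers rather than a keyword list.

        # Toy sketch: check model outputs for deviations before they reach
        # open channels. The flagged terms and threshold are placeholders.

        def deviation_score(text: str) -> float:
            # Hypothetical scorer: fraction of flagged terms found in the output.
            flagged = {"exploit", "weapon", "bypass"}
            words = set(text.lower().split())
            return len(words & flagged) / len(flagged)

        def publish(text: str, limit: float = 0.3) -> str:
            # Hold anything whose deviation score exceeds the limit.
            if deviation_score(text) > limit:
                return "held for review"
            return "published"

        print(publish("How to bake sourdough bread"))            # published
        print(publish("bypass the filter to exploit a weapon"))  # held for review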


    Although laws get broken in several ways, consequences get ignored, and slips occur, society continues to be anchored by human affect, making it possible to isolate or remove sources of trouble within human groups.

    There is an upcoming AI safety summit on November 20-21, 2024, in San Francisco, and another on February 10-11, 2025, in Paris.

    Human affect, as a safety mechanism on human intelligence, should be a key workshop topic at both summits. Exploring how to draw these parallels into similar technical forms, as AI capabilities grow, could answer some of the conflicting views on AI regulation and alignment.

    Research would have to explore this in great depth, working out multiple solutions, both sophisticated ones and preliminary ones, to ensure that AI is safe while advancing, probably unstoppably.

    There is a recent blog by Anthropic, The case for targeted regulation, stating that, “In the realm of cyber capabilities, models have rapidly advanced on a broad range of coding tasks and cyber offense evaluations. On the SWE-bench software engineering task, models have improved from being able to solve 1.96% of a test set of real-world coding problems (Claude 2, October 2023) to 13.5% (Devin, March 2024) to 49% (Claude 3.5 Sonnet, October 2024). AI systems have progressed dramatically in their understanding of the sciences in the last year. The widely used benchmark GPQA saw scores on its hardest section grow from 38.8% when it was released in November 2023, to 59.4% in June 2024 (Claude 3.5 Sonnet), to 77.3% in September (OpenAI o1; human experts score 81.2%). Our Frontier Red Team has also found continued progress in CBRN capabilities.”
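    Taking only the figures quoted above, the arithmetic underlines how steep the reported curve is: roughly a twenty-five-fold improvement on SWE-bench in about a year, and a GPQA score within about four points of the quoted human-expert level.

        # Figures exactly as quoted from the Anthropic post above.
        swe_bench = {
            "Claude 2 (Oct 2023)": 1.96,
            "Devin (Mar 2024)": 13.5,
            "Claude 3.5 Sonnet (Oct 2024)": 49.0,
        }
        print(swe_bench["Claude 3.5 Sonnet (Oct 2024)"]
              / swe_bench["Claude 2 (Oct 2023)"])
        # -> 25.0, i.e. a ~25x improvement in roughly a year

        gpqa_hardest = {
            "at release (Nov 2023)": 38.8,
            "Claude 3.5 Sonnet (Jun 2024)": 59.4,
            "OpenAI o1 (Sep 2024)": 77.3,
        }
        print(81.2 - gpqa_hardest["OpenAI o1 (Sep 2024)"])
        # -> ~3.9 points below the quoted human-expert score of 81.2%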

    There is a recent story on TechCrunch, Quantum Machines and Nvidia use machine learning to get closer to an error-corrected quantum computer, stating that, “In a presentation earlier this year, the two companies showed that they are able to use an off-the-shelf reinforcement learning model running on Nvidia’s DGX platform to better control the qubits in a Rigetti quantum chip by keeping the system calibrated. As it turns out, even a small improvement in calibration can lead to massive improvements in error correction. The team only worked with a very basic quantum circuit, but it can be generalized to deep circuits as well. If you can do this with one gate and one qubit, you can also do it with a hundred qubits and 1,000 gates. It’s worth stressing that this is just the start of this optimization process and collaboration.”
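    The story does not publish the companies’ code, so the sketch below is only a generic stand-in under stated assumptions: a noisy “fidelity” readout with an unknown optimum, and a simple hill-climbing loop in place of the reinforcement learning controller, nudging one hypothetical pulse parameter toward better calibration.

        import random

        def measured_fidelity(pulse_amp: float) -> float:
            # Hypothetical hardware response: fidelity peaks at an unknown
            # optimum (0.62 here) and is read out with a little noise.
            return 1.0 - (pulse_amp - 0.62) ** 2 + random.gauss(0, 0.001)

        # Hill-climbing calibration loop, a stand-in for the RL controller.
        amp, step = 0.5, 0.02
        best = measured_fidelity(amp)
        for _ in range(200):
            candidate = amp + random.choice([-step, step])
            f = measured_fidelity(candidate)
            if f > best:  # keep any change that improves measured fidelity
                amp, best = candidate, f

        print(round(amp, 2))  # settles near the optimum, ~0.62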

    There is a recent feature on Bloomberg, Tech Giants Are Set to Spend $200 Billion This Year Chasing AI, stating that, “Amazon, Alphabet, Meta and Microsoft all accelerated spending. Mixed results from the big tech won’t slow 2025 investment.”
