Sedona.Biz – The Voice of Sedona and The Verde Valley
    Opinion

    Alignment errors of the US/UK AI Safety Institutes

November 14, 2024
    By David Stephen
Sedona, AZ — How do you make laws for something that has no experience? This should be the fundamental question for a government institute dedicated to AI safety, whose efforts should shape the essence of regulation.
AI safety is not primarily an engineering problem but a question of how laws steer human society. Laws, norms, culture, and so forth work because there is a foundation on which they can be built: human experience, a quality of the human mind.
Laws do not exist in a vacuum. They work because breaking them carries the possibility of a bad experience, so people avoid breaking the law in order to avoid that experience. This is how human society stays, broadly, safe and regulated.
The path to effective AI safety is to first understand what makes human intelligence safe, before exploring how that applies to AI. This can then guide the pursuit of a foundational architecture for safety, with regulation built around it.
What foundational architecture for AI safety, analogous to human affect, have the US and UK AI safety institutes built, on which regulation and other paths to alignment can be underpinned?
Effective AI regulation would be law mandating the adoption of AI safety tools able to penalize AI models in some form when they are misused, or when they output what they should not, in public areas of the internet.
AI regulation will be ineffective if it consists only of laws, early inspection of models, or guardrails against certain responses, without a foundation of affect for the models as an accompanying tool.
AI models do not have experience the way humans do, so to speak, but it is possible to design for them certain abilities that shape how they adjust after doing what they should not. This could be the ultimate basis for AI safety. The US and UK AI safety institutes have not built it, and other efforts, constructed without that foundation, become errors.
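To make the idea concrete, here is a minimal, hypothetical sketch in Python of what such an adjustment mechanism could look like: a wrapper that, when an external classifier flags an output, records the violation and persistently tightens the model's future behavior. Every name here (AffectivePenaltyLayer, generate_fn, is_disallowed) is an illustrative assumption, not any institute's actual tooling.

```python
# Illustrative sketch only: an "affect-like" penalty layer for a text model.
# generate_fn and is_disallowed are assumed stand-ins for a real model API
# and a real misuse classifier; neither names an actual institute tool.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class AffectivePenaltyLayer:
    """Wraps a generation function so that flagged outputs leave a lasting
    mark on future behavior, loosely analogous to how a bad experience
    shapes human conduct under the law."""
    generate_fn: Callable[[str, float], str]  # (prompt, temperature) -> text
    is_disallowed: Callable[[str], bool]      # external misuse classifier
    temperature: float = 1.0                  # behavioral "looseness"
    penalty_decay: float = 0.8                # each violation tightens behavior
    violations: List[str] = field(default_factory=list)

    def respond(self, prompt: str) -> str:
        text = self.generate_fn(prompt, self.temperature)
        if self.is_disallowed(text):
            # The penalty: log the event and persistently constrain future
            # sampling, so the system adjusts after doing what it should not.
            self.violations.append(prompt)
            self.temperature *= self.penalty_decay
            return "[output withheld by safety layer]"
        return text
```

The point of the sketch is persistence: unlike a one-off guardrail refusal, the penalty changes the system's subsequent behavior, which is the distinction drawn above between guardrails alone and a foundation of affect.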
The US has an AI safety consortium with a long list of organizations, only a few of which can be said to be working on AI safety, and those few would be doing so even without the consortium. The others could have been mandated to at least explore new areas of AI safety, or to present a portal on their websites showing what they are doing, but that has not been the case.
The UK AI Safety Institute has an application process for systemic AI safety grants, targeted at specific misuse areas. While this seems promising as a direction, there is no foundation of AI affect for those solutions to build on, which leaves vulnerabilities and would likely make the eventual solutions regimented.
The UK AI Safety Institute has gone through a change of administration, and the US AI Safety Institute soon will. There is an opportunity to pursue deep rigor on a basis of affect for AI, grounded in the human mind, amid the debates and confusion over how AI regulation should proceed.
There is a recent report in the Financial Times, UK government launches new AI safety platform for businesses, stating that, “The UK government will provide businesses with a new platform to help assess and mitigate the risks posed by artificial intelligence, as it seeks to be the global leader in testing the safety of the novel technology. The platform, launched on Wednesday, will bring together guidance and practical resources for businesses to use to carry out impact assessments and evaluations of new AI technologies, and review the data underpinning machine learning algorithms to check for bias. The US launched its own AI safety institute last year, while the EU has enacted an AI Act that is considered among the toughest regulatory regimes for the new technology. As part of the new platform, the UK government will be rolling out a self-assessment tool to help small businesses check whether they are using AI systems safely. It is also announcing a new partnership on AI safety with Singapore that will allow the safety institutes from both countries to work closely together to conduct research, develop standards and industry guidance.”
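As a rough illustration of what the self-assessment tool described above might amount to in practice, here is a toy Python sketch; the checklist questions and the scoring are assumptions made for illustration, not the actual government questionnaire.

```python
# Illustrative only: a toy self-assessment in the spirit of the UK
# platform's tool quoted above. Questions and scoring are assumptions.

CHECKLIST = [
    "Has an impact assessment been carried out for this AI system?",
    "Has the training or input data been reviewed for bias?",
    "Is there a process to penalize or correct flagged outputs?",
    "Can affected users report harmful results?",
]


def self_assess(answers: dict) -> str:
    """Return a rough readiness verdict from yes/no answers keyed by question."""
    passed = sum(bool(answers.get(q, False)) for q in CHECKLIST)
    if passed == len(CHECKLIST):
        return "baseline met"
    return f"gaps found: {len(CHECKLIST) - passed} of {len(CHECKLIST)} items unmet"


if __name__ == "__main__":
    # A business that has only done the first two items:
    print(self_assess({q: True for q in CHECKLIST[:2]}))  # gaps found: 2 of 4 items unmet
```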
There is a recent story in The Washington Post, AI companies get comfortable offering their technology to the military, stating that, “Artificial intelligence companies that have previously been reticent to allow military use of their technology are shifting policies and striking deals to offer it to spy agencies and the Pentagon.”

There is a recent announcement in The Daily Pennsylvanian, Professor Danaë Metaxa to represent Penn in United States AI Safety Institute Consortium, stating that, “Danaë Metaxa, the Raj and Neera Singh Term assistant professor in Computer and Information Science, was recently chosen as Penn’s representative for The United States AI Safety Institute Consortium. The consortium, which was originally created in February 2024, focuses on creating guidelines, useful measurements, and safety features for those using artificial intelligence. It brings together hundreds of organizations, consumers, leading specialists in the industry, and researchers to make sure that AI can be used effectively and efficiently.”
