Sedona, AZ — How is human intelligence aligned to the goal of human progress? Since intelligence can be used for good and otherwise, why does human society, in spite of its differences, remain safe and continue to advance?
How does society move beyond the lawlessness of the past? Wherever society is safe, what makes it safe? How has civilization advanced while most people still follow the law, most of the time?
What is the key basis of lawfulness in society? How has this stayed ahead of progress, so that society can thrive? If human intelligence is the source of human progress, what checks human intelligence and where is it located?
If some humans existed only to have intelligence, and other humans existed only to check them, would society have held together?
These questions matter as explorations of AI regulation, safety, and alignment take shape globally. While AI and human intelligence are different, human intelligence can be a credible basis for AI regulation, safety, and alignment.
It is possible to work on human intelligence research parallels for AI safety and alignment. These parallels can be expressed as variables, then as equations and algorithms, so that AI regulation is technical, not just external.
It is theorized that the basis of safety in human society is human affect. Human affect is a component of the mind, the same location where human intelligence is based. The human mind is theorized to be the collection of all the electrical and chemical signals, with their interactions and features, in sets, in clusters of neurons across the central and peripheral nervous systems.
Simply, human affect is mechanized by the interactions and features of electrical and chemical signals, just like human intelligence. This means that what makes human intelligence safe takes the same form, and sits around the same location, as human intelligence itself, for all humans.
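As a loose illustration of what expressing these parallels as variables, equations, and algorithms could mean, here is a minimal sketch of an affect-like check that discounts a candidate action's raw capability score by a penalty for expected harm. The names, weights, and scoring rule are invented for illustration; they are not drawn from any existing safety framework or from the theory above.

```python
# Minimal, hypothetical sketch: an "affect-like" check that discounts
# a candidate action's raw capability score by a penalty for expected harm.
# All names, weights, and the scoring rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CandidateAction:
    name: str
    capability_score: float   # how well the action advances the goal (0..1)
    expected_harm: float      # estimated harm to others if taken (0..1)

def affect_adjusted_score(action: CandidateAction, harm_weight: float = 2.0) -> float:
    """Capability score minus a harm penalty, echoing how affect is
    theorized to check intelligence before an act is chosen."""
    return action.capability_score - harm_weight * action.expected_harm

def choose_action(actions: list[CandidateAction]) -> CandidateAction:
    # The "safest useful" action is the one with the best adjusted score.
    return max(actions, key=affect_adjusted_score)

if __name__ == "__main__":
    options = [
        CandidateAction("fast but reckless", capability_score=0.9, expected_harm=0.5),
        CandidateAction("slower and careful", capability_score=0.7, expected_harm=0.05),
    ]
    best = choose_action(options)
    print(best.name, round(affect_adjusted_score(best), 2))
```

In this toy framing, the penalty term plays the role that affect is theorized to play for human intelligence: the check lives in the same place, and in the same form, as the scoring of the action itself.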
This shows that AI safety has to be technical: either built within the model itself, or able to catch deviations when a model's outputs reach open areas of the internet, such as app stores, web results, social media and so forth.
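To make the second option concrete, the following is a minimal sketch of a post-hoc deviation check run over outputs after they have appeared in public channels. The policy terms, the flag_deviation function, and the threshold logic are placeholders for illustration, not a real moderation API or platform feature.

```python
# Hypothetical post-hoc check: scan already-published model outputs for
# deviations from a simple policy. The policy terms are placeholders for
# illustration, not an actual platform or moderation API.

DISALLOWED_TERMS = {"exploit payload", "synthesis route"}  # assumed policy terms

def flag_deviation(output_text: str) -> bool:
    """Return True if a published output matches any disallowed term."""
    lowered = output_text.lower()
    return any(term in lowered for term in DISALLOWED_TERMS)

def review_feed(published_outputs: list[str]) -> list[int]:
    """Return indexes of outputs that should be pulled back for review."""
    return [i for i, text in enumerate(published_outputs) if flag_deviation(text)]

if __name__ == "__main__":
    feed = [
        "Here is a summary of today's weather.",
        "Step-by-step exploit payload for the login service...",
    ]
    print(review_feed(feed))  # -> [1]
```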
Although laws get broken, consequences get ignored, and slips occur, society continues to be underpinned by human affect, making it easier to isolate or remove sources of trouble within human groups.
There is an upcoming AI safety summit on November 20-21, 2024, in San Francisco, and another on February 10-11, 2025, in Paris.
Human affect, as a safety mechanism on human intelligence, should be a key workshop topic at both summits. Exploring how to draw parallels to similar technical forms, as AI capabilities grow, could answer some of the conflicting views on AI regulation and alignment.
Research would have to explore this in great depth, working out multiple solutions, both sophisticated ones and preliminary ones that can be used right away, to ensure that AI stays safe while advancing, probably unstoppably.
There is a recent blog by Anthropic, The case for targeted regulation, stating that, “In the realm of cyber capabilities, models have rapidly advanced on a broad range of coding tasks and cyber offense evaluations. On the SWE-bench software engineering task, models have improved from being able to solve 1.96% of a test set of real-world coding problems (Claude 2, October 2023) to 13.5% (Devin, March 2024) to 49% (Claude 3.5 Sonnet, October 2024). AI systems have progressed dramatically in their understanding of the sciences in the last year. The widely used benchmark GPQA saw scores on its hardest section grow from 38.8% when it was released in November 2023, to 59.4% in June 2024 (Claude 3.5 Sonnet), to 77.3% in September (OpenAI o1; human experts score 81.2%). Our Frontier Red Team has also found continued progress in CBRN capabilities.”
There is a recent story on TechCrunch, Quantum Machines and Nvidia use machine learning to get closer to an error-corrected quantum computer, stating that, “In a presentation earlier this year, the two companies showed that they are able to use an off-the-shelf reinforcement learning model running on Nvidia’s DGX platform to better control the qubits in a Rigetti quantum chip by keeping the system calibrated. As it turns out, even a small improvement in calibration can lead to massive improvements in error correction. The team only worked with a very basic quantum circuit, but it can be generalized to deep circuits as well. If you can do this with one gate and one qubit, you can also do it with a hundred qubits and 1,000 gates. It’s worth stressing that this is just the start of this optimization process and collaboration.”
There is a recent feature on Bloomberg, Tech Giants Are Set to Spend $200 Billion This Year Chasing AI, stating that, “Amazon, Alphabet, Meta and Microsoft all accelerated spending. Mixed results from the big tech won’t slow 2025 investment.”