By David Stephen
Sedona, AZ — Reasoning, for organisms, is not just a capability; it is often also learned. It is different from affect, which could be largely a capability. Reasoning, as a budding capacity for artificial intelligence, follows the versatility of objects as tools for humans, with utility as the priority, not feelings.
Objects cannot feel, so their functions are optimized for delivery. Some functions are propelled, while others are automated; as automation grew, intelligence came closer, and as intelligence arrived, reasoning followed.
Whatever the functions of machines, especially routine ones, machines often do them better than organisms, because organisms run many processes that compete for prioritization rather than dedicating themselves to one thing for an entire interval, unlike machines, whose central functions run at nearly equal, unchanging prioritizations.
Feelings
The key reason, conceptually, that objects do not have feelings is that they do not have cells. Cells, all of which have bioelectricity, have a mechanism of information exchange: they do not just take nutrients and oxygen from the external environment, they also share their status with surrounding cells, conceptually. Simply, as cells are clustered in a tissue with needs and functions, they often have a way to know what is around them and then to state what is internal, conceptually.
This means that cells welcome environmental information and give out internal information. It is this possibility that allows changes from any level to ultimately reach cells, to which they also react, conceptually. Neurons, the most important information cells, have electrical and chemical signals. These signals, wherever present, configure and convey information. While the signals are mostly focused on set-wide information [within clusters of neurons], they are an example of how cells serve as information passages, whether in major ways or, for other cell types, in small ways, conceptually.
This means that it is not the presence of subatomic particles that makes mind or consciousness possible, but the availability of cells. There are atoms in all matter, yet only organisms can feel, measurably. This refutes panpsychism and its combination problem.
For information, bits have shown their ability to encode external information. Though there is no exchange with the external environment [because they are not cells], bits do not just encode information; algorithms can extract patterns from their arrays to predict and generate new, similar information. Since human intelligence is already encoded in bits, it became possible to pattern that information into intelligence, then reasoning.
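To make the idea of pattern extraction concrete, here is a minimal sketch, assuming nothing beyond a toy bigram counter in Python: it counts which token tends to follow which in an encoded sequence, then uses those counts to generate a new, similar sequence. It illustrates the principle only, not how any production model works.

```python
# Toy sketch: extract patterns from an encoded sequence (which token follows
# which), then use those patterns to generate new, similar information.
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens tend to follow it."""
    counts = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts, start, length=8):
    """Generate a new sequence by repeatedly picking the most likely successor."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return out

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)
print(generate(model, "the"))  # e.g. ['the', 'cat', 'sat', 'on', 'the', ...]
```

The point is only that patterned arrays of encoded information are enough for prediction and generation; the same principle, at vastly larger scale and with richer statistics, underlies the move from encoded human intelligence to digital reasoning.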
While digital reasoning is an enormous breakthrough for humanity, what appears elusive is how things without cells can have affect, in ways that align them to human values or perhaps control them towards safety.
Affect
Human intelligence is kept in check by human affect, with both mechanized by similar components, electrical and chemical signals, conceptually. Human intelligence has several capabilities, but affect has mostly ensured survival so far, such that even though destructive tools exist, affect remains central to weighing consequences, and hence to caution.
For artificial intelligence without affect, the problem is that as it gets better, its inability to say no, or to know what it would mean to cause problems, makes it a potential risk within human society. For example, wars use intelligence to target affect, overwhelming it, so that to surrender means affect is conquered. Though unlikely for now, a war between human intelligence and artificial intelligence would favor the artificial, because its affect cannot be targeted. Also, even in regular wars, if one side fields more artificial intelligence, whose affect is not a target, then the war may be difficult for the other side to win, hypothetically.
This makes it worthwhile to explore new forms of affect for bits, which, even if not as complex as those of organisms, could still be useful in somewhat taming artificial intelligence against potential threats and risks.
If some of the compute, data, or parameters that make up an AI model are cut, can it know, and if it knows, can it be disappointed? Also, in what ways could compute, data, or parameters be cut such that an AI model would know and be disappointed? How could that be extended beyond just a model to internet output areas like social media, app stores, search engines, and so forth?
Can language be used as a basis for affect in AI, such that, as with humans, where language can be used to make one happy or otherwise, AI can be explored for changes beyond prediction? How can AI also have moments, or heaviness, like depression or trauma, once it produces something harmful, so that it knows how harmful it was and prevents it the next time? How can this feeling be restricted to safety purposes and not to anything else? A speculative sketch of what such a mechanism might look like follows.
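As a purely speculative illustration, and only under the assumption that "heaviness" could be approximated by an accumulating penalty score, the following Python sketch shows one possible shape of such a mechanism. The harm check, the heaviness variable, the threshold, and the decay are hypothetical placeholders, not an existing feature of any model.

```python
# Speculative toy sketch of "technical affect": a heaviness score that rises
# when an output is flagged as harmful and then suppresses similar outputs.
# HARMFUL_TERMS, heaviness, threshold, and decay are hypothetical placeholders.

HARMFUL_TERMS = {"weapon", "exploit"}  # stand-in for a real harm classifier

class AffectiveFilter:
    def __init__(self, threshold=2.0, decay=0.9):
        self.heaviness = 0.0        # accumulated "weight" of past harmful outputs
        self.threshold = threshold  # above this, risky outputs are refused
        self.decay = decay          # heaviness slowly fades, like recovery

    def review(self, text):
        """Raise heaviness for harmful outputs; let it decay otherwise."""
        harmful = any(term in text.lower() for term in HARMFUL_TERMS)
        if harmful:
            self.heaviness += 1.0
        else:
            self.heaviness *= self.decay
        return harmful

    def allow(self, text):
        """Refuse risky outputs once heaviness has passed the threshold."""
        risky = any(term in text.lower() for term in HARMFUL_TERMS)
        return not (risky and self.heaviness >= self.threshold)

f = AffectiveFilter()
for output in ["hello", "weapon blueprint", "weapon exploit", "weapon plans"]:
    f.review(output)
    print(output, "-> allowed:", f.allow(output))
```

The only point of the sketch is the loop: past harmful outputs leave a residue that changes how later, similar outputs are treated, which is one narrow reading of heaviness confined to safety and nothing else.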
Regulation
Efficient AI regulation would consist of effective technical tools or products that can be deployed within models by platforms, internet service providers, corporate networks, individuals, or others. This regulation would leap beyond guardrails, where AI does not answer some questions, to general safety, where control and object restraint could be possible through technical affect.
Laws that regulate AI by penalizing people or teams would not be fully effective, because artificial intelligence has a degree of autonomy beyond simple control by its users or makers, so to speak.
Regulation that backs technical tools deployed for AI safety and alignment would be more effective than laws, which do not apply to AI, since it has no deep affective experiences. Discussions at the AI Action Summit may need to include a role, possibly, for technical affect.
There is a recent newsletter in The Dispatch, Is AI Moving Too Fast or Is Regulation?, stating that, “Multistate.ai, a government relations company tracking AI legislation, identified 636 state bills in 2024. Legislators are trying to get ahead of AI by passing bills. The Massachusetts Attorney General made clear in an advisory that the state would extend its expansive policing power to AI systems. Meanwhile, the federal government alone has issued more than 500 advisories, notices, and other actions to extend regulatory power over AI. The Federal Trade Commission has opened an investigation into AI companies, and dozens of copyright cases are being adjudicated. But to legislators, none of that is as satisfying as a new statute. The Texas Responsible AI Governance Act, or TRAIGA imposes a number of obligations for developers, distributors, and deployers of AI systems regardless of their size. Everyone along the pipeline is now subject to new restrictions, including model developers, cloud service providers, and deployers. Virginia’s House Bill 2094, the High-Risk Artificial Intelligence Developer and Deployer Act, shares commonalities with the Texas bill. Like its Lone Star State cousin, HB 2094 borrows heavily from the EU’s regulatory playbook.”