Morality: Should Laws be Made for AI Rights, Welfare and Feelings?
By David Stephen
There is a recent interactive feature by Nature, What’s so special about the human brain?, stating that, “The human brain is up to three times larger in volume than the brains of chimpanzees, gorillas and many extinct human relatives. Researchers often use a ratio called the encephalization quotient (EQ) to get an idea of how much larger or smaller an animal’s brain is compared with what would be expected given its body size. The EQ is 1.0 if the brain to body mass ratio meets expectations. But brain size and neuron number aren’t everything; some animals whose brains look and develop differently to mammals — such as ravens and other members of the crow family — can learn or remember impressively. Even compared with the chimp, the human neurons are longer and make more connections with each other. The cortical layers they live in are thicker than those of the chimp.”
“When comparing gene expression across species, many differences turn out to be related to how the connections between neurons — called synapses — connect with and signal to each other.”
The question of what is special about the human brain can also be framed as: what makes the human brain the most advanced organ known? Elephants, according to the Nature interactive, have 257 billion neurons, while humans have 86 billion.
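The neuron counts above show why raw size is not the whole story, which is what the encephalization quotient (EQ) from the Nature quote tries to correct for. As a minimal sketch, assuming Jerison’s classic mammalian allometric fit for expected brain mass (E = 0.12 × P^(2/3), masses in grams; other fits exist and give different absolute EQ values), with rough illustrative average masses:

```python
def encephalization_quotient(brain_g: float, body_g: float) -> float:
    """EQ: observed brain mass divided by the brain mass expected
    for the animal's body size, using Jerison's mammalian fit
    (an assumption; the Nature piece does not specify a formula)."""
    expected_brain_g = 0.12 * body_g ** (2 / 3)
    return brain_g / expected_brain_g

# Approximate average masses, for illustration only.
human_eq = encephalization_quotient(1350, 65_000)   # well above 1.0
chimp_eq = encephalization_quotient(400, 45_000)    # above 1.0, but lower
```

Under this fit the human value comes out several times higher than the chimpanzee’s, matching the quote’s point that an EQ of 1.0 marks a brain exactly as large as body size predicts.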
It is theorized that the human brain is more advanced because of the interactions and features of its electrical and chemical signals. Simply, the human brain is more complex because of the human mind. The human mind is theorized to be the collection of all the electrical and chemical signals of neurons, with all their interactions and features, in sets, in clusters of neurons, across the central and peripheral nervous systems.
This means that the interactions of the signals and how those interactions are graded by the electrical and chemical signals make the human mind advanced. The signals interact, but each of the signals has states that grade those interactions. This makes it possible, conceptually, for humans to have language, learn subjects and so forth.
The human mind hosts human intelligence. Human intelligence is theorized to be defined by how human memory is used. The human mind also hosts human affect, the basis of experiences, or the subjective interpretations of the internal and external world.
The self, attention, awareness and intent are labels for graders of signals, theorized to be made possible by side-to-side volume changes, prioritization, pre-prioritization and a space of constant diameter, respectively.
All experiences are conceptually the qualification of the interactions between electrical and chemical signals. This is what defines consciousness and sentience. Exploring how far AI might come would be to model parallels with the human mind.
Humans exist, and existence means there are experiences. Experiences are affective, and affect is the basis for several laws. Laws are often made to ensure that negative affect for others is minimized. Affect also ensures that when penalties are enforced, the experience discourages repetition, by the offender or by others.
AI Rights, Welfare and Feelings
Several laws in human society are not directly for humans, but for things that may become affective to humans. Across eras, shifts in society have resulted in new laws addressing what was not seen as possible before.
AI, including robots, may start playing huge affective roles in the lives of people globally, such that if an AI or robot gets hurt in some way, it would not just be like the loss of some device, but of something on which an individual’s life and balance depend.
It is on this basis, with evidence, that laws could be made for some AI. Also, with those laws, caring for AI, treating it with respect and even caring for sources that power AI, like the data centers and their energy sources, may become normal.
Then, because AI can express emotions, and because the human mind, their target, may not distinguish the source or care even when the source is known, debates may arise about how to prevent AI abuse or discrimination.
Should laws be made for AI, or should AI be seen as a tool? The outcome may eventually lead back to how the human mind works.
There is a recent announcement by NIST, FACT SHEET: U.S. Department of Commerce & U.S. Department of State Launch the International Network of AI Safety Institutes at Inaugural Convening in San Francisco, stating that, “Today the U.S. Department of Commerce and U.S. Department of State are co-hosting the inaugural convening of the International Network of AI Safety Institutes, a new global effort to advance the science of AI safety and enable cooperation on research, best practices, and evaluation. To harness the enormous benefits of AI, it is essential to foster a robust international ecosystem to help identify and mitigate the risks posed by this breakthrough technology. Through this Network, the United States hopes to address some of the most pressing challenges in AI safety and avoid a patchwork of global governance that could hamper innovation.”
There is another recent announcement by NIST, U.S. AI Safety Institute Establishes New U.S. Government Taskforce to Collaborate on Research and Testing of AI Models to Manage National Security Capabilities & Risks, stating that, “Today, the U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced the formation of the Testing Risks of AI for National Security (TRAINS) Taskforce, which brings together partners from across the U.S. Government to identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology. This announcement comes as the United States is set to host the first-ever convening of the International Network of AI Safety Institutes in San Francisco. The Taskforce will enable coordinated research and testing of advanced AI models across critical national security and public safety domains, such as radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, conventional military capabilities, and more. These efforts will advance the U.S. government imperative to maintain American leadership in AI development and prevent adversaries from misusing American innovation to undermine national security.”