Sedona.Biz – The Voice of Sedona and The Verde Valley

Sedona News

    Data Centers: AI Architecture for Mini-Superintelligence, Max-Superalignment

July 31, 2025

    By David Stephen 

An indicator of superintelligence, for AI, would be the solution to a major scientific problem. To achieve this, what may be necessary is scientific data, a new memory architecture, and extensive math models. Already, large language models [LLMs] can answer several scientific questions, but they remain limited in theorizing fundamental answers. How does AI get there? This could become a trajectory from niche, transcendent intelligence toward general intelligence. Take the brain as an example: to explain a mental state, say an emotion or a feeling, how would AI be able to say something original and accurate? And on arriving at this capability, how would alignment be maximized to ensure safety against misuse?

    Storage Layering 

The first step towards a mini-superintelligence, for science, or specifically brain science, is storage. Data would not be stored with the current memory architecture. There would be an entirely new model for data storage that collects patterns at the point of storage. In mechanistic interpretability, concepts [with relationships] are often adjacent; a similar model would be necessary for memory. This would be like having a hard-drive partition for just neurons, though the form of storage would be different, because of [say] specifications [with data] solely for neurons. So, everything [established in empirical brain science] about neurons would be one store, then electrical signals would be another store, then chemical signals another.
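As a rough illustration of this partitioning idea, and not of any existing storage system, here is a minimal sketch in Python; the store names (neurons, electrical_signals, chemical_signals) are hypothetical stand-ins for the stores described above.

```python
class PartitionedStore:
    """Toy sketch: each category (e.g. neurons, electrical signals,
    chemical signals) gets its own store, so records about one subject
    are kept together rather than scattered across a generic layout."""

    def __init__(self, categories):
        # One independent store per category, analogous to a dedicated partition.
        self.stores = {c: [] for c in categories}

    def write(self, category, record):
        if category not in self.stores:
            raise KeyError(f"no store for category: {category}")
        self.stores[category].append(record)

    def read(self, category):
        return list(self.stores[category])

# Hypothetical categories drawn from the example above.
store = PartitionedStore(["neurons", "electrical_signals", "chemical_signals"])
store.write("neurons", {"type": "pyramidal", "region": "cortex"})
store.write("electrical_signals", {"kind": "action_potential", "rate_hz": 40})
print(store.read("neurons"))
```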

Another store would be neurons in clusters — nuclei and ganglia. Then electrical and chemical signals as a collection. While there would be different architectural explorations for these storages, the target would be to store data like the human brain. The human brain, it is conceptualized, has thick sets [of electrical and chemical signals]. A thick set collects whatever is common between two or more thin sets. So, door is a thick set representing all doors; so is fence, and so forth. There are unique elements that may remain thin sets, but most interpretations use thick sets. This is what makes it easier for the brain to store much more in less space [and with less energy] than computers. There are rarely repetitions, so access is faster, learning requires fewer new examples, and so forth. Therefore, the partitioning or specificity of storage would provide an opportunity to directly mimic human memory.
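One way to picture a thick set, purely as an illustrative assumption, is as whatever features two or more thin sets share. The short sketch below forms a "door" thick set by intersecting the feature sets of individual door instances; the features themselves are invented for the example.

```python
# Minimal sketch of the thin-set / thick-set idea described above.
# Each thin set lists the features of one concrete instance; the thick set
# keeps only what is common to all of them, so it stands for "door" in general.

thin_sets = [
    {"hinged", "openable", "rectangular", "wooden"},
    {"hinged", "openable", "rectangular", "glass"},
    {"hinged", "openable", "rectangular", "metal", "fireproof"},
]

thick_set = set.intersection(*thin_sets)
unique_features = [s - thick_set for s in thin_sets]

print("thick set (shared):", thick_set)     # {'hinged', 'openable', 'rectangular'}
print("thin remainders:", unique_features)  # per-instance leftovers stay thin
```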

The next step would be to layer those storages, such that, instead of say sectors on a disk, they are like layers, for patterns to match what is common in the binary data. The objective is to ensure that storage is prepared for intelligence, not just to have intelligence use repetitive memory, like what is obtainable at present. For example, the text 'door' is stored differently from the image or the video. Also, there are all kinds of videos, image types and so forth, and several kinds of information about doors as well. In the brain, door is a thick set, containing the text, images, videos, physical structure and so forth. In this new memory architecture for computing, whatever is common in the binary segments of door would be collected. So, instead of being stored alone, it is stored together and accessed in that collective form, as layers, so that creative meshes are probable.
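A very loose sketch of "collecting whatever is common in the binary segments" might look like the following, assuming fixed-size chunking and a simple intersection; real binary formats would not align this neatly, so the byte strings here are only stand-ins for text, image and video encodings of door.

```python
def chunks(data: bytes, size: int = 4):
    """Split a byte string into fixed-size segments."""
    return {data[i:i + size] for i in range(0, len(data), size)}

# Hypothetical stand-ins for different encodings of "door".
representations = {
    "text":  b"door door handle hinge",
    "image": b"door pixels hinge data",
    "video": b"door frames hinge clip",
}

segment_sets = {name: chunks(blob) for name, blob in representations.items()}

# The shared layer keeps segments common to every representation once;
# each representation then only needs its non-shared remainder.
shared_layer = set.intersection(*segment_sets.values())
remainders = {name: s - shared_layer for name, s in segment_sets.items()}

print("shared segments stored once:", sorted(shared_layer))
print("per-representation remainders:", {k: len(v) for k, v in remainders.items()})
```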

For this new memory for superintelligence, after specific data is stored and binary commonalities are collected, further layers would be added, so that more binary segments could be collected. The technical details and likelihood of these can be explicated and estimated. However, to really have superintelligence that can be original enough, beyond prediction or basic reasoning, it may not begin with deep learning but with a new storage architecture. A second layer could, for instance, collect what the first-level layers themselves have in common, as sketched below.
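Continuing the sketch, and again only as an assumption about how such layering might be expressed, here is one way a second layer could sit on top of first-level commonalities; the segment labels are invented.

```python
# Toy sketch of adding further layers on top of first-level commonalities.
# Level 1 collects what is common within each concept; level 2 collects what
# the level-1 layers themselves share. Entirely illustrative.

concept_segments = {
    "door":  [{"b1", "b2", "b3"}, {"b1", "b2", "b4"}],
    "fence": [{"b1", "b5", "b6"}, {"b1", "b5", "b7"}],
}

# Level 1: one shared layer per concept.
level1 = {c: set.intersection(*segs) for c, segs in concept_segments.items()}

# Level 2: segments shared across the level-1 layers of different concepts.
level2 = set.intersection(*level1.values())

print("level 1 layers:", level1)  # door -> {b1, b2}, fence -> {b1, b5}
print("level 2 layer:", level2)   # {b1}
```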

    Mathematical Models

    Existing math models in deep learning would accelerate progress further, given new storage layers. Since patterns would already be available from the source, there can be explorations of minima and maxima, monotonicity, convergence, summabilities, homogeneous equations, phase functions, asymptotics along symmetry lines, fractional transformations and so forth.
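As a hedged example of the kind of model that could run directly over a stored pattern, the snippet below checks monotonicity and a rough convergence criterion for a hypothetical sequence read from one storage layer; it is a numerical sketch, not a claim about how such layers would expose their data.

```python
import numpy as np

# Hypothetical pattern read from one storage layer: a decaying signal.
signal = np.array([1.0 / (n + 1) for n in range(50)])

diffs = np.diff(signal)
is_monotone_decreasing = bool(np.all(diffs <= 0))

# Simple convergence check: does the tail settle within a tolerance?
tail = signal[-10:]
converges = bool(np.ptp(tail) < 1e-2)  # peak-to-peak spread of the tail

# Rough minima and maxima over the stored pattern.
print("monotone decreasing:", is_monotone_decreasing)
print("converges (tail spread < 1e-2):", converges)
print("min:", signal.min(), "max:", signal.max())
```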


    GPUs and Deep Learning 

As soon as there is an advance in storage, optimization algorithms, positional encodings and others can get farther, with possibilities to run them per storage layer, as well as across multiple layers. Some of those storage layers could also serve as VRAM for some GPUs, with dedicated cores, to deliver better and faster results beyond current architectures. There would be possibilities for new DL architectures, given the options that storage layers provide.
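To illustrate the idea of running optimization per storage layer, the toy loop below fits a separate least-squares parameter to each layer's data independently; the layer names, data and optimizer are invented for the example and say nothing about actual GPU or VRAM arrangements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical storage layers, each holding its own pattern data with a
# different underlying slope; names and values are made up for illustration.
true_slopes = {"neurons": 2.0, "electrical_signals": -0.5, "chemical_signals": 1.3}
layers = {}
for name, slope in true_slopes.items():
    x = rng.normal(size=100)
    y = slope * x + rng.normal(scale=0.1, size=x.size)
    layers[name] = (x, y)

def fit_slope(x, y, lr=0.1, steps=200):
    """Plain gradient descent on a one-parameter least-squares fit."""
    w = 0.0
    for _ in range(steps):
        grad = np.mean(2 * (w * x - y) * x)  # d/dw of mean squared error
        w -= lr * grad
    return w

# One independent optimization run per storage layer.
for name, (x, y) in layers.items():
    print(name, "fitted slope:", round(fit_slope(x, y), 3))
```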

    Neuroscientific Mini-Superintelligence and AI Safety Max-Superalignment

The expected outcome would be for the mini-superintelligence, as designed, to explain mental states and to postulate how the brain organizes information using electrical and chemical signals. It would be expected to find correlations between signal mechanisms in clusters of neurons. This would be steps ahead of current advances, with extensive usefulness in psychiatry and neurology.

In summary, to have a mini-superintelligence that can solve major scientific problems, the first move is a new kind of specific memory, then layers, so that it is not simply accurate by memory but originally creative, since patterns are already collected. Aside from brain science, any other sophisticated field could be explored.

If this mini-superintelligence is achieved, it is possible to develop maximum superalignment for it, using a penalty model with features [that are not concepts]. It is also possible for it to have instances, where usages are moments it can recall, so that if the feedback is bad, where it caused or might cause harm, it would remember the unpleasant moment and avoid it the next time. This max-superalignment can then be used to explore general AI alignment, as well as better energy efficiency for data centers. Also, as soon as storage layers advance into mini-superintelligence, it could be possible to map out a direction towards general intelligence. AI has yet to conquer science because it has not [originally] answered key fundamental questions or proposed novel cures. To do so would mean a domain superintelligence, which, conceptually, could come from a new storage architecture.
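A minimal sketch of the recall-and-penalize idea, under the assumption that past usages are logged as episodes with feedback: proposals resembling a previously harmful episode accrue a penalty before they are acted on. The episode structure and the word-overlap similarity measure are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    action: str
    feedback: str  # "ok" or "harmful", supplied after the fact

@dataclass
class PenaltyMemory:
    episodes: list = field(default_factory=list)

    def record(self, action: str, feedback: str):
        self.episodes.append(Episode(action, feedback))

    def penalty(self, proposed_action: str) -> float:
        """Penalize proposals that overlap in wording with past harmful episodes."""
        words = set(proposed_action.lower().split())
        score = 0.0
        for ep in self.episodes:
            if ep.feedback == "harmful":
                score += len(words & set(ep.action.lower().split()))
        return score

memory = PenaltyMemory()
memory.record("share private patient records", "harmful")
memory.record("summarize published study", "ok")

for proposal in ["share patient records again", "summarize new study"]:
    p = memory.penalty(proposal)
    print(f"{proposal!r}: penalty={p}, {'avoid' if p > 0 else 'proceed'}")
```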

There is a new [July 29, 2025] story by AP, Cheyenne to host massive AI data center using more electricity than all Wyoming homes combined, stating that, “An artificial intelligence data center that would use more electricity than every home in Wyoming combined before expanding to as much as five times that size will be built soon near Cheyenne, according to the city’s mayor. With cool weather — good for keeping computer temperatures down — and an abundance of inexpensive electricity from a top energy-producing state, Wyoming’s capital has become a hub of computing power. The city has been home to Microsoft data centers since 2012. An $800 million data center announced last year by Facebook parent company Meta Platforms is nearing completion, Collins said. The latest data center, a joint effort between regional energy infrastructure company Tallgrass and AI data center developer Crusoe, would begin at 1.8 gigawatts of electricity and be scalable to 10 gigawatts, according to a joint company statement. A gigawatt can power as many as 1 million homes. But that’s more homes than Wyoming has people. The least populated state, Wyoming, has about 590,000 people.”
