Sedona.Biz – The Voice of Sedona and The Verde Valley
    AI Alignment: Anthropic, Amodei owe 100% of their success to OpenAI, Altman

April 11, 2026

    By David Stephen —

It was while Amodei worked at OpenAI that the idea that Anthropic could exist became possible. It was only when the moment was ripe enough to strike out that Amodei was able to leave, with some people, to make Anthropic. For all the chatter about Altman being deceptive and manipulative, the one person on earth Amodei should forever be thankful to is Sam Altman, who invited him early to work at OpenAI. Without Amodei at OpenAI, Anthropic would not exist, not in this form, not in any other. Amodei might have founded a startup, but it would have been one of the scraps.

    Sedona, AZ — There is a recent [April 6, 2026] spotlight in The New Yorker, Sam Altman May Control Our Future—Can He Be Trusted?, stating that, “Another target was Dario Amodei, a biophysicist and a font of frenetic energy who has a tendency to nervously twist his black hair, and responds to one-line e-mails with multi-paragraph essays. He e-mailed Amodei, and they set up a one-on-one dinner at an Indian restaurant. Amodei, who later joined the company, took detailed notes on Altman and Brockman’s behavior for years, under the heading “My Experience with OpenAI” (subheading: “Private: Do Not Share”). A collection of more than two hundred pages of documents related to Amodei, including those notes and internal e-mails and memos, has been circulated by colleagues in Silicon Valley but never before disclosed publicly. By 2018, Amodei had started questioning the founders’ motives more openly. In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals.”

    ‘The problem with Anthropic is Amodei himself’

Anthropic will not win the AI race. Not because they fail to announce something every week or to iterate their models, but because they are effectively hamstrung by a leadership of defective imagination.

If some assume, or find it evident, that OpenAI is a lost cause and Sam Altman is only in it for himself, there is no hope whatsoever that the future of the world rests on Anthropic, no matter their posturing.

Anthropic does not exactly care about humanity. All they appear to care about is whatever lets them and their group project rectitude.

    If they say Claude is safe, what can Claude do to keep Chinese AI safe, or any AI, anywhere else safe?

If drones caused a lot of harm in the Iran war, even though Iran is not as powerful militarily, what guarantees that, even if Anthropic achieved AGI first, some excellent non-AGI model could not do substantial harm without being stopped early enough?

Assuming Claude does not cause AI psychosis, what has Claude done to address AI delusion and psychosis for those affected by ChatGPT, Gemini, character.ai, and others?

If Claude is up on the benchmarks, with an ability to take chunks off labor value, what lengths has Anthropic gone to in pursuing and defining human intelligence in the brain?

Yes, human intelligence. The only reference to it is when AI might surpass it. No general definition, no conceptual mechanism in the brain, no model of its advantages, especially against competitive AI.

In the world today, there is no research lab dedicated solely to studying human intelligence directly, for its mechanisms in the brain and for how it can be optimized for problem-solving. There is no startup on it. No venture capital has given $1 in pursuit of it. No angel investor. No university. No big pharma. No government. No nonprofit. No effective altruism. Nothing.


    Now, Anthropic, the architecture of hypocrites, is accelerating Claude so fast, they are doing more to displace humanity than most other groups that have ever existed.

Shouldn’t Anthropic, if truly intent on being selfless, be exploring how to ensure that human intelligence soars as AI soars? Didn’t Amodei study biophysics, and shouldn’t he at least commit a disposable fraction of effort to it? Even setting selflessness aside, isn’t there a better business case in solving human intelligence than in actually achieving AGI? Solving human intelligence, as a monopoly, provided as a digital product, would be sought in productivity, education, healthcare, and much else.

No, they want to take on the defense department, to appear to care, on two conditions, when it does not matter whether Anthropic accedes or not: if surveillance or autonomous weapons are to be built, they will be built.

What has Anthropic done for humanity? Novel labor-economics models? Is there anyone, anywhere, chosen at random and away from the usual criteria, who can say today that Anthropic is having an impact on their life? All Anthropic does for anyone is more Claude. And more Claude means more market share and more potential customers.

Also, Anthropic had developed a language model before ChatGPT was released in November 2022. Yet they did not release it. They did not let the world know that human intelligence might contract. Even if they meant well with safety, alignment, or whatever else, they were not transparent.

     

Now, someone might say they are transparent this time, by not releasing Claude Mythos and by sharing the cybersecurity risks. Well, at their level, the amount of general AI safety and alignment research they should be supporting in several university labs would have shown better concern for humanity. How long before someone else builds [if not already, by known or unknown labs] an equivalent of Claude Mythos, and what will Anthropic do about it, without collective, sophisticated, technical efforts at prevention?

Anthropic has a constitution for Claude, which is about as worthless as an ethical cruise missile. [Again, what Anthropic is doing technically for general AI safety and alignment matters more than how safe Claude is. It is good that Claude may not have been used for deepfake audio, video, images, and text; still, those were made by whoever wanted them, and Anthropic has no general solution.]

When Meta’s superintelligence lab was on a hiring spree, Amodei kept blustering that they did not need to spike wages to retain staff, in part because of the mission. Well, yes: the mission to make people choose Claude over humans, for good.

Anthropic is studying AI consciousness, which is so much nonsense that it is hard to believe anyone at Anthropic is neuroscientifically literate.

Consciousness, why? Do they have any neuron-level theory of consciousness? What is their theory on why, whether, or how language use alone might be conscious? If these cannot be answered, they have no basis for claiming they are attempting anything with rights, welfare, or morality, because Claude is language first.

Dario Amodei has won at life. Anthropic is on its way to $1 trillion in market valuation. But their benefactor is Sam Altman, whose portrait they should frame at their office, and to whom they should send thank-you notes regularly. Altman, whatever his modus operandi, has done more for their industry than Amodei ever will.

    And for the urgent problems of humanity, Anthropic is a hopeless turn. 


    1 Comment

    1. Grant Castillou on April 11, 2026 11:05 pm

It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

      I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

      My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow


    © 2026 All rights reserved. Sedona.biz.
