    LLMs—The OpenAI Critic Whom Google and Anthropic Love

December 8, 2025

    By David Stephen 

    It is possible to make the case that no major AI company is on the side of humanity.

The simple test, given the capabilities of AI, is this: what is your company doing to improve human intelligence without the necessity of using AI?


Simply, they are all racing towards more AI and better AI, ensuring, in part, the weakening of what is left of human intelligence. Mission accomplished for them means fatality for human intelligence.

    For all they are doing for AI, there are no equivalents for human intelligence. There’s no human intelligence safety, human intelligence welfare, human intelligence alignment, human intelligence morality, human intelligence rights, human intelligence superintelligence, human intelligence superalignment, human intelligence-centered human intelligence, humane human intelligence, and so forth.

Also, whatever is done for AI belongs to AI: data centers, model architectures, and so forth. Humans would need facilities to access AI, and if any of those glitches, a task might be stranded. So, that AI can be separated from humans or be unavailable makes AI the foremost beneficiary of whatever is done for it.

Now, as the era of mass AI passes its third year after ChatGPT, the company on the receiving end of the most criticism is OpenAI. And there are several reasons for that.

However, whatever OpenAI is doing wrong, what are other AI companies doing to correct it, whether as a competitive play, for advantage, or out of basic care, because they can do better, on behalf of humanity?

Take AI psychosis: there is still no solution from any AI company for AI delusion-likelihood. They do not even have labs within their companies to explore answers. While OpenAI has made news for shortcomings in this area, you cannot just say Anthropic cares, or even Google, which has a major documented case of Gemini psychosis. They have all done nothing transformative for mind safety against AI excesses, including character.ai.

Aside from chatbot psychosis, take AI safety or alignment: most AI companies have guardrails and are swift to make corrections when use cases point out gaps. But how far ahead of OpenAI has any company actually gone in AI safety or alignment?

Even mechanistic interpretability, where Anthropic seems dazzling, is not that distant from what others can match. So, while OpenAI seems to have a lot that deserves criticism, no other AI company is that innocent. OpenAI is said to be losing money and still distant from profitability. Well, whatever becomes of OpenAI has no benefit for the direct advancement of human intelligence in the brain. So, all the hate at OpenAI appears to be a coordinated effort to divert attention from, in the worst case, the loss of human intelligence.

No AI company can define what human intelligence is, in the brain. No AI company has any postulate on what to do to improve human intelligence for problem-solving. Before now, tech companies always had a restricted negative impact: social media, video games, new smartphones every year, recommended videos and much else. Now, however, the impact is one that affects all: human intelligence.

    They are giving free AI to colleges, discounted AI to nonprofits, agencies and so forth. So, more AI, less human intelligence. Whatever productivity AI adds is a potential loss to human intelligence, somewhere at some point.

    A major tech CEO recently said AI will not replace jobs but replace tasks. Well, labor value is tied to tasks. And, even if the shift is slow, what would be left of jobs as AI encroaches on tasks may be negligible labor value.

Some have also said AI will result in abundance. The problem is that AI will still be owned. Even if labor policies are bad in some situations, people still own their human intelligence, so it is possible to switch jobs, maybe. But with owned AI, wherever and by whomever, abundance, if it ever happens, would come with existential caveats. Simply, AI companies that give nothing else away for free other than more AI will suddenly give abundance to all, just like that?

AI companies do not have new models in labor economics to prepare against [labor value mitigating] task loss or job loss. AI companies do not have small-business growth research labs. AI companies do not have a research lab to explore new life purposes aside from training and work. AI companies do not have solutions for homelessness if people lose their livelihoods.


Exploring how to improve human intelligence is a better and more useful response to the AI advance than blanket arguments to stop AI, pause AI, or that LLMs won’t make AGI.

Professor Critical

Most of the people who dismiss large language models [LLMs], at least in media quotes, are professors. Executives are often more measured, but several professors just tear them down.

Some professors are actually circumspect, but a few just go off on LLMs. First, it is the success of LLMs that brought the relevance by which many of these professors are given mainstream platforms to be heard, including at major conferences.

Secondly, until some professor [or an industry team] invents something better than LLMs, there is little weight to whatever the individual says. And even when that happens, human intelligence is still the casualty. There is an anti-OpenAI professor on the circuit who appears to bash anything about OpenAI, such that even where the criticism is true, the bias makes the individual look like a useful character in the objectives of the industry.

‘Go after OpenAI and ignore the others’ is the apparent rallying cry. The red flag is that, for all the errors of Google and its AI, the criticism appears to be fine with Google: the blog ranks high, with no censorship, no backdating, no exclusion from queries, no exemption from news, and much else, so there is nothing to even see. It is as if there is never any [say] upcoming blog post from the individual that would make Google or Anthropic fret.

Whatever anyone wants to say about OpenAI, it is the first company in history to show that human intelligence can be substituted. That alone, as a signal to authorities and the public, is a notice of where technology stands and that people should take heed. Google didn’t. Anthropic, though existing at the time, did not. Meta didn’t. Nvidia, Microsoft, Apple [more Apples for you] and others did not.

    That some professor is critical of the AI industry does not put the professor on the side of humanity. The relevance is tied to that facade. The professor also does not care about human intelligence. It is all more AI of another type.

If that individual decided to give no more anti-AI quotes to the press, write no more op-eds bashing OpenAI, and publish no more anti-AI blog posts, the relevance would immediately wilt.

The reason is that in this person’s main field, which was not originally AI, there is no contribution that anyone is using and nothing that has moved knowledge forward. So the anti-LLM, or more directly anti-OpenAI, stance is for self-attention.

AI is already powering forward. Even bigger voices warning of an AI bubble have not slowed the momentum, in part because AI is quite useful. And AI is more widely spread than many of the somewhat niche products of the dot-com bubble.

It is possible to seek out people who care about humanity as AI sails on, but it will never be someone who claims to care while criticizing only what the industry expects the professor to criticize, going along with their agenda masqueraded as some enlightenment, an absolute sham and hyper fake, on the eve of a make-or-break year for human intelligence, 2026.

There is a recent [December 1, 2025] article on TechCrunch, “One of Google’s biggest AI advantages is what it already knows about you,” stating that, “Similarly, it seems that avoiding Google’s data-gobbling ways will get increasingly difficult in the AI era, and if Google doesn’t get the balance right, the results could feel more creepy than useful.”

    “(To be clear: Google does let you control the apps Gemini uses to make its AI more knowledgeable about you specifically — it’s under “Connected Apps” in Gemini’s settings.)”

    “If you do share app data with Gemini, Google says it will save and use that data according to the Gemini privacy policy. And that policy reminds users that human reviewers may read some of their data and not to “enter confidential information that you wouldn’t want a reviewer to see or Google to use to improve its services.””
