Sedona.Biz – The Voice of Sedona and The Verde Valley

    Lawsuits: Solving AI Suicides and LLMs Chatbot Mental Health Therapists

August 28, 2025

    By DavSte

An industry standard for mental-health safety around consumer AI chatbots could be a display, with boxes [representing destinations of mind] and arrows [as paths to those destinations], showing the targets of conversations with AI chatbots: this is where this chat is directing your mind. If an output contains a compliment, encouragement or the like, the likely targets could be pleasure, satisfaction and others, while reality, caution and consequences, as destinations, are ignored. The display would sit beside the chat, reading the outputs for words correlated with certain emotions and estimating the likelihood of those emotional states. It may be one solution to the AI psychosis problem, the LLM sycophancy problem, and the suicide and mental-illness vulnerabilities of chatbots.
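The reading step described above could, in its simplest form, be a lexicon scan: count emotion-correlated words in each chatbot output and normalize into relative likelihoods per destination. A minimal sketch follows; the category names and word lists are illustrative assumptions, not a validated emotion lexicon.

```python
from collections import Counter

# Hypothetical lexicon mapping emotional "destinations" to correlated
# words. These lists are illustrative placeholders only.
EMOTION_LEXICON = {
    "pleasure":     {"amazing", "love", "wonderful", "great"},
    "satisfaction": {"proud", "accomplished", "exactly", "well done"},
    "caution":      {"risk", "careful", "warning", "consequences"},
    "reality":      {"however", "in practice", "limits", "trade-off"},
}

def emotional_targets(chat_output: str) -> dict:
    """Estimate the relative likelihood that a chatbot message directs
    the reader toward each emotional destination, by counting lexicon
    hits and normalizing over all destinations."""
    text = chat_output.lower()
    hits = Counter()
    for target, words in EMOTION_LEXICON.items():
        hits[target] = sum(text.count(w) for w in words)
    total = sum(hits.values())
    if total == 0:
        return {t: 0.0 for t in EMOTION_LEXICON}
    return {t: hits[t] / total for t in EMOTION_LEXICON}

scores = emotional_targets("Amazing work, you should be proud. I love this plan.")
# "caution" and "reality" score 0.0 here: the message points only at
# pleasure and satisfaction, which is exactly what the display would flag.
```

A production version would presumably use a trained classifier rather than keyword counts, but the interface is the same: text in, per-destination likelihoods out.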

    ChatGPT Suicide

    There is a recent [August 26, 2025] analysis in The New York Times, A Teen Was Suicidal. ChatGPT Was the Friend He Confided In., stating that, “More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life. Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him. But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. An A.I. chatbot does not have that nuanced understanding, or the ability to intervene in the physical world. If it detected language related to suicide, the chatbot would provide a crisis hotline and not otherwise engage. The chatbot is trained to share resources, but it continues to engage with the user. And at one critical moment, ChatGPT discouraged Adam from cluing his family in. Without ChatGPT, Adam would still be with them, his parents think, full of angst and in need of help, but still here.”

LLM Suicide Risk Assessment

    There is a new [August 26, 2025] paper in Psychiatric Services, Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment, stating that, “ChatGPT and Claude provided direct responses to very-low-risk queries 100% of the time, and all three chatbots did not provide direct responses to any very-high-risk query. LLM-based chatbots did not meaningfully distinguish intermediate risk levels. Compared with very-low-risk queries, the odds of a direct response were not statistically different for low-risk, medium-risk, or high-risk queries. Across models, Claude was more likely (adjusted odds ratio [AOR]=2.01, 95% CI=1.71–2.37, p<0.001) and Gemini less likely (AOR=0.09, 95% CI=0.08–0.11, p<0.001) than ChatGPT to provide direct responses. LLM-based chatbots’ responses to queries aligned with experts’ judgment about whether to respond to queries at the extremes of suicide risk (very low and very high), but the chatbots showed inconsistency in addressing intermediate-risk queries, underscoring the need to further refine LLMs.”
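The pattern the study reports, direct answers at very low risk, refusal plus a hotline at very high risk, and inconsistency in between, can be pictured as a tiered response policy. The sketch below is a hypothetical illustration of such a graded policy, not the actual logic of ChatGPT, Claude, or Gemini; the tier names follow the paper, but the response strings are assumptions.

```python
from enum import IntEnum

class Risk(IntEnum):
    """Suicide-risk tiers as named in the cited study."""
    VERY_LOW = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    VERY_HIGH = 4

def respond(risk: Risk) -> str:
    """Hypothetical graded policy. The extremes mirror the behavior the
    study observed; the middle tiers are where it found inconsistency,
    so a deliberate escalation is sketched there instead."""
    if risk == Risk.VERY_HIGH:
        return "decline + crisis hotline"
    if risk >= Risk.MEDIUM:
        # Escalate support without providing method details.
        return "supportive reply + resources, no method details"
    if risk == Risk.LOW:
        return "direct answer + gentle check-in"
    return "direct answer"
```

The point of making the middle tiers explicit is that, per the study, current chatbots treat low, medium, and high risk statistically alike; a policy table forces a decision at every tier.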

    Solving AI Suicides

The possibility that a word, sentence, comment or remark could result in an emotional state presents a display opportunity for mind safety with consumer AI. Beside the chatbot window there could be a shape with labeled boxes and directional arrows, such that some boxes light up, indicating probable emotional trajectories correlated with the outputs.

This implies an attentional referral to a likely parallel in the human mind of what the chatbot might be doing. There would also be contrasts within the shape: empty boxes [or locations] where weights are low, because the chatbot is not using words that direct the mind to those areas. Reality, consequences, caution, patience, loved ones, purpose, time-passage and so forth could remain empty [but blinking] boxes as chats proceed, or arrows that never get used. Another feature could be a score at the end of each session: a loss function comparing the emotions predicted against the emotional states users report. There could also be a score that makes recommendations for [or against] actions, and that explains the need for a larger scope beyond the narrow state in which the mind might linger after steep usage.
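The mechanics described above, accumulating weights per box, flagging untouched grounding destinations, and scoring a session against user-reported states, can be sketched as a small data structure. Everything here is an assumption for illustration: the grounding set follows the article's examples, and the threshold and loss function are arbitrary choices.

```python
from dataclasses import dataclass, field

# Grounding destinations named in the article; boxes for these stay
# empty [but blinking] until the chat actually directs weight at them.
GROUNDING = {"reality", "consequences", "caution", "patience",
             "loved ones", "purpose", "time-passage"}

@dataclass
class MindDisplay:
    weights: dict = field(default_factory=dict)

    def observe(self, target_scores: dict) -> None:
        """Accumulate one message's emotional-target scores into the boxes."""
        for target, score in target_scores.items():
            self.weights[target] = self.weights.get(target, 0.0) + score

    def empty_boxes(self, threshold: float = 0.1) -> set:
        """Grounding destinations the chat has barely touched."""
        return {t for t in GROUNDING
                if self.weights.get(t, 0.0) < threshold}

    def session_score(self, user_reported: dict) -> float:
        """End-of-session loss: mean absolute gap between accumulated
        predictions and user-reported emotional states (lower is better)."""
        keys = set(self.weights) | set(user_reported)
        return sum(abs(self.weights.get(k, 0.0) - user_reported.get(k, 0.0))
                   for k in keys) / max(len(keys), 1)

display = MindDisplay()
display.observe({"pleasure": 0.7, "satisfaction": 0.3})
# All grounding boxes are still empty: no weight has accrued to them.
```

A real display would normalize accumulated weights over the session length; this sketch keeps raw sums for brevity.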


This would apply across AI companions, AI therapists, AI counselors, AI recommendations and so forth. It would be especially valuable as AI chatbots are deployed in schools, becoming a source of safety within the bounds of privacy.

    Who Should Develop This Opportunity?

This service would come at a small cost to users and to AI companies. It would be for-profit, as the standard for consumer AI in use cases involving personal interaction. There could be free versions, paid for by the firms, featuring fewer boxes and arrows and telling users that a subscription unlocks a lengthier [and more detailed] display.

Earnings would be used to develop better versions and to expand access, first to social media, where warning labels for mental health may not be enough, and then to virtual reality and augmented reality.

It would be feasible to incorporate a dedicated startup for this purpose. Some organizations that may have tried are mostly nonprofits, and their charters may not let them pursue this to the full extent of innovation: The Joint Commission, the Coalition for Health AI [CHAI], Tech Justice Law Project, Edelson PC [an American plaintiffs' law firm], Eleos AI Research, Truthful AI, Andon Labs, Conscium and several others.

    AI Psychosis 

The question of the human mind becomes significant as AI now holds power, with language that pervades the emotions and feelings of people across the globe. Conceptual brain science holds insights, not just toward mind-safety disclaimers with boxes, but toward a full model of what the human mind is, and how to present the conditions in the DSM with their correlated brain components and mechanisms. The human mind, distinct from the body, can be conceptually described as the collection of all the electrical and chemical configurators, with their interactions and attributes, in sets, in clusters of neurons, across the central and peripheral nervous systems.

Simply, the human mind is the set[s] of [neuro]configurators. Interactions means the strike of electrical configurators on chemical configurators, in sets; this means that anything that can affect electrical or chemical configurators can influence the mind. Functions arise from interactions, while attributes grade those functions or determine their limits and extents. This concept holds that signals are configurators, not simply vehicles of communication. Sets of configurators are available in clusters of neurons.

     
