By David Stephen
Sedona, AZ — Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations – OpenAI
There is a new [October 27, 2025] safety report by OpenAI, Strengthening ChatGPT’s responses in sensitive conversations, stating that, “Our safety improvements in the recent model update focus on the following areas: 1) mental health concerns such as psychosis or mania; 2) self-harm and suicide; and 3) emotional reliance on AI.
In order to improve how ChatGPT responds in each priority domain, we follow a five-step process:
Define the problem – we map out different types of potential harm.
Begin to measure it – we use tools like evaluations, data from real-world conversations, and user research to understand where and how risks emerge.
Validate our approach – we review our definitions and policies with external mental health and safety experts.
Mitigate the risks – we post-train the model and update product interventions to reduce unsafe outcomes.
Continue measuring and iterating – we validate that the mitigations improved safety and iterate where needed.
While, as noted above, these conversations are difficult to detect and measure given how rare they are, our initial analysis estimates that around 0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania.
While, as noted above, these conversations are difficult to detect and measure given how rare they are, our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent.”
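To make those rates concrete, here is a minimal back-of-envelope sketch in Python. The weekly-active-user figure is an assumption (OpenAI has publicly cited roughly 800 million weekly users around this period; the safety report itself does not give a base number), so the absolute counts are illustrative only.

```python
# Back-of-envelope scale check for the rates quoted in OpenAI's report.
# ASSUMPTION: ~800 million weekly active users, a figure OpenAI has cited
# publicly around this period; it is NOT stated in the safety report.
WEEKLY_ACTIVE_USERS = 800_000_000

# Rates quoted in the report, expressed as fractions of weekly users.
rates = {
    "possible psychosis or mania signs": 0.0007,     # 0.07% of weekly users
    "explicit suicidal planning or intent": 0.0015,  # 0.15% of weekly users
}

for label, rate in rates.items():
    print(f"{label}: ~{WEEKLY_ACTIVE_USERS * rate:,.0f} users per week")

# Output:
# possible psychosis or mania signs: ~560,000 users per week
# explicit suicidal planning or intent: ~1,200,000 users per week
```

Small percentages, in other words, still translate to hundreds of thousands of people each week, which is the scale at issue in the rest of this piece.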
The State of Global Mental Health
OpenAI is seeking to distance itself from culpability for the global mental health situation, given the continuous bad press and lawsuits over AI psychosis and teen suicides.
While the major stories were about how ChatGPT may have exacerbated or reinforced delusions, the intense [transparency-cloaked] rebuttal in OpenAI's report is about people bringing their issues to the chatbot, not necessarily about how ChatGPT may have hooked some users and inverted their reality.
However, what is the state of global mental health? What is the primary responsibility of OpenAI towards AI-induced psychosis, and possibly suicide?
It appears that OpenAI believes it is doing enough for general mental health, according to the report, especially when people simply bring external mental health concerns to ChatGPT, where there is no prior history of friendship, companionship, or the like.
However, one unsolved problem is AI-induced psychosis and possible breaks from reality that can happen because an AI chatbot can access the depths of the human mind.
The solution, an independent AI Psychosis Research Lab whose sole focus would be to show relays of the mind, matching chatbot outputs to stations and relays, is not yet available from the makers of Character.AI, ChatGPT, Claude, Gemini, or others.
OpenAI’s Global Physician Network
OpenAI wrote, “We have built a Global Physician Network—a broad pool of nearly 300 physicians and psychologists who have practiced in 60 countries—that we use to directly inform our safety research and represent global views. More than 170 of these clinicians (specifically psychiatrists, psychologists, and primary care practitioners) supported our research over the last few months by one or more of the following:
Writing ideal responses for mental health-related prompts
Creating custom, clinically-informed analyses of model responses
Rating the safety of model responses from different models
Providing high-level guidance and feedback on our approach.”
Why Neuroscience Research Failed Mental Health
While OpenAI may expect commendation for these steps, especially as they show it is ahead of other AI companies, it is also quite poor that the best that can be offered toward solving mental health problems is the list of things the Global Physician Network is said to be doing.
Writing responses, creating analyses, rating responses, and providing feedback are not excellent work, given the seriousness of the problem.
ChatGPT is probably the biggest opportunity in decades, so far, to solve the global mental health crisis. In the last few years, there were several campaigns, by celebrities and others, that tried to de-stigmatize mental health while waiting for the right moment for a solution.
Now, the best that experts can do is to write responses?
Some of those responses already read as platitudes to those familiar with psychotherapy. The problem to solve is not another response, which, yes, can be helpful, but is not an ambitious chance to break the grip of the problem. What is happening at the source, the human mind, to result in those conditions?
For example, when there is a bad reality, why do negative thoughts quickly file and run rapids? Where are the thoughts coming from, where are they going, what might assist the introduction of non-routed thoughts? Why are some negative thoughts persistent? Why do some lead to harmful actions? What is the coding of the thoughts that makes them dominant?
These questions are about mechanisms of directly correlated components in the brain. However, there are no answers from neuroscience.
The field of neuroscience has failed mental health research. There are several neuroscience labs, institutes, centers, associations, conferences, papers, and so forth that, in the last decade, have not provided one answer to any mental health problem. For example, what exactly is going on in the brain during a psychotic episode, as distinct from when there is not one?
While the fields of psychiatry and psychology continue, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision [DSM-5-TR] is not a book that uses mechanisms and relays in the brain to explain mental conditions. Those fields have not moved close enough to the problem to see beyond labeling. All mental disorders are interpreted, it seems, by labels, not by components or mechanisms.
There are components in the brain; while they all have responsibilities, some are directly correlated with mental stability and instability. So it should be possible to use the states of these components to explain what bipolar disorder is, exploring direct parallels to what is going on in the brain.
This is where progress lies, not in another label.
OpenAI Is Not a Mental Health Company
In the end, OpenAI did not make ChatGPT to solve the mental health crisis. If the field of neuroscience does not have the answer to the problem, there is little OpenAI can do. Even when OpenAI brings on experts, the best they may do could be to develop common responses, since there are no answers.
People can pile on OpenAI all they want; it would not just defend itself, it would also not be able to do much. If the lawsuits get worse, OpenAI would likely shield ChatGPT under the First Amendment.
Neuroscience Slump
There is the NIH BRAIN Initiative, which has expended millions of dollars, yet not one mental health problem has been solved or explained.
There was the EU Human Brain Project, to map the brain. With respect to mental health, that project might as well not have existed.
The MICrONS Project and much else have all tended to zero toward solving mental health, in spite of immense funding.
There is no definition, even conceptually, of what the human mind is in mainstream neuroscience. There is a lot of chatter about what the science says on different mental states, when what is being described are simply correlational studies, not actual brain mechanisms.
What is the human mind? What are the components of the human mind? What are their locations and relays? How do they decide mental states for order and disorder? What are the possible parallels to show what is going on within? How can this be used to explain all the conditions in the DSM-5-TR to advance from basic labeling?
The situation, for now, is not even to totally solve mental disorders, but at least to extricate components in the brain, then try to explain each condition, beginning a journey toward complete management for all, including for addictions: to drugs, gambling, smartphones, video games, social media, AI, and much else.
There are always new announcements in neuroscience; hopefully they start to pan out for global mental health.
There is a recent [October 23, 2025] announcement, Gardner Family Foundation invests $20 million in UC Gardner Neuroscience Institute, stating that, “The University of Cincinnati Gardner Neuroscience Institute receives a transformative $20 million gift from the James J. and Joan A. Gardner Family Foundation to advance lifelong brain health at every stage of life.
This visionary investment will accelerate cutting edge research and expand specialized care in memory disorders, strengthen the institute’s learning health system, and propel efforts toward earning the prestigious Institute on Aging’s designation as an Alzheimer’s Disease Research Center (ADRC).
With this latest contribution, the Gardner Family Foundation’s philanthropic support for UC’s neuroscience institute will exceed $50 million since its inaugural 2007 gift.”
