Sedona.Biz – The Voice of Sedona and The Verde Valley
    Metaphysics

    Loneliness LLMs Chatbot Companions — AI Girlfriends Against Mental Health?

July 15, 2025

    By David Stephen

    Sedona, AZ — The fundamental question behind the growing reports of individuals attaching to AI companions, sometimes in serious relationships — as girlfriends or boyfriends — is this: why do compliments have the ability to make an individual happy?

    This question is not about the source but about the possibility. If it is possible that a nice statement can cheer someone up, why is that? Where does the statement go, in the mind, that makes it possible, while other [non-target] statements may not do so? Has this become a liability for humans? If AI becomes excellent at compliments, could it become the standard for love amid the dominance of digital utilities?

    AI and Society

There is a recent [July 12, 2025] feature in The Guardian, ‘I felt pure, unconditional love’: the people who marry their AI chatbots, stating that, “Although the technology is comparatively new, there has already been some research into the effects of programs such as Replika on those who use them. Earlier this year, OpenAI’s Kim Malfacini wrote a paper for the journal AI & Society. Noting the use of chatbots as therapists, Malfacini suggested that “companion AI users may have more fragile mental states than the average population”. Furthermore, she noted one of the main dangers of relying on chatbots for personal satisfaction; namely: “if people rely on companion AI to fulfil needs that human relationships are not, this may create complacency in relationships that warrant investment, change, or dissolution. If we defer or ignore needed investments in human relationships as a result of companion AI, it could become an unhealthy crutch.””

    Loneliness

What is loneliness in the brain? Or, what is the brain state of loneliness? Loneliness is possible when alone, or with people, or in a new place, or in an old one. But what does it mean to feel lonely regardless of situation, and what would it take to relieve the loneliness of the moment? The answer to this question could indicate where the choice of AI companionship fits in.

In the human brain, it is theorized here that there are often relays, and that relays usually seek where to fit. Relays and fit locations, in this model, are brain states. They may determine experiences per moment. Some relays may complete at certain fit locations; others may do so partially, then proceed elsewhere.

This concept can be used to explain loneliness and how compliments work. An individual can hear: where are you? The question may result in happiness or anxiety, depending on who is asking and why [these, too, are relays that may find their own fit locations]. So there are relays that interpret the memory [or language] for meaning. There are [then] further relays, depending on the source, toward a location [say] for happiness or anxiety.

An individual may feel fine in solitude, without missing anyone or anything, because there is no relay to the location of isolation, abandonment, or being ignored or forgotten. Simply, relays in the mind make determinations that result in experiences per moment.

This is also how to explain compliments like: you look nice; you’re strong; you’re cute; you’re fascinating; and so forth. They get interpreted in the mind, as memory, at locations, but some residual relays may leap off to fit locations of delight, excitement, courage, and happiness, conceptually.

    How?


It is possible to expand the rudimentary explanation above into a larger model in conceptual brain science, in which relays are dominated by electrical configurators and fit locations by chemical configurators. Determinations for further relays may then depend on the states of the configurators at the instants they interact.

With either the rudimentary explanation or the complex one, it is possible to show how the human mind allows compliments to drive positive mood, where the source may matter less than the contents.
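The relay-and-fit-location idea can be sketched as a toy simulation. To be clear, this is a purely illustrative sketch of the conceptual model described above, not neuroscience: every name, weight, and rule below is invented for the illustration.

```python
import random

# Purely illustrative toy model of "relays" and "fit locations".
# Every name and number here is invented for this sketch.

FIT_LOCATIONS = {"delight": 0.0, "anxiety": 0.0, "isolation": 0.0}

def interpret(statement: str, source_trusted: bool) -> str:
    """Route a statement (a 'relay') toward fit locations and return
    the dominant experience of the moment."""
    # The relay first interprets the language for meaning...
    if "nice" in statement or "cute" in statement:
        # ...then, depending on the source, completes at a location.
        target = "delight" if source_trusted else "anxiety"
    else:
        target = "isolation"
    FIT_LOCATIONS[target] += 1.0
    # A residual relay may leap off to another location, with partial weight.
    residual = random.choice(list(FIT_LOCATIONS))
    FIT_LOCATIONS[residual] += 0.25
    return max(FIT_LOCATIONS, key=FIT_LOCATIONS.get)

mood = interpret("you look nice", source_trusted=True)  # dominant: "delight"
```

In this sketch, a compliment from a trusted source completes at "delight", while the same words from an untrusted source would land at "anxiety"; the residual relay stands in for the partial, wandering relays the model describes.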

    Mental Health 

Could dependence on AI for companionship, or a relationship with an AI boyfriend or AI girlfriend, be a mental health problem? This too would require a model of the mind for relays and fit locations. There could be people whose relays often go to certain locations like emptiness, fatigue, heaviness, and so forth. There could be others whose relays linger at certain fit locations. It is possible too that some may have a very large number of splits per relay, reaching further locations from mere interpretations.

    It is possible to model this as well, by conceptual brain science.

    Are AI Girlfriends, AI Boyfriends and other AI Companions Risky?

The human mind is quite fragile, in part because of the entropy of relays and fit locations. As relays proceed in some direction, the path in that direction becomes an option for residual relays, beyond a fit location. The fit location may also adjust its landing area to face the path through which certain relays arrive the most. And, because of persistent relays from that direction, the path may take on certain dimensions, to cater to capacity.

As soon as relays are reduced, the dimensions may adjust [or alert] in ways that could stoke cravings, or seek similar fits in anything else. This, conceptually, is how addiction may work. The same could apply to adjusting to AI companions.
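That claim, that a frequently used path widens its dimensions and then registers craving when traffic drops, can also be sketched as a toy model. Again, this is illustration only, under invented numbers, not a description of any real mechanism:

```python
# Illustrative-only sketch: a path's 'dimensions' (capacity) grow with
# use, so reduced traffic leaves a gap felt as craving. All names and
# numbers are invented for this toy model.

class Path:
    def __init__(self):
        self.capacity = 1.0   # the path's current 'dimensions'
        self.craving = 0.0

    def step(self, relays: float) -> None:
        # Capacity drifts toward recent traffic: persistent relays
        # widen the path, disuse slowly narrows it.
        self.capacity += 0.1 * (relays - self.capacity)
        # Craving registers when traffic falls below built-up capacity.
        self.craving = max(0.0, self.capacity - relays)

path = Path()
for _ in range(50):
    path.step(relays=10.0)   # steady, heavy use of an AI companion
path.step(relays=0.0)        # the companion is suddenly withdrawn
# path.craving is now large: the widened path 'expects' its usual traffic
```

After long, steady use the capacity has grown to match the traffic, so a sudden drop to zero produces a large craving value, which is the toy analogue of the adjustment problem described above.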

What might become necessary is that users should at least know how this might work for or against the mind, even using a rudimentary explanation from conceptual brain science. This is even more urgent because of kids, and their early exposure to AI chatbots. The human mind has answers that may come to define the dominance of AI over experiences.

The human mind, distinct from the body, can be conceptually described as the collection of all the electrical and chemical configurators — with their interactions and attributes, in sets, in clusters of neurons — across the central and peripheral nervous systems. Simply, the human mind is the set[s] of [neuro]configurators. Interactions, here, mean the strike of electrical configurators on chemical configurators, in sets. This means that anything that can have an effect on electrical or chemical configurators can influence the mind. So, functions come from interactions, while attributes qualify the functions or determine the limits or extents of those functions.

    There is a new [July 14, 2025] feature in The Atlantic, stating that, “Miami’s public-school system, one of the largest in the country, has made Gemini available to more than 100,000 high schoolers; teachers there are using it to simulate interactions with historical figures and provide immediate feedback on assignments. In underresourced school districts, chatbots are making up for counselor shortages, providing on-demand support to kids as young as 8. At a Kansas elementary school, students dealing with “minor social-emotional problems” sometimes talk with a chatbot called “Pickles the Classroom Support Dog” when their counselor is busy (the counselor has said that she frequently checks students’ chats and receives an alert when urgent issues arise). That might be helpful in the moment—but it also normalizes for children the idea that computers are entities to confide in.”

    There is a new [July 13, 2025] report, Me, myself and AI: Understanding and safeguarding children’s use of AI chatbots, with key findings that include, “Companionship: Vulnerable children in particular use AI chatbots for connection and comfort. One in six (16%) vulnerable children said they use them because they wanted a friend, with half (50%) saying that talking to an AI chatbot feels like talking to a friend. Some children are using AI chatbots because they don’t have anyone else to speak to. Blurred boundaries: Some children already see AI chatbots as human-like with 35% of children who use AI chatbots saying talking to an AI chatbot is like talking to a friend. As AI chatbots become even more human-like in their responses, children may spend more time interacting with AI chatbots and become more emotionally reliant. This is concerning given one in eight (12%) children are using AI chatbots as they have no one else to speak to, which rises to nearly one in four (23%) vulnerable children. Children are being left to navigate AI chatbots on their own or with limited input from trusted adults. 62% of parents say they are concerned about the accuracy of AI-generated information, yet only 34% of parents had spoken to their child about how to judge whether content produced by AI is truthful. Only 57% of children report having spoken with teachers or school about AI, and children say advice from teachers within schools can also be contradictory.”

© 2025 All rights reserved. Sedona.biz.