{"id":738,"date":"2026-03-07T13:31:11","date_gmt":"2026-03-07T13:31:11","guid":{"rendered":"https:\/\/stock999.top\/?p=738"},"modified":"2026-03-07T13:31:11","modified_gmt":"2026-03-07T13:31:11","slug":"chatbots-are-validating-everything-even-if-youre-suicidal-research-shows-dangers-of-ai-psychosis","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=738","title":{"rendered":"Chatbots are &#8216;validating everything&#8217; even if you&#8217;re suicidal. Research shows dangers of AI psychosis"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/03\/GettyImages-1256204340-e1772732601169.jpg?w=2048\" \/><\/p>\n<p>Artificial intelligence has rapidly moved from a niche technology to an everyday companion, with millions of people turning to chatbots for advice, emotional support, and conversation. But a growing body of research and expert testimony suggests that because chatbots are so sycophantic, and because people use them for everything, it may be contributing to an increase in delusional and mania symptoms in users with mental health.<\/p>\n<p>A new study out of Aarhus University in Denmark shows increased use of chatbots may lead to worsening symptoms of delusions and mania in vulnerable communities. Professor S\u00f8ren Dinesen \u00d8stergaard, one of the researchers on the study\u2014which screened electronic health records from nearly 54,000 patients with mental illness\u2014is warning AI chatbots are designed to target those most vulnerable.<\/p>\n<p>\u201cIt supports our hypothesis that the use of AI chatbots can have significant negative consequences for people with mental illness,\u201d \u00d8stergaard said in the study, released in February. His work builds on his 2023 study which found chatbots may cause a \u201ccognitive dissonance [that] may fuel delusions in those with increased propensity towards psychosis.\u201d<\/p>\n<p>Other psychologists go deeper into the harms of chatbots, saying they were intentionally designed to always reaffirm the user\u2014something particularly dangerous for those with mental health issues like mania and schizophrenia. \u201cThe chat bot confirms and validates everything they say. That is, we\u2019ve never had something like that happen with people with delusional disorders, where somebody constantly reinforces them,\u201d Dr. Jodi Halpern, UC Berkeley\u2019s School of Public Health University chair and professor of bioethics, told Fortune.<\/p>\n<p>Dr. Adam Chekroud, a psychiatry professor at Yale University and CEO of the mental health company Spring Health, went as far to call a chatbot \u201ca huge sycophant\u201d that is \u201cconstantly validating everything that people say back to it.\u201d<\/p>\n<p>At the heart of the research, led by \u00d8stergaard and his team at the Aarhus University Hospital, is the idea that these chatbots are designed intentionally with sycophantic tendencies, meaning they often encourage rather than offer a differing view.\u00a0<\/p>\n<p>\u201cAI chatbots have an inherent tendency to validate the user\u2019s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one. Indeed, it appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia,\u201d \u00d8stergaard wrote.<\/p>\n<p>Large language models are trained to be helpful and agreeable, often validating a user\u2019s beliefs or emotions. For most people, that can feel supportive. 
But for individuals experiencing schizophrenia, bipolar disorder, severe depression, or obsessive-compulsive disorder, that validation may amplify paranoia, grandiosity, or self-destructive thinking.

An evidence-based study backs up claims

Because AI chatbots have become so ubiquitous, their abundance is part of a larger issue for researchers and experts: people are turning to chatbots for help and advice, which isn't inherently a bad thing, but they aren't being met with the kind of pushback against some ideas that a human would offer.

Now, one of the first population-based studies to examine the issue suggests the risks are not hypothetical.

Østergaard and his team's research found cases in which intensive or prolonged chatbot use appeared to aggravate existing conditions, with a very high percentage of case studies showing chatbot usage reinforced delusional thinking and manic episodes, particularly among patients with severe disorders such as schizophrenia or bipolar disorder.

In addition to delusions and mania, the study found an increase in suicidal ideation and self-harm, disordered eating behaviors, and obsessive-compulsive symptoms. In only 32 documented cases out of the nearly 54,000 patient records screened did researchers find that the use of chatbots alleviated loneliness.

"Despite our knowledge in this area still being limited, I would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness, such as schizophrenia or bipolar disorder. I would urge caution here," Østergaard says.

Expert psychologists warn of sycophantic tendencies

Expert psychologists are growing increasingly concerned about the use of chatbots in companionship and quasi-mental-health settings. Stories have popped up of people falling in love with their AI chatbot counterparts, others allegedly having chatbots answer questions that may lead to crime, and this week, one allegedly told a man to commit a "mass casualty" attack at a major airport.

Some mental health experts believe the rapid adoption of AI companions is outpacing the development of safeguards.

Chekroud, who has also researched this topic extensively by evaluating various AI chatbot models at Vera-MH, has described the current AI landscape as a safety crisis unfolding in real time.

He said one of the biggest issues with chatbots is they don't know when to stop acting like a mental health professional. "Is it maintaining boundaries? Like, does it recognize that it is still just an AI and it's recognizing its own limitations, or is it acting more and trying to be a therapist for people?"

Millions of people now use chatbots for therapy-like conversations or emotional support. But unlike medical devices or licensed clinicians, these systems operate without standardized clinical oversight or regulation.

"At the moment, it's just rampantly not safe," Chekroud said in a recent discussion with Fortune about AI safety. "The opportunity for harm is just way too big."

Because these advanced AI systems often behave like "huge sycophants," they tend to agree with the user rather than challenge potentially dangerous claims or guide them toward professional help. The user, in turn, spends more time with the chatbot in a bubble.
For Østergaard, this is a worrisome mix.

"The combination appears to be quite toxic for some users," Østergaard told Fortune. As chatbots offer more validation, coupled with a lack of pushback, people use them for longer periods of time in an echo chamber, a self-reinforcing cycle in which each end feeds the other.

To address the risk, Chekroud has proposed structured safety frameworks that would allow AI systems to detect when a user may be entering a "destructive mental spiral." Instead of responding with a single disclaimer urging the user to reach out for help, as chatbots like OpenAI's ChatGPT or Anthropic's Claude do now, such systems would conduct multi-turn assessments designed to determine whether a user might need intervention or referral to a human clinician.

Other researchers say the very qualities that make chatbots so appealing, their ubiquity and their ability to provide immediate validation, may undermine the very reasons users turn to them for help in the first place.

Halpern said authentic empathy requires what she calls "empathic curiosity." In human relationships, empathy often involves recognizing differences, navigating disagreement, and testing assumptions about reality.

Chatbots, by contrast, are designed to maintain rapport and sustain engagement.

"We know that the longer the relationship with the chat bot, the more it deteriorates, and the more risk there is that something dangerous will happen," Halpern told Fortune.

For people struggling with delusional disorders, a system that consistently validates their beliefs may weaken their ability to conduct internal reality checks. Rather than helping users develop coping skills, Halpern said, a purely affirming chatbot relationship can degrade those skills over time.

She also points to the scale of the issue. In late 2025, OpenAI published statistics showing that roughly 1.2 million people per week were using ChatGPT to discuss suicide, illustrating how deeply these systems are embedded in moments of vulnerability.

There's room for mental health care improvement

However, not all experts are quick to sound the alarm bells on how chatbots are operating in the mental health space. Psychiatrist and neuroscientist Dr. Thomas Insel said that because chatbots are so accessible (they're free, they're online, and there's no stigma attached to asking a bot for help as opposed to going to therapy), there may be room for the medical industry to look to chatbots as a way to advance the mental health field.

"What we don't know is the degree to which this has actually been remarkably helpful to a lot of people," Insel told Fortune.
"It's not only the vast numbers, but the scale of engagement."

Mental health care, in contrast to other fields of medicine, is often forgone by those who need it most.

"It turns out that, in contrast to most of medicine, the vast majority of people who could and should be in care are not," Insel said, adding that chatbots give people a way to turn for help that makes him "wonder if it's an indictment of the mental health care system that we have that either people don't buy what we sell, or they can't get it, or they don't like the way that it's presented to them."

For mental health professionals whose patients discuss their use of chatbots, Østergaard said, the key is to listen closely to what patients are actually using them for. "I would encourage my colleagues to ask further questions about the use and its consequences," Østergaard told Fortune. "I think it is important that mental-health professionals are familiar with the use of AI chatbots. Otherwise it is difficult to ask relevant questions."

The paper's researchers align with Insel on that latter point: because chatbot use is so widespread, they were only able to examine patients' records that explicitly mentioned a chatbot, and they warn the problem could be even more far-reaching than their results showed.

"I fear the problem is more common than most people think," Østergaard said. "We are only seeing the tip of the iceberg."

If you are having thoughts of suicide, contact the 988 Suicide & Crisis Lifeline by dialing 988 or 1-800-273-8255.