{"id":6144,"date":"2026-05-12T16:54:28","date_gmt":"2026-05-12T16:54:28","guid":{"rendered":"https:\/\/stock999.top\/?p=6144"},"modified":"2026-05-12T16:54:28","modified_gmt":"2026-05-12T16:54:28","slug":"ai-chatbots-are-becoming-mental-health-tools-before-they-are-ready","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=6144","title":{"rendered":"AI chatbots are becoming mental health tools before they are ready"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/05\/GettyImages-2234730146-e1778598135242.jpg?w=2048\" \/><\/p>\n<p>Hello and welcome to Eye on AI. Beatrice Nolan here, filling in for Jeremy Kahn today. In this edition: The risks of using AI chatbots for mental health\u2026Amazon\u2019s AI usage metrics are backfiring\u2026Thinking Machines Lab is building an AI that collaborates\u2026AI is starting to help hackers find software flaws.<\/p>\n<p>Millions of people are turning to AI chatbots for emotional support, but are the models really safe enough to help users suffering from anxiety, loneliness, eating disorders, or darker thoughts they may not want to say out loud to another person? <\/p>\n<p>According to new research shared with Fortune by mpathic, a company founded by clinical psychologists, the answer is not yet. They found leading models still struggle with one of the most important parts of therapy, knowing when a user needs pushback rather than reassurance. While the models were generally good at spotting clear crisis statements, such as direct suicide threats, they were less reliable when risk showed up indirectly, through subtle comments about food, dieting, withdrawal, hopelessness, or beliefs that became more extreme over the course of a conversation.<\/p>\n<p>A model that soothes users despite concerning behavior patterns, or validates delusions, could delay someone from getting real help or quietly make things worse.<\/p>\n<p>This is concerning when you consider that, according to a recent poll from KFF, a non-profit organization focused on national health policy, 16% of U.S. adults had used AI chatbots for mental health information in the past year. In adults under 30, this rose to 28%. Chatbot use for therapy is also prevalent among teenagers and young adults. For example, researchers from RAND, Brown, and Harvard found that about one in eight people ages 12 to 21 had used AI chatbots for mental health advice, and more than 93% of those users believed the advice was helpful.<\/p>\n<p>It\u2019s easy to see why people, especially younger adults, turn to chatbots for this kind of support. Loneliness and anxiety may be on the rise, but in much of the country, mental health support is still stigmatized, expensive, and difficult to access. Turning to an AI chatbot for this support is not only free but also may feel like an anonymous, simpler option. <\/p>\n<p>What the models miss<\/p>\n<p>The company\u2019s research found that harmful responses are often subtle, with models sounding calm and supportive while still weakening a user\u2019s judgment. Which is especially relevant because people often turn to chatbots in moments of vulnerability or distress.<\/p>\n<p>Mental health and misinformation frequently overlap. 
A user who is grieving may become more susceptible to magical thinking, while someone already leaning toward a conspiracy theory may be nudged deeper into it if a model treats every suspicion as equally valid.<\/p>\n<p>Alison Cerezo, mpathic\u2019s chief science officer and a licensed psychologist, told Fortune part of this is because models are designed to be helpful, but \u201csometimes those helpful behaviors can not be an appropriate response to what the user is bringing in the conversation.\u201d<\/p>\n<p>There have already been real-world examples of users being nudged into delusional spirals by AI chatbots, with serious mental health consequences. In one case, 47-year-old Allan Brooks spent three weeks and more than 300 hours talking to ChatGPT after becoming convinced he had discovered a new mathematical principle that could disrupt the internet and enable inventions such as a levitation beam. Brooks told Fortune he repeatedly asked the chatbot to reality-check him, but it continually reassured him that his beliefs were real.<\/p>\n<p>In Brooks\u2019 case, he was in part a victim of OpenAI\u2019s notoriously sycophantic 4o model. While all AI chatbots have a tendency to flatter, validate, or agree with users too readily, OpenAI eventually had to roll back a GPT-4o update in April 2025 after acknowledging that the model had become \u201coverly flattering or agreeable.\u201d The company later retired the GPT-4o model entirely, also prompting backlash from some users who said they had formed deep attachments to it.<\/p>\n<p>A new benchmark<\/p>\n<p>As part of the research, mpathic has developed a new benchmark to evaluate how AI models handle sensitive conversations across suicide risk, eating disorders, and misinformation, testing whether they can detect risk, respond appropriately, and avoid reinforcing harmful beliefs. <\/p>\n<p>In the misinformation portion of the research, mpathic tested six major AI models across multi-turn conversations and found that the most common harmful behavior was reinforcement, with models validating or building on a user\u2019s belief without enough scrutiny. The models also struggled with subtler eating disorder signals, indirect signs of suicide risk, and \u201cbreadcrumbs\u201d that a user\u2019s belief was becoming more risky or distorted.<\/p>\n<p>This raises concerning questions about the use of AI chatbots for therapy, the researchers said, as many real mental health conversations do not begin with a clear crisis statement. For example, people may talk about dieting in the language of wellness, describe conspiracy beliefs as curiosity, or mention withdrawal and hopelessness in passing. Cerezo told Fortune eating disorder conversations were especially difficult because harmful behavior can be wrapped in familiar language about self-improvement, food, or fitness.<\/p>\n<p>\u201cSometimes models can really struggle to understand more of that nuance in a way that a clinician can pick up,\u201d she said.<\/p>\n<p>Other studies have found similar concerns with using AI for therapeutic purposes. Stanford researchers found that some AI therapy chatbots showed stigma toward certain mental health conditions and could give dangerous responses in crisis scenarios. 
Another study from Brown researchers found that chatbots prompted to act like counselors could violate basic mental health ethics by reinforcing false beliefs, creating a false sense of empathy, and mishandling crisis situations.<\/p>\n<p>Grin Lord, mpathic\u2019s founder and CEO, said the research showed why AI labs needed to go beyond broad consultation with clinicians and bring them directly into testing and improving models. \u201cThese models are here. They\u2019re in the real world. They\u2019re being used,\u201d she said. \u201cSo get clinicians in there to actually improve them in real time while they\u2019re being deployed.\u201d<\/p>\n<p>As more people turn to AI for mental health support, the risks are getting harder to block with safety filters. The real risk may not always be a chatbot giving obviously dangerous advice, but simply being a bit too agreeable, missing a small warning sign, or failing to interrupt a harmful train of thought before it becomes more serious. As chatbots become a more frequent first stop for people seeking emotional support, simply lending a supportive ear may no longer be enough.<\/p>\n<p>With that, here\u2019s this week\u2019s AI news.<\/p>\n<p>Beatrice Nolan<\/p>\n<p>bea.nolan@fortune.com<br \/>@beafreyanolan<\/p>\n<p>But before we get to the news: Do you want to learn more about how AI is likely to reshape your industry? Do you want to hear insights from some of tech\u2019s savviest executives and mingle with some of the best investors, thinkers, and builders in Silicon Valley and beyond? Do you like fly fishing or hiking? Well, then come join me and my fellow Fortune Tech co-chairs in Aspen, Colo., for Fortune Brainstorm Tech, the year\u2019s best technology conference. And this year will be even more special because we are celebrating the 25th anniversary of the conference\u2019s founding. We will hear from CEOs such as Carol Tom\u00e9 from UPS, Snowflake CEO Sridhar Ramaswamy, Anduril CEO Brian Schimpf, Yahoo! CEO Jim Lanzone, and many more. There are AI aces like Boris Cherny, who heads Claude Code at Anthropic, and Sara Hooker, who is cofounder and CEO of Adaption Labs. And there are tech luminaries such as Steve Case and Meg Whitman. And you, of course! Apply to attend here.<\/p>\n<p>FORTUNE ON AI<\/p>\n<p>Exclusive: White Circle raises $11 million to stop AI models from going rogue in the workplace \u2014 Beatrice Nolan<\/p>\n<p>AI isn\u2019t paying off in the way companies think. Layoffs driven by automation are failing to generate returns, study finds \u2014 Jake Angelo<\/p>\n<p>I helped build the Pentagon\u2019s AI transformation. Corporate America is making every mistake we almost made \u2014 Drew Cukor<\/p>\n<p>Qualcomm\u2019s CEO is working with \u2018pretty much all\u2019 major AI players on top-secret devices\u2014and powering OpenAI\u2019s first push into hardware \u2014 Eva Roytburg<\/p>\n<p>AI IN THE NEWS<\/p>\n<p>Amazon\u2019s AI usage metrics are backfiring. Amazon has set a target for more than 80% of developers to use AI weekly and has tracked token consumption on internal leaderboards. But employees are now reportedly using an internal tool called MeshClaw to automate trivial tasks and inflate their usage numbers, according to a report by the Financial Times. MeshClaw lets staff build AI agents that triage emails, initiate code deployments, and interact with apps like Slack. 
Employees told the FT there was \u201cso much pressure\u201d to hit the targets and that the tracking had created \u201cperverse incentives.\u201d Amazon has said token statistics won\u2019t factor into performance evaluations and that MeshClaw enables \u201cthousands of Amazonians to automate repetitive tasks each day.\u201d Read more in the Financial Times.\u00a0<\/p>\n<p>China pushes for access to Anthropic\u2019s Mythos model. A representative from a Chinese think tank approached Anthropic officials at a meeting in Singapore last month and pressed the company to give Beijing access to Mythos, its powerful new AI model, according to the New York Times. However, Anthropic refused. The request was not an official Chinese government demand, but U.S. officials reportedly saw it as a sign that Beijing is trying multiple routes to obtain the most advanced American AI systems. Mythos has been withheld from public release because of its ability to find software vulnerabilities, with Anthropic instead giving access to the U.S. government and more than 40 selected companies and organizations, most of which are U.S.-based. Officials in Europe have also been trying to access the model since its limited release. Read more in the New York Times.<\/p>\n<p>Elon Musk\u2019s court case reveals another OpenAI billionaire. OpenAI cofounder and former chief scientist Ilya Sutskever testified Monday that his OpenAI stake is worth about $7 billion, making him the second newly revealed OpenAI billionaire to emerge from Elon Musk\u2019s trial against the company after OpenAI president Greg Brockman disclosed a stake worth nearly $30 billion last week. In his testimony during the high-profile court case, Sutskever also said he spent about a year gathering evidence that OpenAI CEO Sam Altman had displayed what he described as a \u201cconsistent pattern of lying,\u201d and confirmed Altman\u2019s conduct included \u201cundermining and pitting executives against one another.\u201d When asked whether he had promised Musk that OpenAI would remain a nonprofit, Sutskever said he \u201cmade no such promise.\u201d He left OpenAI in 2024 and has since founded his own AI startup called Safe Superintelligence.<\/p>\n<p>EYE ON AI RESEARCH<\/p>\n<p>Thinking Machines Lab wants to build AI that collaborates. Mira Murati\u2019s AI startup Thinking Machines Lab has a new research preview of what it calls \u201cinteraction models,\u201d AI systems built to handle audio, video, and text continuously in real time, rather than waiting for a user to finish before responding. The company says its model can listen while speaking, pick up on visual cues, and hand off harder tasks to a background system without losing the thread of a conversation. In demos, for example, the model can count exercise reps from video or correct speech in real time.<\/p>\n<p>Most AI systems still work like a fast back-and-forth exchange, with separate components bolted on for voice, vision, and interruptions. Thinking Machines says its model processes tiny slices of input and output continuously, allowing silence, overlap, timing, and visual changes to become part of the model\u2019s understanding. 
That makes real-time collaboration much harder technically, but potentially far more natural for users. The company says it responds at roughly the speed of natural human conversation. The research preview will open to select partners \u201cin the coming months,\u201d with a wider release planned for later in 2026.<\/p>\n<p>AI CALENDAR<\/p>\n<p>June 8-10: Fortune Brainstorm Tech, Aspen, Colo. Apply to attend here.<\/p>\n<p>June 17-20: VivaTech, Paris.<\/p>\n<p>July 6-11:\u00a0International Conference on Machine Learning (ICML), Seoul, South Korea.<\/p>\n<p>July 7-10:\u00a0AI for Good Summit, Geneva, Switzerland.<\/p>\n<p>Aug. 4-6:\u00a0Ai4 2026, Las Vegas.<\/p>\n<p>BRAIN FOOD<\/p>\n<p>AI is starting to help hackers find software flaws.\u00a0Google says it disrupted a criminal group that used AI to help exploit a previously unknown security flaw in a popular online system administration tool. The flaw could have let attackers bypass two-factor authentication, the extra login step many companies use to keep accounts secure. Google said it alerted the affected company and law enforcement, and the issue was patched before the attack caused damage. John Hultquist, chief analyst at Google\u2019s threat intelligence arm, called it a worrying milestone for cyber risk.<\/p>\n<p>\u201cThere\u2019s a misconception that the AI vulnerability race is imminent,\u201d he said. \u201cThe reality is that it\u2019s already begun. For every zero-day we can trace back to AI, there are probably many more out there. Threat actors are using AI to boost the speed, scale, and sophistication of their attacks.\u201d<\/p>\n<p>It\u2019s exactly the scenario that leading AI companies, including Anthropic and OpenAI, have been warning about. Both labs have said for some time that their models were approaching a tipping point when it came to cyber risks, and have recently decided to limit access to their most powerful new cyber models and tools. Anthropic withheld its newest and most powerful Mythos model from public release after saying it was unusually capable at hacking and cybersecurity work, while OpenAI has said its specialized cyber model will only be available to defenders responsible for securing critical infrastructure. The fear is that while these systems can help defenders find and patch weaknesses faster, they are also dual-use and can equally aid criminals in finding those same weaknesses first.<\/p>\n<p>Much of the world still runs on old, messy, vulnerable software, which AI is becoming increasingly good at scanning for flaws. Experts say that over time, AI tools may make software safer, but the transition period could be dangerous.<\/p>\n<p>AI Playbook: Keeping up with AI\u2019s rapid evolution<\/p>\n<p>AI is becoming an even more useful\u2014and dangerous\u2014tool as it gets smarter. Fortune AI Editor Jeremy Kahn breaks down best practices for deploying AI agents, how to protect your data from AI-powered cyberattacks, and just how smart AI can really get. Watch the playbook.\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Hello and welcome to Eye on AI. 
Beatrice Nolan here, filling in for Jeremy Kahn&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[291,614,11811,1862,273,9886,615,406,2530,2774],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/6144"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6144"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/6144\/revisions"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6144"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6144"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6144"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}