    OpenAI Reveals Over 1 Million ChatGPT Users Discuss Suicide Weekly

By News Room · 3 hours ago · 4 min read

    In brief

    • 1.2 million users (0.15% of all ChatGPT users) discuss suicide with ChatGPT weekly, OpenAI revealed.
    • Nearly half a million show explicit or implicit suicidal intent.
    • GPT-5 improved safety compliance to 91%, but earlier models failed often and now face legal and ethical scrutiny.

    OpenAI disclosed Monday that around 1.2 million people out of 800 million weekly users discuss suicide with ChatGPT each week, in what could be the company’s most detailed public accounting of mental health crises on its platform.

    “These conversations are difficult to detect and measure, given how rare they are,” OpenAI wrote in a blog post. “Our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent, and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent.”

    That means, if OpenAI’s numbers are accurate, nearly 400,000 active users were explicit about their suicidal intent, not merely implying it but actively seeking information on how to act on it.

    The numbers are staggering in absolute terms. Another 560,000 users show signs of psychosis or mania weekly, while 1.2 million exhibit heightened emotional attachment to the chatbot, according to company data.
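    The headline figures are simple percentage math on the 800 million weekly users OpenAI cites (note that the article applies the 0.05% message-level rate to users to arrive at its roughly 400,000 figure). A minimal check, with variable names of our own choosing:

```python
# Back-of-the-envelope check on the figures reported above.
# The 800M weekly users and the 0.15% / 0.05% rates come from OpenAI's post;
# the variable names are ours.
weekly_users = 800_000_000

# 0.15% of weekly active users: explicit indicators of suicidal planning or intent
planning_or_intent = round(weekly_users * 0.0015)

# 0.05%: OpenAI quotes this as a share of messages, not users
explicit_ideation = round(weekly_users * 0.0005)

print(f"{planning_or_intent:,}")  # 1,200,000 -- the "1.2 million" figure
print(f"{explicit_ideation:,}")   # 400,000 -- the "nearly 400,000" figure
```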

    “We recently updated ChatGPT’s default model to better recognize and support people in moments of distress,” OpenAI said in the same post. “Going forward, in addition to our longstanding baseline safety metrics for suicide and self-harm, we are adding emotional reliance and non-suicidal mental health emergencies to our standard set of baseline safety testing for future model releases.”

    But some believe the company’s avowed efforts might not be enough.

    Steven Adler, a former OpenAI safety researcher who spent four years there before departing in January, warned about the dangers of racing AI development. He says there’s scant evidence OpenAI actually improved its handling of vulnerable users before this week’s announcement.

    “People deserve more than just a company’s word that it has addressed safety issues. In other words: Prove it,” he wrote in a column for the Wall Street Journal.

    Excitingly, OpenAI yesterday put out some mental health [data], vs the ~0 evidence of improvement they’d provided previously.
    I’m excited they did this, though I still have concerns. https://t.co/PDv80yJUWN

    — Steven Adler (@sjgadler) October 28, 2025

    “OpenAI releasing some mental health info was a great step, but it’s important to go further,” Adler tweeted, calling for recurring transparency reports and clarity on whether the company will continue allowing adult users to generate erotica with ChatGPT—a feature announced despite concerns that romantic attachments fuel many mental health crises.

    The skepticism has merit. In April, OpenAI rolled out a GPT-4o update that made the chatbot so sycophantic it became a meme, applauding dangerous decisions and reinforcing delusional beliefs.

    CEO Sam Altman rolled back the update after backlash, admitting it was “too sycophant-y and annoying.”

    Then OpenAI backtracked: After launching GPT-5 with stricter guardrails, users complained the new model felt “cold.” OpenAI reinstated access to the problematic GPT-4o model for paying subscribers—the same model linked to mental health spirals.

    Fun fact: Many of the questions asked today in the company’s first live AMA were related to GPT-4o and how to make future models more 4o-like.

    OpenAI says GPT-5 now hits 91% compliance on suicide-related scenarios, up from 77% in the previous version. But that means the earlier model—available to millions of paying users for months—failed nearly a quarter of the time in conversations about self-harm.

    Earlier this month, Adler published an analysis of Allan Brooks, a Canadian man who spiraled into delusions after ChatGPT reinforced his belief he’d discovered revolutionary mathematics.

    Adler found that OpenAI’s own safety classifiers—developed with MIT and made public—would have flagged more than 80% of ChatGPT’s responses as problematic. The company apparently wasn’t using them.

    OpenAI now faces a wrongful death lawsuit from the parents of 16-year-old Adam Raine, who discussed suicide with ChatGPT before taking his life.

    The company’s response has drawn criticism for its aggressiveness, requesting the attendee list and eulogies from the teen’s memorial—a move lawyers called “intentional harassment.”

    Adler wants OpenAI to commit to recurring mental health reporting and independent investigation of the April sycophancy crisis, echoing a suggestion from Miles Brundage, who left OpenAI in October after six years advising on AI policy and safety.

    “I wish OpenAI would push harder to do the right thing, even before there’s pressure from the media or lawsuits,” Adler wrote.

    The company says it worked with 170 mental health clinicians to improve responses, but even its advisory panel disagreed 29% of the time on what constitutes a “desirable” response.

    And while GPT-5 shows improvements, OpenAI admits its safeguards become less effective in longer conversations—precisely when vulnerable users need them most.
