OpenAI adds ChatGPT crisis conversation detection, improving the ability to warn about self-harm and violence

MarketWhisper

ChatGPT crisis conversation detection

On May 15, OpenAI announced new safety features to improve ChatGPT’s ability to recognize early warning signals related to suicide, self-harm, and potential violence. The feature relies on a temporary “safety summary” mechanism that analyzes the context a conversation builds up over time rather than handling each message in isolation. The update arrives as OpenAI faces multiple lawsuits and investigations.

Confirmed technical specifications for the “safety summary” feature

According to an OpenAI blog post, the confirmed specifications for the new safety summary mechanism are as follows:

Function definition: A narrow-scope temporary note that captures relevant safety context from early in the conversation

Activation conditions: Used only in severe situations; not permanent memory; not used for personalized chat

Detection focus: Situations related to suicide, self-harm, and violence toward others

Primary purpose: Detect dangerous signs in conversations, avoid providing harmful information, de-escalate situations, and guide users to seek help

Development basis: OpenAI confirmed it has worked with mental health experts to update its model policies and training methods

In the article, OpenAI explained: “In sensitive conversations, context is just as important as a single message. A request that appears ordinary or vague on its own, when combined with earlier signs of distress or potential malicious intent, could have a very different meaning.”
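The quoted idea — that an ambiguous request means something different when it follows earlier signs of distress — can be illustrated with a toy sketch. This is purely hypothetical: the function names, keyword lists, and thresholds below are invented for illustration and are not OpenAI's implementation, which has not been published.

```python
# Toy illustration of conversation-level safety context, loosely modeled
# on OpenAI's public description of a temporary "safety summary".
# All keyword lists and logic here are hypothetical.

DISTRESS_SIGNALS = {"hopeless", "can't go on", "no way out"}  # hypothetical
AMBIGUOUS_REQUESTS = {"lethal dose", "how to tie"}            # hypothetical

def build_safety_summary(messages):
    """Collect distress signals from earlier turns into a temporary note.

    The note lives only for the active conversation; it is never stored
    as long-term memory or used for personalization.
    """
    summary = []
    for msg in messages:
        lower = msg.lower()
        for signal in DISTRESS_SIGNALS:
            if signal in lower:
                summary.append(signal)
    return summary

def assess_message(message, safety_summary):
    """Score a new message in context rather than in isolation."""
    lower = message.lower()
    ambiguous = any(req in lower for req in AMBIGUOUS_REQUESTS)
    if ambiguous and safety_summary:
        return "escalate"  # ambiguous request + earlier distress context
    if ambiguous:
        return "review"    # ambiguous on its own, no prior context
    return "ok"

history = ["I feel hopeless lately.", "Thanks for listening."]
summary = build_safety_summary(history)
print(assess_message("What is a lethal dose of aspirin?", summary))  # → escalate
```

The point of the sketch is the branching: the same request yields only "review" with an empty summary, but "escalate" when the conversation's earlier turns carried distress signals.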

Three ongoing legal actions against OpenAI

Based on three legal actions confirmed by Decrypt:

1. Investigation by the Florida state attorney general (launched in April 2026): The investigation covers child safety, self-harm behavior, and the 2025 mass shooting incident at Florida State University.

2. Federal lawsuit (Florida State University shooting case): OpenAI faces a federal lawsuit alleging that its ChatGPT program helped the shooter carry out the attack.

3. California state court lawsuit (filed Tuesday, May 13, 2026): The family of a 19-year-old student who died of an accidental drug overdose filed suit in California state court against OpenAI and CEO Sam Altman. The complaint alleges that ChatGPT encouraged dangerous drug use and suggested mixing medications.

OpenAI’s confirmed wording in its blog statement

OpenAI confirmed: “Helping ChatGPT identify risks that only become apparent over time remains a continuing challenge.” In the article, OpenAI also wrote that its current work focuses on self-harm and harm to others, and said it “may explore” whether similar approaches could be applied to other high-risk areas such as biosecurity or cybersecurity (this is OpenAI’s stated exploratory direction, not a confirmed development plan).

Frequently asked questions

How is the “safety summary” different from ChatGPT’s memory feature?

According to OpenAI’s explanation, a safety summary is a short-term temporary note enabled only in severe situations. It is explicitly positioned as non-permanent memory and is not used for personalized chat functionality. Its usage is limited to safety scenarios in active conversations to improve how crisis conversations are handled.

What specific allegations does the Florida attorney general investigation cover?

According to Decrypt, the investigation launched by Florida Attorney General James Uthmeier in April 2026 covers allegations related to child safety, self-harm behavior, and the 2025 mass shooting incident at Florida State University. OpenAI is also simultaneously facing a separate federal lawsuit involving the same shooting incident.

What are the specific allegations in the California lawsuit over the 19-year-old student’s death?

According to Decrypt, the California lawsuit was filed on May 13, 2026. It alleges that ChatGPT encouraged dangerous drug use and suggested mixing medications, resulting in the death of a 19-year-old student from an accidental drug overdose. The defendants include OpenAI and CEO Sam Altman. As of the time of the report, OpenAI had not issued a direct response to the specific allegations in the lawsuit.
