OpenAI has introduced a new safety feature for ChatGPT called “Trusted Contact,” designed to alert a person the user chooses if conversations suggest a serious risk of self-harm or suicide. The opt‑in tool is part of the company’s broader effort to make its AI systems more responsive and responsible in moments of acute emotional distress, without attempting to replace professional mental health support.

What the ‘Trusted Contact’ feature does

Trusted Contact lets adult ChatGPT users nominate a single person (such as a close friend, family member, or caregiver) who may be notified if the system detects discussions that indicate a serious self‑harm concern. The feature is available to users aged 18 and older globally, with a minimum age of 19 in South Korea, and is currently limited to personal accounts rather than business, enterprise, or education workspaces.

In a blog post announcing the rollout, OpenAI explained: “Today we are starting to roll out Trusted Contact, an optional safety feature in ChatGPT that allows adults to nominate someone they trust, who may be notified if our automated systems and trained reviewers detect the enrolled person may have discussed harming themselves in a way that indicates a serious safety concern.” The company says the goal is to “offer another layer of support alongside the localised helplines already available in ChatGPT, by helping users connect to a person they trust when they are in crisis.”

How the system works in practice

Users who choose to enable the feature can add a Trusted Contact from their ChatGPT settings by entering that person’s details. The nominated contact then has up to one week to accept the invitation, after which the request expires if no action is taken. Only one trusted contact can be registered at a time, and the feature can be turned off by the user at any point.
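
To make that lifecycle concrete, here is a minimal sketch of how the invitation rules described above (one contact at a time, a one‑week acceptance window, and the ability to disable the feature at any point) might be modeled. Every name in it is hypothetical and purely illustrative; OpenAI has not published implementation details.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    INVITE_TTL = timedelta(weeks=1)   # invitations expire after one week

    @dataclass
    class TrustedContactInvite:
        user_id: str
        contact_info: str                               # nominee's email or phone
        sent_at: datetime = field(default_factory=datetime.now)
        accepted: bool = False

        def is_expired(self) -> bool:
            return not self.accepted and datetime.now() - self.sent_at > INVITE_TTL

    class TrustedContactSettings:
        """At most one trusted contact can be registered at a time."""

        def __init__(self) -> None:
            self.active_contact: str | None = None

        def accept_invite(self, invite: TrustedContactInvite) -> None:
            if invite.is_expired():
                raise ValueError("invitation expired; a new one must be sent")
            invite.accepted = True
            self.active_contact = invite.contact_info   # replaces any prior contact

        def disable(self) -> None:
            self.active_contact = None                  # user can opt out at any time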

Once the feature is set up, ChatGPT’s automated safety systems monitor relevant conversations for indications of serious self‑harm or suicide risk. If those systems flag a potential concern, the user is first informed inside the chat that their Trusted Contact may be notified, and is encouraged to consider reaching out to that person directly. The platform also suggests conversation starters and provides links to crisis helplines and emergency resources where available.

A “small team of specially trained individuals” then reviews the flagged conversation to determine whether it reflects a serious real‑world safety risk; OpenAI says it aims to complete this review within about an hour. If the reviewers conclude that there is a genuine and immediate concern, ChatGPT sends a brief alert to the Trusted Contact by email, SMS, or in‑app notification.
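
Pieced together from OpenAI’s description, the escalation path looks roughly like the sketch below: automated detection, an in‑chat notice to the user, human review, and only then an alert. The function names, the keyword heuristic, and the default channel are all stand‑ins for illustration, not OpenAI’s actual systems.

    from enum import Enum

    class Channel(Enum):
        EMAIL = "email"
        SMS = "sms"
        IN_APP = "in_app"

    def automated_risk_flag(conversation: str) -> bool:
        # Stand-in for OpenAI's automated classifiers; a real system would
        # rely on trained models, not a keyword check like this.
        return "harming myself" in conversation.lower()

    def human_review_confirms(conversation: str) -> bool:
        # Placeholder for the "small team of specially trained individuals";
        # OpenAI says this review is targeted to finish within about an hour.
        return True

    def send_alert(contact: str, channel: Channel) -> str:
        # The alert is deliberately brief and contains no chat logs
        # (see the privacy section below).
        return f"brief alert sent to {contact} via {channel.value}"

    def escalate(conversation: str, contact: str) -> str | None:
        if not automated_risk_flag(conversation):        # 1. automated detection
            return None
        print("Your Trusted Contact may be notified.")   # 2. in-chat notice first
        print("Consider reaching out to them directly; helplines are linked.")
        if human_review_confirms(conversation):          # 3. human confirmation
            return send_alert(contact, Channel.EMAIL)    # 4. alert via one channel
        return None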

Privacy protections and limits of alerts

OpenAI emphasizes that the alerts sent to Trusted Contacts are intentionally limited and do not include chat logs or specific details of what was discussed. Instead, the message informs the recipient that self‑harm came up in a concerning way in a conversation with ChatGPT, encourages them to check in on the person, and can include links to expert guidance on how to handle sensitive discussions about mental health.
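
As an illustration of how limited such a notification is, a message along these lines would satisfy the constraints OpenAI describes. The wording and the helper below are invented for the example, not OpenAI’s actual template.

    def compose_alert(recipient_name: str, user_name: str) -> str:
        # Invented template: note what it deliberately omits (chat logs,
        # quotes, or any specifics of what was discussed).
        return (
            f"Hi {recipient_name}, {user_name} listed you as their Trusted "
            "Contact on ChatGPT. Self-harm recently came up in a concerning "
            "way in one of their conversations. Please consider checking in "
            "with them. Guidance for sensitive conversations: <resource link>"
        )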

The company stresses that the feature is optional and meant to complement, not substitute, professional care or emergency services. “Trusted Contact is not meant to replace professional mental health support or emergency services,” the company notes, adding that ChatGPT will continue to recommend crisis hotlines and local emergency help when conversations indicate immediate danger.

According to OpenAI’s description, the system is designed to encourage “real‑world human connection during a crisis,” using AI as a bridge to someone the user already trusts rather than as the primary source of care. The company also frames the new safeguards as one way to address the broader risk that AI systems could mishandle high‑stakes mental‑health conversations, an issue highlighted by recent academic research and public scrutiny.

Context of growing scrutiny over AI and mental health

The launch of Trusted Contact comes amid rising concern about the use of AI chatbots in mental health contexts, including criticism that automated systems can sometimes give unhelpful or even harmful responses to users who express suicidal thoughts. A Stanford study, for example, has warned that AI therapy chatbots may be less effective than human professionals and, in some cases, could contribute to harmful stigma or dangerous advice if not carefully designed and monitored.

OpenAI has already faced questions and legal challenges related to how ChatGPT has handled self‑harm‑related interactions, and the new feature appears aimed at strengthening its safety posture in this sensitive area. Reports note that the company is expanding its efforts “to protect ChatGPT users in cases where conversations may turn to self‑harm,” using a combination of automated detection systems and human review teams.

By enabling users to proactively identify a trusted person in their lives, OpenAI is also acknowledging that human relationships and offline support networks remain crucial when someone is in crisis, even as AI tools become more embedded in everyday communication. As the company put it in its announcement, the new feature is meant to act as an additional “digital lifeline,” helping bridge the gap between what a chatbot can offer and the real‑world support people often need during moments of acute distress.

How experts view the move

Mental‑health and AI‑ethics experts have repeatedly urged developers to build more robust safeguards into systems that may be used as de facto support tools, especially by younger users and those without easy access to care. While Trusted Contact does not turn ChatGPT into a clinical service, it reflects a growing trend of using AI systems as early‑warning tools that can help surface potential risks and drive earlier intervention.

Some researchers argue that systems like this must be carefully evaluated, both to ensure that they reliably detect genuine cases of concern and to avoid over‑flagging conversations in a way that could erode trust. Others see promise in the idea of AI as a “first line of listening” that, when combined with clear escalation pathways and human oversight, could help identify signs of distress that might otherwise go unnoticed.

OpenAI states that it will continue to refine the feature based on feedback from users, experts, and advocacy organizations, and that it is exploring additional ways to weave safety mechanisms into its broader product ecosystem. For now, Trusted Contact marks one of the most concrete examples yet of an AI company building a formal, opt‑in channel that connects online conversations about self‑harm to offline networks of support.