Many people treat AI chats as a private corner of the internet — a place to process difficult thoughts in peace. But OpenAI’s leadership has repeatedly emphasized that those conversations are not legally privileged or completely confidential. What feels like a personal diary is, in reality, more like a service account that may be reviewed or escalated if certain boundaries are crossed, particularly when safety concerns arise.
Here’s what has changed, and why it matters right now.
OpenAI recently disclosed that it scans user conversations for indications of plans to harm others. Chats flagged by automated systems are routed to a small internal review team trained on company policies. That team can take actions such as banning accounts and, in cases it deems an imminent threat of serious physical harm, can escalate the matter to law enforcement.
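OpenAI has not described the internals of that pipeline, but its publicly documented Moderation API gives a rough sense of what automated harm classification looks like in practice. The sketch below calls that public endpoint purely as an illustration; it is not the internal review system described above, and the model name and any thresholds involved in escalation decisions have not been disclosed.

```python
# Illustration only: the public Moderation API classifies text into harm
# categories. It is NOT OpenAI's internal review pipeline, whose triggers
# and thresholds remain undisclosed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def check_message(text: str) -> None:
    """Print which moderation categories, if any, a message trips."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]

    if not result.flagged:
        print("No categories flagged.")
        return

    categories = result.categories.model_dump()   # e.g. {"violence": True, ...}
    scores = result.category_scores.model_dump()  # per-category confidence, 0 to 1
    for name, hit in categories.items():
        if hit:
            print(f"{name}: {scores[name]:.2f}")


check_message("I want to hurt my coworker and I have a plan.")
```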
The company clarified that it is not currently referring self-harm cases to police, citing the sensitive and private nature of such interactions and the risks posed by involuntary wellness checks. However, these chats may still be reviewed internally for safety and policy enforcement. OpenAI has not shared a detailed list of what specific triggers or thresholds would prompt escalation, leaving some uncertainty for users about how their conversations are monitored.
That ambiguity makes it hard for users to know exactly how their words are being scanned, assessed, and potentially acted upon.
Adding to this, OpenAI CEO Sam Altman recently made it clear that chats with AI do not enjoy the same protections as conversations with a lawyer, doctor, or therapist. Under current rules, those exchanges could be handed over in court proceedings if their disclosure is legally compelled.
OpenAI is also under renewed pressure to strengthen how ChatGPT handles vulnerable moments. The company says it is introducing crisis-aware responses, parental controls, and emergency resources for users showing signs of acute distress. That focus intensified after the family of a 16-year-old who died by suicide sued the company. Mental health experts have also cautioned that long back-and-forth exchanges can sometimes amplify confusion or distress if the AI falters, underscoring the need for stronger guardrails.
So, how should you approach AI chats today? Treat them as conversations that could be reviewed under certain circumstances until the rules and protections are more clearly defined. Avoid sharing sensitive details such as names, addresses, account numbers, or passwords. And if you’ve been leaning on ChatGPT as a stand-in for a therapist or lawyer, it’s best to pause — those professional confidentiality protections don’t apply here.
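For developers who send text to a chat model through the API rather than the web interface, the same caution applies. Below is a minimal, hypothetical sketch of scrubbing obvious identifiers before a message leaves your machine; the regex patterns and the scrub helper are illustrative assumptions, not a complete or reliable anonymization tool.

```python
import re

# Hypothetical helper illustrating the advice above: strip obvious identifiers
# before sending text to a chat service. The patterns are illustrative and far
# from exhaustive; they do not guarantee anonymity.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US Social Security numbers
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "[CARD]"),         # likely card/account numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\(?\b\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),  # US-style phone numbers
]


def scrub(text: str) -> str:
    """Replace obvious sensitive identifiers with placeholder tags."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text


print(scrub("My card is 4111 1111 1111 1111 and my email is jane@example.com"))
# -> "My card is [CARD] and my email is [EMAIL]"
```

Pattern-based redaction like this only catches well-formed identifiers, so it complements, rather than replaces, the habit of simply not typing sensitive details in the first place.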
At the very least, users now have a clearer understanding of what happens behind the scenes when interacting with AI systems.