What happened?

On 29 May 2025, I privately reported a vulnerability to the OpenAI disclosure mailbox via encrypted email. The flaw allows viewing chat responses intended for other users; those responses may contain personal data, confidential business plans, or proprietary code. OpenAI acknowledged receipt with an automated reply, but I have not received a human follow-up (as of 16 July 2025), and the issue remains unpatched.

Why this isn't hallucination

The leaked responses show clear signs of being real conversations: they read as contextually appropriate replies, sometimes reference the original user's question, appear in a variety of languages, and maintain a coherent conversational flow.
Most convincingly, one response contained an accurate financial analysis of an obscure company with a non-Latin name, based in a small country. When I asked my own ChatGPT for the same report with web tools disabled, it replied: "Unfortunately, I don't have specific financial statements for [company name] in my training data, and since you've asked not to use web search, I can't pull them live." This strongly suggests the original response came from a real user session with web search enabled, not from hallucination.
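For readers who want to run a similar control test, here is a minimal sketch using the official OpenAI Python SDK. My own check was done in the ChatGPT UI, so the SDK call, the model name, and the prompt wording below are assumptions for illustration; the redacted company name stays redacted.

```python
# A minimal sketch of the control test, assuming the official OpenAI
# Python SDK. The model name is a placeholder, and the company name
# is intentionally redacted.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The plain Chat Completions endpoint cannot browse the web unless you
# wire in a tool, so this asks the model to answer from training data alone.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "Without using web search, give me the latest financial "
                "report for <company name redacted>."
            ),
        }
    ],
)

# If the model cannot produce the figures here, a leaked response that
# did contain them most likely came from a session with web access.
print(response.choices[0].message.content)
```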

Why I didn't use Bugcrowd

I chose to report this vulnerability via the official disclosure email rather than through the bug bounty platform because of concerning terms in its disclosure agreement. Submitting through the portal requires you to agree not to share any information about the issue you found, essentially a blanket non-disclosure that prevents researchers from discussing their findings publicly, even after remediation.
This approach is misaligned with the broader security community's values and contrasts sharply with companies like Google, which encourage responsible disclosure and allow researchers to publish details after fixes are deployed. Transparency in security research benefits everyone: it advances collective knowledge and holds companies accountable for timely fixes.

Why speak up now?

I have now waited out the industry‑standard 45‑day disclosure window (CERT/CC; see also ISO/IEC 29147), which gives a vendor a good-faith opportunity to respond to a report. Because the vulnerability still exists and because users are unknowingly at risk, I am issuing this limited, non‑technical disclosure:
No exploit code, proof‑of‑concept, or reproduction steps are included here.
Only the fact and severity of the flaw are being disclosed.

Broader lessons

1. Best-in-class models ≠ mature security. Market leaders may have "AI‑driven" security pipelines, yet real people still need to triage, reproduce, and remediate bugs. Even well‑funded teams can leave critical tickets untouched.
2. Cloud LLMs amplify privacy stakes. Large language models ingest and generate fragments of our digital lives. A single misconfiguration can leak thousands of sensitive conversations in seconds. Treating privacy as an afterthought is untenable when the blast radius is this large.
3. Transparency builds trust. Vendors that close the loop with researchers, publish post‑mortems, and ship fixes quickly keep users safer and strengthen their platforms.

What users may want to do

Avoid sharing sensitive content with OpenAI models until an official fix or advisory is released.
Use data‑segmentation features (if available) and scrub prompts of personal identifiers; a minimal redaction sketch follows this list.
Monitor OpenAI's security page for updates or mitigation guidance.
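For the scrubbing step, here is a minimal sketch of prompt redaction, assuming you send prompts from your own code. The patterns, placeholder labels, and the scrub helper are illustrative assumptions, not an exhaustive or official tool.

```python
# A minimal redaction sketch: mask common personal identifiers before
# a prompt leaves your machine. The patterns below are illustrative,
# not exhaustive.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scrub(prompt: str) -> str:
    """Replace matches of each pattern with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt

print(scrub("Contact jane.doe@example.com or +1 (555) 010-7788."))
# -> "Contact <EMAIL> or <PHONE>."
```

Regex masking only catches well-structured identifiers; for names, addresses, and free-form details, a dedicated PII-detection library is a better fit.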

What vendors should do

Staff the security inbox with humans empowered to respond within 3–5 business days.
Publish a clear vulnerability response policy with service‑level objectives (SLOs).
Conduct periodic third‑party penetration tests that cover model‑to‑model isolation and data governance controls.
Reward good‑faith researchers instead of ignoring them; bug bounty goodwill is perishable.
Do not use bug bounty portal policies to restrict researchers from disclosing issues after remediation.

Closing

I remain ready to collaborate with the OpenAI security team and will gladly test any candidate patch. Users deserve guarantees that their private conversations stay private. Until then, caution is advised.
— A concerned security researcher
github/proton/gmail/X/whatever: requilence
PGP Key: 1234 5678 9ABC DEF0 1234 5678 9ABC DEF0 1234 5678
keybase.io/requilence