In an era where artificial intelligence is becoming increasingly integrated into everyday life, many users have begun treating AI chatbots like trusted confidants, sometimes to troubling effect. Recent revelations have exposed how a design flaw in ChatGPT’s “Share” function made thousands of conversations publicly accessible through search engines. These leaked chats, unearthed by investigative journalist Henk van Ess, reveal a startling range of prompts, from deeply personal confessions to unethical business strategies and politically sensitive inquiries.
While OpenAI has since addressed the technical issue, the incident underscores broader concerns about digital privacy, ethical use of AI, and how easily people forget that chatbots, while sophisticated, are neither private nor sentient. What’s been exposed offers a sobering look into the human-AI relationship today.
Chatbots Aren’t Confidants, But Many Users Treat Them Like One
It should go without saying: ChatGPT is not a confidant, therapist, or lawyer. It’s a machine learning model trained to predict text based on prompts. Yet, that hasn’t stopped users from treating it like a trusted advisor—sharing deeply personal, sensitive, and sometimes ethically questionable information.
Recent developments have highlighted just how far some users are willing to go in offloading everything from personal crises to outright unethical business strategies to an AI chatbot. Due to a flaw in the design of ChatGPT’s sharing functionality, some of those conversations were inadvertently made public—and indexed by search engines—providing an eye-opening glimpse into the types of questions people are posing to artificial intelligence.
A Sharing Feature That Made Private Chats Public
The issue was brought to light by Digital Digging, a Substack publication run by investigative journalist Henk van Ess. The report detailed how ChatGPT’s “Share” function, intended to allow users to pass along a snippet of a conversation, actually created a publicly accessible web page for the entire chat.
Instead of producing private, unlisted links, the feature generated pages that were indexed by major search engines. That meant anyone with the right keywords, or a little curiosity, could find and read them.
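The mechanics are worth spelling out. A publicly linked page generally stays out of search results only if the server explicitly tells crawlers not to index it. The sketch below is purely illustrative and is not OpenAI’s code: a hypothetical share endpoint that serves a chat page with the standard “X-Robots-Tag: noindex” response header, the kind of signal that, when absent, leaves a public page open to indexing.

```python
# Illustrative sketch only (not OpenAI's implementation): a hypothetical
# /share/<id> endpoint that serves a shared chat but asks search engine
# crawlers not to index it via the standard "X-Robots-Tag" response header.
# A publicly linked page served without such a signal can end up in search results.
from http.server import BaseHTTPRequestHandler, HTTPServer

SHARED_CHATS = {"abc123": "Example shared conversation text."}  # hypothetical store

class ShareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        chat_id = self.path.rstrip("/").rsplit("/", 1)[-1]
        found = chat_id in SHARED_CHATS
        body = SHARED_CHATS.get(chat_id, "Not found").encode("utf-8")
        self.send_response(200 if found else 404)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        # The key line: tell crawlers not to index or follow this page.
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serves http://localhost:8000/share/abc123 for local experimentation.
    HTTPServer(("localhost", 8000), ShareHandler).serve_forever()
```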
OpenAI has since responded. According to Dane Stuckey, OpenAI’s Chief Information Security Officer, the feature was part of a “short-lived experiment to help people discover useful conversations.” The company has since removed the ability to make chats public and has taken steps to have the indexed pages delisted from search results.
But the damage was already done. Many of the conversations were archived by platforms like Archive.org, preserving them indefinitely—and making them accessible to anyone willing to look.
Ethical Red Flags and Alarming Prompts
The leaked conversations paint a disconcerting picture not only of how people interact with AI but of the ethical boundaries they are willing to test. In one of the most egregious examples highlighted by Digital Digging, an Italian user claimed to be a lawyer representing a multinational energy corporation. The user told ChatGPT they were attempting to displace an indigenous Amazonian community to make way for a dam and hydroelectric plant.
Describing the community as “unfamiliar with the monetary value of land” and lacking market knowledge, the user asked the chatbot how to negotiate the lowest possible compensation. This kind of exploitative mindset—laid bare in a chatbot interface—would normally take months of litigation or investigative reporting to uncover. Instead, it was revealed in an AI conversation, freely available online.
Misuse by Professionals and Think Tanks
In other cases, users identifying themselves as professionals appeared to be misusing the tool. One user, claiming to work for an international think tank, walked through doomsday scenarios involving the collapse of the United States government, asking for strategic responses in case of such an event.
Another user, reportedly a lawyer, asked ChatGPT to formulate a defense strategy for a court case they had inherited after a colleague’s accident, only to realize mid-conversation that they were supposed to represent the opposing side.
These conversations, beyond being ethically questionable, also revealed personally identifiable information. From names to financial details, users offered data that could put them or others in compromising positions.
When AI Becomes a Silent Witness to Human Suffering
While some chats showcased inappropriate or careless use of AI, others highlighted its role as a silent witness to genuine human suffering. According to Digital Digging, some conversations came from victims of domestic abuse seeking advice on how to escape dangerous situations. These chats weren’t jokes or hypotheticals—they were real, emotional, and desperate.
In one particularly sensitive case, an Arabic-speaking user requested help drafting a critique of the Egyptian government. Given the Egyptian regime’s history of persecuting dissidents, including imprisonment and execution, this kind of request, if traced back to the individual, could have deadly consequences.
These users were likely unaware that their conversations could be accessed publicly. The chats were typed under the assumption of privacy, and the language used reflected that vulnerability.
A Modern Echo of Past Privacy Scandals
The situation is reminiscent of past controversies involving virtual assistants like Siri, Alexa, and Google Assistant. When those platforms were first introduced, it was later revealed that voice recordings were sometimes reviewed by human contractors to improve voice recognition algorithms. The backlash was swift, and companies were forced to implement stronger privacy protections.
But text-based chatbots present a different kind of intimacy. Conversations with AI models like ChatGPT can be long-form, free-flowing, and emotionally revealing. They lack the brevity of voice commands and often read more like journal entries than tech interactions. When users believe they’re alone with the machine, they speak more freely, sometimes too freely.
The Risks of Unchecked Trust in AI
The broader issue isn’t just a technical glitch; it’s the tendency of users to anthropomorphize AI. Despite disclaimers and warnings, many people treat ChatGPT as a trusted advisor, confessor, or even therapist. That illusion of safety encourages over-disclosure and risky behavior.
There’s also a pressing concern around the types of tasks people are attempting to offload to AI. From crafting legal arguments to developing ethically dubious business strategies, users are increasingly looking to machines for guidance on complex and often morally fraught issues.
While AI tools like ChatGPT can be powerful aids, they are not moral agents. They lack ethical judgment and have no concept of right or wrong. They are trained on human data, and that includes the worst of humanity’s impulses. Without context, oversight, or accountability, AI will respond to unethical prompts just as readily as benign ones.
What Comes Next?
OpenAI has taken steps to rectify the immediate issue, but the incident raises bigger questions about user behavior, design oversight, and the risks of trusting machines with sensitive information. AI platforms must strike a balance between accessibility and safety, and users must be reminded repeatedly that these tools are not private, foolproof, or confidential.
More importantly, the public and regulators alike need to start grappling with the ethical implications of widespread AI adoption. If a chatbot can be used to strategize unethical negotiations or plan for civil collapse, what does that mean for industries, governments, and societies?
These leaked chats serve as a wake-up call—not just for OpenAI, but for anyone using artificial intelligence as a shortcut to difficult decisions. The technology may be cutting-edge, but human responsibility remains timeless.
Frequently Asked Questions (FAQs)
What happened with the ChatGPT “Share” feature?
The “Share” feature in ChatGPT allowed users to generate public links to share parts of conversations. However, those links created publicly accessible pages that were indexed by search engines, unintentionally exposing private chats to the public.
Who discovered the issue?
The issue was highlighted by Henk van Ess, an investigative journalist behind the Substack Digital Digging. He found that several ChatGPT conversations were accessible via simple search queries and had been archived online.
What types of conversations were leaked?
The leaked chats included a wide range of content—from personal confessions and legal queries to ethically questionable prompts involving corporate exploitation, government critique, and domestic abuse situations.
Has OpenAI fixed the issue?
Yes. OpenAI has since disabled the ability to make conversations publicly accessible and is working to remove indexed chats from search engines. The company has described the feature as a short-lived experiment.
Were any users personally identified?
In some conversations, users voluntarily shared names, locations, and sensitive data, which could potentially identify them. However, OpenAI has not confirmed any known breaches of personal identity resulting from this incident.
Is ChatGPT safe to use now?
Yes, but users should exercise caution. Conversations are not encrypted end-to-end, and sensitive data should never be shared with AI systems. Treat chats as semi-public and avoid inputting personally identifiable or confidential information.
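For readers who paste or pipe text into chatbots programmatically, one rough precaution is to strip obvious identifiers before anything leaves your machine. The Python sketch below is purely illustrative: simple patterns like these catch email addresses, phone numbers, and similar details, but they are a reminder to be careful, not a guarantee of privacy.

```python
# Illustrative sketch: scrub obvious personal identifiers from text before
# sending it to any chatbot or API. Regexes like these catch only simple,
# common patterns; they reduce accidental exposure but guarantee nothing.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with placeholders such as [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    sample = "Reach me at jane.doe@example.com or +1 (555) 123-4567."
    print(redact(sample))  # -> "Reach me at [EMAIL] or [PHONE]."
```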
Conclusion
The unintended exposure of private ChatGPT conversations serves as a stark reminder of the ethical, technical, and human challenges that come with widespread AI adoption. While OpenAI has taken steps to mitigate the damage, the incident has revealed just how easily users can place undue trust in a system that is not designed to protect their privacy or offer moral judgment. From corporate exploitation to cries for help, the leaked chats reflect both the power and vulnerability of AI-human interaction.