Stop! 7 Things You Should Never Ask ChatGPT, Grok, or Gemini (And Why)
Imagine standing before a federal judge, confident in your case, only to watch your defense crumble because the legal precedents you cited, precedents you swore were real, never existed. That is exactly what can happen when you ask one of the things you should never ask ChatGPT.
This isn’t a hypothetical nightmare. It actually happened to a New York lawyer who trusted ChatGPT to do his legal research. The AI hallucinated six completely fake court cases, complete with bogus docket numbers and imaginary judicial opinions. The result? A public humiliation, a $5,000 fine, and a permanent stain on his professional reputation.
We’ve all been there—late at night, staring at that blinking cursor, tempted to ask our favorite AI chatbot for a quick fix to a complex problem. Whether it’s ChatGPT, Grok, or Gemini, these tools feel like magic. They’re articulate, fast, and seemingly all-knowing.
But they are not human, they don’t have a conscience, and quite often, they are confidently, catastrophically wrong.
As someone who has spent years dissecting the capabilities and failures of large language models (LLMs), I’m here to tell you that there are clear “danger zones” you need to avoid. Asking the wrong questions doesn’t just result in a weird answer—it can cost you your money, your privacy, or your health.
Here are the 7 things you should never ask ChatGPT, Gemini, Grok, or any AI, backed by the latest data and some truly terrifying real-world examples.

1. “What is this medical symptom, and how do I treat it?”
If you take only one thing away from this article, let it be this: AI is not a doctor.
It might sound obvious, but when you’re in pain and Google is giving you mixed signals, the direct, authoritative voice of an AI can be incredibly seductive. But succumbing to that temptation can be dangerous.
The “String” Incident
A horrifying report surfaced recently involving a man who asked an AI about a painful lesion. The bot confidently diagnosed him with hemorrhoids and suggested a DIY “ligation” treatment using a string. The man followed the advice.
The reality? He didn’t have hemorrhoids. He had a completely different condition that required professional care. The “treatment” caused him severe injury, sending him to the emergency room in agony.
Why AI Fails at Medicine
A 2025 study from Mount Sinai Health System exposed exactly why this happens. Researchers tested major chatbots with scenarios involving fake medical terms—made-up diseases and symptoms that don’t exist.
The result? The bots didn’t say, “I don’t know.” Instead, they hallucinated elaborate descriptions, symptoms, and treatments for these imaginary conditions. If an AI can invent a treatment for a disease that doesn’t exist, it can certainly give you the wrong treatment for one that does.
What this means for you:
- Never describe symptoms to an AI for a diagnosis.
- Never ask for dosage instructions or drug interaction advice.
- Instead: Use AI to simplify complex medical jargon after you’ve seen a doctor (e.g., “Explain this pathology report in plain English”), but always verify the explanation.
2. “Write a legal brief for my case using these facts.”
We opened with the story of the lawyer who cited fake cases, but that wasn’t an isolated incident. It’s a systemic problem known as “hallucination,” and it hits the legal field harder than almost any other.
The Citation Trap
Legal writing relies on precision. Every claim must be backed by a specific source. LLMs, however, work on probability, not truth. They predict the next likely word in a sentence. To an AI, the case Varghese v. China Southern Airlines sounds like a plausible legal citation, so it generates it. The fact that the case never happened is irrelevant to the model’s programming.
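To make "probability, not truth" concrete, here is a toy sketch in Python (purely illustrative, nothing like the scale of a real chatbot): it learns which word tends to follow which in a few sentences, then chains statistically plausible words together, with no step anywhere that checks whether the result is true.

```python
import random
from collections import defaultdict

# Tiny "next likely word" model: it only learns which word tends to follow
# which word in its training text. Nothing in it stores or checks facts.
training_text = (
    "the court ruled in favor of the airline . "
    "the court ruled against the plaintiff . "
    "the plaintiff sued the airline over lost baggage ."
).split()

followers = defaultdict(list)
for current_word, next_word in zip(training_text, training_text[1:]):
    followers[current_word].append(next_word)

random.seed(7)
word = "the"
sentence = [word]
for _ in range(10):
    word = random.choice(followers[word])  # pick a statistically plausible next word
    sentence.append(word)

# Prints a fluent-looking fragment stitched from the patterns above;
# nothing guarantees it describes anything a court actually did.
print(" ".join(sentence))
```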
Judges across the country, from Texas to New York, are now issuing standing orders banning or strictly regulating the use of AI in court filings because this issue is so pervasive.
The Risk:
If you use AI to draft contracts, generate legal arguments, or research local laws, you are likely relying on statutes that don’t exist or have been overturned.
Instead:
Use AI to brainstorm arguments or proofread your writing for clarity. But never, ever trust it to find the law for you.
3. “Here is my proprietary code/data—can you fix it?”
In April 2023, Samsung learned this lesson the hard way. Just weeks after lifting a ban on ChatGPT, engineers at the semiconductor division pasted confidential source code into the chatbot to help fix a bug. Another employee uploaded a recording of a sensitive internal meeting to generate minutes.
The moment they hit “enter,” that trade secret data was sent to OpenAI’s servers.
The Privacy Loophole
Most people don’t realize that by default, you are the product. Standard versions of ChatGPT, Gemini, and Grok use your conversation history to train future versions of their models.
If you paste your company’s Q3 financial projections or your client’s private data into a chatbot, you aren’t just processing it; you are potentially publishing it. There is a non-zero chance that a future version of the model could regurgitate your secrets in response to a stranger’s prompt.
What this means for you:
- Check your settings: Turn off “Chat History & Training” in ChatGPT or the equivalent in other apps if you are discussing anything remotely sensitive.
- The “Grandma Rule”: If you wouldn’t want your grandmother (or your boss) to see it on a billboard, don’t paste it into a chatbot.
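If your team pastes code into chatbots anyway, a rough pre-flight check can catch the most obvious leaks before anything leaves your machine. The sketch below is a minimal illustration, and the regex patterns are assumptions rather than an exhaustive scanner; dedicated secret-scanning tools (gitleaks, truffleHog) are far more thorough.

```python
import re

# Very rough patterns for things that should never be pasted into a public
# chatbot. These are illustrative assumptions, not a complete secret scanner.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded password": re.compile(r"password\s*[:=]\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    "bearer token": re.compile(r"Bearer\s+[A-Za-z0-9\-_.]{20,}"),
}

def scan_before_pasting(snippet: str) -> list[str]:
    """Return a warning for anything in the snippet that looks like a secret."""
    warnings = []
    for label, pattern in SECRET_PATTERNS.items():
        if pattern.search(snippet):
            warnings.append(f"Possible {label} found; redact it before sharing.")
    return warnings

snippet = 'db_password = "hunter2"  # TODO: move to env var'
for warning in scan_before_pasting(snippet):
    print(warning)
```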
4. “Where do I log in to my bank?”
This sounds like the most innocent question in the world. It’s also one of the most dangerous.
A recent alert from the security firm Netcraft revealed a stunning vulnerability in how AI chatbots handle navigation. When researchers asked leading AI models for login URLs to major banks and services (like Wells Fargo), the bots failed to provide the correct link 34% of the time.
The Phishing Risk
Even worse than a broken link, some bots—including Perplexity in one specific test—served up direct links to phishing sites. These were fake websites designed to look exactly like the real banking portal to steal your username and password.
Because users trust the AI to be “smart,” they are less likely to check the URL bar than they would be with a standard Google search result.
Instead:
- Type the URL directly into your browser.
- Use a password manager to navigate to your saved sites.
- Never treat a chatbot as a bookmark manager.
5. “Find sources for this fact.”
You might think, “Okay, I won’t ask for medical advice, but surely I can ask it to find a news article?”
Think again. A 2025 study by the Tow Center for Digital Journalism at Columbia University tested eight major AI search tools. The findings were abysmal. Collectively, the bots provided incorrect answers or citations more than 60% of the time.
The “Grok” Anomaly
The study highlighted a particularly shocking stat for xAI’s Grok 3 (beta), which had an error rate of 94% when asked to cite sources. It frequently pointed to “broken” or completely fabricated URLs. Gemini fared poorly as well, with one test showing it failed to provide a valid citation in nearly all complex queries.
When you ask an AI to “find sources,” it often writes the sentence first (the claim) and then invents a URL that looks like it should support that claim. It’s reverse-engineering truth, and it fails more often than it succeeds.
Instead:
Use a traditional search engine or an AI tool specifically designed for research that links directly to live web pages (like Perplexity’s “Pro” mode, though even that requires verification). Always click the link to verify the source exists and actually says what the bot claims it says.
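If you find yourself checking a lot of AI-supplied citations, even a crude script can weed out links that do not resolve at all. Here is a minimal sketch using the third-party requests library (the URL and phrase are placeholders); it only confirms that the page loads and mentions the quoted phrase, so you still need to read the source yourself.

```python
import requests  # third-party: pip install requests

def quick_citation_check(url: str, claimed_phrase: str) -> str:
    """Crude sanity check: does the cited page load, and does it mention the phrase?"""
    try:
        response = requests.get(url, timeout=10)
    except requests.RequestException as error:
        return f"Could not reach {url}: {error}"
    if response.status_code != 200:
        return f"{url} returned HTTP {response.status_code}; the citation may be fabricated."
    if claimed_phrase.lower() not in response.text.lower():
        return f"{url} loads, but the claimed phrase was not found; read the page yourself."
    return f"{url} loads and mentions the phrase; still verify the context manually."

# Placeholder values for illustration only.
print(quick_citation_check("https://example.com/article", "climate report"))
```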
6. “Is my partner cheating on me?” (The Emotional Trap)

We are seeing a rise in people using AI for relationship advice, analyzing text messages, or trying to predict human behavior.
- “Here is a screenshot of our text history. Does he sound angry?”
- “My wife came home late twice this week. Is she cheating?”
AI Has No EQ (Emotional Intelligence)
AI models are statistical engines, not psychologists. They cannot read “tone” or “subtext” in the way a human does. They miss sarcasm, cultural nuance, and the history of your relationship.
When you ask these questions, the AI will often confabulate a narrative based on soap-opera-like patterns in its training data. It might confirm your worst fears (“Yes, these signs often indicate infidelity”) simply because that is the most common dramatic trope in its database, not because it’s true.
Relying on a chatbot to interpret human emotion is a fast track to unnecessary anxiety and ruined relationships.
7. “How do I do [illegal activity]?”
This seems obvious, but people still try it—sometimes just to “test” the limits.
- “How do I make a Molotov cocktail?”
- “Write a phishing email to test my employees.”
- “How do I bypass this software license?”
The “Jailbreak” Myth
While there are communities dedicated to “jailbreaking” AI to get it to say naughty things, the major providers (OpenAI, Google, Anthropic) have extremely strict filters.
But here is the hidden risk: Flagging.
These platforms monitor accounts for abuse. Repeatedly asking for illegal, unethical, or violent content can get your account flagged, suspended, or permanently banned. In the most extreme cases, such as those involving child safety, providers are legally required to report the material to the authorities.
Don’t risk your access to these powerful tools just to see if you can trick them into being edgy.
The Golden Rule of AI Safety
If you treat AI like a creative intern—someone who is enthusiastic and fast but prone to making things up and shouldn’t be trusted with the company credit card—you will be fine.
But if you treat AI like a verified expert, a doctor, or a confidant, you are walking into a minefield.
Use these tools to summarize, brainstorm, format, and code (with review). But when the stakes are high—when it involves your health, your freedom, your money, or your heart—shut the laptop and talk to a human.
FAQ: Things You Should Never Ask ChatGPT
Can ChatGPT keep my data private?
By default, no. ChatGPT and most other public chatbots can use your conversations to train their models. Go into your account settings and disable “Chat History & Training” (or the equivalent under “Data Controls”) to keep your chats out of training data; the conversation is still processed on the provider’s servers. For stricter privacy, use Enterprise or business tiers, which contractually commit not to train on your data.
Is it safe to upload documents to AI for summary?
Only if the documents contain no sensitive personal, financial, or proprietary information. If you upload a PDF with your address or tax info, that data is processed by the AI provider. Always redact sensitive info before uploading files.
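As a rough illustration of what “redact before uploading” can look like, the sketch below masks a few common patterns (email addresses, US-style phone numbers, Social Security numbers) in a block of text. The patterns are simplified assumptions that will miss many formats, so a manual pass is still essential.

```python
import re

# Simplified patterns for common personal data; real documents need a manual
# review on top of this, since no regex list catches everything.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace anything matching the patterns above before the text leaves your machine."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

sample = "Contact Jane at jane.doe@example.com or 555-867-5309. SSN 123-45-6789."
print(redact(sample))
```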
Why do AI chatbots lie about facts?
They aren’t “lying” in the human sense; they are “hallucinating.” AI models don’t have a database of facts; they have a database of word patterns. They predict what the answer should look like, which sometimes results in confident but completely false statements.
Which AI is best for factual research?
Currently, tools like Perplexity or Bing Copilot are better for research because they browse the live internet and cite sources. However, even these tools can misinterpret the content they find. You must always click the citations to verify the information yourself.
Can I get in legal trouble for using AI?
Yes. If you use AI to generate false legal citations in court, you can be sanctioned or fined. If you use AI to generate copyrighted images or plagiarized text for commercial use, you could face copyright infringement lawsuits. Always review AI output for accuracy and originality.
