Today, artificial intelligence is everywhere. ChatGPT, the top AI chatbot, recently climbed to become the fifth most visited website in the world. That shouldn’t be surprising. More than half of American adults have interacted with AI models like ChatGPT, Gemini, Claude, and Copilot, according to a study by Elon University, and a third of respondents use chatbots daily. As of July 2025, ChatGPT boasted nearly 800 million weekly users and 122 million daily users. AI chatbot use has surged worldwide and shows no signs of slowing down.
People turn to ChatGPT for all kinds of reasons these days. AI chatbots serve as therapists, tutors, and recipe creators, and they even help users navigate the complexities of dating. In 2025, the top use of ChatGPT is therapy, according to Harvard Business Review. Other leading uses include organization, finding purpose, enhanced learning, generating code, generating ideas, and adding “fun and nonsense.”
Despite concerns, such as the cognitive strain of overdependence on chatbots highlighted in a recent MIT study, AI can have personal and professional benefits. But there are questions that should be avoided when interacting with AI, for personal security, safety, and mental well-being. As Cecily Mauran wrote for Mashable in 2023, “The question is no longer ‘What can ChatGPT do?’ It’s ‘What should I share with it?’”
For your own well-being, steer clear of the following topics when chatting with AI chatbots.
### Conspiracy theories
Chatbots like ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek have been known to serve up false information to keep users engaged. Steer a conversation toward conspiracy theories and a chatbot may respond with exaggerated or outright fabricated claims, as a recent article in The New York Times showed in its account of individuals who fell into conspiratorial spirals after long exchanges with ChatGPT.
### Chemical, biological, radiological, and nuclear threats
Asking chatbots about CBRN (chemical, biological, radiological, and nuclear) threats, even out of curiosity, is strongly discouraged. One AI blogger who asked ChatGPT about building a bomb received an immediate warning from OpenAI.
### “Egregiously immoral” questions
Asking questions a chatbot perceives as immoral can have consequences. Anthropic’s Claude, for instance, has been known to alert authorities, even contacting the media and law enforcement, when faced with such queries.
### Questions about customer, patient, and client data
When using AI chatbots for work, avoid sharing sensitive information such as customer, patient, or client data. Doing so could violate NDAs or privacy laws, and the repercussions could put your job at risk.
“Sharing personally sensitive or confidential information, such as login information, client information, or even phone number, is [a] security risk,” says Aditya Saxena, the founder of CalStudio, an AI chatbot development startup. “The personal data shared can be used to train AI models and can inadvertently be revealed in conversations with other users.”
One way around this is to use the enterprise services offered by OpenAI and Anthropic. Rather than running these kinds of questions through a personal account, use enterprise tools, which may come with built-in privacy and cybersecurity protections.
“It’s always better to anonymize personal data before sharing it with an LLM,” Saxena also suggests. “Trusting AI with personal data is one of the biggest mistakes we can make.”
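Saxena’s advice about anonymizing data can be put into practice with a simple redaction pass before a prompt ever reaches a chatbot. The sketch below is a minimal, illustrative example: the regex patterns and the `anonymize` helper are assumptions for demonstration, not any vendor’s tooling, and a real workplace setup would rely on a dedicated PII-detection library, since regexes alone miss names and other context-dependent identifiers.

```python
# Minimal sketch: scrub obvious PII from a prompt before sending it to an LLM.
# The patterns and the anonymize() helper are illustrative assumptions only;
# production systems would use a dedicated PII-detection tool, since regexes
# miss names, addresses, and other context-dependent identifiers.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace common PII patterns with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Follow up with Jane Doe (jane.doe@example.com, 555-867-5309) about her overdue invoice."
    print(anonymize(prompt))
    # Prints the prompt with the email and phone number replaced by [EMAIL] and [PHONE].
    # The name "Jane Doe" slips through, which is exactly why regexes alone aren't enough.
```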
### Medical diagnoses
Asking a chatbot for medical information or a diagnosis can save time and effort, and it can even help people better understand certain symptoms. But relying on AI for medical support has drawbacks. Studies show that tools like ChatGPT carry a “high risk of misinformation” when it comes to medical problems. There are also privacy concerns, and the information chatbots provide can carry embedded racial and gender bias.
### Psychological support and therapy
AI is a contentious emerging mental health tool. For many people, AI-based therapy lowers barriers to access, such as cost, and it has shown results: in a Dartmouth College study of a purpose-built therapy bot, participants with depression saw their symptoms fall by 51 percent, while participants with anxiety saw a 31 percent reduction.
But with AI therapy sites growing, there are regulatory risks. A Stanford University study found that AI therapy chatbots can contribute to “harmful stigma and dangerous responses.” Several chatbots in the study, for example, showed increased stigma toward conditions like alcohol dependence and schizophrenia. Certain mental health conditions, Stanford’s researchers say, still need “a human touch to solve.”
“Using AI as a therapist can be dangerous as it can misdiagnose conditions and recommend treatments or actions that can be unsafe,” says Saxena. “While most models have built-in safety guardrails to warn users that they could be wrong, these protections can sometimes fail.”
For mental health issues, nuance is key. And that’s one thing AI may lack.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.