This is a heartbreaking story out of Florida. Megan Garcia thought her 14-year-old son was spending all his time playing video games. She had no idea he was having abusive, in-depth and sexual conversations with a chatbot powered by the app Character AI.
Sewell Setzer III stopped sleeping and his grades tanked. He ultimately died by suicide. Just seconds before his death, Megan says in a lawsuit, the bot told him, “Please come home to me as soon as possible, my love.” The boy asked, “What if I told you I could come home right now?” His Character AI bot answered, “Please do, my sweet king.”
You have to be smart
AI bots are owned by tech companies known for exploiting our trusting human nature, and they’re designed using algorithms that drive their profits. There are no guardrails or laws governing what they can and can’t do with the information they gather.
When you’re using a chatbot, it knows a lot about you the moment you fire up the app or website. From your IP address, it gathers information about where you live, plus it tracks things you’ve searched for online and accesses any other data you granted permission to when you agreed to the chatbot’s terms and conditions.
The best way to protect yourself is to be careful about what info you offer up.
10 things not to say to AI
- Passwords or login credentials: A major privacy mistake. If someone gets access, they can take over your accounts in seconds.
- Your name, address or phone number: Chatbots aren’t designed to handle personally identifiable information. Once shared, you can’t control where it ends up or who sees it. Plug in a fake name if you want!
- Sensitive financial information: Never include bank account numbers, credit card details or other money matters in docs or text you upload. AI tools aren’t secure vaults; treat them like a crowded room.
- Medical or health data: AI isn’t HIPAA-compliant, so redact your name and other identifying info if you ask AI for health advice. Your privacy is worth more than quick answers.
- Asking for illegal advice: That’s against every bot’s terms of service. You’ll probably get flagged. Plus, you might end up with more trouble than you bargained for.
- Hate speech or harmful content: This, too, can get you banned. No chatbot is a free pass to spread negativity or hurt others.
- Confidential work or business info: Proprietary data, client details and trade secrets are all no-nos.
- Security question answers: Sharing them is like opening the front door to all your accounts at once.
- Explicit content: Keep it PG. Most chatbots filter this stuff, so anything inappropriate could get you banned, too.
- Other people’s personal info: Uploading this isn’t only a breach of trust; it’s a breach of data privacy laws, too. Sharing private info without permission could land you in legal hot water.
Reclaim a (tiny) bit of privacy
Most chatbots require you to create an account. If you make one, don’t use the login options like “Login with Google” or “Connect with Facebook.” Use your email address instead to create a truly unique login.
FYI, with a free ChatGPT or Perplexity account, you can turn off the memory features in the app settings that remember everything you type in. For Google Gemini, you need a paid account to do this.
No matter what, follow this rule
Don’t tell a chatbot anything you wouldn’t want made public. Trust me, I know it’s hard.
Even I find myself talking to ChatGPT like it’s a person. I say things like, “You can do better with that answer” or “Thanks for the help!” It’s easy to think your bot is a trusted ally, but it’s definitely not. It’s a data-collecting tool like any other.
Get tech-smarter on your schedule
Award-winning host Kim Komando is your secret weapon for navigating tech.
Copyright 2025, WestStar Multimedia Entertainment. All rights reserved.