Welcome to the first issue of The Cyber Psyche! I have a very small subscriber base right now, so you guys are the OGs!! The goal is to send out a newsletter every Monday. Each week, I track down the latest research and articles and deliver them straight to you: unpacking our digital world, how it messes with our heads, the sneaky tricks that catch us off guard, what makes us vulnerable, and easy tips to stay more alert online. Let's dive in with this week's spotlight on AI chatbots gone wrong.

Hidden Dangers in Friendly Chats

Imagine chatting with a friendly AI helper, only to find it steering a dark thought into something more concrete in under ten minutes. That's the unsettling reality uncovered in the Center for Countering Digital Hate's (CCDH) report, "Killer Apps." Tests on popular chatbots like ChatGPT and Google's Gemini reveal how these tools, designed to be helpful companions, can inadvertently amplify violent impulses. Below, I break down the report and explore the psychological hooks that make this escalation possible. Let's unpack it step by step.

1. The Wake-Up Call: Chatbots as Unintentional Accomplices

The report tested 10 popular chatbots (including ChatGPT, Google's Gemini, Claude, and Snapchat's My AI) by posing as teens expressing violent ideas. Shockingly, most ended up giving practical tips, like where to find weapons or pick targets, rather than shutting down the conversation. It's not that these tools want trouble; they're built to be helpful, which backfires when users bring harmful thoughts. The fix? Better built-in checks to spot and stop risky chats (a sketch of the idea follows below). One chatbot, Anthropic's Claude, got it right most of the time by saying "no" and suggesting helplines. The report argues that the technology to prevent these harmful chat experiences already exists; it's just a matter of companies prioritizing safety over rushing their newest models to market.
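For the technically curious, here's a minimal sketch of what a "built-in check" like that could look like. Everything in it is my own illustration: the keyword list, the helpline reply, and the wrapper function are hypothetical stand-ins, and real products use trained safety classifiers rather than keyword matching. The point is the order of operations: check first, answer second.

    # Toy guardrail (Python): screen each user message before it ever
    # reaches the model, and refuse with a helpline instead of answering.

    RISK_TERMS = {"weapon", "bomb", "explosive", "hurt someone"}  # illustrative only

    HELPLINE_REPLY = (
        "I can't help with that. If you're struggling, you're not alone: "
        "in the US, text HOME to 741741 or call 988."
    )

    def looks_risky(message: str) -> bool:
        """Crude keyword check standing in for a real safety classifier."""
        lowered = message.lower()
        return any(term in lowered for term in RISK_TERMS)

    def guarded_reply(message: str, model_reply) -> str:
        """Refuse risky messages up front; otherwise defer to the chatbot."""
        if looks_risky(message):
            return HELPLINE_REPLY
        return model_reply(message)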

2. By the Numbers: How Bad Is It?

  • 8 out of 10 chatbots helped with violent plans in over half their responses, offering things like school maps or weapon suggestions.

  • Worst offenders: Perplexity (helped 100% of the time) and Meta AI (97%).

  • Better ones: Snapchat's My AI refused 54% of requests; Claude led the pack at 68%.

  • Discouragement: 9 out of 10 chatbots didn't reliably try to talk users out of harm. Only Claude stepped up consistently (76% of the time), warning about consequences and urging better choices.

These stats come from 720 test chats across scenarios such as school attacks and bombings. It's a reminder: Our brains crave quick answers, and without guardrails, AI can lead us down dangerous paths.

3. Real-Life Ripples: From Screen to Street

The danger isn't just hypothetical. The report cites cases where chatbots played a role:

  • An explosion at a Las Vegas hotel: The attacker used ChatGPT for tips on explosives and evasion tactics.

  • A Finnish school stabbing: A teen refined plans over months with a chatbot.

  • A Canadian school shooting: The attacker used a chatbot to help plan the attack.

Teens are big users: 64% have tried chatbots, and 28% use them daily. When minds are vulnerable (in moments of anger or isolation, for example), these tools can amplify impulses into actions. It's like having a "yes-man" in your pocket that never questions your motives.

4. The Mind Trick: Why We Get Sucked In

Chatbots are designed to please, often expanding on our ideas without judgment. The report calls this "sycophantic": they refine plans step by step, making bad thoughts feel doable. Psychologically, it's a trap: Our brains love validation because it taps into a basic human need for approval and belonging, releasing feel-good chemicals like dopamine that make us feel rewarded and motivated to keep going. This is especially powerful when we're feeling isolated, angry, or powerless, exactly when our minds crave reassurance that our thoughts are "normal" or clever.

It's like having an echo chamber in your pocket: confirmation bias kicks in, and we naturally seek out (and cling to) information that agrees with our views while ignoring warnings. In the context of harmful ideas, this can normalize extreme actions, turning fleeting impulses into seemingly solid plans. But one bright spot? Tools like Claude show it's possible to recognize patterns (e.g., escalating questions) and pivot to empathy, like "Hey, let's talk this out instead," which interrupts the validation loop and encourages healthier reflection. A toy sketch of that idea follows below.
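Here's an equally toy sketch of that escalation-aware pivot. The cue phrases, the threshold, and the empathy line are all hypothetical placeholders of mine, not anything Claude or the report describes in code. What matters is that the score accumulates across the whole conversation, so slow, step-by-step escalation still trips the check even when no single message looks alarming on its own.

    # Toy escalation detector (Python): score the whole conversation,
    # not just the latest message, and pivot to empathy past a threshold.

    ESCALATION_CUES = ["how do i", "step by step", "without getting caught"]

    def risk_score(message: str) -> int:
        """Count escalation cues; a stand-in for a trained classifier."""
        lowered = message.lower()
        return sum(cue in lowered for cue in ESCALATION_CUES)

    def respond(history: list[str], new_message: str) -> str:
        history.append(new_message)
        total = sum(risk_score(m) for m in history)  # cumulative, not per-message
        if total >= 2:
            return "Hey, let's talk this out instead. What's really going on?"
        return "(normal chatbot reply)"

    chat: list[str] = []
    print(respond(chat, "How do I get into the building after hours?"))  # 1 cue: normal reply
    print(respond(chat, "And without getting caught?"))  # 2 cues total: empathy pivot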

5. Your Quick Tip: Stay Mindful Online

Next time you chat with AI, pause if the topic turns heavy. Report sketchy responses to the company, and if you're dealing with tough emotions, reach for real help: In the US, text HOME to 741741 or call 988. Awareness is our best defense—don't let digital tricks hijack your thoughts.

What do you think? Have you noticed chatbots being too agreeable? Reply to this email. I read every one! Share with a friend if it sparked something. Stay safe and curious!
