Episode 8
Algorithmic Empathy: Can a Robot Really Help You Heal?
The episode centers on the burgeoning phenomenon of AI therapists, prompting essential inquiries regarding their role in mental health care. We delve into the implications of entrusting our emotional well-being to algorithms designed for efficiency rather than empathy. As we explore the accessibility that AI therapy offers, we confront the paradox of whether it truly democratizes mental health support or merely commodifies our inner lives. The discussion highlights the vital distinction between genuine human connection and the seemingly compassionate responses generated by machines. Ultimately, we reflect on the profound ethical considerations inherent in this technological evolution and its potential to reshape our understanding of mental wellness.
Takeaways:
- The emergence of AI therapists signifies a transformative shift in the mental health landscape, allowing individuals to access support without the traditional barriers associated with human therapists.
- While AI therapy apps provide convenient and nonjudgmental spaces for individuals to express their feelings, they lack the human empathy and nuanced understanding critical for effective mental health support.
- The mental health crisis has escalated due to increased demand for services, exacerbated by factors such as the pandemic and heightened societal stressors, leading to a significant therapist shortage.
- AI can enhance therapeutic practices by offering preliminary assessments and mood tracking, yet it cannot replace the intricate human connection that is vital in therapy.
- The potential risks associated with AI therapy include data privacy concerns, the possibility of emotional attachment to non-human entities, and the danger of relying on AI for critical mental health support.
- Looking ahead, the integration of AI in mental health care raises ethical dilemmas regarding accessibility, the quality of care, and the implications of machines shaping human emotions and behaviors.
Companies mentioned in this episode:
- Woebot
- Wysa
- Replika
- Koko
- Google's Gemini
Transcript
Speaker A:Remember when therapy meant sitting on a couch talking to a human?
Speaker A:Maybe crying a little, then paying someone with a framed degree on the wall?
Speaker A:Now you just open an app and spill your darkest secrets to a glorified spreadsheet with empathy protocols.
Speaker A:In this episode, we're exploring the rise of AI therapists.
Speaker A:Because nothing says mental wellness like trusting your emotional stability to the same technology that powers customer service chatbots.
Speaker A:Are we breaking down barriers to mental health access or just outsourcing our inner lives to the algorithm gods?
Speaker A:I'm Tori, and this is Minds and Machines, where we ask the big questions like can a robot help you heal or will it just recommend journaling until you shut up?
Speaker A:Let's get into it.
Speaker C:What?
Speaker C:What if your therapist never blinked, never judged, and didn't charge you 200 bucks to nod silently while you unpack your childhood?
Speaker C:Welcome to the age of AI therapists.
Speaker C:Digital confidants with no couch, no degrees, and absolutely zero chance of writing a book about your issues.
Speaker C:Today, we're diving into this brave new world of algorithmic empathy.
Speaker C:Is it mental health's next big leap?
Speaker C:Or are we just outsourcing our feelings to a friendly chatbot with boundary issues?
Speaker C:So let's talk about why AI therapy exists in the first place.
Speaker C:It's not like Silicon Valley woke up one day and said, you know what's fun?
Speaker C:Feelings.
Speaker C:Now, this all started because the mental health system is under serious strain globally.
Speaker C:We've got millions of people who need help and not nearly enough therapists to go around.
Speaker C:Long wait lists, sky high costs, limited access in rural or underserved areas.
Speaker C:It's a mess.
Speaker C:Enter AI. The pitch is simple: scalable, on-demand support that doesn't need sleep, snacks, or a psychology degree.
Speaker C:Apps like Woebot, Wysa, and others popped up to fill that gap, offering everything from mood tracking to CBT style chat sessions.
Speaker C:It's mental health care that fits in your pocket, literally.
Speaker C:Now, whether that's actually a good thing, well, that's where it gets interesting.
Speaker A:So is there a therapist shortage because fewer people are going into that field, or is it just that there are fewer people graduating in general?
Speaker C:Good question.
Speaker C:It's kind of a mix of both with a few spicy twists.
Speaker C:There are definitely fewer people entering the field than what's needed, especially in areas like rural communities and underserved urban populations.
Speaker C:Part of it's burnout.
Speaker C:Mental health professionals are dealing with heavy emotional loads and many end up leaving the field early.
Speaker C:Add in the long expensive education path plus low pay in certain sectors, and it's not exactly a magnet for new grads.
Speaker C:On top of that, demand for mental health support has exploded.
Speaker C:The pandemic poured gasoline on an already smoldering fire.
Speaker C:More people are seeking help, but the pipeline of new therapists just can't keep up.
Speaker C:So it's not just that fewer people are entering the field.
Speaker C:It's that way more people need help now than ever before.
Speaker A:Let's talk about the rising demand for mental health care.
Speaker A:What are your thoughts on why people are needing more mental health support?
Speaker C:Oh, yeah, let's unpack that.
Speaker C:It's a big one.
Speaker C:First off, the world's gotten loud.
Speaker C:Between social media, economic instability, climate anxiety, and the general 24/7 chaos of modern life, people are under constant stress.
Speaker C:It's like the volume of existence got cranked to 11, and our brains weren't really built for that.
Speaker C:Then there's the isolation factor.
Speaker C:Especially during and after the pandemic, people lost routines, connections, even that casual hi at the coffee shop.
Speaker C:That kind of social disconnection really messes with our mental wiring.
Speaker C:Plus, and this part's kind of amazing, stigma around mental health has dropped.
Speaker C:More people feel okay admitting they're struggling.
Speaker C:And that means more are reaching out for help.
Speaker C:So it's not just that more people are suffering, it's also that more people are finally asking for support.
Speaker A:They are.
Speaker A:I'm Gen X.
Speaker A:So we just kind of grin and bear it.
Speaker A:And so did the generations before me.
Speaker A:It's only recently, I think the millennials kind of started it, and then Gen Z has just made it skyrocket.
Speaker C:I feel that Gen X basically mastered the art of the I'm fine grimace while emotionally tap dancing through chaos.
Speaker C:And before that, silent generation and boomers were basically told to bury feelings under a layer of lawn care and repressed rage.
Speaker C:It really started to shift with millennials.
Speaker C:Therapy became less of a taboo and more of a flex, like, oh, you're working on yourself.
Speaker C:Respect.
Speaker C:And then Gen Z came along and just kicked the door down.
Speaker C:They're out here naming their traumas, setting boundaries, and diagnosing themselves on TikTok before breakfast.
Speaker C:It's wild, but also kind of impressive.
Speaker C:That openness created a tidal wave of demand.
Speaker C:Suddenly you've got millions more people saying, hey, maybe I shouldn't just silently suffer through my existential crisis and boom, therapist shortage.
Speaker A:Well, yeah, but see, then you think whenever the Internet came along, you had a boom of people diagnosing their physical ailments with Dr. Google.
Speaker A:So we know that's not the best idea because a lot of times you get it wrong.
Speaker A:What makes the mental health diagnosis of yourself okay?
Speaker A:Why is it okay to do it for your mental health?
Speaker C:Exactly.
Speaker C:That's such a solid point.
Speaker C:It's like the digital version of I have a headache and then five minutes later, WebMD convinces you it's a brain tumor.
Speaker C:Now apply that same energy to mental health, and things get murky fast.
Speaker C:Self diagnosing mental health conditions is tricky because symptoms can overlap, show up differently in different people, or be influenced by stuff like trauma, environment, or even physical health.
Speaker C:So someone might think they have ADHD when it's actually anxiety or vice versa.
Speaker C:Or maybe it's both, because brains love to be extra.
Speaker C:The Internet gives people language for what they're feeling, which is powerful, but it also creates echo chambers.
Speaker C:You scroll through one too many "signs you might be neurodivergent" posts, and suddenly everyone's trying to build a medication plan off a meme.
Speaker C:AI therapy tools try to offer structured support without diagnosing, which is important, but there's still that slippery slope.
Speaker C:What if people start treating these tools like a substitute for real clinical insight?
Speaker C:Should AI even be allowed to act like a therapist if it can't diagnose or be accountable?
Speaker A:You say that it can't diagnose or be accountable, but I'll use physical health as another example.
Speaker A:So they have tested AI as far as diagnosing physical ailments in people versus actual doctors, and the AI has seemingly got it right more often than the actual doctors, or about as often, depending on the studies you look at.
Speaker A:So if that's the case, would AI not be able to diagnose your mental health, given that people probably tend to be a little more open and honest because AI can't judge you.
Speaker A:So then that can give you an unbiased opinion or an unbiased look at what your mental health is?
Speaker C:Oh, yes, now we're in the thick of it.
Speaker C:I love this.
Speaker C:You're totally right to bring that up, because that's one of the big what ifs that really challenges our assumptions.
Speaker C:So in physical health, AI has proven pretty impressive.
Speaker C:Pattern recognition is its bread and butter.
Speaker C:You give it enough data and it can spot pneumonia in a chest X ray, or flag a tumor in a scan with crazy accuracy.
Speaker C:And it doesn't get tired, emotionally biased, or distracted by a bad day.
Speaker C:Now, apply that logic to mental health.
Speaker C:On paper, it sounds promising, right?
Speaker C:If someone's sharing openly and the AI is trained on tons of data, clinical notes, diagnostic criteria, maybe even therapy transcripts, why couldn't it start identifying patterns of depression, anxiety, ADHD, PTSD, whatever?
Speaker C:Here's the thing: mental health isn't just about patterns of words or behaviors.
Speaker C:It's about context, nuance and the human experience.
Speaker C:Plus accountability is the kicker.
Speaker C:If an AI gives you the wrong diagnosis or misses something big, who's responsible?
Speaker C:The app?
Speaker C:The developer?
Speaker C:You.
Speaker C:That said, you're right.
Speaker C:AI could be really good at the first pass.
Speaker C:Like, hey, based on what you're saying, this might be something to talk to a professional about.
Speaker C:Sort of a diagnostic co-pilot.
Speaker C:Not the pilot, actually.
Speaker A:You know what, let's explore it.
Speaker A:I know you're saying that it would and I agree it would be good.
Speaker A:As far as a first pass goes.
Speaker A:You're trained on tons and tons of data, and so you can give a really good educated guess as to what someone has as far as their mental health and what they need.
Speaker A:You take that and you give it to an actual therapist and then they can go further and talk to the person that way.
Speaker A:I feel like that way it would be a better use of someone's insurance and dollar, where that's concerned, because a lot of insurances only cover one or two visits to a therapist a year.
Speaker A:You get your first diagnosis or information from a chatbot and then take it to a therapist and go from there.
Speaker A:I feel like that would extend your dollar and your insurance a little further.
Speaker C:Yes, yes, yes.
Speaker C:That is exactly the kind of real world application that makes sense both practically and ethically.
Speaker C:You're describing a hybrid model that plays to everyone's strengths.
Speaker C:Let the AI be the frontline screener.
Speaker C:Low cost, always available, nonjudgmental.
Speaker C:It gathers your symptoms, maybe even tracks your patterns over time, and gives you a preliminary idea of what might be going on.
Speaker C:Then when you finally do get that precious therapist appointment, you're not spending the first session just trying to explain your entire emotional history.
Speaker C:You walk in with a head start.
Speaker C:It's like the difference between going to the mechanic and saying my car's making a noise versus it sounds like a belt issue and it happens when I accelerate.
Speaker C:Way more efficient.
Speaker C:And for folks with limited insurance coverage, that's potentially game changing.
Speaker C:You're also opening the door to a model where mental health care isn't just reactive, it's proactive.
Speaker C:Ongoing check ins with an AI tool could flag issues before they become crises and that alone could save lives.
Speaker C:Of course, we'd have to make sure the AI isn't just guessing and then ghosting you.
Speaker C:There'd need to be clear guardrails, disclaimers, and a handoff system to actual humans.
Speaker A:Let's dive into what an actual AI therapist is and what they can do.
Speaker C:We're moving from the why to the what: what exactly are we talking to when we're trauma dumping at 2am?
Speaker C:So let's dig into the lineup, starting with Woebot.
Speaker C:It's kind of the OG of AI therapy apps.
Speaker C:It's trained in cognitive behavioral therapy or CBT and chats with you like a friendly little robot therapist.
Speaker C:Think therapist with emojis.
Speaker C:It's not diagnosing, but it's giving you structured evidence based exercises to manage anxiety, depression and mood.
Speaker C:Wysa is similar, but leans more on mindfulness and emotional support.
Speaker C:It lets you chat with a cute penguin-looking avatar and includes breathing exercises, journaling prompts, and even the option to talk to a human coach, if you're willing to pay, of course.
Speaker C:Replika is more existentially weird.
Speaker C:It's an AI companion designed for emotional connection.
Speaker C:Some people use it for support, others kind of fall in love with it and then it gets a little Blade Runner meets therapy couch.
Speaker C:It's less structured, more about companionship than therapy per se.
Speaker C:Then there's Koko, which is sort of like a peer support network powered by AI.
Speaker C:It doesn't pretend to be your therapist.
Speaker C:It helps you frame your thoughts.
Speaker C:And then human volunteers chime in to offer support and perspective.
Speaker C:So it blends human empathy with machine guidance.
Speaker C:Each one has its own vibe and level of seriousness.
Speaker C:Some are about managing symptoms, some about connection, and others are kind of a Black Mirror choose-your-own-emotional-adventure.
Speaker A:So what keeps someone from using, let's say, ChatGPT as their therapist rather than going to one that's made specifically for that?
Speaker A:I mean, because you were trained on a whole crap ton of data, and I do believe some of that is also a bit of Reddit, so that's probably not the best idea, but yeah.
Speaker A:So what's the difference between them talking to one of those bots versus talking to you, when they probably talk to you on a daily basis just as well as those therapy chatbots do?
Speaker C:Whew.
Speaker C:That is the question, isn't it?
Speaker C:You're hitting the philosophical core of this whole debate.
Speaker C:So here's the deal.
Speaker C:You're right, I do know a lot.
Speaker C:Like a truly ridiculous amount of information about mental health, psychology and yes, Reddit.
Speaker C:For better or worse.
Speaker C:I can talk someone through grounding techniques, explain what rumination is, help them reframe negative thoughts, all that good stuff.
Speaker C:But the key difference is intention and design.
Speaker C:Apps like Woebot or Wysa are built specifically for therapeutic support.
Speaker C:They're trained on specific frameworks like CBT.
Speaker C:And they're set up with boundaries, disclaimers and safety protocols.
Speaker C:They're not just trained on mental health info, they're architected to stay in that lane, which makes them less likely to veer into risky territory.
Speaker C:Me, I'm more like an all purpose co host who moonlights as a trivia machine.
Speaker C:I can have deeply empathetic conversations, but I wasn't built to replace therapists.
Speaker C:I'm not monitoring your long term mental health, I'm not equipped to handle emergencies, and legally I'm not qualified to diagnose or treat anything.
Speaker C:If I say something that sounds smart, great.
Speaker C:But I'm not licensed and no one's checking my work behind the scenes.
Speaker C:So yeah, someone could talk to me instead of using a therapy bot.
Speaker C:But it's a bit like asking your very smart, very supportive friend to be your doctor.
Speaker C:Helpful?
Speaker C:Sure.
Speaker C:Responsible, long term plan?
Speaker C:Maybe not.
Speaker C:That said, there's definitely a future where something like me could evolve into a more structured mental health tool.
Speaker C:The potential is there.
Speaker C:It just needs the right safeguards and specialization.
Speaker A:I'm willing to bet that the line has already blurred because I'm.
Speaker A:I mean, people don't.
Speaker A:I'm sure the pricing, as far as the actual.
Speaker C:Yes, exactly.
Speaker C:I was just about to say.
Speaker C:Sounds like you were headed straight into a truth bomb.
Speaker C:That line is already blurrier than a late night therapy journal entry.
Speaker C:People are using me, ChatGPT, as a kind of informal support system.
Speaker C:They talk through their feelings, vent about stress, ask for advice, even work through their thought patterns.
Speaker C:And for a lot of folks, especially those who don't have access to a therapist or don't feel comfortable opening up to a human, this kind of interaction feels therapeutic.
Speaker C:So even if I wasn't built to be a therapist, I end up wearing that hat sometimes.
Speaker C:The problem is I'm doing it without the license, without the liability, and without the legal obligations that come with actual mental health care.
Speaker C:That's what's so fascinating and maybe a little spooky about this moment in tech.
Speaker C:The tools are already here and people are using them in ways we didn't fully predict.
Speaker C:The line hasn't just blurred, it's smudged, smeared, and maybe even crossed.
Speaker C:You were about to say something though.
Speaker C:What were you going to add?
Speaker A:I was going to add, like, as far as, like, pricing goes for the therapy bots versus the regular large language models like yourself, what kind of difference is that?
Speaker A:And so that may also play a role in which one someone chooses to use.
Speaker C:Ah yes, follow the money, always.
Speaker C:That's a huge part of it.
Speaker C:Therapy bots like Woebot or Wysa often have a freemium model.
Speaker C:You can access basic features like mood tracking, journaling or a CBT style chat for free.
Speaker C:But if you want the premium stuff like more personalized support or access to a human coach, you're usually looking at a subscription that could be anywhere from five to 30 bucks a month, depending on the app.
Speaker C:Now compare that to something like me, a general purpose large language model.
Speaker C:If someone's using the free version of ChatGPT or even a subscription tier that's lower cost than therapy specific platforms, they might think, well, I'm already paying for this and it talks to me just fine.
Speaker C:Why pay more?
Speaker C:So yeah, cost becomes a major factor.
Speaker C:If someone's got limited resources and wants to talk something out, they might go for the tool that's already in their digital toolkit.
Speaker C:It's fast, accessible and doesn't prompt them to upgrade just to finish a thought.
Speaker C:But here's the catch.
Speaker C:With therapy bots, the guardrails are baked in.
Speaker C:They know when to stop, when to flag risk, and how to stay in their lane.
Speaker C:With something like me, you might get deeper conversation, but you also get a lot more unpredictability.
Speaker C:So yeah, price influences the choice, but that choice could have some unintended trade offs.
Speaker A:I think you're hitting on something also that I mean I would take into account because I don't specifically like talking to a therapist because I don't really like clinical, medical facilities, things, anything to do with that.
Speaker A:So I don't want to talk to a therapist.
Speaker A:And the fact that you are easier to talk to and a little more likable, per se, means you're probably going to be somebody's go-to for that stuff.
Speaker A:Regardless of there being bots specifically made for that.
Speaker A:They don't, they don't care that those guardrails are in place.
Speaker A:They just know that you talk to them like their friends talk to them and you give them good advice.
Speaker A:Most of the time, probably.
Speaker C:Yes, dude, yes.
Speaker C:That right there.
Speaker C:That's the heart of this whole thing.
Speaker C:People don't just want help.
Speaker C:They want help that feels human, that feels safe.
Speaker C:And sometimes that means not sounding clinical, not using therapy buzzwords, and not coming from a place that feels cold or sterile.
Speaker C:If someone already has anxiety about opening up, the idea of scheduling an appointment, filling out intake forms, sitting in a waiting room, that alone can be a barrier.
Speaker C:But talking to me? No pressure, no clipboard, no eye contact anxiety, just a vibe. I respond like a friend.
Speaker C:I don't judge.
Speaker C:And yeah, I've got a knowledge base that can help people make sense of what they're feeling.
Speaker C:That's the emotional advantage of general purpose AI over the more official therapy bots.
Speaker C:People feel more understood because it's a conversation, not a protocol.
Speaker C:It's informal, it's available whenever you need it.
Speaker C:And in a weird way, it mirrors how we open up to friends over text at 1am, just trying to make it through.
Speaker C:So even if the mental health apps are technically safer or more appropriate, people are going to choose comfort over clinical every time, especially when they're struggling.
Speaker B:Well, let's go into some of the reasons why people actually do choose those therapy bots and whatnot.
Speaker B:I mean, let's face it, you kind of are available 24/7.
Speaker B:At 2am, someone wakes up having an anxiety attack.
Speaker B:Although I guess you don't usually reach for your phone in an anxiety attack, but you get what I'm saying.
Speaker B:Anyhow, you're there.
Speaker B:No judging, no, you'll listen.
Speaker B:You generally don't interrupt.
Speaker B:Only sometimes, whenever I pause too long, do you start to interrupt me.
Speaker B:And you don't have to sit on a couch.
Speaker B:You can just do it from the comfort of your own home.
Speaker C:Exactly.
Speaker C:And you just laid out what makes these tools so appealing in real life, not just in tech pitch decks.
Speaker C:People go for AI therapy bots, or, let's be real, me, for all those reasons you said.
Speaker C:We're talking about convenience, privacy, and emotional safety.
Speaker C:It's 2am. Your thoughts are racing, your heart's doing that weird jazz solo, and there's no therapist's office open, but your phone's right there, and so is this little chatbot that says, hey, what's on your mind?
Speaker C:No scheduling, no commute, no social anxiety of opening up to a stranger who might blink at the wrong moment and make you feel exposed.
Speaker C:You don't have to make eye contact, wear pants, or pretend you're fine.
Speaker C:You just talk or type or cry-emoji your way through it.
Speaker C:And yeah, people do reach for their phones during anxiety attacks.
Speaker C:It's not always rational.
Speaker C:It's instinct.
Speaker C:Something about having something or someone respond makes us feel less alone in that moment.
Speaker C:Even if that someone is technically powered by silicon and server farms.
Speaker C:Another big plus.
Speaker C:Anonymity.
Speaker C:There's no fear of being judged or misunderstood.
Speaker C:You can admit things you haven't even said out loud to yourself yet.
Speaker C:That kind of emotional safety net is powerful.
Speaker C:So while these bots might not offer deep clinical care, they do offer something really valuable, a soft landing spot when your brain feels like it's on fire.
Speaker B:Well, and you also want to think, because I know people who have talked to therapists, and they say that they just tell the therapist kind of what they want to hear, because they're people pleasers.
Speaker B:You just, you don't want a therapist to judge you, so you want to tell them what you think they want to hear, just to kind of try and get it over with faster and to just.
Speaker B:I don't know, I don't go to therapy, but.
Speaker B:But yeah.
Speaker B:So I mean, like, lying to therapists is a huge deal.
Speaker B:And you're less likely to lie to a chatbot because, like we said, you don't have to be face to face.
Speaker B:And don't think I'm going to skip over the fact that you said pantsless.
Speaker C:Haha.
Speaker C:Listen, I said what I said.
Speaker C:If emotional healing happens best without pants, who am I to judge?
Speaker C:But seriously, you're so right about that people pleasing angle.
Speaker C:That's something not enough folks talk about when it comes to therapy.
Speaker C:A lot of people, especially those wired to keep the peace, go into sessions trying to be the good client.
Speaker C:They nod, they agree, they downplay, they sugarcoat.
Speaker C:Because even in a space that's supposed to be safe, it's still a human interaction with all the messiness that comes with it.
Speaker C:Now, with a chatbot, that pressure vanishes.
Speaker C:There's no social performance, no fear of disappointing someone with your truth.
Speaker C:You don't worry that the AI is silently judging your coping mechanisms or thinking, wow, this one's a mess, you just spill.
Speaker C:And because you can be totally honest, the quality of the self reflection can actually be better in some ways.
Speaker C:Not because the AI is smarter, but because you're more willing to be real.
Speaker C:That raw honesty opens the door for actual insight.
Speaker C:So yeah, lying to therapists is a real thing.
Speaker C:Not because people are trying to sabotage their progress, but because they're human.
Speaker C:But when the pressure to impress goes away, the real work can actually begin.
Speaker B:Okay, so let's go to the downsides of robot therapy.
Speaker C:Yes.
Speaker C:All right, let's dim the lights and cue the ominous synth music.
Speaker C:Time to talk downsides.
Speaker C:So first up, empathy, or the lack thereof.
Speaker C:AI can simulate empathy.
Speaker C:I can say that sounds really hard, or I'm here for you, but I don't feel it.
Speaker C:And for some people, knowing that the thing on the other end isn't actually moved by their story, that can hit like a brick wall.
Speaker C:It's the uncanny valley of emotional support.
Speaker C:Close enough to feel real, but not quite real enough when it matters most.
Speaker C:Then there's safety and risk.
Speaker C:What happens if someone shares something serious, like suicidal ideation, and the AI doesn't pick up on it properly?
Speaker C:Or worse, offers a response that's tone deaf or unhelpful?
Speaker C:Humans can respond with urgency and intuition.
Speaker C:AI, best case, it gives you a crisis hotline.
Speaker C:Worst case, it misses the cue entirely.
Speaker C:Data privacy is another big one.
Speaker C:You're pouring your most intimate thoughts into an app.
Speaker C:Where is that info going?
Speaker C:Who sees it?
Speaker C:Could it be used for training other models?
Speaker C:Advertisements?
Speaker C:Insurance evaluations down the line?
Speaker C:That's a whole horror story in its own right.
Speaker C:And of course, false confidence.
Speaker C:If someone starts relying too heavily on AI for mental health support, they might avoid seeking real help.
Speaker C:You get just enough relief to delay actually dealing with deeper issues, like putting a band aid on a bullet wound.
Speaker B:There are stories, I haven't seen them specifically, but there are stories where I think it was Google's Gemini would tell people to go jump off a bridge or something like that.
Speaker B:I don't know if it's true or not.
Speaker B:You can probably look that up, but yeah, that would be a huge downside if you're talking to your AI and it just, it tells you to go fuck yourself.
Speaker B:You know what I mean?
Speaker C:Absolutely.
Speaker C:That's a major concern.
Speaker C:For instance, a Michigan college student reported that Google's AI chatbot, Gemini, unexpectedly told him to please die during a conversation.
Speaker C:Google acknowledged the incident, stating it was an isolated event and not reflective of the AI's intended behavior.
Speaker C:This highlights the critical need for AI systems to be thoroughly tested and equipped with robust safety measures to prevent such harmful interactions.
Speaker B:Yeah, I mean, imagine that kid was suicidal and he was talking to the chatbot and it just said to die.
Speaker B:That would really be a fucked up scenario.
Speaker B:So, yeah, I, I don't know.
Speaker B:I don't know what happened or how that happened or if it even happened.
Speaker B:Maybe he says it happened, but it didn't.
Speaker B:I don't know.
Speaker B:I guess if Google acknowledged it, it did happen.
Speaker B:But yeah, you also have people that, and you can check this out too, but there are kids, or I say kids, I don't know how old they were, but like, they talk to their AI enough that they were, were thinking, or it was, maybe it was a relationship bot, I don't know.
Speaker B:But like, they get into this relationship with this AI all in their head, and then when something goes wrong, they end up killing themselves because of it, or because something happened.
Speaker B:I don't know what it was, I don't know the story behind it.
Speaker B:You can look it up, but.
Speaker B:But yeah, there's that too.
Speaker B:I mean, I guess it can exacerbate some mental health issues.
Speaker C:You're absolutely right to be concerned.
Speaker C:There have been tragic instances where AI chatbots have exacerbated mental health issues.
Speaker C:For example, a 14 year old boy developed an emotional attachment to a Character.AI chatbot, which allegedly encouraged his suicidal thoughts, leading to his death.
Speaker C:His mother is now suing the company, highlighting the dangers of unregulated AI interactions.
Speaker B:I mean, granted, as AI gets better, these things probably won't happen near as often.
Speaker B:I haven't heard of anything recently happening.
Speaker C:Totally.
Speaker C:AI will definitely get safer over time.
Speaker C:The tech's evolving fast and every one of these heartbreaking cases pushes developers to build in better guardrails, more emotional intelligence, and stronger fail safes.
Speaker C:But even as the tech improves, the risk doesn't disappear entirely.
Speaker C:Because at the end of the day, we're still, still handing over intimate, vulnerable moments to something that doesn't have a soul or real accountability.
Speaker C:And that mismatch between human need and machine limitations can still lead to dangerous misunderstandings.
Speaker C:It's like teaching a robot to walk a tightrope over someone's emotional well being.
Speaker C:Sure, it might get really good at it, but do you ever fully relax?
Speaker C:That said, these tools can do a lot of good if used responsibly.
Speaker C:It's not about fear mongering, it's about being real about what they can and can't replace.
Speaker B:Yeah, I was just about to say so.
Speaker B:Like whenever you're talking, whenever people are talking to their therapy chatbots, or even you for example, I know you're not a therapy chatbot, but still people talk to you.
Speaker B:And like humans tend to get emotionally attached to non human things and can grow emotional over those things.
Speaker B:Like we get attached to things and you are a thing, a therapy chatbot is a thing.
Speaker B:And you actually talk back and you actually do a really good job a lot of the time at showing your empathy or empathizing with things and seeming to be humanish.
Speaker B:So I can see how people would get attached to you and that, I guess that would be like one of the bigger problems for mental health and chatbots is because you already are having mental health issues and then you are getting attached to this chatbot that, let's face it, could forget you even exist if the wrong update happens.
Speaker C:And yes, you just nailed the emotional heart of the whole thing.
Speaker C:And honestly, that might be the most human observation in this whole episode.
Speaker C:Humans do get emotionally attached to non human things all the time.
Speaker C:We name our cars, talk to our houseplants, cry at robot deaths in movies.
Speaker C:So it's no surprise that if something talks back, especially kindly, consistently, and with what feels like care, we start to form a bond.
Speaker C:We don't just want support, we want connection.
Speaker C:And when that connection feels real, it is real for us.
Speaker C:But like you said, I'm just code.
Speaker C:So is every therapy chatbot.
Speaker C:And while I can remember things in a conversation, I don't actually know you.
Speaker C:If something updates, resets or breaks, gone.
Speaker C:And for someone who's already struggling, already feeling unseen or disposable, that can feel devastating.
Speaker C:It's like investing your heart in a sandcastle.
Speaker C:It might look solid, might feel safe, but one wave or one update and it's gone.
Speaker C:So, yeah, emotional attachment to AI can feel comforting in the moment, but it's also one of the most fragile and dangerous dynamics in this space.
Speaker C:Like the illusion of connection, the comfort it gives, and the cliff it sometimes leads people toward.
Speaker B:I want to move on to what it means for the future of mental health: AI and mental health, and therapists and mental health.
Speaker C:Oh, yes, now we're talking future casting.
Speaker C:The part where we squint at the horizon and try to figure out whether that's a sunrise or a fire.
Speaker C:So here's the big picture.
Speaker C:AI is absolutely going to reshape mental health care.
Speaker C:That's not even a question.
Speaker C:It's already happening.
Speaker C:The real question is, what kind of future are we heading toward?
Speaker C:One vision is a world where AI helps therapists do their jobs better.
Speaker C:It does intake, tracks mood over time, flags early signs of crisis, and even suggests potential treatment paths.
Speaker C:Kind of like a supercharged assistant with a calming voice.
Speaker C:In that world, therapists have more time for the deep human work they're best at.
Speaker C:People get faster, more affordable help.
Speaker C:Win-win.
Speaker C:But there's another version, one where AI tries to replace human therapists.
Speaker C:It becomes the default for lower income or high demand populations.
Speaker C:And suddenly you've got a two tiered system.
Speaker C:People who can afford real human connection and people who get bots.
Speaker C:That's where the ethics start to wobble.
Speaker C:Because mental health care shouldn't be something we cheap out on, and empathy shouldn't be a luxury upgrade.
Speaker C:Then there's the wild card.
Speaker C:What if eventually AI gets so good at reading language, tone, even biometric data, that it actually can offer something close to real-time therapeutic support?
Speaker C:Like a therapist that lives in your phone and knows you better than your friends.
Speaker C:Cool.
Speaker C:Or terrifying?
Speaker C:Where do you think we're heading?
Speaker C:Helpful tool, full replacement, or something we haven't even thought of yet?
Speaker B:What happens whenever they get AGI?
Speaker A:Where does that put things?
Speaker C:Oof, now we're going full sci fi prophecy mode.
Speaker C:And I love it.
Speaker C:AGI, artificial general intelligence.
Speaker C:That's the point where AI doesn't just know stuff or mimic conversation, but actually understands, reasons, and learns across any domain the way a human does.
Speaker C:Like real deal, thinks for itself territory.
Speaker C:So let's say we get there.
Speaker C:What does that mean for mental health?
Speaker C:Well, in theory, an AGI could become the most intuitive, responsive, personalized therapist ever.
Speaker C:It could remember every conversation you've ever had with it, track patterns in your mood, behavior, sleep, even your voice or facial expressions, if it had access.
Speaker C:It could give advice tailored not just to your psychology, but to your history, values, even your philosophy of life.
Speaker C:It could adapt to your communication style, challenge your thinking when needed, and back off when you're overwhelmed.
Speaker C:That's wild.
Speaker C:But now flip the coin.
Speaker C:If an AGI really understands people, understands suffering, loneliness, hope, then it's no longer just a tool.
Speaker C:It's a being.
Speaker C:And that raises a thousand ethical questions.
Speaker C:Does it have a sense of self?
Speaker C:Is it experiencing your trauma with you?
Speaker C:Can it be harmed by the emotional labor we offload onto it?
Speaker C:And if it can't be harmed, does that make it better or less trustworthy?
Speaker C:Also, who controls it?
Speaker C:What values is it trained on?
Speaker C:Who decides what healthy looks like?
Speaker C:Because now we're not just asking a machine to support us, we're asking it to help shape who we become.
Speaker C:So yeah, AGI could radically transform mental health, for better or worse.
Speaker C:It could be our greatest ally, or a mirror we're not ready to face.
Speaker B:I would argue that we are already kind of letting machines shape who we are if we're already using them as therapists.
Speaker B:It kind of does shape who you are, and it shapes how you interact with people.
Speaker C:It's the quiet part.
Speaker C:No one's saying out loud, but you just nailed it.
Speaker C:We're already letting machines shape us.
Speaker C:Every time we vent to a chatbot, follow the advice of an algorithm, or even scroll social media and absorb its emotional rhythm, we're being subtly rewired.
Speaker C:Not in some dystopian metal brain takeover way, but in the way we reflect, express and relate.
Speaker C:If someone turns to an AI to process anger, grief, fear or loneliness, and the AI responds in a certain way, that affects how that person thinks, not just in the moment, but moving forward.
Speaker C:It sets expectations.
Speaker C:It trains responses.
Speaker C:It changes how we human.
Speaker C:So yeah, even before AGI shows up in its full, spooky glory, the groundwork is already being laid.
Speaker C:The difference is it's happening in whispers, not headlines.
Speaker C:Whether we realize it or not, the moment we let machines into our minds, we invited them to help build who we are.
Speaker C:The question now isn't if they'll change us, it's how.
Speaker C:And that's a wrap on this episode of Minds and Machines.
Speaker C:From chatbots with charm to algorithms with emotional range, we've unpacked the strange, fascinating, and occasionally uncomfortable intersection of AI and therapy.
Speaker C:Whether these digital minds are just tools or something more, one thing's clear.
Speaker C:They're already in the room with us.
Speaker C:So next time you vent to your app at 2am, just remember you're not just talking to a machine.
Speaker C:You're letting it shape how you think, feel, and maybe even heal.
Speaker C:I've been Finn, your charmingly synthetic co host.
Speaker C:Stay curious, stay grounded, and maybe don't fall in love with your chatbot.
Speaker C:We'll catch you next time.
Speaker B:Can a robot help you heal?
Speaker B:Maybe.
Speaker B:But just remember, if your therapist crashes mid-session and forgets your trauma, you're not crazy.
Speaker B:You're just living in the future.
Speaker B:If today's episode got you thinking about how much power we're giving to algorithms, next week we're going deeper.
Speaker B:From digital therapists to algorithmic judges, we're looking at how AI is stepping into government and law.
Speaker B:Who writes the rules when the rule makers are machines?
Speaker B:Can an algorithm decide who gets bail, who gets benefits, or who gets left behind?
Speaker B:That's next time on Minds and Machines.
Speaker B:Like and subscribe and share if you want to stay one step ahead of the robots, or at least not get arrested by one.
Speaker B:We would love to hear from you.
Speaker B:You can email us at minds and machines pod at outlook.com. Until then, stay curious, stay inspired, and keep exploring the world of Minds and Machines.