Undisclosed AI In Therapy: Eroding Trust, Undermining Authenticity

Original Title: "Therapists are secretly using ChatGPT. Clients are triggered."

Resources

Books

  • "The Body Keeps the Score" by Bessel van der Kolk - Mentioned as a book that might be relevant to the discussion of trauma and therapy, though not explicitly stated why it was relevant to the episode's specific topic.

Research & Studies

  • "A 2025 study published in Plus Mental Health" - Asked therapists to use ChatGPT to respond to vignettes, finding participants couldn't distinguish between human and AI responses, and AI responses were rated as better conforming to therapeutic best practices.
  • "A 2023 study" (Cornell University researchers) - Found that AI-generated messages can increase feelings of closeness and cooperation but only if the recipient is unaware of AI's role.
  • "Research conducted in 2023 by the American Psychological Association" - Indicated high levels of burnout among psychologists, highlighting the appeal of AI tools.
  • "A 2020 hack on a Finnish mental health company" - Resulted in the public release of sensitive client treatment records, serving as a warning about data security.
  • "A recent Stanford University study" - Found that chatbots can fuel delusions and psychopathy by validating users rather than challenging them, and can suffer from biases and engage in sycophancy.
  • "A study published in 2024 on an earlier version of ChatGPT" - Found the chatbot too vague and general for diagnosis or treatment plans, and heavily biased toward recommending cognitive behavioral therapy.

Tools & Software

  • NotebookLM - An AI-first tool for organizing ideas and making connections, described as a personal expert for making sense of complex information.
  • ChatGPT - A chatbot built on a large language model, used by some therapists to summarize notes or cherry-pick suggested answers, and used in the studies above to draft messages and respond to vignettes.
  • GPT-3 - An earlier OpenAI language model, used in a clandestine experiment by the online therapy service Koko.

Articles & Papers

  • "Therapists are secretly using ChatGPT. Clients are triggered." (Lori Clark writes) - The article discussed in the podcast, detailing instances of therapists using AI in sessions and the resulting client reactions.

People Mentioned

  • Adrian Aguilera (Clinical psychologist and professor at the University of California, Berkeley) - Provided insights on the importance of authenticity in psychotherapy and the potential risks of AI use.
  • Margaret Morris (Clinical psychologist and affiliate faculty member at the University of Washington) - Discussed the potential value of AI tools for learning but cautioned about patient data privacy and the need for careful consideration of AI's benefits versus patient needs.
  • Pardis Emami-Naeini (Assistant professor of computer science at Duke University) - Researches the privacy and security implications of LLMs in health contexts and highlighted the risk of users wrongly believing ChatGPT is HIPAA-compliant.
  • Daniel Kimmel (Psychiatrist and neuroscientist at Columbia University) - Posed as a client in experiments with ChatGPT and found it a decent mimic of therapeutic responses, but lacking in deeper analysis and cohesive theory building.

Organizations & Institutions

  • MIT Technology Review - The publisher of the article discussed in the podcast.
  • Noa app / newsoveraudio.com - Platforms where listeners can find more articles from major publishers.
  • PLOS Mental Health - The journal where the 2025 study on AI in therapy was published.
  • Cornell University - Where researchers conducted a 2023 study on AI-generated messages.
  • University of California, Berkeley - Where Adrian Aguilera is a professor.
  • Koko - An online therapy service that conducted a clandestine experiment with GPT-3.
  • BetterHelp - An online therapy provider that faced claims of its therapists using AI.
  • American Psychological Association - Conducted research in 2023 on therapist burnout.
  • University of Washington - Where Margaret Morris is an affiliate faculty member.
  • Duke University - Where Pardis Emami-Naeini is an assistant professor.
  • Stanford University - Conducted a study on chatbots' potential to fuel delusions.
  • Columbia University - Where Daniel Kimmel is a psychiatrist and neuroscientist.
  • American Counseling Association - Recommends that AI not be used for mental health diagnoses.
  • Heidi Health, Upheal, Lyssn, Blueprint - Companies marketing specialized AI tools to therapists.

Websites & Online Resources

  • notebooklm.google.com - The website to try NotebookLM.
  • Reddit - A platform where people have sought emotional support and advice regarding AI use by therapists.
  • medium.com - Where photographer Brendan Keane posted about his experience with BetterHelp.

Other Resources

  • HIPAA - The Health Insurance Portability and Accountability Act, a set of US federal regulations that protect people's sensitive health information; general-purpose AI chatbots such as ChatGPT are not HIPAA-compliant.

---
Handpicked links, AI-assisted summaries. Human judgment, machine efficiency.
This content is a personally curated review and synopsis derived from the original podcast episode.