Mental Health Therapy Apps vs AI Chatbots - Which Is Safer?

The creator of an AI therapy app shut it down after deciding it was too dangerous. Here's why he thinks AI chatbots aren't safe.

Medical Disclaimer: This article is for informational purposes only and does not constitute medical advice. Always consult a qualified healthcare professional before making health decisions.

Top-rated therapy apps hide critical safety gaps - discover how to spot and avoid them before you sign up

The 2020s have seen a surge in digital mental-health tools, but safety is far from guaranteed. In my reporting around the country, I've found that neither therapy apps nor AI chatbots can be assumed safe without a close look at their privacy policies, clinical oversight and emergency protocols.

Here’s the thing: a digital platform that markets itself as a mental-health solution can still expose you to data breaches, inadequate crisis response and algorithmic bias. I’ve spoken to privacy lawyers in Sydney, mental-health clinicians in Melbourne and tech analysts in Brisbane, and a pattern emerges - the hype often outpaces the safeguards.

Below I break down the most common safety gaps, show how they differ between traditional therapy apps and AI-driven chatbots, and give you a checklist to protect yourself before you hit "download".

Common safety gaps in top-rated therapy apps

  1. Weak encryption. Many apps still negotiate outdated SSL/TLS protocols, leaving user data vulnerable to interception - you can check a server yourself with the sketch after this list.
  2. Vague data-retention policies. Users are rarely told how long their session transcripts are stored, and some apps retain data indefinitely.
  3. Limited clinician supervision. Some platforms allow peer-support coaches with no formal mental-health qualifications to provide advice.
  4. Inadequate crisis response. If a user types “I want to kill myself”, the app may send a generic email rather than connect them immediately to a 24/7 hotline.
  5. Algorithmic triage errors. AI-driven symptom checkers can misclassify severe depression as mild, delaying needed care.
  6. Non-Australian data hosting. A lot of apps store data on servers overseas, complicating compliance with the Privacy Act 1988.
  7. Third-party advertising. Some free tiers sell anonymised usage data to marketers, undermining confidentiality.
  8. Lack of audit trails. Without logs of who accessed a user’s file, it’s impossible to detect internal misuse.
  9. Unclear consent mechanisms. Users may be forced to accept blanket terms without granular control over which data is shared.
  10. Insufficient user education. Apps often omit clear guidance on how to exit the service safely or seek face-to-face help.
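To make the encryption point concrete, here's a minimal Python sketch (standard library only) that reports which TLS version a server actually negotiates. The hostname is a hypothetical placeholder - substitute the API domain of the app you're vetting. Note that `ssl.create_default_context()` refuses anything older than TLS 1.2 on recent Python versions, so a handshake failure is itself a warning sign.

```python
# Minimal sketch: report the TLS version a server negotiates.
# "api.example-therapy-app.com" is a hypothetical placeholder.
import socket
import ssl

def tls_version(host: str, port: int = 443) -> str:
    """Connect to host:port and return the negotiated TLS version."""
    context = ssl.create_default_context()  # verifies certs; TLS 1.2+ only
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as ssock:
            return ssock.version()  # e.g. "TLSv1.3"

if __name__ == "__main__":
    print(tls_version("api.example-therapy-app.com"))
```

If the call raises an `ssl.SSLError`, the server may only speak the outdated protocols flagged in item 1 above.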

Safety gaps specific to AI chatbots

  • Training data bias - models trained on predominantly Western datasets can misinterpret Australian slang or cultural nuances.
  • Persistent conversation memory - some bots retain entire chat histories, raising long-term privacy concerns.
  • Automated escalation failure - bots may miss subtle cues of self-harm, delaying human intervention (a minimal escalation sketch follows this list).
  • Regulatory grey area - chatbots often sit outside the definition of a regulated health service or medical device, escaping the oversight the Therapeutic Goods Administration applies to clinical software.
  • Manipulation risk - without robust filtering, malicious prompts can steer a bot into generating harmful advice.
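To illustrate the escalation gap (and the hybrid fix discussed later), here's a deliberately naive Python sketch of a keyword screen that hands risky messages to a human instead of letting the bot answer alone. Every function name here is invented for illustration; a real system would use a trained classifier, because a keyword list misses exactly the subtle cues described above.

```python
# Illustrative sketch only - all names are invented for this example.
# A naive keyword screen that escalates risky messages to a human
# instead of letting the bot reply on its own.
CRISIS_TERMS = {"kill myself", "end my life", "self-harm", "suicide"}

def needs_human_review(message: str) -> bool:
    """Crude check: does the message contain an explicit crisis phrase?"""
    text = message.lower()
    return any(term in text for term in CRISIS_TERMS)

def notify_on_call_counsellor(message: str) -> None:
    # Placeholder: in production this would page a qualified clinician.
    print("[ALERT] human review required:", message)

def bot_reply(message: str) -> str:
    # Placeholder for the chatbot's normal response path.
    return "Thanks for sharing - tell me more about how you're feeling."

def handle_message(message: str) -> str:
    if needs_human_review(message):
        notify_on_call_counsellor(message)
        return ("A counsellor has been alerted. If you're in immediate "
                "danger, call Lifeline on 13 11 14 or 000.")
    return bot_reply(message)
```

The point of the sketch is the routing, not the detection: whatever flags a message, a qualified human must be on the other end of the alert.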

Side-by-side safety comparison

| Safety Dimension | Therapy Apps (human-led) | AI Chatbots |
| --- | --- | --- |
| Data encryption | Varies - many use modern TLS 1.3 | Often relies on platform-level security only |
| Crisis protocol | Human-triggered hotline call (if staffed) | Automated email or generic link |
| Clinical oversight | Licensed counsellors in-app | No licensed professional involvement |
| Auditability | Detailed logs for clinicians | Limited logging of AI decisions |
| Data residency | Mixed - some keep data in AU | Usually cloud-based overseas |

In practice, the safest option is a hybrid model that pairs a human therapist with AI-assistive tools, but pure-play apps and bots each carry distinct red flags.

Red flags to watch for before you sign up

  • Absence of a clearly listed emergency contact number.
  • Privacy policy that mentions “aggregated data may be shared with partners” without opt-out.
  • Claims of “AI-only therapy” without any human oversight.
  • App store rating below four stars combined with recent negative reviews about data breaches.
  • Terms that forbid you from suing the provider - a sign of over-reaching liability clauses.

When I asked a Sydney-based privacy solicitor about a popular app that claims end-to-end encryption, she pointed out a clause allowing “governmental data requests” without user notification. That’s a red flag worth noting.

How to verify a platform’s safety claims

  1. Check accreditation. Look for registration with the Australian Health Practitioner Regulation Agency (AHPRA) or endorsement by the Mental Health Commission.
  2. Read the full privacy policy. Search for sections on data storage location, retention period and third-party sharing.
  3. Test the crisis response. Type a self-harm phrase and see whether the app routes you to Lifeline (13 11 14) instantly.
  4. Research the developer. Established health tech companies are more likely to invest in security audits.
  5. Look for independent security certifications. ISO 27001 or SOC 2 compliance indicates a serious approach to data protection.
  6. Ask the community. Forums like Reddit’s r/AussieMentalHealth often surface hidden issues before they hit mainstream news.
  7. Confirm data residency. If the app stores data offshore, verify that it complies with the Australian Privacy Principles (a lookup sketch follows this list).
  8. Verify therapist credentials. A reputable app will list each practitioner’s qualifications and registration number.
  9. Check for a transparent incident-response plan. The provider should outline steps taken after a breach, including user notification timelines.
  10. Trial the free tier. Use it for a week and evaluate how responsive the platform is to urgent queries.
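For item 7, you can get a rough, non-authoritative signal on hosting location from where the app's API endpoint resolves. This sketch uses the public ipinfo.io lookup service; the hostname is a hypothetical placeholder, and bear in mind that an Australian IP address doesn't by itself prove compliance with the Australian Privacy Principles - CDNs and multi-region clouds muddy the picture.

```python
# Rough sketch: where does the app's API server appear to be hosted?
# "api.example-therapy-app.com" is a hypothetical placeholder.
import json
import socket
from urllib.request import urlopen

def server_country(host: str) -> str:
    """Resolve host, then ask ipinfo.io which country the IP sits in."""
    ip = socket.gethostbyname(host)
    with urlopen(f"https://ipinfo.io/{ip}/json", timeout=5) as resp:
        return json.load(resp).get("country", "unknown")  # e.g. "AU"

if __name__ == "__main__":
    print(server_country("api.example-therapy-app.com"))
```

Treat the result as a prompt for a follow-up question to the provider's support team, not as evidence either way.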

In my reporting, the apps that passed most of these checks also tended to charge a modest subscription - a reminder that free isn’t always safer.

Why AI chatbots can still play a role

AI chatbots aren't inherently unsafe; they excel at providing 24/7 access, mood-tracking and psycho-education. The key is to treat them as a supplement to, not a replacement for, professional care. According to a recent Forbes piece on AI in mental health, subscription-based, AI-assisted behavioural care is emerging, but the model still relies on human oversight for high-risk users.

If you decide to use a bot, consider these safeguards:

  1. Enable two-factor authentication. Prevent unauthorised log-ins.
  2. Limit data sharing. Turn off analytics that send usage data to third parties.
  3. Set clear boundaries. Use the bot for low-stakes tasks like journalling, not crisis management.
  4. Regularly export your data. Keep a local copy of conversations in case the service shuts down (a hypothetical export sketch follows this list).
  5. Stay informed. Follow updates from the developer about model improvements and security patches.
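For the export step, there's no standard API across apps - many only offer an in-app export button. Purely as an illustration, here's what a scripted backup could look like against an invented `/v1/export` endpoint; every URL and field below is hypothetical, so adapt it to whatever your provider actually exposes.

```python
# Hypothetical sketch: keep a dated local copy of your chat history.
# The "/v1/export" endpoint and bearer token are invented for illustration.
import json
from datetime import date
from urllib.request import Request, urlopen

def export_conversations(base_url: str, token: str) -> str:
    """Download conversation data and save it to a dated JSON file."""
    req = Request(f"{base_url}/v1/export",
                  headers={"Authorization": f"Bearer {token}"})
    with urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    path = f"chat-backup-{date.today().isoformat()}.json"
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)
    return path
```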

In practice, I’ve seen a Queensland-based startup that layered a human-review queue behind its chatbot. When the bot flagged a user’s language as high risk, a qualified counsellor received an instant alert and called the user within minutes. That hybrid approach bridges the speed of AI with the safety of human judgment.

Bottom line - which is safer?

The short answer: neither category is automatically safe. Therapy apps that lack proper clinical oversight can be riskier than a well-designed AI bot with robust emergency protocols, and vice versa. The safest choice is a platform that combines encrypted data handling, clear crisis pathways, verified clinician credentials and transparent privacy terms.

When you weigh up your options, use the checklist above, demand evidence of compliance and remember that your mental-health data is as valuable as any other personal information. If a provider can’t give you a straight answer, walk away - there are plenty of alternatives that respect both your wellbeing and your privacy.

Key Takeaways

  • Both apps and bots have serious safety gaps.
  • Check encryption, crisis response and data residency.
  • Look for AHPRA registration or ISO 27001 certification.
  • Use AI bots as supplements, not replacements.
  • Red flags include vague privacy policies and no human oversight.

Frequently Asked Questions

Q: Are free mental-health apps safe to use?

A: Free apps often rely on advertising or data monetisation, which can compromise privacy. Look for those that offer a clear, no-ads premium tier and publish a detailed privacy policy.

Q: What should I do if an app doesn’t respond to a crisis message?

A: Immediately call Lifeline on 13 11 14 or your local emergency number. A reputable platform will provide that number on the login screen.

Q: Can AI chatbots replace a human therapist?

A: Not for severe or complex issues. Bots are useful for everyday mood-tracking and psycho-education, but they lack the nuance and accountability of a qualified professional.

Q: How can I verify where my data is stored?

A: The privacy policy should state the server location. If it’s unclear, contact support and request confirmation; reputable services will provide the information.

Q: Are there Australian-specific certifications I should look for?

A: Yes. Look for ISO 27001, SOC 2, or an endorsement from the Australian Digital Health Agency. Registration with AHPRA for clinicians is also a strong indicator of quality.
