The Future of AI in Mental Health Support

AI is everywhere — including in mental health support. How will AI affect our industry, for good and bad? Let’s find out.

We’re not just talking about the near future, but the present: there are already myriad ways you might see AI pop up in mental health spaces. From chatbots offering a form of therapy, like Woebot, to wearable devices like Biobeat that monitor biometric data such as sleep patterns and heart rate and alert you when a reading signals trouble, you might already be interacting with AI-powered mental health services. AI is also helping researchers analyze patients’ medical and behavioral data, and predict which kind of therapy or treatment a patient might respond to best.

And guess what: AI is in nilo.health’s platform, too. We work at the forefront of mental health support and tech, so of course we want to harness AI as a powerful and creative tool. But we’re also firm believers in the huge impact and importance of real, expert humans offering psychological help and support. And we know that just as AI can do amazing things, there are also real dangers and drawbacks associated with the technology. That’s why we’re moving carefully to understand AI’s potential in mental health, and how best to use it here at nilo.

Read on to learn more about AI’s potential, drawbacks, and how we plan to use it at nilo.health.

The positive potential of AI in mental health support

One of the greatest issues with mental health care is how inaccessible it remains. There are too few resources, and too many long waiting lists. AI has the potential to revolutionize this.

Here are just some of the ways we can use AI to provide better-quality mental healthcare to more people:

  • Analyzing patient data: AI can analyze electronic health records, blood tests, brain images, questionnaires, voice recordings, behavioral signals, and even information from a patient’s social media accounts to flag mental health issues. One study found that, depending on the choice of AI technique and the quality of the training data, algorithms can detect mental illnesses with 63–92% accuracy. This could be used to refer patients to further treatment, or to screen large groups, such as soldiers just returned from combat.
  • Self-assessment and generative AI therapy sessions: It sounds like something from a movie. (Scarlett Johansson, anyone?) But many people find it comforting to talk to an AI-driven chatbot that offers no judgment, and a range of chatbots are already working on offering accessible therapy sessions to people who need them. Depending on the power of the technology, these therapy bots can use CBT and other clinical methods, pair with wearable devices, and more.
  • Offering personalized treatment plans: AI can deliver nuanced readings of data like brain scans to create personalized treatment plans for patients with schizophrenia and other mental illnesses. Right now, patients find the right medication through a painstaking and sometimes miserable process of trial and error. But studies show that AI can recognize patterns and recommend the right kind of treatment for each ‘kind’ of brain.

This is just the tip of the iceberg. AI can also help monitor ongoing mental health issues or treatments, automate tasks, support clinicians, and more (as you’ll see when we talk about nilo!). But of course, there are also downsides to using AI that we need to be aware of as we move forward.

Dangers and concerns about AI in mental health support

AI is raising concerns in a range of industries, including healthcare. When it comes to mental health, here are some of the risks to keep in mind as we integrate AI:

  • AI bias: We tend to think of machines as objective and impartial, but machines are made by people, and it’s all too easy for our own prejudices, limitations, and biases to enter the data. Inaccuracies or imbalances in the datasets used to train algorithms can produce unreliable predictions or perpetuate social prejudices; for example, mental health issues are more likely to go undiagnosed among certain ethnic or cultural groups, and data can replicate this bias. The WHO reports that even with more training data, bias remains a significant risk.
  • More for mental illness than mental healthcare: Currently, AI is mostly being used to track depressive disorders, schizophrenia, and other psychotic disorders. This is highly important and impactful, of course, but it means that current AI technology is less adept at dealing with more day-to-day mental health issues, like anxiety or stress. For those of us who work in mental health support in the workplace, this means AI might not yet be as practical or useful. (Until, of course, we train it to work for us!)
  • Subjective assessment is still important: Mental health and physical health are equally important – at nilo, we talk about this all the time. But while physical health tends to register in empirical data, subjective assessment remains central to mental health. Mental health workers rely on self-reported patient data alongside other, more empirical measures, and their expertise and skill in interpreting those subjective reports is often greater than AI’s, which may make the wrong call.
  • The potential for things to go wrong: We’ve all seen examples of AI going wrong: a ridiculous or misleading answer in a search engine, or bizarre AI-generated images. But in mental health support, the stakes are extremely high. We need to be extra careful rolling out AI in mental healthcare, because a mistake might not just be a learning moment; it could lead to catastrophic results. That means extensive testing and refinement are required before we roll out any kind of AI mental health service.

How will nilo.health use AI?

nilo.health is already using AI! For example, in our mood journal, users have the option to describe what they’re worried about. They’re then met with a set of relevant questions on the topic: for example, questions about stress, anxiety, public speaking, and more. Those questions are written by our expert-led team, but AI clusters them and pulls them in from our internal library.
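For readers curious about the mechanics, the sketch below is a rough, purely illustrative example of one common way free-text input can be matched against a library of pre-written questions, using text embeddings and similarity scoring. The model name, the sample question library, and the suggest_questions function are assumptions made for this example, not a description of nilo.health’s actual system.

```python
# Illustrative sketch only: matching a user's free-text worry to expert-written
# reflection questions via text embeddings and cosine similarity.
# The model and question library below are assumptions for the example,
# not nilo.health's actual implementation.
from sentence_transformers import SentenceTransformer
import numpy as np

# Hypothetical library of expert-written reflection questions.
QUESTION_LIBRARY = [
    "What situations tend to trigger your stress at work?",
    "How does anxiety show up in your body before a presentation?",
    "What has helped you feel calmer before public speaking in the past?",
    "Who could you talk to about the pressure you're feeling right now?",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
# Pre-compute normalized embeddings for the library once.
question_vectors = model.encode(QUESTION_LIBRARY, normalize_embeddings=True)

def suggest_questions(user_entry: str, top_k: int = 2) -> list[str]:
    """Return the library questions most semantically similar to the entry."""
    entry_vector = model.encode([user_entry], normalize_embeddings=True)[0]
    scores = question_vectors @ entry_vector  # cosine similarity (vectors are unit-length)
    best = np.argsort(scores)[::-1][:top_k]
    return [QUESTION_LIBRARY[i] for i in best]

print(suggest_questions("I'm nervous about a big presentation next week."))
```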

This might seem like a small step compared to the huge advances in AI, but our number one priority is our users, so rolling out AI at nilo.health is a matter of careful and serious consideration.

“AI presents an extraordinary opportunity to enhance services, including here at nilo.health, offering personalized support at scale and improving accessibility,” says Katharina Koch, Head of Psychology at nilo.health. “By harnessing the power of AI, we can complement human expertise with data-driven insights to better tailor our services. However, it is crucial to approach this technology with ethical considerations, ensuring that AI augments the human touch rather than replacing it.”

Stay tuned! The potential opportunities of AI are huge. But we need to be thorough and meticulous in the work we do, to avoid bias or dangerous advice. Only then can we use AI as it should be used: as part of the work we’re already doing to make mental health support accessible for everyone.
