Empowered by artificial intelligence, our smart assistants and devices have the potential to identify subtle changes in our mental health and even play an active role in diagnosis and treatment. It’s one of the most exciting, but often overlooked, applications of AI – and it couldn’t be more needed.
AI is only as good as the data sets on which it trains, particularly where machine learning is concerned (a field of AI that grew out of pattern recognition and computational learning theory). As the data gathered by our ubiquitous smart assistants and devices grows exponentially through billions of daily interactions, so do the potential capabilities of the machine learning systems behind them. If we as users are open to disclosing our data for the purpose of analysing our psychological wellbeing – and provided the platforms we depend on can be trusted with that data – the benefits could be tremendous.
It’s not just our patterns of speech that can yield useful insight. Researchers from Harvard and the University of Vermont analysed Instagram images using machine learning techniques and found that the photos we post can carry predictive markers of depression. Using colour analysis, metadata and algorithmic face detection, they found that participants diagnosed with depression tended to post darker-coloured photos. Applying machine learning to these features, the research team achieved a detection rate of 70 per cent; previous studies of GP-led diagnosis showed a detection rate of just 42 per cent.
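To make the colour-analysis idea concrete, here is a minimal sketch of how such a signal could be turned into a classifier. It is purely illustrative and not the researchers’ pipeline: the file layout, the labels file and the choice of logistic regression are all assumptions, and the actual study combined colour features with metadata and face detection.

```python
# Illustrative sketch only: a toy version of the colour-analysis idea,
# not the study's actual pipeline. Assumes a folder of photos and a file
# of hypothetical 0/1 labels; uses Pillow, NumPy and scikit-learn.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def colour_features(path: Path) -> np.ndarray:
    """Mean hue, saturation and brightness of a photo (HSV colour space)."""
    with Image.open(path) as img:
        hsv = np.asarray(img.convert("HSV"), dtype=float) / 255.0
    return hsv.reshape(-1, 3).mean(axis=0)  # [hue, saturation, value]


# Hypothetical inputs: one photo per participant, and one label per line
# indicating whether that participant had a depression diagnosis.
photo_paths = sorted(Path("photos").glob("*.jpg"))
labels = np.loadtxt("labels.txt", dtype=int)

X = np.stack([colour_features(p) for p in photo_paths])
clf = LogisticRegression()

# Cross-validated accuracy of the darker/greyer-photos signal on its own.
print(cross_val_score(clf, X, labels, cv=5).mean())
```

Even a crude brightness-and-saturation signal like this hints at why machine learning is useful here: patterns that are faint in any single photo can become detectable across thousands of them.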
AI is promising in the field of mental health because of its capacity to analyse vast quantities of different types of data and establish predictive patterns where humans, even highly trained ones, cannot. This is not to say that clinicians won’t continue to have a central role in diagnosing patients, only that AI may provide both clinicians and patients themselves with new avenues for early detection and diagnosis.
Beyond diagnosis, AI tools are also creating new treatment protocols. Woebot is an AI-based chatbot app designed by Alison Darcy, a clinical psychologist at Stanford. It offers users cognitive-behavioural therapy (CBT), asking a series of scripted questions in a conversational format. Unlike clinical therapy, Woebot is available anytime, anywhere. Apps like Woebot not only broaden access to therapy but may also remove some of the barriers to seeking help, building conversational (and emotional) intelligence as they learn from their users. With AI, there’s no fear of judgement and perhaps less perception of social stigma – common reasons why people don’t seek professional help.
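To give a flavour of what “a series of scripted questions in a conversational format” can mean at its simplest, here is a toy sketch. It is an illustration built on assumptions, not Woebot’s actual design, script or code.

```python
# Illustrative sketch only: a toy scripted check-in in the spirit of a
# CBT-style chatbot. The questions and flow are invented for illustration.
SCRIPT = [
    ("mood", "How are you feeling right now, in a word or two?"),
    ("trigger", "What has been on your mind the most today?"),
    ("thought", "When you think about that, what goes through your head?"),
    ("reframe", "Is there another way of looking at that thought?"),
]


def run_check_in() -> dict:
    """Walk through the scripted questions and collect the answers."""
    answers = {}
    print("Hi! Let's do a quick check-in.")
    for key, question in SCRIPT:
        answers[key] = input(question + " ")
    print("Thanks for sharing. Noticing a thought and trying to reframe it "
          "is the core move in cognitive-behavioural therapy.")
    return answers


if __name__ == "__main__":
    run_check_in()
```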
AI not only offers potential for the diagnosis and treatment of those suffering from acute mental health conditions – it could help everyone live happier, more balanced lives. Stress and stress-related illness can rightly be called an epidemic, affecting most of us at some point, whether through work, family, financial or other pressures. A 2017 survey by the American Psychological Association found that 80 per cent of respondents had suffered from at least one stress symptom in the past month.
As AI becomes ever more blended into our lives, we face many ethical decisions as a society about how the data gathered by our smart assistants, devices and social networks should be used. Companies that fail to protect our data, or that misuse it, risk undermining the public’s trust in and openness to potentially transformative new applications. That would be a tragedy, because allowing our data to be used to help manage mental health and wellbeing could prove life-changing for millions of people, improving diagnosis, broadening access to support, and lessening stigma. Too often the topic of mental health is shrouded in silence. Let’s hope that the present conversation about our data and AI helps change that.