Nick Taylor, of the mental health app ‘Unmind’, talks about new technology, cultural shift, and mental health.
The topic of mental health has reached a critical milestone of relevance: politicians have begun to posture around it. Both major parties in the UK referred to it in their most recent manifestos, and the Conservatives called current treatment provision a ‘burning injustice’.
But, as with physical health, funding is the issue. Given the cost of treatment (and the potential scale of required infrastructure), the more relevant question might be: how can we make it cheaper? New, scalable products which rely on technology already exist – are these the answer? Can they improve outcomes?
I talked to Dr Nick Taylor, who founded just such a product. Unmind is a pioneering mental health platform, aimed at employers, who provide it to their employees. Through a ‘bot psychologist’, daily tools, learning and development series, and tracking functions, Unmind claims to make for a mentally healthier workforce.
I sometimes hear people reach for mental health as the rhetorical example of one thing an AI bot could never do. “You couldn’t have a depression counselor who was a robot.” Do you think that’s true?
[Laughs] No. But I think it’s more grey than that. I don’t think that mental health is one thing – it’s a spectrum. And I don’t think you could say that someone with serious mental health problems would necessarily turn to a bot for support. But I think you could hope that somebody with mild anxiety, mild depression, mild stress, would benefit from some aspects of what a bot could do. And I think there’s evidence to show the effectiveness of that – I know Warren Mansell from the University of Manchester has created a bot called MYLO [Manage Your Life Online], and done some preliminary research to show that it can move people through a problem they’re experiencing in a way that brings about positive change. Equally, WoeBot and the team behind that at Stanford have done a randomized controlled trial, again with the student population, showing that it’s effective in managing and working with mild psychological presentations – so I think there is scope for it to have a role. I also think that if you consider that mental health literacy is very low in society generally, there is a role for the bot to play as an informer, which is quite helpful – to raise awareness. So, for example, someone might say, ‘My colleague is feeling depressed, how can I support them?’ A bot might say, ‘If someone is feeling like that, they can have these symptoms, and it can sometimes be hard to respond. Here are some things it might be helpful to say.’ Etcetera.
"I think the wind is in the sale with mental health literacy. The barrier is scalability."
You say that mental health literacy is very low. Do you think that that’s changing? I get the sense that whereas previously people would have talked about mental health in terms of a small group of people they considered mentally ill, now a lot of people would think about their mental health in the same way they would consider their physical health – there’s an appreciation that it’s not about the pathologization of a small group of people; it’s just something that everybody has to manage. Do you think that is happening?
I love that you say that – that’s where I hope that it’s going. I think there is a change, certainly, and if you look at the work the royals are doing with their campaigns, or that many of the big charities are doing, or you look at the number of celebrities that come forward and talk about mental health – from Ruby Wax to Stephen Fry, all of those types of people – then I think you can see that there is change happening.
I think the leaders of that change and the early adopters understand that there’s a spectrum and that we have mental health all of the time, rather than it being something we only have when we have ‘mental health problems’. And I think many businesses understand that mental health is not something that should be dealt with only in a reactive way, but also proactively.
When I discuss the idea of a mental health spectrum, I would say that the majority of people I speak with are interested in the idea, but most would also struggle to define how a diagnosis of a common mental health condition is reached, and what the common symptom sets would be. So I think that, yes, there are some interesting and exciting signs of greater understanding of mental health, but there’s still a lot to do moving forwards.
And in terms of the roadmap – the problems as you see them, moving forwards, to get towards better mental health outcomes – is it the stigma and the lack of knowledge? Are those the only two problems you see, and what other problems are there?
I think raising awareness is an enormous task. But I think the wind is in the sails with mental health literacy. The barrier is scalability. How do we scale proactive mental health solutions? I think that’s one of the ways that AI and bots can be involved – so that it’s still really affordable. If you think about the traditional model of reactive healthcare – it’s either face-to-face therapy, psychiatric support, the NHS or private health insurance; they’re all either expensive, inconvenient, or difficult to access. And whilst there will always be a place for them, some of their work could be done by automated, cheaper, scalable options that are digital.
"Face-to-face therapy, psychiatric support, NHS, private health insurance; they’re all either expensive, inconvenient, or difficult to access"
I think there remains a lack of parity between physical and mental health. Most people own a toothbrush, and really that’s proactive dental healthcare, or own running shoes, or a swimming costume, and that’s proactive physical healthcare. So we’ve got physical and dental proactive healthcare sorted!
But with mental health we’re not quite there yet. That’s in part because mental health is generally defined as a problem set, so many people don’t think it’s something they have all of the time. But as soon as you think, ‘it’s something I have all the time’, you understand you need to work on it all the time. This has started to change: many people now use mindfulness-based apps such as Headspace, because they understand that doing mindfulness for 10 minutes a day is good for their minds. But there’s so much more we can do for proactive mental health than simply mindfulness.
There’s been all this stuff in the press about social media having a strong correlation with negative mental health outcomes. Do you think that, as tech moves us towards a lifestyle which is less like the environment we evolved in, there will be more problems with mental health, and that we’ll have to be more proactive in addressing them?
I think the answer is quite multi-dimensional and complex, and it depends on who you’re looking at. I’m not an expert on the interplay between social media and mental health, but there are probably groups of people at greater risk – for adolescents going through existential questions in their lives, seeing pictures of their friends on yachts, or looking great on the beach, or whatever it might be, might cause greater self-doubt and anxiety. But I don’t think it’ll affect everyone equally, and the same scenario might have a much more limited impact on someone in their fifties. So I think it can be a risk to people’s mental health depending on how they use it and their age, gender, etcetera.
With regard to the second part of your question – does the digital world and the modern world take us away from nature and our environment? I think it can, but again it’s slightly dependent. Thinking about a city, it’s a louder place, it’s a busier place – and that’s potentially not very good for us. If you look at rat studies, which are often applied to humans, we know that overcrowding causes rats to be hugely stressed, and it’s the same for humans. And there’s fascinating research to show that whether you can see a tree from your hospital bed can affect your recovery rate, so we know that nature has the capacity to be very good for us.
Ultimately I think both of those things are dangerous but can be mitigated by taking proactive steps, such as creating nice spaces in cities, and having digital downtime – for example not having your phone next to your bed.
I know a number of people who have given up Facebook, or who impose on themselves specific rituals when it comes to internet connectivity or social media, where they curate very carefully what they allow themselves to look at, because they think, ‘a lot of this stuff is addictive and I don’t like using it, although I like using it on a minute-to-minute basis.’ Would you say that’s what Unmind is about?
Unmind is about offering positive, proactive mental health support in the workplace. Our goal is to create healthier, happier people and healthier, happier organizations. We offer tools that can be used on a day-in, day-out basis to improve our mental health, allowing us to be stronger and better. We also offer support for mental health problems at a much earlier stage than traditional models. The platform is designed to cover the whole spectrum of mental health, from problems to thriving.
Ultimately we are fascinated by the potential of digital health. From AI to apps to biotech, it is all really exciting. But equally, what’s super-important is clinical validity. So we’re constantly talking and working with top professors from the top universities around the world, to make sure what we are doing is in line with the evidence and at the cutting edge of what is possible, but also doing research of our own to contribute.
"I think what’s interesting is to think about data… and how it impacts on someone’s understanding of themselves."
Do you find there’s a lot of misinformation in this space?
Yes. 100%. This is a slightly simplistic way of putting it, but I find that there are three categories of apps and digital products when it comes to mental health. There are the really, really beautiful apps, which are often designed by design agencies but have questionable clinical validity. Then there are the really, really, really ugly apps, developed by clinicians and academics on limited budgets, which have brilliant validity but nobody will use because they’re poorly designed. And finally there are the clinically valid, beautiful apps which nobody uses because they’re too expensive.
What other innovations have you seen in mental health which you think are interesting?
There are lots of companies I can think of, but something I’m excited about outside of AI, which we’ve talked about, is biotech. So, for example, there is a company I find very exciting called LiveQ, who are doing really fascinating research with biomarkers and are creating an ecosystem around biotechnology to help improve healthcare. Not everyone wears a wearable, and not all wearables are quite where everyone wants them to be in terms of accuracy, but it’s certainly an interesting space. I think what’s really interesting is to think about the data you can get and how it impacts on someone’s understanding of themselves. Ultimately the data has to assist the user in taking positive behavioural and emotional steps in their lives.
It’s interesting that you’ve chosen data from wearables. What do you think of the idea of ‘big data’ collection in the mental health space? I wonder whether analytics of somebody’s social media footprint could yield more data than wearables?
I think that analysing someone’s social media footprint is an interesting approach, but there are some question marks around the methodology and ethics of some research and work in that space. Ultimately all data, whether from biotech, social media, etcetera, is interesting and can potentially play a role in better understanding someone’s inner mental state. The key thing for me is that the evidence is sound and applied in a scientifically rigorous, ethical way.