Artificial innovation, or artificial invasion: can AI play a role in the future of therapy and client analysis?
It is a sad but true reality that mental health and wellbeing are under continuous strain, and that more and more people are seeking mental health support in some form. This has been exacerbated by Covid-19 and its long-lasting, pressing effects on various groups within society.
According to recent media reports, psychiatrists and health professionals are warning that we face a “tsunami” of mental illness from problems stored up during lockdown, and that mental health is now the biggest crisis facing businesses. That means more clients requiring assessment and access to support as soon as possible, and so more work for counsellors and therapists whose workloads may already be stretched to the limit.
But what if some of this pressure could be alleviated by technology: a system to help streamline the process and reduce waiting times, so that clients can receive help sooner? What if there were software available to therapists and counsellors to assist them with their clients, store information and help with diagnosis?
In recent years, the mental health sector has increasingly incorporated artificial intelligence software into its work. Examples range from mental health apps like BlueIce, which allows clients to record and manage their emotions and then uses this data to generate advice on how to reduce difficult feelings, to technology-based self-assessments that clients complete and from which a diagnosis is deduced by extracting and analysing the answers given. In many ways this is beneficial: apps allow people to access support at their discretion, and digital assessments provide mental health professionals with data that software can analyse systematically.
But alongside the benefits come potential drawbacks to incorporating this technology. In the coming years, the therapy room could look very different, with facial recognition software monitoring a client’s behaviour, or text-analysis software trained to pick up specific keywords and to detect whether the way a client speaks has changed over time. While these sound promising in theory, technology and AI are still a long way from being refined enough to work effectively. Technology is designed to think logically, and this poses the threat of a client deliberately acting in a certain way to steer the system towards a particular diagnosis, or to dodge one.
What is more, a clear signal is vital if the system is to produce a response that is as accurate as possible. As many therapists and counsellors discovered when adapting to virtual sessions over lockdown, a strong internet connection matters enormously when you need to see a client’s face clearly and read their expressions and movements. If AI were used within therapy to assess facial expressions and body language, it would need to do so with near-infallible accuracy, or risk overlooking or misinterpreting something important. In a recent news report, Stanford University psychologist Johannes Eichstaedt stated: ‘Mental illness is under-diagnosed by at least 50%, and AI can serve as a screening and early warning system. But current detection screening systems haven’t been proven to be effective yet. They have mediocre accuracy by clinical standards.’
And then, of course, comes the debate over the invasion of personal data. Artificial intelligence is heavily present in China, where the government uses widespread facial recognition, such as SenseTime Group’s SenseVideo, to monitor and control human behaviour, and even to humiliate civilians. Not only does this deprive people of confidentiality, it also takes a toll on their mental health, denying them the basic human right to personal privacy and leaving them to live with the worry that their data may be exploited. While an extreme example, it illustrates the capabilities of AI. So, is it ethical to allow a machine to record, store and monitor data on a client? Does this risk confidential data being exploited? Is technology overstepping a boundary by being applied to something as personal and intimate as someone’s mental state and emotions?
With these thoughts in mind, we asked therapists Dee Johnson (Senior Accred MBACP, MNCS, MRSPH) and Emily Duffy (MNSC) for their thoughts on the role of AI in therapy, and whether they think there is a place for artificial intelligence in the analysis and diagnosis of clients.
Dee Johnson: ‘We need to progress and explore new approaches, absolutely, and the safe use of AI in mental health could in future mean more immediate access to support, guidance and prevention, rather than draining budgets on maintenance or long-term cures, which often exacerbate the mental health crisis and people’s suffering and deterioration.
Communication comes in many formats now, and although our human interactions are vital, we are evolving in new ways, so we need to be forward-thinking. This may also provide a pathway for those who struggle with human connection to begin connecting with people. So, the initial AI approach could be an effective first bridge to accessing help they may not otherwise have sought.
I would need to feel confident that data and very personal information are protected and secure, and that the AI can understand an individual’s nuances, mannerisms and so on without misinterpreting them. For example, sarcasm, an ironic statement or a facial expression could be taken literally, totally misreading the client’s intention and reality and resulting in a misdiagnosis, which would be potentially harmful.’
Emily Duffy: ‘I think there is definitely a place for AI in diagnosis and treatment, but one that needs to be considered thoroughly, keeping the client at the heart of it all and their information safe and confidential.
There might be a risk of clients purposefully using ‘keywords’ to access diagnosis and treatment, or conversely missing those keywords and so missing a diagnosis in the early stages. It may therefore be best used as a supplement to current procedures rather than a complete replacement, especially in the immediate future.
Humans are also complex, so diagnosis is not a case of ticking boxes: the individual circumstances of each client need to be taken into consideration, especially when conditions can overlap. AI will need to be able to account for this, and we as practitioners will need to account for how the client might feel in this process, for example how they are being seen, what their preferences might be, their state of being, any human inconsistencies, the reliability of human memory when answering questions, and so on.
I myself have tested current AI mental health apps and found them useful at first, but repetitive and too impersonal, lacking the human touch, with frequent use, so I do think a lot of improvement is needed before this can be used effectively or long-term.’
Given the pace of technological advancement, AI will undoubtedly feature more and more in everyday life as time goes by. But is there a role for it within therapy and counselling? Can AI, or digital tools, genuinely help us gather more precise data so that services and support within the mental health and wellbeing sector can be more helpful?