The next evolution of artificial intelligence has emerged: OpenAI’s GPT-4. With traditional mental health services in high demand, is it possible that an artificial brain could be used to help a human mind?
What’s new in GPT-4?
This week OpenAI, maker of the artificial intelligence language models ChatGPT and DALL-E, announced its latest product: GPT-4. This new AI language model is a significant evolution over its predecessors, with the company touting its advanced problem-solving capabilities. Another new feature is the AI’s ability to understand image prompts alongside text input, meaning the model is now multimodal in nature. Prior versions, including GPT-3.5, could only parse text input. Overcoming this technical limitation means the AI can now not only perform basic object recognition, but also analyse images and use associated text input to help form a contextual understanding. The company demonstrated this during a developer livestream, asking the AI to decipher images such as UI screenshots and hand-drawn notes, outputting detailed answers and even writing some basic code based on the initial text-based query.
Beyond multimodal capabilities, GPT-4 scored higher than previous models on a variety of benchmarks, including reliability, creativity and ability to process more complex queries with a higher degree of accuracy. This was thanks in part to the model benefiting from the lessons the company learned with GPT-3.5. Details on the precise training methodology, model size and hardware used for this GPT iteration were not released to the public as part of the announcement.
“GPT-4 exhibits human-level performance on various professional and academic benchmarks”.
Despite admitting that GPT-4 is still an imperfect AI and stressing only ‘subtle’ differences as compared to GPT-3.5 on some tasks, the company noted that the new AI “exhibits human-level performance on various professional and academic benchmarks”. OpenAI stated that they have also sought to address many of the criticisms of previous GPT iterations, namely non-factual and harmful responses that have raised concerns regarding risk and misinformation. Due to improvements in safety protocols, the company claims GPT-4 is now 82% less likely to return ‘disallowed’ content and is 40% more likely to provide factual responses when compared to older AI models.
OpenAI has partnered with several companies who have already integrated the new model into their products, including Stripe and Duolingo. Microsoft has confirmed that its recently released new Bing search experience is also running on GPT-4. With consumer access available via a paid subscription to ChatGPT Plus and developers able to utilise the new AI via an API, it won’t be long before we see GPT-4 popping up in many different applications and industries.
This raises the question: could an artificial brain be used to help a human mind?
Exploring AI’s mental health applications
The mental health industry is in crisis. At its core, the issue is about supply vs demand. People with mental health issues vastly outnumber the smaller contingent of counsellors, therapists, psychologists, psychiatrists and other mental health professionals. Many are overworked or unable to service those in remote and rural locations. This has led to long wait times, a workforce that is exhausted and inequity of access for potential users of the system.
Governments are under pressure to increase funding for mental health services, but that is a seed that may not bear fruit for many years, as incoming staff require education and training that are costly in both resources and time. Experts have advocated for industry reforms, asking for all areas to be reviewed in the hopes of improving service delivery and accessibility, yet this too will be a lengthy undertaking.
Unfortunately, for the estimated one in five people who are currently dealing with a mental health disorder, the time for change was yesterday. Rates of anxiety, depression and other disorders are increasing each year, and any proposed solution needs to bring about not only long-term impacts, but short-term results as well.
With the recent advances in artificial intelligence, could it be leveraged to help the struggling mental health industry whilst we wait for systemic change to take effect?
Therapy, counselling and clinical treatment all require the uniquely human elements that are empathy and insight.
It is a tantalising prospect, AI as a means to treat mental illness, yet it is one fraught with risk, something that would be unethical to release upon our most fragile population. We are not yet (and may never be) at the stage where we can swap out a mental health professional for a synthetic stand-in. Therapy, counselling and clinical treatment all require the uniquely human elements that are empathy and insight. This is truer now than ever, thanks to the industry’s shift in recent years away from the sterile and questionable practice of psychoanalysis toward a Rogerian, client-centered approach to psychology.
Another major issue to consider is the fact that AI models are not yet completely trustworthy or credible, despite their ability to provide responses with a high degree of confidence. This phenomenon, known as AI “hallucination”, occurs when a model generates a response to a query that is not covered by its training data. At best, the response is factually incorrect; at worst, it is harmful to the user. OpenAI admitted that GPT-4 is “still not fully reliable” and that “great care should be taken when using language model outputs, particularly in high stakes contexts”. And what could be more high stakes than the health and wellbeing of a person afflicted with a mental health issue?
Let AI do what it does best
Instead of looking to AI as a means to bolster a struggling workforce, or to replace exhausted clinicians, why not use it as another tool to help ease an industry that is already spread far too thin? A predictive-text language model like GPT might fail at therapy, but could it find a more appropriate application as a triage mechanism?
For instance, depending on the first point of contact, people seeking mental health treatment are usually assessed to some degree by a local doctor, and often again by the clinician when referred. This takes the form of a mental status examination, where potential patients are assessed on their behaviour, emotional state and other key factors that can help guide diagnosis and subsequent treatment.
Here, AI could be utilised to help take this administrative burden off overworked health professionals, by assisting patients in completing a portion of the exam at home as a self-report measure. This could also aid in reducing some of the stigma associated with seeking mental health treatment, by ensuring the first steps on the path to treatment are anonymous and completed in a less anxiety-producing environment than a doctor’s office. A follow-up or final assessment by the doctor or mental health clinician would still be required to complete the process, but this could already produce time-saving benefits and help ensure a greater equity of access for those in need of mental health services.
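As a purely illustrative sketch of what the self-report portion might look like, the snippet below scores a short questionnaire in the style of the PHQ-9, a widely used depression screening instrument. The idea of using this particular instrument, and the function names here, are assumptions for the sake of the example, not anything OpenAI or the proposal above specifies; a conversational AI could gather the answers, while the scoring itself is simple arithmetic.

```python
# Illustrative triage sketch: scoring a PHQ-9-style self-report questionnaire.
# The PHQ-9 has nine items, each answered from 0 ("not at all") to 3
# ("nearly every day"); totals are conventionally banded into severity
# categories that a clinician would then review.

def phq9_severity(answers: list[int]) -> str:
    """Map nine 0-3 item scores to a conventional severity band."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("Expected nine answers, each scored 0-3")
    total = sum(answers)
    if total <= 4:
        return "minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    if total <= 19:
        return "moderately severe"
    return "severe"

print(phq9_severity([1, 1, 0, 2, 1, 0, 1, 0, 0]))  # → mild
```

Crucially, a tool like this would only pre-fill part of the examination; the banding exists to prioritise follow-up, not to diagnose.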
Another application could see AI being used to cross-reference existing patient records to help detect potential risks for mental health issues. These flags could include biological markers, significant life events or other metrics that could serve as prompts to enable preventative measures and further ease the mental health system.
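A minimal sketch of such record screening might look like the following. The record fields and flag rules are invented purely for demonstration; a real system would rely on clinically validated criteria and operate under strict privacy controls.

```python
# Illustrative sketch of cross-referencing patient records for follow-up flags.
# Each rule pairs a label with a predicate over a record dictionary; the fields
# and thresholds are hypothetical examples, not clinical criteria.

RISK_RULES = [
    ("recent_bereavement", lambda r: r.get("recent_bereavement", False)),
    ("chronic_pain", lambda r: r.get("chronic_pain", False)),
    ("prior_mh_referral", lambda r: r.get("prior_referrals", 0) > 0),
]

def flag_patient(record: dict) -> list[str]:
    """Return the names of every rule this patient record triggers."""
    return [name for name, rule in RISK_RULES if rule(record)]

patients = [
    {"id": 1, "recent_bereavement": True, "prior_referrals": 2},
    {"id": 2, "chronic_pain": False},
]
for p in patients:
    flags = flag_patient(p)
    if flags:
        print(f"patient {p['id']}: review suggested ({', '.join(flags)})")
```

The output is a prompt for a human professional to review, never an automated intervention, which keeps the clinician in the loop while the screening itself scales cheaply.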
It’s also easy to imagine an AI-powered ecosystem of apps and devices, working in harmony to monitor and maintain a user’s mental health through biometric feedback, sentiment analysis and reassuring, chat-like dialogue. Though admittedly, one person’s euphoric utopia is another’s privacy nightmare – a balancing act between sedation, regulation and potential misuse that no tech entity has figured out just yet.
With the evolution of AI, we are experiencing the beginning of a new technological age, one that could be harnessed to power the mental health industry through the plight of today and into a better tomorrow for all. Only time will tell how these AI tools will be utilised, but perhaps they could be just the solution we are looking for.
Remember, help is available wherever you are and for whatever is currently troubling you. Check with your local doctor for a referral to a mental health professional or consult the appropriate health service in your country for more help and assistance. There are also many comprehensive online and phone-based resources available across the world that can help you get back to living a better quality of life.
For immediate help, please consult the following resources:
Online & Telehealth Services:
- BetterHelp (Global)
- Beyond Blue (Australia)
Global Hotlines:
- Mental Health Hotline (US) – 866-903-3787
- Lifeline (New Zealand) – 0800 543 354
- Lifeline (Australia) – 13 11 14