Over the decades, artificial intelligence (AI) has reached milestone after milestone in mimicking how a human speaks, acts, and even thinks. Now, it is being used to support people’s mental well-being. Does this mean that AI could eventually replace psychologists and guidance counselors? Can one entrust one’s mental health to this “non-person”? More importantly, is AI a growing threat in the field of mental healthcare?
These were the questions explored in Mind and Machine: The Emerging Role of Artificial Intelligence in Mental Health, a panel discussion held in Room 301 of Yuchengco Hall last October 25. Organized by the Office of Career and Counseling Services (OCCS) Mental Health Task Force, it served as its culminating event for Mental Health Month. The panel included Charibeth Cheng, associate dean of the College of Computer Studies; Remedios Moog, head of the University’s Guidance and Counseling Office, a unit under OCCS separate from its career services; and Ron Resurreccion, registered psychologist and associate dean of the College of Liberal Arts.
A mile-away emergence
To kick off the event, the panel tackled how artificial intelligence is used in mental healthcare. Among the panelists, Cheng believed that AI leans toward supportive care. Many apps and chatbots, she explained, provide around-the-clock support and guidance to mental health patients. She also pointed out that AI not only talks and interacts with its users but can also manage prescribed medications, diagnose conditions “faster and more efficiently” through its algorithms, and provide personalized treatment plans that take into account the symptoms and medical history of the user.
Likewise, Moog and Resurreccion agreed that AI tools can assist a counselor or a psychologist, though no form of AI can yet replace these professionals. “As far as I know, there is no psychologist AI. I think, right now, there are some projects, like the CCS students use to detect movements, to observe the person, and predict and annotate behavior…but the AI psychologist, I think I haven’t seen anything like that,” Resurreccion said.
Echoing this, Moog responded, “Yes, there is no counselor AI yet. But there are [AI-]powered chatbots that are used for a variety of purposes.” She then cited Kapwa and Lusog-Isip as examples of AI tools available in the Philippines.
Kapwa is a Twitter chatbot that aims to provide easy-to-access mental health services in the country. Meanwhile, Lusog-Isip is the first self-help mobile application culturally adapted for Filipinos, developed through a partnership between the United States Agency for International Development RenewHealth Project and the Department of Health.
Glitches and strong suits
When asked about their willingness to integrate AI in their line of work, Resurreccion and Moog answered positively, because to them, the advantages outweigh the possible disadvantages. For instance, with its accessibility and speed, AI could educate patients, assess their moods and behaviors, and transcribe conversations during consultations. However, Resurreccion and Moog highlighted the potential dangers of collecting sensitive information from AI users, including their mental health history and personal preferences. Both also expressed concern about the ethics of AI use.
Cheng reiterated this and additionally outlined steps for prospective developers seeking to build AI for a support role. First, one should identify which tasks of a mental health practitioner the AI can automate. Second, AI can be used to further improve access to mental health services. Lastly, AI frameworks must be designed to be human-centric so that the work becomes more collaborative.
Despite the advancements in AI technology, it still has its drawbacks. Moog advised the audience to duly educate themselves about AI and its capabilities, noting that the technology could encourage negative thinking patterns that leave individuals dependent on it. Another issue is AI’s lack of social intelligence, which prevents it from consistently providing helpful information to its users.
The privacy and security of the user are also at risk because AI tools usually collect large amounts of personal information. Elaborating on these security risks, Resurreccion stressed that AI should be used responsibly, reminding the audience that being mindful of what they ask of AI is crucial to safeguarding their privacy.
Cheng also brought attention to the lack of transparency in how the technology is developed, which makes it difficult to detect programming oversights and bias in the code. “Let’s say there’s an AI system trained to diagnose some disease. It will always end up with the same answer…In AI, it’s very consistent…but there is also a disadvantage for it. If it’s wrong, it is consistently wrong. If it’s biased, it is also consistently biased,” Cheng explained.
The final verdict
With the future in mind, the panelists were asked whether AI technology would remain necessary. Both Moog and Resurreccion affirmed its necessity in the years to come, believing that the technology could take up a support role. While Cheng did not directly answer the question, she concluded the panel discussion by emphasizing that AI is “a tool that should be used to augment the work of clinicians and not to replace them.”
Overall, AI is a valuable tool that, when used responsibly and transparently, can improve many lives by assisting and complementing human clinicians and by extending essential support, guidance, and aid to those currently underserved.
To harness its potential, professionals should approach AI with an open mind, focusing on its capacity to benefit their field. Nonetheless, they must remain vigilant about data privacy, bias, and transparency to use the technology ethically and effectively.