Could AI that analyzes how people use their phones help prevent suicide?
Originally published under the title "AI algorithms to prevent suicide gain traction"
Published in Nature News on December 12, 2017
Original article by Sara Reardon
Facebook and a number of other companies are trying to detect online behavior related to self-harm.
A growing number of researchers and technology companies are mining social media for signs of suicide risk. They are building on evidence that the language people use when posting on social media, and even the way they unconsciously handle their smartphones, can signal mental health problems.
Suicide rates are rising among Americans ages 15-34.
Credit: Justin Sullivan/Getty
Commercial companies are beginning to test programs that can automatically identify such risk signals. Mindstrong, an app developer, is building and testing machine-learning algorithms that correlate users' language and behavior, such as how quickly they scroll on a smartphone, with symptoms of depression and other psychological disorders.
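As an illustration of the kind of supervised model this implies, the sketch below fits a simple classifier to hypothetical, passively collected behavioral features. The feature names, data, and model choice are assumptions made for illustration, not Mindstrong's actual system.

```python
# Illustrative sketch only -- not Mindstrong's actual model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: each row is a user; columns are passively collected
# behavioral features [scroll_speed, typing_latency, night_use_hours];
# y marks whether a clinician noted depressive symptoms.
X = rng.normal(size=(500, 3))
y = (X @ np.array([0.8, 1.2, 0.5]) + rng.normal(scale=1.0, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("feature weights:", model.coef_)  # which behaviors correlate with the label
```

In practice the hard part is not the classifier but collecting labeled outcomes and behavioral features reliably enough to train it.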
Next year, the company plans to extend this research to behaviors associated with suicide risk, which could help health care providers detect a patient's intent to self-harm sooner. In late November, Facebook announced that it was rolling out its own automated suicide-prevention tools across most of the world, and tech giants such as Apple and Google have invested in similar projects.
Some mental health experts hope these tools will help reduce the number of people who attempt suicide. The US suicide rate has been climbing in recent years, and suicide is now the second leading cause of death among people aged 15-34. According to Scottye Cash, a social-work researcher at Ohio State University, young people are more likely to turn to social media for help than to see a therapist or call a crisis hotline.
Cash and other experts have raised concerns about privacy and the limited transparency of commercial companies. More fundamentally, there is no evidence yet that these digital intervention tools work. Megan Moreno, a pediatrician at the University of Wisconsin, says Facebook users have a right to know how their information is being used and to what end: "How effective are these [tools]? Are they saving lives or harming them?"
Machine intervention
Identifying people who are suicidal, at least in the short term, is not easy, which makes suicide difficult to prevent, says Matthew Nock, a psychologist at Harvard University. Most people who intend to take their own lives deny it when asked by mental health professionals, he says. But social media provides a real-time outlet for emotion. "A person's every word and action on social media can be used for research," Nock says.
Bob Filbin, chief data scientist at the Crisis Text Line, a New York-based nonprofit, says machine-learning algorithms can find patterns that humans might overlook, helping researchers and counselors distinguish whether a post or message expresses a joke, ordinary anxiety, or genuine suicidal intent.
The Crisis Text Line lets people reach a counselor by text message. By analyzing 54 million messages sent through the service, Filbin and his colleagues found that people who were considering ending their lives rarely used the word "suicide"; words such as "ibuprofen" (a common over-the-counter painkiller) and "bridge" were better predictors of risk. Using these analyses, Filbin says, the service's counselors can typically determine within three messages whether emergency responders need to be alerted.
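To make the idea of word-level risk signals concrete, here is a minimal, hypothetical sketch of ranking words by how strongly they distinguish high-risk conversations from others, using smoothed log-odds. The toy data and the exact method are assumptions for illustration, not the Crisis Text Line's published analysis.

```python
# Illustrative sketch: rank words by how much more often they appear in
# high-risk conversations than in others (add-alpha smoothed log-odds).
import math
from collections import Counter

# Hypothetical toy data; a real analysis would use millions of labeled messages.
high_risk = ["i took a whole bottle of ibuprofen", "standing on the bridge now"]
other     = ["i feel anxious about exams", "my boyfriend and i had a fight"]

def word_counts(msgs):
    counts = Counter()
    for m in msgs:
        counts.update(m.lower().split())
    return counts

hi, lo = word_counts(high_risk), word_counts(other)
vocab = set(hi) | set(lo)
n_hi, n_lo = sum(hi.values()), sum(lo.values())

def log_odds(word, alpha=1.0):
    # smoothing keeps unseen words from producing infinities
    p_hi = (hi[word] + alpha) / (n_hi + alpha * len(vocab))
    p_lo = (lo[word] + alpha) / (n_lo + alpha * len(vocab))
    return math.log(p_hi / p_lo)

ranked = sorted(vocab, key=log_odds, reverse=True)
print(ranked[:5])  # words most associated with high-risk conversations
```

With enough data, words like "ibuprofen" and "bridge" rise to the top of such a ranking even though "suicide" itself rarely appears.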
Thomas Insel, chairman of Mindstrong, believes that "passive" data collected from electronic devices is more revealing than questionnaires that users fill out themselves. Mental health providers can install Mindstrong's app on a patient's phone, where it collects data in the background. Once the app has established a profile of a user's typical behavior, Insel says, it can detect unusual changes. The company has partnered with a number of health care companies so that users can be steered toward medical help when the app detects an anomaly.
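The "personal baseline plus anomaly" idea can be sketched very simply: compare each day's passively collected metric against the same user's recent history and flag large deviations. The metric, window length, and threshold below are illustrative assumptions, not Mindstrong's design.

```python
# Minimal sketch of per-user anomaly detection on a passively collected signal.
import numpy as np

def flag_anomalies(daily_metric, baseline_days=30, z_threshold=3.0):
    """Flag days whose value deviates sharply from the user's own history.

    daily_metric: one value per day, e.g. total typing time or number of
    outgoing messages (hypothetical signals).
    """
    daily_metric = np.asarray(daily_metric, dtype=float)
    flags = []
    for day in range(baseline_days, len(daily_metric)):
        window = daily_metric[day - baseline_days:day]
        mu, sigma = window.mean(), window.std()
        if sigma > 0 and abs(daily_metric[day] - mu) / sigma > z_threshold:
            flags.append(day)  # large departure from this user's own norm
    return flags

# Hypothetical usage: 60 ordinary days, then a sudden drop in activity.
usage = np.concatenate([np.random.default_rng(1).normal(100, 5, 60), [40]])
print(flag_anomalies(usage))  # -> [60]
```

The appeal of this framing is that each user is compared only with themselves, so no population-wide definition of "normal" behavior is needed.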
Insel argues, "I can't say whether all the other software that requires you to open an app to work is useful when something really big is going on."
Does it work?
A more fundamental question is how and when to intervene. Nock notes that false positives are likely to be common, so doctors and companies that use technology to detect suicide risk must judge how accurate a signal is before rushing in to intervene.
Moreno says there is little evidence that services such as suicide hotlines actually save lives, and interventions by others can make suicidal people feel more vulnerable, sometimes achieving the opposite of what was intended. She and others have found that when friends report posts suggesting suicidal intent, the posters sometimes block those friends, making them even less likely to raise the alarm in the future.
Facebook's newly launched suicide-prevention program relies heavily on reports from users, as well as a proprietary algorithm that scans posts for warning signs. Facebook then reaches out to the at-risk user or passes the case to a human reviewer, who decides whether to send the user a link to an intervention service such as the Crisis Text Line or to notify emergency responders.
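Purely as a hypothetical illustration of such a triage step (Facebook has not disclosed its actual logic or thresholds), a reviewer-facing routing rule might combine a user report with an algorithmic risk score:

```python
# Hypothetical triage sketch for the workflow described above; the fields,
# thresholds, and outcomes are assumptions, not Facebook's system.
from dataclasses import dataclass

@dataclass
class FlaggedPost:
    text: str
    user_reported: bool   # a friend reported the post
    model_score: float    # risk score from a text classifier, 0..1

def route(post: FlaggedPost) -> str:
    """Decide what a human reviewer sees first (assumed thresholds)."""
    if post.user_reported and post.model_score > 0.9:
        return "escalate: consider contacting emergency responders"
    if post.user_reported or post.model_score > 0.7:
        return "send resources: e.g. a link to the Crisis Text Line"
    return "monitor: no action"

print(route(FlaggedPost("i can't do this anymore", True, 0.95)))
```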
But Facebook has not provided details of how the algorithm or the review process works, and a company spokesperson would not say whether Facebook will track users to evaluate how effective the algorithm or the human interventions are. In a public statement, the spokesperson said the algorithm was "developed in collaboration with a number of experts" and that users can opt out of the service.
Facebook's reticence has some researchers worried. "Facebook has a responsibility to make decisions based on evidence," Cash argues. Yet the company has provided little information that would allow outside experts to evaluate its process.
Even so, Insel credits Facebook for trying: "It's important to examine Facebook's project in the larger context; after all, many current attempts to prevent suicide are failures."