One of the arguments driving the spread of automation is, in some ways, the polar opposite of cold equations. Humanitarian reasons can encourage us to automate functions currently performed by humans. A good example of this is the potential of self-driving cars to reduce automotive-related injuries and deaths. I suspect that potential is the biggest reason for autonomous cars’ rapid rise.
Here’s another one, still in the early research days. What if we could prevent suicides through the use of artificial intelligence?
That’s the premise of a new study. All we can see in the open web is the abstract, so let me excerpt the key part:
[W]e used machine learning to measure and fuse two classes of suicidal thought markers: verbal and nonverbal. Machine learning algorithms were used with the subjects’ words and vocal characteristics to classify 379 subjects recruited from two academic medical centers and a rural community hospital into one of three groups: suicidal, mentally ill but not suicidal, or controls. By combining linguistic and acoustic characteristics, subjects could be classified into one of the three groups with up to 85% accuracy. The results provide insight into how advanced technology can be used for suicide assessment and prevention.
“These computational approaches may provide novel opportunities for large-scale innovations in suicidal care.”
It’s a fascinating idea on multiple levels. In the paper itself (thank you, online friends) we learn that such software uses “state analyses [that] measure dynamic characteristics like verbal and nonverbal communication, termed ‘thought markers’”. Pestian, Sorter, Connolly, Cohen, et al. cite a great deal of preexisting research already identifying suicidal markers in
retrospective suicide notes, newsgroups, and social media (Gomez, 2014; Huang, Goh, & Liew, 2007; Matykiewicz, Duch, & Pestian, 2009). Jashinsky et al. (2015) used multiple annotators to identify the risk of suicide from the keywords and phrases (interrater reliability = .79) in geographically based tweets… Li, Ng, Chau, Wong, and Yip (2013) presented a framework using machine learning to identify individuals expressing suicidal thoughts in web forums; Zhang et al. (2015) used microblog data to build machine learning models that identified suicidal bloggers with approximately 90% accuracy.
Consider this as a thought experiment. If, after some improvement, their software exceeds 85% accuracy, how many lives could it save in a mental health care facility? I’m assuming the program would work on spoken-word texts (recordings or transcripts of conversation) and whatever writing patients produce. Could this kind of software reduce injuries, emotional tolls, and deaths?
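To make the thought experiment concrete, here is a minimal sketch of what the linguistic half of such a classifier might look like. This is not the study’s actual pipeline (which also fuses acoustic features); it is a generic three-group text classifier, and every transcript and label below is invented for illustration.

```python
# Hypothetical sketch -- NOT the Pestian et al. method.
# Shape of the idea: vectorize transcript text into word-level
# features, then train a model to assign one of three groups.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled transcripts (entirely invented).
texts = [
    "I can't see any way forward anymore",
    "everything feels hopeless and final",
    "the voices have been louder this week",
    "I have trouble sleeping and concentrating",
    "work has been busy but I'm doing fine",
    "we had a great weekend with the family",
]
labels = ["suicidal", "suicidal",
          "mentally_ill", "mentally_ill",
          "control", "control"]

# Pipeline: word/bigram TF-IDF features -> multinomial logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Classify a new (also invented) utterance into one of the three groups.
prediction = model.predict(["just a busy week, otherwise doing fine"])[0]
print(prediction)
```

A real system would need far more data, clinical validation, and the nonverbal (acoustic) channel the paper emphasizes; the point here is only that the core classification machinery is well understood.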
Extend the thought experiment to schools. In an educational setting, we could imagine running the software over a school’s LMS posts. Could that detect and enable the prevention of some self-harm and suicides?
Let’s go further still, beyond institutional settings. Could hosts of discussion venues for other relatively suicide-prone populations – say, veterans, or anorexia survivors – do the same?
Or assume (for the sake of argument) that this software is generally reliable across populations who aren’t tied together by a particular purpose or identity. That’ll take some invention and development. Could we deploy the resulting tools across a social network, like Pinterest, or Twitter, or Facebook? Facebook is already doing something along these lines.