The humanitarian case for automation: the case of suicide prevention

One of the arguments driving the spread of automation is, in some ways, the polar opposite of cold equations.  Humanitarian reasons can encourage us to automate functions currently performed by humans.  A good example is the potential of self-driving cars to reduce automotive injuries and deaths.  I suspect that potential is the biggest reason for autonomous cars’ rapid rise.

Here’s another one, still in the early research days.  What if we could prevent suicides through the use of artificial intelligence?

That’s the premise of a new study.  All we can see on the open web is the abstract, so let me excerpt the key part:

[W]e used machine learning to measure and fuse two classes of suicidal thought markers: verbal and nonverbal. Machine learning algorithms were used with the subjects’ words and vocal characteristics to classify 379 subjects recruited from two academic medical centers and a rural community hospital into one of three groups: suicidal, mentally ill but not suicidal, or controls. By combining linguistic and acoustic characteristics, subjects could be classified into one of the three groups with up to 85% accuracy. The results provide insight into how advanced technology can be used for suicide assessment and prevention.

“These computational approaches may provide novel opportunities for large-scale innovations in suicidal care.”
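
To make the approach concrete, here is a minimal sketch of the kind of fusion classifier the abstract describes: verbal features drawn from a subject’s words combined with nonverbal acoustic features, mapped to the three groups.  This is not the authors’ actual pipeline; the toy transcripts, the two acoustic measures, and the model choice below are illustrative assumptions only.

```python
# Minimal sketch of a verbal + nonverbal "fusion" classifier, in the spirit of
# the abstract above.  Not the authors' pipeline: the transcripts, the two
# acoustic summary features, and the model are placeholder assumptions.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical interview transcripts plus per-subject acoustic summaries
# (e.g. pause length, pitch variability) -- stand-ins, not real data.
transcripts = [
    "i just feel like there is no point anymore",
    "work has been stressful but i am managing",
    "i have been sleeping badly and feel hopeless",
    "things are fine, mostly routine lately",
] * 10
acoustic = np.random.RandomState(0).rand(len(transcripts), 2)  # [pause, pitch_var]
labels = np.array(["suicidal", "control", "mentally_ill", "control"] * 10)

# Verbal markers: TF-IDF over what the subject said.
text_features = TfidfVectorizer().fit_transform(transcripts)

# Fuse verbal and nonverbal features into one matrix, then classify into the
# three groups with a simple multinomial model.
fused = hstack([text_features, csr_matrix(acoustic)]).tocsr()
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, fused, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```

Anything approaching the reported 85% would, of course, require real clinical data, far richer acoustic features, and validation well beyond a toy like this.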

It’s a fascinating idea on multiple levels.  In the paper itself (thank you, online friends) we learn that such software uses “state analyses [that] measure dynamic characteristics like verbal and nonverbal communication, termed ‘thought markers’”.  Pestian, Sorter, Connolly, Cohen et al. cite an awful lot of preexisting research already identifying suicidal markers in

retrospective suicide notes, newsgroups, and social media (Gomez, 2014; Huang, Goh, & Liew, 2007; Matykiewicz, Duch, & Pestian, 2009). Jashinsky et al. (2015) used multiple annotators to identify the risk of suicide from the keywords and phrases (interrater reliability = .79) in geographically based tweets…  Li, Ng, Chau, Wong, and Yip (2013) presented a framework using machine learning to identify individuals expressing suicidal thoughts in web forums; Zhang et al. (2015) used microblog data to build machine learning models that identified suicidal bloggers with approximately 90% accuracy.

And there is more.

Consider this as a thought experiment.  If their software hits 85% accuracy, and improves from there, how many lives could it save in a mental health care facility?  I’m assuming the program would work on spoken-word texts (recordings or transcripts of conversations) and whatever writing patients produce.  Could this kind of software reduce injuries, emotional tolls, and deaths?

Extend the thought experiment to schools.  In an educational setting, we could imagine running the software over a school’s LMS posts.  Could that detect and enable the prevention of some self-harm and suicides?
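
If a school tried this, the deployment might look less like a diagnosis engine and more like a triage queue: scan new posts, score each one, and surface anything above a threshold to a counselor rather than acting automatically.  The sketch below assumes a hypothetical LMS post structure and uses a trivial keyword stand-in where a trained model would go.

```python
# Sketch of the school / LMS scenario: scan new discussion posts, score each
# one with a risk model, and queue anything above a threshold for review by a
# counselor rather than acting automatically.  The Post structure is an
# assumption, and risk_score() is a trivial keyword stand-in, not a real
# trained classifier.
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author_id: str
    text: str

def risk_score(text: str) -> float:
    """Placeholder scorer; a real deployment would call a trained model."""
    markers = ("hopeless", "no point", "end it all", "can't go on")
    return sum(m in text.lower() for m in markers) / len(markers)

def flag_for_review(posts: List[Post], threshold: float = 0.25) -> List[Post]:
    # Surface posts to a human reviewer; never auto-intervene on a score alone.
    return [p for p in posts if risk_score(p.text) >= threshold]

posts = [
    Post("student_1", "Does anyone have notes from Tuesday's lecture?"),
    Post("student_2", "I feel hopeless lately, like there's no point in any of this."),
]
for p in flag_for_review(posts):
    print("needs human review:", p.author_id)
```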

Let’s go further still, beyond institutional settings.  Could hosts of discussion venues for other relatively suicide-prone populations – say, veterans, or anorexia survivors – do the same?

Or assume (for the sake of argument) that this software is generally reliable across populations who aren’t tied together by a particular purpose or identity.  That’ll take some invention and development.  Could we deploy the resulting tools across a social network, like Pinterest, or Twitter, or Facebook?  Facebook is already doing something along these lines.

The privacy concerns are immense, obviously.  The invasiveness of this chills me.  But (here I ventriloquize the proponents) this is about saving human lives from the devastation of suicide, a problem already widely recognized.  Moreover, the United States, Great Britain, France, and other nations have already established that they are willing to cut back on civil liberties in pursuit of safety from terrorism.  Addressing suicide could be seen as similar – or even as a less controversial issue.

At the same time, companies of all sorts, from technology providers to medical firms, are already invested in, shall we say, a compromised approach to personal privacy.  And users – us – have just lived through a generation in which privacy eroded steadily and openly.  Perhaps if this suicide prevention software becomes ready to deploy, we’ll hear talk inspired by that transformation of privacy: of “necessary tradeoffs”, or riffs on Franklin’s famous observation along the lines of “people who insist on privacy should expect more suicides”.  “Are you anti-machine or pro-death?”

Suicide prevention already entails detailed examination of a person’s expressions and psyche (for example).  Switching over to software only becomes a privacy problem, it seems, once the application runs over people not currently in the mental health system.  Yet the NSA under two putatively different presidential administrations has successfully established the right to massively surveil entire populations, regardless of their status.

Would organizations and populations opposed to suicide, such as some Christian churches or Islamic groups, add their support to win further adoption of the suicide detection program?  Indeed, should we expect to see theocratic states (Saudi Arabia) deploy this rapidly?

We could turn the question around.  Would it become a problem for a social media hosting service *not* to deploy suicide-detection software?  Would a school or support group get dinged on fees by insurance companies, or sued by family members of users who kill themselves, or sanctioned by governments?

All right, let me pull back from the thought experiment for a moment and return to the mere madness of late 2016.  If this research leads to deployable software, we should talk about the many problems it raises, beyond the continuing decline of privacy.  Think about the many ways false positives can go wrong.  Consider how people and institutions could abuse it based on race, class, gender, region (note that the study involved a rural population, specifically an “Appalachian community hospital in southern West Virginia”) and other biases, as we’ve already seen with predictive policing.  And I haven’t begun to discuss the many ways interventions could do harm to a great many people.
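
The arithmetic of false positives alone is sobering.  Here is a back-of-the-envelope calculation: the 85% figure comes from the study, but the 2% base rate is an assumption I have chosen only to show what happens when a screening tool meets a general population in which genuine risk is rare.

```python
# Back-of-the-envelope illustration of the false-positive problem.  The 85%
# figure is from the study; the 2% base rate is an assumption chosen only to
# show how low prevalence inflates false alarms in a general population.
population = 100_000
base_rate = 0.02            # assumed share actually at risk (illustrative)
sensitivity = 0.85          # treat the reported 85% as the true-positive rate
specificity = 0.85          # and, generously, also as the true-negative rate

at_risk = population * base_rate
not_at_risk = population - at_risk
true_positives = at_risk * sensitivity
false_positives = not_at_risk * (1 - specificity)

precision = true_positives / (true_positives + false_positives)
print(f"total flags: {true_positives + false_positives:.0f}")
print(f"share of flags that are actually at risk: {precision:.1%}")
```

Even under those generous assumptions, roughly nine out of ten flags would point at people who are not actually at risk.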

Suicide prevention is just one aspect of human life potentially facing automation.  On its own terms we need to explore it with care and openness.  But it’s also useful as an instance of the broader debate over automation, when such projects appear not for reasons of efficiency or market disruption, but for humanitarian purposes.

(link via HackerNews; automation photo by gwynydd michael)


