With this post we conclude our reading of Cathy O’Neil’s Weapons of Math Destruction. (If you’d like to catch up with the reading schedule, click here. All posts for this reading, including the schedule one, are grouped here.)
Here I’ll summarize this week’s chapters, then offer some discussion questions.
But first, checking in on fellow readers’ reactions: Jason Green posted his responses to part 4.
Meanwhile, author O’Neil has a new New York Times piece on academia and automation, urging researchers to work on the problems caused by algorithms. One Princeton professor pushes back, as does a research team. A Google Doc sprang up to document academic programs studying data. The London School of Economics has at least two blog posts and one commission on the topic. (thanks to George Station for these last links)
Chapter 10, “The Targeted Citizen: Civic Life”
Here the book turns to politics and the way algorithms might reshape it. O’Neil begins with two recent studies, each of which suggested Facebook users’ attitudes can be altered by what they see on that network.
Next, the chapter tours the history of recent American presidential campaigns and their use of big data, starting with direct mail (187), and touching on the Romney, Obama, Hillary Clinton, and Cruz runs. O’Neil notes Cambridge Analytica’s role early on (191; she references this 2015 Guardian article).
The chapter concludes by looking at microtargeting in anti-abortion and other campaigns, citing the research of Zeynep Tufekci. It finds American Republicans more interested in, and susceptible to, microtargeting (194) and concludes that the practice constitutes a very dangerous WMD. “It is vast, opaque, and unaccountable.” (198) It also separates people civically, as “it will become harder to access the political messages our neighbors are seeing – and as a result, to understand why they believe what they do.” (195)
Interestingly, in the 2016 edition of this book O’Neil decided not to call Facebook and Google WMDs, in the political context:
I wouldn’t yet call Facebook or Google’s algorithms political WMDs, because I have no evidence that the companies are using their networks to cause harm. Still, the potential for abuse is vast. (185)
Conclusion
Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. (204)
This is the first edition’s conclusion, which is revised to an extent in the afterword. Its essence is a call for federal regulation of algorithms.
O’Neil begins by noting that WMDs are bad enough in isolation, but that their synergies can make things worse. “The problem is that they’re feeding on each other.” (199) They may have some benefits, but the poorest will suffer the worst (202).
How could regulation work? O’Neil proposes a data scientist’s ethical code, akin to medical doctors’ Hippocratic Oath (205). She goes on to describe how regulation would have to carefully measure WMD impact, how auditing movements could work, and why we should simply ditch those algorithms that can’t be fixed:
The only solution in such a case is to ditch the unfair system. Forget, at least for a decade or two, about building tools to measure the effectiveness of a teacher. (208)
Others should be “dumb[ed] down.” (210) Some positive ones might work, even in education (216). Meanwhile, helpful actors like ProPublica can use algorithms to expose and oppose WMDs (211). Ultimately, black box algorithms should be opened to the public (214).
Afterword
(This is apparently new for the 2017 paperback edition)
We begin with the 2016 election and the role algorithms played in it, from polling to Facebook. ProPublica again appears in a heroic role, exposing another WMD in the justice system (223-4). O’Neil is skeptical about polling, criticizing it for generating bad readings, and thinks its importance will dwindle in the wake of Trump’s win (221-2).
The author also offers a modification to her previous work, suggesting that we understand algorithms by “identify[ing] the stakeholders and weigh[ing] their relative harms.” (225) That means balancing costs and benefits across society, such as comparing people protected by software versus those harmed. One example is the state of Michigan, whose automated unemployment insurance fraud detection system falsely accused 20,000 people of fraud, injuring their reputations, along with their ability to get jobs (226-7). O’Neil also recommends that we examine not only data processing but data collection (229).
Discussion questions
- How can political campaigns best use big data and data analytics without causing harm?
- Which educational uses of algorithms actually benefit learners?
- Which actors (agencies, nonprofits, companies, scholars) are best placed to help address the problems O’Neil identifies?
- Are there themes in the book we haven’t addressed, that we should?
And that brings us to the end of this reading. If you’d like to look back over our earlier discussions of Weapons, click here. If you’d like to learn more about our book club, including our previous readings, click here.