Two reports crossed my feeds recently, each concerning the intersection of automation and humans. Taken together, they give insights into some ways education and automation may interact.
I’m not talking about Google’s spectacular triumph at having an AI win at Go (seriously, check it out, if you haven’t already). These are two different if related stories.
First, here’s an account of an Israeli-American robot designed to teach children. Tega has several key features worth our notice. It has strong personalization, which connects with the recent rise in personalized learning. Tega also integrates with mobile phones (presumably parents’, since the students are very young, 3-5 years). Most compelling, the robot has an emotional function, a form of “affective computing,” where the bot identifies learners’ moods and reacts.
The system began by mirroring the emotional response of students – getting excited when they were excited, and distracted when the students lost focus – which educational theory suggests is a successful approach. However, it went further and tracked the impact of each of these cues on the student.
Over time, it learned how the cues influenced a student’s engagement, happiness and learning successes. As the sessions continued, it ceased to simply mirror the child’s mood and began to personalize its responses in a way that would optimize each student’s experience and achievement.
Additionally, while Tega personalized learning, it worked with a group of 38 learners in Boston, home of the Atlas robot.
Second, a new survey of American attitudes towards automation revealed some fascinating developments. We are apparently in a transition stage, where people anticipate widespread replacement of workers by robots… except for their own jobs.
Fully 65% of Americans expect that within 50 years robots and computers will “definitely” or “probably” do much of the work currently done by humans. Yet even as many Americans expect that machines will take over a great deal of human employment, an even larger share (80%) expect that their own jobs or professions will remain largely unchanged and exist in their current forms 50 years from now.
Those attitudes have an interesting demographic shape. People who are older, less educated, earn less money, or work in the business world are more likely to expect widespread automation than younger, more educated, wealthier people “who work in the government, nonprofit or education sectors”. I wonder how this may play out politically. For example, will Trump start talking about the specter of automation? Will Clinton avoid the topic?
A related political angle in the Pew report is that automation, while exciting, doesn’t loom largest as a job threat in respondents’ minds. Instead, the more popular danger is… old-fashioned other humans, especially bad managers and other workers.
[J]ust 11% of workers are at least somewhat concerned that they might lose their jobs because their employer replaces human workers with machines or computer programs. On the other hand, roughly one-in-five express concern that they might lose their jobs because their employer finds other (human) workers to perform their jobs for less money or because their overall industry workforce is shrinking.
Additionally, people who work in primarily manual jobs are more concerned about automation than those who work in non-physically demanding occupations.
What can we learn from these two stories?
- We are in a transition phase, between a time when widespread automation was merely science fictional, and a future when it simply triumphs. In such a moment we should expect contradictions, shifting alliances, panics, bursts of nostalgia, and visionary beliefs.
- Emotional or affective computing has huge, yet almost unthought-of potential. Consider: when will we have robots (or whichever hardware-software combination you like) that are better than some humans at assessing and reacting to human emotions? Then, a short time later, when will such machines be better than most humans? What will that mean for professions where emotional interaction matters a great deal? A little further: when will it be considered unethical to not use such machines?
- We are still not seeing many signs of new jobs emerging in quantities to replace those outmoded by automation.
- Notice the lack of robot-fear among children. Automation may have huge demographic divides. If Bruce Sterling is right, that “the mid-[21st]century will be about ‘old people in big cities who are afraid of the sky'”, then those fearful aged urbanites may also feel deep anxieties about kids and their robot friends.
- Neal Stephenson’s 1995 imagining of a “Young Ladies’ Illustrated Primer” still seems on target, or at least as a target.
- Timelines: last month’s Horizon Report put affective computing at 4-5 years out. Let’s consider that seriously. Imagine, for example, the Tega children then, being between 7 and 10 years old. Think about upper primary and early secondary school students having access to – indeed, coming to expect – emotionally responsive computing and robots. How does that change K-12, pedagogically? How should higher ed prepare?

But Pew has a different timeline, far more conservative, looking ahead 50 years to 2066, more than a decade past Bruce Sterling’s mid-century. Instead of automation occurring within this generation, Pew – and the majority of their respondents – sees things settling into place two generations on. That gives societies, and educators, far more time to prepare (or to just retire and avoid the whole thing). It also takes some urgency out of the issue, especially for people who have a hard time planning more than ten years ahead.