On June 16th I gave the closing keynote to the New Media Consortium’s annual conference. It was a big talk, with tons of images, ranting, and ideas crammed into a very busy hour.
It meant a great deal to me to address an organization which meant so much. I cut loose in this talk, making 95% of it new just for the occasion, taking a lot of risks and challenging the audience. I’d like to share recordings and material here for your use and/or feedback. So sit back and watch, listen, or read.
And here are the prepared remarks. I riffed on them at points, which you can see in the video above. I’ve added several of the images as I referred to them directly, plus a very short bibliography at the very end:
“It is a signal honor to address an organization – a community – that has meant so much to me for more than a decade. NMC is a source of inspiration, learning, challenges, and many friendships. In honor of the futures work long conducted by the NMC, allow me to take you on a futuring journey for the next hour.
Here’s my plan, what we’ll be exploring:
- Some quick introductory notes
- The short-term future
- Some medium-term futures
- Towards the longer term
- What to do
1. Introductory notes
I’m going to focus this talk on the ways technology might develop in the future. This entails a risk: technological determinism, the assumption that technological developments drive non-technological changes – for our purposes, to education and society. Think of how train tracks and rolling stock can enable yet constrain human actions. A related assumption: people will keep developing and playing with tech. More simply put: I’ll take the persistent drive for technological invention seriously.
I won’t be talking much about Black Swans, like a possible Singularity, or airborne Ebola, or a WWI-scale disaster, or everyone’s favorite, the zombie apocalypse. Also, I won’t dwell on most non-technological contexts (economics, policy, demographics), unusually for me.
Is the future we’re making a good one or a bad one? Americans like to see technologies and futures in terms of starkly opposed utopian and dystopian poles. I’d like to make things more nuanced, stretching futures across a utopia – reality – dystopia spectrum.
Two guides will help us forward, starting with history. We have a good sense, now, of how humans tend to create and react to new technologies, and we can extrapolate from that knowledge. Our second guide is science fiction, which informs much of today’s talk. Sf has been giving us visions of possible futures for more than a century, offering cognitive tools for imagining the future, and technologists and designers are increasingly influenced by what sf has already imagined. In short, if you’re not reading science fiction, you’re not ready for the rest of the 21st century.
2. Short term, to 2021
We are living through a remarkable time, when revolutions are rippling through traditional education. An unprecedented boom in human creativity thanks to the digital revolution is returning storytelling and story-sharing capabilities to people around the world. And powerful changes in economics, demographics, and globalization, not to mention technology, are reshaping education. Some of schooling as we know it might not survive the decade.
Technological development rushes on. VR is now in place, with applications in gaming, storytelling, and visualization. Watch the costs drop and accessibility rise. Content is starting to appear. AR is developing broadly, for basic visualizations across many different hardware platforms. What’s next? AR and VR connect and intertwine, as the digital and nondigital worlds are thoroughly interlaced. Think Mixed Reality. Think computing in space. Watch Microsoft Hololens and Magic Leap.
Meanwhile, 3d printing is growing rapidly. In education, we’ve seen it move from engineering to libraries. Think: 3d printing across the curriculum. 3d printing is also allied to new learning spaces. A DIY ethos contributes to the growth of Makerspaces and the Maker movement.
Those spaces and technologies link up with the often-heralded transition from consumption to co-creation and production, which continues. Think: student as producer, student as maker.
Meanwhile, hardware continues to shrink, as Moore’s Law keeps on going. For example, my alma mater, UM, produced a combination camera, data storage, and Wi-Fi connection the size of a grain of rice – last year. Let’s assume hardware keeps shrinking. This will let us embed hardware throughout our environment. It will let us do more with projected displays and flexible interfaces. Contact lenses as interfaces could well appear. Mark Weiser’s dreams of ubiquitous computing are coming true.
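As a back-of-the-envelope illustration of where that trend points – a sketch only, assuming Moore’s Law’s rough two-year doubling period holds, with a purely illustrative baseline:

```python
# Back-of-envelope Moore's Law extrapolation (illustrative only):
# assume a capability such as transistor density doubles every ~2 years.

def doublings(years, period=2.0):
    """How many doublings occur over `years` at one per `period` years."""
    return years / period

def project(baseline, years, period=2.0):
    """Project a quantity forward under exponential doubling."""
    return baseline * 2 ** doublings(years, period)

# A relative density of 1.0 in 2016 grows to roughly 5.7x by 2021.
growth_to_2021 = project(1.0, 2021 - 2016)
```

The same arithmetic, run in reverse, is why a rice-grain-sized device today hints at still smaller and cheaper ones within a few product cycles.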
One way of describing this world of small, embedded, invisible, and environmental hardware is the Internet of Things. This is already occurring through an enormous infrastructure build-out, including: migrating to the IPv6 internet protocol; developing new middleware and OSes; building out data ownership and control systems. This should lead us to rethink privacy, data ownership and control, safety tradeoffs, and the public/private dynamic.
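The scale argument behind that IPv6 migration is simple arithmetic; here is a quick sketch (the population figure is a rough assumption):

```python
# Why the IoT build-out leans on IPv6: sheer address-space arithmetic.
ipv4_addresses = 2 ** 32    # about 4.3 billion, effectively exhausted
ipv6_addresses = 2 ** 128   # about 3.4e38

world_population = 8_000_000_000  # rough assumption
addresses_per_person = ipv6_addresses // world_population
# Tens of octillions of addresses per person: room to give every
# embedded sensor, lens, and surface its own network identity.
```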
At a technical level, will we rethink what a file is? Imagine an ecosystem mostly composed of streams, not documents in directories; points and flows, not files.
Will there be hyperlinks in the internet of everything? What happens to the web in a world of ubiquitous, often invisible computing? There are many incentives not to develop the web. For example, mobile apps, streaming video, AAA video games, the LMS, and paywalls all offer alternatives to the open web of Sir Tim Berners-Lee’s invention. Perhaps the web of 2021 will become like US community TV, trawled by a few humans and increasing numbers of AIs. Or perhaps, as Kevin Kelly suggests, we’ll see the IoE hyperlinked and Googleable. Perhaps we’ll improve our ability to search and link across time, connecting to a site’s prior states, hyperlinking the emerging history of the web.
While we shrink some hardware devices, we send others into the air. Drones are changing public and private spaces, around the world. There are peaceful uses for delivery, photography, research, art. Some hobbyists have figured out how to add new devices to drones, such as shotguns and chainsaws. Others, like the United States Pentagon, have created still more uses in war and espionage. Drones were once largely controlled; now some are semi-autonomous, or autonomous, acting on their own. Already ethicists and insurance companies debate the implications of drone crimes, asking who’s responsible for injuries and deaths at the metaphorical hands of a literal machine. And automating jobs: Japanese firm Komatsu uses drones on construction projects to feed data to automated trucks and digging machines.
So many future trends are historical trends that won’t die, or seem to cease only to lurch back into life later on. Some of you may remember p2p architectures dating back to the 1990s. Blockchain is a new realization of that concept. Not only has blockchain led to bitcoin, an interesting, messy, and potentially transformative financial development, but now, through Ethereum, it supports decentralized autonomous organizations (DAOs): distributed, automated enterprises. One such already functions as a fundraising and fund-disbursal firm.
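To see why blockchain is a p2p-flavored idea rather than magic, here is a minimal hash-chain sketch – not Ethereum’s actual data structures, just the core tamper-evidence trick:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """A block commits to its data and to the previous block's hash."""
    body = {"data": data, "prev": prev_hash}
    block = dict(body)
    block["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify(chain):
    """Recompute each hash and check the links; any tamper breaks the chain."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev": block["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("tx: alice->bob 5", genesis["hash"])]
```

Altering any block invalidates every block after it, which is what lets mutually untrusting peers agree on a shared history.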
Meanwhile, for the next five years let’s expect more of the boring old stuff: social media, crowdsourcing, crowdfunding, open source, data analytics, mobile computing, gaming, gamification, virtualization, digitization, digital storytelling, always-on media capture, always-on surveillance, hacking… There’s more, of course. There always is.
That’s all in the short term. The next 5 years. We already know all about this stuff.
3. Medium term
Let’s look ahead 10 years. To 2026.
Facebook is already looking ahead to that point, and planning. Note what they want to nail down by then:
Automation: so to get to 2026, let’s just assume progress, and let’s consider artificial intelligence. Not at the level of a cataclysmic, world-rebooting Singularity. Just extrapolations of current trends, along the lines marked out by McAfee and Brynjolfsson. I’ll assume Moore’s law continues, and add in that quantum computing starts to appear at consumer and enterprise levels. We start talking about a Fourth Industrial Revolution. Let’s grant further, steady growth in deep learning and advanced neural networks. Count Google’s victory over the game of Go as a milestone, and Siri’s uncanny abilities as a baseline.
Then we have to rethink how we design the digital world. Maybe all of it. How does more advanced AI force us to reconceive data standards and publication, information architecture, archiving, for starters?
As it advances, AI starts taking up human functions. We humans generate a vast and growing hoard of data; this is fodder for machines. Projects are appearing every day to take advantage of improving machine analysis, like http://americangut.org, which aims to improve your health by diving deeply into your guts to better understand their microbial life. We’ve already seen criminal analytics automated – which already has problems. Machine-to-machine functions keep rising, such as high-frequency trading, which has already advanced beyond regulators’ abilities to constrain. Already we’ve seen flash crashes, economic incidents driven by the conversation among programs.
Looking ahead to 2026, imagine increasing segments of human life automated as machine-to-machine functions. We could see the emergence of a posthuman order in our lives.
Let’s add robots to the mix, since automation means both AI and robots. The combination is extending into more human labor functions. This can supplement labor shortfalls (Japan, China) or replace labor with capital (everywhere). Robots + AI + 3d printing could mean deglobalization, as we relocalize production, especially through customization and creativity.
More: we’re seeing the development of affective, emotional computing, as the Horizon Report notes. For example, we could develop emotional analysts. When will they be at par with a human baseline of emotional assessment? When will they go beyond, and how do we handle that? On another line, what does good machine translation do to professional translators and second language teaching? If we combine automation with the IoE and MR, should we anticipate the appearance of intelligent, even sentient tools?
Today we’re seeing the automation of more job functions and entire jobs, replacing human labor, physical or mental, sometimes through expert systems. Since 1990, for the first time in centuries, automation has outmoded jobs without creating new ones, perhaps leading to rising unemployment. Imagine a 2026 with persistent 10% or 20% unemployment. What does education mean in such a world?
We’re also seeing the development of automated creativity, already operational in writing (finance, sports, weather) and images. This image is a screen cap from a neural net recreating a classic movie – 2001 – on its own terms:
This next image was created by Google’s DeepDream, which turned my original photo of our preconference session into mild psychedelia:
We’re also seeing automated assistants. For example, tools for analyzing one’s writing, which can help us edit and revise more effectively – without a teacher. We’ve seen IBM’s Watson help point to new avenues of medical research, and legal AIs help with document analysis. By 2026 will we see an AI acknowledged, or even credited as coauthor for a scholarly article?
How should we expect creativity itself to change with automation? The history of human interaction with technology suggests we should, as humans love to revise old forms and create new ones with each invented medium. So look to new ways of making art, different forms of storytelling, fresh takes on gaming, and, maybe, new forms of creativity in 2026 we lack the words to describe in 2016.
Hang on. There are plenty of reasons to resist such an automation-shaped scenario.
Objection: Humans want contact!
Answer: except when we don’t. Introverts overdose daily on human contact. People don’t necessarily prefer human interaction for unpleasant tasks. Geeks and an increasingly geeky culture are famously comfortable with computer-mediated experience. Generally speaking, younger folks are happier with the digital than their elders.
Objection: Automation is too expensive!
Answer: capital continues to accumulate in this economy. That’s one part of rising inequality (cf Thomas Piketty’s r > g inequality). And technology prices drop, historically.
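A toy compounding example shows how r > g drives accumulation – the 5% and 1.5% rates below are hypothetical, chosen purely for illustration:

```python
# Toy illustration of Piketty's r > g: when the return on capital (r)
# exceeds economic growth (g), wealth steadily outpaces income.
# Both rates are hypothetical.

def grow(value, rate, years):
    """Compound `value` at `rate` per year for `years` years."""
    return value * (1 + rate) ** years

r, g = 0.05, 0.015            # hypothetical annual rates
wealth = grow(100.0, r, 30)   # roughly 432 after 30 years
income = grow(100.0, g, 30)   # roughly 156 after 30 years
ratio = wealth / income       # capital pulls further ahead every year
```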
Objection: I’m scared of machines doing bad things to me and my children!
Answer: what happens when the machines are safer and better than humans? Think of self-driving cars, while human drivers kill tens of thousands each year. Or robots in hospitals, where human error kills hundreds of thousands every year.
4. Long term
Let’s look ahead even further. Try 2050. And let’s be open to the full range of possibilities.
What’s happening in the long range horizon is truly disruptive. We’re seeing grand challenges loom like science fiction plotlines. The specter of automation threatens to radically reformat the world of work and society, changing the world our students will inhabit while supplanting teaching and learning. And that’s just for starters.
Consider the new silicon order. Let’s consider different ways AI could unfold. Nick Bostrom at Oxford has done speculative research into the different ways AI could grow and shape the world, ranging from benign to malign to simply strange. Stephen Hawking warns us against proceeding too quickly, against allowing a dangerous force to erupt across our deeply networked world; imagine how much more threatening his warning becomes in an IoE world. There’s the dystopia of a world ruled by inhuman AI, like the classic movie The Forbin Project. Then there’s the utopian vision of Iain Banks. Imagine benign, grand, and administrative AI that simply works to improve human life. That’s a continuum of silicon-ordered 2050s.
Consider the new social order. Given sufficient automation, how do humans organize together in post-2016 forms? We might not see new jobs appear. Income inequality could accelerate to 19th-century levels. In which case, we could see two new worlds of work.
On the one hand, the mass of humans work part time at low wages, living at a subsistence level, otherwise engaged and entertained by a rich and endless digital environment. Above them are the 1%, often deeply skilled, the owners and managers of the new digital order. There isn’t much middle class between them. Call it the new Gilded Age, or neofeudalism.
On the other hand, automation unleashes a new era in human prosperity, of digital delights and technology-enabled offline goods. New political regulations and social orders transfer enough wealth to the majority of people to enable them to lead rich and rewarding lives, which combine productive work with reflective leisure – what one British organization half-jokingly referred to as Fully Automated Luxury Communism. Again, that’s a continuum of 2050s.
Perhaps we combine and synthesize these movements. Technology doesn’t replace humans, but extends and enriches us. We work and play in ever-closer relationship to the digital world. We are both metaphorically and literally cyborgs.
Let’s go further. These technological advances let us hack life. At the same time as we develop silicon technology, we apply digital tools and concepts to the biological realm. New tools, like CRISPR, give us the ability to shape offspring – to edit life – with increasing precision and power. Open source biology gives new insights into life forms – and shares that knowledge widely. Consider a recent paper in World Neurosurgery, “Brainjacking: implant security issues in invasive neuromodulation”. Or consider another paper, on creating macromolecules to reduce the spread of infection within a body.
The new humanity: consider more deeply what happens when we apply these technologies to humanity. What happens to our sense of what it means to be human?
What we think of as “human” may change beyond recognition. We’re already there in 2016 with bionics and widespread, legal, even mandated psychopharmaceuticals. We’re experimenting with brain-controlled machines and nutraceuticals. We’re starting to print tissue and organ replacements. Precision medicine via bioinformatics, new imaging technologies, and nanotech medicine are coming on line. New devices give some measure of sight to some blind people. A Stony Brook team used targeted light to alter acetylcholine in the brains of mammals, removing some emotional memories. We can conceive of editing human DNA via CRISPR and gene driving. Some populations live decades longer than they did just two generations ago; if life extension becomes even basically successful, by 2050 will we see 100 become the new 60? Meanwhile, biological indicators are increasingly used in security: retina scanning, gait recognition.
With such innovations, after such knowledge, what happens to our sense of what it means to be human?
How does public health change? Does health care become the leading American industry? What’s the public interest in editing people’s minds and bodies?
Beyond human life, we could experience a new nature. As one marine biologist, Ruth Gates, explains her new role: “Really, what I am is a futurist… Our project is acknowledging that a future is coming where nature is no longer fully natural.” None of our technological innovations occurs in a vacuum. As we alter life and grow the digital world, we also alter the earth. As we change humanity, we alter nature. We may, by 2050, speak of a new Earth.
Already some use the term Anthropocene to describe the planet after the year 1900. The Northwest Passage is now open. Multiple nations are engaged in a geopolitical rush for the north polar region, which is now opening up into a new world.
That’s just the start. What happens when snows and permafrost retreat northwards, opening up lands for farming? When hot climates turn arid and desertification begins? Do more cities become like Las Vegas, artificial creations maintained solely by massive infrastructural investment? When do people flee such cities? What changes will occur in the planetary ecosystem when we produce hybrid and novel forms of life?
In a parallel to the transformation wrought by infusing human bodies and societies with increasing numbers of machines, what happens to the natural world when that world is suffused with small, networked, data-gathering devices? What happens to the thin layer of life wrapping the Earth’s rocky mantle when we achieve nanotechnology at industrial scale, or nanotechnology at consumer scale? Will digital connectivity laminate or subsume the biosphere?
In one of his novels Iain Banks describes the infusion of computation into the world through tiny, networked devices. Others have used the term computronium to name the new material that results. Banks coins the sharper word Smatter (smart matter). By 2050, will we produce smatter in labs? Or in garages? Or in forests?
What would we call this world, revised by humans and posthuman technologies? Donna Haraway offers the maybe tongue-in-cheek term “Chthulucene”.
By 2050, in short, we are hacking the world. Humans change humans, humans change the world, the new world changes humans, and so it goes. By 2050 we’ve hacked the world, and keep on doing so.
Should we envision this as a renaissance? Perhaps this new world is one where human creativity and identity is reborn through an expansion of our powers and capacities, fraught with all kinds of dangers and disasters. Perhaps 2050 is a time of human rebirth.
Maybe a new politics appears by 2050. Think of this combination: drones, perhaps perpetually aloft thanks to solar power, plus big data, IoE-based surveillance, and data analytics backed by AI, could yield a dictator’s ecstatic dream of total social control. Does this system elicit a new politics in thirty-five years? Perhaps some will idolize heroes of our time, like Edward Snowden and Alexandra Elbakyan. Others will abhor them as dangerous criminals. What kind of politics are described by their fans and opponents?
A new politics: for example, in 2016 a proposal appeared for casting some urban areas as Rebel Cities, spaces where surveillance is disallowed. Would such spaces be fruitful ground for shooters like the one in Orlando, as well as for creative expression? Would Rebel Cities descend into chilling cycles of escalating violence and terror, or create new forms of social amity? By 2050 has this range of thinking about surveillance become the new left-right, blue-red political bedrock?
Or, instead, after we hack the earth and transform our population, is our politics described as what Bruce Sterling calls “cities full of old people, afraid of the sky”?
Hang on. What could stop some or all of these developments from happening?
Objection: Moore’s law could slow down or stop, which might ratchet down the pace of technological innovation and production a bit.
Answer: the pace might slow, but the end state could still occur. Alternatively, we could shift energies from digital technologies to robotics and quantum computing.
Objection: we could turn our postindustrial economy into one based on the principle of no growth. After all, as Edward Abbey famously observed, growth for growth’s sake is the ideology of the cancer cell.
Answer: you first. Seriously, try to convince people that they don’t need any more economic growth. Think of the vast equity issues involved in telling the developing world to stop. Or doing this without redistributing wealth.
Objection: a resource crash could knock these futures offline.
Objection: we could voluntarily stop developing technologies.
Answer: “giving up the gun” rarely works, historically, with the rare exception of state power used against the crossbow.
Objection: a new anti-technological politics could arise, urging us to return to an older form of humanity. NeoLuddites? anti-intellectuals? New Humanists?
Answer: it’s possible, and something to watch for. But too many people see themselves benefitting from technologies. This will take some interesting cultural turning.
Objection: could a religious movement against new technologies arise? Frank Herbert gave us such a vision in his classic novel Dune, where a kind of crusade blocks AIs from working for centuries.
Answer: it’s possible, and something to watch for. But most religions are happy to use technologies, in the end. So we would have to anticipate a genuinely new religious movement.
Objection: various Black Swans could occur, such as an extraordinarily massive solar event or EMP strikes from some foe or the clathrate gun firing.
Answer: true. That’s the nature of very unlikely, high impact events. Will our technological society build enough resilience into its new Earth?
But before we leave, let’s go even further.
The humans we knew from the year 2000 are a vanishingly rare type, studied by descendants of anthropologists. Artificial intelligences busily work around and above the globe, redesigning life. The biosphere has gained and lost species and entire biomes. The Earth… is transformed. Education and creativity? Something else entirely.
Some inspired and creative AI and semi-human teams launch mixed reality reenactments of life in 2016.
5. What is to be done?
How can we anticipate and act strategically in the face of such potential transformation?
We are so not ready.
We currently suffer under a bad mix: the weird simultaneity of a popular and well-funded embrace of technology with strong anti-scientism and unreason. Academic disciplines are not necessarily prepared (think of how 2008 caught macroeconomics flat-footed, and what 2016 is doing to political science). We are radically divided over what constitutes human nature, even as we start to hack it. In the United States we enjoy political sclerosis and dystopian reaction.
We have many political leaders skeptical of, if not actively opposed to, civil liberties in the digital world: Trump, Clinton, Cameron; China’s gamified autocracy. Journalism is less free to report now than it was a decade ago, according to a Reporters Sans Frontières report; Turkey arrested journalists on World Press Freedom Day. Meanwhile, American TV “news” is a planetary and historical embarrassment. We maintain a horrible legacy of prejudice restricting human growth and creativity. And inequality is starting to aim for nineteenth-century levels.
So given all of that, what shouldn’t we do?
Don’t think about it.
Evade the issue by thinking of retirement. (Present generations don’t have a good record of leaving a good world to the young right now.)
What is to be done instead?
The blindingly obvious: collaborate with each other, across institutions, sectors, nations, populations, professions. Work through inter-institutional groups (like NMC!). Use social media. Use and be open. Read and watch science fiction.
The not so obvious, and challenging: rethink everything in terms of automation’s possibilities. Think of what can be replaced. Become a cyborg. Use futures methods.
The more challenging: Lead! You’re best placed on campuses and other institutions to inform people in context. Get political. Imagine different worlds and inhabiting them – yourselves, your institution, your children and the generation to come.
You. Help. Make. The. Future.
It isn’t something just done to you, delivered like gifts from a cargo cult. You help make the future.
Every decision you make contributes. When you craft a creative work, or teach in a certain way, or nudge a campus in one direction, or support a political candidate, or tell a story, or dream out loud, or influence younger folks, you help co-create what is coming next. Don’t be passive – it’s too late! You’re already making it happen. You are all – each of you – practicing futurists and world-makers. Do so with open eyes, and the flame of creative possibility roaring in your heart.
Erik Brynjolfsson and Andrew McAfee, The Second Machine Age.
Andrea Castillo, “Can a Bot Run a Company?”
Alison Cook-Sather, Catherine Bovill, and Peter Felten, Engaging Students as Partners in Learning and Teaching: A Guide for Faculty (2014).
Kristi DePaul, “Robot Writers, Open Education, and the Future of EdTech” (2015).
Donna Haraway, “Anthropocene, Capitalocene, Chthulucene: Staying with the Trouble.”
Michio Kaku, Physics of the Future.
Rebecca Keller, “The Rise of Manufacturing Marks the Fall of Globalization.”
Kevin Kelly, The Inevitable.
Brooke McCarthy, “Flex-Foot Cheetah.”
Alexis Madrigal, “‘The Future Is About Old People, in Big Cities, Afraid of the Sky.’”
Babak Parviz, “Augmented Reality in a Contact Lens.”
Brandt Ranj, “Goldman Sachs says VR will be bigger than TV in 10 years.”
David Rose, Enchanted Objects.
Edward Snowden, “Inside the Assassination Complex.”