Yesterday I participated in a webinar panel hosted by EDUCAUSE. The subject was the future of education and technology. It was originally supposed to be about the 2018 Horizon Report, but, well, my readers know that story.
The audience was a mix of people working in education and technology. I think there were CIOs, IT leaders, instructional designers, and so on.
That’s who I had in mind when I prepared my remarks. I aimed for strategic questions, not tactical implementations, and didn’t get into technical details. I wanted to involve entire institutions, not just technology staff, so I brought in students, curricula, pedagogy, alumni, and the grand purposes of academia. I also counted on fellow panelists addressing other issues, be they Kyle Bowen (Penn State) also speaking to AI, or Amy Collier (Middlebury) making a case for privacy and against big social media platforms.
A crucial detail: I had a very, very, very short time, about 4 or 5 minutes. So I wrote carefully, directly (little framing or scene-setting), hitting key points as clearly as possible, and cramming in as many words as I could orate without running them into a blur. There wasn’t time to introduce or explain the technologies and practices.
(I don’t usually write full scripts for presentations. Most times I speak from an outline or jotted highlights or from slides, my head full of research and mindful of local conditions, in a kind of improv. I speak up to six times per month in person, and who knows how many times virtually, so the phrases and arguments are right at the top of my brain.)
No slides were injured or even created for this. They felt superfluous, especially as I was flanked by other presenters doing their own things. I didn’t want to add unnecessarily to the audience’s cognitive load.
Here’s the text.
Greetings. I’d like to speak to the future possibilities and short-term potentials of two technologies.
First, blockchain. Based on what I’ve seen over the past several years, there are multiple implications, including – now – students on physical campuses trying to mine bitcoin in labs with their own machines, and the occasional faculty member trying to launch an ICO. But I think there are two more salient possibilities for the short and medium term.
To begin with, blockchain may well be a niche technology, inappropriate for general use. As Nouriel Roubini and Preston Byrne observed, blockchain is essentially an expensive and latency-hobbled database: “Bitcoin is a slow energy-inefficient dinosaur that will never be able to process transactions as quickly or inexpensively as an Excel spreadsheet.”
However, while this is true, there are cases where the costs are worthwhile. For example, it might be worth securing public records with blockchain in a time when data access and transparency are fraught (to put it mildly; Amy Collier will have more to say on this). Or, as Chris Jagers of Learning Machine put it during a Future Trends Forum conversation, we should use the blockchain when we want to store something for permanent access, rather than in temporary or institutionally unstable storage. In other words, blockchain is for when we want something that will go down on our permanent record.
More speculatively, we should watch for technologies built on top of blockchain as a platform. Think of the way the world wide web was created on top of internet protocols; let’s see what happens with new tech like Ethereum or LBRY that uses blockchain as a substrate. Such new tech might have campus or classroom uses; our faculty, staff, and students may build some of it.
Second, on AI and automation:
In the long term, we can imagine several possible big-picture scenarios for the impact of AI and automation on society. In the first, automation replaces many human functions and jobs, leading to widespread anxiety, underemployment, and unemployment. In the second we rethink many human functions and jobs as human-machine syntheses, where we work closely together to maximize our respective strengths. In the third scenario we creatively invent new, post-AI human functions and jobs.
I mention these long-term possibilities because we are starting to prepare for them now, especially in education and technology, and that shapes our short and medium term futures. If scenario 1 plays out, with increased human underemployment and downtime, the academy has three functions: to better prepare students for a more fiercely competitive job market, with fewer jobs; to prepare students for lives with greater downtime than we now have; to explore what it means to be human when machines increasingly render us outmoded or obsolete.
If scenario 2 determines the future – the one with human-machine syntheses – then education has different functions. We now have to rethink curricula and pedagogy for a nearly cyborg closeness with machines. The work of many academic disciplines mutates as professionals and students work more closely with robots and/or software and AI. For campus enterprise IT that means, among other things, a serious expansion of services, greater institutional centrality, and a deeper complexity of role. For starters, consider adding AI functions to campus software (imagine cognifying the LMS or library catalog, or running course tutors at enterprise scale).
If scenario 3 is the more accurate one – where we respond to automation by creating new jobs and new functions for humans – then we have to revise our curriculum now, emphasizing classes and research in what humans do better than machines. We have to be ready to teach and research new or transformed careers, like AI ethicist, automation librarian, cognification officer, or human creativity lab director. Campuses will have to open lines for some of these jobs themselves. All this will take a mixture of intelligence (monitoring for the appearance of new positions), creativity (creating new fields), collaboration across disciplines and professions, and a willingness to explore and experiment.
I’m not sure how the audience responded. The webinar technology was Adobe Connect, which means the main discussion venue was a narrow chat box. I think most of the questions addressed other panelists as well as other participants’ observations. There was some traffic on Twitter (#ELIWEB), including my tweets.
What do you think?