This week Inside Higher Ed interviewed me and some other folks about chatbots. I was brought in as an ed tech futurist. Also on deck were leaders of companies actually making chatbots for education: Ivy.ai and Admit.hub. Lindsay McKenzie did a good job of outlining key aspects of the issue: which functions ‘bots perform, customization, when to defer to human respondents, personalization, and more.
Here I want to go a bit further.
Chatbots now are at a certain developmental level. They can resemble phone trees or the decision-making flowcharts of certain computer games, letting users navigate through them Choose Your Own Adventure style (one example). They can also work with natural language to a basic degree, one that improves gradually. They are typically text only, although some add graphics and even animation.
Most are narrowly functional, like this one, aimed just at keeping you from falling asleep. Others work as communication adjuncts, spreading a message with something more interactive than mass messaging or spam (for example). Some handle discussion moderation, as on Reddit.
At present virtual assistants (Alexa, Siri, et al.) in many ways function as chatbots. They are much more fluent and have greater capacity, but are structured in similar ways: precoded responses to a fairly narrow set of inputs.
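That "precoded responses to a fairly narrow set of inputs" pattern is simple enough to sketch. Here's a minimal, hypothetical example of a rule-based campus chatbot in Python; the keywords, answers, and fallback line are all invented for illustration:

```python
# Hypothetical sketch of a rule-based campus chatbot: precoded answers
# keyed to a narrow set of inputs, with a fallback to a human.
# All questions and answers here are invented examples.

RESPONSES = {
    "library": "The library is open 8am-10pm on weekdays.",
    "print": "Students get 200 free pages per semester.",
    "parking": "Visitor parking is in Lot C; permits at the kiosk.",
}

def reply(message: str) -> str:
    """Return the first precoded answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    # Anything outside the narrow input set gets deferred to a person.
    return "I'm not sure -- let me connect you with a staff member."

print(reply("When is the library open?"))
print(reply("Who won the game?"))
```

The point of the sketch is the shape, not the scale: commercial bots layer natural language processing on top, but the underlying structure is still a mapping from recognized inputs to scripted outputs, with a handoff to humans at the edges.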
Beyond that? How could the field change as the tech improves, especially with incremental development and the impact of AI? And there’s a lot of work to do on that improvement.*
We could simply see more use of chatbots. Earlier this year Gartner forecast 20% of customer interactions taking place with bots by 2020. Extend that a few years out. Add to it our rising shift towards messaging apps, which are in many ways better situated for chatbots than the rest of the media landscape.
Organizations and institutions can learn a lot from chatbot interactions over time. This can be simple and very practical – e.g., a preponderance of questions about printing policies means they are unclear or badly communicated, and that can be remedied. Or the results can build up into larger policies at the program level. These reactions can then be fed back down to the bots to help their performance. Obviously analytics plays a role here.
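The analytics side can start very small. A hypothetical sketch: tally logged queries by topic, and flag any topic that dominates the conversation as a candidate for better communication. The log entries, topic labels, and 50% threshold are all assumptions for illustration:

```python
# Hypothetical sketch: tally chatbot queries by topic so a spike
# (e.g., lots of printing questions) flags a policy that may be
# unclear or badly communicated. Logs and threshold are invented.
from collections import Counter

logs = [
    {"topic": "printing"}, {"topic": "printing"}, {"topic": "parking"},
    {"topic": "printing"}, {"topic": "library"}, {"topic": "printing"},
]

counts = Counter(entry["topic"] for entry in logs)
total = sum(counts.values())

for topic, n in counts.most_common():
    share = n / total
    if share > 0.5:  # arbitrary cutoff for "a preponderance of questions"
        print(f"Review communications about {topic!r}: {share:.0%} of queries")
```

A real deployment would classify free-text questions into topics first, but the feedback loop is the same: measure what people keep asking, fix the underlying communication, and feed the improved answers back to the bot.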
If bot personalization deepens and takes hold, we might spend increasing amounts of time with bots (or just one, our favorite). As individuals we could search not just local data but broader swathes of the digital world this way, or shop. Chatbots in this vision are competitors to Siri et al., moving from the margin towards the center of our digital lives.
As chatbots improve, more organizations (schools, businesses, nonprofits) could outsource more services to software. Conversation as a Service (CaaS): as far as I can tell, James Melvin coined the term.
One endpoint in this development arc: chatbots, virtual assistants, intelligent speakers, and AI could fuse into one field of human-computer interaction.
Another and related endpoint for chatbots is to pass the Turing Test. Some have tried, of course. Every year bots struggle to convince humans that they’re in the same phylum. Once this occurs, classic automation possibilities arise, either swapping humans for chatbots or using bots to redefine human work.
On the other hand, chatbot growth could stall or reverse. First, the technology just might not develop much further, and we enter a chatbot winter. Second, the rising dislike of companies using data analytics may block bots from rising at scale. Third, we just might hate the things. Most remember Clippy. And fourth, chatbots misfiring or reproducing bias could reduce their appeal. For example:
Is your health robot a sexist?
Same chatbot, same symptoms, computes “heart attack” 4 men & “panic attack” 4 women. If we’re not careful it’ll be back to ‘hysteria’ next. #MedTech #Automation #Gender #DigitalHealth #AI #DigitalEthics https://t.co/Cjlu3cOpFd
— Claudia Pagliari (@EeHRN) September 8, 2019
So what do these chatbot possibilities mean for education?
The current field of use – campus information – could expand or contract, as per the above. There are many ways for students (and others) to interact with chatbots short of formal instruction and research: schedules, basic class admin, library materials, fines, applications, student organization information, athletic info, alumni relations, etc.
I wonder about how far chatbots will go in more complex, non-instructional interactions. The IHE piece notes that Ivy.ai is exploring introducing them to mental health counseling, which echoes my mention of ELIZA, while also eliciting serious pushback in comments. If such an intervention works, perhaps businesses and campuses will try chatbots on other, sensitive topics, like student life, contentious campus conversations, or HR processes.
We should, of course, see schools trying out chatbots for more instructional functions in a variety of ways. The fields most likely to be involved are those with easily checked objective content, such as the sciences, following the pattern in AI and automated tutorials (for example). We could also see a drive to build bots for topics with the largest enrollment, like college algebra. The largest classes may be well suited for chatbot support, a la Jill Watson. If chatbots stand out in certain professional fields, their college preparatory classes might be more likely to use them. For example, I’m seeing signs of chatbots in business and health care.
I’m not seeing chatbots replacing faculty for a while, if ever. They instead seem like supplements to a flesh and blood instructor.
On the other hand, chatbots just might never work well for learning. One study found a seriously inferior experience:
how student[s] interact with chatbots is fundamentally different to interactions with human teaching staff. Studies showed that students apply simpler sentences, lack much rich vocabulary and get distracted more easily.
At the same time some faculty – and students – will create or edit chatbots, starting with computer scientists. Open source tools will make this easier, as will commercial learning software. This should, of course, lead to its own challenges. Allied to this is research into chatbot use, which can occur across disciplines: computer science, human-computer interaction, psychology, for starters.
Back to personalization: how far will campus-affiliated chatbots connect with the needs of individual learners? We can imagine software imitating speech patterns, remembering questions over time, or cross-referencing different domains (multiple classes, cocurricular learning, etc). Will people experience multiple personalized chatbots hosted by the same school? Will chatbots be able to ingest data from other ones?
Personal conclusion: I don’t always do well with chatbots, as I’m often in a hurry and have a hard time slowing my thinking down to their frameworks. I do often feel the desire to speak to a human… except in games.
*See, for example, Denys Bernard and Alexandre Arnold, “Cognitive interaction with virtual assistants: From philosophical foundations to illustrative examples in aeronautics.” Computers in Industry volume 107, May 2019, Pages 33-49, https://doi.org/10.1016/j.compind.2019.01.010.