Thoughts on chatbots

This week Inside Higher Ed interviewed me and some other folks about chatbots.  I was brought in as an ed tech futurist.  Also on deck were leaders of companies actually making chatbots for education: Ivy.ai and AdmitHub.  Lindsay McKenzie did a good job of outlining key aspects of the issue: which functions ‘bots perform, customization, when to defer to human respondents, personalization, and more.

Here I want to go a bit further.

Chatbots now are at a certain developmental level.  They can resemble phone trees or the decision-making flowcharts of certain computer games, letting users navigate through them Choose Your Own Adventure style (one example).  They can also work with natural language to a basic degree, one that improves gradually.  They are typically text only, although some add graphics and even animation.
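To make the phone-tree comparison concrete, here is a minimal sketch of that style of bot: it walks a fixed decision tree, and the user picks numbered branches, Choose Your Own Adventure style.  All node names, prompts, and answers below are invented for illustration, not taken from any real product.

```python
# A fixed decision tree: each node has a prompt and numbered options
# pointing at other nodes. Everything here is hypothetical sample content.
tree = {
    "start": {
        "prompt": "Hi! What do you need help with?",
        "options": {"1": ("Library hours", "library"),
                    "2": ("Course registration", "register")},
    },
    "library": {
        "prompt": "The library is open 8am-10pm on weekdays.",
        "options": {},
    },
    "register": {
        "prompt": "Registration opens November 1 on the student portal.",
        "options": {},
    },
}

def step(node_key, choice):
    """Advance one turn: return the bot's reply and the next node key."""
    node = tree[node_key]
    if choice in node["options"]:
        _, next_key = node["options"][choice]
        return tree[next_key]["prompt"], next_key
    # Unrecognized input: restate the menu rather than guess.
    menu = " ".join(f"[{k}] {label}"
                    for k, (label, _) in node["options"].items())
    return f"{node['prompt']} {menu}", node_key

reply, state = step("start", "1")
```

Everything the bot can say is precoded; any input outside the tree just loops back to the menu, which is exactly the rigidity the natural-language approaches try to soften.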

Most are narrowly functional, like this one, aimed just at keeping you from falling asleep.  Others work as communication adjuncts, spreading a message with something more interactive than mass messaging or spam (for example).  Some handle discussion moderation, as on Reddit.

At present virtual assistants (Alexa, Siri, et al.) in many ways function as chatbots.  They are much more fluent and have greater capacity, but are structured in similar ways: precoded responses to a fairly narrow set of inputs.

Beyond that?  How could the field change as the tech improves, especially with incremental development and the impact of AI?  And there’s a lot of work to do on that improvement.*

We could simply see more use of chatbots.  Earlier this year Gartner forecast 20% of customer interactions taking place with bots by 2020.  Extend that a few years out.  Add to it our rising shift towards messaging apps, which are in many ways better situated for chatbots than the rest of the media landscape.

Organizations and institutions can learn a lot from chatbot interactions over time.  This can be simple and very practical – e.g., a preponderance of questions about printing policies means they are unclear or badly communicated, and that can be remedied.  Or the results can build up into larger policies at the program level.  These reactions can then be fed back down to the bots to help their performance.  Obviously analytics plays a role here.
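The analytics side of that feedback loop can be sketched simply: tally logged questions by topic, and flag any topic that dominates the logs, since a spike (say, printing policies) suggests something the institution communicates badly.  The topics, keywords, and threshold below are all invented for this example.

```python
# Tag each logged question with a rough topic via keyword matching,
# then surface topics that exceed a share-of-traffic threshold.
from collections import Counter

TOPIC_KEYWORDS = {
    "printing": ["print", "printer", "toner"],
    "registration": ["register", "enroll", "add/drop"],
    "library": ["library hours", "checkout", "renew"],
}

def tag_topic(question):
    """Return the first topic whose keywords appear in the question."""
    q = question.lower()
    for topic, words in TOPIC_KEYWORDS.items():
        if any(w in q for w in words):
            return topic
    return "other"

def topic_report(questions, threshold=0.5):
    """Return topics making up more than `threshold` of all questions."""
    counts = Counter(tag_topic(q) for q in questions)
    total = sum(counts.values())
    return [t for t, n in counts.most_common() if n / total > threshold]

logs = ["How do I print from my laptop?",
        "Where is the printer in the student center?",
        "When can I register for spring?"]
report = topic_report(logs)
```

A real deployment would use better text classification than keyword matching, but the loop is the same: logs in, overrepresented topics out, clearer policy (and retrained bot) back in.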

If bot personalization deepens and takes hold, we might spend increasing amounts of time with bots (or just one, our favorite).  As individuals we could search not just local data but broader swathes of the digital world this way, or shop.  Chatbots in this vision are competitors to Siri et al., moving from the margin towards the center of our digital lives.

As chatbots improve, more organizations (schools, businesses, nonprofits) could outsource more services to software.  Conversation as a Service (CaaS): as far as I can tell, James Melvin coined the term.

One endpoint in this development arc: chatbots, virtual assistants, intelligent speakers, and AI could fuse into one field of human-computer interaction.

Another, related endpoint for chatbots is to pass the Turing Test.  Some have tried, of course.  Every year bots struggle to convince humans that they’re in the same phylum.  Once this occurs, classic automation possibilities arise, either swapping humans for chatbots or using bots to redefine human work.

On the other hand, chatbot growth could stall or reverse.  First, the technology just might not develop much further, and we enter a chatbot winter.  Second, the rising dislike of companies using data analytics may block bots from rising at scale.  Third, we just might hate the things.  Most remember Clippy.  And fourth, chatbots misfiring or reproducing bias could reduce their appeal.

So what do these chatbot possibilities mean for education?

The current field of use – campus information – could expand or contract, as per the above.  There are many ways for students (and others) to interact with chatbots short of formal instruction and research: schedules, basic class admin, library materials, fines, applications, student organization information, athletic info, alumni relations, etc.

I wonder about how far chatbots will go in more complex, non-instructional interactions.  The IHE piece notes that Ivy.ai is exploring introducing them to mental health counseling, which echoes my mention of ELIZA, while also eliciting serious pushback in comments.  If such an intervention works, perhaps businesses and campuses will try chatbots on other, sensitive topics, like student life, contentious campus conversations, or HR processes.

We should, of course, see schools trying out chatbots for more instructional functions in a variety of ways.  The fields most likely to be involved are those with easily checked objective content, such as the sciences, following the pattern in AI and automated tutorials (for example).  We could also see a drive to build bots for topics with the largest enrollment, like college algebra.  The largest classes may be well suited for chatbot support, a la Jill Watson. If chatbots stand out in certain professional fields, their college preparatory classes might be more likely to use them.  For example, I’m seeing signs of chatbots in business and health care.

I’m not seeing chatbots replacing faculty for a while, if ever.  They instead seem like supplements to a flesh-and-blood instructor.

On the other hand, chatbots just might never work well for learning.  One study found a seriously inferior experience:

how student[s] interact with chatbots is fundamentally different to interactions with human teaching staff. Studies showed that students apply simpler sentences, lack much rich vocabulary and get distracted more easily.

At the same time some faculty – and students – will create or edit chatbots, starting with computer scientists.  Open source tools will make this easier, as will commercial learning software.  This should, of course, lead to its own challenges.  Allied to this is research into chatbot use, which can occur across disciplines: computer science, human-computer interaction, psychology, for starters.

Back to personalization: how far will campus-affiliated chatbots connect with the needs of individual learners?  We can imagine software imitating speech patterns, remembering questions over time, or cross-referencing different domains (multiple classes, cocurricular learning, etc).  Will people experience multiple personalized chatbots hosted by the same school?  Will chatbots be able to ingest data from other ones?
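The simplest form of the personalization imagined above, remembering questions over time, can be sketched in a few lines.  The storage scheme and reply text here are invented for illustration; a real campus system would sit on proper student records and raise all the privacy questions discussed below.

```python
# A bot with per-user memory: it records each user's questions and
# notices repeats. All identifiers and replies are hypothetical.
from collections import defaultdict

history = defaultdict(list)  # user id -> list of prior questions

def ask(user, question):
    """Record the question and flag when the user has asked it before."""
    repeat = question in history[user]
    history[user].append(question)
    if repeat:
        return "You asked this before; here is that answer again."
    return "Let me look that up."

first = ask("student42", "When is add/drop?")
second = ask("student42", "When is add/drop?")
```

Cross-referencing domains (multiple classes, cocurricular learning) would mean joining this history against other data stores, which is where the design choices get harder than the code.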

Personal conclusion: I don’t always do well with chatbots, as I’m often in a hurry and have a hard time slowing my thinking down to their frameworks.  I do often feel the desire to speak to a human… except in games.

*See, for example, Denys Bernard and Alexandre Arnold, “Cognitive interaction with virtual assistants: From philosophical foundations to illustrative examples in aeronautics.” Computers in Industry volume 107, May 2019, Pages 33-49, https://doi.org/10.1016/j.compind.2019.01.010.

(Chatbot photo from Twitter-Trends)


5 Responses to Thoughts on chatbots

  1. Bryan – Your readers might be interested in the list of articles I curated for an OER chapter on Artificial Intelligence (inclusive of chatbots) for the course I teach. It is updated periodically. The supplemental readings include a link to chatbot aggregator.

    https://granite.pressbooks.pub/comm601/chapter/artificial-intelligence-ai/

  2. Ken Soto says:

    Interesting topic Bryan, thanks. My thinking on chatbots and AI in general has evolved recently – initially very skeptical, now resigned. It comes down to this for me: for almost any deficiency I can identify with chatbots or voice AI, my experiences with actual human chat reps are frequently just as lacking when the company/institution behind the service just doesn’t care enough. Empathy? Subject matter knowledge? Actual ability to change the situation? Comprehension? Responsiveness? All are just as likely to fail with humans as bots.

    A good AI system is approximately equal to or better than a poor human system in this regard, and most users will come to accept the AI as a good enough version, especially when the AI system is programmed to continually improve itself without fatigue or needing motivation.

  3. Michael Flood says:

    Privacy? I think any user is wise to assume that anything communicated to a chatbot is recorded and likely part of a permanent database somewhere. But what does that mean? In many industries (including Education, Healthcare, Telecom, etc) there are specific types of “protected information”. Employees are expected to be trained on handling and safeguarding that information.

    Do the laws that apply to human employees equally apply to chatbots? What if a customer/user provides PII, CPNI, HCI, etc. to a chatbot unsolicited? Do we expect the AI to recognize the sensitive information and mask it from any permanent records?

    What happens when these databases are compromised? Do you want a detailed conversation with a health assistant chatbot about your affliction to be public record?

    Aside from protected information, what about just the expectation of privacy or confidentiality? Do end-users really comprehend how these conversations can be mass-processed, analyzed and made actionable by algorithms?

  4. Warren Blyth says:

    At Oregon State University Ecampus we (the Custom Multimedia team) have discussed exploring chatbots to build an ongoing FAQ service for online courses. In theory, students tend to have the same questions each term, so it might help an instructor if a chat bot assistant could step in immediately (when relevant). Plus, one of the main gripes about distance education is wait time for a response, so a “virtual TA” that is available 24/7 would likely be appreciated. (just haven’t had time to experiment. and there is the privacy concern when aggregating data each term).

    Plus, personally, when looking into biometrics/psychophysiometry (measuring user response using a variety of sensors) for video games, it seems necessary to have a trained clinical psychologist give an exit interview to interpret key moments in the data. Often wondered if a chat bot could be trained to recognize when answers need to be drilled into deeper. And what would happen if a game was considering user reactions constantly DURING play. (Like, if your elven assistant were to step in and ask what you’re thinking from time to time. “Are we going to the castle to kill the goblin? or to rescue the princess?” Seems like directly identifying user play-intent could lead to much better focused experiences.)

  5. Sowmyan says:

    Interesting.

    I agree with Warren Blyth that response time to queries is poor in distance education.

    I am not sure if bot development has some industry-wide cooperation strategy coordinated by anybody. Perhaps everyone would be trying to develop an AGI device. I would imagine a personal assistant should have an ability to interact with the individual and take care of unique idiosyncrasies such as a personalized vocabulary, understanding the linguistic limitations of the individual, etc. On the ‘search for answer’ side, I guess cloud-based domain expert devices, that keep learning and remain current in their domain, can evolve. Individuals’ chatbots should be able to query these devices and get the precise answer.

    The domain expert bot should be able to find that specific answer and word it as a short text. Deep inside a domain, context-specific use of words may be different than in general use. While I was thrilled with the advances made by search technology in giving me the 20 most relevant links, I am getting tired of having to sift through these 20 webpages and find the answers. I am surprised that after so many years, we are still getting referred to so many pages. I would expect a synthesized one-page white paper as an answer at the top, followed by the conventional links. I find Wikipedia articles at the top for most queries. I now want a 2 to 3 sentence answer without having to search further.

    I think there are many domain-specific Q&A websites. Such sites can easily evolve into domain specialists. For example, Stack Overflow is a good resource for many technical questions. Even Quora is. Today these contain human-curated answers, many with good references, and also with opinions. A domain bot can perhaps keep scanning these kinds of resources, sift fact from opinion, and build its own knowledge. I think the chatbot should not only be interacting with its owner / licensee, but also be chatting with other bots on the web.

    I guess the education industry can grow this arm for a domain-specific knowledge depository. Perhaps universities can answer the questions in discussion fora in online courses under their name as ‘so and so univ’s bot’, and provide relevant curated answers. That could be their indirect ad targeted at students of relevance to them. The popularity of Twitter underlines the human preference for short communication. Perhaps an approach of letting people jointly write an answer, Wikipedia style, would be better than many answering questions in their own way.

    The key question then will be: do we want to lose cognitive diversity? Wikipedia style is not without its cognitive diversity, as multiple people are still involved; just that the final output is one version. We have enough divergence on the web. We need a lot of convergence support too.

    Established majors may not want to have a cooperative web. But the FOSS movement can promote a structure for multiple players to collaborate and evolve in curating knowledge and provide concise answers.
