How might higher education respond to GPT-4? A community conversation with Ruben Puentedura

What can colleges and universities do about generative AI?  How can academics respond to this fast-moving technology?

Last Thursday we hosted computer scientist and ed tech leader Ruben Puentedura on the Future Trends Forum to explore the implications of large language model artificial intelligence.  I asked a few questions, then the community followed with a wide range of queries and observations, all ably handled by Ruben.  Here’s the whole recording:

Our conversation covered a lot of ground, from the strategic to the tactical.  Ruben responded with his customary mix of deep knowledge, love of learning, and humor.  AI literacy emerged as a theme.

Some topics, ideas, and observations, starting with assessment:

From one of my faculty: “ChatGPT is changing the face of assessment. Let’s go back to paper, pencil and bubble sheets.”

I think it’s also part of digital literacy – we’re saying the same thing about ChatGPT as we did about students using Wikipedia. Trust, but verify.

I think that this shift in ed tech is pushing us to reconsider what we want from our students. Do we want lower-level (Bloom’s) activities and expectations, or are we truly pushing for those higher-level skills?

A student I know (not from my institution) says that their workflow is: ChatGPT to build a draft, then WordTune to make it more human, then they “customize” it to put their own spin on it.

How can we adjust assessment to look at the process of learning rather than the product of learning?… I am excited about the opportunity to build scalable tutoring paired with mastery learning to achieve a solution to Bloom’s 2 sigma problem.

Then thoughts on class operations:

This is what needs to be taught to students – how to write proper prompts.

The first thought I had was about a teaching assistant for every teacher.

Interesting angle on copyright:

Will higher ed and US Copyright clash with each other? (Rhetorical, of course – they already are sometimes.) What happens when students who get used to including ChatGPT content in their papers, with permission, get told they can’t…

Plus using ChatGPT to quickly draft a 300-page textbook.

A note on Dunning-Kruger and ChatGPT:

Improving the ability to find real sources:

OpenAI’s introduction to GPT-4; Microsoft’s introduction to Copilot.

Screenshot of Ruben discussing ChatGPT with Ranjana Dutta on the Forum, by Sarah Sangregorio.

The topic is one we’re keenly interested in, and it has serious depths to it, so we didn’t get to all of the questions posed by the Forum community.  Here are some of them:

Digital access and equity are essential. Platforms are acting to create access for users, but education evolves at a glacial pace. How do we counter teacher resistance when any change is an issue?

You mentioned having GPT4 help students understand sources. How do you see AI helping students understand references if it has trouble identifying them?

How important is the discussion of ethics and intellectual property when working with AI? After all, simulations, role-playing, and interaction are important, but they are also feeding the system.

Seems we now have a real problem as to how to do learning and assessment at scale (traditional higher ed funding model) when these tools demand a more thoughtful, focused assessment approach.

If AI is to be a part of our education, do you see institutions/schools as negotiating group subscriptions so ALL teachers & students have equal access to these tools? [The librarian wonders]

What about Chomsky’s stance on the morality of using such tools? As a society, why not “accept” the imperfection of our human limits rather than try to supplant them? Else, what’s the endgame?

Plagiarism and cheating – those are minor concerns; what about some of the other “dangers” of AI? Who owns the data? What are they collecting? Where will this widen the gaps we see today? AI lies?

Have you tried GPT-4's new Socratic tutor system message? It is quite impressive. An example of that:
https://sharechatgpt.com/share/c1fcc0191d1f22549731ca00b92a3e16

With a ChatGPT focus on writing, are we overlooking how it can support information collection and summarization, or rephrasing to better understand complex information, essentially a reading focus?

How much of writing is just doing what ChatGPT is doing? How much do we teach this kind of writing as a proxy for thinking? What does this say about how/what we teach?

Clarify: as I see it right now, ChatGPT 3 requires a lot of expert-level knowledge to vet whether the information or writing it spits out is credible – is this true of GPT-4 as well?

Why is it we hardly blinked over tools like Grammarly, which passively edit student essays?

Related to equity: Should we expect our students & ourselves to keep feeding ChatGPT for corporate gain, a la Turnitin & others? Is it a “fair trade-off”?

Will verifying the accuracy and completeness of ChatGPT take more time than using other means of doing research? (In response to the answer about gaming and ChatGPT being used by students in the analysis of history.)

I have gotten creative data and references. I would worry about them using ChatGPT as a partner to build on right now. How can we verify information without having the teachers do that?

[I]t creates wrong references/data. An average student does not verify if the AI is giving correct information.

As a developmental psychologist, I already see the loss of math skills in newer generations since calculators, and spatial skills since GPS… what should we be ready for with ChatGPT?

What are the patterns with regard to skill development in using ChatGPT and other chat AI tools that we are seeing?

What happens to copyright and publishers in this new era?

My background on this question doesn’t fit in this box… I would like to offer an analogy, but I’d like to hear more speculation on what he thinks the human potential developments will be as a result…
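
One of the questions above mentions GPT-4’s “Socratic tutor” system message, which OpenAI showcased in its launch materials. For readers who have not tried it, here is a minimal, hypothetical sketch of how such a system message can be supplied through OpenAI’s Python library as it existed in early 2023. The model name, prompt wording, and sample exchange are illustrative assumptions, not the exact demo.

```python
# Hypothetical sketch: steering a chat model toward Socratic tutoring
# via the system message (openai Python SDK, 0.27-era interface).
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supply your own key

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumption: an account with GPT-4 access
    messages=[
        {
            "role": "system",
            "content": (
                "You are a Socratic tutor. Never give the student the answer "
                "directly; instead, ask short guiding questions that help "
                "them reason their way to it."
            ),
        },
        {"role": "user", "content": "How do I solve 3x + 7 = 22?"},
    ],
)

# Print the tutor's first guiding question.
print(response["choices"][0]["message"]["content"])
```

The point is simply that the system message reshapes the model’s behavior, turning it from an answer machine into a questioner, which connects to the earlier comment that students need to be taught how to write proper prompts.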

Looking ahead, Nicole Hennig will offer a webinar on this topic.


This session was our fourth on GPT.  Here are the previous three, from early December 2022, late December, and February 2023.

 

More to come!


