How might higher education respond to GPT-4? A community conversation with Ruben Puentedura

What can colleges and universities do about generative AI?  How can academics respond to this fast-moving technology?

Last Thursday we hosted computer scientist and ed tech leader Ruben Puentedura on the Future Trends Forum to explore the implications of large language model artificial intelligence.  I asked a few questions, then the community followed with a wide range of queries and observations, all ably handled by Ruben.  Here’s the whole recording:

Our conversation covered a lot of ground, from the strategic to the tactical.  Ruben responded with his customary mix of deep knowledge, love of learning, and humor.  AI literacy emerged as a theme.

Some topics, ideas, and observations follow, starting with assessment:

From one of my faculty: “ChatGPT is changing the face of assessment. Let’s go back to paper, pencil and bubble sheets.”

I think it’s also part of digital literacy – we’re saying the same thing about ChatGPT as we did about students using Wikipedia. Trust, but verify.

I think that this shift in ed tech is pushing us to reconsider what we want from our students. Do we want lower-level (Bloom’s) activities and expectations, or are we truly pushing for those higher-level skills?

A student I know (not from my institution) says that his workflow is: ChatGPT to build a draft, then WordTune to make it more human, then he “customizes” it to put his own spin on it.

How can we adjust assessment to look at the process of learning rather than the product of learning?… I am excited about the opportunity to build scalable tutoring paired with mastery learning to achieve a solution to Bloom’s 2 sigma problem

Then thoughts on class operations:

This is what needs to be taught to students – how to do proper prompts.

The first thought I had was about a teaching assistant for every teacher

Interesting angle on copyright:

Will higher ed and US Copyright clash with each other? (Rhetorical, of course – they already are sometimes.) What happens when students who get used to including ChatGPT content in their papers, with permission, get told they can’t…

Plus using ChatGPT to quickly draft a 300-page textbook

A note on Dunning-Kruger and ChatGPT:

Improving the ability to find real sources:

OpenAI’s introduction to GPT-4. Microsoft’s introduction to Copilot.


Screenshot of Ruben discussing ChatGPT with Ranjana Dutta, by Sarah Sangregorio

The topic is one we’re keenly interested in, and it has serious depths, so we didn’t get to all of the questions posed by the Forum community.  Here are some of them:

Digital access and equity are essential. Platforms are acting to create access for users, but education evolves at a glacial pace. How do we counter teacher resistance when any change is an issue?

You mentioned having GPT4 help students understand sources. How do you see AI helping students understand references if it has trouble identifying them?

How important is the discussion of ethics and intellectual property when working with AI? After all, simulations, role-playing, and interaction are important, but they are also feeding the system.

Seems we now have a real problem as to how to do learning and assessment at scale (traditional higher ed funding model) when these tools demand a more thoughtful, focused assessment approach.

If AI is to be a part of our education, do you see institutions/schools as negotiating group subscriptions so ALL teachers & students have equal access to these tools? [The librarian wonders]

What about Chomsky’s stance on the morality of using such tools? As a society, why not “accept” the imperfection of our human limits rather than try to supplant them? Else, what’s the endgame?

Plagiarism and cheating – those are minor concerns, what about some of the other “dangers” of AI? Who owns the data? What are they collecting? Where will this widen the gaps we see today? AI lies?

Have you tried GPT-4's new Socratic tutor system message? It is quite impressive. An example of that:
https://sharechatgpt.com/share/c1fcc0191d1f22549731ca00b92a3e16

With a ChatGPT focus on writing, are we overlooking how it can support information collection and summarization, or rephrasing to better understand complex information, essentially a reading focus?

How much of writing is just doing what ChatGPT is doing? How much do we teach this kind of writing as a proxy for thinking? What does this say about how/what we teach?

Clarify: as I see it right now, Chat GPT 3 requires a lot of expert-level knowledge to vet whether the information or writing it spits out is credible – is this true of GPT 4 as well?

Why is it we hardly blinked over tools like Grammarly, which passively edit student essays?

Related to equity: Should we expect our students & ourselves to keep feeding ChatGPT for corporate gain, a la Turnitin & others? Is it a “fair trade-off”?

Will verifying the accuracy and completeness of ChatGPT take more time than using other means of doing research? (In response to the answer about gaming and ChatGPT being used by students in the analysis of history.)

I have gotten “creative” data and references from it. I would worry about students using ChatGPT as a partner to build on right now. How can we verify information without having the teachers do that?

[I]t creates wrong references/data. An average student does not verify if the AI is giving correct information.

As a developmental psychologist, I already see the loss of math skills in newer generations since calculators, and spatial skills since GPS… what should we be ready for with ChatGPT?

What are the patterns with regard to skill development in using Chat GPT and other Chat AI tools that we are seeing?

What happens to copyright and publishers in this new era?

My background on this question doesn’t fit in this box….would like to offer an analogy but I’d like to hear more speculation on what he thinks will be the human potential developments as a result…

Looking ahead, Nicole Henning will offer a webinar on this topic.


This session was our fourth on GPT.  Here are the previous three, from early December 2022, late December, and February 2023.

 

More to come!


Imagining the climate crisis: notes on Extrapolations

How can we imagine the climate crisis in its full complexity, menace, and possibility?

Answering this question is what climate fiction (never call it “cli-fi”) attempts to do. I’ve been exploring the field (some examples here), and wanted today to share notes on a fresh example: the new tv show Extrapolations, the first three episodes of which Apple TV just released.

Extrapolations poster

A quick summary: “2037: A Raven Story” takes us fourteen years ahead to a world much like our own, with smartphones, COP meetings, and feuding families. A pair of rich men scheme to build a casino in the largely ice-free Arctic. “2046: Whale Fall” advances nine years and focuses on a cetologist, Rebecca Shearer, whom we saw in the first episode, studying what may be the last whale.  She works for a company trying to rebuild animal species, while caring for a child stricken with heart problems.  “2047: The Fifth Question” turns to the first episode’s rabbi, Marshall Zucker, who has moved to Florida after all, where a massively underpopulated Miami is gradually sinking beneath the ocean.  It’s also the funniest episode.

Across all three episodes the climate crisis is getting worse. Temperatures have risen, causing health problems. Fires are more widespread than now, sometimes choking cities.  There are fewer animal species. One character sums things up by describing humanity as “saying goodbye to each other, animals, cities.”  We learn of a medical problem through new language: “summer heart,” naming cardiomyopathy peaking in that season.  A mosquito bite kills a character, presumably through disease.

Extrapolations energetically develops its future on other counts. It shows some technological developments.  Holographic projections, huge and also small, are widespread.  One character wears a heart monitor on the outside of his clothing, clearly flashing his status to all around him. We also see digital projections onto pools of water, self-driving personal helicopters, machine translations of whale language, a combination of Roomba and shop vac, commodity 3D-printed medication, transparent laptops, virtual reality and phone functions built into glasses, and paper-thin mobile phones. Humans have landed on Mars and ended cancer (these two are mentioned just in passing).

Political and social changes also appear. We see signs of climate-related unrest in the first episode. The state of Florida develops an Office of Sea Level Mitigation.  Texas has left the United States, somehow, or at least exited American democracy. There is a relocation (“relo”) service in America, helping people move from dangerously hot and water-threatened areas to the upper Midwest and Alberta; it’s not clear if this is a federal program or something offered by charities, companies, the United Nations, or American states and Canadian provinces.

There’s an overarching sense of inevitability in the show so far. Temperatures and sea levels are just rising, without doubt, and also without any mitigation. There’s nothing people can do to slow down the crisis. Geoengineering and carbon capture efforts aren’t apparent so far.  Instead, the show depicts humans trying to cope in various very human ways: making a buck, saving what we value, helping people in need, enjoying what one has.  Today’s efforts to address the climate crisis – switching from fossil fuels to renewable sources, changing consumer behavior, etc. – don’t appear in this world of resigned adaptation.  There’s a running ethical debate on what’s best to do in the situation, contrasting sacrifice for others versus protecting or advancing one’s interests and family.

Symbolically, Extrapolations begins by steeping us in fire and especially water.  The first episode contrasts these elements repeatedly, alternating between wildfires and rain, burning ambition and the cold, largely ice-free Arctic waters.  A pregnant character caught in a New York firestorm has her waters break as she’s rescued from the inferno.  “2037: A Raven Story” ends with the most fiery character immersed in water, hinting at what comes next.

Episodes two and three tamp down the fires and drown us in water. “2046: Whale Fall” takes place mostly on or under water as we follow the cetologist’s efforts.  Scenes peer out through a submerged station’s staggeringly huge windows into benthic deeps, where we meet the saddest character, possibly the world’s last whale.  “2047: The Fifth Question” is all about the Atlantic steadily drowning Miami. Terrific images show us the city progressively abandoned, literally swamped, decaying.  The plot turns on the fate of a flooded synagogue.  The episode climaxes with a gigantic storm, overrunning the city’s barriers.

Literally, the future is more water, but there’s a rich symbolic freight to this as well for a work of fiction.  Water as emotion, or emotions running high, obviously fits here. The use of running water, rivers in particular, to represent the course of life matches the show’s time scale, as we skim decades. Water as a source of cleansing, rebirth, and fecundity is conflicted so far. Extrapolations shows us humanity decreasing and in retreat, and the natural world following suit: water as anti-life, anti-fertility.

Yet the ethical dimension I mentioned earlier might be something rising water reveals.  The cetologist rethinks her life and work in the sea. The rabbi (whom we first see making a joke about water) rethinks his choices and work through a dangerous immersion.  His student, the episode’s ethical center, is obsessed with the secular flooding around her and the Biblical account of God’s watery wrath.

Perhaps this is where Extrapolations is headed, using water in its literal and symbolic forms to portray a kind of human progress amidst the irresistible crisis.  I’ll keep watching.

I’m reminded of two other recent tv/movie works.  Don’t Look Up is the obvious comparison, similarly stentorian in its climate message. So far Extrapolations is slower, more moody, less satirical.  They do share that common figure of our age, a powerful and sinister billionaire.

Perhaps this new Apple TV production has more in common with the unfortunately little-discussed Years and Years, a British series tracking an extended family over the next two decades through various social challenges and dislocations.  Russell T. Davies offers a similar years-in-the-future strategy, advancing the clock and showing us grand problems through familiar characters, but Davies’ people are much more front and center, more flawed and interesting.  Extrapolations is flatter in terms of characters, less engaging on that front.

Do check out Extrapolations.  It’s an interesting instance of climate fiction.

 


Slashing humanities degrees; preparing for a queen sacrifice?

Greetings from a rainy, chilly northeastern Virginia day. I’m buried in work, especially on climate change and AI, but wanted to note this story as I keep modeling post-peak higher education.

College of Saint Benedict and Saint John’s University seals

In Minnesota is a pair of campuses, the College of Saint Benedict and Saint John’s University, which, while separately named and around five miles apart, are closely linked, even to the point of sharing the same board of trustees.  Those trustees just approved a proposal from their combined president to cut a series of majors, concentrations, and minors.  Almost all of these are in the humanities, according to local radio news:

The list of majors being phased out includes Ancient Mediterranean Studies, Gender Studies, and Theater (minors will remain in these programs). The Dietetics concentration in Nutrition, and the composition, performance, and liturgical music concentrations within Music, are also included.

Language and area studies in particular landed on the chopping block:

Language majors and minors being phased out include French, German, Latin, and Japanese. Asian Studies, Chinese, Greek, and Peace Studies programs will disappear entirely.

Why is this pair of campuses taking such a step?  I can’t find arguments about an overall financial crisis; rather, Inside Higher Ed says the cuts are about enrollment within those programs, as well as slipping overall enrollment:

The provost of the College of Saint Benedict and Saint John’s University has cited overall enrollment decreases and specifically low enrollments in these courses…

The linked institutions’ enrollment has dropped 25 percent over the past 15 years…

Provost Richard Ice… provided charts showing that, among the majors being eliminated, dietetics has 29 students enrolled, peace studies has 10 and all the rest have fewer.

In addition, there’s this statement from the president: “President Brian Bruess … says reducing the courses available will allow resources to be shifted to popular degree programs and keep the placement rate up.”  Provost Ice concurred:

“It was about prioritization,” Ice said of the program cuts. “We [have] a number of programs that are large and growing, and some of them, I’ll grant you, are in the professional programs. But we did not have the resources to move into those areas of high demand by students, and so this was about repositioning.”

College of Saint Benedict and Saint John's University mapped

Why does this matter for the broader picture of higher education, beyond the particularities of this academic entity?

We can see the general trend of declining humanities majors expressed here.   There is also the strategic principle of withdrawing support from programs serving relatively smaller numbers of students, compared to other departments at the same institution.  Enrollment changes overall and within units are key.

Is this a queen sacrifice, when an institution cuts tenure-track faculty?

That’s harder to say.  According to Inside Higher Ed faculty cuts have already been happening, and some more are on the way:

Ice said the changes will mean 25 fewer full-time-equivalent positions, but the linked institutions have already reduced about 20 of those positions, including through retirements. He said there’s an ongoing incentivized retirement program.

The article describes these positions in terms of FTE, but not tenure status, so it’s not clear how many were, say, full time contract employees without tenure, or a mix of adjuncts.  If the retirements don’t cut enough instructors, are layoffs in order?  That is the main way of reducing spending.

According to Wikipedia the two campuses employ “300 full-time; 52 part-time” faculty.  Assuming only full-time instructors, these cuts represent roughly 8% of the whole.

In other words, this *could* be a queen sacrifice in operation, or one about to happen.

As I keep saying, watch for more of these.

(thanks to Jason Green @ Mastodon)


A flurry of AI releases now: GPT-4 and new educational projects

The generative AI world continues to develop at a very fast pace.  I’m buried in project work right now, but wanted to quickly note some developments.

The big announcement today was OpenAI’s launch of GPT-4, the next version of its large language model.  GPT-4 draws on a larger training set and can also write responses to user-uploaded images.

Note how OpenAI uses higher education tests to demonstrate the software’s improvement, version by version:

GPT-3.5 and GPT-4 performance on various tests (chart from OpenAI, March 14, 2023)

Currently it’s only available to ChatGPT Plus customers.  I became one for this purpose, and asked the bot a question: “How are you different from ChatGPT’s previous version?”

The reply:

As an AI language model, I am a constantly evolving and improving entity based on the advancements in natural language processing and machine learning. Although I’m presented as ChatGPT, I am based on the GPT-4 architecture, which is an advanced version of the previous GPT-3 model. While specific differences may not be mentioned explicitly, the improvements between the versions typically include:

  1. Larger training dataset: As newer versions are developed, they usually benefit from an increased amount of data used in the training process. This enables me to have a broader understanding of various topics and generate more accurate and contextually relevant responses.
  2. Improved language understanding: With advancements in the underlying architecture, I can better understand and process complex sentences, nuances, and contextual information. This allows me to generate more coherent and contextually accurate responses.
  3. Enhanced conversational skills: As the model evolves, my ability to maintain longer and more meaningful conversations improves. This includes better context retention and the capacity to address more complex queries effectively.
  4. Reduced bias and harmful outputs: The development process aims to minimize the biases and harmful outputs that might arise from the model. This involves refining the training data and improving the fine-tuning process to create a more responsible and safe AI.

Please note that while I am an improvement over previous models, I am still not perfect. There may be instances where I generate incorrect or nonsensical answers. If you encounter such issues, please feel free to correct me or ask for clarification.

I’m exploring it.

Meanwhile, a bunch of educational projects using the ChatGPT API have rolled out.  Khan Academy announced Khanmigo, a putative tutor:

At almost the same time Grammarly announced GrammarlyGo, a ChatGPT-powered tool to help you write.  It’s somewhere between a tutor and coauthor:

What other educational applications have you seen which use ChatGPT’s API?
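For readers wondering what “using ChatGPT’s API” involves under the hood, here is a minimal sketch of the pattern such tools build on: a chat completion call whose system message sets the bot’s persona. This assumes OpenAI’s openai Python library as of early 2023, and the tutoring prompt is purely my illustration, not the actual code behind Khanmigo or GrammarlyGo:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder; use your own key

# The system message defines the bot's role. Educational apps differ
# mostly in what they put here and in the interface around the call.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # or "gpt-4", where available
    messages=[
        {"role": "system",
         "content": "You are a patient writing tutor. Give feedback on the "
                    "student's draft, but never rewrite it for them."},
        {"role": "user",
         "content": "Here is my draft introduction: ..."},
    ],
)

print(response.choices[0].message.content)
```

Swap out the system message and you have a different product; much of the variation across this first wave of tools seems to live in that prompt and the interface wrapped around it.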

(thanks to T.H.E. for one link)


Redesigning higher education in the spirit of donut economics

How can higher education reimagine itself in response to the climate crisis?

I’ve been exploring this question for years, and was very happy to encounter a new paper offering an intriguing call for academic design.  It’s about donut economics.

To explain too briefly: professor Kate Raworth introduced the donut economics concept in a paper, a book, and a lot of public appearances.  She wants us to rethink economics in terms of two boundaries: the environmental limits which ultimately constrain human life on Earth, and the humanitarian needs of people.  Between those two boundaries Raworth identifies a sweet spot, the titular donut, wherein our economy should operate:

 

donut economics model, showing ecological limits and social foundation

I’ve been thinking about the implications of donut economics for higher education since I first heard of the idea.  What might happen if the society around a college or university switched from neoliberalism (or whichever other system is in play) to the donut model? How would that impact campus sustainability in every sense? Would it change academic research and teaching?  What happens to community relations?  (If you’re interested, check chapter 6 of Universities on Fire).

Yet what if we consider applying the donut to higher ed in another way?

In “Rethinking academia in a time of climate crisis,” professors Anne Urai and Clare Kelly ask us not to think about reacting to an outside world that transitions to the donut, but to embrace Raworth’s model within colleges and universities themselves.

What might this mean?  Well, you should read the article, but to give you an introduction: Urai and Kelly start with the vital importance of the climate crisis and how higher ed should grapple with it.

Addressing the climate and biodiversity crisis demands transformative changes in our economies and societies. Academics, both as inhabitants of planet earth and in their professional roles, should take a leading role in this transformation.

Then they ask us (nominally scientists, but also all academics) to do this:

[Adapt] the doughnut to academia’s microcosm [which] enables us to visualize a space defined by an inner social foundation (which universities should provide), and outer human and planetary ceilings (which universities need to avoid overshooting).

What is a university’s social foundation?  Which ceilings should we avoid hitting? The paper breaks this down in some detail:

The social foundation academia should provide:

  • Academic freedom. Time to think, room for curiosity-driven research.

  • Good jobs and careers. Work that is valuable and valued, in good, equitable conditions. Secure, satisfying careers with perspective and recognition. Sufficient and equitable resource provision for materials, infrastructure, and scientific support.

  • Community. Democratic self-governance. Norms and incentives that create healthy, supportive, and collegial communities.

  • Diversity, equality, inclusion. Freedom of expression and identity. The opportunity to flourish in an academic community without bias and inequality.

  • Service to society. Societal engagement and input to policy, free from the influence of corporate (e.g., fossil fuel) interests. Responding to society’s needs in our research. Providing high-quality, accessible and affordable higher education options.

  • Reliable, trusted science. Research that is open, verifiable, and community reviewed. A society that trusts scientists, and science worthy of trust.

On the other hand (or the other side of the donut), “the human and planetary boundaries academia should not overshoot:”

  • Human load. Human intellect and creativity, both individual and collective, are academia’s most precious resources. Exceeding this boundary leads to burnout, mental health difficulties, and apathy.

  • Individualism. The myth of the “lone genius” remains at the heart of the academic picture of success. Overshooting this boundary leads to a devaluation of collaboration and team science and to excessive competitiveness, harassment, and power misuse.

  • Competition. The gutting of public funding for universities and science has increased competition for scarce resources (grants, publications, promotions, awards), to the detriment of teamwork and collaboration.

  • Metric fixation. An overshoot of rankings, quantitative metrics, and assessment leads to runaway bureaucracy and perverse incentives to “game the system”. When promotions and hiring processes are yoked to the same goals, the overshoot leads to excessive pressure to publish and to win funding, resulting in irreproducible work, the “rich getting richer”, and academic nepotism.

  • Commercialization. Public funds should be used to provide common goods, services, and knowledge that benefit society. Excessive commercialization can lead to academic labor being siphoned off by extractive market players (e.g., for-profit publishing, corporate intellectual property, and patents).

  • Planetary impact. Science and academic research can be a high-resource pursuit. We need to change our own practices to stop overshooting planetary boundaries.

How this might play out rests on seven ways for academics to proceed, again adapted from Raworth:
Seven ways to think like a 21st century scientist.

1. Change the goal: from a business that produces papers and graduated students, towards a university that works towards the inside space of the academic doughnut. 

2. Get savvy with systems: from feeling like a cog in the university machine, towards being gardeners of our academic system. 

3. See the big picture: from academics who look out over the world from their ivory tower, towards scholarship which accepts its own embeddedness in (and dependence on) society and the planet. 

4. Create to regenerate: from a rat race where we tread water, towards “slow scholarship” that values community building, deep thinking and rest crucial for intellectual work. 

5. Nurture human nature: from the lone genius, towards team science. 

6. Design to distribute: from a funding system where the rich get richer, towards a fair distribution of opportunities and resources. 

7. Be agnostic about growth: from a focus on increasing numbers of papers, citations and students, towards rebuilding trust in our own academic communities and with society.

I’m fascinated by how these recommendations connect with various strands of academic politics, such as the resistance to metrics or the call for more public funding or the push some of us have been working on for open education resources and open access in scholarly publication.

I’m more excited by how Urai and Kelly connect such details of academic life to aspects of the much broader global crisis. Is there a rising anti-individualist ethos, and how might that apply not only to scholarship but also to pedagogy? Do we decide collectively to produce less, both in terms of overall economic output as well as scholarship and teaching?  Do academics decide to reduce their greenhouse gas emissions as part of the general decarbonization movement, or as a combination of leadership and experimentation?

See what you make of this paper.

(donut infographic by DoughnutEconomics – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=75695171)

 


What does it mean when leading universities accept millions of dollars from fossil fuel companies?

How do we think about large fossil fuel companies donating funds to universities?  How might our attitudes change as the climate crisis worsens?

These questions came to mind as I read a new report from Data for Progress.  DfP found that the leading carbon energy firms gave nearly $700 million to a small group of American universities from 2010 to 2020.

Here I’d like to summarize the report, then explore implications.

tl;dr – six energy companies “donated or pledged at least $677,373,368 between 2010 and 2020 to 27 universities.”

In more detail:

The companies in question are “ExxonMobil Corporation, BP America Inc., Chevron Corporation, Shell Oil Company, ConocoPhillips, and Koch Industries.”  Their leading recipients “were the University of California, Berkeley; the University of Illinois at Urbana-Champaign; and George Mason University.”

That nearly $700 million is likely an undercount:

Due to a lack of transparency across higher education, including instances where companies rescind or cut portions of university donations, it is unlikely that the funding listed below captures the full extent of fossil fuel industry funding at these universities.

The Berkeley-BP connection is striking:

UC Berkeley fossil fuel donations, 2010-2020 (chart from Data for Progress)

Why does this matter to higher education and the broader world?

First of all is the question of paying for research results, or corrupting the academic research enterprise.  The report cites MIT as one leading example:

Massachusetts Institute of Technology (MIT) accepted at least $40,475,798 from these companies between 2010 and 2020, including funding for the MIT Energy Initiative. In 2011, the initiative released a report that advocated for natural gas to be the bridge to a “low-carbon future,” dismissing research that found natural gas is, in fact, more harmful due to methane leaks. The following year, President Obama referenced MIT’s Ernest Moniz’s natural gas findings and appointed him secretary of energy, kicking off the fracking boom of the 2010s.

This might become the case at Stanford, where fossil fuel “funding risks corrupting research produced by the Doerr School of Sustainability, as funders will likely want to curtail the release of reputation-harming data, making the new climate research biased at worst and interest-conflicted at best.”

Inside Higher Ed points us to some research on this point:

Concerns about the influence of fossil fuel money on research, and climate research in particular, were reignited a year ago after a study in Nature found evidence suggesting centers and labs funded by those companies produced more favorable research on methane and natural gas as alternatives, rather than renewable energy sources like solar or wind power.

Second, the donations raise the question of climate greenwashing, whereby these companies launder their reputations through academic work.  The report points at Stanford again as an example:

The fossil fuel funders of clean energy and climate research at Stanford have been using these donations to counter claims that they are destroying the planet — a greenwashing tactic that distracts from the fact that their business model is nevertheless entirely dependent on oil, gas, and CO2 production.

Third, these energy companies could use their funded academic partnerships to influence policy decisions, relying on the universities to help make their arguments to the public:

[I]n documents released by the U.S. House Oversight and Reform Committee, one BP executive remarked that the company’s relationships with universities such as Harvard, Tufts, and Columbia “are key parts of [BP’s] long-term relationship-building and outreach to policymakers and influencers in the US and globally.” Fossil fuel companies and their allies deliberately leverage the legitimacy of universities they fund to expand public perception that they are credible actors.

Fourth, such academic donations give rise to charges of institutional hypocrisy, as some of the recipient universities have taken public positions against global warming. In the report’s words,

Hundreds of U.S. universities, such as Harvard University and the University of California, Berkeley, have recognized this and made commitments to support the fight against climate change. However, when these institutions look the other way as climate deniers, fossil fuel interests, and discredited backers of pseudoscience capitalize on their name and credibility, universities undermine these commitments and other positive actions….

When universities allow fossil fuel companies to buy and advertise connections to university research on key climate and energy issues, they provide these companies with much-needed scientific and cultural legitimacy.

Fifth, campuses seeking out such external financial support are good examples of the entrepreneurial university.

University responses to this report, noted in The Guardian, have been interesting.  One is to downplay the amount of funding as a proportion of an institution’s budget:

Dan Mogulof, assistant vice-chancellor at Berkeley, sent the Guardian a full accounting of the university’s fossil fuel donations, which he said represent less than 1% of its total research funding.

Another is to cast doubt on the numbers:

Stanford spokesperson Mara Vandlik said: “It’s unclear how these numbers were calculated as we do not share this information publicly,” adding that the university has formed a committee to review the question of fossil fuel funding of research.

Or to flat out deny influence charges:

“Our research reports are the work of MIT faculty, staff and students,” the MIT Energy Initiative (MITEI) said in a statement. “The funders, whether they are foundations or MITEI members, have no control over the content of the reports – no approval or rejection, no opportunity to accept or reject any findings.”

Or to claim the industry donations are partnerships which help research:

“They provide a lot of guidance and they keep you honest,” George Huber, at the University of Wisconsin, told the Guardian. Huber’s cellulosic biofuels research has received funding from a variety of fossil fuel companies, including ExxonMobil.

All of this is taking place in the present or recent past. How might our attitudes change over the next few years?

I and others have been forecasting that American (and global) concerns about the climate crisis will grow, an increase driven by a mixture of escalating climate-driven damages and demographic succession (younger people are far more climate-concerned than their elders).  Perhaps this new public attitude will not look kindly on leading universities taking oil dollars. This response could play out in many ways: state or federal regulations; private donors withholding support; pressure on state governments to cut back subsidies to public institutions.

The Data for Progress report offers an early sign of this potential opinion shift. They surveyed Americans and asked how their view of these institutions changed after learning about the fossil fuel donations.  In many, but not all, cases the results were significant:

Changing views of universities after learning of fossil fuel donations (two charts from Data for Progress)

When people learn about the oil and gas donations their unfavorable views generally tick upwards, while their favorable views decline a bit.

The picture might already be changing.  The report looks at data ending in 2020.  Climate activism has built up since then, at least before the Ukraine war, so perhaps universities have reduced their reliance on fossil fuel funding.  A quick note in the Guardian’s report suggests as much: “Asked about the new research, several universities described measures they had taken to mitigate concerns, or pointed to more recent reductions in accepting donations.”

Here’s what Data for Progress calls for: “climate activists, allies, academics, and universities should establish funding transparency, institute robust conflict-of-interest and gift acceptance policies, and ban fossil fuel money… [W]e must demand fossil free research now.”

EDITED TO ADD: on Facebook, Pat Parslow asks us to imagine this possibility:

I’d be interested to see whether employers start to ‘boycott’ graduates from institutions which take this sort of funding. I know some younger execs who are likely to give a flat pass.

How likely is this to happen at scale?  What would it take to occur?

Or how else might academics respond to this story of universities accepting money from global warming-driving companies – if at all?

(via Inside Higher Ed)


Experimenting with using ChatGPT as a simulation application

How might we use generative AI tools in higher education?

Many address this issue by focusing on chatbots as writing apps, trying to think through what the technology means for teaching writing.  While this is obviously a major part of the topic, I think we need to explore other, emerging issues of tools like ChatGPT.  For example, talking ChatGPT into playing simulation and role-playing games.

Lately I’ve been experimenting with the research of a Wharton professor.  Ethan Mollick writes about getting the chatbot to create and run what he describes as “deliberate practice,” simulations preparing learners for further academic study and real-world work.

As a longtime fan of simulations and gaming, I approved of this approach.  I then decided to try it out myself, using examples and making adjustments of my own.

Here’s a sample. I started with Mollick’s language, which orders ChatGPT to do certain things, addressing it as “you.” For the simulation subject, I offered a history class.  Here’s my prompt:

I want to do deliberate practice about how to teach a college history class. You will be my teacher. You will simulate a detailed scenario in which I am a professor for this class. You will fill the roles of different students in the class, while I will play the role of instructor.  You will ask for my response at each step of the scenario and wait until you receive it. After getting my response, you will give me details of what the other actors do and say. You will grade my response and give me detailed feedback about what to do better. You will give me a harder scenario if I do well, and an easier one if I fail.

ChatGPT responded like so:

Great, I’d be happy to help you with this! Let’s start with a scenario:

You are teaching a college history class on the French Revolution. It’s the first day of the semester, and the class is full of 30 students. The students are sitting in pairs, and they are chatting and introducing themselves to each other. You are standing at the front of the class, waiting for the students to settle down.

As you begin your lecture, one student in the back row, Sarah, raises her hand and asks, “Why do we need to learn about the French Revolution? It happened so long ago and doesn’t really affect us today.”

What do you say to Sarah?


This is from Stable Diffusion, asked to envision “An AI creating and running a simulation game.”

The bot added logistical details and clarifications:

I’ll play the role of Sarah, and the other students in the class will listen attentively to your response.

After you respond, I will provide feedback on your response and give you a new scenario to practice.

That’s a significant role-playing exercise start right there: the setting established, a character introduced, and an interaction opportunity presented.  Note that while I posited “history” as a general subject, the bot selected its own historical topic under that header.

After reading and thinking through that opening move, I replied to the bot, addressing the student: “Sarah, that’s a good question. While the French Revolution occurred more than two centuries ago, it massively transformed Europe and other parts of the world. In fact, its impact is still reverberating today. So studying that event helps us understand today’s world and how it got here.”

ChatGPT responded:

As Sarah, I nod and listen attentively to your response. The other students seem to be engaged as well.

That was a good response! You provided a clear rationale for studying the French Revolution and highlighted its lasting impact.

So there’s the simulation continued (Sarah’s action) and a quick assessment of my work shared.  The bot then continued:

Here’s your next scenario:

In your next class, you plan to discuss the causes of the French Revolution. As you begin your lecture, one student, Alex, raises their hand and says, “I don’t understand why the French people were so angry about taxes. Didn’t they realize that taxes were necessary to fund the government and provide public services?”

What do you say to Alex? I’ll play the role of Alex, and the other students will listen attentively to your response.

OK, that’s a different kind of question than Sarah’s, one actually digging into the material.  I replied (to ChatGPT, to “Alex”)  with a Socratic twist: “Good question, Alex. Let’s dive into their anger more deeply. What taxes did they pay? How had they changed? And what did they object to, precisely?”

The simulation responded:

As Alex, I nod in agreement, and the other students seem engaged and interested in your response.

That was a good start to your response. By asking Alex to provide more context, you encouraged critical thinking and analysis, which is an important skill for history students.

Here’s your feedback: In your next response, you could consider providing more specific information about the taxes that the French people were paying and how they had changed. This could help Alex and the other students better understand why the French people were so angry about them. Additionally, you might want to explain the role of taxation in the French government at the time and how it related to the causes of the French Revolution.

Now that was interesting.  ChatGPT depicted the simulated event’s response to my action, then offered an assessment of how I did.  That led to some constructive criticism.

Then it carried on:

Now, for your next scenario:

In your next class, you plan to discuss the reign of terror during the French Revolution. As you begin your lecture, one student, John, raises his hand and says, “I don’t understand why Robespierre and the Committee of Public Safety thought that killing so many people was necessary to achieve their goals. Couldn’t they have found a less violent way to achieve their objectives?”

What do you say to John? I’ll play the role of John, and the other students will listen attentively to your response.

And so on.  This felt like the kind of introduction to teaching simulation I would have appreciated as a graduate student.

Next, I restarted the exercise by modifying the topic away from the humanities.  Inspired by a friend, I asked the bot to set up an anatomy class.  Here’s the scenario it generated:

You are teaching a college anatomy class on the cardiovascular system. It’s the first day of the semester, and the class is full of 25 students. The students are sitting in pairs, and they are chatting and introducing themselves to each other. You are standing at the front of the class, waiting for the students to settle down.

As you begin your lecture, one student in the front row, Tom, raises his hand and asks, “Why do we need to study the cardiovascular system in such detail? We learned about it in high school, and it seems like common knowledge.”

What do you say to Tom?

I’ll play the role of Tom, and the other students in the class will listen attentively to your response.

So that’s a repeat of the French history class, tick for tick.  This was disappointing, since I had already gone through the previous scenario, although it might be useful to someone encountering it fresh.  I asked the bot for a better student question.

As you begin your lecture, one student in the back row, Emily, raises her hand and asks, “I’ve heard that different people have different lung capacities. How does this affect our breathing and overall health?”

What do you say to Emily?

Not being a physiologist, I clumsily replied: “Emily, individual lung capacity differences can have a significant impact on a person’s health.”  ChatGPT responded:

Emily looks intrigued, and the other students seem curious to hear more.

That was a good start to your response. By acknowledging Emily’s question and expressing the importance of lung capacity differences, you’ve engaged your students and demonstrated your expertise on the topic.

Here’s your feedback: In your next response, you could provide a more detailed explanation of the impact of individual lung capacity differences on health. You might also want to discuss how different factors, such as age, gender, and fitness level, can affect lung capacity. Additionally, you could ask the class if anyone has experience with this topic or if they have any follow-up questions.

Again, very much like the French Revolution class responses: a mix of world-depiction, assessment, and pointers.


I asked MidJourney the same question.

How might ChatGPT handle the academic library world?  Could it provide a research conversation?  I posted another deliberate practice query: “I want to do deliberate practice about how to use the local library… You will simulate a detailed scenario in which I am a student starting to work on a research paper.”

The web service complied:

Sure, I’d be happy to help you practice how to use your local library! Let’s start with a scenario.

Scenario:
You are a student who needs to write a research paper on the history of coffee. You’ve never used your local library before, but you know that they have a lot of resources on the topic. You enter the library and approach the front desk. A librarian greets you and asks how they can assist you.

Librarian: Hello, welcome to the library! How may I assist you today?

What would you say or ask the librarian?

The history of coffee?  That’s an interesting topic, and I’m not sure where it came from.  Perhaps the bot picked up history as a theme, based on my earlier queries.  Coffee, though, I haven’t written about for years, nor consumed for a decade.

I played along: “Say to the librarian: ‘Hello. I need to research the history of coffee, but don’t know where to start.'”

Librarian: Great! We have plenty of resources on the history of coffee. Have you checked our online catalog to see what we have available?

What would you say or ask next?

Hm. A bit tentative.  I responded in kind: “Say: ‘I have not. How do I do that?'”

Librarian: No problem at all. You can either search our online catalog on one of the computers over there (points to a row of computers in the library), or you can use your own device to access our website. Our website address is displayed on the sign behind me. Do you have a library card with us yet?

What would you say or ask next?

Me: “Say ‘I have a library card, but don’t know what to search for on the online catalog.’”

ChatGPT:

Librarian: Okay, let me help you get started. Since you’re researching the history of coffee, I recommend searching for books on that topic. You can start by typing “history of coffee” or “coffee culture” into the search bar. Once you hit search, you’ll see a list of books and other materials related to your topic. You can narrow down your search results by using the filters on the left-hand side of the page.

What would you say or ask next?

So it offers two search terms, one already in play. I’m not sure how useful this was.

At another time I asked ChatGPT to create a different kind of “deliberate practice,” pitching it a game idea I’ve been working on: “You will simulate a detailed scenario in which I will manage a university. You will fill the roles of different actors on campus, while I will play the role of president.”

ChatGPT fell right in:

I’d be happy to help you with this! Let’s get started with a simple scenario.

You are the president of a small university. You have just received a letter from a student organization, requesting that the university adopt a new policy that would make the campus a more environmentally friendly place. The organization has proposed several specific changes, including reducing waste and conserving energy.

How would you respond to the student organization’s request?

I don’t know if ChatGPT picked the eco-topic by chance, or because I’ve put other queries to it along climate change lines.   A good simulation exercise, and I responded: “I would ask faculty members and staff to join them in producing and presenting a plan to my administration.”

Great! Here’s what happens next: The faculty members and staff join the student organization to produce a plan. After several weeks of discussions and negotiations, they present their proposal to you and your administration. The proposal includes a number of specific initiatives, such as reducing paper waste by promoting electronic communication, installing energy-efficient light bulbs, and recycling.

The student organization representatives explain that these initiatives will not only help the environment but also save the university money in the long run.

How would you respond to their proposal?

Once again the bot continues to develop the simulation, entering my response into the world and updating us on the results, then throwing me another request.  To which I simply reply: “I accept it and implement it.”

Good decision! Here’s what happens next: Your administration implements the proposals, and soon the university begins to see positive results. The initiatives are well received by the student body, and the university’s reputation as an environmentally conscious institution grows. Over the next few months, several other universities reach out to your administration for advice on how they can adopt similar initiatives.

A local newspaper contacts you for an interview, and you are asked about the initiatives. The reporter is particularly interested in the impact the initiatives have had on the university’s finances.

How would you respond to the reporter’s question about the financial impact of the initiatives?

There’s a lot going on in those few sentences. ChatGPT sketches out my academic community’s response, then adds other institutions, then piles on a media dimension, with another question for me.  If this reply occurred in a class I was teaching, live, I’d pause to get the students to think through each of those dimensions.

For the time being I offered this answer: “I would say that the initiatives cost some money, but are justified because they generate more enrollment.”

That’s a great answer! Here’s what happens next:

The newspaper runs a story about the university’s initiatives and their financial impact, and it generates a lot of interest. Several national publications pick up the story and run similar articles. The increased publicity results in a significant increase in enrollment, as many prospective students are drawn to the university because of its commitment to sustainability.

As the president of the university, you are invited to speak at a national conference on higher education and the environment. You are asked to share your experience and the impact the initiatives have had on your university.

How would you prepare for your speech at the conference?

The simulation’s ecosystem just keeps ramifying.  Now we get connections between journalism and enrollment, plus an academic association.

At this point I stopped, reran the scenario, and decided to play in a very different way.  The bot offered me the same prompt as before, about the student organization’s request. I took on a skeptical or climate denialist role: “I would deny it, explaining that environmental issues are not a high priority for us.”

ChatGPT led off its response with an “Okay, here’s what happens next” – note there wasn’t any “great answer!” this time.  I think we hit one of the service’s political guardrails.

It continued:

Your response to the student organization’s request is met with disappointment and frustration. Some students begin to voice their discontent with your administration’s stance on environmental issues, and several student-led protests are organized on campus.

The protests attract the attention of local media, and soon the university is in the spotlight. The media coverage is negative, and many prospective students and their families begin to view the university in a negative light.

As the situation continues to escalate, a group of faculty members approach you with a proposal. They suggest that the university should take a more proactive approach to environmental issues, such as offering courses on sustainability and green technologies.

How would you respond to the faculty’s proposal?

Once again ChatGPT integrated my reply into the simulation and showed how the situation changed.

Once again a new layer enters the game, this time activist faculty.  My character was unmoved and picked a classically noncommittal response: “I ask them to set up a committee to study the problem and present a report to the board.”

The bot liked this better:

Good decision! Here’s what happens next:

The faculty members agree to form a committee to study the issue of sustainability at the university and to present their findings and recommendations to the board. Over the next few months, the committee works diligently to gather data and conduct research on the topic.

When the committee presents its report to the board, it includes a number of concrete proposals for making the university more environmentally friendly. The proposals include initiatives such as installing solar panels, reducing water usage, and launching a recycling program.

The board is impressed with the report and agrees to adopt many of the committee’s proposals. The initiatives are well received by the university community, and the university begins to earn a reputation as a leader in sustainability.

How would you handle media inquiries about the university’s new initiatives?


Let me step back and reflect.


Another image from MidJourney on the same prompt.

It looks like this kind of parameter setting lets users talk ChatGPT into running an interactive simulation exercise. The bot will sketch out a scenario, asking the user to interact at every step. In response ChatGPT describes how the world changed, adding more details and complexity, before bringing another question to the user.

I expect we can explore this further by adjusting parameters, adding more details and requirements.
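One direction for that exploration: moving the whole exercise out of the chat window and into code. Here is a minimal sketch of scripting such a loop, assuming OpenAI’s openai Python library as of early 2023; the system prompt simply recasts my simulation instructions, and nothing here is an official recipe:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder; use your own key

# The deliberate practice instructions become a system message. The
# growing message list serves as the bot's memory of the whole scenario.
messages = [{
    "role": "system",
    "content": ("You will run a deliberate practice simulation in which the "
                "user is a professor teaching a college class. Play the "
                "students, ask for the user's response at each step, then "
                "describe what happens next and grade the response."),
}]

while True:
    # Ask the model for the next step of the simulation.
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # or "gpt-4", where available
        messages=messages,
    ).choices[0].message
    print(reply.content)
    messages.append({"role": reply.role, "content": reply.content})

    # Get the user's move and feed it back into the conversation.
    user_turn = input("> ")
    if user_turn.lower() in ("quit", "exit"):
        break
    messages.append({"role": "user", "content": user_turn})
```

A side benefit: the messages list is the complete transcript, exactly the kind of proof-of-work artifact I discuss below.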

At one point I ran into guardrails which blocked my efforts. I asked the service to simulate a contemporary military campaign, and it refused, because it didn’t want to depict or endorse violence.

At this point some of you might be thinking that “deliberate practice” and “simulation” are terms I’m using to avoid saying “role-playing game.”  And indeed, ChatGPT seems happy to be a game master for very nerdy purposes.

Can you serve as a dungeon master for a game of Dungeons and Dragons?

Yes, I can certainly serve as a Dungeon Master for a game of Dungeons and Dragons! However, as an AI language model, I do not have the capability to manage gameplay mechanics and physical game components. I can certainly assist you with creating a storyline, describing environments and characters, and helping you make decisions within the game world.

Oh good.  In fact, I was in the mood for a science fiction game.  Let’s see how it went.  Prompt:

I want to do a simulation of a science fictional scenario. You will be the game creator and master. You will simulate a detailed scenario in which I am an astronaut exploring a new planet. You will fill the roles of fellow astronauts, mission control, and aliens. You will ask for my response at each step of the scenario and wait until you receive it. After getting my response, you will give me details of what the other actors do and say. You will grade my response and give me detailed feedback about what to do better. You will give me a harder scenario if I do well, and an easier one if I fail.

Response:

Sure, I’d be happy to create a science fictional scenario for you to explore! Let’s begin.

Scenario: You are an astronaut on a mission to explore a newly discovered planet. You and your team have landed on the planet’s surface and are beginning to explore. As you venture further from the landing site, you encounter a strange alien creature.

Alien: (emits a series of clicks and whistles)

What would you say or do in response to the alien’s communication?

At this point my old, old interactive fiction instincts kicked in.  Getting more information: “Can you describe the alien in more detail?”

Certainly! The alien appears to be approximately four feet tall, with a slender body covered in iridescent scales that reflect a wide range of colors. It has a long, thin tail that trails behind it, and its eyes are large and almond-shaped, with pupils that appear to dilate and contract rapidly. The clicks and whistles it emits seem to be its form of communication, but you are unable to discern their meaning.

Hm. I wonder what I’m carrying. “Do I have any translation technology?”

No, unfortunately, your team did not bring any translation technology with them on this mission. You will have to rely on your own communication skills and any nonverbal cues the alien may give to try and understand it.

Hm again!  Let’s try this: “Hold up a tablet computer and display a map of Earth’s solar system.”

As you hold up the tablet computer and display the map of Earth’s solar system, the alien appears to take notice, tilting its head to the side and emitting a curious trill. It seems to be studying the map intently.

And so on.  One of the open secrets of the futurist community is that a bunch of us grew up on role-playing games.

So what might this mean for higher education?

To start with, it looks like any instructor or staff member seeking to create a simulation can use ChatGPT for assistance, along the lines of asking the bot to write text.  This could be useful for people new to the pedagogy.

Further, learners can now ask the bot to lead them in such simulations, simply by writing and adjusting the prompts we’ve discussed here. This could take the form of official assignments, or of any learner running them on their own. Students could then submit transcripts as proof of work.

What could this mean for plagiarism, intellectual property, student voice?

Beyond education, are large language models now competing with human and non-LLM digital roleplaying games for mind- and market-share?

Let’s end on a more futuristic note. So far I’ve shown how to make the current technology create interactive-fiction-like, text-only play. As the tech advances, imagine plugging this simulation/role-playing function into more advanced capabilities. Have the AI generate an image for every text exchange, say, along with an appropriate audio track. Or have it build game code in Unity or Unreal for users to run.

That’s enough for now.  Have any of you tried this approach to ChatGPT yet?  Let us know what your experiments show!


Another queen sacrifice might be in the works, this time in Virginia

How can a non-profit respond to financial stresses?  In non-profit higher education one response is to cut back on services and staff.

When a college or university does this, I call it a “queen sacrifice.”  That’s a term from chess, when a player gives up their most powerful piece – the queen – in a desperate move to win the game.  On campuses, tenure-track faculty often play this role, given their governance power and the long-term protections tenure provides, compared with adjunct faculty and all staff members.

I’ve been tracking academic queen sacrifices for a long time now; click here for examples.

Today’s instance might come from Virginia’s Marymount University, which just announced a series of program cuts.  Programs the board wants to end include, according to Inside Higher Ed, bachelor’s “majors in art, economics, English, history, mathematics, philosophy, secondary education, sociology, and theology and religious studies, and an M.A. in English and humanities.”  The Washington Post adds that “A BA program in economics will be eliminated, but the BS in that field will remain.” (The changes haven’t appeared on the university’s majors web page yet.)

The reason cited for this move: low enrollment in those programs.  As President Becerra explains, “MU cannot financially sustain offering majors with consistently low enrollment, low graduation rates, and lack of potential for growth.”

Some students and faculty have protested the policy.  Many have noted the irony of a religious institution cutting theology and religious studies.

I hesitate to label this a queen sacrifice yet, because nobody has been fired and no positions have been eliminated.  In another difference from the usual model, Marymount isn’t citing overall enrollment or financial problems.

So why bring this up at all? It may be that the program closures are all there is to the story.  The majors and grad degrees end, finis.

Yet it’s worth looking closely.


[image: Marymount University building, from the university’s official Flickr account]

First, notice the nature of the programs.  A good chunk are in the humanities, so this can serve as a datapoint in that field’s continuing enrollment decline.  (The cuts touch about one-sixth of all Marymount majors, according to the Chronicle of Higher Ed.)  If we extrapolate a bit, it might represent a shift from campuses offering full humanities programs to keeping the humanities as service departments which teach introductory classes only – e.g., Art Appreciation, Western Civ (for those who still do that), World History I, Introduction to Literature, and so on.

Also notice the cut to an education program.  This might strike some as perverse, given teacher shortages in much of the nation. Yet I’ve seen teacher education programs cut in many queen sacrifices, and I think it represents an important story. The field may simply be losing its appeal, especially after the pandemic’s stresses. And some prospective teachers may be looking ahead to continuing decline in the K-12 student population, not wanting to risk a career cut short in a decade or two.

Second, while I don’t know Marymount’s board, this may be a case of applied foresight. They may expect enrollment in those fields to keep declining, and hence prefer not to allocate resources where student demand is weak.

Further, they may anticipate a larger problem if the university’s overall enrollment gets hit in the short or medium term.  Causes might include declining religiosity among traditional-age students, anxiety or outrage over clerical sex abuse scandals (Marymount is a Catholic institution), growing public skepticism about higher ed, and, notably, the demographic cliff Nathan Grawe forecasts for circa 2026.  Perhaps reducing the number of majors gives the board and senior administration freedom to not replace retirements, or to cut faculty when things get rough.

For a queen sacrifice to occur, tenure-track faculty must be fired, and I don’t know enough about Marymount to make an informed guess whether that’s in the offing.  Firing is the only way to save real money, since personnel costs are academia’s biggest.  Cutting majors can be a way of preparing the ground for cutting people.


Third point: beyond saving money by eventually cutting and/or not replacing staff, this decision might also let the university shift resources to academic programs it considers more likely to enroll students. That doesn’t have to be a crisis strategy; it can be part of a growth plan.  As a statement emailed to a local news outlet put it, “Marymount will reallocate resources from those programs to others that better serve our students and reflect their interests.”

For now, another blow against the humanities and another campus to watch.

(campus photo from Marymount University’s official Flickr account)


Two figures for generative AI: the calculator and the mad scientist’s assistant

As people grapple with ChatGPT 3 and other instances of generative artificial intelligence, we sometimes turn to imagination in order to describe and understand the technology.

I’ve seen folks raise the Terminator movies, HAL-9000, and generic scary robots to express their fears, for example.

Others use history, bringing up comparisons from prior technological revolutions to think through possibilities: AI as the European printing press, say.  One of my young students told me AI feels to him like the appearance of the world wide web must have felt to me in the 1990s. (And sometimes it does.)

In this post I’d like to offer two historical analogies or imaginative figures for how we might experience generative AI.

I: The Calculator

When I started playing with generative AI and thinking about its educational implications, I thought back to the rise of the commodity digital calculator in the 1970s.  I remembered the debates over its potential impact which occurred around me.  Students might not learn basic math, went one claim, and would instead outsource those crucial skills to handhelds.  Numerical literacy would dwindle just when it was needed, as society became ever more closely tied to rapidly advancing science and technology.

[image: calculator, photo by Awilda Ortiz]

Over time we integrated calculators into teaching in various ways.  To oversimplify: some classes went on to require graphing calculators, while others, instructing younger students, taught the operations first, then let students outsource the work later on.  In other words, a pedagogical consensus emerged which included the technology.  The consensus persisted even as the physical calculator migrated into software forms.

There are already practices emerging which follow the calculator’s story in apprehending generative AI. For example, some instructors have students use ChatGPT to create essays, then critique the results as bland, badly cited writing, an exercise which advances the students’ own composition abilities.  Alternatively, students can use large language model tools to create first drafts of content, then edit, amend, and improve them on their own.  And so on.

Remember, too, that while we are accustomed to calculators in our lives – they are embedded so deeply as to become background noise – we don’t actually use them simply.  The world rarely gives us simple math problems we can enter, then results we can use straight away.  Instead, life mostly presents us with those dreaded story problems, which we need to translate into an operation calculators will perform; then we need to do something with the results.  So it is with generative AI. On the front end we need to formulate useful parameters, which takes some doing (and how many classes teach this?). On the back end we often have to work with the results: picking the best image of a set, redoing the prompt, perhaps editing the best image in another app. Or we take a chatbot’s text as a draft to revise.

Now, this calculator comparison assumes generative AI is as competent as a calculator. That isn’t the case today, as a range of AI tools make mistakes or produce terrible results. Yet we might see the technology’s quality improve to a point where many people find it sufficient for their purposes.  As one observer put it, “ChatGPT and its fellow essay bots are simply the scientific calculators of writing in a world that is still obsessed with four-function calculators.”  Heck, you can use ChatGPT to code a calculator.
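What might that look like? Here is a minimal sketch of the sort of four-function script such a prompt tends to yield – my own illustration, not ChatGPT’s actual output.

    # A four-function calculator of the sort ChatGPT readily writes on request.
    # This is a minimal illustrative sketch, not ChatGPT's actual output.
    def calculate(a: float, op: str, b: float) -> float:
        """Apply one basic arithmetic operation to two numbers."""
        operations = {
            "+": lambda x, y: x + y,
            "-": lambda x, y: x - y,
            "*": lambda x, y: x * y,
            "/": lambda x, y: x / y,  # raises ZeroDivisionError if y == 0
        }
        if op not in operations:
            raise ValueError(f"Unknown operator: {op!r}")
        return operations[op](a, b)

    print(calculate(6, "*", 7))  # prints 42.0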

II: Igor, the mad scientist’s unstable assistant

Alternatively, Bing’s chatbot et al might not become that reliable.  Instead, these tools might act erratically.  Like a mad scientist’s hunchbacked assistant.  As Igor.

[image: Marty Feldman as Igor]

“EYE-gor,” that is.

Igor wants to help, but sometimes gets… creative, providing results far from what we asked for.  Igor usually obeys us (the mad scientist), but sometimes wants to follow his own plan or the voices of others (think of the famous “guardrails”).  Remember the strangest art you’ve coaxed from Stable Diffusion or Craiyon, those transmissions from the uncanny valley, or read about a New York Times writer’s weird Bing chat. ChatGPT and Bing’s chatbot do leap quickly to churn out the text you require, yet at times will simply balk, per their internal (and sometimes mysterious) guidelines.  And ChatGPT is capable of cheerfully producing horrors on demand.

There’s a great, relevant scene in the fantastic Bride of Frankenstein (1935). The two mad scientists (it’s such a fine film that it won’t settle for just one), Frankenstein and Praetorius, complain about the quality of hearts in the cadavers they have.

One assistant hears this and helpfully stalks off to grab and murder a casual passerby, then provides the resulting fresh corpse, which pleases the mad scientists.  The assistant did help out, albeit in an, er, unorthodox fashion.


Playing the Igor off of the calculator, we see two very different understandings of generative AI.  Either we see it as stable or as unstable.  We view these tools as easily understood (most calculator functions are clear to many users) or as susceptible to following commands other than our own.  Their quality is dependable or sometimes risible.

Both technologies and their analogies can be frightening at times.  The calculator is less so, yet like many labor-saving devices it threatens to weaken our individual capacity to perform, or even understand, that labor.  The hunchbacked assistant can easily wreak havoc, even when trying to follow our instructions. While a calculator is cool, unremarkable, and office-friendly, an Igor is ungainly, warped, unpredictable, and strange.

There are other figures and metaphors for us to use, of course.  Today I’d just like to offer and contrast these two, at least for entertainment, and perhaps for a touch of imagination.

I’d like to close with an image from Bride of Frankenstein, which combines a freaked-out assistant with cool, reliable tech:

[image: Karl the assistant, The Bride of Frankenstein (1935)]

One of the poor henchmen in Bride of Frankenstein (1935).

(calculator photo by Awilda Ortiz; Marty Feldman from the vast archive of Giphy; The Bride of Frankenstein’s Karl the Henchman from this wiki)


Some ways for generative AI to transform the world

For the past two months I’ve been scrambling to work on generative AI.  That’s the phrase I prefer for corralling together ChatGPT, art generators like DALL-E, and any AI-driven software which helps us make content.

Besides hosting Forum sessions (some of our best attended and viewed: 1, 2, 3), I’ve been trying to write up thoughts on two levels: what this means for higher ed now and in the short term, and some bigger-picture implications. Here I want to share some of the forecasts I’ve been building in the second category.

For this post to work I am assuming a few things.

  1. Generative AI advances briskly over the next few years, improving in quality and growing in instances. It gets better at making content: text, images, audio, video, games, 3d printing, XR.  There is no wild breakthrough which totally redoes the tech, nor does the technology collapse.  For the latter, I’ve heard arguments that the predictive model is ultimately too flawed to be reliable. After all, some innovations do stall out or hit dead ends.  (I like to cite 8-track tapes here.) But for now, let’s imagine that ChatGPT et al make significant progress.
  2. Enough people perceive generative AI (“GAI” from here on) to be of at least sufficient quality to use it. That doesn’t mean everybody thinks the stuff is good enough, just that a large enough number think it is to make a difference.  This also differs from my assessment of GAI’s quality, or anything like objective takes. The point concerns human adoption of tools, rather than the tools themselves.
  3. GAI has very broad impact, along the lines of the World Wide Web’s appearance.  It’s not just a technology for a niche population. It isn’t quietly fed into preexisting tech without a murmur, like autocomplete in Google search and Gmail. It actively changes things. Or, if you like, enough people see GAI as a change agent to use it to change things at scale.

  4. The history of technology and innovation is a useful guide.  For example, Everett Rogers’s diffusion schema holds up, and we can apply lessons from various stages of the digital revolution.
  5. I’m not aiming for utopia or dystopia, but for a world with many different experiences and views.

Enough caveats. So, for the short and medium term:

More GAI applications start to appear, increasingly specialized or marked by economic, political, and cultural identities. Microsoft and Google’s engines duel with, or complement, each other. A Chinese firm publishes a ChatGPT competitor tilted towards Xi Jinping thought.

A Western politically progressive art generator appears. A conservative Christian group publishes a movie generator which emphasizes their themes and blocks forbidden content, along the lines of CleanFlix. The Indian government celebrates a suite of Hindutva content creators.  European white nationalists release a game creator favoring a racial ethnostate.  Over time, as GAI technologies become more accessible and more people acquire relevant coding skills, more engines appear with ever more precise, or narrow, foci and biases.

As a result, humans increasingly fill the world with AI-generated stuff: stories, images, movies, clothing designs, architecture, games, 3d printed artifacts. New categories of generative AI appear and we get used to asking GAI to create stuff for us.

One milestone will be when software succeeds in building out feature films on command. For example, here’s “a movie poster for an 18th century Gothic novel starring Helen Mirren, directed by Stanley Kubrick” as imagined by Midjourney:

[image: Midjourney’s poster for that prompt]

Imagine being able to order up such films on demand. Another milestone will be the first GAI-created AAA computer game.  “Make me a 4x game based on my home town with better weather, Roman empire levels of technology, and all advisors drawn from heavy metal musicians in the 1990s.”

We then go through another great round of new media cultural reformations.  Our shared sense of what constitutes real creativity, how to restructure copyright, what freedom of speech means, authorship, journalism, information overload, storytelling and artistic expectations and forms, and more will mutate and mutate again before giving way to new settlements.  We come up with new ways of handling the sheer mass of new content, as we have historically done, trying out practices, artistic schools, formats, and technologies.  (For precedents see the rise of moveable type, expanded literacy, and the web.)

The nature of truth in media might come to the fore. I’m thinking of passable fakes, like GAI-crafted evidence for courts, politics, intimate betrayals (“Look, there you are with him!”), business fraud (“Here’s my receipt!”), etc.  The history of photography – which went through this problem repeatedly, from the 19th century to Soviet airbrushing to Photoshop – gives us some ways we could respond.

The pornography world is already participating in GAI, and it seems reasonable to assume it will continue to do so.  Play this across all AI-capable media: text stories, audio, video, then computer games and on to XR.  Recent history suggests there will be criticism of porn.ai from a wide range of quarters (religious conservatives, psychotherapists, feminists). This in turn will likely lead to legislation, and possibly to new GAI from different perspectives: women-friendly porn.ai, video GAI which doesn’t permit nudity, game generators which mandate inclusion of religious scriptures.

Various individuals, businesses, cultures will take hits or fail completely, while others blossom, depending on how they respond to the generative AI age in different domains and markets.

New businesses and cultures emerge, as do new cultural leaders and influencers.

Capital flows in, funding new projects and old providers. Capital tries to surf the GAI wave, controlling its ragged edges, leading to creative and legal disputes.  Economic actors use GAI to produce economic strategies to demolish competitors or establish monopolies.  (I’m waiting for “Leadership Secrets of ChatGPT.”)  Expect venture capital to fuel “grown-up GAI for the real world.” (Carlota Perez’s Technological Revolutions and Financial Capital (2002) influences my view here, emphasizing the crucial roles capital investment plays.)

We should expect years of governments fumbling with policy and regulation, as bureaucrats and officials struggle to keep up with rapidly advancing tech and its complex effects.  We should also expect some leading GAI powers to exercise historically standard influence peddling and corruption.

Militaries and spy agencies will certainly exploit generative AI. Imagine a rising colonel asking ClausewitzBot for new weapons, strategies, tactics.  Next, human soldiers will try to figure out how to grapple with GAI-shaped enemies.  Or think of spymasters having good deepfakes produced to blackmail key officials, or counterintelligence services running endless simulations of how certain agents might behave under various threats and enticements.  Let me offer a future-oriented example from today via a ChatGPT prompt: “Tell me how a conventional army can use AI to defeat a popular guerrilla insurgency in central Asia.”

[screenshot: ChatGPT’s response to this prompt]

Imagine doing this with an AI trained not only on general web content, but also on internal military intelligence – and one which analysts have tuned on these kinds of questions, with far more detailed parameter prompts.

Political actors can increasingly use GAI to make propaganda in various media and forms.  They can also ask generative AI to create tools for torture, both physical and mental. “BlackOpsBot, give me strategies to injure community or population [X] with full deniability.”  More benignly, we should see political campaign operatives using bots to project how opposing candidates might behave in public debates.

There are plenty of connections with other tech:

  • Office suite – GAI creating slide, text, and spreadsheet content for users, as Microsoft is apparently arranging now. These tools increasingly create rough drafts for us.  It’s an open question how much we’ll edit and revise them.
  • Web content – GAI should replace human-staffed content farms, offering a cheaper way to pop up quick web pages for new projects.
  • Social media – GAI becomes a way to improve or expand one’s presence.  Imagine, in a few months or years, telling a bot: “Generate a video of my morning from 9 to 10 am, cutting out the silent parts, in the style of Alfred Hitchcock. Post it to Instagram in 45-second chunks, one chunk every two hours.”
  • 3d printing – software offers new 3d designs, then AI starts doing its own prototyping and printing.
  • Speech interfaces – “Siri, sing me a punk song, US East coast style.”
  • AR/VR/XR – this can become a home for all AI-generated content, since it already uses most media: text, images, audio, video.  GAI serves as a creator’s Gesamtkunstwerk assistant, then as a creator itself.

There will be attempts to ban all or some of GAI at multiple scales and in various social groups, from nations and cities to churches and companies.  There may also be calls for people to renounce using GAI, perhaps as a performative public statement, or as part of belonging to some group.

All of the above is simple first-order extrapolation, assuming GAI grows and exerts pressure on key points in human civilization.  There are many more possibilities, which I invite readers to share in comments. Now let’s imagine some more complex and interesting ways generative AI could play out, based on second-order impacts and other historical precedents:

  • Intersections with climate change and climate actions. Imagine the IPCC, governments, businesses, and individuals using certain GAI to plot their climate forecasts and the strategies with which they want to respond. Alternatively, we could see climate activists turn against the tech as emitting too much CO2, pressing for its regulation or banning.
  • Post-growth political economies. If some nations decide to throttle back industrial-technical civilization because of demographics, politics, or environmental concerns, embracing degrowth, circular economies, or donut economics, how does generative AI fit in?  Does a state use an advanced application to create strategies for redistributing wealth – or use another to manage such redistribution directly? Do we accept GAI art as a legitimate cultural form, one which adds to quality of life even as we pause or retreat from economic growth?
  • Techlash. Today’s anti-Silicon Valley skepticism and anxiety could tamp down GAI adoption.  That might lead to divisions and schisms in usage, from divided households to national differences.
  • Strict regulation.  Governments might regulate GAI out of general use, as we see with nuclear weapons and biohacking.
  • Butlerian Jihad. One possibility is that resistance to GAI grows enough to tip culture over into a massive anti-AI wave.  For a science fiction example, Frank Herbert imagined an anti-AI cultural-religious movement in Dune.
  • Cultural and social division – do we see people identifying with specific GAI bots or types of them?  We already have folks happily proclaiming their loyalty to Steve Jobs and Fox News; we could see something similar, or perhaps more intense.

…and I have more thoughts on this subject.  I haven’t touched on long-term effects.  The impact of GAI on education, and how academics might respond, is the topic of another post.  But I wanted to get this out now, to contribute to the conversation in a futures way.

What do you think of these possibilities?  What else do you envision for the future of generative AI?

(thanks to my Patreon supporters and various friends for thinking through these forecasts. You know who you are.)
