For the past two months I’ve been scrambling to work on generative AI. That’s the phrase I prefer to corral together ChatGPT, art generators like DALL-E, and any AI-driven software which helps us make content.
Besides hosting Forum sessions (some of our best attended and viewed: 1, 2, 3), I’ve been trying to write up thoughts on two levels: what this means for higher ed now and in the short term, and some bigger-picture implications. Here I want to share some of the forecasts I’ve been building up in the second category.
For this post to work I am assuming a few things.
- Generative AI advances briskly over the next few years, improving in quality and growing in instances. It gets better at making content: text, images, audio, video, games, 3d printing, XR. There is no wild breakthrough which totally redoes the tech, nor does the technology collapse. For the latter, I’ve heard arguments that the predictive model is ultimately too flawed to be reliable. After all, some innovations do stall out or hit dead ends. (I like to cite 8-track tapes here.) But for now, let’s imagine that ChatGPT et al make significant progress.
- Enough people perceive generative AI (“GAI” from here on) to be of at least sufficient quality to use it. That doesn’t mean everybody thinks the stuff is good enough, just that a large enough number think it is to make a difference. This also differs from my assessment of GAI’s quality, or anything like objective takes. The point concerns human adoption of tools, rather than the tools themselves.
- GAI has very broad impact, along the lines of the World Wide Web’s appearance.
It’s not just a technology for a niche population. It isn’t quietly fed into preexisting tech without a murmur, like autocomplete in Google search and Gmail. It actively changes things. Or, if you like, enough people see GAI as a change agent to use it to change things at scale.
- The history of technology and innovation is a useful guide. For example, Everett Rogers’ diffusion of innovations schema holds up, and we can apply lessons from various stages of the digital revolution.
- I’m not aiming for utopia or dystopia, but for a world with many different experiences and views.
Enough caveats. So, for the short and medium term:
More GAI applications start to appear, increasingly specialized or marked by economic, political, and cultural identities. Microsoft and Google’s engines duel with, or complement, each other. A Chinese firm publishes a ChatGPT competitor tilted towards Xi Jinping thought.
A Western politically progressive art generator appears. A conservative Christian group publishes a movie generator which emphasizes their themes and blocks forbidden content, along the lines of CleanFlix. The Indian government celebrates a suite of Hindutva content creators. European white nationalists release a pro-racial ethnostate-favoring game creator. Over time, as GAI technologies become more accessible and more people acquire relevant coding skills, more engines appear with ever more precise, or narrow, foci and biases.
As a result, humans increasingly fill the world with AI-generated stuff: stories, images, movies, clothing designs, architecture, games, 3d printed artifacts. New categories of generative AI appear and we get used to asking GAI to create stuff for us.
One milestone will be when software succeeds in building out feature films on command. For example, here’s “a movie poster for an 18th century Gothic novel starring Helen Mirren, directed by Stanley Kubrick” imagined by Midjourney:
Imagine being able to order up such films on demand. Another milestone will be the first GAI-created AAA computer game. “Make me a 4x game based on my home town with better weather, Roman empire levels of technology, and all advisors drawn from heavy metal musicians in the 1990s.”
We then go through another great round of new media cultural reformations. Our shared sense of what constitutes real creativity, how to restructure copyright, what freedom of speech means, authorship, journalism, information overload, storytelling and art expectations and forms, and more mutate and mutate again, before giving way to new settlements. We come up with new ways of handling the sheer mass of new content, as we have historically done, trying out practices, artistic schools, formats, and technologies. (For precedents see the rise of moveable type, expanded literacy, and the web.)
The nature of truth in media might come to the fore. I’m thinking of passable fakes, like GAI-crafted evidence for courts, politics, intimate betrayals (“Look, there you are with him!”), business fraud (“Here’s my receipt!”), etc. The history of photography – which went through this problem repeatedly, from the 19th century to Soviet airbrushing to Photoshop – gives us some ways we could respond.
The pornography world is already participating in GAI, and it seems reasonable to assume they will continue to do so. Play this across all AI-capable media: videos, text stories, audio, video, then computer games and on to XR. Recent history suggests there will be criticism of porn.ai from a wide range of quarters (religious conservatives, psychotherapists, feminists). This in turn will likely lead to legislation and possibly new GAI from different perspectives: women-friendly porn.ai, video GAI which doesn’t permit nudity, game generators which mandate inclusion of religious scriptures.
Various individuals, businesses, cultures will take hits or fail completely, while others blossom, depending on how they respond to the generative AI age in different domains and markets.
New businesses and cultures emerge, as do new cultural leaders and influencers.
Capital flows in, funding new projects and old providers. Capital tries to surf the GAI wave, controlling its ragged edges, leading to creative and legal disputes. Economic actors use GAI to produce economic strategies to demolish competitors or establish monopoly. (I’m waiting for “Leadership Secrets of ChatGPT.”) Expect venture capital to fuel “grown-up GAI for the real world.” (Carlota Perez’s Technological Revolutions and Financial Capital (2002) influences my view here, emphasizing the crucial roles capital investment plays.)
We should expect years of governments fumbling with policy and regulation, as bureaucrats and officials struggle to keep up with rapidly advancing tech and its complex effects. We should also expect some leading GAI powers to exercise historically standard influence peddling and corruption.
Militaries and spy agencies will certainly exploit generative AI. Imagine a rising colonel asking ClausewitzBot for new weapons, strategies, tactics. Next, human soldiers will try to figure out how to grapple with GAI-shaped enemies. Or think of spymasters having good deepfakes produced to blackmail key officials, or counterintelligence services running endless simulations of how certain agents might behave under various threats and enticements. Let me offer a future-oriented example from today via a ChatGPT prompt: “Tell me how a conventional army can use AI to defeat a popular guerrilla insurgency in central Asia.”
Imagine doing this with an AI trained not only on general web content but also on internal military intelligence. And the AI has been tuned by analysts on these kinds of questions, with far more detailed parameters and prompts.
Political actors can increasingly use GAI to make propaganda in various media and forms. They can also ask generative AI to create tools for torture, both physical and mental. “BlackOpsBot, give me strategies to injure community or population [X] with full deniability.” More benignly, we should see political campaign operatives using bots to project how opposing candidates might behave in public debates.
There are plenty of connections with other tech:
- Office suite – GAI creating slide, text, and spreadsheet content for users, as Microsoft is apparently arranging now. These tools increasingly create rough drafts for us. It’s an open question how much we’ll edit and revise them.
- Web content – GAI should replace human-staffed content farms, offering a cheaper way to pop up quick web pages for new projects.
- Social media – GAI becomes a way to improve or expand one’s presence. Imagine, in a few months or years, telling a bot: “generate a video of my morning from 9 to 10 am, cutting out silent parts, and in the style of Alfred Hitchcock. Post to Instagram in 45 second chunks, one chunk every two hours.”
- 3d printing – software offers new 3d designs, then AI starts doing its own prototyping and printing.
- Speech interfaces – “Siri, sing me a punk song, US East coast style.”
- AR/VR/XR can become a home for all AI generated content, since it already uses most media: text, images, audio, video. GAI serves as a creator’s Gesamtkunstwerk assistant, then as its creator.
There will be attempts to ban all or some of GAI at multiple scales and in various social groups, from nations and cities to churches and companies. There may also be calls for people to renounce using GAI, perhaps as a performative public statement, or as part of belonging to some group.
All of the above is simple first-order extrapolation, assuming GAI grows and exerts pressure on key points in human civilization. There are many more possibilities, which I invite readers to share in comments. Now let’s imagine some more complex and interesting ways generative AI could play out, based on second-order impacts and other historical precedents:
- Intersections with climate change and climate actions. Imagine the IPCC, governments, businesses, and individuals using certain GAI to plot their climate forecasts and the strategies for responding to them. Alternatively, we could see climate activists turn against the tech as emitting too much CO2, pressing for its regulation or banning.
- Post-growth political economies. If some nations decide to throttle back industrial-technical civilization because of demographics, politics, or environmental concerns, embracing degrowth, circular economies, or donut economics, how does generative AI fit in? Does a state use an advanced application to create strategies for redistributing wealth – or use another to directly manage such? Do we accept GAI art as a legitimate cultural form, one which adds to quality of life even as we pause or retreat from economic growth?
- Techlash. Today’s anti-Silicon Valley skepticism and anxiety could tamp down GAI adoption. That might lead to divisions and schisms in usage, from divided households to national differences.
- Strict regulation. Governments might regulate GAI out of general use, as we see with nuclear weapons and biohacking.
- Butlerian Jihad. One possibility is that resistance to GAI grows enough to tip culture over into a massive anti-AI wave. For a science fiction example, Frank Herbert imagined an anti-AI cultural-religious movement in Dune.
- Cultural and social division – do we see people identifying with specific GAI bots or types of them? We already have folks happily proclaiming their loyalty to Steve Jobs and Fox News; we could see something similar, or perhaps more intense.
…and I have more thoughts on this subject. I haven’t touched long term effects. The impact of GAI on education, and how academics might respond, is the topic of another post. But I wanted to get this out here for now to contribute to the conversation in a futures way.
What do you think of these possibilities? What else do you envision for the future of generative AI?
(thanks to my Patreon supporters and various friends for thinking through these forecasts. You know who you are.)
An intriguing assembly. Signs are already there for most (maybe all) scenarios. Another observation — consider the increased acceleration compared to the early internet.
But more stuff? That’s a potential negative that could be up there with the capacity for mass-producing disinformation.
It must be great working in an education context where you do not have to worry about how tech companies are handling student data 🙂 I see most of this AI as a huge step backwards: we are back to seeing education as the production of papers (education as product, not process); ChatGPT is built on a solid foundation of racist, antisemitic, sexist bias, and false information; there are unresolved intellectual property issues here, and it was created in highly questionable circumstances (https://time.com/6247678/openai-chatgpt-kenya-workers/). And why are you so sure that these tools are going to just get “better”? Has everything in the tech world just evolved to get better? (e.g. anything that Microsoft touches, Facebook, Twitter, etc.) Isn’t there a possibility that this could get worse? I have been really disappointed in the edtech sector over this. They seem more driven by FOMO than an actual desire to analyze what AI actually is (or isn’t). We have seen this before when Web 2.0 was a thing – everyone wanted to use the shiny new toys despite the accessibility issues.
This is so great! Thanks for writing this, Brian. Looking forward to your next post.
Yes, GAI will have its growing aches and pains. I’m not going to speculate because there is plenty of that on the social. I’m using it as freely as I can right now, enjoying exploring it! The horse is out of the barn and it has been for quite a while – think about the explosion of Zoom, and plenty of what we often use has embedded AI. Some may want it controlled or limited or whatever, trying to ban it, “control” it, or gaslight us into thinking it’s being controlled (think government). I don’t believe it can be “controlled,” but guard rails are necessary to protect privacy and safety, of course. IMO it is the Artificial Superior Intelligence (ASI) future that will be the most exciting – or maybe very frightening. Yes, Geoff, thinking about the unintended consequences – what do we know? Either way, we most likely won’t be here to experience it – or will we???? 😉 (Singularity is Nearer)