Some large-scale decisions we can make about AI in 2023

In my current work on forecasting the intersection of AI and higher ed, I’ve been running into an interesting problem.  Well, several, but today I’d like to share a structural one, caught between futures thinking and where AI is right now.

We’re on the verge of several major decision points about how we use and respond to artificial intelligence, particularly due to the explosion of large language models (LLMs).  These decisions operate at a large scale, and some are quite radical.  We could easily head in different directions for each, which makes me think of branching paths or decision gates.  As a result, the possibilities have ramified.

To think through this emerging garden of forking paths, I decided to identify the biggest decision points, then map different ways through them.  I also laid them out in a flow chart, which yielded some surprising results.

Dataset size.  Right now LLMs train on enormous datasets.  This is a problem for multiple reasons: a large carbon footprint, and the restriction of access to the handful of people who own or work at capital-intensive enterprises (OpenAI, Google, etc.). For some LLM applications, the bigger the training set, the better.  On the other hand, there have been some developments with AI software that get good results from smaller sets, and OpenAI’s leader stated that big datasets are now a thing of the past.  So in which direction will we take AI, towards building and using bigger or smaller source collections?

[Image: Midjourney_several_futures_for_AI_robot lady 2]

From Midjourney, prompted to depict “several futures for AI.”

Copyright and intellectual property (IP). Those big datasets include some problematic material in terms of ownership and rights. Usually the corpus curators did not solicit approval from all content owners.  Some (probably most) of the content is under copyright, and these for-profit firms cannot reasonably expect to defend their use as fair use (especially after this week’s Prince/Warhol decision). (Check out this helpful Washington Post article, which lets you see what content is in one major dataset.  My blog is in there, among many others.)

Moreover, some creators want to shield their work from being used by LLMs.  Some are already suing. Will we see such lawsuits or allied regulations shut down these giant digital scraping projects and hence stall AI growth or use, or will LLM projects evade sanctions and continue?

Cultural attitudes. Right now AI is a hotly debated topic, going far beyond hype and kneejerk reactions.  The controversy is not just a pro/con binary, but consists of arguments across numerous divides, axes, and topics.  For example, in the previous point I mentioned the art topic, which turns on copyright and autonomy. Consider as well debates over job automation, which focus on labor markets and self-worth. Think of divides over whether machine creativity is creepy or interesting. And recall the deep anxiety about machines making decisions without humans in the loop.

Taken together, we could see an emergent cultural revulsion against AI, much as the 20th century saw revulsion build up against nuclear and biological weapons. This could lead to a hardened anti-AI attitude. Alternatively, many people enjoy, or simply find useful, new AI tools for a range of reasons: convenience, boosted creativity, etc. Will we crystallize as a civilization into one stance or another?  (This is where I keep reminding people of Frank Herbert’s Butlerian Jihad idea.)

[Image: Midjourney_several_futures_for_AI_robot lady 1]

Midjourney, same prompt as above.

Nonprofit/edu/cultural heritage projects. Right now most AI projects have surfaced from giant companies (Google, Microsoft, Meta, Baidu) or from a nonprofit closely allied with one of them (OpenAI).  So far they have succeeded in assembling the necessary constellation of human talent, massive computing power, huge datasets, and the right software.  Accordingly, we could imagine a short/medium-term future where LLMs are solely the property of giants.  Yet the history of digital technology has reliably shown that tools tend to become cheaper and more accessible over time. On a recent Future Trends Forum we discussed the possibility of small, low-resourced groups making their own LLM applications.  Indeed, this is an opportunity for nonprofits, educators, and cultural heritage organizations to build their own, with their own spin.  I’ve thought of libraries or faculty and staff teams creating AI based on non-problematic datasets, such as Internet Archive or HathiTrust content.

The gate here is then: does AI continue to be the province of giant companies, or does it democratize and enter the non-profit realm?
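To make the democratization option a bit more concrete, here is a minimal, purely hypothetical sketch of what a small library or campus team’s first experiment might look like: fine-tuning a small open model on rights-clear text with the open-source Hugging Face stack. The model name, corpus path, and training settings are illustrative placeholders rather than recommendations, and a real project would need far more data preparation, evaluation, and compute planning.

```python
# Hypothetical sketch: a small team fine-tuning an open model on public-domain text.
# Requires the Hugging Face "transformers" and "datasets" libraries.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/pythia-410m"  # one of many small, openly licensed models
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Plain-text files drawn from a rights-clear source (the path is a placeholder).
corpus = load_dataset("text", data_files={"train": "public_domain_corpus/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="library-lm",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

The point of the sketch is scale: nothing in it requires a data center, which is part of why the nonprofit/edu path seems plausible to me.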

OK, wrapping these together, I found that each gate had one option which led to maintaining the status quo in key ways. In contrast, the alternatives pointed to new ways of structuring AI. In the flow chart I traced these paths together:

[Image: near future of AI flow chart]

Here’s the flowchart. Colored boxes are the decision topics. Colored circles are the two aggregate end points.
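As an aside for readers who like to see the branching made explicit, here is a minimal sketch, in Python, of how quickly even four binary gates ramify into distinct futures. The gate labels are my own shorthand for the decision points above; the enumeration illustrates the ramification, not the logic of the chart itself.

```python
# Illustration only: enumerate every combination of the four gates' two options.
from itertools import product

gates = {
    "dataset size": ["keep scaling up", "shift to smaller sets"],
    "copyright/IP": ["scraping survives legal challenges", "lawsuits and regulation curb it"],
    "cultural attitude": ["broad acceptance", "widespread revulsion"],
    "who builds it": ["giant companies only", "nonprofits/edu/cultural heritage join in"],
}

paths = list(product(*gates.values()))
print(f"{len(paths)} distinct paths through just these four gates")  # 2**4 = 16
for path in paths[:3]:  # print a few examples
    print("  " + " -> ".join(path))
```

Sixteen paths from four yes/no choices; add the further gates mentioned below and the garden of forking paths grows quickly.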

On reflection, the “New AI Pathways Emerge” end point looks fascinating, and is worth teasing out in a different post.

I’ve used the gate metaphor here and spoken as if humanity is making a decision as a whole at each point, but that’s not historically sound.  We contradict ourselves as a matter of course, especially across geography. Perhaps we should expect different branches to be chosen in different places.  Imagine a polity which decides that AI should be carefully controlled by the state, taking only the form of giant, well-regulated projects that access only licensed content, while a different nation prefers a wild west approach, with many types of AI projects using all kinds of content.  How would such divergences impact geopolitics?

Additionally, these are just four gates.  Several others are on my mind, like technical quality (perpetually flawed or artificial general intelligence?) and ownership (nationalized or not?). Which ones do you see?

Back to higher education: should academics advocate for one path or another through these gates?  How is enterprise IT preparing for the alternatives? What are the implications of each branch for research and teaching?

PS: I asked Midjourney to image some futures for AI as a flowchart:

Beautiful, but the text makes no sense.


14 Responses to Some large-scale decisions we can make about AI in 2023

  1. “We could easily head in different directions for each, which makes me think of branching paths or decision gates.”
    This sort of thinking, I find, is valuable, for showing paths and possibilities and wrong turns (although we may not always be aware of that until it is too late, I suppose).
    Thank you
    Kevin

  2. Pingback: Some large-scale decisions we can make about AI in 2023 | Bryan Alexander | So. Consider

  3. a feast of thought-datalyzing narrative threads. Such possibility; so crazy-critical to folx-kinds’ tomorrows. thank you, brother. I’m watching your smoke as usual.

  4. Guillermo says:

    Always insightful, Bryan.

  5. Alan Levine says:

    Still digesting the flow chart, but thanks for that WP link to the content in the C4 dataset (1/40 of ChatGPT 3??). Your blog outranked mine!

  6. Bryan Alexander says:

    Perhaps the dataset prefers bears or cats to dogs?

  7. Glen McGhee, FHEAP says:

    Flow charts are tempting, but if AI is a Black Swan event, then all bets are off.

    In other words, it will be like the 60s — only digestible in hindsight, and only if you can remember more than most, because it was nothing but a blur for them.
    https://www.businessinsider.com/chatgpt-changed-everything-and-was-a-black-swan-event-2023-5

    • Glen McGhee, FHEAP says:

      To what extent will AI be a black swan event? In what respect?
      PERPLEXITY says:
      AI has the potential to be a black swan event in several respects.
      Here are some ways in which AI could be a black swan event:

      Unpredictability: AI is a rapidly evolving technology, and it is difficult to predict how it will develop in the future. There is a risk that AI could develop in unexpected ways that have significant and unforeseen consequences.

      Impact: AI has the potential to have a major impact on society and the economy. For example, AI could lead to significant job losses, changes in the way we work and live, and new ethical and legal challenges.

      Inappropriately rationalized: If AI does turn out to be a black swan event, there is a risk that people will try to rationalize it after the fact with the benefit of hindsight. This could lead to a false sense of security and a failure to prepare for future black swan events.

      Unforeseeable consequences: AI has the potential to create unforeseeable consequences that could have a major impact on society and the economy. For example, AI could lead to the development of new weapons systems that are difficult to control or predict.

      Rapid development: AI is developing rapidly, and it is difficult to keep up with the pace of change. This could lead to a situation where AI develops faster than our ability to understand and control it.

      However, it is important to note that AI may not necessarily be a black swan event. While there are risks associated with AI, there are also opportunities. AI has the potential to improve our lives in many ways, from healthcare to transportation to education. It is up to us to ensure that we develop AI in a responsible and ethical way, and that we are prepared for the potential risks and challenges that it may bring.

      • Bryan Alexander says:

        I don’t think AI is a black swan event. It’s been in full view since the 1950s when the Dartmouth group kicked it off. It has ebbed and flowed, of course, by fashion and winters, but it’s long been a concern.

        Now, the sudden advance of LLMs – I don’t think that’s a black swan either. Personally, I’ve been tracking creative AI for a decade, and gave workshops on it. A sudden uptick in quality is something history teaches us to expect. Think, for example, of how the iPhone upgraded the mobile phone: not introducing a new category, but improving what it could do.

        I think there’s a lot of shock and awe around ChatGPT in particular because many of the people emitting said shock and awe are feeling threatened in ways they haven’t experienced. The amazing image creators we saw in 2021-2022 were equally impressive, but human image crafters are a small and uninfluential lot. *Text* creators, well now! We’re talking government officials, reporters, opinion makers – now they’re under the gun and freaking out.

        OK, I can be more charitable. When I told workshop audiences to prepare for creative AI, most of them were stunned. I showed pre-LLM examples and they were amazed. So maybe I’m being a good professional and mentally inhabiting a future ahead of other people.

        Yet the black swan has already flown into our pond. Now we deal with what it means. That’s where my posts, like this one, come in, relying on what the futures field has to offer.

      • Bryan Alexander says:

        I like your comments, Glen – and, ahem, have written about those points earlier.

        To a couple:
        “If AI does turn out to be a black swan event, there is a risk that people will try to rationalize it after the fact with the benefit of hindsight. This could lead to a false sense of security and a failure to prepare for future black swan events.”
        Yes, that’s how we roll, unfortunately. Normalization is a key part of the process.

        “Unforeseeable consequences: AI has the potential to create unforeseeable consequences that could have a major impact on society and the economy.”
        Yes, that’s what I’m trying to anticipate.

        “Rapid development”
        What’s the best way for me to share what I find here? I’ve been posting to social media but that seems to have limited utility.

  8. Pingback: Looking Back and Looking Forward — Retrospection at the End of Another Academic Year | Rob Reynolds
