What are campuses doing about AI this fall semester?

NB: I’ve updated this post several times with more examples, most recently 7/25/2023.

As fall classes draw nigh, I wonder (among other things) what colleges and universities are going to do about generative AI.

I haven’t seen many completed institutional policies.  I’ve been looking around and asking on social media, and it seems that most of academia either isn’t taking such a formal step or is working quietly behind the scenes.  Rebecca Mayglo offered one example of the former:


I think many campuses are going through what Karl Aho describes:

We’ve got a study group working on advising the provost about policy/guidelines. We don’t have one in place yet.

The University of Arizona has several working groups, according to Nicole Hennig.

Others haven’t reached that level of formal organization yet, still operating in study mode, as Jim McGensy observes:

Marc Watkins thinks a lack of general AI literacy means institutions can’t develop policies yet:

Watkins later added: “It feels like we’re eons away from accepted practice in edu.”

Some institutions are trying to boost AI knowledge on campus.  Auburn University launched a Teaching with AI online class for its faculty and staff. The University of Mississippi has conducted a summer class on AI.  The University of Arizona’s library published a libguide on “AI Literacy in the Age of ChatGPT” plus a Student Guide to ChatGPT.

This isn’t to underplay efforts under way. The American University of Armenia added a note about AI in their cheating policy:

6.4.2.        Cheating. Cheating includes but is not limited to:

6.4.2.1.     using or referring to notes, books, devices or other sources of information, including advanced Artificial Intelligence (AI) tools, such as ChatGPT, in completing an Academic Evaluation or Assignment, when such use has not been expressly allowed by the faculty member who is conducting the examination;

Hofstra University on Long Island made a similar amendment to their syllabus policies:

Hofstra University places high value upon educating students about academic integrity. At the same time, the University will not tolerate dishonesty, and it will not offer the privileges of the community to the repeat offender. The academic community assumes that work of any kind–whether a research paper, a critical essay, a homework assignment, a test or quiz, a computer program, or a creative assignment in any medium–is done, entirely and without unauthorized assistance, by the individual(s) whose name(s) it bears. Use of generative artificial intelligence tools (e.g. Chat GPT) must be consistent with the instructor’s stated course policy. Unless indicated otherwise in the instructions for a specific assignment, the use of Chat GPT or similar artificial intelligence tools for work submitted in this course constitutes the receiving of “unauthorized assistance for academic work”, and is a violation of the Hofstra University Honor Code. Students bear the ultimate responsibility for implementing the principles of academic integrity. [emphases added]

Montclair State University updated their plagiarism policies in May:

Information taken from generative AI, such as ChatGPT, must be cited, otherwise it will be defined as plagiarism. Best practice for some disciplines may be to find the same information elsewhere for a complete citation.

Zurich University of Applied Sciences (ZHAW) issued two German-language policies earlier this year.  One is, as far as my bad German and Google Translate allow, a blog post offering guidelines for teaching.  The other is a more extensive look at AI for assessment.

The University of Illinois Urbana-Champaign has issued guidelines rather than a policy.  According to Ted Underwood, Jamie A. Nelson played a major role in shaping them.  Underwood adds:

We have a task force, and guidelines. I think guidance has been pretty good, and appropriately flexible. Flexibility is needed for the reasons Marc outlines. We’re all feeling our way here and appropriate policies for intro/adv courses may be different.

College Unbound is making a fascinating and possibly unusual effort on this score, going through an iterative policy and practice development process that includes students (thanks to Lance Eaton). Kwantlen Polytechnic University is developing policies and has shared some guidance.

There are other moves campuses can make concerning AI and classes.  Amanda Sturgill names two: modifying career support for graduates, and one program (mass communication) working within that profession.

At my school, we are thinking about the effects on life/profession after college. So are parts of mass communication education’s professional organization.

Then things come down to individual classes and instructors, and what they decide to do. Lance Eaton has set up a terrific Google Doc of classroom policies, which display a lot of creativity.

Some institutions are setting up professional development resources and opportunities. For example, The Ohio State University has posted a helpful webpage (thanks to Terry Bradley).

There’s got to be more going on out there that just isn’t eliciting much of a buzz.

So this is a plea for help, a call for pointers.  Have you seen any campuses working on policies or other big decisions about AI and fall classes?

We can use this post as a place to share examples.  If they grow, I can set up a Google Doc.

(thanks to Perry Share, Exhaust Fumes, Marc Watkins, Cerstin Mahlow, and Brent Anders; thanks also to commenters on this post Alan Levine and Nicole Hennig)


8 Responses to What are campuses doing about AI this fall semester?

  1. Alan Levine says:

    See Kwantlen Polytechnic’s guidelines and policy statement
    https://wordpress.kpu.ca/generativeaitlkpu/

    And at national levels see the OECD AI Policy Observatory with a dashboard for numerous countries (not just education ones).
    https://oecd.ai/

  2. Sam says:

    I appreciated the Google Doc you linked, which provides a thoughtful array of issues and potential guidance. I’m puzzled by Watkins’ statement “until general AI literacy is common, and until the tech pauses development for at least a few years, likewise stops being integrated into software we use every day”. Is he suggesting that we should pause AI development (a mistake, in my opinion, and impossible, since the cat is out of the bag)? Also, he seems to be saying that no policy should be implemented until all information is available… which supposes that all information will EVER be available about a moving target – innovation in technology. I think a much more practical approach is to engage the students with the generative AI. They need to learn how to recognize if a work product is likely generated by AI, how to validate what the AI product has stated, when use of generative AI is helpful and when it is unethical, etc. Same as engineering students did when hand-held calculators first became available.

    • Bryan Alexander says:

      Good questions, Sam.
      I think some folks really would like to pause their responses until they feel they have sufficient data. And some others – perhaps with overlap – would like to pause AI for now.

  3. Nicole Hennig says:

    University of Arizona has created several working groups on different topics related to AI. We’ve been meeting weekly over the summer and more groups will continue in the fall. The groups are: AI and data acumen, AI training, communicating about AI, Fall event planning, Spring event planning, access and equity, syllabus guidance, integrity in education, K-14 group, integrity in research, knowledge creation & ownership, industry.

    I’m on the AI training group and we’re working to come up with a definition of foundational knowledge that people need about generative AI (for any discipline or level). We’ll also make recommendations for some training opportunities.

    In the meantime, we at the libraries have created two guides:
    AI Literacy in the Age of ChatGPT, https://libguides.library.arizona.edu/ai-literacy-instructors
    and Student Guide to ChatGPT, https://libguides.library.arizona.edu/students-chatgpt/

    • Bryan Alexander says:

      Nicole, that’s excellent! Thank you for sharing your story plus the two guides. Good work.

      Please feel free to tell Arizonans about the Future Trends Forum’s AI sessions. More to come.
