ChatGPT
Center for Faculty Innovation, January 26, 2023
Perhaps you’ve heard of ChatGPT. It is a new chatbot, launched by OpenAI in November of 2022, and it’s gone viral. Artificial Intelligence (AI) like that used in ChatGPT has found wide application—in law, computer programming, even medicine. In just a few months, ChatGPT itself has taken higher ed by storm, “stunning” academics with its abilities. (Here is just one Zotero group library, compiled by Lee Skallerup Bessette, of ChatGPT-related pieces.)
ChatGPT is intended to interact with users in a simple, conversational way. You can ask it questions, give it prompts, request fine-tuning or additions. Its answers are generated by a model trained on text from all across the internet (Wikipedia, Reddit, random websites, maybe even CFI Teaching Toolboxes; I could be famous now!!). The responses are human-like… but not human. It’s easy to use and, as of now, free (funded in part, if you care, by Elon Musk). Here is Forbes’s quick primer on what to know.
Specifically, for Teaching Toolbox purposes, it’s important to know that ChatGPT can generate content for, say, essay prompts, test questions, and discussion board posts—you know, the sort of things we usually ask students to do to demonstrate that they’ve learned something in our courses. One microbiologist gave it a 10-question college-level microbiology quiz and “ChatGPT blew it away.”
Cue our collective panic [description: link is to a gif of Beaker the Muppet, well, panicking].
One Twitter user claimed, “It’s basically ruined homework.” Another wondered whether this was the end of writing assignments. I must admit that all the Chicken Little talk was starting to weary me by the end of December. (And I wasn’t even closely tracking this stuff since I was, you know, supposed to be on leave.) I felt, as one participant in a “What might ChatGPT mean for higher education” webinar noted, “nothing new is happening.” Before, students could ask a friend or pay a stranger online to create course content for them. Multiple AIs exist already that can be used for such nefarious purposes. And students do, despite our best efforts, share test questions with each other, copy others’ words without proper attribution, turn in purposefully corrupted files (this happened to me my very first semester at JMU!), and more.
Like Autumm Caines, I was feeling annoyed about how exclusively the various ChatGPT conversations and news stories seemed to focus on students using AI-generated text to cheat on essays and exams, with predictably severe solutions involving surveillance, control, and punishment. (We’ve addressed cheating in a Toolbox before, in the context of teaching online during the pandemic, and, to be transparent, I do tend to care less about cheating than a lot of other very reasonably minded folks here do. There is also the reality that many employers will require students to have AI-related skills and competencies in their jobs, so we may be doing them a disservice by attempting to curb these interactions and practices.) Certainly, though, academic integrity is still a shared value at JMU. We have an Honor Council for a reason. And some faculty think they have already caught students cheating.
But ChatGPT isn’t an A+ student yet (though it may well get there, given time and enough of us helping to improve it). The AI is often obvious. It may:
- use terms or information that instructors haven’t taught students in that particular class;
- write in a way that just seems “off” (though this kind of judgment is subject to limitations and biases of its own);
- churn out very similar responses for multiple students who use it on the same class prompt;
- include bogus information;
- offer incorrect or nonexistent citations, or fail to cite in the proper or required format;
- write at a level that’s not appropriate for a particular course;
- offer responses that lack depth and complexity.
It may eventually be able to produce a result that’s useful to a student with dishonest intentions, but only after so many refinements that I, knowing students, have a hard time believing they will actually put in the effort. It sometimes even thinks that 47 is bigger than 64. A secret cryptographic watermark in development may help professors identify AI-generated content in student work. And ChatGPT will likely soon become monetized, narrowing just how many people will be able to use it (and, of course, privileging some over others).
For many instructors, the sophistication and ease of ChatGPT may confirm the need for in-person assessments, multiple-choice tests, oral exams, or handwritten essays, though there may be good reasons to choose otherwise. Some of us may rest easier turning to online test proctoring software and services (though there have been problems in the past with this route too, for students of color and students with disabilities); JMU, if you didn’t already know, gives us access to Respondus. And there is AI detection software available for use, such as Hugging Face’s detector and GPTZero, though with tweaks (e.g., “rewrite this in a way that will not trigger an AI detector”), they can still be easy to fool. (False negatives AND false positives both concern me; also I just don’t have the time—or want to give the time I do have—to be putting my students’ work through the wringer.)
But one scholar who studies technology and regulation is hopeful: “Whenever there’s a new technology, there’s a panic around it….It’s the responsibility of academics to have a healthy amount of distrust — but I don’t feel like this is an insurmountable challenge.” Many educators are viewing AI like ChatGPT less as a threat and more as an opportunity, focusing, as one poster did on a professional listserv, on “how to utilize AI to create meaningful learning.” How can we take advantage of ChatGPT? How can we “embrace the bot”? How can it be used to augment and unlock human creativity? How can we use this moment as an opportunity to “reimagine education”? I wondered, along with another listserv poster, on an ongoing thread entitled “is ChatGPT blowing up on your campus?”: “Isn’t ChatGPT just another online tool, whose uses and limitations might end up being good for faculty and students to learn more about?”
I have so much faith in instructors to be creative, innovative, flexible, imaginative, resourceful, collaborative, and charitable in response to any “wicked problem” that we and our students may encounter. We have proven this capacity time and again (March 2020 being a recent obvious example), though, of course, we’re also experiencing some serious burnout as a result. There are plenty of ideas, theories, and models that can support faculty as we evolve our teaching in response to advancements and changes in our world. Take critical digital pedagogy, for instance, which encourages us to be reflective and evaluative of technology; instead of coming from a place of fear or doubling down on mistrust, we might consider how to intentionally have nuanced conversations with our students about this technology. (See previous CFI Teaching Toolboxes on digital pedagogy, here and here.) Some folks are already imagining positives to the use of AI like ChatGPT (e.g., that it might be fairer to non-native speakers).
If you want to foster the responsible use of ChatGPT in your courses and assignments (some faculty are even requiring it!), here are just a few ideas you might consider (many of which came directly from Cynthia Alby’s amazing shared resource):
- Write prompts it’s not equipped to handle, for example, by drawing upon specific lessons/examples taught only in your class or by asking students to connect what they’ve learned to their own personal experiences.
- As fellow CFI assistant director Dayna Henry suggested to me, ask students to take on the role of information verification (vs. generation). That is, the instructor or even the students would create content using ChatGPT, and the students’ role is then to check and evaluate that output, using what they’ve learned in class, to determine whether it is correct.
- Use ChatGPT as an opportunity to teach students digital and AI literacy: how to use such tools to improve their output and efficiency, better preparing them for future employment.
- “Have students read and annotate using social annotation apps such as Perusall… or Hypothes.is. Not only would it be far too time-consuming to use AI to annotate in such a context, but students often find the conversation so meaningful that they wouldn’t want to turn to AI” (Alby).
- “Update basic writing assignments such as short answers, simple essays, and reflections by asking students to create mind maps instead or to write them using ‘track changes’ in Word or in ‘suggesting mode’ in Google Docs” (Alby). Consider how to get students to demonstrate or display their drafting or creation or problem-solving processes (like in K-12, where kids are prompted to “show your work” or “show your thinking”), which are so often invisible or private.
- Create time in class for students to begin writing (or other) projects, without the aid of ChatGPT, using this brainstorming (and the documentation thereof) as a springboard for later work.
- “Replace traditional writing-focused assignments with authentic ‘performance tasks’ focused on artifacts that are visual or audio such as podcasts, videos, debates, speeches, interviews, drawings, diagrams, peer instruction, scale models, storyboards, performances, displays, multimedia projects, ‘UnEssays,’ field studies, analysis of specific data sets, original research, etc.” (Alby).
- Another CFI assistant director, Andreas Broscheid, offered this suggestion: allow students to co-author a writing assignment with ChatGPT; students then provide a detailed account of the different contributions of the two co-authors (or submit track changes?) and are responsible for any factual or reasoning content.
- Use ChatGPT to generate counterarguments; students then have the opportunity to explore and consider various perspectives on a particular issue or topic, perhaps eventually determining which one they might hold. Alternatively, students could use ChatGPT to generate counterarguments to a specific claim they want to make (e.g., in a persuasive paper), and the final product could require them to incorporate and indicate how they addressed these objections.
- Consider updating your syllabus with information about ChatGPT or other related AI tools, as well as your expectations about appropriate and ethical use (though we had a hilariously hearty conversation about this idea within the CFI because we weren’t sure we wanted to be responsible for introducing ChatGPT to students in the first place. “Don’t give them any ideas!” was a funny prevailing sentiment.)
- What else can you think of?
We could also consider how to use ChatGPT to make our own lives easier, such as assisting in syllabus creation. (The always-hilarious @ass_deans Twitter account admonishes us not to use ChatGPT to write our annual evaluations.) One of my friends asked ChatGPT, “how can faculty use chatgpt?” ChatGPT suggested faculty could use it as a teaching tool, a research tool, and a teaching assistant. My friend then followed up with: “how can faculty use chatgpt for teaching?”
ChatGPT itself offered these ideas:
- “As a demonstration tool: Faculty could use ChatGPT to demonstrate how a language model works and how it can be used to generate human-like text. This could be a useful way to illustrate the principles of natural language processing and language generation to students.
- As a supplement to lectures: Faculty could use ChatGPT to provide additional information or examples during lectures, or to answer students' questions in real-time. This could be especially useful for online classes, where students may not have the opportunity to ask questions in person.
- As a homework or project tool: Faculty could use ChatGPT as a tool for students to use as part of their homework assignments or projects. For example, students could use ChatGPT to generate sample responses to prompts or to create their own chatbots.
- As a learning tool: Faculty could use ChatGPT to create interactive learning materials or activities for students to work through on their own. This could be a useful way to help students learn about natural language processing and language generation in a hands-on manner.” (ChatGPT 2023)
I loved this response, in part because of how unhelpful I find so many of the suggestions (e.g., I wouldn’t trust AI to be the one responsible for answering students’ questions, even in an online course). I asked my friend if I could use them in this Teaching Toolbox. He, delightfully, suggested I ask ChatGPT instead. I did. ChatGPT told me no, I shouldn’t use content I hadn’t created on my own without proper attribution. ChatGPT, so responsible! I then asked, “What if I give proper attribution?” I got an error message back. (I got a lot of these error messages when playing around with ChatGPT.) So I’m copying and pasting away here, with a little proper citation for fun.
Next week (Wednesday, 2/1), we will offer an online roundtable from 10 to 11:15am to give folks space to brainstorm how to take advantage of this new technology, both in the classroom and in the rest of their lives, and how to guard against its potential misuses. We are also always available to consult about courses and particular assignments, along with the instructional designers over in Libraries. In the meantime, it may be worth familiarizing yourself with ChatGPT, while it’s free. Of course, if you do, please understand that you join the ranks of people helping to train it—and privacy and security concerns are just as applicable here as anywhere else online where you might be getting an account and sharing information. (Same goes for students, which would be worth noting and discussing with them, if you plan to integrate ChatGPT into your class.)
For what it’s worth, here is what ChatGPT gave me (click here for accessible version) when I asked it to “Write a CFI Teaching Toolbox about ChatGPT.” I don’t think I’m out of a job just yet.