This Toolbox Wasn’t Written by ChatGPT (pinky promise)

It’s still not really clear what Generative AI (or gAI, but let’s simply say ChatGPT, since that’s how we got to know it) will be: Is the singularity nigh, and will it be a bang, or a whimper, or not even the end of the world? Will it be the end of education? Or will it simply be, I dunno, the next Wikipedia or the next pretty neat digital watch? (I vaguely remembered that Douglas Adams joked about digital watches in the Hitchhiker's Guide to the Galaxy, but I had forgotten where to find the exact place in the book. Microsoft Copilot made it really easy to find the quote—and then I noticed that it was right on page 1. Adams would have loved this!)

In any case, students seem to be getting the hang of ChatGPT. About 44% of JMU students say they use it for personal purposes, and more than 14% say they use it for assignments (we don’t know whether this is with their instructors’ permission). A good portion refused to answer these questions on the JMU Continuing Student Survey; make of that what you will. About half of all students said their instructors provided information about ethical AI use, while more than 60% had instructors who prohibited ChatGPT. Overwhelming majorities want to learn more about using ChatGPT ethically in their respective fields. This may be related to the fact that gAI-related skills are in high demand among employers, and employees of all types are using ChatGPT extensively. A recent Boston Consulting Group study found that ChatGPT use could increase the productivity and quality of work on some tasks, and degrade them on others.

From an educational perspective, this leaves us with two broad options or paths. One is to resist. There are good reasons for this, and I don’t have to list them all: just Google (or ask ChatGPT) “Why is ChatGPT bad?” and find insightful pieces like this one. This path would suggest banning ChatGPT from our classes and focusing on real-life, authentic, human exchanges (plus, probably, a fair amount of policing to enforce the ChatGPT ban). (De Fine Licht offers some conditions under which a ChatGPT ban is ethical.)

I prefer the other option: to embrace the new technology and guide students toward its effective and ethical use. I think there are good reasons for this, too: Banning ChatGPT from our classes is exceedingly hard. AI detectors may be getting better, but do we trust them enough to AI-police student work (something I am reluctant to do on principle)? Many students will use ChatGPT anyway, and we have the opportunity to help them use it in ways that are effective and ethical, supporting their learning instead of undermining it, while also preparing them for AI-infused workplaces. We may even be able to influence how our society interacts with ChatGPT: by prompting our students to analyze its social implications, and by helping them learn to assert their own humanity over the din of AI-produced text that may soon form the fabric of our lives.

But then, I teach political science, not writing. (JMU’s own Danielle DeRise provides a thoughtful, balanced perspective as a Writing instructor that I encourage you to read.) And I am not teaching classes in which students have to demonstrate the memorization of facts that may determine the life and death, or at least the health, of future patients. ChatGPT challenges us to be very clear about what students have to learn, and thus do on their own, and what they can let a machine do. It also challenges us to build relationships with students, so that we can distinguish what’s genuinely them from what ChatGPT did, motivate them to learn, and help them become responsible learners and professionals. Of course, that’s hard to do in a large mass-learning class; ChatGPT challenges learning institutions to change as well.

In what follows, I drop a few notes about working and teaching with ChatGPT, things I found remarkable. You may have more to add, and some to contradict. I encourage you to do so.

  1. If we want to teach students how to use ChatGPT ethically and effectively, we need to learn how to use it ethically and effectively ourselves. There are no ready-made recipes, though there is an emerging literature that provides some guideposts. I recommend the ongoing work by Ethan and Lilach Mollick at the University of Pennsylvania, Ethan Mollick’s Substack, Graham Clay’s blog on using gAI for all kinds of faculty work, as well as the Chronicle’s Teaching newsletter. Our friends at JMU Libraries have offered workshops on generative AI tools for classes; you can find announcements in their Tuesday morning emails. Here at CFI, we are in fact working with Danielle to offer a roundtable on her article. Stay tuned!

  2. It’s worth trying different generative AI tools. JMU provides a commercial data protection license for Microsoft’s Copilot, which is based on GPT-4, so that’s a good place to start. (My understanding is that this license prevents user input from being used for AI training purposes; still, one must not enter confidential information into the chat, especially not about students.)

  3. I think most AI-enthused people believe, or at least hope, that ChatGPT will free up their time by doing all the boring routine things that fill their days. And in fact, it does a bang-up job with meeting notes, for example. But don’t get your hopes up on the time front: past innovations have not necessarily led to less work (hello, agriculture!), especially not for everyone (hello, washing machine!) [HELLO, EMAIL! -EG]. (And, of course, past innovations have also led to major labor market disruptions, which do not exactly lower the affected workers’ loads, at least not in a desirable way.) Still, we may be able to offload tasks that can be, well, half-assed to ChatGPT, leaving more time and focus for more important work. But we also need to pay attention to whether we’re increasing the workload of others in the process: do low-paid administrative staff end up taking on additional tasks that used to require specialized expertise, now that ChatGPT (supposedly) helps?

  4. I suspect most people think of ChatGPT as a writing machine that produces text to replace things previously written by human authors: emails, outlines, answers to questions, student essays (gasp!), Toolboxes (double gasp!), and so on. I find ChatGPT more powerful, though, not as a machine that answers questions, but as a machine that we can get to ask questions. Instead of having it write an essay (of questionable ethical and other qualities), I like to have it ask me questions about my essay: questions about points that are not clear, about things a reader would like to learn more about, about things that are missing, and so on. Instead of asking ChatGPT to provide ideas or an outline, students could ask it to ask them questions that lead them to a topic or an outline (see the example prompt after this list). ChatGPT as a mentor or reviewer may be much more effective than ChatGPT as a writer. In that way, ChatGPT may not end up doing my work, but it may help me improve my work. ChatGPT as a mentor or source of feedback may also address an issue I’ve found tricky: how to get students to use feedback and ask questions. Doing so requires a certain tolerance for vulnerability, which not all students have. Asking a machine may be easier for students than asking a fellow student for feedback, or, even “worse,” a professor. Sure, we can provide better feedback than ChatGPT, but imperfect feedback is usually better than no feedback, I find.

  5. OK, let me say something conveniently vague that will be of no help at all: As instructors, we will have to provide students with guidelines about ChatGPT use that are clear and yet flexible enough to allow for experimentation and discovery. I obviously have no idea how to do this well. But I think it includes clear statements about what we consider violations of academic integrity involving ChatGPT. We may also want to specify in assignment instructions what ChatGPT use, if any, is acceptable. At a minimum, for example, students may be required to document how they used ChatGPT, say, with a brief note along the lines of “I used Copilot to generate a first outline, which I then reorganized.” (Faculty should do the same.) Going beyond this, I think that conversations with, and instruction for, our students about the various risks and ethical questions of ChatGPT, as well as about effective work with ChatGPT, will have to become part of our teaching practice.

  6. While it’s important to educate students in the appropriate and effective uses of ChatGPT, at this point it is wise to allow students to opt out of ChatGPT use. Even though many students probably don’t mind (and use ChatGPT anyway), the privacy, intellectual property, and other concerns around generative AI are too serious to force all students to use it.

  7. As ChatGPT becomes part of “the workplace,” students need to learn how to employ it in a range of tasks. One important step in using ChatGPT is checking whether the texts it produces are accurate and usable. That requires substantive as well as process knowledge that students need to acquire: students need to learn how to program before using ChatGPT to write code (see the sketch after this list), and they need to learn how to write well before they can quality-control (and improve) texts written by a machine. The tricky question, for me, is how we can do both: help students learn how to use ChatGPT and, at the same time, learn the things needed to check whether ChatGPT does its job correctly. More work for teachers!
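
To illustrate the questioning approach in point 4, here is one possible prompt (my own made-up example, not a tested recipe): “Here is a draft of my essay. Do not rewrite or summarize it. Instead, ask me five questions: about passages that are unclear, about points a reader would want to learn more about, and about anything important that seems to be missing.” The machine’s questions then become a revision checklist that the writer, not the machine, works through.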
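
And to make point 7 concrete, here is a small hypothetical exchange, invented for illustration. A student asks a chatbot for a Python function that averages quiz scores and gets back something like this:

    # hypothetical chatbot output, reconstructed for illustration
    def average_scores(scores):
        """Return the mean of a list of quiz scores."""
        total = 0
        for s in scores:
            total += s
        return total / len(scores)

The code looks plausible, and for most inputs it works. But it crashes with a ZeroDivisionError when the list of scores is empty, an edge case the chatbot silently ignored. A student who can actually read Python will catch that and decide what should happen instead:

    # the student’s corrected version
    def average_scores(scores):
        """Return the mean of a list of quiz scores, or 0.0 if there are none."""
        if not scores:
            return 0.0  # an explicit design decision the student, not the machine, has to make
        return sum(scores) / len(scores)

A student who cannot read the code has no way of knowing whether the machine did its job.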

There are more things to say, discuss, and struggle over in relation to ChatGPT and other generative AI tools. One thing I appreciate about these conversations is that they often go to the heart of our work: What are the things that we, as academics and as humans, need to do ourselves, and what are the things that we can outsource to a machine? What does it mean, in our different disciplines, to create authentic work? How can we assert our very own voice, distinct from a machine’s voice? What do we really want students to learn, and what are the legitimate tools that can lead them there? (And do those tools include ChatGPT?) But also: How does our human learning from, and responding to, texts differ from what a machine like ChatGPT does? If our own human learning is fair use of copyrighted material, is the data-analysis-based “learning” behind ChatGPT fair use as well, or does it violate the intellectual property of authors? And what does this tell us about intellectual property, about how we interact with texts, and about the authority of authors? These and other questions will keep us (and maybe ChatGPT) busy for a while.

 


by Andreas Broscheid

Published: Thursday, September 5, 2024
