Life isn’t fair, is it? No one consulted us teachers and educators before launching these large language models that would effectively break many parts of education as we know it. And now it’s our responsibility to fix things, to rebuild and rethink, because no one else could do a good job of it. So, where do we go from here? How do we meet the challenges this disruption imposes, and might it also create new opportunities to transform the teaching practice and the learning experience for the better?
Let’s survey the challenges. The most obvious casualty of large language models is the essay assignment, or really any written assignment not completed in class. Cheating on essays, book reports and research papers has always been possible, but chatbots have brought the convenience of wholesale plagiarism, and the likelihood of getting away with it, to a level previously undreamt of. And because these generators can be accessed from any smartphone or other device, students can potentially take them into class and use them on exams.
But the challenge goes far beyond cheating. Students will outsource their thinking to these models, relaying teachers’ questions to them as prompts and parroting back the output. This short-circuits the learning process entirely. I’ve experienced this already in a large college lecture hall. After I explained the diminishing effect of inductive stabilization through sequential covalent bonds, a well-meaning student eagerly recited an explanation by analogy that she had just asked ChatGPT for. Like so much LLM output, the analogy seemed plausible at first blush but fell apart under scrutiny, and I took the opportunity to remind the class that ChatGPT doesn’t think, exactly. Now, I am sure that an A.I. optimist would call this a beautiful example of how chatbots can be leveraged to facilitate and enhance classroom discussion, but I see it as an unwelcome interloper that frustrates the learning experience by taking the student’s place in the driver’s seat. I want the student to engage with the material, to come up with their own metaphors and interpretations that we can analyze together, not to stand on the sidelines while I spar with a chatbot that is stochastically simulating expertise in my subject. Worse yet, what about all the students who go to ChatGPT later for a second explanation, don’t have the output vetted by me, and come away falsely confident in a flawed or superficial understanding of the topic? How do we make sure that students are putting in the mental work necessary to build real understanding, and how do we fairly grade them for their effort?
One major theme connecting many of the proposed solutions is a pivot from written assignments to assessing students in the classroom. Many have suggested going back to oral exams, putting more emphasis on classroom discussion, or having students compose drafts on the spot: anything that requires them to demonstrate knowledge and understanding without the opportunity to consult a chatbot. There are, however, considerable practical challenges to implementing these solutions. Oral exams eat up a lot of class time and can be downright impractical for larger classes. And having students compose a paper on the spot during class seems like a criminal misuse of precious instructional time. These solutions assume a wealth of time with the students and resources that we simply do not have.
Perhaps a more practical solution is to change the structure of the learning experience so that students are forced to engage with the knowledge and concepts in real time. Even before the A.I. crisis, there was a movement in higher education toward classroom innovations that allow for more student-led active learning. Classroom transformations such as the flipped class, where students watch recorded lecture videos ahead of time and spend class time working in self-directed groups under the teacher’s supervision, provide the opportunity to assess students on their work without sacrificing instruction time. If implemented well, these and other student-centered, process-oriented approaches could be welcome innovations at all levels of education. I personally favor a hybrid approach that weaves elements of the flipped class into a traditional lecture: students periodically break out into group learning sessions to work on the practical exercises my lecture is built around, discussing things amongst themselves while I walk around and keep everyone on track.
On a final note, let me appeal to those who don’t share my personal pessimism (doomerism, perhaps) toward generative A.I. in the context of education. If you think that chatbots are harmless or even have a positive role to play in the learning process, you still have to recognize that the landscape has changed because of them, and that the challenges we face as educators in the wake of this change, attested to by students themselves, are real. The potential solutions mentioned above are perfectly compatible with students using generative A.I.; indeed, they are predicated on the assumption that there is nothing we can do to stop it. But we cannot go on as if nothing has changed, grading papers we know were written almost entirely by A.I. That way lies mutual resentment and cynicism, until instructors finally say: fine, two can play at this game. Since the chatbots did the writing, I’ll let the chatbots do the grading. The result is a complete lose-lose situation in which, to paraphrase an old joke, we pretend to teach and they pretend to learn. We can’t accept that outcome.