I’m using an extra credit assignment I created, “Using Generative AI to Create Mini Bibliographies,” for the first time this summer in a class on “Literature of the Fantastic.” This is a 10-week, asynchronous, upper-division class that I’m teaching for the first time, and I created the supplementary assignment specifically for it. (You can read more about the assignment here.)
These “Mini Bibliographies” are a fairly straightforward annotated bibliography assignment. My goal with the assessments in the class overall was to make the steps of the research process in literary studies discrete: close reading skills would be developed through a close reading journal, while summarizing, assessing, and synthesizing secondary sources would be developed through the mini bibliographies.
AI-Resistant Assignments
I know that creating annotated bibliographies is a task easily offloaded to generative AI. Like other educators, I’m now constantly working on designing assessments that are “AI-resistant,” even though that is becoming increasingly difficult as AI improves, not to mention that a majority of my classes are asynchronous. Nevertheless, summarizing, assessing, and reflecting on secondary sources are necessary skills, and I think that will remain true even if gen AI can one day summarize texts accurately 100% of the time.
Take summarizing a text, for example: a summary is not an objective fact. It is an act of interpretation through which we learn what we think. When we do research, we read what other researchers have said about a topic in order to understand that topic better. An AI tool can present us with a summary of a text, but that summary is based on nothing more than predictive logic.
An AI-generated summary can be a helpful place to start for an overview of a piece of writing, but it cannot replace reading a text closely and critically. A critical reading poses questions of a text, questions that are themselves the result of having acquired a body of knowledge and the capacity to analyze and interpret arguments and evidence.
An annotated bibliography is a “bad” assignment if the goal is to produce authentic, AI-resistant assessments. But the skills required to create an annotated bibliography are core learning outcomes in the humanities: textual analysis and interpretation. For some students, writing an annotated bibliography may be a task they’ll never perform again once they graduate from college. For others, it will be a major part of their research process as they pursue graduate degrees. In the middle are the bulk of students who, in their personal and professional lives, will be required to perform analysis and interpretation in some form or another. The annotated bibliography is a “good” assignment insofar as it breaks down secondary research into discrete tasks, each of which can be learned and assessed individually.
Would I assign only annotated bibliographies in a class? Absolutely not. And in this class, my mini bibliographies are each worth 10% of the course points, and they ask students to annotate just two texts each: one provided by me and one that they find in their own independent research. As such, the task is tightly focused on the course outcomes and small enough in scope that it is neither overwhelming nor perceived as “busy work.”
Nevertheless, AI.
There are three reasons why I designed an extra credit option to use gen AI in the creation of these bibliographies:
1. The assessment is absolutely not AI-resistant.
2. Portions of the task are easily offloaded to gen AI.
3. If students are submitting AI-generated work as their own anyway, especially in the context of 1 and 2, then the assignment is an excellent opportunity to have students use AI intentionally and reflectively.
In my previous piece, I described how the optional portion of the assignment gave students carte blanche to “cheat”:
Although claiming work generated by AI as your own is a violation of academic integrity, you will not be penalized for your bibliography if you respond in ways that seem antithetical to academic integrity. For example, if your reflection indicates it felt like you were cheating, or that you used AI as a crutch rather than a tool, that’s okay. This is because, if you are participating in this process, you will be learning about how to use AI effectively and ethically and communicating transparently about how you are using these tools as part of a learning process.
I’m not saying that the work required of the optional portion of the assignment is easy. It’s definitely not. But if we want students to use gen AI in a way that actually promotes critical thinking, we have to design assessments in which they engage meaningfully with, challenge, and critique AI-generated content, and we have to design assessments in which students reflect on how and why they are using these tools. That’s hard. Period. It requires students to do the work plus more, at least in the beginning.
Cheating Anyway
Therefore, I wasn’t entirely shocked to find a lot of writing in this class that I suspected was written by AI but that didn’t come with the optional extra credit documents. But as I wrote about earlier, for a variety of reasons, I prefer to take a head-in-sand approach when it comes to assessing such work. Then a student emailed me a source for my approval. It looked legit. The title seemed like one that would appear in the real journal named in the citation. But it was completely fabricated. I returned to some of their earlier work and found similarly fabricated sources, all summarized, assessed, and reflected on as though they actually existed.
This was disappointing, especially in the context of a class that provides so much guidance and that explicitly encourages students to participate in a process that both allows them to cheat and provides a grade-based incentive for reflecting on academic integrity and beginning to develop their own research ethics.
One of the biggest emotional challenges in dealing with this situation was the time wasted. I spent hours of my life looking for this source in order to guide a student who so earnestly asked for my guidance under totally false pretenses, listening to the student explain why I should feel sorry for them, and documenting the case for the office of student conduct. Those are hours I couldn’t spend designing future classes, developing better assessments, interacting with students who are engaged in the learning process, or conducting research. And it’s not just my own time or resources. If I suspect that this work (or anyone else’s) is generated by AI, then my students do too, and the whole learning process is undermined.
I’m not giving up on the assessment. I think it’s good. And as with cheating in any era, some students show up to learn and others don’t. I do believe it’s my job to create learning environments and experiences in which students are challenged and can thrive, in which learning is incentivized and cultivated and cheating is disincentivized. But students not showing up is usually relational: very few things in teaching are solely the responsibility of the educator or solely the responsibility of the learner. So how do I learn from this? How do I create learning environments, especially asynchronous ones, that cultivate students’ interest in research ethics?
Students Lack Guidance
In one student’s submission of the optional portion of the assessment, they wrote that this was only the second class they’d taken that actively addressed the use of generative AI beyond prohibiting it. There are thousands of educators trying to figure out how to navigate this landscape, and yet my student had no access to any discourse about the role of AI in their education and no guidance about how they could or should use it, let alone guidance about how to reflect meaningfully on their own research ethics with respect to AI. This seems like a failure of the system to me.
Also, life.
Students Are Busy and No One Cares
There are still plenty of “traditional” students, but even at “traditional,” four-year, public, land-grant universities like mine, fewer and fewer students fit that “ideal,” and this is even more true in courses offered online. A typical class of mine is probably half traditional and half non-traditional students, as we usually define those terms. But most of my traditional students have responsibilities and face challenges that weren’t part of college life 30 years ago. Almost all of them are taking a full course load, and almost all of them are working at least 30 hours per week, on top of the other demands of both college and “normal life.”
First of all, I cannot understand how this is tenable.
Second, the only reason I know this is because I ask. There is no formal or systematized way for me to learn about my students’ everyday lives. Institutionally, they are names or numbers that populate my LMS.
One thing that I hear consistently on my end-of-semester course evaluations is that my students feel like they’re real human beings in an actual class. They can’t believe it, because their education up to this point has consisted of their being names and numbers in someone else’s LMS.
So, cheating.
If I had to take five classes on top of working a full-time job, taking care of two kids, one of whom had a significant disability, going through a divorce, and surviving a natural disaster, I can’t see a reality in which I wouldn’t cheat too.
AI is obviously a significant concern for me, but honestly, the system is broken. Maybe that broken system is exacerbated by AI, but the real problem is not AI.

