Thursday, July 31, 2025

A Bleak Future For Peer Tutoring in Writing?

[Chart: Spring 2025 student survey responses on AI use for writing work]

Even before retirement from full-time teaching, I had concerns about how the concurrent emergence of AI and a neoliberal generation of senior administrators demanding "data-driven" programs and assistance might harm the autonomy of writing centers.

We are no longer lore-driven, but our work focuses first and always on human interaction rather than measurable results, work that proceeds from 50 years of scholarship and practice. What will ubiquitous AI mean for us?

We can see from the chart above, reflecting my Spring 2025 survey responses from students, that three-quarters admit to using AI for writing work. Though the survey was anonymous, I suspect the actual percentage is far higher. To paraphrase what several students said in textual responses, "we are all using it, no matter what professors say." In Summer 2024, I attended sessions at the European Writing Center Association conference where directors reported declines in usage at their centers, as students turned more to AI for just-in-time assistance during hours when centers are closed.

I've called elsewhere for writing-center staff and administrators to be leaders as AI advances, so folks who do not teach but presume to lead universities do not tell us how to do our work. At stake? What I call a "Dark Warehouse" university on the Amazon model: much human labor replaced by technology, delivering a measurable product when and how consumers want. It's not a bad model for dog food or a cell-phone case, but it's terrible for the sort of human-to-human contact that has built the modern writing center.

I need more data from my school and others to make any big claims, but here I will focus on AI's role in this possible and disturbing future, with some data from my student surveys of the past three years.

We had fewer respondents this year than in the past (in 2023, n = 112; in 2024, n = 74; this year, n = 47). Without a cohort of Writing Consultants, and teaching fewer students myself, my reach was shorter, and I relied upon notices via our campus e-list. Faculty may not have sent out the survey either; many (too many) still ignore AI, and others seem uninterested in knowing what students are doing. I have no empirical evidence for these claims, but my gut reaction and stories from other campuses lend support to my hunch.

Here is another chart from the Spring 2025 data, for the three-quarters of respondents who used AI in some manner for writing:

Some of the "create a draft" labels get cut off, but here are the options:
  • Create a draft I would submit for a grade: 1 respondent (3%)
  • Create a draft I would not submit but use to get ideas for structuring my draft: 11 respondents (33.3%)
  • Create a draft I would not submit but use to get ideas for vocabulary or style: 8 respondents (24.2%)
  • Create a draft I would not submit but use to incorporate sources better: 6 respondents (18.2%)
The range of uses in the chart maps well onto the tasks done by writing centers, except that we don't write anything for clients beyond modeling a sentence or two. We ask questions of writers, something that systems such as Anthropic's Claude only began to do as recently as Spring 2025.
 
Interactivity in natural language, now with the first metacognitive questions coming from AI, raises a question or three about how AI might replace human tutors, especially in the wee hours or at the busiest times of the semester. I've found that AI assessment of drafts (with the right prompts!) can be as good as the feedback I give as a human.
 
Ironically, on my campus we've made seeing humans more onerous. Institutions like mine want to measure everything, and the result is that students may simply turn to commercial AI instead of campus services. My institution built a rather cumbersome system for students to schedule meetings. The intentions were good (track students' progress and needs), but undergrads are simply not organized enough, in my experience, to manage the details needed to book a meeting. Many exist in a haze of anxiety, dopamine fixes from phones, and procrastination. ChatGPT asks for nothing except a login with their existing credentials.
 
The notion of walk-in appointments, which had been the rule when I directed the center, remains for us at Richmond, but students get pushed to the new system and must register even during a walk-in. This added layer of bureaucracy confused and daunted many who stopped by during my final semester of full-time work, when I worked one weekly shift myself as a writing consultant.
 
I argued, in vain, against adding more complexity to our system. The collected data on writers seemed sacred. We had to count them, to count everything. My counterpoint? You want students to come? Just let the kids walk in and help them; the human helper can do the bean-counting later. AI, on the other hand, invisibly counts those beans for its corporate owners (and trains itself). It serves needs at 3 a.m. and in a heartbeat. One need not leave one's chair to get assistance that, as my students have found in my classes, steadily improves semester by semester.
 
Instead of wrestling over social-justice concerns in our journals, we might focus our limited time on how to avoid becoming amateur statisticians for Administration. We might concentrate our energies on how we get students to come to us, not AI, when they are stuck or panicking. 
 
I have no clear advice here, except this: make a tutorial with a human as seamless as one with AI. That goal seems more important to me than all the arguing about students' rights to their own voices, since AI tends to reduce prose to a voiceless homogeneity. Getting students to see human tutors also offsets some of the environmental and labor consequences of AI. If students see humans more, their carbon footprints are smaller and those tutors stay employed.
 
Get them in our doors and, yes, employ AI ethically for refining work already done: to add voice and nuance, to remove falsehoods dreamed up by machines, and to say something exciting. I'm pessimistic that the bean-counting, non-teaching leaders of many colleges will heed my advice. The Dark Warehouse seems more at hand than when I published my article early in 2023.

 

 

Tuesday, July 8, 2025

A Class Policy on AI Hallucinations

 

Image from Hitchcock's film Vertigo


As I noted in my last post, a student in my May-term class committed academic dishonesty in a reading journal I'd sought to make AI-resistant. This to me is a more serious issue than turning in work an AI generated. Why is that?

In my course, I provide careful guidelines for how students may use AI to help them. For instance, since today's undergrads are generally awful readers, I allow them to employ AI to help them understand themes in readings and connections between readings. Some students in their evaluations noted that the graded work here "forced them" to do their readings. 

Yes, readers, thank you for reading this post. Be advised that many college students no longer do any class readings, unless forced, even at selective institutions. To me, that's a gateway to a new Dark Age, nothing less.

My method, following my rules for multiple-entry Google Workspace reading journals, requires students to fill one column with a quotation or summary of a key event, a second column with an analysis of the event, and a third with a question to bring to class. I also include a mandate to comment weekly on a peer's journal. I got this notion from John Bean's excellent book Engaging Ideas; you can learn more in the third edition of Bean's classic, with his co-author Dan Melzer.

The student who misused AI had done good work for earlier assessments, yet for the final one asked an AI to find notable quotations. That was not against my policies. What was? In two instances, the AI invented direct quotations not in the readings. The writer, too harried or too complacent to check, did not do a word search of the originals.
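That word search takes seconds, and it can even be automated. Below is a minimal sketch, in Python, of the kind of check a writer (or instructor) could run, assuming the reading is saved as a plain-text file; the filename and quotations here are hypothetical, and differences in punctuation or a heavily edited excerpt could still require a manual look.

    # Minimal sketch (hypothetical filename and quotations): check whether
    # quotations an AI "found" actually appear verbatim in the source text.
    import re
    from pathlib import Path

    def normalize(text: str) -> str:
        # Collapse whitespace and lowercase so line breaks and capitalization
        # don't hide a genuine match.
        return re.sub(r"\s+", " ", text).lower()

    def quote_in_source(quote: str, source_text: str) -> bool:
        return normalize(quote) in normalize(source_text)

    source = Path("reading.txt").read_text(encoding="utf-8")  # hypothetical file
    quotes = [
        "an exact sentence copied from the assigned reading",
        "a plausible-sounding line the AI may have invented",
    ]
    for q in quotes:
        verdict = "FOUND" if quote_in_source(q, source) else "NOT FOUND -- check the AI"
        print(f"{verdict}: {q!r}")

Even without a script, the habit is the same: paste each quotation into a find-in-page search of the original before trusting it.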

I gave the writer an F on that assessment, which pulled down the final course grade. In my reasoning, I noted that had this occurred on the job, the student would likely have been fired. Best to learn that ethical lesson now, while the stakes were relatively low (though to a perfection-obsessed undergrad, the stakes may have seemed high indeed).

We discussed the matter in a cordial way; the writer had done well on earlier work but, for reasons still fuzzy to me, failed on this final assessment. So in future classes, I'll change two things. First, my policy on AI hallucinations will be harsh: if the assessments are as frequent as in my recent class, there will be no chance to revise. In classes where assessments are less frequent, the F will be applied, but the writer will get to revise the journal and I will average the two grades.

Second, I'll add a new requirement for synthesis with earlier readings: yes, a fourth column! This skill is woefully lacking in students, who seem unable to construct consistent narratives across the work done in a class. This I attribute to a "taught to the test" mentality in high school as well as a disconnected learning experience and, for some, lack of passion for learning afterwards.

As for AI hallucination? Though Ethan Mollick and OpenAI claim that larger models hallucinate less now than in the early days, I'm not so sure. Mollick tends to use what I call "concierge" AIs that cost quite a bit; my students generally use a free model and do not engineer their prompts well. 

You can read more about a comparison of different LLMs and their hallucination rates here. I still feel that we remain a long way from knowing which LLM to trust and when, but the OpenAI article does provide good tests we humans can apply to check the AI's output.

Always check its work. My student did not, and paid a price, rightly so. 
