Sunday, August 24, 2025

From Antiquity, a Reason Why AI-Generated Writing is not "Great"


Every year, I read at least one text (in translation) from Antiquity. I find that the long-term perspective gets me through the bumps in life's road. I'm currently reading On Great Writing (or if you will, On the Sublime) by Longinus, in a Hackett Classics edition I picked up at an academic conference's book room.

G.M.A. Grube makes the work come to life; we know so little about Longinus (the author lived sometime between the 1st Century BCE and the 3rd CE) that the text passes my "castaway reader" test. Here we go: a work washes up intact on your desert island. Yay, something to pass the time! Yet you have no information on the author, and no secondary sources. You must use the actual words on the page to come to a conclusion about the work's meaning.

Longinus talks about many aspects of what constitutes "the Sublime" in excellent writing, but one passage late in the text commends itself to my current students in "Writing With and About AI." I've said since 2022 that AI prose is "voiceless," and Longinus gives reasons why some prose moves us most:

Which is to be preferred in poetry and prose, great writing with occasional flaws or moderate talent which is entirely sound and faultless? . . . It is perhaps also inevitable that inferior and average talent remains for the most part safe and faultless because it avoids risk and does not aim at the heights, while great qualities are always precarious because of their very greatness.

Bad student writing is slap-dash, one-draft work that has no coherence. No wonder bored or harried students turn to AI! At the same time, why not simply give all such technically correct but average work what it should earn: a C? AI produces good, boring, safe prose. Many students who pair moderate talent with immoderate anxiety already do that. I never give them an A. For years I've said, "this piece takes no intellectual risks. You are writing to please me, not to learn something new."

In Nancy Sommers' excellent short films from the 1990s about writers at Harvard, I recall how one fourth-year student said that he learned to begin with what he did not know, starting with a question. This remark changed how I taught writing. I'm going to press my current adult students to do the same: begin with what you DON'T know. As Longinus warns us, "A world-wide sterility of utterance has come upon our life."

In discussion with ChatGPT 5 recently, I asked it about the role of human labor in a time when automation already takes some entry-level jobs. It replied, summing up a short list of human skills essential to success, "the future may need fewer button-pushers and more meaning-makers."

Good writing sways us; it shows us the meaning past the words. It stays with us, like the remark by that Harvard student. So this term, I'm asking more, not less, of my writers even as all of them use AI in their assignments. The machine has raised the bar on what constitutes excellence.

image: Raphael's The School of Athens (Wikipedia)

 

Thursday, August 21, 2025

A Stunning Lack of Search Results From Universities

AI generated image of a campus building

As a staunch supporter of an open Internet, where advice can be freely shared, I'm stumped by the lack of results from a few simple Duck Duck Go queries:

  • University sources Podcast Scripting for AI 
  • College guides Podcast scripting for AI 

Fordham's page on using ElevenLabs proved the only hit that was not from some start-up trying to hawk AI services. This outcome discourages me greatly. Here's why.

Universities, even ones under the hammer of Federal and state funding cuts, have robust I.T. presences. Internal materials for faculty and students can be shared with no more than a few high-level links. To me the lack of materials portends either a lack of care about reaching beyond the campus gates (a sad lack of marketing savvy) or, more ominously, that colleges and universities are as ill-prepared as Ian Bogost claims in his recent Atlantic piece, "College Students Have Already Changed Forever." Bogost notes that "plenty of professors are oblivious. It isn’t that they fail to understand the nature of the threat to classroom practice. But my recent interviews with colleagues have led me to believe that, on the whole, faculty simply fail to grasp the immediacy of the problem. Many seem unaware of how utterly normal AI has become for students. For them, the coming year could provide a painful revelation."

One would think that schools such as Washington U at St. Louis, where Bogost is on the faculty, would do a great deal, as Ohio State has begun with its recent initiative. I found this page on AI resources at Washington University, but you must scroll to the bottom to get resources for faculty. Mostly, the page emphasizes new degree programs for students. Digging in a bit, I found this page for teaching activities, a page on an AI summer institute, and some philosophical advice about how to design assignments. Why don't such in-house materials appear higher in search results than those of peddlers of software solutions? "They didn't give us any money" would be the answer for Google, but for Duck Duck Go, it seems dubious.

My hunch is that our schools are so overwhelmed that "sharing the wisdom" remains a very low priority. 

Luckily, and of more use to faculty, The WAC Clearinghouse has launched an initiative to build a library of open-access lesson plans for AI. I'm still not sure why universities have left it to a talented group working across our institutional boundaries to do this. I'd claim that Generative AI poses the largest challenge to how we educate in a long time, categorically different from the coming of the Internet.

image: a really lame prompt and result, "Impressionistic image of traditional campus Humanities building with students going to class," given to Dall-E, when I was first using the image generator. 

Sunday, August 10, 2025

CS Grads Face AI-Driven Unemployment

Computer Code on Screen

I have told my students, ever since early 2023, "add value to AI content if you want a job." It seems that recent Computer-Science grads have found this out the hard way. Companies are hiring far fewer entry-level coders as AI takes on that task.

 A story in the New York Times today reports on the troubles faced by this cadre of coders; some of them interviewed had applied for thousands of jobs, without a single bite. One story ended well: a young woman who had been rebuffed again and again for coding jobs found one in sales for a tech firm, probably because of her communication skills honed as a TikTok influencer.

The numbers for these students are depressing:  

"Among college graduates ages 22 to 27, computer science and computer engineering majors are facing some of the highest unemployment rates, 6.1 percent and 7.5 percent respectively, according to a report from the Federal Reserve Bank of New York."

I don't know that others are doing much better, though I was encouraged to see that History majors have an unemployment rate of 3%. My recent students contact me for letters of reference; they are taking part-time teaching jobs, going to Law School, planning to work abroad. The job market is rather grim for them, something I experienced for very different reasons in 1983, when I moved back in with my parents, took two part-time jobs paying minimum wage, and waited for times to improve. Friends went to the Peace Corps, the military, or grad school.

What is different now? On the positive side, these young people know how to build professional networks (and have technology for that which I could not have imagined). They get internships, something unheard-of except for Engineering and Business majors at Virginia in the early 1980s. On the negative side? They have been groomed from birth to get into the right school, then promised a six-figure salary. I see that among the Business-school students I teach, too. I fear they too will face a round of rejections, and soon, as AI continues to evolve and companies deploy it for jobs once thought secure from automation.

Those interviewed by the Times note how rejections can come in minutes by email; AI scans the thousands of applications for AI-related skills. None? Instant round-file for that application. I got that treatment too from firms where I naively thought I might be of service as a tech writer. With a flimsy one-page resume that consisted primarily of grocery-store work, I got snubbed.

My Humanist side says "welcome to the club" of under-employed but bright people. My humane side says "you worked hard yet you have been replaced by a machine." The answers are elusive because, as the story notes, universities are slow to incorporate AI coding into their CS curricula, the one area where some new grads find work. And to be honest, that's simply the result of an industry that caught so many of us flat-footed two and a half years ago. It takes years to change a curriculum.

I fear all those Accounting and Finance majors are next on the chopping block, as companies scale up their AI efforts.

So what do I tell my students this term? Learn to be flexible? Hone those value-adding human skills? Get ready to have side-gigs? Volunteer? Build a robust professional network? I suppose that may work...for now. I'll know more when I begin to use ChatGPT 5 soon. I fear it may be the creative genie that takes away even more jobs.

This moment marks where their and my experiences differ. When I graduated, the Federal government was not axing hundreds of thousands of jobs and no software was replacing entry-level workers wholesale. We had inflation then, but the country was run by a competent, avuncular President and sane and independent Congress. No more.

Before going to Europe for a year in 1985, and after so many rejection letters, through an aunt's contacts I managed to land a professional-writing job for the parole division of Virginia's Department of Corrections. It was soul-draining work, but it paid well. I picked up a volunteer gig tutoring ESL to Cambodian and Vietnamese refugees; that ESL experience helped me land a teaching gig in Madrid. Then grad school, then...where I am now.

Keep at it, graduates. 

Creative-Commons image: Wallpaperflare.com 

Thursday, July 31, 2025

A Bleak Future For Peer Tutoring in Writing?

 


Even before retirement from full-time teaching, I had concerns about how the concurrent emergence of AI and a neoliberal generation of senior administrators demanding "data-driven" programs and assistance might harm the autonomy of writing centers.

We are no longer lore-driven, but our work focuses first and always on human interaction rather than measurable results, work that proceeds from 50 years of scholarship and practice. What will ubiquitous AI mean for us?

We can see from the chart above, reflecting my Spring 2025 survey responses from students, that 3/4 admit to using AI for writing work. Though the survey was anonymous, I suspect the percentage to be far higher. To paraphrase what several students said in textual responses, "we are all using it, no matter what professors say." In Summer 2024, I attended sessions at the European Writing Center Association where directors reported declines in usage at their centers, as students turned more to AI for just-in-time assistance during hours when centers are closed.

I've called elsewhere for writing-center staff and administrators to be leaders as AI advances, so folks who do not teach but presume to lead universities do not tell us how to do our work. At stake? What I call a "Dark Warehouse" university on the Amazon model: much human labor replaced by technology, delivering a measurable product when and how consumers want. It's not a bad model for dog food or a cell-phone case, but it's terrible for the sort of human-to-human contact that has built the modern writing center.

I need more data from my and other schools to make any big claims, but I will focus on AI's role in this possible and disturbing future, with some data from my student surveys of the past three years.

We had a smaller number of respondents this year than in the past (in 2023, n = 112; in 2024, n = 74; this year, n = 47). Without a cohort of Writing Consultants and teaching fewer students myself, my reach was shorter, and I relied upon notices via our campus e-list. Faculty may not have sent out the survey either; many (too many) still ignore AI, and others seem uninterested in knowing what students are doing. I have no empirical evidence for these claims, but my gut reaction and stories from other campuses lend support to my hunch.

Here is another chart from the current data from Spring, 2025, for the 3/4 of respondents who used AI in some manner for writing:

Some of the "create a draft" labels get cut off, but here are the options:
  • Create a draft I would submit for a grade: 1 respondent (3%)
  • Create a draft I would not submit but use to get ideas for structuring my draft: 11 respondents (33.3%)
  • Create a draft I would not submit but use to get ideas for vocabulary or style: 8 respondents (24.2%)
  • Create a draft I would not submit but use to incorporate sources better: 6 respondents (18.2%)
The range of uses from the chart maps well onto the tasks done by writing centers, except we don't write anything for clients, beyond modeling a sentence or two. We ask questions of writers, which some systems, such as Anthropic's Claude, began to do as recently as Spring 2025.
 
Interactivity in natural language, with the first metacognitive questions from AI, raises a question or three about how AI might replace human tutors, especially in the wee hours or at the busiest times of the semester. I've found that AI assessment of drafts (with the right prompts!) can be as good as the feedback I give as a human.
 
Ironically, even as institutions like mine want to measure everything, we've made seeing humans more onerous, so students may simply seek commercial AI instead of campus services. My institution set up a cumbersome system for students to book meetings. The intentions were good (track students' progress and needs), but undergrads are simply not organized enough, in my experience, to heed the details needed to book a meeting. Many exist in a haze of anxiety, dopamine fixes from phones, and procrastination. ChatGPT asks for nothing except a login using their existing credentials.
 
The notion of walk-in appointments, which had been the rule when I directed the center, remains for us at Richmond, but students get pushed to the new system and need to register even during a walk-in. This added level of bureaucracy confused and daunted many who stopped by during my final semester of full-time work, when I worked one shift weekly myself as a writing consultant.
 
I argued against adding more complexity to our system, in vain. The collected data on writers seemed sacred. We had to count them, to count everything. My counterpoint? You want students to come? Just let the kids walk in and help them; the human helper does the bean-counting later. AI, on the other hand, invisibly counts those beans for its corporate owners (and trains itself). It serves needs at 3am and in a heartbeat. One need not leave one's chair to get assistance that, as my students have found in my classes, steadily improves by the semester.
 
Instead of wrestling over social-justice concerns in our journals, we might focus our limited time on how to avoid becoming amateur statisticians for Administration. We might concentrate our energies on how we get students to come to us, not AI, when they are stuck or panicking. 
 
I have no clear advice here, except: make a tutorial with a human as seamless as with AI. That goal seems more important to me than all the arguing about students' rights to their own voices, since AI tends to reduce prose to a voiceless homogeneity. Getting students to see human tutors offsets some of the environmental and labor consequences of AI, too. If students see humans more, their carbon footprints are smaller and those tutors stay employed. 
 
Get them in our doors and, yes, employ AI ethically for refining work already done: to add voice and nuance, to remove falsehoods dreamed up by machines, and to say something exciting. I'm pessimistic that the bean-counting, non-teaching leaders of many colleges will heed my advice. The Dark Warehouse seems more at hand than when I published my article early in 2023.