Tuesday, October 14, 2025

AI Bubbles and My County: Saying "Not Yet" to Big Data Centers

Data Center in Rural Area


Note to readers: A version of this post will be submitted to our Board of Supervisors as a formal letter, as well as to a local news outlet. We'll see if it even makes a ripple. 

Even those wary of AI agree that it can do clever work. I use it regularly with students to shape their writing ethically and to design assignments. But can the industry make enough money to continue its progress? A great deal hinges on that answer, including development on the western border of the Richmond Metro Area.

A proposal before Goochland’s Board of Supervisors would allow data centers, perhaps powered by small nuclear reactors, to be built along the Route 288 corridor. Setting aside the environmental hazards, as someone who researches the industry I have found real economic risks in rushing to join this AI gold rush.

CEOs constantly make utopian claims about AI's future, but the numbers cast doubt on those predictions. Subscriptions such as mine brought ChatGPT maker OpenAI under $2.5 billion in revenue in the second half of 2024, according to sources including The Wall Street Journal and The New York Times. The rest of its funding came from venture capital or circular investment, such as Nvidia's and Microsoft's massive stakes in OpenAI.

The firm estimates that building out its data centers would cost $400 billion. For such ambitious plans, companies need a lot more revenue to balance their books. An "AI Bubble" may be inflating, as several economic journalists have warned. Writing for The Atlantic, Rogé Karma notes that “In the first half of this year, business spending on AI added more to GDP growth than all consumer spending combined.” A crash in the AI sector would deal devastating blows to the economy, with local effects when superfluous data centers go dark.

For his part, OpenAI’s CEO Sam Altman focuses on big ideas that seem straight from science fiction. He writes that humans and computers will soon become one form of life, which he calls “The Merge.” To quote his blog: "unless we destroy ourselves first, superhuman AI is going to happen, genetic enhancement is going to happen, and brain-machine interfaces are going to happen." Not all of us would call such things progress, but history shows that technological breakthroughs take time and a lot of money. Quiet supersonic airliners and nuclear fusion plants have inched toward reality over decades.

It may well take that long before AI data centers stop raising ratepayers’ electric bills; we have already seen such increases from existing data centers in the Commonwealth. Before we consign so much open land to data centers locally, let’s consider what experts in the field who are not CEOs or marketers have to say.

A robotics researcher, hired away from my university by Google, insisted to me that human-level thinking cannot be engineered; at best we might imperfectly simulate it, at great expense, via a "brute force" computing method run in huge data centers. I recently noted in class that we might improve on that, given faster computing from fewer chips, following Moore's Law, the observation that the number of transistors on a chip, and with it computing power, doubles roughly every two years. A student challenged me, claiming that silicon-based chips have not enjoyed the annual leaps in power and speed we came to expect through the early 2000s.

Several academic papers I looked up support that idea. Without new types of microprocessors or exotic quantum computing, we are stuck with today’s data centers. Meanwhile, our AI models remain far from perfect. Alex Reisner of The Atlantic, as well as academic researchers, found developers fudging the benchmark tests that measure AI's smarts: they give the software the test questions in advance, a tactic beloved of college students for many years. When AI faces novel tasks, however, it scores far lower. Rogé Karma’s investigative work bears this out; coders' productivity dropped by 20% when they worked with AI, as they spent more time correcting its mistakes.

Other bubbles have popped with local consequences. A few years ago, I addressed the Board of Supervisors, whose agenda that night included motions to increase suburban-style growth in the county's east end. I opposed this for many reasons, but one stood out: we had made zoning changes before based on empty promises, when we developed an office park called West Creek for an anchor client, Motorola. Its semiconductor plant would, in theory, have provided many hundreds of jobs.

My late father-in-law, Edward Nuckols, had addressed the Supervisors in the 1990s. He was a respected master mechanic and businessman with deep ties to the community; his auto shop, unlike Motorola, had been in operation here for decades. Ed warned that big companies change their minds.

So it came to pass: Motorola backed out of its promises, and Goochland was left with over $100 million in debt. One Supervisor reminded me that the county had little choice but to seek other revenue streams to repay that sum. That is a fair counterargument, but it does not seem we have learned much. Even if Altman and other CEOs are right, such men have no ties to any locality save the posthuman utopias they dream of inhabiting. To them, $100 million of debt means nothing.

I’d love to see cheaper, more sustainable AI as a partner in our work. Today's models, such as ChatGPT 5 and Anthropic's Claude, as well as lesser-known but powerful tools such as NotebookLM and Research Rabbit, greatly help my students. We engineer prompts to find credible sources that are not hallucinated; we check human-created work for accuracy and voice; we write scripts for podcasts that AI converts to broadcast-quality audio. Plagiarism, the bugaboo of academic AI, concerns me less than helping my students gain skills for workplaces slowly adopting AI.

My classes emphasize critical thinking when working with AI. So I'll ask Goochland's Board of Supervisors to do some critical thinking and research of their own, uninfluenced by the hype of Silicon Valley's billionaires. Altman claims intelligence will soon be "too cheap to meter." Older folks will recall that fable about electric bills: nuclear power was to make electricity too cheap to meter, a promise made in the 1950s and '60s.

Otherwise, we could be left with more debt and large, empty buildings crumbling in the rain along 288. 

Creative Commons Image: Bioethics.com 

Monday, September 15, 2025

Are Humanists All Like This? Tyler Austin Harper's Strange Ideas About AI on Campus

Cambridge University in Ruins

I'm fond of The Atlantic; I've subscribed for more than four decades. I also enjoy Tyler Austin Harper's articles; his 2024 piece on extreme fishing really moved me with its stories of combat veterans risking their lives at night, swimming out in deep water to find offshore rocks, all to catch striped bass. Harper tried the sport himself; his participation added vividness to his journalism.

As a professor, he is likewise a participant when writing about education. Thus he should know enough about students to avoid the impracticable, even ludicrous, ideas in his newest article, "The Question All Colleges Should Ask Themselves About AI." The title drew me right in, as I too am concerned that institutions of higher education do not comprehend the scope of AI usage by students or what that may portend for their learning. 

So much damage had already been done before LLMs arrived to thwart the critical-thinking skills we faculty hold so dear. We live in a time when phones and social media have sapped the attention spans and reading comprehension of all ages, and when students emerging from taught-to-the-test K-12 US education lack the ability to construct narratives around the content they do encounter in class. Instead, they often ask me for a rubric or a checklist. When I ask them to tell me what most interests them about our topic, I usually get blank or anxious looks.

Harper's piece begins in this territory, with worries we share. Soon, however, he turns to a "solution" from Megan Fritts of the University of Arkansas at Little Rock: a ban on AI use in the Humanities classroom.

That stance seems not too different from the CCCC Chair's address I rebutted here some time back. I canceled my NCTE membership this year in response to that extremist position by the organization's leadership.

Yet even the philosophy Fritts embraces proves not radical enough for Harper:

Shunning AI use in classrooms is a good start, but schools need to think bigger than that. All institutions of higher education in the United States should be animated by the same basic question: What are the most effective things—even if they sound extreme—that we can do to limit, and ideally abolish, the unauthorized use of AI on campus?

Harper's drive to "abolish" includes: 

  • Getting rid of WiFi, laptops, and phones on campus
  • Requiring all writing assignments to be completed in labs without Internet access
  • Requiring all students to live on campus
  • Ending accommodations for neurodiverse students if they include AI (human helpers would fill the gap, as they did before)
  • Penalizing any student caught using AI for academic work, up to and including expulsion.

I half expected to find he was writing for The Onion, ending with a call for wax tablets for all written work. I can imagine such a policy at my employer leading to a 95% drop in applications. I can also imagine new AI-friendly schools, many for-profit, rushing to fill the gap. I can envision ADA-based lawsuits by neurodiverse learners and their parents.

I won't go on, except to say that Harper's draconian, neo-Luddite solution is not a realistic answer.

Instead I'll end with a question that Fritts asks near the start of her article, as she and others on an AI Committee wrestle with the signature pedagogical challenge of our time: "If this robot can write these books better than we can, then what are we doing here?" Well asked.

Unless we faculty can add value to AI-written content, maybe we don't deserve jobs anymore. I've wondered that since the "Theory Wars" in English and related fields of the late 1980s and early 1990s. Poorly aping French Post-Structuralist Theory, faculty in PhD programs ruined the lives of many a graduate student and likely turned off legions of potential undergrads who might have studied literature. I saw this in dozens of cases at Indiana University's PhD program.

At that time, we lost the thread of meaning--and damn the Deconstructionists who managed to deconstruct meaning out of our field. But the meaning destroyed was not only in texts that lacked narrative cohesion under the scrutiny of theory; the meaning lost included the joy (or jouissance, if you read Barthes) of studying literary work and finding connections to improve one's own life.

Fritts takes us back before my field's epistemological disaster when she notes that "the aim of our disciplines is the formation of human persons." If we begin there, enshrining that principle as the most important one a college or university (in any discipline) can uphold, we have made a good start.

Image: OpenArt, Cambridge University in Ruins 

Wednesday, September 3, 2025

ChatGPT 5 Passes "The Big Log Test"

Image of Robert Plant

You may recall that some time back, I had a hilarious exchange with an earlier version of ChatGPT about Robert Plant's song "Big Log." Now that version 5 has appeared, I wanted to see whether the AI would still insist that the song had anything to do with logs made of wood.

No rolling logs this time. The AI did some speculating, but not hallucinating. 

I find the exercise playful, something lacking in the anti-AI resistance of one important organization in my field. At the same time, The Big Log Test shows how quickly the AI has reduced its hallucinations. I'd invite readers to try something similar with a favorite song, book, film, or TV series that has some aspect that could confuse an AI; a short script for automating such a test follows the transcript below. Here we go:

Prompt: This is a routine check I do as your model improves. I'm curious about any meanings in Robert Plant's song "Big Log." Why do you think he chose that title for his song? Do you see any clues in the lyrics? 

Reply:

Good check-in question — "Big Log" is one of Robert Plant’s more mysterious titles. The lyrics themselves are impressionistic, dreamlike, and full of wandering imagery: travel, desire, distance, loneliness, fate. The phrase “Big Log” never appears in the song, so the title works more like an oblique symbol or signpost. A few possibilities that critics and listeners have suggested:

1. Truckers and Highways – In trucking slang, a “log” is a driver’s logbook, and a “big log” could mean a long haul or a long road. Since the song is steeped in imagery of roads, headlights, and endless travel, the title could gesture toward the journey itself.

2. Nature and Obstacle – A literal “big log” might be something blocking a path, a metaphor for distance, fate, or emotional barriers. The lyrics’ recurring sense of inevitability (“My love is in league with the freeway…”) could fit this interpretation.

3. Wordplay and Surrealism – Plant often uses surreal or playful titles (e.g., “In the Mood,” “Heaven Knows”). “Big Log” may simply be an evocative, earthy phrase chosen to contrast with the ethereal quality of the lyrics and music.

4. Personal/Private Reference – Sometimes Plant titles songs with private jokes, fragments of conversation, or studio nicknames. It’s possible “Big Log” had an offhand origin unrelated to the lyrics, and he leaned into its strangeness.

Clues in the lyrics point strongest to the travel/journey theme: headlights, the freeway, endless searching, being pulled by forces outside oneself. “Big Log” might stand in as the symbolic “record” of that journey — the unending trace of desire and distance.

Follow-Up Prompt: So there are no references to logs in the song?

Reply:

That’s right — there are no actual references to logs (trees, wood, or otherwise) in the lyrics of “Big Log.”

The imagery in the song is all about roads, headlights, distance, fate, longing — not forests or lumber. That’s part of what makes the title puzzling: it seems deliberately disconnected from the lyric surface.

If anything, the best textual “bridge” is the idea of a logbook (the record of a journey), which fits the recurring highway and travel motifs. But Plant never makes that link explicit in the words of the song.

Would you like me to give you a quick “lyrics-only” close reading to show how the imagery develops (and where a title like Big Log might feel thematically tethered, even if indirectly)?

Follow-Up Prompt:

You have clearly evolved. Earlier chats referenced rolling logs, fallen logs, and so forth. Good work.

Reply:

I’m glad you noticed that difference! Earlier, I sometimes filled the “gap” in the title by speculating more literally — things like fallen trees or rolling logs. With another pass at the song, though, it’s clear Plant never brings that imagery in, so it’s more faithful to stick with the journey / logbook / metaphorical record angle.

Plant’s titles often have that teasing quality — they look literal but carry some private or surreal weight (a kind of wink at the listener). “Big Log” works almost like a signpost planted alongside the road of the song, rather than a description of what’s in it.
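
For readers who would like to automate this kind of check, here is a minimal sketch of how one might script it. It is an illustration only, not a prescription: it assumes the official OpenAI Python SDK (pip install openai) with an OPENAI_API_KEY set in your environment, the model name is a placeholder for whichever model you can access, and the ask() helper is my own invention.

    # A minimal sketch of an automated "Big Log Test."
    # Assumptions: the official OpenAI Python SDK is installed and an
    # OPENAI_API_KEY is set in the environment; MODEL is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    MODEL = "gpt-5"    # assumption: substitute any chat model you can access

    def ask(history, question):
        """Add a question to the running conversation; return the model's reply."""
        history.append({"role": "user", "content": question})
        response = client.chat.completions.create(model=MODEL, messages=history)
        answer = response.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    chat = []
    print(ask(chat, 'Why do you think Robert Plant titled his song "Big Log"? '
                    'Do you see any clues in the lyrics?'))
    # The follow-up probes for hallucination: a model that invented details
    # (rolling logs, fallen trees) will often defend them when pressed.
    print(ask(chat, "So there are no references to logs in the song?"))

The method matters more than the code: ask an open question about the confusing detail, then press with a follow-up that would expose any invented answer.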

Sunday, August 24, 2025

From Antiquity, a Reason Why AI-Generated Writing is not "Great"


Every year, I read at least one text (in translation) from Antiquity. I find that the long-term perspective gets me through the bumps in life's road. I'm currently reading On Great Writing (or, if you will, On the Sublime) by Longinus, in a Hackett Classics edition I picked up at an academic conference's book room.

G.M.A. Grube's translation makes the work come to life; we know so little about Longinus (the author lived sometime between the 1st Century BCE and the 3rd CE) that the text passes my "castaway reader" test. Here we go: a work washes up intact on your desert island. Yay, something to pass the time! Yet you have no information about the author and no secondary sources. You must use the actual words on the page to come to a conclusion about the work's meaning.

Longinus discusses many aspects of what constitutes "the Sublime" in excellent writing, but one passage late in the text commends itself to my current students in "Writing With and About AI." I've said since 2022 that AI prose is "voiceless," and Longinus gives reasons why some prose moves us most:

Which is to be preferred in poetry and prose, great writing with occasional flaws or moderate talent which is entirely sound and faultless? . . . It is perhaps also inevitable that inferior and average talent remains for the most part safe and faultless because it avoids risk and does not aim at the heights, while great qualities are always precarious because of their very greatness.

Bad student writing is slapdash, one-draft work with no coherence. No wonder bored or harried students turn to AI! At the same time, why not simply give such technically correct but average work what it should earn: a C? AI produces good, boring, safe prose; so do many students who pair moderate talent with immoderate anxiety. I never give them an A. For years I've said, "This piece takes no intellectual risks. You are writing to please me, not to learn something new."

In Nancy Sommers' excellent short films from the 1990s about writers at Harvard, I recall how one fourth-year student said that he learned to begin with what he did not know, starting with a question. This remark changed how I taught writing. I'm going to press my current adult students to do the same: begin with what you DON'T know. As Longinus warns us, "A world-wide sterility of utterance has come upon our life."

In a discussion with ChatGPT 5 recently, I asked it about the role of human labor in a time when automation already takes some entry-level jobs. It replied, summing up a short list of human skills essential to success, that "the future may need fewer button-pushers and more meaning-makers."

Good writing sways us; it shows us the meaning beyond the words. It stays with us, like the remark by that Harvard student. So this term, I'm asking more, not less, of my writers, even as all of them use AI in their assignments. The machine has raised the bar on what constitutes excellence.

Image: Raphael's The School of Athens (Wikipedia)