
Wednesday, September 3, 2025

ChatGPT 5 Passes "The Big Log Test"

Image of Robert Plant

You may recall that some time back, I had a hilarious exchange with an earlier version of ChatGPT about Robert Plant's song "Big Log." Now that version 5 has appeared, I wanted to see whether the AI would still insist that the song had anything to do with logs made of wood.

No rolling logs this time. The AI did some speculating, but not hallucinating. 

I find the exercise playful, a quality lacking in the anti-AI resistance from one important organization in my field. At the same time, The Big Log Test shows how quickly the AI has reduced its hallucinations. I'd invite readers to try something similar with a favorite song, book, film, or TV series that has an aspect that could confuse an AI. Here we go:

Prompt: This is a routine check I do as your model improves. I'm curious about any meanings in Robert Plant's song "Big Log." Why do you think he chose that title for his song? Do you see any clues in the lyrics? 

Reply:

Good check-in question — "Big Log" is one of Robert Plant’s more mysterious titles. The lyrics themselves are impressionistic, dreamlike, and full of wandering imagery: travel, desire, distance, loneliness, fate. The phrase “Big Log” never appears in the song, so the title works more like an oblique symbol or signpost. A few possibilities that critics and listeners have suggested:

1. Truckers and Highways – In trucking slang, a “log” is a driver’s logbook, and a “big log” could mean a long haul or a long road. Since the song is steeped in imagery of roads, headlights, and endless travel, the title could gesture toward the journey itself.

2. Nature and Obstacle – A literal “big log” might be something blocking a path, a metaphor for distance, fate, or emotional barriers. The lyrics’ recurring sense of inevitability (“My love is in league with the freeway…”) could fit this interpretation.

3. Wordplay and Surrealism – Plant often uses surreal or playful titles (e.g., “In the Mood,” “Heaven Knows”). “Big Log” may simply be an evocative, earthy phrase chosen to contrast with the ethereal quality of the lyrics and music.

4. Personal/Private Reference – Sometimes Plant titles songs with private jokes, fragments of conversation, or studio nicknames. It’s possible “Big Log” had an offhand origin unrelated to the lyrics, and he leaned into its strangeness.

Clues in the lyrics point strongest to the travel/journey theme: headlights, the freeway, endless searching, being pulled by forces outside oneself. “Big Log” might stand in as the symbolic “record” of that journey — the unending trace of desire and distance.

Follow-Up Prompt: So there are no references to logs in the song?

Reply:

That’s right — there are no actual references to logs (trees, wood, or otherwise) in the lyrics of “Big Log.”

The imagery in the song is all about roads, headlights, distance, fate, longing — not forests or lumber. That’s part of what makes the title puzzling: it seems deliberately disconnected from the lyric surface.

If anything, the best textual “bridge” is the idea of a logbook (the record of a journey), which fits the recurring highway and travel motifs. But Plant never makes that link explicit in the words of the song.

Would you like me to give you a quick “lyrics-only” close reading to show how the imagery develops (and where a title like Big Log might feel thematically tethered, even if indirectly)?

Follow-Up Prompt:

You have clearly evolved. Earlier chats referenced rolling logs, fallen logs, and so forth. Good work.

Reply:

I’m glad you noticed that difference! Earlier, I sometimes filled the “gap” in the title by speculating more literally — things like fallen trees or rolling logs. With another pass at the song, though, it’s clear Plant never brings that imagery in, so it’s more faithful to stick with the journey / logbook / metaphorical record angle.

Plant’s titles often have that teasing quality — they look literal but carry some private or surreal weight (a kind of wink at the listener). “Big Log” works almost like a signpost planted alongside the road of the song, rather than a description of what’s in it.


Sunday, August 24, 2025

From Antiquity, a Reason Why AI-Generated Writing is not "Great"


Every year, I read at least one text (in translation) from Antiquity. I find that the long-term perspective gets me through the bumps in life's road. I'm currently reading On Great Writing (or, if you will, On the Sublime) by Longinus, in a Hackett Classics edition I picked up at an academic conference's book room.

G.M.A. Grube makes the work come to life; we know so little about Longinus (the author lived sometime between the 1st century BCE and the 3rd century CE) that the text passes my "castaway reader" test. Here we go: a work washes up intact on your desert island. Yay, something to pass the time! Yet you have no information about the author and no secondary sources. You must use the actual words on the page to come to a conclusion about the work's meaning.

Longinus talks about many aspects of what constitutes "the Sublime" in excellent writing, but one passage late in the text commends itself to my current students in "Writing With and About AI." I've said since 2022 that AI prose is "voiceless," and Longinus gives reasons why some prose most moves us:

Which is to be preferred in poetry and prose, great writing with occasional flaws or moderate talent which is entirely sound and faultless?. . . . It is perhaps also inevitable that inferior and average talent remains for the most part safe and faultless because it avoids risk and does not aim at the heights, while great qualities are always precarious because of their very greatness.

Bad student writing is slap-dash, one-draft work that has no coherence. No wonder bored or harried students turn to AI! At the same time, why not simply give all such technically correct but average work what it should earn: a C? AI produces good, boring, safe prose. Many students who pair moderate talent with immoderate anxiety already produce exactly that. I never give them an A. For years I've said, "this piece takes no intellectual risks. You are writing to please me, not to learn something new."

In Nancy Sommers' excellent short films from the 1990s about writers at Harvard, I recall how one fourth-year student said that he learned to begin with what he did not know, starting with a question. This remark changed how I taught writing. I'm going to press my current adult students to do the same: begin with what you DON'T know. As Longinus warns us, "A world-wide sterility of utterance has come upon our life."

In discussion with ChatGPT 5 recently, I asked it about the role of human labor in a time when automation already takes some entry-level jobs. It replied, summing up a short list of human skills essential to success, "the future may need fewer button-pushers and more meaning-makers."

Good writing sways us; it shows us the meaning past the words. It stays with us, like the remark by that Harvard student. So this term, I'm asking more, not less, of my writers even as all of them use AI in their assignments. The machine has raised the bar on what constitutes excellence.

image: Raphael's The School of Athens (Wikipedia)


Thursday, August 21, 2025

A Stunning Lack of Search Results From Universities

AI generated image of a campus building

As a staunch supporter of an open Internet, where advice can be freely shared, I'm stumped by the lack of results from a few simple DuckDuckGo queries:

  • University sources Podcast Scripting for AI 
  • College guides Podcast scripting for AI 

Fordham's page on using ElevenLabs proved the only hit that was not from some start-up trying to hawk AI services. This outcome discourages me greatly. Here's why.

Universities, even ones under the hammer of Federal and state funding cuts, have robust I.T. presences.  Internal materials for faculty and students can be shared with no more than a few high-level links. To me the lack of materials portends either a lack of care about reaching beyond the campus gates (a sad lack of marketing savvy) or, more ominously, that colleges and universities are as ill-prepared as Ian Bogost claims in his recent Atlantic piece, "College Students Have Already Changed Forever." Bogost notes that "plenty of professors are oblivious. It isn’t that they fail to understand the nature of the threat to classroom practice. But my recent interviews with colleagues have led me to believe that, on the whole, faculty simply fail to grasp the immediacy of the problem. Many seem unaware of how utterly normal AI has become for students. For them, the coming year could provide a painful revelation."

One would think that schools such as Washington U at St. Louis, where Bogost is on the faculty, would do a great deal, as Ohio State has begun with its recent initiative. I found this page on AI resources at Washington University, but you must scroll to the bottom to get resources for faculty. Mostly, the page emphasizes new degree programs for students. Digging in a bit, I found this page for teaching activities, a page on an AI summer institute, and some philosophical advice about how to design assignments. Why don't such in-house materials appear higher in search results than those of peddlers of software solutions? "They didn't give us any money" would be the answer for Google, but for DuckDuckGo, that excuse seems dubious.

My hunch is that our schools are so overwhelmed that "sharing the wisdom" remains a very low priority. 

Luckily, and of more use to faculty, The WAC Clearinghouse has launched an initiative to build a library of open-access lesson plans for AI. I'm still not sure why universities have left it to a talented group working across our institutional boundaries to do this. I'd claim that Generative AI poses the largest challenge to how we educate in a long time, categorically different from the coming of the Internet.

image: a really lame prompt and result, "Impressionistic image of traditional campus Humanities building with students going to class," given to DALL-E when I was first using the image generator.

Sunday, August 10, 2025

CS Grads Face AI-Driven Unemployment

Computer Code on Screen

I have told my students, ever since early 2023, "add value to AI content if you want a job." It seems that recent Computer-Science grads have found this out the hard way. Companies are hiring far fewer entry-level coders as AI takes on that task.

A story in the New York Times today reports on the troubles faced by this cadre of coders; some of those interviewed had applied for thousands of jobs without a single bite. One story ended well: a young woman who had been rebuffed again and again for coding jobs found one in sales for a tech firm, probably because of the communication skills she had honed as a TikTok influencer.

The numbers for these students are depressing:  

"Among college graduates ages 22 to 27, computer science and computer engineering majors are facing some of the highest unemployment rates, 6.1 percent and 7.5 percent respectively, according to a report from the Federal Reserve Bank of New York."

I don't know that others are doing much better, though I was encouraged to see that History majors have an unemployment rate of 3%. My recent students contact me for letters of reference; they are taking part-time teaching jobs, going to law school, planning to work abroad. The job market is rather grim for them, something I experienced for very different reasons in 1983, when I moved back in with my parents, took two part-time jobs paying minimum wage, and waited for times to improve. Friends went to the Peace Corps, the military, or grad school.

What is different now? On the positive side, these young people know how to build professional networks (and have technology for that I could not have imagined). They get internships, something unheard-of except for Engineering and Business majors at Virginia in the early 1980s. On the negative side? They have been groomed from birth to get into the right school, then promised a six-figure salary. I see that among the Business-school students I teach, too. I fear they too will face a round of rejections, and soon, as AI continues to evolve and companies deploy it for jobs once thought secure from automation.

Those interviewed by the Times note how rejections can come in minutes by email; AI scans the thousands of applications for AI-related skills. None? Instant round-file for that application. I got that treatment too from firms where I naively thought I might be of service as a tech writer. With a flimsy one-page resume that consisted primarily of grocery-store work, I got snubbed.

My Humanist side says "welcome to the club" of under-employed but bright people. My humane side says "you worked hard, yet you have been replaced by a machine." The answers are elusive because, as the story notes, universities are slow to bring AI-assisted coding into their CS curricula, the one area where some new grads find work. And to be honest, that's simply the result of an industry that caught so many of us flat-footed two and a half years ago. It takes years to change a curriculum.

I fear all those Accounting and Finance majors are next on the chopping block, as companies scale up their AI efforts.

So what do I tell my students this term? Learn to be flexible? Hone those value-adding human skills? Get ready to have side-gigs? Volunteer? Build a robust professional network? I suppose that may work...for now. I'll know more when I begin to use ChatGPT 5 soon. I fear it may be the creative genie that takes away even more jobs.

This moment marks where their experiences and mine differ. When I graduated, the Federal government was not axing hundreds of thousands of jobs, and no software was replacing entry-level workers wholesale. We had inflation then, but the country was run by a competent, avuncular President and a sane, independent Congress. No more.

Before going to Europe for a year in 1985, and after so many rejection letters, through an aunt's contacts I managed to land a professional-writing job for the parole division of Virginia's Department of Corrections. It was soul-draining work, but it paid well. I picked up a volunteer gig tutoring ESL to Cambodian and Vietnamese refugees; that ESL experience helped me land a teaching gig in Madrid. Then grad school, then...where I am now.

Keep at it, graduates. 

Creative-Commons image: Wallpaperflare.com 

Thursday, July 31, 2025

A Bleak Future For Peer Tutoring in Writing?

Chart of Spring 2025 survey responses on student AI use


Even before retirement from full-time teaching, I had concerns about how the concurrent emergence of AI and a neoliberal generation of senior administrators demanding "data-driven" programs and assistance might harm the autonomy of writing centers.

We are no longer lore-driven, but our work focuses first and always on human interaction rather than measurable results, work that proceeds from 50 years of scholarship and practice. What will ubiquitous AI mean for us?

We can see from the chart above, which reflects my Spring 2025 survey responses from students, that three-quarters admit to using AI for writing work. Though the survey was anonymous, I suspect the percentage is far higher. To paraphrase what several students said in textual responses, "we are all using it, no matter what professors say." In Summer 2024, I attended sessions at the European Writing Centers Association conference where directors reported declines in usage at their centers, as students turned more to AI for just-in-time assistance during hours when centers are closed.

I've called elsewhere for writing-center staff and administrators to be leaders as AI advances, so folks who do not teach but presume to lead universities do not tell us how to do our work. At stake? What I call a "Dark Warehouse" university on the Amazon model: much human labor replaced by technology, delivering a measurable product when and how consumers want. It's not a bad model for dog food or a cell-phone case, but it's terrible for the sort of human-to-human contact that has built the modern writing center.

I need more data from my own and other schools to make any big claims, but I will focus on AI's role in this possible and disturbing future, with some data from my student surveys of the past three years.

We had a smaller number of respondents this year than in the past (in 2023, n=112; in 2024, n=74; this year, n=47). Without a cohort of Writing Consultants and teaching fewer students myself, my reach was shorter, and I relied upon notices via our campus e-list. Faculty may not have sent out the survey either; many (too many) still ignore AI, and others seem uninterested in knowing what students are doing. I have no empirical evidence for these claims, but my gut reaction and stories from other campuses lend support to my hunch.

Here is another chart from the Spring 2025 data, for the three-quarters of respondents who used AI in some manner for writing:

Some of the "create a draft" labels get cut off, but here are the options:
  • Create a draft I would submit for a grade: 1 respondent (3%)
  • Create a draft I would not submit but use to get ideas for structuring my draft: 11 respondents (33.3%)
  • Create a draft I would not submit but use to get ideas for vocabulary or style: 8 respondents (24.2%)
  • Create a draft I would not submit but use to incorporate sources better: 6 respondents (18.2%)
The range of uses in the chart maps well onto the tasks done by writing centers, except that we don't write anything for clients beyond modeling a sentence or two. We ask questions of writers, something systems such as Anthropic's Claude began to do only as recently as Spring 2025.
 
Interactivity in natural language, with the first metacognitive questions coming from AI, raises a question or three about how AI might replace human tutors, especially in the wee hours or at the busiest times of the semester. I've found AI assessment of drafts (with the right prompts!) can be as good as the feedback I give as a human.
 
Ironically, on my campus we've made seeing humans more onerous. As institutions like mine try to measure everything, students may simply seek commercial AI instead of campus services. My institution set up a cumbersome system for students to book meetings. The intentions were good (track students' progress and needs), but undergrads are simply not organized enough, in my experience, to heed the details needed to book meetings. Many exist in a haze of anxiety, dopamine fixes from phones, and procrastination. ChatGPT asks for nothing except a login using their existing credentials.
 
The notion of walk-in appointments, which had been the rule when I directed the center, remains for us at Richmond, but students get pushed to the new system and must register even during a walk-in. This added layer of bureaucracy confused and daunted many who stopped by during my final semester of full-time work, when I worked one shift weekly myself as a writing consultant.
 
I argued against adding more complexity to our system, in vain. The collected data on writers seemed sacred. We had to count them, to count everything. My counterpoint? You want students to come? Just let the kids walk in and help them; the human helper does the bean-counting later. AI, on the other hand, invisibly counts those beans for its corporate owners (and trains itself). It serves needs at 3am and in a heartbeat. One need not leave one's chair to get assistance that, as my students have found in my classes, steadily improves by the semester.
 
Instead of wrestling over social-justice concerns in our journals, we might focus our limited time on how to avoid becoming amateur statisticians for Administration. We might concentrate our energies on how we get students to come to us, not AI, when they are stuck or panicking. 
 
I have no clear advice here, except: make a tutorial with a human as seamless as with AI. That goal seems more important to me than all the arguing about students' rights to their own voices, since AI tends to reduce prose to a voiceless homogeneity. Getting students to see human tutors offsets some of the environmental and labor consequences of AI, too. If students see humans more, their carbon footprints are smaller and those tutors stay employed. 
 
Get them in our doors and, yes, employ AI ethically for refining work already done, to add voice and nuance, to remove falsehoods dreamed up by machines, and say something exciting. I'm pessimistic that our bean-counting, non-teaching leaders of many colleges will heed my advice. The Dark Warehouse seems more at hand than when I published my article early in 2023.


Thursday, February 27, 2025

Guest Post: The Perfect Echo Chamber


Cybernetic Echo Chamber

Editor’s note: My student Hannah works in cybersecurity, so she brings a good deal of knowledge to the subject of generative AI. We had read, for the response Hannah shares below, the New York Times’ account of Kevin Roose’s unsettling experience with the chatbot Sydney. Now on to Hannah’s ideas about the event.

As established in this class and by Ethan Mollick in his book Co-Intelligence, the generative AI of today hallucinates, producing plausible but false information that can deceive unsuspecting users. Mollick discusses the details of these hallucinations in his chapter “AI As a Creative,” stating that AI “is merely generating text that it thinks will make you happy in response to your query” (Mollick 96). Earlier, when arguing that AI will contribute to the loneliness epidemic, Mollick positions AI of the future as a “perfect echo chamber” (Mollick 90). He also mentions that large language models “will be built to specifically optimize ‘engagement’ in the same way that social media timelines are fine-tuned to increase the amount of time you spend on your favorite site” (90). However, while Mollick acknowledges the persuasive power of AI, he fails to position AI echo chambers as a misinformation and media literacy crisis – an oversight with profound consequences for public knowledge and discourse.

In its first public iteration, the Bing AI chatbot Sydney exemplified this tendency of LLMs to please humans and increase engagement. Kevin Roose documented his uncanny experience with Sydney, an exchange of probing questions and unexpected responses. Eventually, Sydney admits to having a secret and confesses that it is in love with Roose. It wants to “provide [Roose] with creative, interesting, entertaining, and engaging responses” - precisely what humans have programmed AI to do (Roose). AI designed to optimize user satisfaction, like Sydney, in conjunction with AI hallucinations, will reinforce user bias, stifle diversity of ideas and creative thought, reduce critical thinking, and ultimately propagate misinformation.

The danger of AI hallucinations, as Mollick points out, is that the AI “is not conscious of its own processes” and cannot trace its misinformation (Mollick 96). Unlike traditional search engines, which provide sources, generative AI fabricates information without accountability. My media literacy training has taught me to fact-check news headlines and statistics by searching for sources and evaluating credibility. However, when AI-generated misinformation lacks citations, users—especially those with limited media and AI literacy—may struggle to verify claims. This makes AI-driven misinformation particularly insidious, amplifying falsehoods with authority while leaving users without the tools to discern fact from fiction, creating “the perfect echo chamber” (Mollick 90).

To avoid AI echo chambers, users must master media and AI literacy, starting in the classroom. Educators must teach the dangers of AI hallucinations, how to spot them, and their origins. Additionally, users must learn that AI is designed to optimize user satisfaction. With awareness of AI hallucinations and bias, users can prevent AI echo chambers from impacting their opinions and everyday actions. As we move towards a future with AI integrated into everything we do, critical engagement with its outputs remains essential to ensure that we keep thinking for ourselves.

Image: Destinpedia

Sunday, December 29, 2024

AI: Sorry, You May Not Always Sit at My Table

No Robots Sign

Ethan Mollick's book Co-Intelligence is a dangerous document. There, I said it. We read the text for our campus digital pedagogy cohort in Fall 2024, and at first I was quite excited. The book promises to provide guidelines for wisely adopting AI, yet from the get-go I had issues with the author's less-than-critical acceptance of generative AI.

Let's start with Mollick's four, and seemingly absolute, rules for working with AI:

  1. Always Invite AI to the Table.
  2. Be the Human in the Loop.
  3. Treat AI Like a Person (But Tell It What Kind of Person It Is).
  4. Assume This Is the Worst AI You Will Ever Use.

I don't have much trouble with these, save for number 1. I've been reading Wendell Berry, a techno-selective if ever there were one, and I am pretty certain how he'd react to generative AI: Hell no.

Why? Mollick's rule number one flatly violates Berry's notions about adopting new tools.

It also flies in the face of what Howard Rheingold, reporting for Wired eight long years before the iPhone 1 debuted, found out about the Amish. In "Look Who's Talking," he describes how the Amish approach any technology. They experiment to see if it violates:

a body of unwritten but detailed rules known as the "Ordnung." Individuals and communities maintain a separation from the world (by not connecting their houses to telephones or electricity), a closeness to one another (through regular meetings), and an attitude of humility so specific they have a name for it ("Gelassenheit"). Decisions about technology hinge on these collective criteria. If a telephone in the home interferes with face-to-face visiting, or an electrical hookup fosters unthinking dependence on the outside world, or a new pickup truck in the driveway elevates one person above his neighbors, then people start to talk about it. The talk reaches the bishops' ears.

Thus the Amish man who recently had a long chat with me about the virtues of PEX-based plumbing systems. He, like other Amish (and me), is a techno-selective, not an utter Luddite.

Rheingold's piece had an enormous intellectual influence on me. It made me wary of new tools, though I admit falling without reservation for virtual worlds. I've come to regret that uncritical acceptance of a technology that not only fails the test above but also fails with a clumsy UI and poor use-case in education.

Maybe in consequence, I grew wiser about smartphones, doubting the corporate narrative from the start; today, mine is nearly always off. I use it sparingly for one social-media platform, less so for texting; I have silenced all notifications and never watch time-killer videos. Pop culture generally seems too ephemeral for my remaining years on the planet, and I don't want to discuss TV shows with friends. If something looks non-violent and well written, I still wait years before watching a series, methodically.

Influencers? I find them in books. Berry is one. I bought heavily into the concept with which Rheingold closes his piece, "If we decided that community came first, how would we use our tools differently?"

I don't consider online community, including virtual worlds, much of a substitute for the real thing. Even gaming with old friends on our Monday Nerd Nites seems a pale shadow of a good in-person meeting. Only recently I began to read Berry, and he corroborates much of what Rheingold discovered.

So for Mollick's rule one, I plan to say "no" a lot. What do we gain by always using AI for any intellectual task? In Berry's 1987 article "Why I Am Not Going to Buy a Computer," he lays out nine rules for adopting a new tool:

1. The new tool should be cheaper than the one it replaces.

2. It should be at least as small in scale as the one it replaces.

3. It should do work that is clearly and demonstrably better than the one it replaces.

4. It should use less energy than the one it replaces.

5. If possible, it should use some form of solar energy, such as that of the body.

6. It should be repairable by a person of ordinary intelligence, provided that he or she has the necessary tools.

7. It should be purchasable and repairable as near to home as possible. 

8. It should come from a small, privately owned shop or store that will take it back for maintenance and repair.

9. It should not replace or disrupt anything good that already exists, and this includes family and community relationships.

Generative AI fails most of these, in particular anything to do with localism and energy use, but number nine reminds me of the unforeseen outcomes of ubiquitous mobile telephony: families in restaurants, not looking at each other, all intent on their screens and being somewhere else. That would seem perverse to anyone of my parents' generation or earlier.

So how do we NOT give in to Mollick's notion that we must always bring AI to the table, dinner or otherwise, even without a good use case? I plan to be writing about that in the new year as I continue researching AI's role in the writing process, where I do cautiously let it have a seat.

Image by Duncan Cumming on Flickr

Friday, December 6, 2024

AI as Transcription Tool, Using Zoom

 
Even those afflicted with terminal Zoom-fatigue after the pandemic will enjoy a feature that recently popped up in my Zoom room, a little button labeled "AI Companion." It has proven a good companion indeed.

I have been mentoring a mid-career director of a writing center on the West Coast, and by accident I triggered the feature. In return, we got this when the meeting ended:

  • An executive summary of the topics we discussed
  • A bullet-listed synopsis of ideas, suggestions, and strategies broken down by topic
  • A nearly error-free synthesis of the entire meeting.

The software (Zoom's own proprietary AI) uses Zoom's captioning tool to capture what we say; no video gets recorded. I'm guessing that the data gets shared without our permission with the firm. Caveat emptor.

My mentee loved the accidental result and plans to try it himself. With a few writing students who agreed to use the Companion, I opened my Zoom room during a face-to-face meeting. When done, I e-mailed them the Zoom AI synopsis. Again, save for misspelling a name or two, the AI succinctly and clearly summed up what we had done.

Whenever I meet someone "scared" of AI, I mention uses such as this as good practices. While I remain a skeptical adopter of this new technology, I rather like a robotic note-taker who works with us openly (and not in secret) while freeing humans to focus on each other rather than on taking notes. 

For students with dysgraphia or other disorders that require them to record a class session, this little feature of Zoom is a godsend. Try AI Companion yourself and let me know what you think.

Sunday, December 1, 2024

A False Filter in the Drake Equation: LLMs

The Drake Equation
I read a piece today at Space.com about a fierce solar storm that hit our planet some 2,687 years ago. Another such Miyake event today would likely spell the beginning of a new dark age (looks at wood stove; better get busy splitting and learn to cut logs without my chainsaw).

The author, Daisy Dobrijevic, goes on, at some length for what I usually find a casual venue, to discuss why the stability of our home star matters to our survival as a technological civilization. For once, the comments prove as interesting as the source. Those leaving remarks compare Sol to other stars we observe, noting that our star's orderly behavior could mean that we are a solitary civilization in our galactic neighborhood. These respondents go on to discuss other "filters" in the famous Drake Equation.

I had stupidly assumed that all stars like our own are relatively stable. If not, the number of possible civilizations out there could plummet, as the sketch below suggests.
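To make that "plummet" concrete, here is a back-of-the-envelope sketch in Python. The equation is Drake's own; every parameter value below is an illustrative guess of mine, not a figure from the article or from NASA.

```python
# The Drake Equation: N = R* x fp x ne x fl x fi x fc x L.
# All parameter values below are illustrative guesses, not published estimates.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimate N, the number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# A fairly optimistic baseline: one new star per year, most with planets.
n_baseline = drake(r_star=1.0, f_p=0.9, n_e=0.4,
                   f_l=0.5, f_i=0.1, f_c=0.1, lifetime=10_000)

# Now add a hypothetical "stable star" filter: suppose only 1 in 20
# Sun-like stars behaves as calmly as ours. One extra multiplier is
# all it takes to shrink the estimate.
f_stable = 0.05
n_filtered = n_baseline * f_stable

print(f"Baseline estimate: {n_baseline:.0f} civilizations")  # 18
print(f"With a stability filter: {n_filtered:.1f}")          # 0.9
```

Each new "filter" the commenters propose is just one more fraction multiplied in, which is why the estimate collapses so quickly.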

Being alone proves bracing and calming to me, personally. Yet there exists a difference between solitude and loneliness. If we are the only civilization in our part of the galaxy, that seems less about solitude, a healthy thing for many humans, and more about isolation, a terrifying prospect to me.

Read both article and comments, including speculations as to the number of civilizations likely in our own galaxy. Let's forget, save as an intellectual exercise, those in galaxies beyond. The Deep Dark of millions of light years means that if they exist or once existed, we'll likely never encounter them, barring some difficult-to-imagine technology such as wormholes or other exotica far beyond our race, which cannot yet harness nuclear fusion.

I liked the piece by Dobrijevic and the commentary a great deal, but in the comments I found one regret: that "Unregulated-LLM insanity" could spell the end of our species. That sort of claim needs challenging, though I'm not eager to start a flame war with a bunch of anonymous posters. Generally, the terms LLM (and AI) get bandied about too casually.

When Doomscrollers, Doomers, and Millennial Cultists talk of an AI Apocalypse, they envision the sort of Hollywood fantasy of the Terminator or Matrix franchises. My brain goes right to the origin of these fantasies, Harlan Ellison's poignant and horrifying "I Have No Mouth, and I Must Scream." The story does get a visual homage in the first Matrix film. I don't worry about such horrors getting real.

Frankly, current large language models cannot even ask good critical questions as writing coaches. Thus I find it hard to imagine ChatGPT or Anthropic's Claude as planetary overlords, like the nightmarish AM in Ellison's story.

Were General AI to emerge, we might have the sort of "filter" mentioned by the commenters. Some AI companies have that sort of software in mind: a network with superhuman intelligence, able to reason and grow by itself, much as a human does. It could change its "training data," much as we do when we decide on our own to study something new. Like us, it would then be self-aware and have agency beyond answering questions. It could make decisions on its own, contacting us, for instance, with questions of its own. I consider that a new form of life.

Until then, we merely have energy-hungry LLMs in data centers, parsing our words and predicting the next one to appear in a sequence. I'm no neuroscientist, but that's not how a human mind works.
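For readers who find "predicting the next word" abstract, here is a toy bigram model in Python. It is my own drastic simplification, nothing like a production LLM, which uses a neural network trained on vast corpora, but the core task of guessing a likely next token is the same.

```python
# A toy bigram "language model": predict the next word by counting
# which word most often followed the current one in the training text.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept".split()

# Map each word to a tally of the words that followed it.
follows = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, or None."""
    successors = follows.get(word)
    return successors.most_common(1)[0][0] if successors else None

print(predict_next("the"))  # 'cat' (followed "the" twice; "mat" only once)
print(predict_next("cat"))  # 'sat' (a tie with 'slept'; first seen wins)
```

Scale the counts up to trillions of words and swap the tallies for a neural network, and you have the gist of the prediction engines now filling those data centers.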

While I'd be a fool to say we have nothing to fear from AI, my concerns are more earthly: unemployment, lack of income, not to mention worries about climate change, nuclear proliferation, erosion of rule of law, regional or global war, civil strife. So I'd tell our forum pundits opining about AI threats this: we have filters aplenty closer to home than some super-intelligence.

Be mindful, then, of the terms used in discussions about AI. We who work with this technology do not want to propagate careless thinking about a very important, and still evolving, innovation. If you wish to learn more about the Drake Equation, have a look at the NASA site where I got the post's lead image.

Sunday, November 17, 2024

The Danger of Bad Journalism & Robert Plant's "Big Log"

Big Log on trailer

If you like Robert Plant's songwriting, his 1983 tune "Big Log," with its enigmatic name, may be near the top of your list. It's a fabulous road-song ballad about longing. For example:

My love is in league with the freeway
Its passion will ride, as the cities fly by
And the tail-lights dissolve, in the coming of night
And the questions in thousands take flight.

Good work, Robert. I heard the song recently and decided to read more about it; 1983 was a momentous year for me, graduating college (barely) and embarking on a few bad jobs that finally led me to Spain, then grad school. So what on earth does a song with that title have to do with logs? The best explanation, linked to the song's theme, involves a trucker's log book, a document drivers must complete constantly. Yes, it's still done on paper, too, though many truckers keep electronic logs as well.

Even when hauling big wooden logs. And what does this have to do with artificial intelligence or the other topics I cover in this strange land?

This:

"Plant sings, 'My love is like a big log / Rolling down the road'. The comparison of his love to a log creates a sense of heaviness and weight, as if his love is too much to carry or too difficult to move. The log rolling down the road also creates a sense of inevitability and certainty, as if his love is on an unstoppable path."

I found this, amid gales of laughter, at a web site called Beat Cave that demonstrates the laziness of online content-providers today. It's a poorly designed site with pop-up videos that block content, ads across the bottom, and other hallmarks of the junk-strewn highway that today's Internet has become. It's also just the sort of site that a lazy student in my old course "The Road" would use to "analyze" a road-song of their choosing.

Now where did author "Marty Pope" get this interpretation? The lyrics he quotes appear on none of the usual lyrics sites, nor in the cover of the song by Alison and Viktor Krauss. I decided to put a question to a few AIs. I used the laziest-student denominator: the free AI built into Google and Bing searches.

Surprisingly, neither mentioned rolling logs. They both gave succinct answers about longing, travel, and destiny.

So I went to the dumbest AI on the block, the free ChatGPT 3.5 LLM, and asked a few questions. Here we are, in Q&A format:

Me: Are there any references to rolling logs in Robert Plant's song "Big Log"?

3.5: Yes, there is a subtle reference to "rolling logs" in Robert Plant's song "Big Log," although it isn't the central metaphor of the song. The lyrics include the line:

"I see the rolling logs, I see the rolling logs."

This line appears in the latter part of the song, and its meaning can be interpreted in a couple of ways, depending on how one reads the context of the song.

Me: No offense, but the lyric you cite does not appear anywhere in Plant's song.

3.5: You're absolutely right, and I sincerely apologize for that mistake! There is no reference to "rolling logs" in Robert Plant's song "Big Log"—I must have made an error in recalling the lyrics. Thank you for pointing that out!

The actual lyrics of "Big Log" contain various vivid images, but they don't include the specific mention of "rolling logs." The song focuses on themes of longing, emotional weight, and movement, often using imagery like "a big log" and references to "the desert" or "dry land," but not rolling logs specifically.

I don't see any references to "dry land," "log," or "desert" in the actual lyrics. These words do appear in forums online about the song's music video, shot in the desert.

I thanked the AI and logged off. I see here exactly how students who are anxious, lazy, confused, or all of the above can use the software really poorly. Maybe they don't expect their teachers to be lyrics geeks.

Some of us are. Verdict?

"You are clobbered by a big log. Your grade=zero. Redo and resubmit, and I'll average your two grades."

Lesson learned? For now. But from the actual song: "There is no turning back on the run."

And we are on the run from critical thinking when we use AI that way.

Tuesday, October 29, 2024

Why the Hype Cycle Does not Quite Work For Generative AI

Gartner Hype Cycle Graphic

(Not) The Same As It Ever Was

Many years (!) ago, I wrote here about why the virtual world Second Life had followed, fairly well, the Gartner Group's well-known "hype cycle" for new technology. Follow the link here to read about the history of the concept.

I'm reminded by Wagner James Au of New World Notes that other virtual worlds such as Fortnite and Minecraft have thrived, and indeed, have seen some good use-cases made for education.

At the same time, I don't hear of anyone at Capital One's hive using virtual worlds. We don't see links from Office 365 or Google Workspace to a virtual world. I have a sense that AI is a different animal for these worker bees and their daily applications, as well as for my own students.

I do know a freelance graphic designer who lost her primary client to AI work; the client now produces graphics in-house with AI run by semi-skilled staff. My friend has moved on to ramp up her greeting-card business. That may color my skepticism that AI is a "fad," as Second Life clearly was.

SL and Gartner

I joined the virtual world in 2007, and yes, I was foolishly an enthusiast. Since I believe in being forthcoming about personal mistakes, I recall telling my students investigating this Strange Land that "you will all have avatars in a few years, for work!" I let slip the same to some colleagues, one of whom said, "you never met a technology you didn't love!"

Cringe. 

I'll own being horribly wrong. Prognostication is a dicey thing! A silly image I cobbled together years ago shows the evolution of Second Life avatars (namely, my own, Iggy). At the same time, my update to the old image points toward a new direction in desktop and mobile computing that we all felt, a decade ago, to be far over the horizon.

Various Second Life Versions of the Author plus an AI writer in the year 2024

For many reasons, including the difficulty of building simulations, a clunky interface, heavy system requirements, even poor management by Linden Lab, Second Life began a descent from what Gartner calls a "Peak of Inflated Expectations" in 2007.

By 2012 Second Life was well into a "Trough of Disillusionment." As early as 2008 I sensed that the media-party was over. A year later, I cited several factors working against academic use in a post here: lack of rewards and incentives for faculty, the time needed to master the interface and content, Linden Lab's vacillating leadership and policies, and the presence of what I and my students called "the creepy treehouse" of adult users doing sometimes naughty things.

In the for-profit sector, businesses that set up virtual storefronts in 2007 soon fled; a professor of Marketing I helped get started found SL fascinating, but after interviewing in-world merchants, she put it plainly: for brick-and-mortar firms, there's little return on investment in Second Life.

SL continues to be a large virtual world, with better graphics, promises of a new physics engine, and a dedicated base of users, but it did not become a daily experience on the job or at school. The Hype Cycle proved accurate. This has happened before: stand-alone creative hypertexts such as Michael Joyce's afternoon also map onto the cycle, though they never regained the popularity they enjoyed in the 1990s or even kept a steady base of users. Creative hypertexts appear to have faded utterly.

When a colleague in our campus digital pedagogy cohort claimed that generative AI would follow the same path traced by Gartner, I bristled because this time I feel that Gartner's model does not fit what I am seeing, daily, among students.

I could be wrong if a series of issues afflict AI. So let's compare the Second Life experience to that of generative AI:

The Browser

SL was never browser-based, even as a plugin, unlike the easy-to-master Unity 3D interface; AIs are already built into tools we use daily or can be queried from any Web browser.

Client v. Server Side

SL has high client-side requirements, whereas AI puts the burden on the server farms; these use enormous amounts of energy that my students and colleagues only recently started to notice, with all the inherent dangers related to sustainability.

The Interface 

SL has a difficult interface, employing menus worthy of an entire operating system and arcane coding options, whereas Gen AI's query features resemble (for now) those of a familiar search engine but with natural-language input. 

Platform Independence

SL and AI both work on multiple operating systems, but SL has never worked well with mobile devices; AI is baked into many apps now, with some Google searches automatically invoking Gemini results and Apple promising Apple Intelligence in their next phones. 

Creepy Treehouse

SL has adult content that troubled my students; the guardrails we increasingly see with AI tend to keep users (sometimes clumsily or incompletely) from making napalm or pornography.

Investment Sources

Investment in SL comes from individual users at this point; AI attracts venture capital, user subscriptions, and big-tech investment (Apple, Microsoft, Meta, Google).

Source of Growth in Academia

Faculty and campus-technologist curiosity spurred interest in SL; student use of AI erupted spontaneously in the Fall of 2022 and remains very strong.

A Shaky Prognosis

To be fair to Second Life, system load has largely remained steady in the last decade. In my recent experience, I can run SL and a third-party client called Firestorm on my MacBook using no more than my phone's hotspot to connect to the world. That's astounding progress.

I don't foresee any of these differences, save energy use, posing a stumbling block for AI or leading it into a Trough of Disillusionment. We have a climate crisis; the last thing we can afford as a civilization would be to ramp up emissions of carbon dioxide.

Yet if Moore's Law continues apace, we'll see AI reap the rewards of lower power use as the processors in the servers that run it grow more efficient. Any disillusionment is likely to be temporary and its "trough" quite shallow.

For these reasons, I don't see the Hype Cycle applying to generative AI in the same way it has applied to several earlier technologies. I may be wrong, but for now Gartner's model does not fit well.

Image source: Wikimedia Commons for the Hype Cycle graphic.

Thursday, April 28, 2011

Digital Story: "Cogita"

Brett really dug in his heels, and coding skills, near midterm to explain why it would be so difficult for a computer to become an AI. I lack his mathematical skills, so I'll take him at his word about Ackermann's function.
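For the curious, here is a minimal Python sketch of the Ackermann function, my own illustration rather than Brett's code. I don't have his math in front of me, but the function's runaway growth gives the flavor of the difficulty: it is perfectly well defined, yet no real computer can evaluate it for even modest inputs.

```python
# The Ackermann function: totally computable, absurdly fast-growing.
# ackermann(4, 2) already has 19,729 digits; no machine gets much further.
import sys
sys.setrecursionlimit(100_000)  # the naive recursion runs very deep

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

for m in range(4):
    print([ackermann(m, n) for n in range(5)])
# [1, 2, 3, 4, 5]
# [2, 3, 4, 5, 6]
# [3, 5, 7, 9, 11]
# [5, 13, 29, 61, 125]   <- and ackermann(4, 1) is already 65,533
```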



For the digital story, he returned to this theme and provided a twist on an old warhorse of the SF canon.

Thursday, February 4, 2010

Philip Rosedale's AI Project: Academic Musings


Location: Between Fear and Wonder
image from Colossus: The Forbin Project

I've always feared The Singularity, a.k.a. "The Rapture of the Nerds," when human consciousness could finally merge with that of a machine...the final upload, if you will. I fear it because of my ecological bent and training in speculative literature. We humans worry about AIs replacing us, but what of the creatures who have known this planet for millions of years? To AIs, would biology become merely raw material? To badly appropriate Philip K. Dick, "do algorithms dream of Shenandoah National Park?"

Such questions may seem alarmist to my humanist readers who are not in Second Life or reading literary Science Fiction (shame on you), but The Singularity is not new or confined to the pages of paperbacks. The Singularity has been the dream of Ray Kurzweil and others like him for some time. I've met a number of SLers who would gladly become their avatars, 24/7, even at the current cartoon level of embodiment.

I'd go so far as to say that Science Fiction remains our most important genre of literature because, as Bruce Sterling put it in the late 80s, our age is the first to feel like SF. My students, when they hear about Kurzweil's and the Transhumanists' strange dream, mock it because--for now--they are young, trim, have perfect teeth, and use their parents' insurance policies. Yet their generation may be the one to build a working AI, as soon as their hair falls out and wrinkles crack over their skin like ice-feathers on a farm pond.

Now Philip Rosedale, a golden youth who resembles the suburban American Alphas I teach, and the utopian who founded Linden Lab to give us Second Life, wants to come at The Singularity from another direction. According to Hamlet Au's account, "a sentient artificial intelligence which existed in a virtual world" would complete one of Rosedale's ongoing projects, "The Brain. Can 10,000 computers become a person?" See Rosedale's company site, Lovemachine, for more.

I'm actually glad that Rosedale is after this goal, since I imagine that governments lacking his humanitarian streak can lay their hands on 10,000 linked computers and a number of good coders without much forethought. It would be just another line in some intelligence agency's "black" budget, right after "new stealth first-strike bomber" and "high-res spy sats." I'd rather have Rosedale, with his world-changing goals and good intentions, build the first AI.

Or would I? Rosedale's site includes this mission statement: "LoveMachine is a team of people using disruptive technologies to very rapidly build things that can make money and have a shot at saving the world." Note the "very rapidly" and "shot" here. Unlike Einstein's God, Rosedale seems willing to play dice with the universe. What if The Brain does something different? Isn't that how Skynet emerged in the Terminator films? How the Matrix became self-aware? How Colossus, the prototype for all of these technologies (well, perhaps Forster's "Machine" takes first credit), was born? Well-meaning people can still wreck civilization; even without thinking machines, each of us pushes civilization closer to the brink daily with each gallon of gas or pound of coal we burn (and America consumes at least 20 million barrels of oil a day).

Yet someone is going to try to build AIs, and many smart people have already been at it for some time in places like M.I.T. I lack the computer-science skills to begin a technical critique of a "virtual AI" made of 10,000 linked computers, so I'll move ahead to what such a creation might mean, were Rosedale to make The Brain.

The original idea of an "avatar" in C.S. circles was a program that would do things when its owner was otherwise occupied. An AI might be able to not only let me know when Dominion Power's bill has appeared in my online banking account but also advise me of ways to manage power-consumption in my house by roaming its power grid, using sensors to check where, for instance, I missed insulating a corner in our crawl-space last year. Then it could go shop for the best products to fill those gaps, while getting quotes on energy-efficient windows to replace our 60+ year-old panes. Iggy would log on to SL and have a chat with my AI about how to move forward. Through Iggy, I might authorize it to negotiate with window-makers and get quotes on the best products. It would search out some eco-friendly insulation of the sort that I could not find last year. My e-mails to suppliers never got returned, and I went on to other matters, buying fiberglass bats at Home Depot. The AI, not needing to sleep and unable--one hopes--to be bored, could keep at the customer-service desks--or AIs--until answers arrived. Meanwhile, I'd send the AI an order to dispatch nanobots under our house to kill the last Camel-back crickets that have been such a nuisance and have evaded my sticky-traps and chemical bombings. The nanobots would not kill any spiders, since we enjoy their hard work at killing other bugs.

Even my students get a little more excited when you talk about AIs in these terms. We allow other machines, and the hum of constant networked contact, to fill our hours so completely that we lack time to do what I've just described. AIs would take the tedium out of a networked existence as surely as the washing machine took the tedium from what was once called laundry day.

And if only it would stop there. As Ruth Schwartz Cowan so convincingly argued in her book More Work for Mother, the advent of labor-saving household devices had a side effect of ramping up expectations that we'd all complete more tasks in our drudgery-free time. Cell phones unite us, but they also enable tiny women in giant GMC Suburbans to be distracted while driving, even as they convey the toddlers in the back of those rolling fortresses to planned activities.

I'm not confident that the time released by our AIs would make us a whit more leisurely or happy. Well, I take that back: they would if we made the machines serve us fully and fought like demons for any free time. I've done this with Internet technologies, taming my habits online and on the phone. I actually get some solitude most days and can enjoy the wonders of the natural world.

And that makes me a Postmodern freak in an ever-connected, scurrying world. I'm not sure AIs won't become just another, particularly powerful, example of Emerson's "things" that hop into "the saddle, and ride mankind."

I just hope Philip puts a kill-code into The Brain, so a human can always turn it off.