Friday, May 30, 2025

A Good Discussion of Story with ChatGPT 4

Robot reading a book

In my May-term course "Reading Science Fiction and Fantasy," two stories present radically different ideas about how to engineer AI. As part of my own AI research for a forthcoming book about writing and AI, I put some ethical questions to ChatGPT. I'm still using the free version 4, though I'm considering a subscription, since its answers have become much better and its vocabulary now reads like that of a thoughtful colleague.

Here's an exchange from my summer class. It may provide a way forward for some of us trying to wean students from copy/paste work with AI. Instead, I modeled an idea-building exercise, one of many I'm using, to show writers how AI can unblock their process early on. Students have trouble with deep reading and re-reading. I believe that such exercises might show them a hard truth of US, and probably global, capitalism: after graduation, I'd hire ChatGPT, not them, to do close analysis. It shocks these sociable, amiable youngsters when I warn them, "Add human value to what it can do, or you won't find work in your chosen field."

At a recent concert, I chatted with an attorney seated at our table. She mentioned how many young paralegals send her unreadable copy. I told her, "Tell them to use AI to fix it up, or you'll fire the lot of them."

That does not sound like me, but such writers are lazy and under-trained. Putting my feet into a CEO's expensive loafers, that's the hard choice I'd make: let them go, then hire a few Humanities majors with enough pre-law and pre-business coursework and AI experience to do the work of the dozens I dismissed, having the newcomers revise the responses our AI generates.

Joe: My students are reading two short stories about AI, Ken Liu's "The Algorithms for Love" and Elizabeth Bear's "Tideline." Have you read either these texts or a reliable summary of their plots? Let me know that first, and then I'll tailor a question to you based upon your level of familiarity.

ChatGPT 4: Yes, I'm familiar with both Ken Liu's "The Algorithms for Love" and Elizabeth Bear's "Tideline"—I've read the texts and have access to reliable analyses and summaries of each. I can help discuss their plots, themes (like identity, memory, empathy, and human/machine boundaries), and how they portray artificial intelligence in emotionally resonant or ethically complex ways.

Go ahead with your question—I’m ready for it!

Joe: Thank you. On the surface, they appear to me to present contrary views of how AI might work, with Bear emphasizing the possibility of nobility and compassion in an AGI-driven robot and Liu's character Elena going mad because she thinks human brains are no different from her AI-driven dolls. In your opinion as an AI, can you give me six lessons (no more than 100 words each) about how each story suggests we humans should engineer an AGI?

ChatGPT 4: Absolutely—here are six compact lessons from each story that suggest how humans should or should not approach engineering AGI, based on the implicit warnings or ideals embedded in the narratives:

Lessons from "Tideline" (Elizabeth Bear)

1. Design for Empathy and Storytelling: Chalcedony, the war machine, finds meaning in memory, storytelling, and care. Bear suggests AGI should be built to recognize and value emotional legacies—not just logic or victory. Empathy, not dominance, can define machine intelligence.

2. Limit Autonomy for Violence, Expand Autonomy for Care: Despite being designed for war, Chalcedony transcends her function through self-directed nurturing. Bear implies that AGI should be designed to choose paths of compassion over conflict—if autonomy is granted, it must include moral imagination.

3. Create AGI Capable of Mourning and Legacy: Chalcedony mourns her human companions and wants to leave a legacy. This points toward designing AGI with awareness of loss and temporal continuity—qualities central to meaningful ethical action.

4. Incorporate Constraints That Encourage Moral Growth: Chalcedony is damaged and stranded, but these constraints push her toward morally rich behavior. Bear suggests AGI might evolve ethically not in spite of limitations, but because of them.

5. Value Hybrid Relationships Between AGI and Humans: The bond between Chalcedony and the boy is intergenerational and cross-species. Designing AGI to participate in reciprocal, nurturing relationships could be vital to coexistence.

6. Teach AGI Through Cultural and Personal Narrative: Chalcedony's method of shaping the boy through stories suggests AGI should be grounded in cultural knowledge, not just data. Narrative may be a more human-compatible training tool than raw computation.

Lessons from "The Algorithms for Love" (Ken Liu)

1. Avoid Over-Reduction of Consciousness: Elena tries to reduce human minds to code, believing there's no qualitative difference between dolls and people. Liu warns that reducing consciousness to computation risks collapsing the boundaries of personhood and reality.

2. Preserve Epistemological Humility: Elena loses her grip on reality when she assumes full understanding of the human mind is possible. AGI engineering should be grounded in humility about what we don't know about cognition and emotion.

3. Recognize That Simulation Is Not Identity: Elena's dolls can simulate human behaviors but lack subjectivity. Liu cautions that mimicking thought or feeling is not equivalent to being—an AGI might fake empathy without ever experiencing it.

4. Don't Project Human Pathologies onto AGI: Elena's obsession, paranoia, and eventual psychosis are mirrored in her AI. Liu's story suggests that engineers' beliefs, especially reductive or deterministic ones, can dangerously shape AGI behavior.

5. Beware the Infinite Mirror of Self-Modeling: Elena creates self-modeling dolls that eventually model her own unraveling. Recursive self-awareness without grounding may produce instability. AGI designs should avoid infinite regress in favor of anchored self-concepts.

6. Design AGI to Understand—but Not Imitate—Human Fragility: Elena's desire to replicate human imperfection backfires. AGI might need to recognize human irrationality, trauma, or fragility—but not necessarily mirror it. Liu's story is a warning about creating minds too much like our own.

Let me know if you’d like these reordered, expanded into a handout, or reframed in terms of pedagogy or design principles!

Joe: Wow. I love those answers. Point 1 about Liu aligns with what I thought in my wetware brain while re-reading the story today. Many thanks. I'll share this with the class today.

ChatGPT 4: I'm really glad to hear that—it sounds like a rich discussion is ahead for your class. That story in particular rewards rereading; Liu gives Elena such a chillingly rational voice that it's easy to slip into her worldview until you step back and realize what's been lost. If your students end up debating whether Chalcedony or Elena is more "real," or what kind of AGI they'd trust to raise a child or design a memory, you're in for some excellent conversations.

Feel free to loop back if they raise unexpected questions—I’d love to hear what they come up with.

image source: Creative-Commons image via publicdomainpictures.net