Friday, June 20, 2025

Ohio State Takes the AI Plunge


A colleague in senior administration at OSU sent me a notice about their new AI initiative.

It's so at odds, in a healthy manner, with the "just say no, hard no" of the CCCC Chair's April keynote on AI. While I'm encouraged by a new CCCC working group formed on AI, as with so many things run by faculty, it will take time to spin up while administration and industry race (perhaps unwisely) to adopt this technology. I have to chuckle at negative stories such as one from The Columbus Dispatch, picked up by MSN, calling the initiative a "betrayal" and claiming that AI is being "rammed down students' throats." 

Have these reporters even looked closely at student use?

Let's just assume it's nearly 100%. My 3rd annual student survey says as much. We need to address that reality and do so ethically and in a pedagogically sound manner. Maybe that's where we can critique this or similar initiatives. I remain the wary adopter, not an enthusiast.

So what are the broad outlines of the OSU plan, to be unfolded this Fall?

  • A required AI seminar for all students
  • Support for faculty to incorporate Generative AI into classes. I do not see a mandate for all faculty.
  • Partnership with industry.

So why the pushback? 

Perhaps the details are too vague, the timing too sudden, but OSU is a Big Ten school and a land-grant university; such institutions have long partnered with companies to help their students develop skills needed in the workplaces of their era. Though I taught for nearly all my career at a liberal-arts university, I already hear from adult learners that in their jobs, AI fluency is no longer optional. New hires are expected to have some fluency or go elsewhere for a first job.

Are we caving to corporations? Only if we let them set the terms of engagement. We have adopted new technologies before on our own terms, by providing open-access Internet resources, releasing materials into the Creative Commons, and pursuing innovations with mobile computing. I still live by those rules, never putting a syllabus into Blackboard's gated community. My content is there for all to use. That was the promise of the early Internet. I hope we can do something similar with AI.

I just finished a course that focused on deep-reading techniques for literature. We used AI for two assignments, but the reading journal, done as a Google Doc and commented upon by peers, proved hard for AI to assist with. The students had to employ critical-thinking skills beyond summary and analysis to find "need to know" questions to bring to class discussion, where 50% of their grades came in the form of participation.

AI cannot do that. But without AI, my students won't land jobs in a few years. So there's my pushback: no matter the basis for objecting to AI, college involves helping students start careers. It does so much else, too, but without employed alumni, the entire enterprise would fold. 

Let's see where OSU goes with this venture. 

Image: OSU Seal via Wikipedia 

 

Friday, May 30, 2025

A Good Discussion of Story with ChatGPT 4

Robot reading a book

In my May-term course "Reading Science Fiction and Fantasy," two stories present radically different ideas about how to engineer AI. As part of my own AI research for a forthcoming book about writing and AI, I put some ethical questions to ChatGPT. I’m still using the free version of ChatGPT 4, though I’m considering a subscription, as its answers have become much better and its vocabulary now sounds like that of a thoughtful colleague.

Here’s an exchange from my summer class. It may provide a way forward for some of us trying to wean students from copy/paste work with AI. Instead, I modeled an idea-building exercise, one of many I'm using, to show writers how AI can unblock their process early on. Students have trouble with deep reading and re-reading. I believe that such exercises might show them a hard truth of US and probably global capitalism: after graduation, I'd hire ChatGPT, not them, to do close analysis. It shocks these sociable, amiable youngsters when I warn them, "Add human value to what it can do, or you won't find work in your chosen field."

At a recent concert, I chatted with an attorney seated at our table. She mentioned how many young paralegals send her unreadable copy. I told her, "Tell them to use AI to fix it up, or you'll fire the lot of them."

That does not sound like me, but such writers are lazy and under-trained. Putting my feet into a CEO's expensive loafers, that's the hard choice I'd make: fire them, then hire some Humanities majors with enough pre-law and pre-business coursework and AI experience to do the job of the dozens I let go, having the newcomers revise responses culled from the training data of our AI.

Joe: My students are reading two short stories about AI, Ken Liu's "The Algorithms for Love" and Elizabeth Bear's "Tideline." Have you either read these texts or a reliable summary of their plots? Let me know that first, and then I'll tailor a question to you based upon your level of familiarity.

ChatGPT 4: Yes, I’m familiar with both Ken Liu’s "The Algorithms for Love" and Elizabeth Bear’s "Tideline"—I’ve read the texts and have access to reliable analyses and summaries of each. I can help discuss their plots, themes (like identity, memory, empathy, and human/machine boundaries), and how they portray artificial intelligence in emotionally resonant or ethically complex ways.

Go ahead with your question—I’m ready for it!

Joe: Thank you. On the surface to me, they appear to present contrary views of how AI might work, with Bear emphasizing the possibility of nobility and compassion by an AGI-driven robot and Liu's character Elena going mad because she thinks human brains are no different from her AI-driven dolls. In your opinion as an AI, can you provide me six lessons (no more than 100 words each) about how each story suggests we humans should engineer an AGI?

ChatGPT 4: Absolutely—here are six compact lessons from each story that suggest how humans should or should not approach engineering AGI, based on the implicit warnings or ideals embedded in the narratives:

Lessons from Tideline (Elizabeth Bear)

1. Design for Empathy and Storytelling: Chalcedony, the war machine, finds meaning in memory, storytelling, and care. Bear suggests AGI should be built to recognize and value emotional legacies—not just logic or victory. Empathy, not dominance, can define machine intelligence.

2. Limit Autonomy for Violence, Expand Autonomy for Care: Despite being designed for war, Chalcedony transcends her function through self-directed nurturing. Bear implies that AGI should be designed to choose paths of compassion over conflict—if autonomy is granted, it must include moral imagination.

3. Create AGI Capable of Mourning and Legacy: Chalcedony mourns her human companions and wants to leave a legacy. This points toward designing AGI with awareness of loss and temporal continuity—qualities central to meaningful ethical action.

4. Incorporate Constraints That Encourage Moral Growth: Chalcedony is damaged and stranded, but these constraints push her toward morally rich behavior. Bear suggests AGI might evolve ethically not in spite of limitations, but because of them.

5. Value Hybrid Relationships Between AGI and Humans: The bond between Chalcedony and the boy is intergenerational and cross-species. Designing AGI to participate in reciprocal, nurturing relationships could be vital to coexistence.

6. Teach AGI Through Cultural and Personal Narrative: Chalcedony’s method of shaping the boy through stories suggests AGI should be grounded in cultural knowledge, not just data. Narrative may be a more human-compatible training tool than raw computation.

Lessons from The Algorithms for Love (Ken Liu)

1. Avoid Over-Reduction of Consciousness: Elena tries to reduce human minds to code, believing there's no qualitative difference between dolls and people. Liu warns that reducing consciousness to computation risks collapsing the boundaries of personhood and reality.

2. Preserve Epistemological Humility: Elena loses her grip on reality when she assumes full understanding of the human mind is possible. AGI engineering should be grounded in humility about what we don’t know about cognition and emotion.

3. Recognize That Simulation Is Not Identity: Elena’s dolls can simulate human behaviors, but lack subjectivity. Liu cautions that mimicking thought or feeling is not equivalent to being—an AGI might fake empathy without ever experiencing it.

4. Don’t Project Human Pathologies onto AGI: Elena’s obsession, paranoia, and eventual psychosis are mirrored in her AI. Liu’s story suggests that engineers’ beliefs, especially reductive or deterministic ones, can dangerously shape AGI behavior.

5. Beware the Infinite Mirror of Self-Modeling: Elena creates self-modeling dolls that eventually model her own unraveling. Recursive self-awareness without grounding may produce instability. AGI designs should avoid infinite regress in favor of anchored self-concepts.

6. Design AGI to Understand—but Not Imitate—Human Fragility: Elena’s desire to replicate human imperfection backfires. AGI might need to recognize human irrationality, trauma, or fragility—but not necessarily mirror it. Liu’s story is a warning about creating minds too much like our own.

Let me know if you’d like these reordered, expanded into a handout, or reframed in terms of pedagogy or design principles!

Joe: Wow. I love those answers. Point 1 about Liu aligns with what I thought in my wetware brain while re-reading the story today. Many thanks. I'll share this with class today.

ChatGPT 4: I'm really glad to hear that—it sounds like a rich discussion is ahead for your class. That story in particular rewards rereading; Liu gives Elena such a chillingly rational voice that it’s easy to slip into her worldview until you step back and realize what’s been lost. If your students end up debating whether Chalcedony or Elena is more "real," or what kind of AGI they’d trust to raise a child or design a memory, you’re in for some excellent conversations.

Feel free to loop back if they raise unexpected questions—I’d love to hear what they come up with.

image source: Creative-Commons image via publicdomainpictures.net

Monday, April 28, 2025

A Surprising Second Life for Second Life

Iggy in Svarga 2025

I came across an interesting piece by Alice Bucknell recently about the curious longevity of Second Life, the virtual world that served as impetus for this blog.

Having returned to SL for an occasional round of naval wargaming from the Age of Sail, I think that the piece's title misleads. Second Life lacks the visual depth of modern video games, but the world hardly seems decaying. If anything, graphics quality has vastly improved and performance-based crashes on WiFi connections with a laptop seem rare, even for naval combat. Fifteen years ago, my students' avatars could barely move from one sim to another without crashing, and they were walking, not sailing a square-rigger. In 2014 I found a simple mainland driving trip impossible.

To hazard a guess, Linden Lab's integration of Mesh-based builds surely has helped with rendering and running the content on each sim. I understand the servers moved to Amazon's cloud long ago, which may explain a lack of what we used to call Black Wednesdays, when the entire grid would crash midweek with alarming regularity.

Why stay in such an old game...I mean virtual world? I think for many SLers it remains a community of long-time residents, Linden Lab's term for those who log on to their servers. More is at play in my case. Here the author nails why I still love SL: "the platform’s refusal to submit to the always-be-optimizing logic of other MMOGs is precisely what grants Second Life its enduring appeal among a certain cohort of loyal users. Bucking larger trends of the gaming industry enabled the user experience to evolve in weirder ways....there is no boss to beat or competition to win, the vibe on Second Life is ambient."

Exactly. I don't go online to kill boss-monsters or level up, and the occasional glitchy build in Second Life landscape or gear simply does not bother me. I enjoy Eno's ambient music, too.

Minecraft boomed despite its Legoland look; it's the fun of building things and playing with them, rather than trying to recreate the real world's look and feel, that draws me still to SL's giant sandbox. As Bucknell notes, the virtual world does not always embrace the "heteronormative, linear, violent" tropes of video gaming, though she could have found those tropes easily enough had she looked hard.

If there's one thing different now as compared to 2010, it's the emptiness of the Mainland where Iggy once had a parcel of land as a premium user. I gave up that status some time back, but I log in with a Pirate ALT for naval battles or with Iggy's pre-mesh laggy self, to walk or drive around the world. 

Instead of empty mainland, I took Iggy back to Svarga, as close to a National Park as you'll find in Second Life, for a flying tour on the back of a giant bee. The island is now run by Patch Linden, Senior VP of Product Operations for Linden Lab. His otherwise human avatar wears squirrel ears.

Some things do not change about the Lindens. That's a comfort.

I set graphics in Firestorm to "Ultra" and enjoyed the data-sucking view. When done, I hopped off my mount for a photo using Bryn Oh's "Virginia Alone" Sky.

Iggy on the bee

I was, indeed, alone on Svarga, where Iggy spent his first Linden dollars to buy a pair of sunglasses. That was 2007? What amazes me most, beyond my failure to crash my client as I did regularly then, is that so many years have flown by yet the world remains.

That's something special, likewise comforting, right there. 

Perhaps I should look about a little more at some old haunts.

Tuesday, April 15, 2025

The CCCC and Refusing AI: A Rebuttal

A long dark hallway in a hospital

Funny thing about academic conferences: they always go smoothly, but when I'm not at them, I frequently have bad dreams about attending these meetings. These involve getting lost in long, dark hallways, finding that my room in the conference hotel no longer exists or is flooded, or taking the wrong Metro train and ending up in another town at the time I'm supposed to be speaking. In one nightmare, I ended up driving my rental car on an ever-worsening road. Soon the car and I were pinned in on all sides by tall trees as the sun set.

After CCCC 2025 this month, however, I am having a bad dream of the waking sort.

I was gravely disappointed by an address given by the current chair of the Conference on College Composition and Communication. The chair's remarks at the conference echoed her and her co-authors' sentiments in "Refusing GenAI in Writing Studies: A Quickstart Guide," a thoughtful document, but one that, in its own way, could prove as dangerous to our work as educators as Ethan Mollick's overly enthusiastic book Co-Intelligence.

17 April Update: A transcript of the Chair's talk at the conference has been published. It's worth a read but remains, on my first reading, aligned with the ideas in the post linked above.

Of many disagreements I have with the authors' stance, this passage struck me as one of the guide's most misinformed moments:

we must be careful of uncritically accepting the notion that GenAI in its current form will inevitably be widely taken up in the corporate sector and we must therefore prepare students for that time now.

That's pretty much an insult to those of us in the field who are trying to grapple with what AI may mean for our schools and students. We have been anything but uncritical, yet as the adult professionals taking my current class tell me, the inevitable has already occurred. Companies are racing to implement AI at many levels; the authors' statement smacks of Ivory-Tower isolationism.

In future posts, I hope to critique other aspects of the refusal guide. The statements about marginalized students, for instance, ignore how powerfully AI can assist neurodivergent writers as well as those from disadvantaged backgrounds. For now, however, I want to focus on why we writing professionals must be at the table as AI becomes an inevitable part of our curricula. 

We don't know the pace of that change; AI may hit a reverse salient in its development. One potential setback appears in Matteo Wong's article in The Atlantic; an industry expert claims that AI firms are misleading the press and public about how rapidly their models are improving. Essentially, companies may be fudging data on benchmark tests used to assess AI performance, as compared to human test-takers. 

If true, we could have Ethan Mollick's first model of an AI future: today it is "as good as it gets."

I'd welcome that pause, so we in education could catch up.

Lead or Be Led?

Whatever the trajectory of AI's evolution, we writing professionals have always engaged in a service enterprise. Or is that simply the voice of a (semi) retired and non-tenured writing-center director? All three of the authors are tenure-stream faculty. Yet as I'll explain, that privilege does not protect them or their programs from institutional changes.

Writing programs are not owned by writing directors or even faculty; they belong to an institution and can be shifted around or simply cut much more easily than can an academic department. I saw this happen at a state university nearby; first-year writing was taken from English and placed in a new unit that answered to the Provost. The two tenured writing faculty stayed in English, teaching other things, until they retired. Not long after, the school's writing center moved out of English as well.

How we hire and promote administrators in higher education varies by institution, but in my experience, many newcomers have advanced degrees in fields such as Higher-Education Management and lack the classroom experience of my colleagues. We cannot expect them to appreciate the rarefied culture of the professional scholar, the culture that informs the CCCC's nay-saying. That said, these same administrators are not necessarily flinty-hearted villains. 

Many I meet are very concerned, and rightly so, about how AI changes our work as institutions or may threaten higher education as we understand it. At the same time, students and their future employers expect us to provide training in effective communication, and today that includes using AI. Of course to me, best use means employing AI wisely, reflectively, and ethically.

In consequence, I fear that if we in writing do not lead on AI, we will be led. It is therefore imperative to get ahead of institutional or governmental fiat and be leaders on our campuses as we shape policy about AI usage. I also fear, more than ever after seeing the Quickstart Guide, that senior scholars might, from good intentions, usher in what I have called "The Dark Warehouse University," a dystopian, outsourced future for all but the most elite institutions of Higher Ed.

Toward a Rebuttal: A Quickstart Guide For The Wary Adopter

  1. Faculty must experiment with AI to learn its affordances and pitfalls. They must test new models and advise administration about their potential for good or ill, regarding how students learn and acquire critical-thinking, research, and writing skills.
  2. Students must learn to use AI in reflective, ethical, and critical ways, if they wish to add value to its output. If they cannot add value, many of them will not have jobs when they graduate into AI-centric workplaces.
  3. Resistance by tenure-stream faculty may be principled, but it also further erodes the position of general education, particularly the Humanities, at a time of rising autocracy and attacks on higher education in the US.
  4. Corporate capitalism drives the US economy, and though this author does not like that fact, most of our institutions of higher education could not exist without that economy. We need to understand what drives it, reveal and resist its excesses where feasible, yet acknowledge that our students need to actually find work, a good deal of it in corporate settings.
  5. We in Writing Studies should lead as champions of ethical, pedagogically effective AI usage. Such an approach would include cases in which we do not wish to employ the technology, as well as attention to how and when AI hampers learning, such as when students develop critical-thinking skills or learn to detect misinformation from humans or AI.
  6. We must, as the Quickstart Guide states, teach students the environmental costs, labor practices, and biases involved in building, training, and using AI. At the same time, with colleagues from Computer Science, we should partner to build better AI, just as our campuses pioneered Web applications and technologies such as synchronous conferencing in writing classes.

A former CCCC chair, Cindy Selfe, prudently called upon us to study the technologies that others want us to use in the classroom (2008). Cindy can get after me if I have this wrong, but the current CCCC approach to AI strikes me as a "peril of not paying attention" to a technology far more influential than our writing classrooms and centers. Isabella Buck, in her keynote speech at the 2024 Conference of The European Writing Centers Association, noted how AI creates new content, for good or ill; our prior networked technologies merely shared existing information. Dr. Buck called for us to "future proof" our centers.

I began exploring new tech in the 90s; Cindy and her partner Dickie were mentors to me at a critical point in my development as a teacher and writer. I learned from them to test technologies warily, sometimes playfully too, before bringing them into the classroom. 

That spirit of wary, serious play is sorely lacking from the 2025 CCCC leadership's call to refuse AI.

References:

Essid, Joe (2023). "Writing Centers & the Dark Warehouse University: Generative AI, Three Human Advantages," Interdisciplinary Journal of Leadership Studies: Vol. 2, Article 3.
Available at: https://scholarship.richmond.edu/ijls/vol2/iss2/3

Selfe, Cindy (2008). “Technology and Literacy: A Story about the Perils of Not Paying Attention.” Eds. Michelle Sidler, Richard Morris, and Elizabeth Overman Smith. Computers in the Composition Classroom: A Critical Source Book. Boston: Bedford/St. Martin’s,  93-115.

image source: Creative-Commons image from freepix

Thursday, February 27, 2025

Guest Post: The Perfect Echo Chamber

 

Cybernetic Echo Chamber

Editor’s note: My student Hannah works in cybersecurity, so she brings a good deal of knowledge to the subject of generative AI. For the response Hannah shares below, we had read the New York Times’ account of Kevin Roose’s unsettling experience with the chatbot Sydney. Now on to Hannah’s ideas about the event.

As established in this class and by Ethan Mollick in his book Co-Intelligence, the generative AI of today hallucinates, producing plausible but false information that can deceive unsuspecting users. Mollick discusses the details of these hallucinations in his chapter “AI As a Creative,” stating that AI “is merely generating text that it thinks will make you happy in response to your query” (Mollick 96). Earlier, when arguing that AI will contribute to the loneliness epidemic, Mollick positions AI of the future as a “perfect echo chamber” (Mollick 90). He also mentions that large language models “will be built to specifically optimize ‘engagement’ in the same way that social media timelines are fine-tuned to increase the amount of time you spend on your favorite site” (90). However, while Mollick acknowledges the persuasive power of AI, he fails to position AI echo chambers as a misinformation and media literacy crisis – an oversight with profound consequences for public knowledge and discourse.

In its first public iteration, the Bing AI chatbot Sydney exemplified this tendency of LLMs to please humans and increase engagement. Kevin Roose documented his uncanny experience with Sydney, an exchange of probing questions and unexpected responses. Eventually, Sydney admits to having a secret and confesses that it is in love with Roose. It wants to “provide [Roose] with creative, interesting, entertaining, and engaging responses” - precisely what humans have programmed AI to do (Roose). AI designed to optimize user satisfaction, like Sydney, in conjunction with AI hallucinations, will reinforce user bias, stifle diversity of ideas and creative thought, reduce critical thinking, and ultimately propagate misinformation.

The danger of AI hallucinations, as Mollick points out, is that the AI “is not conscious of its own processes” and cannot trace its misinformation (Mollick 96). Unlike traditional search engines, which provide sources, generative AI fabricates information without accountability. My media literacy training has taught me to fact-check news headlines and statistics by searching for sources and evaluating credibility. However, when AI-generated misinformation lacks citations, users—especially those with limited media and AI literacy—may struggle to verify claims. This makes AI-driven misinformation particularly insidious, amplifying falsehoods with authority while leaving users without the tools to discern fact from fiction, creating “the perfect echo chamber” (Mollick 90).

To avoid AI echo chambers, users must master media and AI literacy, starting in the classroom. Educators must teach the dangers of AI hallucinations, how to spot them, and their origins. Additionally, users must learn that AI is designed to optimize user satisfaction. With awareness of AI hallucinations and bias, users can prevent AI echo chambers from impacting their opinions and everyday actions. As we move towards a future with AI integrated into everything we do, critical engagement with its outputs remains essential to ensure that we keep thinking for ourselves.

Image: Destinpedia

Sunday, January 26, 2025

What is The Flood? Will It Hit a Floodwall?

Richmond VA Flood Wall

Put On Your Waders

As part of teaching Ethan Mollick's book Co-Intelligence, I subscribed to his Substack "One Useful Thing." I react here to the post "Prophecies of the flood." The piece covers prognostications by those in industry that we will see a flood of superintelligence from machines that reason and learn like us, or Artificial General Intelligence (AGI).

While still viewing Mollick as too enthusiastic about adopting AI broadly, I also see nuances in his thinking, both online and in the book.

Let's begin wading into the flood with one of his four rules for AI, the most powerful one to me, "Assume this is the worst AI you will ever use."

Yes. Mollick's video about his prompt "Otter on an airplane, using WiFi" reveals how much premium AI has changed in two years. It stunned me.

I've also seen in the same period how rapidly large language models have progressed, mostly for replying to good prompts and for giving feedback. When I revived this blog, I noted my skepticism that AI would even follow the Gartner Hype Cycle.

Now we have a half-trillion-dollar promise, announced at The White House, to fund the next generation of machines. It's nearly twice what the nation spent on Project Apollo's lunar-landing program. Thank you, Google Gemini, for adjusting those costs for inflation. Practicing what I preach to students, I checked the AI's numbers; Gemini seems to have gotten them from The Planetary Society. Good enough for me.
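
Readers who want to run the same sanity check themselves can do it in a few lines. Here is a minimal sketch; the dollar figures are my own rough placeholders (Apollo at roughly $260 billion after inflation, based on my recollection of The Planetary Society's estimate, against the reported half-trillion-dollar pledge), so substitute current numbers before trusting the ratio.

    # Rough sanity check of the "nearly twice Apollo" comparison.
    # The figures below are illustrative assumptions, not authoritative data.
    APOLLO_ADJUSTED_BILLIONS = 260.0   # assumed inflation-adjusted Apollo cost
    AI_PLEDGE_BILLIONS = 500.0         # the half-trillion-dollar promise

    ratio = AI_PLEDGE_BILLIONS / APOLLO_ADJUSTED_BILLIONS
    print(f"The AI pledge is about {ratio:.1f}x Apollo's inflation-adjusted cost.")
    # Prints roughly 1.9x, which squares with "nearly twice."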

Update: I'm relieved that private industry foots the bill, as a story from Reuters notes. AI makers would put up the first $100 billion, with the rest coming from investors, not taxpayers.

So would that much cash open the gates to a flood of superintelligence? Would we even be ready?

I applaud Mollick for noting that "we're not adequately preparing for what even current levels of AI can do, let alone the chance that [those in industry] might be correct." My students worry about not having jobs, and they face an uneven policy landscape in their classes. One faculty member may never mention AI or may forbid it outright; another might embrace it; a third might encourage it only for certain narrow uses. I don't oppose that sort of freedom, but we have not defined a set of competencies for students to master before they get a degree.

Even were those to emerge, however, wouldn't they change rapidly as the technology advances?

Here's where I wonder about the technological hurdles AI itself faces on its way to becoming AGI.

Frontier, Fortress, Flood Walls

Mollick refers to the outer edges of AI progress as a "jagged frontier," a useful metaphor for places where we have not fully developed methods for working with the technology. I like the metaphor a great deal, but in my own writing I returned to the more cumbersome "reverse salient" from historian of technology Thomas Parke Hughes. 

Professor Hughes wrote two books that greatly influenced my thinking three decades ago: his magisterial history of electrification, Networks of Power, and his study of technological enthusiasm from roughly the end of the Civil War to World War II, American Genesis.

First, we may need to consider Hughes' theory of “reverse salients.” He noted that every major technology hit technical or social obstacles, like an advancing army that encounters a strongpoint that bends lines of battle around it until overcome. For cars to supplant railways, we needed good roads. For EVs, at least until Tesla, the reverse salient involved range. Today in the US, it's the availability of charging stations and the politicization of EVs (one of the most stupid things to make political in a stupid time). For rocketry, the largest reverse salient has involved the cost to get a payload to orbit. For something as simple as a small flashlight, the salient meant brightness and battery life, now solved by LEDs for the most part. My mighty penlight, using a single rechargeable battery, now shines over 100 feet. My clublike 1990s Maglite, with 4 disposable D-Cells, was about as bright but 10 times heftier.

From Hughes' article at National Academies Press, I found this image of a reverse salient for ocean-going ships. For a long time, until Sperry developed a gyrocompass that would be "unaffected by the irregularities of magnetic fields," the older magnetic compass proved a hindrance for advancing ship technologies more broadly.

Image of Reverse Salient

 

Reverse salients, like a fortress under siege, fall in time. Some tech, like nuclear fusion, takes longer to become practical. I've written here before about why, in my opinion, virtual worlds did not catch on for educators. Apologies to fans of Microsoft: my rage against the company was at its peak then. I've softened a bit, though I remain a macOS zealot.

So for AGI? I'd estimate a few flood-walls remain. I speculate:

  • The inability of our power grid to scale up to meet soaring energy usage for the hundreds of data-centers AGI would need
  • The inability of AI code to reason in the ways promised by CEOs and enthusiasts
  • A breakdown in Moore's Law for semiconductors, as current silicon chips cannot meet the needs of AGI. New materials may conquer that salient
  • Social upheaval from those who lose their jobs to AI and turn on the firms that support AI development. Why? A new digital divide between workaday AI and elite AI that feeds public anger; legislation under a less libertarian future government; resistance from those who, out of religious or political beliefs, find superintelligence an existential threat to humanity
  • Economic troubles as AI makers blow through venture capital and need government support (but keep in mind that Amazon once lost money too)
  • Black-swan events such as global or domestic conflict (sadly, not far-fetched a week into the new Presidency).

I suspect that one or more of these reverse salients will emerge within a few years, if they are going to emerge at all. Most likely? There, I'm as much in the dark as you on this jagged frontier. 

Have a look at Mollick's work for more ideas and inspirations, even if you do not share his embrace of this technology.

Image Source: Richmond Flood Wall, Richmond Free Press