Sunday, January 26, 2025

What is The Flood? Will It Hit a Floodwall?

Richmond VA Flood Wall

Put On Your Waders

As part of teaching Ethan Mollick's book Co-Intelligence, I subscribed to his substack "One Useful Thing." I react here to the post, "Prophecies of the flood." The piece covers prognostications by those in industry that we will see a flood of superintelligence from machines that reason and learn like us, or Artificial General Intelligence (AGI).

While I still view Mollick as too enthusiastic about adopting AI broadly, I also see nuances in his thinking, both online and in the book.

Let's begin wading into the flood with one of his four rules for AI, the most powerful one to me, "Assume this is the worst AI you will ever use."

Yes. Mollick's video about his prompt "Otter on an airplane, using WiFi" reveals how much premium AI has changed in two years. It stunned me.

I've also seen in the same period how rapidly large language models have progressed, mostly for replying to good prompts and for giving feedback. When I revived this blog, I noted my skepticism that AI would even follow the Gartner Hype Cycle.

Now we have a half-trillion-dollar promise announced by The White House to fund the next generation of machines. It's nearly twice what the nation spent on Project Apollo's lunar-landing program. Thank you, Google Gemini AI, for adjusting the costs for inflation. Practicing what I preach to students, I checked the AI's numbers; Gemini seems to have gotten them from The Planetary Society. Good enough for me.

Update: I'm relieved that private industry will foot the bill, as a story from Reuters notes. AI makers would put up the first $100 billion, with the rest coming from investors, not taxpayers.

So would that much cash open the gates to a flood of superintelligence? Would we even be ready?

I applaud Mollick for noting that "we're not adequately preparing for what even current levels of AI can do, let alone the chance that [those in industry] might be correct." My students worry about not having jobs, yet they face an uneven policy landscape in their classes. One faculty member may never mention AI or may forbid it outright; another might embrace it; a third might encourage it only for certain narrow uses. I don't oppose that sort of freedom, but we have not defined a set of competencies for students to master before they earn a degree.

Even were those to emerge, however, wouldn't they change rapidly as the technology advances?

Here's where I wonder about the technological hurdles AI itself faces on its way to becoming AGI.

Frontier, Fortress, Flood Walls

Mollick refers to the outer edges of AI progress as a "jagged frontier," a useful metaphor for places where we have not fully developed methods for working with the technology. I like the metaphor a great deal, but in my own writing I returned to the more cumbersome "reverse salient" from historian of technology Thomas Parke Hughes. 

Professor Hughes wrote two books that greatly influenced my thinking three decades ago: his magisterial history of electrification, Networks of Power, and his study of technological enthusiasm from roughly the end of the Civil War to World War II, American Genesis.

First, we may need to consider Hughes' theory of "reverse salients." He noted that every major technology hits technical or social obstacles, like an advancing army that encounters a strongpoint that bends the lines of battle around it until overcome. For cars to supplant railways, we needed good roads. For EVs, at least until Tesla, the reverse salient involved range. Today in the US, it's the availability of charging stations and the politicization of EVs (one of the most stupid things to make political in a stupid time). For rocketry, the largest reverse salient has involved the cost of getting a payload to orbit. For something as simple as a small flashlight, the salient meant brightness and battery life, now mostly solved by LEDs. My mighty penlight, using a single rechargeable battery, now shines over 100 feet. My clublike 1990s Maglite, with four disposable D-cells, was about as bright but ten times heftier.

From Hughes' article at National Academies Press, I found this image of a reverse salient for ocean-going ships. For a long time, until Sperry developed a gyrocompass that would be "unaffected by the irregularities of magnetic fields," the older magnetic compass proved a hindrance for advancing ship technologies more broadly.

Image of Reverse Salient

 

Reverse salients, like fortresses under siege, fall in time. Some tech, like nuclear fusion, takes longer to become practical. I've written here before about why, in my opinion, virtual worlds did not catch on for educators. Apologies to fans of Microsoft: my rage against the company was at its peak then. I've softened a bit, though I remain a macOS zealot.

So what about AGI? I'd estimate a few flood walls remain. I speculate:

  • The inability of our power grid to scale up to meet soaring energy usage for the hundreds of data centers AGI would need
  • The inability of AI code to reason in the ways promised by CEOs and enthusiasts
  • A breakdown in Moore's Law for semiconductors, as current silicon chips cannot meet the needs of AGI. New materials may conquer that salient
  • Social upheaval from those who lose their jobs to AI and turn on firms that support AI development. Why? A new digital divide between workaday AI and elite AI that feeds public anger, legislation under a less libertarian future government, resistance from those who find superintelligence an existential threat to humanity out of religious or political beliefs
  • Economic troubles as AI makers blow through venture capital and need government support (but keep in mind that Amazon once lost money too)
  • Black-swan events such as global or domestic conflict (sadly, not far-fetched a week into the new Presidency).

I suspect that one or more of these reverse salients will emerge in a few years, if they are going to emerge at all. Which is most likely? There, I'm as much in the dark as you on this jagged frontier.

Have a look at Mollick's work for more ideas and inspirations, even if you do not share his embrace of this technology.

Image Source: Richmond Flood Wall, Richmond Free Press

Sunday, December 29, 2024

AI: Sorry, You May Not Always Sit at My Table

No Robots Sign

Ethan Mollick's book Co-Intelligence is a dangerous document. There, I said it. We read the text for our campus digital pedagogy cohort in Fall 2024, and at first I was quite excited. The book promises to provide guidelines for wisely adopting AI, yet from the get-go I had issues with the author's less-than-critical acceptance of generative AI.

Let's start with Mollick's four, and seemingly absolute, rules for working with AI:

  1. Always Invite AI to the Table.
  2. Be the Human in the Loop.
  3. Treat AI Like a Person (But Tell It What Kind of Person It Is).
  4. Assume This Is the Worst AI You Will Ever Use.

I don't have much trouble with these save for number 1. I've been reading Wendell Berry, a techno-selective if ever there were one, and I am pretty certain how he'd react to generative AI: Hell no.

Why? Mollick's rule number one flatly violates Berry's notions about adopting new tools.

It also flies in the face of what Howard Rheingold, reporting for Wired eight long years before the first iPhone debuted, found out about the Amish. In "Look Who's Talking," he describes how the Amish approach any technology. They experiment to see if it violates:

a body of unwritten but detailed rules known as the "Ordnung." Individuals and communities maintain a separation from the world (by not connecting their houses to telephones or electricity), a closeness to one another (through regular meetings), and an attitude of humility so specific they have a name for it ("Gelassenheit"). Decisions about technology hinge on these collective criteria. If a telephone in the home interferes with face-to-face visiting, or an electrical hookup fosters unthinking dependence on the outside world, or a new pickup truck in the driveway elevates one person above his neighbors, then people start to talk about it. The talk reaches the bishops' ears.

Thus the Amish man who recently had a long chat with me about the virtues of PEX-based plumbing systems. He, like other Amish (and like me), is a techno-selective, not an utter Luddite.

Rheingold's piece had an enormous intellectual influence on me. It made me wary of new tools, though I admit I fell without reservation for virtual worlds. I've come to regret that uncritical acceptance of a technology that not only fails the test above but also suffers from a clumsy UI and a poor use case in education.

Maybe in consequence, I grew wiser about smartphones, doubting the corporate narrative from the start; today, mine is nearly always off. I use it a little for one social media platform, less for texting, have silenced all notifications, and never watch time-killer videos. Pop culture generally seems too ephemeral for my remaining years on the planet, and I don't want to discuss TV shows with friends. If something looks non-violent and well written, I still wait years before watching a series, methodically.

Influencers? I find them in books. Berry is one. I bought heavily into the question with which Rheingold closes his piece: "If we decided that community came first, how would we use our tools differently?"

I don't consider online community, including virtual worlds, much of a substitute for the real thing. Even gaming with old friends on our Monday Nerd Nites seems a pale shadow of a good in-person meeting. I only recently began to read Berry, and he corroborates much of what Rheingold discovered.

So for Mollick's rule one, I plan to say "no" a lot. What do we gain by always using AI for any intellectual task? In Berry's 1987 article "Why I Am Not Going to Buy a Computer," he lays out nine rules for adopting a new tool:

1. The new tool should be cheaper than the one it replaces.

2. It should be at least as small in scale as the one it replaces.

3. It should do work that is clearly and demonstrably better than the one it replaces.

4. It should use less energy than the one it replaces.

5. If possible, it should use some form of solar energy, such as that of the body.

6. It should be repairable by a person of ordinary intelligence, provided that he or she has the necessary tools.

7. It should be purchasable and repairable as near to home as possible. 

8. It should come from a small, privately owned shop or store that will take it back for maintenance and repair.

9. It should not replace or disrupt anything good that already exists, and this includes family and community relationships.

Generative AI fails most of these, in particular anything to do with localism and energy use, but number nine reminds me of the unforeseen outcomes of ubiquitous mobile telephony: families in restaurants, not looking at each other, all intent on their screens and being somewhere else. That would seem perverse to anyone of my parents' generation or earlier.

So how do we NOT give in to Mollick's notion that we must always bring AI to the table, dinner or otherwise, even without a good use case? I plan to write about that in the new year as I continue researching AI's role in the writing process, where I do cautiously let it have a seat.

Image by Duncan Cumming on Flickr

Friday, December 6, 2024

AI as Transcription Tool, Using Zoom

 
Even those afflicted with terminal Zoom fatigue after the pandemic will enjoy a feature that recently popped up in my Zoom room, a little button labeled "AI Companion." It has proven a good companion indeed.

I have been mentoring a mid-career director of a writing center on the West Coast, and by accident I triggered the feature. In return, we got this when the meeting ended:

  • An executive summary of the topics we discussed
  • A bullet-listed synopsis of ideas, suggestions, and strategies broken down by topic
  • A nearly error-free synthesis of the entire meeting.

The software (Zoom's own proprietary AI) uses Zoom's captioning tool to capture what we say; no video gets recorded. I'm guessing that the data gets shared with the firm without our permission. Caveat emptor.
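Zoom has not published how the Companion works under the hood, but the general pattern, caption text in and summary out, is easy to picture. Here is a minimal sketch of that generic pattern, not Zoom's actual pipeline: it assumes a hypothetical caption export file and uses the OpenAI Python client purely as a stand-in for whatever model Zoom runs.

```python
# A generic transcript-to-summary sketch; NOT Zoom's actual pipeline.
# Assumes a hypothetical caption export ("meeting_captions.txt") and an
# OPENAI_API_KEY in the environment; any chat-style LLM client would do.
from openai import OpenAI

client = OpenAI()

with open("meeting_captions.txt") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You summarize meeting transcripts for the participants."},
        {"role": "user",
         "content": "Produce an executive summary, a bullet-point synopsis "
                    "broken down by topic, and a short synthesis:\n\n" + transcript},
    ],
)

print(response.choices[0].message.content)
```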

My mentee loved the accidental result and plans to try it himself. With a few writing students who agreed to use the Companion, I opened my Zoom room during a face-to-face meeting. When done, I e-mailed them the Zoom AI synopsis. Again, save for misspelling a name or two, the AI succinctly and clearly summed up what we had done.

Whenever I meet someone "scared" of AI, I mention uses such as this as good practices. While I remain a skeptical adopter of this new technology, I rather like a robotic note-taker who works with us openly (and not in secret) while freeing humans to focus on each other rather than on taking notes. 

For students with dysgraphia or other disorders that require them to record a class session, this little feature of Zoom is a godsend. Try AI Companion yourself and let me know what you think.

Sunday, December 1, 2024

A False Filter in the Drake Equation: LLMs

The Drake Equation
I read a piece today at Space.com about a fierce solar storm that hit our planet some 2,687 years ago. Another such Miyake event today would likely spell the beginning of a new dark age (looks at wood stove; better get busy splitting and learn to cut logs without my chainsaw).

The author, Daisy Dobrijevic, goes on, at greater length than I expect from such a casual venue, to discuss why the stability of our home star matters to our survival as a technological civilization. For once, the comments prove as interesting as the piece itself. Those leaving remarks compare Sol to other stars we observe, noting that our star's orderly behavior could mean that we are a solitary civilization in our galactic neighborhood. These respondents go on to discuss other "filters" to the famous Drake Equation.

I had stupidly assumed that all stars like our own remain relatively stable. If not, the number of possible civilizations out there could plummet.
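For readers who want the arithmetic behind that worry, here is the standard Drake Equation with purely illustrative numbers of my own (not figures from the article or its comments), showing how one extra "filter" drags the estimate down:

```python
# The Drake Equation: N = R* x fp x ne x fl x fi x fc x L
# All values below are illustrative placeholders, not estimates from the article.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

baseline = drake(r_star=1.0,       # star-formation rate (stars per year)
                 f_p=0.5,          # fraction of stars with planets
                 n_e=2,            # habitable planets per such star
                 f_l=0.5,          # fraction of those where life appears
                 f_i=0.1,          # fraction that develops intelligence
                 f_c=0.1,          # fraction that becomes detectable
                 lifetime=10_000)  # years a civilization stays detectable

# Treat stellar instability as an extra filter that removes 90% of candidates:
filtered = baseline * 0.1

print(baseline, filtered)  # 50.0 vs 5.0
```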

Being alone proves bracing and calming to me, personally. Yet there exists a difference between solitude and loneliness. If we are the only civilization in our part of the galaxy, that seems less about solitude, a healthy thing for many humans, and more about isolation, a terrifying prospect to me.

Read both the article and the comments, including speculations about the number of civilizations likely in our own galaxy. Let's forget, save as an intellectual exercise, those in galaxies beyond. The Deep Dark of millions of light years means that if they exist or once existed, we'll likely never encounter them, barring some difficult-to-imagine technology such as wormholes or other exotica far beyond our race, which cannot yet harness nuclear fusion.

I liked Dobrijevic's piece and the commentary a great deal, but in the comments I found one claim that needs challenging: the regret that "Unregulated-LLM insanity" could spell the end of our species. I'm not eager to start a flame war with a bunch of anonymous posters, but generally the terms LLM (and AI) get bandied about too casually.

When Doomscrollers, Doomers, and Millennial Cultists talk of an AI Apocalypse, they envision the sort of Hollywood fantasy found in the Terminator or Matrix franchises. My brain goes right to the origin of these fantasies, Harlan Ellison's poignant and horrifying "I Have No Mouth, and I Must Scream." The story does get a visual homage in the first Matrix film. I don't worry about such horrors getting real.

Frankly, current large language models cannot even ask good critical questions as writing coaches. Thus I find it hard to imagine ChatGPT or Anthropic's Claude as planetary overlords, like the nightmarish AM in Ellison's story.

Were General AI to emerge, we might have the sort of "filter" mentioned by the commenters. Some AI companies have that sort of software in mind: a network with superhuman intelligence, able to reason and grow by itself, much as a human does. It could change its "training data," much as we do when we decide on our own to study something new. Like us, it would then be self-aware and have agency beyond answering questions. It could make decisions on its own, contacting us, for instance, with questions of its own. I consider that a new form of life.

Until then, we merely have energy-hungry LLMs in data centers, parsing our words and predicting the next one to appear in a sequence. I'm no neuroscientist, but that's not how a human mind works.
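To make that "predicting the next one" idea concrete, here is a toy sketch of my own, a crude word-frequency model that is nothing like the neural networks inside real LLMs, but it runs the same predict-the-next-item loop in spirit:

```python
# A toy next-word predictor; my own illustration, not any vendor's code.
# Real LLMs use neural networks over tokens, but the basic loop is similar:
# look at the words so far, pick a likely next word, repeat.
from collections import Counter, defaultdict

corpus = "the river rose and the river fell and the town built a wall".split()

# Count which word follows which.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    return follows[word].most_common(1)[0][0] if follows[word] else None

print(predict_next("the"))    # 'river' (follows 'the' twice; 'town' only once)
print(predict_next("river"))  # 'rose' (ties with 'fell'; first seen wins)
```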

While I'd be a fool to say we have nothing to fear from AI, my concerns are more earthly: unemployment, lack of income, not to mention worries about climate change, nuclear proliferation, erosion of the rule of law, regional or global war, and civil strife. So I'd tell our forum pundits opining about AI threats this: we have filters aplenty closer to home than some superintelligence.

Be mindful, then, of the terms used in discussions about AI. We who work with this technology do not want to propagate careless thinking about a very important, and still evolving, innovation. If you still wish to learn more about the Drake Equation, have a look at the NASA site where I got this post's lead image.