Sunday, January 26, 2025

What is The Flood? Will It Hit a Floodwall?

Richmond VA Flood Wall

Put On Your Waders

As part of teaching Ethan Mollick's book Co-Intelligence, I subscribed to his substack "One Useful Thing." I react here to the post, "Prophecies of the flood." The piece covers prognostications by those in industry that we will see a flood of superintelligence from machines that reason and learn like us, or Artificial General Intelligence (AGI).

While I still view Mollick as too enthusiastic about adopting AI broadly, I also see nuances in his thinking, both online and in the book.

Let's begin wading into the flood with one of his four rules for AI, the most powerful one to me, "Assume this is the worst AI you will ever use."

Yes. Mollick's video about his prompt "Otter on an airplane, using WiFi" reveals how much premium AI has changed in two years. It stunned me.

I've also seen in the same period how rapidly large language models have progressed, mostly at replying to good prompts and at giving feedback. When I revived this blog, I noted my skepticism that AI would even follow the Gartner Hype Cycle.

Now we have a half-trillion-dollar promise from The White House to fund the next generation of machines. Congress would have to approve such funding, but it's nearly twice what the nation spent on Project Apollo's lunar-landing program. Thank you, Google Gemini AI, for adjusting costs for inflation. Practicing what I preach to students, I checked the AI's numbers; Gemini seems to have gotten them from The Planetary Society. Good enough for me.
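If you want to check the comparison yourself, here is a minimal sketch of the arithmetic in Python. The Apollo figure is an approximation in the spirit of The Planetary Society's inflation-adjusted estimate, so treat the exact numbers as placeholders rather than authoritative data.

    # Rough check of the "nearly twice Apollo" claim.
    # Figures are approximate placeholders: The Planetary Society puts Project
    # Apollo at about $25.8 billion in then-year dollars, roughly $260 billion
    # after adjusting for inflation.
    stargate_pledge = 500e9   # promised AI infrastructure funding, in dollars
    apollo_adjusted = 260e9   # Apollo cost adjusted for inflation (approximate)

    ratio = stargate_pledge / apollo_adjusted
    print(f"The pledge is about {ratio:.1f}x the inflation-adjusted cost of Apollo.")

Run it and you get a ratio of about 1.9, which squares with "nearly twice."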

So would that much cash open the gates to a flood of superintelligence? Would we even be ready?

I applaud Mollick for noting that "we're not adequately preparing for what even current levels of AI can do, let alone the chance that [those in industry] might be correct." My students worry about not having jobs, and they face an uneven policy landscape in their classes: one faculty member may never mention AI or may forbid it outright; another might embrace it; a third might encourage it only for certain narrow uses. I don't oppose that sort of freedom, but we have not defined a set of competencies for students to master before they earn a degree.

Even were those to emerge, however, wouldn't they change rapidly as the technology advances?

Here's where I wonder about the technological hurdles AI itself faces on its way to becoming AGI.

Frontier, Fortress, Flood Walls

Mollick refers to the outer edges of AI progress as a "jagged frontier," a useful metaphor for places where we have not fully developed methods for working with the technology. I like the metaphor a great deal, but in my own writing I returned to the more cumbersome "reverse salient" from historian of technology Thomas Parke Hughes. 

Professor Hughes wrote two books that greatly influenced my thinking three decades ago: his magisterial history of electrification, Networks of Power, and his study of technological enthusiasm from roughly the end of the Civil War to World War II, American Genesis.

We may need to consider Hughes' theory of “reverse salients.” He noted that every major technology hits technical or social obstacles, like an advancing army that meets a strongpoint and bends its line of battle around it until the strongpoint is overcome. For cars to supplant railways, we needed good roads. For EVs, at least until Tesla, the reverse salient involved range; today in the US, it's the availability of charging stations and the politicization of EVs (one of the most stupid things to make political in a stupid time). For rocketry, the largest reverse salient has been the cost of getting a payload to orbit. For something as simple as a small flashlight, the salient meant brightness and battery life, now largely solved by LEDs. My mighty penlight, running on a single rechargeable battery, now shines over 100 feet. My clublike 1990s Maglite, with 4 disposable D-Cells, was about as bright but 10 times heftier.

Reverse salients, like fortresses under siege, fall in time, though some technologies, like nuclear fusion, take longer to become practical. I've written here before about why, in my opinion, virtual worlds did not catch on for educators. Apologies to fans of Microsoft: my rage against the company was at its peak then. I've softened a bit, though I remain a Mac-OS zealot.

So what about AGI? I'd estimate a few floodwalls remain. I speculate:

  • The inability of our power grid to scale up to meet the soaring energy usage of the hundreds of data centers AGI would need
  • The inability of AI code to reason in the ways promised by CEOs and enthusiasts
  • A breakdown in Moore's Law for semiconductors, as current silicon chips cannot meet the needs of AGI; new materials may conquer that salient
  • Social upheaval from those who lose their jobs to AI and turn on the firms developing it. Why? A new digital divide between workaday AI and elite AI could feed public anger, legislation under a less libertarian future government, and resistance from those who, on religious or political grounds, find superintelligence an existential threat to humanity
  • Economic troubles as AI makers blow through venture capital and need government support (but keep in mind that Amazon once lost money too)
  • Black-swan events such as global or domestic conflict (sadly, not far-fetched a week into the new Presidency).

I suspect that one or more of these reverse salients will emerge in a few years, if they are going to emerge at all. Which is most likely? There, I'm as much in the dark as you on this jagged frontier.

Have a look at Mollick's work for more ideas and inspirations, even if you do not share his embrace of this technology.

Image Source: Richmond Flood Wall, Richmond Free Press
