Sunday, December 29, 2024

AI: Sorry, You May Not Always Sit at My Table

No Robots Sign

Ethan Mollick's book Co-Intelligence is a dangerous document. There, I said it. We read the text for our campus digital pedagogy cohort in Fall 2024, and at first I was quite excited. The book promises guidelines for wisely adopting AI, yet from the get-go I had issues with the author's less-than-critical acceptance of generative AI.

Let's start with Mollick's four seemingly absolute rules for working with AI:

  1. Always Invite AI to the Table.
  2. Be the Human in the Loop.
  3. Treat AI Like a Person (But Tell It What Kind of Person It Is).
  4. Assume This Is the Worst AI You Will Ever Use.

I don't have much trouble with these, save for number 1. I've been reading Wendell Berry, a techno-selective if ever there were one, and I am pretty certain how he'd react to generative AI: Hell no.

Why? Mollick's rule number one flatly violates Berry's notions about adopting new tools.

It also flies in the face of what Howard Rheingold, reporting for Wired eight long years before the first iPhone debuted, found out about the Amish. In "Look Who's Talking," he describes how the Amish approach any technology. They experiment to see if it violates:

a body of unwritten but detailed rules known as the "Ordnung." Individuals and communities maintain a separation from the world (by not connecting their houses to telephones or electricity), a closeness to one another (through regular meetings), and an attitude of humility so specific they have a name for it ("Gelassenheit"). Decisions about technology hinge on these collective criteria. If a telephone in the home interferes with face-to-face visiting, or an electrical hookup fosters unthinking dependence on the outside world, or a new pickup truck in the driveway elevates one person above his neighbors, then people start to talk about it. The talk reaches the bishops' ears.

Hence the Amish man who recently had a long chat with me about the virtues of PEX-based plumbing systems. He, like other Amish (and me), is a techno-selective, not an utter Luddite.

Rheingold's piece had an enormous intellectual influence on me. It made me wary of new tools, though I admit falling without reservation for virtual worlds. I've come to regret that uncritical acceptance of a technology that not only fails the test above but also suffers from a clumsy UI and a poor use case in education.

Perhaps in consequence, I grew wiser about smartphones, doubting the corporate narrative from the start; today, mine is nearly always off. I use it sparingly for one social media platform, less so for texting, have silenced all notifications, and never watch time-killer videos. Pop culture generally seems too ephemeral for my remaining years on the planet, and I don't want to discuss TV shows with friends. If something looks non-violent and well written, I still wait years before watching a series, methodically.

Influencers? I find them in books. Berry is one. I bought heavily into the question with which Rheingold closes his piece: "If we decided that community came first, how would we use our tools differently?"

I don't consider online community, including virtual worlds, much of a substitute for the real thing. Even gaming with old friends on our Monday Nerd Nites seems a pale shadow of a good in-person meeting. Only recently did I begin reading Berry, and he corroborates much of what Rheingold discovered.

So for Mollick's rule one, I plan to say "no" a lot. What do we gain by always using AI for any intellectual task? In Berry's 1987 article "Why I Am Not Going to Buy a Computer," he lays out nine rules for adopting a new tool:

1. The new tool should be cheaper than the one it replaces.

2. It should be at least as small in scale as the one it replaces.

3. It should do work that is clearly and demonstrably better than the one it replaces.

4. It should use less energy than the one it replaces.

5. If possible, it should use some form of solar energy, such as that of the body.

6. It should be repairable by a person of ordinary intelligence, provided that he or she has the necessary tools.

7. It should be purchasable and repairable as near to home as possible. 

8. It should come from a small, privately owned shop or store that will take it back for maintenance and repair.

9. It should not replace or disrupt anything good that already exists, and this includes family and community relationships.

Generative AI fails most of these, in particular anything to do with localism and energy use, but number nine reminds me of the unforeseen outcomes of ubiquitous mobile telephony: families in restaurants, not looking at each other, all intent on their screens and being somewhere else. That would seem perverse to anyone of my parents' generation or earlier.

So how do we NOT give in to Mollick's notion that we must always bring AI to the table, dinner or otherwise, even without a good use case? I plan to write about that in the new year as I continue researching AI's role in the writing process, where I do cautiously let it have a seat.

Image by Duncan Cumming on Flickr
