I've had a very productive conversation with my colleague Thomas, who is a "hard no" person when it comes to AI in classes. We agree that reasoned Humanist dissent should be seated at the table, even as Humanists such as I do invite AI to take a seat.
Here I employ a metaphor used in Ethan Mollick's book Co-Intelligence: Living and Working With AI. I fear that Mollick may miss some valid reasons, from a Humanist perspective, for being very wary of what AI may do to our minds and to our reading and writing habits. In a year with a candidate who walks and talks like an authoritarian, perhaps of a fascist inclination, much rides on how new technologies will influence civic discourse in the coming years.
Thomas holds ideas similar to those espoused in the article "ChatGPT Does Not Have to Ruin College," online at The Atlantic. I like many of those reasons for resisting the hype cycle around AI, but I also turn to a much older set of caveats, espoused back in 1988 at the dawn of the personal-computing age.
Wendell Berry's "Why I Am Not Going to Buy a Computer" caused a stir when it ran in Harper's, and it still rankles some of us who have found, say, blogging really good for our writing muscles. Berry still holds that his "tiny no" was the right answer.
While I disagree broadly with Berry on computing, I do find one aspect of his refusal very compelling. In his essay he lists nine criteria for adopting a new tool:
1. The new tool should be cheaper than the one it replaces.
2. It should be at least as small in scale as the one it replaces.
3. It should do work that is clearly and demonstrably better than the one it replaces.
4. It should use less energy than the one it replaces.
5. If possible, it should use some form of solar energy, such as that of the body.
6. It should be repairable by a person of ordinary intelligence, provided that he or she has the necessary tools.
7. It should be purchasable and repairable as near to home as possible.
8. It should come from a small, privately owned shop or store that will take it back for maintenance and repair.
9. It should not replace or disrupt anything good that already exists, and this includes family and community relationships.
Generative AI fails, by my reckoning, most of these tests. It does, arguably, do better work than the traditional search engine (test 3). Otherwise, it fails tests 4 and 9 badly. I suppose in time AI server farms might be covered with solar panels (test 5), but Japan's decision to restart its nuclear power plants to help power AI, as well as Microsoft's deal to restart Three Mile Island for AI power, argue otherwise.
Berry's no Luddite. Note the solar panels in the Wikipedia image I chose for this post. I'm reminded of how Howard Rheingold called the Amish "adaptive techno-selectives" in his insightful 1999 feature piece about mobile phones, "Look Who's Talking."
We will know, in time, if Berry proves correct when he says, in a recent interview, that "you could just ask your computer and it’ll tell you. But this doesn’t contribute to the formation of a mind." What is learning, after all? I've long distinguished information from knowledge: having more information, held up to the scrutiny of my admittedly imperfect powers of reasoning and critical thinking, builds a store of useful knowledge. As a farmer as well as an academic, I know some of the things Berry knows. I can judge when a field is ready for a cover crop by hard-earned experience, but I also go online for weather forecasts, advice about soil conditions in Central Virginia, and organic methods for controlling pests.
My concern, however, is that we may offload reasoning to large language models, whose propensity to hallucinate without careful prompt engineering has been well documented in journalistic and scholarly work. A troubling feedback loop results: can we detect these errors if doing so requires the very reasoning powers we are exercising less frequently?
I don't know, but I do know that naysayers such as my colleague, Wendell Berry, and others who thoughtfully resist marketing hyperbole need a seat at the table.