Thursday, March 19, 2026

Another CCCC, Another Rejection of AI


At face value, the latest rejection of AI in writing classrooms from the National Council of Teachers of English does not sound like a radical document. It calmly frames the issue in terms of students' and faculty members' rights to choose how to teach and learn. 

It's also out of touch with how so many universities are being run. Once again, as I did last year, I will issue a brief rebuttal. One of the issues below could be easily addressed. The other? Not so handily.

  • Realities of how classes get scheduled and listed

    If students have the right to opt out of AI in any course they take, it would create chaos for instructors. Those like me who scaffold AI into assignments do so in ways not possible without these applications. Students would not know until they saw a syllabus how (or if) AI would be employed in a class.

    The resolution claims that "Professors should additionally respect a student’s choice to refuse AI. To do this, it would be ideal that they have assignments that students can choose from that do not involve generative AI and that do not isolate the students from class discussions and activities."

    My podcasting assignment requires the use of ElevenLabs software. Would I then give students who refuse a second option? My classes are small. I could do that. But what happens in large courses that enroll many dozens of students?

    One work-around would be for registrars to flag courses that use AI. Then students could exercise their rights to choose accordingly.

  • The reality of who teaches writing and who makes the decisions

    The document rightly notes that "Generative AI is but the latest version of this neoliberal approach to efficiency and expediency," but then it does not give instructors advice about how to counter this trend with senior administration that follows neoliberal principles. The authors rightly note as well that we work in "a profession that has long dealt with labor issues and that continues to rely considerably on frequently underpaid contingent, adjunct, and graduate student labor."

    So how, exactly, would such at-will employees say "no" when a university requires AI literacy in its curriculum? Here I'll speak bluntly: the tenured faculty on the committee that drafted this resolution should have known better. That sort of Ivory-Tower elitism in 2025 led me to leave NCTE.

    NCTE leadership understands well the contingent nature of writing instruction and the realities of who now has the power of the purse at our institutions. 

    Perhaps these same authors can fight for better governance at colleges and universities, so the voiceless and underpaid have more of a role in shaping policy? Perhaps more tenured faculty could teach more writing-intensive classes? Such teachers do have more power to refuse AI and shape policy around its adoption on campus.

    It seems my old profession (well, I still pursue it part-time) is lost in the fog, as industry lays plans for orbital constellations of data-centers and AI apps pop up on our phones without our asking. This will not end well for NCTE, which doesn't bother me. But it does bother me that many teachers of writing may be hurt in the process.

Image: Creative-Commons "The campanile at UC Berkeley shrouded in fog." by Daniel Parks at Flickr

Friday, February 6, 2026

A Conversation By Economists About AI

Speakers on stage for discussion

I had the pleasure of attending a "Sharp Viewpoints" event on my old employer's campus, where Dr. Kevin Hallock, the university's president, hosted two economists with expertise in AI, UVA's Anton Korinek and MIT's David Autor. The subject, AI and the Future of Work, should concern us all.

I hope not to misrepresent what I heard, but all three experts on stage consider it inevitable that AI will continue to advance rapidly and that it will disrupt careers in fields such as Finance, Computer Science, and Accounting. These have been popular majors for my former students. The scope of the disruption and what happens to wages, as AI takes on more white-collar tasks, remain to be seen. Korinek cited a tenfold annual increase in the capacity of AI systems: a 2.5x gain in efficiency multiplied by a 4x gain in capabilities.
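The growth figures Korinek cited compound quickly, as a quick calculation shows. The 2.5x and 4x factors are the ones reported above; the five-year horizon is my own illustration, not something the speakers specified:

```python
# Back-of-envelope check of the growth figures cited above:
# a 2.5x annual efficiency gain times a 4x capability gain
# compounds to a 10x increase in effective capacity per year.
efficiency_gain = 2.5
capability_gain = 4.0
annual_factor = efficiency_gain * capability_gain

# Compounded over a hypothetical five-year window (my assumption):
years = 5
total_growth = annual_factor ** years

print(annual_factor)  # 10.0
print(total_growth)   # 100000.0 -- a hundred-thousandfold in five years
```

At that rate, the "small window of time" Korinek describes below is easy to understand.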

Korinek warned the audience that we have a small window of time to prepare for the resulting changes without chaos in the economy, though Autor felt that if we were to see a gradual loss of a profession, which he called "generational," our economic health could be sustained. He did warn that nothing on the scale of losing 10% of jobs annually could be tolerated without upheaval.

Both speakers likened gradual change to, say, automating trucking. The industry would slowly shed human drivers over time, because prior investment in vehicles and warehouses can't be profitably junked overnight. As older drivers retire, however, gradually fewer young people would enter the field and robots would take on the work of moving cargo on the highways. 

As for me? Whoever or whatever does the driving, I still prefer trains for 90% of the hauling, with a truck doing the final leg of the journey. 

The mood was cautiously optimistic, but I'm no optimist. My dad loved being a long-haul trucker. He was not as happy when he ran a wholesale produce company, though that enabled him to put away some money for when he retired. He preferred "the Road" to the security of a desk, and he passed that love of highway travel on to me. Driving gave him a purpose. Yet Dad, in retirement, did not begin to write poetry, learn Greek, or play golf. He watched TV and was bored while living on his Social Security and investment income.

Something the speakers did not address to my satisfaction: if we do have a future without nearly as many jobs for humans, what do the many people without side interests or hobbies or talents do with their time? I'm not a good model: I've stayed busier than ever since leaving full-time employment.

I ask my question, a Humanist's response, assuming that some sort of universal basic income would arise; Autor prefers not a monthly dole but a universal basic investment fund for every citizen, beginning at birth.

Without it, I suspect we'd have some sort of Butlerian Jihad of the mass unemployed, to smash the data centers, and frankly, I'd support smashing them if the alternative meant vast numbers of destitute folks existing alongside a tiny elite empowered by AI. Our speakers did note how this sort of future could emerge, endangering democracy. I'd argue it already is emerging under the broligarchs of Silicon Valley and our current Administration.

Yet with income for all in a jobs-free future, we might have a lot of folks on a dole, not hiking or painting but watching TV and not contributing to our civilization. In my darkest hours, I also think we already have that, without a dole in the US. 

It's better to have a purpose. I didn't hear much discussion of "what does it mean to be a human?" in last night's one-hour chat. Of course, that sort of discussion has been going on among philosophers for millennia, even before Socrates began to question people at Athens' Agora. 

Both Korinek and Autor did fear a rise in inequality, as wages might fall while AI-copiloted productivity rises. That said, everyone agreed that new jobs will emerge, some of them perhaps related to newly found leisure time. Autor reminded us that 100 years ago, 38% of Americans worked in agriculture. Today under 2% do. But a person in 1926 also could not imagine the types of careers many of us now have. Nor did they have our concept of leisure time. A video-game developer's career would have been as alien as a Man from Mars.

Fair point, but when (according to them, not if) AI and humanoid robots replace most human labor, including game design, what will most of us do to find a purpose?

What will colleges do? Korinek, bless him, as well as Hallock, upheld the value of the Liberal Arts for coming to grips with essential and enduring questions. Autor, while nodding to the value of liberal education, said he'd be more "crass" and note that a college degree also means more income and professional training. I don't disagree with him, yet marrying some careerist coursework to a passion for something seems wiser than choosing a "safe" major in a field that bores you.

The event left me unsettled. If the future they predict comes, I'd begin by having students read Plato's Republic and The Federalist Papers with me, for two explanations of what a society can do to organize itself against chaos.

If you want to see a video of the talk, UR recorded it. Thanks to Fred Hagemeister at Richmond for sharing the link.

Friday, January 16, 2026

Giantism and The Moon Race

NASA rockets 1960s

The race is on, with the Artemis 2 mission coming as soon as early next month. I'm hoping to see it and future endeavors work well, so humanity begins a long voyage to the stars as a multi-planet species. I know that sounds odd in a blog about minimalist living and self-reliance for rural life, but to me the space program has always been about learning new techniques for doing things here on Earth, about new materials, for instance, that went into cordless tools I use as a DIYer or insulation I use to cut our energy costs. Splitting-maul and Moon rocket? Why not?

I've spoken at length about this with futurist Bryan Alexander, who shares my love for space travel. We are both puzzled by a rejection of human exploration by many on the Left and from the environmental movement. It amounts to a form of neo-Luddism at a time when many nations and companies are pursuing reusable launch vehicles. NASA's big SLS Moon rocket is not, save for the Orion capsule.

What critics ignore is how money is already being made in the heavens and how seemingly silly tourist flights resemble early aviation's first paying passengers. I don't think there's any way to stop this next Space Age, save for a Kessler Event (the film Gravity depicts one) making launches untenable and sending telecom back to the 1960s.

I'd claim that moving human energy and heavy industry into space would be a godsend for our home planet's ecosystem, especially when cleaner rocket fuels come into play. You'll find a few ideas about that here. As much as I dislike the billionaire associated with SpaceX, he has one thing right: we need to begin moving some of our people and energy off-planet. That does not mean abandoning our world to the slow ecocide currently under way.

At the same time, I've written of his megalomaniacal plans for Starship at my other blog; I suspect the derivative called the Human Landing System (HLS) will prove a disaster the first time it tries to land on an unprepared lunar surface. To me it's as bad an idea as von Braun's Nova rocket of the late 1950s, also marketed to NASA as a Moon rocket when its maker was really trying to build a Mars rocket in disguise and with public monies. Both Nova and HLS would try to land a skyscraper-sized rocket on the Moon.

The current US President, who pays no attention to details save those concerning his enormous ego, is pushing us to land by 2028, the final year (I hope) of his Administration. Doing that would beat China's measured program, which plans to land astronauts by 2030. I suspect the Chinese will make that deadline, and we'll lose an HLS lander or, worse, lives trying to beat that arbitrary date. 

Perhaps Jeff Bezos' Blue Moon, an alternative lander from Blue Origin of less monstrous proportions, can get American (and I hope international partners') boots on the Moon safely and even begin the work of building a permanent settlement there. If we think small about getting there (to stay this time) we might do better than SpaceX's giantism.

By the time we make that next giant leap, we may also have national leadership that is not anti-environmental yet still pro-Space. We may then have both well-funded science and human exploration / settlement happening beyond orbit.

It's one of the things that still gives me hope in a difficult time. 

Saturday, December 20, 2025

Reason 11 For Not Building Data Centers Now

Google StarCloud Concept

A few stories recently caught my eye about moving data centers away from where we live, far away, in fact.

China's government plans to test space-based data centers starting in 2026, with a rollout of versions costing as much as or less than Earth-based centers slated for the 2030s. Meanwhile in the States, Amazon's Jeff Bezos has hatched similar plans. Not to be left behind by his billionaire rival, Elon Musk wants to use his Starship mega-rockets to orbit data centers. Then there's Google Starcloud, with a test planned for 2027. That system would also depend upon something like Starship to make the venture cheap enough to construct.

At first blush, it seems like a crazy idea, but as I considered the benefits versus costs, it makes sense to move this industry skyward:

  • With reusable rocket boosters, costs to orbit per kilogram have dropped radically in recent years. If Starship prospers and others copy its model, we'll see the door opened for very cheap rocket launches.
  • We have abundant solar energy in orbit. There's no need for generators, nuclear reactors, or natural gas.
  • We also have easy cooling, without using water. Rotate a satellite and you have solar heating on one side, the utter cold of space on the other.
  • Space-based data centers need not be huge to do their job. They could be a constellation of large satellites that talk to each other, as Starlink does already. Right now, however, big centers seem to be the model for space-based construction.
  • If we do go big, we know how to do this already thanks to the International Space Station. 
  • Parts of orbital centers can be replaced with one launch, and older modules can be upgraded by robots or made small enough to burn up on deorbit.
  • Beaming data to ground stations 200 miles away on Earth is not a problem. We already do this.
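The first bullet above, falling cost per kilogram, drives the whole case. A rough sketch of the arithmetic follows; every number here is my own illustrative assumption, not a quoted price from any company named in this post:

```python
# Illustrative launch-cost arithmetic. All figures are assumptions
# chosen for the sake of the sketch, not quoted industry prices.
legacy_cost_per_kg = 10_000   # USD/kg, expendable-rocket era (assumed)
reusable_cost_per_kg = 1_500  # USD/kg, partially reusable booster (assumed)
target_cost_per_kg = 200      # USD/kg, a fully reusable heavy lifter (assumed)

module_mass_kg = 5_000        # one hypothetical data-center module

for label, cost_per_kg in [("legacy", legacy_cost_per_kg),
                           ("reusable", reusable_cost_per_kg),
                           ("target", target_cost_per_kg)]:
    launch_cost = cost_per_kg * module_mass_kg
    print(f"{label}: ${launch_cost:,} to orbit one module")
```

Under these assumed figures, the same module drops from $50 million to $1 million to launch, which is the kind of shift that turns a crazy idea into a business plan.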

My hope is that this technology will mature fast, to avoid more environmental and social disruption on Earth. And closest to home, I hope my County's Board of Supervisors pays attention, before we end up with a huge and obsolete building placed on agricultural land next to residences.

One issue that does bother me, beyond the possible climate-change effects of launching so many rockets?

It's called The Kessler Effect (or Syndrome). Readers may have seen the film Gravity, in which one explosion in space results in a cascading set of collisions and, consequently, a massive cloud of space debris hurtling at around 10 miles per second, chasing an underwear-clad Sandra Bullock.

How serious is a collision in space? I once saw a piece of Space Shuttle Challenger's front windows, removed after a mission before the craft's 1986 catastrophe. The section of thick glass was damaged badly by a tiny fleck of paint tossed off some forgotten rocket booster, perhaps decades earlier. Now imagine millions of these objects, large and small, forming a cloud around the Earth, making any rocket launch an exercise in futility, destroying telecom networks, and grounding human space travel for many decades. Or centuries.
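The windshield anecdote can be put in rough numbers. Kinetic energy scales with the square of velocity, so even tiny debris hits hard at orbital speeds; the masses and the 10 km/s closing speed below are my own illustrative assumptions:

```python
# Kinetic energy of orbital debris: KE = 1/2 * m * v^2.
# Closing speeds in low orbit can be on the order of 10 km/s;
# the object masses here are illustrative assumptions.
closing_speed_m_s = 10_000   # ~10 km/s relative velocity (assumed)

paint_fleck_kg = 1e-6        # ~1 milligram paint fleck (assumed)
bolt_kg = 0.01               # ~10 gram metal fragment (assumed)

def kinetic_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    """Classical kinetic energy, 0.5 * m * v^2."""
    return 0.5 * mass_kg * speed_m_s ** 2

print(kinetic_energy_joules(paint_fleck_kg, closing_speed_m_s))  # 50.0
print(kinetic_energy_joules(bolt_kg, closing_speed_m_s))         # 500000.0
```

Fifty joules from a milligram of paint explains the cratered shuttle window; half a megajoule from a ten-gram bolt is roughly the kinetic energy of a car at highway speed.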

A center as large as Starcloud makes for a huge target, were the Kessler Effect to begin. 

Belatedly, firms and governments are considering ways to mitigate space debris and harden orbital infrastructure. Let's hope they get it right, as they'll only have one opportunity. I'd like to get my data from the heavens, not from Earth with more carbon pollution and wasted groundwater.

image: Starcloud center from Google video