Thursday, April 16, 2026

The Next War: Robots and Orbital Chaos?


I greatly enjoy studying military history. It's less pleasant looking at ongoing conflicts, because they represent failures of diplomacy, arrogance, greed, hatred, or a combination of these factors.

Yet I do learn something of use by studying past conflicts. Despite the ballistic missiles, drones, and interceptors involved, with Iran we are fighting a version of World War 2. The battle is over oil, the commodity that led the Tojo regime to attack the US, Great Britain, and Japan's key goal, the oil-rich Dutch East Indies. The US had embargoed Japan over its aggression in China, which meant Japan needed a new source of crude.

What the US public does not seem to grasp today is how information, not oil, may be the commodity that makes or breaks our ability to fight, were we to stumble into a war with China or another near-peer adversary. The chances of either of us making a direct attack on the other seem remote, but I'm guessing that in early 1914, few Europeans would have foreseen a global conflict erupting by year's end.

Next time, a war may mean more at home than higher prices at the gas pump. Unlike Imperial Japan, we do produce a lot of our own oil. That's no great comfort in 2026. For a long time, I've looked to two works from earlier this century: Richard Clarke and Robert K. Knake's Cyber War and P.W. Singer's 2009 Wired for War. The latter, with its depiction of massive swarms of autonomous drones, plus denial of GPS and other space-based infrastructure, now has me checking in anew on Singer's other work. It can keep you up nights, thinking about how helpless many folks would be if their phone-based maps and other apps began to malfunction. Just the other day, the credit-card system at our local gas station was down. It became a cash-only business. I handed over a twenty, because I always carry cash (thanks, Dad, for that lesson); other customers left, not having any cash on them at all. And if all the ATMs were down...well.

In a war, such a situation might not end in an hour. Gizmodo ran an interview with several experts on the topic, including Singer, that I highly recommend. You may be too tired of reading about the current war to consider a future conflict; if you do, it's wise to do so early in the morning with a favorite cup of coffee or tea in hand, not at night.

While there is little an ordinary citizen can do directly to cope with the loss of GPS, or of access to space after a Kessler Syndrome event in orbit, it's best to at least know why such life-changing events could happen. Many of my students, when asked, could not tell me which direction was west unless the sun was setting. I told them that it might be wise to learn how to navigate locally with a map, not an app, and to have some cash--a month's worth of expenses would be ideal--locked away. Don't wait until an emergency comes and the ATMs stop working or run dry.

I'm no Doomsday Prepper, but I like to have a month's worth of difficulties in mind. I read about and marvel at the sacrifices our predecessors made during World War 2, such as forgoing new tires (and cars) and rationing gas and other strategic commodities. We might be asked to do the same one day. Would we? Every hurricane, snow alert, or rare event such as the COVID pandemic leads the stores here to be sacked, as if the Visigoths had arrived.

Even if merchants could stave off hoarders, restocking shelves might prove difficult, as so much of our supply chain relies on smart technologies and just-in-time delivery to manage inventories. An Amazon distribution center would likely find itself paralyzed. Attacks on our communications and Internet might come not simply from hackers or anti-satellite tech but also from the destruction of fiber-optic cables and other terrestrial infrastructure, attacks that result in few direct human casualties. We have not even hardened our electrical grid against a Carrington Event, something that would only (!) cost 10 to 30 billion dollars. I don't think that our short-sighted capitalist system, which worries about the next quarter more than the next quarter-century, can imagine spending the needed money to protect the grid from warfare. That's a government problem, I could imagine a corporate board reasoning.

History again has a lesson for us about failure to anticipate new forms of warfare. Japan lost its war against the Allies for many reasons, such as its inability to match the industry and technological might of the US, but by 1945 it was also out of petroleum beyond what it had stockpiled in the home islands. The US Army Air Forces' and Royal Navy's little-known campaign to bomb oil fields and refineries in what is now Indonesia drastically cut Japan's supply of aviation and other fuels. US subs hunted down tankers trying to bring oil home. The situation became so grave that by the time of the final large naval battles in the Philippines, Japanese warships filled up on crude, not refined oil; the refineries in the former Dutch East Indies had been too devastated by that point to produce enough bunker fuel.

Part of Japan's failure was not merely its dependence on distant oilfields but its inability to predict, or counter, US-led submarine warfare and strategic bombing. The Japanese, unlike their German allies, thought of subs as hunters of warships. They never used their considerable fleet of advanced subs, armed with the best torpedoes of the Second World War, as the US did. Japan also never developed enough advanced aircraft capable of tackling the B-29 Superfortresses that set fire to the home islands' major cities long before the A-Bombs were dropped.

Iran may well teach us a few lessons yet for countering drones and asymmetrical warfare. But will we learn how to apply such lessons against a larger, better-armed nation when we may not gain air superiority over its homeland, while being struck at home, where our infrastructure is not hardened, or in space?

I fear that we'll all find out in the next few decades. 

Image: Spacewar! computer game, Wikipedia 

Saturday, March 28, 2026

A Good Counterpoint for Orbital Data Centers

NVIDIA Test in orbit

I got an F in Thermodynamics, shortly before I flunked out of UVA's Aerospace Engineering program.

So I leave matters such as cooling satellites to experts, and recently I came across a concern about data centers in space: can they scale to the size we need for AI? Forget issues of space debris and power. The Reverse Salient, in Hughes's terms, appears to be cooling.

The article here discusses, in somewhat technical terms that should still be understood by the non-engineer, the reasons that what engineers call "radiative cooling" in space may not work for the amount of heat generated by orbital data centers. Perverse thought, no? Space is either hideously hot in the sun or deadly cold in the shade. But the problem involves moving heat away from the object in question. There's no breeze in a vacuum; any water would boil off or freeze solid. Read the article for more technical details.
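To give a rough sense of the scale involved, here is a back-of-the-envelope sketch (my own, not from the article) using the Stefan-Boltzmann law. The numbers are illustrative assumptions: a 1 MW facility, radiator emissivity of 0.9, panels held at 300 K, and a deep-space background near 3 K.

```python
# Rough estimate of the radiator area an orbital data center would
# need to shed its waste heat by radiation alone (no convection in vacuum).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w, emissivity=0.9, t_panel_k=300.0, t_space_k=3.0):
    """Radiator area (m^2) needed to reject power_w watts to deep space."""
    flux = emissivity * SIGMA * (t_panel_k**4 - t_space_k**4)  # W per m^2
    return power_w / flux

# A 1 MW facility -- tiny by terrestrial data-center standards:
print(f"{radiator_area(1_000_000):,.0f} m^2 of radiator needed")
```

Even under these generous assumptions (panels that never face the sun, perfect heat transport from chip to radiator), a single megawatt demands on the order of 2,400 square meters of radiator surface, which is why scaling to the gigawatt-class loads AI demands is the hard part.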

This year we will find out if orbital data centers can scale. I hope so. We simply do not have the energy resources or land for the type of AI economy the billionaires envision. The carbon pollution alone might seal our fate.

NVIDIA, many other US firms, and the Chinese space program are testing the concept now or will be this year. I wish them luck. We need to get this right.

I'm increasingly of the belief that we'd be better off had AI not been invented, a thought I also have about smartphones and social media, but like those technologies, AI is not going to be wished away.

Image source: Daily Galaxy, Creative Commons image 

Thursday, March 19, 2026

Another CCCC, Another Rejection of AI


At face value, the latest rejection of AI in writing classrooms from the National Council of Teachers of English does not sound like a radical document. It calmly frames the issue in terms of students' and faculty members' rights to choose how to teach and learn. 

It's also out of touch with how so many universities are being run. Once again, as I did last year, I will issue a brief rebuttal. One of the issues below could be easily addressed. The other? Not so handily.

  • Realities of how classes get scheduled and listed

    If students have the right to opt out of AI in any course they take, it would create chaos for instructors. Those like me who scaffold AI into assignments do so in ways not possible without these applications. Students would not know until they saw a syllabus how (or if) AI would be employed in a class.

    The resolution claims that "Professors should additionally respect a student’s choice to refuse AI. To do this, it would be ideal that they have assignments that students can choose from that do not involve generative AI and that do not isolate the students from class discussions and activities."

    My podcasting assignment requires the use of ElevenLabs software. Would I then give students who refuse a second option? My classes are small. I could do that. But what happens in large courses that enroll many dozens of students?

    One work-around would be for registrars to flag courses that use AI. Then students could exercise their right to choose accordingly.

  • The reality of who teaches writing and who makes the decisions

    The document rightly notes that "Generative AI is but the latest version of this neoliberal approach to efficiency and expediency," but then it does not give instructors advice about how to counter this trend with senior administration that follows neoliberal principles. The authors rightly note as well that we work in "a profession that has long dealt with labor issues and that continues to rely considerably on frequently underpaid contingent, adjunct, and graduate student labor."

    So how, exactly, would such at-will employees say "no" when a university requires AI literacy in its curriculum? Here I'll speak bluntly: the tenured faculty on the committee that drafted this resolution should have known better. That sort of Ivory-Tower elitism in 2025 led me to leave NCTE.

    NCTE leadership understands well the contingent nature of writing instruction and the realities of who now has the power of the purse at our institutions. 

    Perhaps these same authors can fight for better governance at colleges and universities, so the voiceless and underpaid have more of a role in shaping policy? Perhaps more tenured faculty could teach more writing-intensive classes? Such teachers do have more power to refuse AI and shape policy around its adoption on campus.

    It seems my old profession (well, I still pursue it part-time) is lost in the fog, as industry lays plans for orbital constellations of data centers and AI apps pop up on our phones without our asking. This will not end well for NCTE, which doesn't bother me. But it does bother me that many teachers of writing may be hurt in the process.

Image: Creative-Commons "The campanile at UC Berkeley shrouded in fog." by Daniel Parks at Flickr

Friday, February 6, 2026

A Conversation By Economists About AI

Speakers on stage for discussion

I had the pleasure of attending a "Sharp Viewpoints" event on my old employer's campus, where Dr. Kevin Hallock, the university's president, hosted two economists with expertise in AI, UVA's Anton Korinek and MIT's David Autor. The subject, AI and the Future of Work, should concern us all.

I hope not to misrepresent what I heard, but all three experts on stage consider it inevitable that AI will continue to advance rapidly and that it will disrupt careers in fields such as finance, computer science, and accounting. These have been popular majors for my former students. The scope of the disruption, and what happens to wages as AI takes on more white-collar tasks, remain to be seen. Korinek cited a roughly tenfold annual increase in the capacity of AI systems: a 2.5x increase in efficiency times a 4x increase in capabilities.
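To see how quickly that growth rate compounds, here is a small sketch (my own illustration, not from the talk) of the multiplier implied by a 2.5x efficiency gain times a 4x capability gain sustained year over year:

```python
# How a yearly 2.5x efficiency gain times a 4x capability gain
# compounds into overall AI system capacity over several years.
EFFICIENCY_GAIN = 2.5   # per year
CAPABILITY_GAIN = 4.0   # per year

def capacity_multiplier(years):
    """Total capacity relative to today after the given number of years."""
    return (EFFICIENCY_GAIN * CAPABILITY_GAIN) ** years

for y in (1, 2, 3):
    print(f"after {y} year(s): {capacity_multiplier(y):,.0f}x today's capacity")
```

At that pace, capacity grows about a thousandfold in three years, which puts Korinek's warning about our "small window of time" in perspective.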

Korinek warned the audience that we have a small window of time to prepare for the resulting changes without chaos in the economy, though Autor felt that if we were to see a gradual loss of a profession, which he called "generational," our economic health could be sustained. He did warn that nothing on the scale of 10% of jobs lost annually could be tolerated without an upheaval.

Both speakers likened gradual change to, say, automating trucking. The industry would slowly shed human drivers over time, because prior investment in vehicles and warehouses can't be profitably junked overnight. As older drivers retire, however, gradually fewer young people would enter the field and robots would take on the work of moving cargo on the highways. 

As for me? Whoever or whatever does the driving, I still prefer trains for 90% of the hauling, with a truck doing the final leg of the journey. 

The mood was cautiously optimistic, but I'm no optimist. My dad loved being a long-haul trucker. He was not as happy when he ran a wholesale produce company, though that enabled him to put away some money for when he retired. He preferred "the Road" to the security of a desk, and he passed that love of highway travel on to me. Driving gave him a purpose. Yet Dad, in retirement, did not begin to write poetry, learn Greek, or play golf. He watched TV and was bored while living on his Social Security and investment income.

Something the speakers did not address to my satisfaction: if we do have a future without nearly as many jobs for humans, what will the many people without side interests, hobbies, or talents do with their time? I'm not a good model: I've stayed busier than ever since leaving full-time employment.

I ask my question, a Humanist's response, assuming that some sort of universal basic income would arise; Autor prefers not a monthly dole but a universal basic investment fund for every citizen, beginning at birth.

Without it, I suspect we'd have some sort of Butlerian Jihad of the mass unemployed, to smash the data centers, and frankly, I'd support smashing them if the alternative meant vast numbers of destitute folks existing alongside a tiny elite empowered by AI. Our speakers did note how this sort of future could emerge, endangering democracy. I'd argue it already is emerging under the broligarchs of Silicon Valley and our current Administration.

Yet with income for all in a jobs-free future, we might have a lot of folks on a dole, not hiking or painting but watching TV and not contributing to our civilization. In my darkest hours, I think we already have that in the US, without a dole.

It's better to have a purpose. I didn't hear much discussion of "what does it mean to be a human?" in last night's one-hour chat. Of course, that sort of discussion has been going on among philosophers for millennia, even before Socrates began to question people at Athens' Agora. 

Both Korinek and Autor did fear a rise in inequality, as wages might fall while AI-copiloted productivity rises. That said, everyone agreed that new jobs will emerge, some of them related perhaps to newfound leisure time. Autor reminded us that 100 years ago, 38% of Americans worked in agriculture. Today under 2% do. But a person in 1926 also could not imagine the types of careers many of us now have. Nor did they have our concept of leisure time. A video-game developer's career would have been as alien as a Man from Mars.

Fair point, but when (according to them, not if) AI and humanoid robots replace most human labor, including game design, what will most of us do to find a purpose?

What will colleges do? Korinek, bless him, as well as Hallock, upheld the value of the Liberal Arts for coming to grips with essential and enduring questions. Autor, while nodding to the value of liberal education, said he'd be more "crass" and note that a college degree also means more income and professional training. I don't disagree with him, yet marrying some careerist coursework to a passion for something seems wiser than choosing a "safe" major in a field that bores you.

The event left me unsettled. If the future they predict comes, I'd begin by having students read Plato's Republic and The Federalist Papers with me, for two explanations of what a society can do to organize itself against chaos.

If you want to see a video of the talk, UR recorded it. Thanks to Fred Hagemeister at Richmond for sharing the link.