The author, Daisy Dobrijevic, goes on at some length, for what I usually find a casual venue, to discuss why the stability of our home star matters to our survival as a technological civilization. For once, the comments prove as interesting as the piece itself. Those leaving remarks compare Sol to other stars we observe, noting that our star's orderly behavior could mean we are a solitary civilization in our galactic neighborhood. These respondents go on to discuss other "filters" to the famous Drake Equation.
I stupidly had assumed that all stars like our own prove relatively stable. If not, the number of possible civilizations out there could plummet.
Being alone proves bracing and calming to me, personally. Yet there exists a difference between solitude and loneliness. If we are the only civilization in our part of the galaxy, that seems less about solitude, a healthy thing for many humans, and more about isolation, a terrifying prospect to me.
Read both the article and the comments, including the speculations about how many civilizations likely exist in our own galaxy. Let's forget, save as an intellectual exercise, those in galaxies beyond. The Deep Dark of millions of light years means that if they exist or once existed, we'll likely never encounter them, barring some difficult-to-imagine technology such as wormholes or other exotica far beyond a species that cannot yet harness nuclear fusion.
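For anyone who wants to see how the famous formula behaves, here is a minimal sketch of the Drake Equation in Python. The parameter values below are illustrative guesses of my own, not figures from the article or its comments (published estimates vary by orders of magnitude); the point is how quickly pessimistic "filters" drive the count toward zero.

```python
# A minimal sketch of the Drake Equation: N = R* · fp · ne · fl · fi · fc · L.
# All parameter values here are illustrative assumptions only.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Return N, an estimate of detectable civilizations in our galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# One deliberately modest set of guesses:
n = drake_equation(
    r_star=1.5,     # new stars formed per year in the Milky Way
    f_p=0.9,        # fraction of stars with planets
    n_e=0.5,        # habitable planets per planetary system
    f_l=0.1,        # fraction of those on which life arises
    f_i=0.01,       # fraction of those that develop intelligence
    f_c=0.1,        # fraction of those that produce detectable signals
    lifetime=1000,  # years such a civilization keeps signaling
)
print(f"Estimated communicating civilizations: {n:.2f}")  # about 0.07
```

Tighten a couple of those fractions and the answer rounds to zero; loosen them and the galaxy fills up. That sensitivity is exactly why the commenters can argue so freely.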
I liked Dobrijevic's piece and the commentary a great deal, but one remark in the comments I found regrettable: the claim that "Unregulated-LLM insanity" could spell the end of our species. That sort of claim needs challenging, though I'm not eager to start a flame war with a bunch of anonymous posters. Generally, the terms LLM and AI get bandied about too casually.
When Doomscrollers, Doomers, and Millennial Cultists talk of an AI Apocalypse, they envision the sort of Hollywood fantasy found in the Terminator or Matrix franchises. My brain goes right to the origin of these fantasies, Harlan Ellison's poignant and horrifying "I Have No Mouth, and I Must Scream." The story even gets a visual homage in the first Matrix film. I don't worry about such horrors becoming real.
Frankly, current large language models cannot even ask good critical questions as writing coaches. Thus I find it hard to imagine ChatGPT or Anthropic's Claude as planetary overlords, like the nightmarish AM in Ellison's story.
Were General AI to emerge, we might face the sort of "filter" mentioned by the commenters. Some AI companies have that sort of software in mind: a network with superhuman intelligence, able to reason and grow by itself, much as a human does. It could change its "training data," much as we do when we decide on our own to study something new. Like us, it would then be self-aware and have agency beyond answering questions. It could make decisions on its own, contacting us, for instance, with questions of its own. I consider that a new form of life.
Until then, we merely have energy-hungry LLMs in data centers, parsing our words and predicting the next one to appear in a sequence. I'm no neuroscientist, but that's not how a human mind works.
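To make that distinction concrete, here is a toy next-word predictor in Python. It is only a crude illustration of the "predict the next token" idea, built on word-pair counts over a tiny made-up corpus; real LLMs use vast transformer networks, and nothing below resembles their internals.

```python
from collections import Counter, defaultdict

# A toy bigram "next-word predictor": count which word follows which,
# then guess the most frequent continuation. Purely illustrative.

corpus = "the sun is stable the sun is quiet the sky is dark".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently seen continuation of `word`, if any."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sun"))  # -> "is"
print(predict_next("is"))   # -> "stable" (ties broken by first occurrence)
```

A statistical guess about the next word, scaled up enormously, still is not a mind, which is the whole point.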
While I'd be a fool to say we have nothing to fear from AI, my concerns are more earthly: unemployment and loss of income, not to mention worries about climate change, nuclear proliferation, erosion of the rule of law, regional or global war, and civil strife. So I'd tell our forum pundits opining about AI threats this: we have filters aplenty closer to home than some super-intelligence.
Be mindful, then, of the terms used in discussions about AI. We who work with this technology do not want to propagate careless thinking about a very important, and still evolving, innovation. If you wish to learn more about the Drake Equation, have a look at the NASA site where I got this post's lead image.