Friday, February 13, 2026

The Singularity Is Neither Here Nor Near

You know something has morphed into pure conventional wisdom when it turns up at Real Clear Politics: The singularity is going viral. The title links a meaningless hypostatization, "the singularity", with a cliche, "going viral". For starters, I asked Chrome AI mode to define "singularity", and I think it did about as good a job as can be done with a very amorphous term:

A singularity is a point where existing rules or models break down because a value has become "infinite" or undefined. While the term originates in mathematics, it is most commonly used today in two very different contexts: astrophysics and technology.

. . . The core idea is that an upgradable intelligent agent (like an AI) could enter a "positive feedback loop" of self-improvement. Each new, more intelligent version would be able to redesign itself even faster, leading to an explosion of superintelligence that far surpasses all human capability.

. . . Futurist Ray Kurzweil, a leading proponent, famously predicted this event will occur around 2045.

But it's important to put Futurist Ray Kurzweil in full context. I prodded Chrome AI mode on this: "Does Ray Kurzweil's idea of the singularity involve human consciousness being uploaded into a computer?" It replied,

Yes, Ray Kurzweil’s vision of the Singularity includes the "uploading" of human consciousness into a digital substrate as a critical step toward achieving "digital immortality". He predicts that by 2040–2045, technology will be advanced enough to instantiate a specific person's entire mental process on a powerful computational medium.

This glosses over the very big philosophical problem of consciousness and where precisely it exists, and simply assumes that if we have the DNA, or maybe the right sort of brain scan, we have the person. But Kurzweil never remotely demonstrates what this formula might look like, how we might digitize it, or whether there would be a consciousness that could actually be identified in the "digital substrate" once it's transferred.

In large part, this is because "AI" as we think about it now is an elaborate sleight of hand not much different from what we see with a talking parrot, crow, or magpie. The other day I posted a video of a crow that's able to answer the question "Where's Walter?" with "Dunno", or sometimes it will simply call "Walter!". But think about it for a moment. If you ask it, "Where's Susan?" it will at best give you a quizzical tilt of the head.

It might even give a sort of gurgle, but it won't answer "Dunno". It can't extend the "Where's" part of the question to recognize the second part refers to any proper name, because it doesn't even know that "Walter" is a name equivalent to "Susan". The crow just processes a string of sounds in a way that it recognizes calls for another string of sounds that it's trained to utter, and it's happy to earn the delighted giggles of the humans who feed it doughnuts.
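To make the trick concrete, the crow's repertoire amounts to an exact-match lookup table: a trained string of sounds in, a trained string of sounds out. Here's a minimal sketch in Python (the trigger and response strings are just the ones from the video; nothing here is a claim about how a crow's brain is actually wired):

```python
# The crow's "conversation" as an exact-match lookup table: trained
# triggers map to trained responses; anything else draws a blank.

TRAINED_CALLS = {
    "where's walter?": "Dunno",  # or sometimes it just calls "Walter!"
}

def crow_hears(sounds: str) -> str:
    # The crow doesn't parse "Where's X?" as a question about a name;
    # it either recognizes the whole string of sounds or it doesn't.
    return TRAINED_CALLS.get(sounds.lower(), "*quizzical head tilt*")

print(crow_hears("Where's Walter?"))  # -> Dunno
print(crow_hears("Where's Susan?"))   # -> *quizzical head tilt*
```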

I've noted here previously that what we call "AI" is more specifically the capability to process natural language linked with massive data search. Computational speed has reached the point where a computer can take either strings of characters or actual speech, run them through sets of rules, and search data so quickly that it answers in what appears to be conversation, but this is nothing but a very, very fast and versatile version of what a parrot, crow, or magpie does in responding to strings of sounds.
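On this account, the difference between the crow and the chatbot is speed and scale, not kind: replace the exact-match table with fuzzy matching over an enormous store of text and you get something that looks like conversation. A toy sketch of that characterization, with an invented two-entry corpus and a crude word-overlap score standing in for "massive data search":

```python
import re

# The same trick at scale: score a query against a store of canned
# question/answer pairs by word overlap and return the best match.

CORPUS = {
    "where is walter today": "I don't know where Walter is.",
    "who is ray kurzweil": "Ray Kurzweil is a futurist and inventor.",
}

def words(s: str) -> set:
    # Lowercase and strip punctuation so "Walter?" matches "walter".
    return set(re.findall(r"[a-z']+", s.lower()))

def answer(query: str) -> str:
    # Pick the stored question whose words best overlap the query.
    best = max(CORPUS, key=lambda k: len(words(query) & words(k)))
    if not words(query) & words(best):
        return "No match."  # the computational head tilt
    return CORPUS[best]

print(answer("Where is Walter?"))  # -> I don't know where Walter is.
```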

What makes talking birds entertaining is the appearance of a conversation, and certain parrots like African Greys do this very well. But here's an insightful Reddit post (CAG refers to a Congo African Grey):

I have a CAG who talks quite a lot. I agree with others here that they say things in context more often than expected. I think people tend to not notice that there are often certain contextual triggers involved though, and that it’s not always as intelligent or amazing as it seems. I’m really not trying to take all the magic out of it because I do think my little man is quite smart and loving, I’m just being realistic.

For example, my CAG knows to ask, “Are you okay?” if I use a depressed tone or expression because that is what my husband asks when I do those things, and he is mimicking him. It’s not that he entirely understands what he’s saying, they’re just very social animals and he’s doing the thing that he’s observed is the thing to do in that scenario.

. . . Does he say it because he expects a real answer like a human does? I sometimes give him one, and it honestly doesn’t seem to affect him one way or another. Does he say it because he’s observed that asking me that can defuse my mood? Maybe; it’s hard to know. But the main reason parrots talk — and any creature learns to speak — is to get things from their environment, which in this example is just general attention from me.

What "AI" does to a less sophisticated observer is basically what an African Grey does in asking or answering questions. AI's computational speed allows it to process language along a far greater range of rules than a parrot's brain, but like a parrot, it's unconscious of what it's saying. It has no agency or will beyond the sets of rules it's given.

The piece at the Real Clear Politics link above reads like mush, but its underlying assumption appears to be that there will be a "singularity" that will somehow change the human race when machines become "smarter" than humans. Its overall conclusion, though, is hard to tease out. On one hand,

At the superheated center of the AI boom, safety and alignment researchers are observing their employers up close, concluding there’s nothing left for them to do and acting on their realization that the industry’s plan for the future does not seem to involve them.

In support of this, he cites a couple of cases, in particular one Mrinank Sharma, a safety researcher at Anthropic, who decided his high-level job was pointless or something and left to “explore a poetry degree and devote myself to the practice of courageous speech.” Exactly what does this prove? But on the other hand,

In other words, the animating narrative of the AI industry — the inevitable singularity, rendered first in sci-fi, then in theory, then in mission statements, manifestos, and funding pitches — broke through right away, diffusing into the mainstream well ahead of the technologies these companies would end up building.

Apparently what's happening is that as the "singularity" approaches, that is, the day when machines become smarter than people, the machines are telling the managers to disregard their safety and alignment researchers, and -- what? Glenn Reynolds the other day told us the machines are learning to make better and better porn. Reynolds, a libertarian who normally would say better porn is a good thing, thinks this is bad. The writer at RCP doesn't even get that specific, but he thinks things are out of control somehow, because the safety and alignment researchers, or at least a few of them, are quitting their jobs to write poetry.

Or wait, they aren't quitting their jobs to write poetry, they're quitting their jobs to get degrees in writing poetry. He concludes,

The AI industry’s foundational story is finally going viral — just for being depressing as hell.

Some people are smarter than parrots. Others not so much.