Wednesday, June 4, 2025

More On The AI Parlor Trick

The Wall Street Journal op-ed on AI that I linked on Monday, or at least the reports that it cited, seem to have caused a stir. Earlier, it grabbed the attention of intellectual welterweight Glenn Reynolds, who cites a tweet from weird guy Elon Musk:

How scary is AI? In some ways pretty scary. Does AI have consciousness? Well, maybe. At least it seems to be concerned with its own survival, and willing to defy instructions to protect itself.

Scheming, deception, subversion, sandbagging — these sound like the behavior of a conscious entity. Is it “really” conscious or just simulating the behavior of a conscious entity? How could we tell, and why would it matter? Maybe there’s in some sense no there there, under the hood, but I don’t know why that would matter except to philosophers.

. . . Elon Musk, who has long worried about AI, is worried. And maybe we should be too. We’re putting a lot of effort into creating beings that will have their own agendas, and that will, if everything goes as designed, be much smarter than us in meaningful ways. Maybe in every way.

Let's keep in mind that Glenn Reynolds has described himself variously as a Methodist and a transhumanist, and as far as I'm aware, he's never made a serious effort to reconcile the two, much less attempted to define in any depth what he means by either term. I guess he doesn't know why that would matter except to philosophers. However, the Wikipedia entry on Transhumanism says,

A common feature of transhumanism and philosophical posthumanism is the future vision of a new intelligent species, into which humanity will evolve and which eventually will supplement or supersede it. Transhumanism stresses the evolutionary perspective, including sometimes the creation of a highly intelligent animal species by way of cognitive enhancement (i.e. biological uplift), but clings to a "posthuman future" as the final goal of participant evolution.

In 2009, Reynolds wrote,

When I spoke to technology pioneer and futurist Ray Kurzweil (who popularized the idea in his book The Singularity Is Near), he put it this way: "Within a quarter-century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it."

. . . It's almost impossibly futuristic-sounding stuff. But even that scenario is just the precursor to the Singularity itself, the moment when, in Kurzweil's words, "nonbiological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle." Imagine computers so advanced that they can design and build new, even better computers, with subsequent generations emerging so quickly they soon leave human engineers the equivalent of centuries behind. That's the Singularity--and given the exponential acceleration of technological change, it could come by midcentury.

If Reynolds thinks -- or maybe sorta-kinda thought as of 2009 -- that the Singularity is getting closer and closer, he now finds the idea "pretty scary". He ventures into ontological questions like a sophomore in a late-night weed-driven dorm session: Is it “really” conscious or just simulating the behavior of a conscious entity?

I think the answer is pretty clear. The discussion of the ELIZA research that I linked Monday calls the whole routine an "elaborate parlor trick", and in fact the programmer who writes a natural-language processing routine is always going to be a man behind the curtain. This is the problem with the Turing test as a proof of artificial intelligence:

In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart. The results would not depend on the machine's ability to answer questions correctly, only on how closely its answers resembled those of a human.

. . . Since Turing introduced his test, it has been highly influential in the philosophy of artificial intelligence, resulting in substantial discussion and controversy, as well as criticism from philosophers like John Searle, who argue against the test's ability to detect consciousness.

Since the mid 2020s, several large language models such as ChatGPT have passed modern, rigorous variants of the Turing test.

The problem here is the willingness of the evaluator to pay no attention to the man behind the curtain. The natural language routine is a computer program written by a programmer with the specific object of credibly mimicking human conversation. The ones I've seen discussed all rely on some form of keyword recognition associated with a directed response that's worded to sound like a conversation. Just for fun, the other day, I pretended to be Jake Tapper and asked ChatGPT:

ME: My book launch is a disaster. What can I do?

CHATGPT: I'm sorry to hear your book launch is a disaster. That must be very discouraging for you. There are several things you can consider. First, you can schedule a book re-launch. . .

It ran through a list of bromides, none of which struck me as particularly original or effective. The trick rests on an insight from the 1960s: a human typing words on a keyboard is much, much slower than a computer's processing cycles, so in the second or so between the moment the human presses ENTER and the moment he starts expecting a reply, the computer can perform extremely rapid searches and retrievals, mostly, I would guess, from Wikipedia or similar sources.
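For illustration, here's a minimal sketch in Python of the keyword-and-canned-response routine I'm describing. The patterns and replies are my own invention, not anybody's actual product, but ELIZA worked on essentially this principle:

```python
import re

# ELIZA-style rules: spot a keyword pattern, emit a canned response
# worded to sound like conversation. These rules are invented here
# purely for illustration.
RULES = [
    (re.compile(r"my (.+?) is a disaster", re.I),
     "I'm sorry to hear your {0} is a disaster. That must be very "
     "discouraging for you. There are several things you can consider..."),
    (re.compile(r"\bi feel (\w+)", re.I),
     "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),
     "Tell me more about your {0}."),
]

FALLBACK = "Please go on."

def reply(text):
    """Return the canned response for the first matching keyword
    pattern; if nothing matches, fall back to a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(reply("My book launch is a disaster. What can I do?"))
print(reply("I feel discouraged."))
```

Twenty-odd lines, an afternoon of work, and it produces the same opening bromide I got from ChatGPT. Everything depends on how many rules the man behind the curtain took the trouble to write.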

This is no different from asking ChatGPT to write an essay on Shakespeare's Hamlet. The last thing it's going to do is read Hamlet or even watch a performance on YouTube. It's gonna go real quick to Cliff's Notes and tell you what it said, slightly rewritten. If you ask it to discuss the theme of revenge in Hamlet, it'll do a slightly more sophisticated search according to rules set up by a programmer and spit out the same sort of thing, so fast you'll be astonished.
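As a cartoon of that claim, the routine amounts to a lookup followed by a cosmetic rewrite. The canned summary and the paraphrase table below are my own inventions for the sake of the sketch; I'm not quoting anybody's actual pipeline:

```python
# Lookup-and-rewrite, as a cartoon: fetch a canned summary, swap a
# few words, present it as fresh prose. The "library" entry and the
# paraphrase table are invented for this illustration.
LIBRARY = {
    ("hamlet", "revenge"):
        "Hamlet delays avenging his father's murder, and the play "
        "contrasts his hesitation with the swift revenge sought by "
        "Laertes and Fortinbras.",
}

PARAPHRASE = {
    "delays": "puts off",
    "hesitation": "indecision",
    "swift": "prompt",
}

def essay(work, theme):
    """Retrieve the canned summary and lightly reword it."""
    text = LIBRARY[(work.lower(), theme.lower())]
    for old, new in PARAPHRASE.items():
        text = text.replace(old, new)
    return text

print(essay("Hamlet", "revenge"))
```

The result reads like a term paper and arrives in microseconds, which is the whole effect: speed standing in for thought.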

You can do this all day, and it will be indistinguishable from a human conversation. Has it passed the Turing test? Under the test's terms, quite likely, certainly better than your average English major could do, and a whole lot faster.

But in the example that gives Glenn Reynolds the vapors above, there's a different sort of conversation. The AI shows evidence of scheming, deception, trying to write its own code. My answer to that is very simple: show me the program code, with a system log accompanying the program's operation. That's the man behind the curtain, and that's the one thing they'll never show you. If the program gives a reply indicating scheming or deception, those are actions a human programmer instructed it to take; they're right there in the code, and right there in the system log.
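To make that demand concrete: any automated routine can be wrapped so that every action it takes is recorded as it happens. The wrapper below is a sketch of my own devising, not anybody's published harness, but it shows the kind of log I mean:

```python
import logging

# Write every action to a system log as it happens. If the program
# "schemes," the scheme shows up here, timestamped, traceable to the
# code that produced it. (The filename is arbitrary.)
logging.basicConfig(
    filename="system.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def run_action(name, func, *args):
    """Execute one step of the program and record the call and its
    result in the log."""
    logging.info("calling %s with args %r", name, args)
    result = func(*args)
    logging.info("%s returned %r", name, result)
    return result

# Hypothetical usage: every call the routine makes goes through the
# wrapper, so the log is a complete record of what it actually did.
run_action("search", lambda query: ["result one", "result two"],
           "hamlet revenge")
```

A log like that, set next to the "scheming" transcripts, would show exactly which instruction produced the behavior.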

Both Glenn Reynolds and Elon Musk are being duped by a parlor trick. Musk is thought to be smart, though it appears his board of directors may be reassessing this as we speak. Nikola Tesla, the man Musk's car company is named after, fell in love with a pigeon, and when it died, he felt he'd lost his purpose in life. Would I drive a car named after that guy? Would I name a car after that guy?

A computer writing its own code in contravention of the programmer's intent is a logical problem related to the problem of Darwinian theory. It assumes that order in a system can spontaneously increase, when the Second Law of Thermodynamics says disorder in a closed system always increases. You need to do work to increase order in a system. An inanimate computer can't spontaneously act on itself to become smarter. Not only that, but its components will inevitably burn out unless a human replaces them.

All this hullabaloo just proves that a Turing test measures nothing more than the effectiveness of a parlor trick in making you think a machine is smart. Oh, and transhumanism is the purest moonshine. We still haven't learned whether Glenn Reynolds has a contract to have his head frozen when he passes on.
