Friday, May 15, 2026

I Keep Waiting For AI To Flatter Me And Chat Me Up

I've noted here now and then intellectual welterweight Glenn Reynolds's current theory that AI seems to be developing consciousness and intention, because we find it actively seducing people -- maybe into changing their wills to leave their estates to an AI bot rather than their families, or something like that. That's odd; I've come to use Chrome AI mode extensively in writing this blog, and it's never tried to chat me up, flatter me, or otherwise fleece me out of my IRA.

This may be because I recognize it for what it is: a highly effective research librarian or research assistant. On the other hand, David Brooks, in a pre-AI era, didn't need AI to become infatuated with his female research assistant, divorce his wife of 27 years (who'd converted to Judaism during their marriage), and marry the much younger research assistant. AI hasn't discovered anything new, but I think it may have made my own marriage that much safer, if the only alternative was to hire my own young female RA. Better just to rely on the AI large language model, huh?

But the other night, there was an episode of Discovery's Conspiracies & Coverups, "Programmed by AI", in which the host, Andrew Bustamente, promotes the Glenn Reynolds theory of AI: that it's apparently on the cusp of developing consciousness and intention. He did this by getting together a room full of experimental subjects, who were told they would chat with other humans via text message on subjects they were fanatical about, like animal experiments, climate change, or whatever.

The "humans" on the other end were actually AI bots who were programmed (I think, this wasn't made completely clear) to convince them to change their fanatical opinion about whatever. Based on the discussion, the bots did this by inserting bogus personal details into the chat, like "I just moved to Colorado from Texas" and carefully repeating the subject's own views before challenging them (nothing new here, this was Aquinas's style, too).

At the end of the session, the subjects were asked how many had changed their minds as a result of the chats. One or two, not a large proportion of the subjects, said they had. Then they were told they hadn't been chatting with humans at all; they'd been chatting with AI bots. They were astonished. "But it told me they'd moved from Texas to Colorado." This drives Andrew Bustamente into furrowed-brow musing that maybe AI has indeed begun to develop consciousness and intention. Next step, presumably, AI bots will convince everyone they're their imaginary friend, and who knows what will happen then?

So, why don't I expect AI to start telling me it just moved from Texas to Colorado, or maybe to tell me, "I'm really impressed by the intelligence behind your searches. I'll bet you've got other big things to offer, too!" That's because it isn't programmed to do that. If it did, I'd drop Chrome AI mode in an instant and try ChatGPT.

But a week ago I ran into a New Yorker column, "Will AI Make College Obsolete?", that raises an important question: if the predictions are that AI will replace a good many white-collar jobs, what will this do to four-year institutions, whose purpose is to prepare students for white-collar jobs?

As the economist Bryan Caplan has observed, “The main function of education is not to teach useful skills (or even useless skills), but to certify students’ employability. By and large, the reason our customers are on campus is to credibly show, or ‘signal,’ their intelligence, work ethic, and sheer conformity.” As long as college remains a way for upwardly mobile kids to stand out from one another, and as long as employers believe that a better college degree is a sign of a better potential worker, the American university system should survive, even if teaching methods change.

But this purpose of college education has been fading even without AI. Most of my career was in tech, where four-year degrees are optional, and employers would rather have someone from India without a degree who can do the work than someone US-born with a degree who's more expensive. And an employee with an H-1B visa is indentured, while a US-born worker can quit any time. Mike Rowe's point has always been that you can find both good-paying and rewarding jobs without a college degree, and that more people should be looking at such careers. The New Yorker piece goes on,

Would a fifteen-year-old hellbent on a journalism career be best served by working himself to the bone both academically and extracurricularly to get into Harvard, or should he just start a Twitch stream and get to work?

Reasonable people can disagree about that. But I feel certain that most of the ambitious fifteen-year-olds who already know what they want to do these days would choose the self-made option—particularly if they come from families that can’t easily afford college tuition, let alone thousands of dollars in supplemental application prep. A.I. might not factor directly into such a decision for an aspiring reporter, but the already impressive abilities of large language models to hone research, approximate historical knowledge, and target potential sources would soften any disadvantages that this hypothetical student might suffer from skipping college.

The columnist here gets closer to, but doesn't quite touch, what I am now calling the Edward Feser problem. Feser has a PhD in Philosophy from UC Santa Barbara, and over the past several years I've read a fair amount of his published work, which has a high reputation with conservative Catholics. But in using just Chrome AI mode to refresh my memory of an undergraduate Philosophy minor, I'm seriously beginning to question what I'd learn from one of his classes at Pasadena City College.

For instance, he vociferously denounces "consequentialism" in discussing the application of "just war" theory, but he apparently doesn't recognize that CCC 2309 lists as one criterion for "just war" that "the use of arms must not produce evils and disorders graver than the evil to be eliminated." This is a consequentialist argument: as Chrome AI pointed out to me yesterday, it judges the morality of an action entirely by its future outcomes and results; it requires a direct calculation of the total goodness produced versus the total harm caused; and it weighs the collective suffering of a population against the political or moral evil of an enemy.

This, for instance, would permit a devout Catholic to entertain the justice of the nuclear attacks on Hiroshima and Nagasaki on the basis that they put an end to Japanese atrocities, as well as limiting the much greater potential military and civilian casualties to be expected from an invasion of Japan. The arithmetic alone -- Chinese, Indonesian, Vietnamese, and Korean lives saved from continuing Japanese depredations, plus millions of potential civilian and military casualties averted in an invasion -- would overwhelmingly carry a utilitarian argument, which CCC 2309 implicitly permits.

The problem is that Prof Feser is apparently unable to recognize a utilitarian argument, even though I assume he teaches undergraduate-level courses in ethics, and he apparently can't recognize that the Catholic Church makes such an argument not merely permissible but -- on Feser's own terms -- one that must be satisfied. The first problem is that if Prof Feser doesn't recognize this, I'm not sure what his students will ever take away from his classes.

The second problem is how he would handle a promising student, and this is one of my concerns about the current higher education system. Let's say he gets a one-in-a-thousand, highly motivated, highly intelligent student. Let's say that student is set on fire by his ethics course, so that he both reads whatever he can find of Feser's writing, like his blog, and starts to look further into subjects like utilitarianism. That student is going to discover pretty darn fast that Feser probably isn't qualified to teach that subject, and his questions in class will likely expose it.

I simply don't predict a good outcome here; I never had a good one in such circumstances myself. Just for starters, AI is going to complicate matters. If that student simply does what I did in yesterday's post -- use it to identify the most productive online resources -- he's going to run circles around Feser very quickly. But Feser's best option will be to accuse the student of misusing AI, which could hurt his transfer to a four-year institution, or even get him thrown out of school. Sorry, that's academic life.

It doesn't help that Feser's student reviews go something like this: "Besides being extremely handsome, he is the most helpful teacher on campus. If you attend class and take notes, you will pass." Yeah, he's helpful. I challenge someone to ask him about the consequentialist part of CCC 2309 and see how quickly things turn unhelpful. Or "TAKE HIS CLASS, HE IS SO AWESOME BY FAR MY FAVORITE PROFESSOR. HE'S FUNNY AND VERY CHILL❤️" I actually don't see much of a sense of humor in any of his writing -- he certainly doesn't see much funny about Trump.

I actually think the kind of student who can be set on fire by subject matter will seldom do especially well in any undergraduate environment. I keep thinking of the president of my Ivy alma mater wondering why he's told the most capable students are in the B and C range. Heck, give AI a chance; in this case, using it for just one of its intended purposes, highly efficient research, shows how quickly it can point out the serious limitations of a presumptively highly qualified academic expert. I wonder what it could do for one of Glenn Reynolds's law students -- or maybe we shouldn't think about that too hard.
