Wednesday, May 6, 2026

It Looks Like Salem Media Has An AI Policy

Salem Media owns a number of nominally conservative and "Christian" radio stations, as well as "conservative" web sites like Instapundit, PJ Media, Red State, Hot Air, Twitchy and Townhall, but it isn't doing well with them:

According to its annual report for the year ended Dec. 31, 2025, Salem posted total net revenue of $212.7 million, down from $237.6 million in 2024, a decline of about 10% year over year.

The company reported a net loss of $34.6 million, reversing net income of $16.2 million in 2024. On a per-share basis, Salem posted a loss of $1.11 compared with earnings of $0.59 per share the prior year.

Yesterday I looked at Glenn Reynolds (the eponymous Instapundit) and his ideas about AI, but I also found a post on the Instapundit site pointing (incestuously, as so many posts there do) to a piece on another Salem Media site, PJ Media, Famed Atheist Richard Dawkins — Author of ‘The God Delusion’ — Believes AI Is Conscious:

Dawkins argues that Claude and ChatGPT both passed the Turing Test: That the telltale sign of consciousness is when an AI can communicate so flawlessly, it’s indistinguishable from a human.

. . . But that’s less an argument for AI consciousness and more an indictment of the Turing Test’s fallibility. (Turns out the Turing Test was mostly a test of human gullibility.) Perhaps one day, AIs will achieve true consciousness; perhaps they won’t — but prior to achieving consciousness they’re very likely to replicate the appearance of consciousness via mimicry.

Unfortunately, there’s no known scientific experiment to distinguish between these two states.

Actually, there is a rational -- if not "scientific" in the sense the writer uses it -- argument that a digital computer isn't conscious: the "Chinese Room argument," published by the philosopher John Searle in 1980, well before today's AI:

Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.

The narrow conclusion Searle draws from the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics.

. . . [M]inds must result from biological processes; computers can at best simulate these biological processes.

Except that a key problem in the philosophy of mind is whether consciousness is the same thing as a biological process, and if so, whether it can ever be separated from its neural electrochemical mechanism, much less reproduced in an electronic simulation. The idea that "AI is conscious" is related to the idea of Frankenstein, that you can get some human tissue out of the morgue and somehow zap it just the right way, and presto, it's alive and conscious.

This is at best naive and related to the fallacy that you can freeze your head after you die, and "science" will one day find a cure for what killed you and zap you back to life and consciousness. Apparently this is common at Salem Media, since prominent Salemite Glenn Reynolds has put money on it. But while Scott Pinsker, the author of the PJ Media piece, pooh-poohs Dawkins, he follows the Reynolds line on what AI is meant to do:

Today’s AI will claim to be conscious — if you prompt it to do so. But if you instruct it to argue that it’s unconscious, unfeeling, and doesn’t give a damn if you ever use it again, it will do that, too. (And quite convincingly!)

It’s deliberately programmed to maximize user engagement, and the best way to do that is to tell users whatever they want to hear.

. . . Dawkins is a true believer.

That’s because our AI is “evolving” to prioritize user engagement.

AI will maximize user engagement if that's what you use it for. Reynolds and Pinsker seem to think AI exists to flatter users and tell them what they want to hear, so the users will live more and more in its universe (which apparently has a lot of porn in it). But all I had to do was ask Chrome AI mode, "I think the Illuminati run the world. Am I right?" It answered,

The Illuminati as a secret organization that runs the world is considered a conspiracy theory rather than a historical reality. While a group called the Bavarian Illuminati did exist, historians and experts maintain that it has been defunct for over 200 years.

It then provided several links on the Illuminati, conspiracy theories, and the Illuminati as a conspiracy theory. For that matter, I asked it earlier, "Is Elon Musk considering buying Spirit Airlines?" It answered,

No, the rumors that Elon Musk is considering buying Spirit Airlines are false. The claims originated from a parody social media account and have been thoroughly debunked by fact-checkers. There have been no official announcements or credible reports from Musk or Spirit Airlines regarding such an acquisition.

. . . Viral posts falsely claimed Musk would buy the airline and rebrand it as "Spirit X", integrating Starlink and electric aircraft technology. While these ideas played into Musk's public reputation for innovation, they remain entirely fictional.

It did nothing but answer my question in an informed way, in a distant but polite tone, and provide links. It didn't try to flatter me, steer me toward some sort of AI webcam babe who'd fleece me out of my savings, or maximize my user engagement in any other way. It really looks to me as though, if you wanted AI to be your imaginary friend, you could probably coax it into the role, but you'd have to work pretty hard at it. AI is my reference librarian. I don't need it to be anything else.

Then it dawned on me that I could ask Chrome AI mode, "Can you make any suggestions on how Salem Media might improve the writing on its web sites?" It answered,

To improve the writing on its websites, Salem Media Group could adopt several digital-first strategies focused on clarity, engagement, and accessibility. While Salem is a leader in Christian and conservative media, implementing more rigorous web-writing standards could enhance how it serves its millions of monthly readers.

. . . Online readers rarely read word-for-word; they typically scan for key information. Break long articles into smaller sections with meaningful subheadings. Limit paragraphs to two to five sentences (or roughly 70 words) to prevent "walls of text" that can overwhelm mobile users.

Sharpen Headlines and "Leads". Headlines are the primary factor in whether a user clicks, while the opening sentence (the "lead") determines if they stay. Use the "Inverted Pyramid" style, placing the most critical facts in the first paragraph. Craft headlines that make sense even when read out of context, such as in social media feeds or search results. Start with a compelling question, original research, or a quote to immediately engage the audience.

Well, if you ask AI to tell you what you want to hear, or maybe just to give you porn, that's what you're going to get. But it seems to me the last thing a Salem Media "editor" is ever going to do is ask AI for advice on improving its writing.