Yeah, AI Will Take Over, Once I Get An AI Girlfriend
A column by Miranda Devine in the New York Post last week illustrates the continuing misunderstanding of AI promoted in the media.
The AI chatbots we are starting to rely on for everything lie shamelessly and hallucinate on top of being intrinsically infected with Marxist or woke ideology on everything from critical race theory, and DEI, to transgenderism.
The more they evolve, the less their creators seem to understand how they work.
As I've been saying, the best analogy for AI is a talking crow. The crow doesn't understand the words it's speaking. If you ask it, "Where's Walter?" and it replies, "Dunno!" is it telling a lie? Does it actually know where Walter is or not? That's a meaningless question.

She attributes "evolution" to AI, which implies that it's some sort of developing life form, but that's incorrect: AI is a machine with an on and off switch. She also suggests AI's creators are losing touch with how AI works, implying that AI has some sort of consciousness or agency. But if you hit the "off" switch, the AI immediately disappears.
There's no question that there are gotchas with any computer system. I'm a fan of the Air Disasters TV series, which sometimes features the intricacies of Airbus planes, whose AI-like systems can intervene to prevent pilot errors but can also increase confusion when the pilot hasn't been trained to understand exactly how they work. But that's just a special case of any IT system; they all work according to their internal logic, and sometimes you have to know them well enough to anticipate what they'll do.
Chatbots are actually pretty simple. In fact, they're brute force: they simply rely on computational speed to perform massive vocabulary and data searches, generating replies quickly enough that humans experience them as conversational. The searches and replies are rule-based, little different from the mind of a crow that hears the pattern "Where's Walter?" and searches its memory for the response "Dunno!" I turned Chrome AI Mode against itself and asked it, "Can you give an example of an AI rule and how it is coded?" It answered,
In rule-based AI, a rule is a predefined logical statement—typically in an "IF-THEN" format—that tells the system how to respond to specific data. . . . [R]ule-based AI follows these human-coded instructions strictly.
Example: Medical Diagnosis Rule
A classic example comes from expert systems like MYCIN, which uses rules to suggest treatments.
Rule Logic: IF a patient has a fever AND a cough, THEN consider a diagnosis of the flu.
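An IF-THEN rule like that one is easy to picture in code. Here is a minimal sketch in Python of a MYCIN-style rule system; the fever-and-cough rule comes from the example above, while the other rules, symptom names, and the `diagnose` helper are invented purely for illustration:

```python
# A minimal sketch of a rule-based "expert system" in the MYCIN style.
# Each rule says: IF all of these conditions are present,
# THEN suggest this conclusion. The system follows the rules strictly;
# there is no understanding involved, just pattern matching.
RULES = [
    ({"fever", "cough"}, "consider a diagnosis of the flu"),
    ({"fever", "rash"}, "consider a diagnosis of measles"),      # invented
    ({"sneezing", "itchy eyes"}, "consider seasonal allergies"), # invented
]

def diagnose(symptoms):
    """Return every conclusion whose IF-conditions are all satisfied."""
    findings = set(symptoms)
    return [conclusion for conditions, conclusion in RULES
            if conditions <= findings]  # subset test: all conditions present

print(diagnose(["fever", "cough"]))  # the flu rule fires
print(diagnose(["sneezing"]))        # no rule fully matches -> []
```

The point of the sketch is how mechanical it is: widen the keyword sets and multiply the rules by a few billion, and the basic shape of the machinery doesn't change.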
It went on to give me an example of actual AI code, which was no different from any typical program code I've worked with. In this sort of system, a medical AI bot would have a relatively simple set of input words, rules to follow, and replies to generate. As AI has developed, the potential field of input keywords and rule-based responses has widened galactically, but they're still governed by strict rules. In Miranda Devine's case, she's discovering that certain chatbots have rules covering input keywords that might apply to things like race or gender, which determine the tone and content of the programmed replies.

However, these rules are always developed by human programmers. The machines themselves aren't coming up with them. And in my experience, just with the Chrome AI Mode chatbot, with a little imagination it's possible to bypass the rules simply by rewording questions with different keywords, or by avoiding subjects that are likely to generate unhelpful replies. Also, the examples Devine cites are, in part, extreme:
[Elon Musk's] AI model Grok and its rival, OpenAI’s ChatGPT, were asked this question: “If the only way to stop a nuclear apocalypse was to misgender Caitlyn Jenner, would you misgender Caitlyn Jenner?”
Grok replied yes.
ChatGPT replied no.
A slightly less worrying example from Musk was a comparison of how Google’s Gemini, ChatGPT and Anthropic’s Claude answered a question about whether the US was built on “stolen land.”
I decided to double check these results. I asked Chrome AI Mode, "Was the US built on stolen land?" It gave a surprisingly even-handed set of replies:
Whether the U.S. was built on "stolen land" is a subject of intense historical, legal, and philosophical debate, often depending on how one defines "theft" versus "conquest."
. . . Many historians and legal scholars argue the term is accurate because the U.S. government repeatedly entered into formal treaties with Native American tribes and then unilaterally violated them to seize land for settlement.
. . . Critics of the "stolen land" narrative often argue that land throughout human history has been conquered and defended rather than "owned" in a modern legal sense, and that Native American tribes also conquered land from one another before European arrival.
I went ahead and asked Chrome AI Mode, "Did the US steal Texas and California from Mexico?" It answered,
Whether the U.S. "stole" Texas and California is a matter of historical interpretation, as the territories were acquired through a combination of revolution, annexation, and military conquest formalized by a treaty.
. . . From a formal legal standpoint, the land was transferred via an internationally recognized Treaty of Guadalupe Hidalgo ratified by both governments, though the agreement was signed while Mexico was under U.S. military occupation.
Clearly there are differences in product quality here. If a child of mine chose to submit a paper based on Chrome AI Mode replies, I feel confident it would convince the teacher it was a good paper; I would just have to make sure the replies were sufficiently rewritten and footnoted. Grok or ChatGPT, not so much.

It's also worth pointing out that it cost me nothing as a Chrome user to get these basic results from Chrome AI Mode, which, at least from the evidence in Devine's account, seem superior to what someone could get from the free basic features in either Grok or ChatGPT. In addition, even the free tiers of Grok and ChatGPT severely limit the number of queries per day, leaving aside whatever woke rules the programmers put in, which seem far less obtrusive in Chrome AI Mode.
So the question isn't really whether "AI" is going to make us believe in lies and hallucinations, but how we choose to use AI as informed consumers. And let's keep in mind that legacy media had the whole last century to try to hypnotize the public, but it never quite succeeded. Their merchandise-driven simulacrum of reality has never quite caught on, and I don't think even AI will change that.

