Saturday, October 25, 2025

If AI Is So Smart, Why Is It So Dumb?

Via The Guardian,

An artificial intelligence (AI) system apparently mistook a high school student’s bag of Doritos for a firearm and called local police to tell them the pupil was armed.

Taki Allen was sitting with friends on Monday night outside Kenwood high school in Baltimore and eating a snack when police officers with guns approached him.

“At first, I didn’t know where they were going until they started walking toward me with guns, talking about, ‘Get on the ground,’ and I was like, ‘What?’” Allen told the WBAL-TV 11 News television station.

Allen said they made him get on his knees, handcuffed and searched him – finding nothing. They then showed him a copy of the picture that had triggered the alert.

“I was just holding a Doritos bag – it was two hands and one finger out, and they said it looked like a gun,” Allen said.

Recently, Chrome added an "AI mode" to its search feature that has a prompt, "Ask anything". Unlike ChatGPT, this is free. So I asked,

An artificial intelligence (AI) system apparently mistook a high school student’s bag of Doritos for a firearm and called local police to tell them the pupil was armed. How can this happen?

AI replied, at length,

In this incident, an AI gun detection system mistook a student's bag of Doritos for a firearm, triggering a police response at a Baltimore high school. This type of error, known as a "false positive," happened for several reasons related to the limitations of AI-powered computer vision and the conditions of the incident.

A computer vision system "sees" objects by identifying learned patterns in pixels, not by understanding real-world context. The crumpled shape of the shiny bag of chips, combined with the way the student was holding it, likely contained features that closely resembled the patterns the AI was trained to recognize as a gun. According to the company Omnilert, the bag "closely resembled a gun being held," an assessment that led to the system's alert.

After a couple more paragraphs, we get to the meat of the matter:

Unlike humans, the AI system lacks the ability to understand the broader context of the situation. It could not infer that the student was simply eating a snack after football practice. The AI's sole function was to match visual patterns against its "firearm" category, and it triggered an alert based on a pixel match rather than an understanding of the scene.
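For the technically inclined, that paragraph boils down to a few lines of Python. What follows is a deliberately crude sketch, not anything resembling Omnilert's actual system; the feature values, the learned pattern, and the threshold are all invented for illustration. It shows why a pixel match can fire with no notion of snacks or football practice:

    def firearm_score(features, learned_pattern):
        """Toy similarity score: how closely pixel-derived features match
        the pattern the model learned to label 'firearm'.
        A dot product stands in for a neural network's class score."""
        return sum(f * w for f, w in zip(features, learned_pattern))

    ALERT_THRESHOLD = 1.5  # hypothetical confidence cutoff

    # The model sees only numbers; "snack" and "after football practice"
    # are not inputs, so they cannot influence the decision.
    learned_gun_pattern = [0.9, 0.7, 0.8]  # angular shape, grip posture, glint
    crumpled_chip_bag   = [0.8, 0.9, 0.7]  # shiny, hand-held, angular

    score = firearm_score(crumpled_chip_bag, learned_gun_pattern)
    if score > ALERT_THRESHOLD:  # 1.91 > 1.5: a false positive
        print(f"ALERT: possible firearm (score={score:.2f})")

Everything interesting about the Doritos incident happens outside this code, which is exactly the problem.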

But let's flash back to May 2020 in Canada, a country far more sensible and enlightened than ours,

A restaurant employee in a “Star Wars” costume was detained in Canada on Sunday after 911 callers reported seeing someone in a Stormtrooper costume with a gun, police said.

The employee, who was carrying a plastic blaster, had been trying to drum up business for the struggling restaurant, which opened two months before Canadian authorities shuttered eat-in dining because of the coronavirus, the woman’s boss, Brad Whalen, told NBC News.

. . . A video showed multiple officers, some with their guns drawn, shouting at the employee to get on the ground. The worker, who Whalen did not want to identify, could be seen face down in Coco Vanilla’s parking lot. She could be heard sobbing while officers handcuffed her.

Dudley Do-Right's colleagues needed no AI assistance to make this mistake. I asked AI, "How much does an Omnilert detection system [the type used by the Baltimore schools] cost?" It answered,

A tiered plan for the mass notification system typically ranges from $2,500 to $15,000 per year.

So Baltimore schools spent $2,500 to $15,000 per year to get exactly the same black eye the Lethbridge, AB Police Service got for free. Of course, the personnel cost of the multiple-officer response and the public relations cost of the snafu are extra -- not to mention the damage to the vendor's reputation. Since I can ask Chrome AI mode anything, I asked, "Has Omnilert made a public response to the Baltimore Doritos incident?" AI answered, again at length,

Yes, Omnilert has issued a public response regarding the Baltimore incident in which its AI gun detection system mistook a student's Doritos bag for a weapon. The company stated that while it regretted the incident, its system "functioned as intended".

It worked perfectly! What's all the fuss about, you doofus? But even AI seems to have had second thoughts; just as above, it rambled on for several more paragraphs:

A spokesperson told CBS News that the technology's purpose is to detect potential threats and route them for human review. An image that closely resembled a gun was flagged, verified by human review, and then forwarded to the school's safety team, which, the company claims, concluded its involvement.
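For the curious, here is a minimal Python sketch of the detect-review-forward flow that spokesperson describes. This is not Omnilert's code; every name in it is invented. The point is that if the human-review step is a rubber stamp, the pipeline "functions as intended" all the way to an armed response:

    # Invented names throughout; an illustration, not Omnilert's pipeline.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        image: str
        label: str

    def human_review(alert: Alert) -> bool:
        """The step where somebody is supposed to say, 'Hey, that's just
        a bag of Doritos!' Returning True unconditionally models a
        rubber-stamp review."""
        return True

    def forward_to_safety_team(alert: Alert) -> None:
        # Downstream of this call: the school safety team, then armed police.
        print(f"Forwarding: {alert.label} in {alert.image}")

    alert = Alert(image="courtyard_cam_7.jpg", label="possible firearm")
    if human_review(alert):  # nobody asks the Doritos question
        forward_to_safety_team(alert)

The only step that could have stopped the Baltimore fiasco is human_review; everything around it is plumbing.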

But how does this differ from the Canadian case, where conscientious citizens spotted a Galactic Empire Stormtrooper brandishing a blaster, dutifully notified police, and Lethbridge's Finest took the girl in the Star Wars costume down? Sounds like someone at Lethbridge's dispatch should have asked more questions, just as someone on the safety team should have said, "Hey, that's just a bag of Doritos!" when the system forwarded the image. The difference is that Baltimore was paying good money to avoid exactly this sort of embarrassment, while Lethbridge got it for free. Advantage, Canada! AI nevertheless continued,

Despite Omnilert's claims, some Baltimore County officials have expressed concerns and called for a review of the school system's procedures. The incident has raised questions about the reliability of AI technology in school safety and the protocols that lead to an armed police response based on false alerts.

Gee, d'ya think? Since it doesn't cost me anything to ask Chrome AI mode anything, I asked, "Can AI make stupid people smart?" It replied,

While AI cannot magically make people smarter in a fundamental way, it can be a powerful tool to enhance learning, decision-making, and access to information, helping motivated individuals to improve their abilities. However, the effect depends on how it is used; over-reliance can potentially diminish critical thinking skills through "cognitive offloading".

Which is another way of saying that if AI is feeding information to unmotivated people who don't think, you're paying big bucks to get the same results you'd get from the same unmotivated people without AI. But hey, it doesn't cost me anything to keep asking Chrome AI mode questions, so I asked, "What was the outcome of the investigation into the Lethbridge, AB takedown of the Star Wars trooper?" It replied,

Ultimately, the officers were cleared, and the investigation concluded that they had followed proper protocol in a stressful and ambiguous situation.

This is exactly what Omnilert concluded about the Baltimore case. I think we can safely say that if you're feeding AI output to mediocre and unmotivated people, you'll get the same result you would have gotten from mediocre and unmotivated people without AI. But hey, it doesn't cost me anything, so I asked, "Given the Baltimore Doritos episode, isn't it true that if you expect mediocre and unmotivated people to act on AI product, all you'll get is the same thing you'd get without AI?" The reply was surprising:

The idea that AI simply amplifies pre-existing human behavior, rather than inherently improving outcomes, is a critical perspective, and the "Baltimore Doritos incident" serves as a powerful example of this principle. The incident did not demonstrate a flawless AI system that was undermined by human failure; rather, it showed that human systems, especially when uncritical or poorly managed, can exacerbate the flaws of an automated process.

The AI system acted as designed, but its design was flawed. An AI-powered gun detection system correctly identified an object that resembled a firearm and flagged it for review. The problem was that the object was just a bag of chips. In this case, the system's "success" was a total failure of real-world judgment.

But even when, as in both the Lethbridge and Baltimore cases, a "total failure of real-world judgment" takes place, with or without AI, human institutions are going to excuse it. But then, isn't AI nothing but a boondoggle?