Saturday, May 17, 2025

Pope Leo, AI, And Cheating Your Way Through College

I've ventured into AI now and then here, mostly with a certain amount of skepticism. Late last year, I asked, "If AI Is So Great, Why Do I Get So Much Spanish- And Chinese-Language Spam?" Theoretically, AI should have been able to identify what languages I speak and direct ads in those languages to my e-mail, YouTube, and Facebook feeds, while avoiding the unnecessary waste of sending me ads in languages I don't speak.
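Just to be concrete about what "should have been able to" means: guessing which languages a user actually writes in is a few lines of off-the-shelf code. Here's a minimal sketch, with made-up posts and a made-up 20 percent cutoff, using the open-source langdetect package rather than anything Google or Meta actually runs:

from collections import Counter
from langdetect import detect   # open-source language detector; pip install langdetect

# A handful of made-up posts standing in for a user's Facebook feed.
posts = [
    "Heute bin ich wieder den ganzen Tag gewandert.",        # German
    "Finished another chapter of the Trollope biography.",   # English
    "Guten Morgen aus Garden Grove!",                        # German
]

counts = Counter()
for text in posts:
    try:
        counts[detect(text)] += 1       # returns ISO codes like 'de', 'en'
    except Exception:
        pass                            # too short or ambiguous to classify

# Serve ads only in languages that make up a meaningful share of the posts.
total = sum(counts.values())
ad_languages = [lang for lang, n in counts.items() if n / total >= 0.2]
print(ad_languages)                     # e.g. ['de', 'en'] -- not 'zh' or 'es'

The point isn't that any of this is hard; it's that somebody still has to care enough to wire it up.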

I post frequently on Facebook in German and have several German and Austrian Facebook friends. So far, I haven't received a single German-language ad, but I get lots in Chinese and Spanish, neither of which I speak. The basic problem is that AI doesn't make people smart, and it doesn't make up for people not being smart, either. I think there's a misunderstanding of AI in the public imagination, and so far, it appears that remarks from our new Holy Father reflect this. Via the National Review:

Leo XIV views the current challenges posed by AI as analogous to those his predecessor faced 134 years ago. “In our own day,” our new Pope declared in one of his first public addresses, “the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice and labour.”

Cardinal Fernando Chomalí, the Archbishop of Santiago, Chile, elaborated on the new Pope’s vision by sharing what Leo XIV had told him the evening after the election: “He told me he is very concerned about the cultural shifts we are living through, a Copernican revolution really — artificial intelligence, robotics, human relationships. . . . There is a revolution happening, and it must be addressed seriously. The Church can contribute through its moral authority and also its academic strength.”

Pope Leo has a great deal of time to develop and refine his thinking on AI, and if he chooses to write an encyclical on the problems it poses for the human condition, I'm confident that it will be incisive and well-reasoned. But I don't think AI will either make people smart or replace smart people, whatever else it may do, and it won't create any new set of moral problems. An article came out in New York Magazine last week, Everyone Is Cheating Their Way Through College:

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. . . . Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education. Generative-AI chatbots — ChatGPT but also Google’s Gemini, Anthropic’s Claude, Microsoft’s Copilot, and others — take their notes during class, devise their study guides and practice tests, summarize novels and textbooks, and brainstorm, outline, and draft their essays.

But sometime in the late 1970s, I ran into an op-ed by Gerald Seib at The Wall Street Journal entitled "Why Are They Cheating?" that covered exactly the same ground, except that back then there was no AI, no desktops, no laptops, no pocket phones, no online search, no word-processing apps, not even any online text that could be copied and pasted. I was so taken by Seib's insight that I did the old-fashioned thing: I cut it out of that issue of the paper and saved it in a manila folder, which is long since gone. It's probably never been online, and I'm sorry I can't quote from it now.

But there was just as much cheating, take it from me. In the early 1970s, still unsure of a career, I found myself studying Eng Lit in graduate school and teaching sections of English Comp as a teaching assistant. At the start of my first semester, the more senior TAs laid it out for me and the other new people: plagiarism was through the roof. No need for ChatGPT: the Greek houses had files of old essays that could simply be retyped and resubmitted; there was Cliff's Notes; or heck, just Time magazine or the LA Times. The more industrious students could go to the library and find a book to copy half a dozen paragraphs from.

If anything, all ChatGPT does is save students the trouble of manually retyping what was already available to them in the 1970s and before. And it was nearly impossible to do anything about it then. You could find suspicious wording in a student paper -- archaic diction or phraseology, or British spelling from a kid who'd grown up in Garden Grove -- but tracing the actual source would have required hours in the library, if it could even be found there, time nobody could spend.

And to bring a case under the honor code, if that was what they called it, required "proof beyond a reasonable doubt," but nobody wanted to sit on any board or committee that ruled on such things, so there were no real penalties, which is how everyone wanted it. The students and their parents felt entitled to good grades and a degree -- that was what they paid for. The department knew that any course that became known for enforcing the plagiarism rules would instantly lose enrollment.

The New York Magazine piece pays lip service to the fact that this is old stuff:

It isn’t as if cheating is new. But now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences? After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said. “Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.”

Oh, no! Generations of students will now emerge from university who are historically and culturally illiterate? O tempora! O mores! It wasn't that way when I was in school! I fell into that trap when I was a TA -- they might do this at USC, but I went to an Ivy, where that sort of thing is just not done. I was unbelievably naive; academic culture has tolerated this sort of corruption since time immemorial.

For that matter, of the two women I dated seriously in graduate school, both were also having affairs with their professors. Both eventually became full professors. Surely this was just a random coincidence?

There's an IT truism that predates AI by a couple of generations: garbage in, garbage out. All ChatGPT does is go to Cliff's Notes, which are online now anyway, and build clumsy essays that read every bit as badly as the equivalent papers students turned in in the 1970s and surely long before that. AI will never make people smarter, and it will never replace smart people. In particular, I don't think it represents any moral problem that hasn't existed already, especially in the universities.
