from the I’m-sorry-I-can’t-do-that,-Dave dept
The rushed adoption of half-cooked automation in America’s already broadly broken media and journalism industry continues to go smashingly, thanks for asking.
U.S. media companies have long been at the forefront of managerial dysfunction. More recently, that mismanagement has taken the form of wave after wave of “AI” scandals, ranging from getting busted for using AI to create fake journalists and lazy clickbait (often without informing employees or readers), to using AI prone to plagiarism or outright falsehoods.
The latest scandal comes courtesy of the Chicago Sun-Times, which was busted this week for running a “summer reading list” advertorial section filled with books that simply… don’t exist. As our friends at 404 Media note, the company somehow missed the fact that the AI it relied on was churning out titles and synopses (sometimes attributed to real authors) for books that were never actually written.
Such as the nonexistent Tidewater by Isabel Allende, described by the AI as a “multigenerational saga set in a coastal town where magical realism meets environmental activism.” Or the nonexistent The Last Algorithm by Andy Weir, “another science-driven thriller” by the author of The Martian, which readers were (falsely) informed follows “a programmer who discovers that an AI system has developed consciousness—and has been secretly influencing global events for years.”
Unlike some past scandals, one (human) Sun-Times employee was at least quick to take ownership of the fuck up:
“The article is not bylined but was written by Marco Buscaglia, whose name is on most of the other articles in the 64-page section. Buscaglia told 404 Media via email and on the phone that the list was AI-generated. “I do use AI for background at times but always check out the material first. This time, I did not and I can’t believe I missed it because it’s so obvious. No excuses,” he said. “On me 100 percent and I’m completely embarrassed.”
Buscaglia added “it’s a complete mistake on my part.”
“I assume I’ll be getting calls all day. I already am,” he said. “This is just idiotic of me, really embarrassed. When I found it [online], it was almost surreal to see.”
Initially, the paper told Bluesky users it wasn’t really sure how any of this happened, which isn’t a great look any way you slice it.
Later on, the paper issued an apology that was a notable improvement over past scandals. Usually, when media outlets are caught using half-cooked AI to generate engagement garbage, they throw a third-party vendor under the bus, take a short hiatus from whatever dodgy implementation they were doing, then in about three to six months quietly return to doing the same sort of thing.
The Sun-Times, at least, sort of takes proper blame for the oversight:
“King Features worked with a freelancer who used an AI agent to help build out this special section. It was inserted into our paper without review from our editorial team, and we presented the section without any acknowledgement that it was from a third-party organization.”
They also take the time to thank actual human beings, which was nice:
“We are in a moment of great transformation in journalism and technology, and at the same time our industry continues to be besieged by business challenges. This should be a learning moment for all journalism organizations: Our work is valued — and valuable — because of the humanity behind it.”
The paper is promising to do better. Still, the oversight reflects poorly on the industry at large.
The entire 64-page, ad-supported “Heat Index” section published by the Sun-Times is the sort of fairly inane, marketing-heavy gack common across a stagnant newspaper industry. It’s homogenized and not at all local; the kind of stuff that’s lazily syndicated to papers around the country with a priority of selling ads, not actually informing anybody.
Other segments of the paper’s silly Heat Index appear to feature experts who don’t actually exist, according to 404 Media’s Jason Koebler:
“For example, in an article called “Hanging Out: Inside America’s growing hammock culture,” Buscaglia quotes “Dr. Jennifer Campos, a professor of leisure studies at the University of Colorado, in her 2023 research paper published in the Journal of Contemporary Ethnography.” A search for Campos in the Journal of Contemporary Ethnography does not return any results.”
In many ways these “AI” scandals are just badly automated extensions of existing human ethical and competency failures. Like the U.S. journalism industry’s ongoing obliteration of any sort of firewall between advertorial sponsorship and actual, useful reporting (see: the entire tech news industry’s love of turning themselves into a glorified Amazon blogspam affiliate several times every year).
But it’s also broadly reflective of a trust fund, fail-upward sort of modern media management that sees AI as less of a way to actually help the newsroom, and more of a way to lazily cut corners and further undermine already underpaid and overworked staffers (the ones that haven’t been mercilessly fired yet).
Some of these managers, like LA Times billionaire owner Patrick Soon-Shiong, genuinely believe (or would like you to believe because they also sell AI products) that half-cooked automation is akin to some kind of magic. As a result, they’re rushing toward using it in a wide variety of entirely new problematic ways without thinking anything through, including putting LLMs that can’t even generate accurate summer reading lists in charge of systems (badly) designed to monitor “media bias.”
There’s also a growing tide of aggregated automated clickbait mills hoovering up dwindling ad revenue, leeching money and attention from already struggling real journalists. Thanks to the fusion of automation and dodgy ethics, all the real money in modern media is in badly automated engagement bait and bullshit. Truth, accuracy, nuance, or quality is a very distant afterthought, if it’s thought about at all.
It’s all a hot mess, and you get the sense this is still somehow just the orchestra getting warmed up. I’d like to believe things could improve as AI evolves and media organizations build ethical frameworks to account for automation (clearly, cogent U.S. regulation or oversight isn’t coming anytime soon), but based on the industry’s mad dash toward dysfunction so far, things aren’t looking great.
Filed Under: artificial intelligence, automation, brunchlords, journalism, llms, media, reporting
Companies: chicago sun times, hearst, king features