Readers of the Chicago Sun-Times and The Philadelphia Inquirer opened their Sunday papers to find a glossy 64-page supplement titled “Heat Index: Your Guide to the Best of Summer.” Among its features was a “Summer Reading List for 2025,” promising a curated selection of 15 novels by acclaimed authors like Isabel Allende, Andy Weir, and Ian McEwan. The list seemed like a perfect companion for beachside lounging or cozy evenings—until readers discovered a shocking truth: ten of the recommended books didn’t exist. Titles like Tidewater Dreams by Allende and The Last Algorithm by Weir were entirely fabricated, generated by artificial intelligence (AI) and published without fact-checking. This blunder, attributed to a freelancer working for a third-party content provider, has ignited a firestorm of criticism, exposed the perils of unchecked AI in journalism, and raised urgent questions about trust, accountability, and the future of media. This blog delves into the details of the scandal, the mechanics of AI “hallucination,” the broader context of journalism’s AI adoption, and the lessons for an industry at a crossroads.
The Blunder: A Reading List of Imaginary Books
The Chicago Sun-Times’ summer reading list was meant to be a light, engaging feature, recommending novels by well-known authors to captivate readers. The list included tantalizing descriptions, such as Tidewater Dreams, described as a “multigenerational saga set in a coastal town where magical realism meets environmental activism,” and Hurricane Season by Brit Bennett, which “powerfully explores family bonds tested by natural disasters.” Other entries, like The Last Algorithm, promised a thrilling tale of an AI system gaining consciousness, while The Rainmakers by Percival Everett was pitched as a “searing satire” on political ambition. These summaries were compelling, but there was one problem: only five of the 15 books—Dandelion Wine by Ray Bradbury, Beautiful Ruins by Jess Walter, Bonjour Tristesse by Françoise Sagan, Call Me by Your Name by André Aciman, and Atonement by Ian McEwan—were real. The rest were figments of AI imagination, conjured by a phenomenon known as “hallucination,” where AI generates plausible but false information.
The error went unnoticed until readers, including book enthusiasts and librarians, began searching for the titles online and in library databases. Social media platforms like Bluesky and Reddit erupted with posts calling out the Sun-Times for publishing “AI slop.” One user, a Book Riot editor, confirmed the list’s inaccuracies by checking Chicago-area newspaper archives, expressing disbelief at the oversight. Another post on X lamented, “What are we coming to?” as readers shared photos of the printed list, highlighting the absurdity of recommending nonexistent books. The backlash was swift, with subscribers voicing outrage over the betrayal of trust, especially in a newspaper already grappling with financial and staffing challenges.
The Sun-Times and Inquirer quickly distanced themselves from the list, clarifying that it was not produced by their newsrooms but by King Features Syndicate, a Hearst-owned content provider. The freelancer responsible, Chicago-based writer Marco Buscaglia, admitted to using AI—likely ChatGPT or Claude—to generate the list, bypassing his usual practice of fact-checking. “I did screw up, and it was generated by AI,” Buscaglia told reporters, expressing embarrassment and taking full responsibility. King Features, which has a strict anti-AI policy, severed ties with Buscaglia, while both newspapers removed the supplement from their digital editions and issued apologies. Chicago Public Media, the Sun-Times’ nonprofit owner, went further, announcing that print subscribers would not be charged for the flawed edition and promising a review of third-party content partnerships.
AI Hallucination: The Mechanics of the Mistake
The Sun-Times debacle is a textbook case of AI hallucination, a known flaw in large language models (LLMs) like ChatGPT, Claude, and others. Hallucination occurs when AI generates convincing but fabricated information, often filling gaps in its knowledge with plausible-sounding details. In this case, the AI created fake book titles and summaries, attributing them to real authors whose styles aligned with the descriptions. For example, Tidewater Dreams mimicked Allende’s magical realism, while The Last Algorithm echoed Weir’s science-driven thrillers. These fabrications were so seamless that they slipped past Buscaglia and King Features’ editorial process, highlighting the deceptive power of AI-generated content.
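To see why nothing in this workflow catches a fabrication, it helps to look at how such a list is typically produced: a single free-form prompt to a chat model, whose reply comes back as confident prose with no citations or source links attached. The sketch below is illustrative only; the use of the OpenAI Python SDK, the model name, and the prompt are all assumptions, since reports say only that the freelancer likely used ChatGPT or Claude.

```python
# Sketch of an unverified generation step: one prompt in, plain text out.
# Nothing in this call checks whether the recommended titles actually exist.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, not what the freelancer used
    messages=[{
        "role": "user",
        "content": "Recommend 15 summer 2025 novels by well-known authors, "
                   "with a one-sentence description of each.",
    }],
)

# The reply reads as authoritative prose, but any fact-checking has to happen
# downstream, by a human editor or an automated lookup.
print(response.choices[0].message.content)
```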
AI hallucination is not a new issue. In 2023, a lawyer was sanctioned for submitting a legal brief with fake AI-generated case citations, and in 2024, a scientific journal retracted an article containing AI-fabricated data. The Sun-Times incident, however, is particularly alarming because it occurred in a mainstream newspaper, where readers expect rigorous fact-checking. The list’s inclusion of real books alongside fake ones added to the confusion, as it lent an air of authenticity to the fabricated titles. Social media users noted the irony of The Last Algorithm, a nonexistent book about rogue AI, being recommended by an AI system prone to errors. “I’d read it,” one Reddit user quipped, while others expressed frustration at the erosion of journalistic standards.
The incident underscores a critical flaw in current AI models: their inability to distinguish fact from fiction without human oversight. Unlike traditional research, where sources can be verified, AI draws from vast datasets that may include inaccuracies or incomplete information. When tasked with generating a reading list, the AI likely combined real author names with invented titles, creating a hybrid of truth and fiction. Buscaglia’s failure to verify the output—despite admitting he typically does—reveals the dangers of over-reliance on AI, especially under time or cost pressures.
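The missing step is mechanical and cheap: before publication, each suggested title and author pair can be checked against a bibliographic catalog, with anything unmatched flagged for human review. The sketch below is a minimal illustration using the public Open Library search API; the endpoint, the `title_exists` helper, and the hardcoded examples are assumptions for demonstration, not the workflow of any outlet involved, and a production check would consult multiple catalogs and handle near-matches.

```python
# Minimal post-generation fact check: look up each AI-suggested title/author
# pair in a public book catalog and flag anything that cannot be found.
import requests

def title_exists(title: str, author: str) -> bool:
    """Return True if Open Library lists at least one matching work."""
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title, "author": author, "limit": 1},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("numFound", 0) > 0

# Two entries from the printed list: one real, one AI-invented.
candidates = [
    ("Atonement", "Ian McEwan"),             # real novel
    ("Tidewater Dreams", "Isabel Allende"),  # fabricated title from the supplement
]

for title, author in candidates:
    status = "found" if title_exists(title, author) else "NOT FOUND - flag for review"
    print(f"{title} by {author}: {status}")
```

A check like this would not catch every error, but it would have flagged ten of the fifteen entries on the printed list before they reached readers.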
The Context: Journalism’s AI Experiment and Newsroom Struggles
The Sun-Times scandal comes at a precarious time for journalism, as newsrooms grapple with financial constraints, staff cuts, and the allure of AI as a cost-saving tool. In March 2025, Chicago Public Media announced that 20% of the Sun-Times’ staff, including 23 newsroom employees, had accepted buyouts amid fiscal challenges. The Inquirer also underwent layoffs in the same period, reflecting a broader trend of downsizing in the industry. These cuts have left newsrooms stretched thin, increasing reliance on syndicated content and freelancers to fill gaps. The “Heat Index” supplement, produced by King Features, was one such outsourced product, designed to attract readers with lifestyle content but lacking the editorial scrutiny applied to in-house reporting.
AI’s growing presence in journalism has sparked both excitement and unease. Media outlets have experimented with AI for tasks like summarizing sports scores, generating headlines, or drafting routine stories, hoping to boost efficiency. In 2024, The Washington Post used AI to transcribe interviews, while Forbes faced criticism for publishing AI-assisted articles with minimal human input. However, the Sun-Times incident highlights the risks of deploying AI without robust checks. The Sun-Times Guild, the newspaper’s union, expressed horror at the “slop syndication,” noting that it undermined the trust built by their rigorous journalism. “Our readers signed up for work that has been vigorously reported and fact-checked,” the guild stated, calling for measures to prevent future disasters.
The scandal also reflects deeper systemic issues. The rise of advertorials and syndicated supplements, often disguised as editorial content, blurs the line between journalism and marketing. The “Heat Index” section, labeled as a Sun-Times product, gave no indication to readers that it was externally produced, amplifying the betrayal when errors surfaced. Chicago Public Media CEO Melissa Bell called the incident “unacceptable,” admitting that the lack of transparency about the section’s origins compounded the damage. The episode echoes a 2023 Sports Illustrated controversy, where AI-generated stories with fake author bios led to layoffs and public outcry, underscoring the reputational risks of AI missteps.
Social and Cultural Reactions
The Sun-Times blunder became a viral sensation, with social media amplifying both the humor and outrage. On Reddit’s r/books subreddit, a post titled “Chicago Sun-Times prints summer reading list full of fake books” garnered over 4,200 votes and 393 comments, with users mocking the fabricated titles while decrying the state of journalism. “How did the editors not catch this?” one user asked, while another wrote, “If you didn’t take the time to write it, I won’t take the time to read it.” X posts were equally scathing, with one user noting that the incident was syndicated, affecting multiple newspapers, and another calling it a “learning moment” for the industry. The public’s reaction highlighted a growing skepticism toward media, with subscribers questioning the value of their subscriptions when “AI slop” could infiltrate trusted outlets.
Authors whose names were misused also weighed in. While Isabel Allende and Andy Weir did not publicly comment, novelist Rachael King pointed out the errors, sparking wider scrutiny. Literary communities, including book podcasters and librarians, played a key role in exposing the fraud, demonstrating the power of grassroots fact-checking in the digital age. The incident also fueled discussions about AI’s impact on creative industries, with some users joking that AI was “writing books that don’t exist,” while others worried about its potential to mislead readers about real authors’ work.
Economic and Ethical Implications
Economically, the scandal threatens the Sun-Times and Inquirer’s subscriber base, already strained by layoffs and competition from digital platforms. Chicago Public Media’s decision to waive charges for the flawed edition was a goodwill gesture, but the long-term damage to reader trust could be costlier. The Sun-Times, acquired by Chicago Public Media in 2022, and its sister outlet WBEZ have faced multiple rounds of layoffs, with WBEZ axing three podcasts in April 2025. The Inquirer’s layoffs in March 2025 further illustrate the financial pressures pushing newsrooms toward cheaper, riskier content solutions like AI.
Ethically, the incident raises questions about transparency and accountability. The lack of a byline on the reading list obscured its origins, while the absence of a clear disclaimer about its syndicated nature misled readers. The Sun-Times Guild criticized the “slop syndication” as a violation of their commitment to accurate, human-driven journalism, urging management to strengthen oversight. The episode also highlights the ethical responsibility of freelancers and content providers to verify AI outputs, especially when their work appears under the banner of reputable outlets. Buscaglia’s apology, while sincere, does little to mitigate the damage to the newspapers’ credibility.
The Broader Debate: AI’s Role in Journalism
The Sun-Times scandal has reignited debates about AI’s place in journalism. Proponents argue that AI can enhance productivity, automating mundane tasks and freeing journalists for investigative work. Critics, however, warn that its propensity for errors—especially hallucinations—poses unacceptable risks. The incident validates concerns raised by experts like Gabino Iglesias, an NPR Books contributor, who noted the scarcity of full-time book reviewers and the proliferation of unverified online content. “How many full-time book reviewers are there in the U.S.? Very few,” Iglesias said, linking the Sun-Times error to broader industry trends favoring speed over accuracy.
The scandal also underscores the need for clear AI policies. King Features’ anti-AI stance was undermined by Buscaglia’s actions, suggesting that policies must be paired with enforcement. The Inquirer’s publisher, Lisa Hughes, called the use of AI for the supplement a “serious breach” of internal rules, while the Sun-Times vowed to align third-party content with its editorial standards. These commitments signal a shift toward greater scrutiny, but implementing them amid budget constraints and reliance on syndication will be challenging.
Globally, the incident resonates with similar AI controversies. In 2024, a UK publisher retracted AI-generated book summaries after errors were found, and in 2025, an Australian news site faced backlash for AI-written obituaries. These cases illustrate the universal challenge of balancing AI’s efficiency with journalism’s core values of accuracy and trust. The Sun-Times’ call for a “learning moment” reflects a growing consensus that AI must be a tool, not a replacement, for human judgment.