Clueless AI can’t summarize
Everyone is using AI chatbots to condense complicated material into simple, short, digestible nuggets. Here's why this is a bad idea.
AI saves time by condensing long articles and reports, letting you grasp the gist without slogging through endless detail.
Great, right? Just one problem: A report by the European Broadcasting Union and the BBC published October 21 found that AI chatbots’ summarizing ability sucks.
Nearly half (45%) of AI-generated news responses contained at least one major error. Nearly a third (31%) had serious sourcing problems, including missing, misleading, or incorrect citations. One fifth (20%) contained major inaccuracies, such as fabricated details and outdated information.
Google’s Gemini fared worst: Three-quarters (76%) of its news responses had major problems, more than double the rate of the other platforms.
Several studies show major AI chatbots exaggerate, overgeneralize, or misrepresent scientific findings in up to 73% of cases, making conclusions seem broader than the source material supports.
The basic problem is that AI can’t think. It can’t tell what’s important, what’s true, or what’s recent. It can’t tell who said what, opinion from fact, or original from plagiarized.
AI chatbots are clueless at summarizing.
While the world rushes to have AI explain complicated subjects simply, and to do part of our thinking for us, AI is failing to give us what we think we’re getting.
We think AI chatbots are enabling us to understand complicated topics and get smarter, but we’re being misled and getting dumber.
Use AI for summarization at your own risk.