BBC Slams Apple Over Fake Headline Claiming US CEO’s Killer “Shot Himself”

Apple is facing criticism from the BBC after its new AI-powered iPhone feature, Apple Intelligence, generated a misleading headline about a high-profile murder case in the US.

Launched in the UK earlier this week, Apple Intelligence uses artificial intelligence to summarise and group together notifications for users. However, the system incorrectly summarised a BBC News article, making it appear that Luigi Mangione, the man arrested in connection with the murder of UnitedHealthcare CEO Brian Thompson in New York, had shot himself.

The headline read, “BBC News: Luigi Mangione shoots himself,” a claim that was false.

A spokesperson for the BBC confirmed the corporation had contacted Apple to raise and resolve the issue. “BBC News is the most trusted news media in the world,” the spokesperson said, adding that it was important to maintain trust in the journalism published under the BBC’s name.

Despite the error, the rest of the AI-generated summary, which included updates on the overthrow of Bashar al-Assad’s regime in Syria and on South Korean President Yoon Suk Yeol, was reportedly accurate.

The BBC is not alone in encountering misrepresented headlines due to the technology.

A similar issue occurred in November when Apple Intelligence grouped three unrelated New York Times articles into a single notification, one of which incorrectly read, “Netanyahu arrested,” referencing an International Criminal Court warrant for Israeli Prime Minister Benjamin Netanyahu, rather than an actual arrest.

“Apple AI notification summaries continue to be so so so bad,” Ken Schwencke (@schwanksta.com) posted on November 22, 2024.

Apple’s AI-powered summary system, available on iPhone 16 models and on iPhone 15 Pro and later devices running iOS 18.1 or higher, is designed to reduce notification overload and let users prioritise important updates. But concerns have been raised about the reliability of the technology, with Professor Petros Iosifidis of City, University of London calling the mistakes “embarrassing” and criticising Apple for rushing the product to market.

This isn’t the first time AI-powered systems have produced inaccurate information. In April, X’s AI chatbot Grok was criticised for falsely claiming Prime Minister Narendra Modi had lost the election before voting had even taken place.

Google’s AI Overviews tool also made bizarre recommendations, such as using “non-toxic glue” to stick cheese to pizza and advising people to eat one rock per day.