An AI Solution in Search of a Problem, Creates a Problem

I obviously get what Bloomberg is trying to do with these – it's similar to the "key bullet points" that CNN and others have long put at the top of their stories as a sort of 'TL;DR' for those who either don't want to read the full article or are trying to decide if they should. But the implementation is pretty bad. They're way, way, way too large at the top of the page. And perhaps related, the AI blurbs themselves are too long.
It's almost as if their Editor in Chief wrote an essay and then made his publication come up with how to implement it.
Anyway, beyond how these summaries look, there are other issues:
The news outlet has had to correct at least three dozen A.I.-generated summaries of articles published this year. One happened on Wednesday, when Bloomberg broke news about President Trump’s auto tariffs.
The article correctly reported that Mr. Trump would announce the tariffs as soon as that day. But the bullet-point summary of the article written by A.I. inaccurately said when a broader tariff action would take place.
TL;DRs don't work very well if the AI didn't even R.
While the article about the situation notes that the LA Times also had issues with its AI summaries, which caused the paper to pull them, it oddly doesn't mention perhaps the most (in)famous example of this: Apple News. The tech giant shoved this feature in everyone's faces in a clear effort to ship at least some AI features, and it completely backfired by serving up a ton of nonsense. As such, it was tweaked and ultimately rolled back – and it remains rolled back.
I much prefer the way that The Washington Post implements AI:
Bloomberg is not alone in trying A.I. — many news outlets are figuring out how best to embrace the new technology and use it in their reporting and editing. The newspaper chain Gannett uses similar A.I.-generated summaries on its articles, and The Washington Post has a tool called “Ask the Post” that generates answers to questions from published Post articles.
This is similar to what Matter, a startup where I've long been an investor, does with their "Co-Reader" AI feature. It's not there in your face; it's there in the background if you want to ask a question or go deeper on a topic. It's really useful! It's like the baked-in definition capabilities that Apple's operating systems have long had, but on steroids.
Bloomberg News said in a statement that it publishes thousands of articles each day, and “currently 99 percent of A.I. summaries meet our editorial standards.”
“We’re transparent when stories are updated or corrected, and when A.I. has been used,” a spokeswoman said. “Journalists have full control over whether a summary appears — both before and after publication — and can remove any that don’t meet our standards.”
The A.I. summaries are “meant to complement our journalism, not replace it,” the statement added.
I think my friend John Gruber would take issue with the notion that Bloomberg is transparent when it comes to updating or correcting their stories.
I just don't understand why you roll this effort out so broadly and so in-your-face if it fails even 1% of the time. It undercuts the actual work done by actual reporters. The ultracrepidarians strike again! It feels a lot like an initiative meant to make Bloomberg appear to be at the forefront of AI, but really it's a solution in search of a problem – one that creates its own, different problem instead.
But really, this is just a post to ask Bloomberg to make these AI summaries optional. And, if nothing else, smaller.


