A version of this story appeared in CNN Business’ Nightcap newsletter.
New York (CNN) —
Apple has been getting hammered in tech and financial media for its uncharacteristically messy foray into artificial intelligence. After a June event heralding a new AI-powered Siri, the company has delayed its release indefinitely. The AI features Apple has rolled out, including text message summaries, are comically unhelpful.
The critique of Apple’s halting rollout is not entirely unfair. Though it is, at times, missing the point.
Apple, like every other big player in tech, is scrambling to find ways to inject AI into its products. Why? Well, it’s the future! What problems is it solving? Well, so far that’s not clear! Are customers demanding it? LOL, no. In fact, last year the backlash against one of Apple’s early ads for its AI was so hostile the company had to pull the commercial.
The real reason companies are doing this is because Wall Street wants them to. Investors have been salivating for an Apple “super cycle” — a tech upgrade so enticing that consumers will rush to get their hands on the new model.
In a rush to please shareholders, Apple made a rare stumble. The company is owning its error, it seems, and has said the delayed features would roll out “in the coming year.”
Of course, the cryptic delay has only given oxygen to the narrative that Apple has become a laggard in the Most Important Tech Advancement in decades.
And that is where the Apple-AI narrative goes off the rails.
There’s a popular adage in policy circles: “The party can never fail, it can only be failed.” It is meant as a critique of the ideological gatekeepers who may, for example, blame voters for their party’s failings rather than the party itself.
That same fallacy is taking root among AI’s biggest backers. AI can never fail, it can only be failed. Failed by you and me, the smooth-brained Luddites who just don’t get it. (To be sure, even AI proponents will acknowledge available models’ shortcomings — no one would argue that the AI slop clogging Facebook is anything but, well, slop — but there is a dominant narrative within tech that AI is both inevitable and revolutionary.)
Tech columnists such as the New York Times’ Kevin Roose have suggested recently that Apple has failed AI, rather than the other way around.
“Apple is not meeting the moment in AI,” Roose said on his podcast, Hard Fork, earlier this month. “I just think that when you’re building products with generative AI built into it, you do just need to be more comfortable with error, with mistakes, with things that are a little rough around the edges.”
To which I would counter, respectfully: Absolutely not.
Roose is right that Apple is, to put it mildly, a fastidious creator of consumer products. It is, after all, the $3-trillion empire built by the notoriously detail-obsessed Steve Jobs.
The Apple brand is perhaps the most meticulously controlled corporate identity on the planet. Its “walled garden” of iOS — despised by developers and fair game for accusations of monopolistic behavior, to be sure — is also part of the reason one billion people have learned to trust Apple with their sensitive personal data.
Apple’s obsession with privacy and security is the reason most of us don’t think twice to scan our faces, store bank account information or share our real-time location via our phones.
38 Comments
ndr42
The Quote: "AI can never fail, it can only be failed" is something to think about
bigyabai
Ooh I like this one. "Apple's chips aren't slowing down. TSMC is."
gibbitz
I think AI is just running up against a company whose mantra was "it just works" and finding consumers who expect a working product won't tolerate the lack of quality "AI" has delivered. Welcome to reality venture capitalists…
upcoming-sesame
No, Apple AI is a letdown regardless
bbarnett
"Hey, I know! We should spend billions replacing code and data that provide the precise same output every time (or random from data we choose), with completely random, uncurated data that changes with every new model, because why not! It's awesome!", says every company now.
AI is not useful if you want curated fact, if you want consistent output, if you want repeated quality.
How about training an AI on 1990s-style encyclopedias, with their low error rate?
Even Wikipedia has random yahoos coming in and changing pages about the moon landing to say it was filmed in a studio.
AI is being trained on random, it outputs random.
nixpulvis
If you can't explain how it works, I don't want it.
If your explanation boils down to a bunch of "it should do…" or "most of the time it does…" then I still don't want it.
MarkusWandel
The scenario in the article, about how AI is "usually" right in queries like "which airport is my mom's flight landing at and when?", is exactly the problem with Google's AI summaries as well. Several times recently I've googled something really obscure, like how to get fr*king suspend working in Linux on a recent-ish laptop, and it's given me generic pablum instead of the actual, obscure trick that makes it work (type a 12-key magic sequence, get advanced BIOS options, pick an option way down a scrolling list to nuke fr*king modern suspend and restore S3 sleep… happiness in both Windows and Linux in the dual-boot environment). So it just makes the answers harder to find, instead of helping.
pram
I've been experiencing "AI" making things worse. Grammarly worked fine for a decade-plus, but since they've been trying to cram more LLM junk into it, I guess, the recommendations have been a lot less reliable. Now it sometimes misses even obvious typos.
martinald
I just do not understand this attitude. ChatGPT alone has hundreds of millions of active users that are clearly getting value from it, despite any mistakes it may make.
To me, the almost unsolvable problem Apple has is wanting to do as much as possible on device, while also having been historically very stingy with RAM (on both iOS and Mac devices – iOS more understandably, given that it didn't really need huge amounts of RAM until LLMs came along). This gives them a very real problem: having to use very small models, which hallucinate a lot more than giant cloud-hosted ones.
Even if they did manage to get 16GB of RAM on their new iPhones that is still only going to be able to fit a 7b param model at a push (leaving 8GB for 'system' use).
In my experience even the best open-source 7B local models are close to unusable. They'd have been mind-blowing a few years ago, but when you're used to "full size" cutting-edge models it feels like an enormous downgrade. And I assume this will always be the case: while small models are always improving, so are the full-size ones, so there will always be a big delta between them, and people are already used to the large ones.
So I think Apple probably needs to shift to using cloud services more, like their Private Compute idea, but they have an issue there insofar as they have 1b+ users, and it is not trivial at all to handle that level of cloud usage for core iOS/Mac features (I suspect this is why virtually nothing uses Private Compute at the moment). Even if each iOS user only did 10 "cloud LLM" requests a day, that's over 10 billion requests a day (10x the scale that OpenAI currently handles). And in reality it'd ideally be orders of magnitude more than that, given how many possible integration options there are for mobile devices alone.
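The RAM constraint and the request-volume arithmetic in this comment can be sketched in a few lines. All figures below are rough illustrative assumptions (typical quantization sizes, round user counts), not Apple's actual specs:

```python
# Back-of-envelope numbers for the two constraints discussed above.
# Figures are illustrative assumptions, not Apple's actual specs.

def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate RAM needed just to hold the model weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 7B-parameter model at common quantization levels:
print(model_memory_gb(7, 2.0))  # fp16: 14.0 GB -- won't fit beside the OS
print(model_memory_gb(7, 1.0))  # int8:  7.0 GB -- "at a push" on a 16 GB phone
print(model_memory_gb(7, 0.5))  # int4:  3.5 GB

# Cloud-side scale if every iOS user made a handful of requests daily:
users = 1e9            # "1b+ users" per the comment
requests_per_day = 10
total_requests = users * requests_per_day
print(f"{total_requests:.0e} requests/day")  # 1e+10 -- "over 10 billion a day"
```

The int8 row is why a 7B model fits only "at a push": 7 GB of weights plus KV cache and system use crowds out a 16 GB device.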
rossdavidh
The worst thing about "AI" is its name. It isn't intelligent, it isn't even dumb. If the current wave had been called "neural networks" or "large language models", then the hype wouldn't have been as breathless, but the disappointment wouldn't be as sharp either, because it wouldn't be used for things it isn't suited for.
It's an algorithm; it's just an algorithm. It's useful for a few things. It isn't useful for most things. Like MVC, or relational databases, or finite state machines, or OOP, it's not something you should have to (or want to) tell the end user that you are using in the internals. The reason most "AI" products brag about using "AI", is there isn't anything else interesting about them.
pedalpete
This is Apple's spin machine working overtime trying to say "we're not failing at AI, everyone is failing at AI".
I'm not sure anyone is going to buy it, but it doesn't cost them anything to get a few of their PR hacks to give it a try.
It's about as convincing as "we didn't build a bad phone, you're just holding it wrong!".
Workaccount2
>If it’s 100% accurate, it’s a fantastic time saver. If it is anything less than 100% accurate, it’s useless.
The insane levels of hypocrisy hearing this come from a mainstream media source. The damage that has been done to all of society by misrepresenting and half-truthing about events to appease audiences is unrivaled, yet here they are on the high horse of "anything less than 100% accurate is useless"
Take note CNN, take fucking note.
flippy_flops
With Apple/iOS, I can’t help but think of the Joker’s quote, “You have nothing… Nothing to do with all your strength.” The efficiency half is excellent, but what about the power? AR? Gaming? AI seems the first broad fit. And where was Apple? Literally chasing cars and an ill-conceived VR headset.
I say this as a massive Apple fanboy. AI was heavily advertised as a selling point of the iPhone 15 Pro and is completely MIA six months later. It’s a major letdown. It’s not the end of the world, but let’s just call it what it is.
For those saying Apple doesn’t release imperfect products, may I introduce to you Siri? It was average when they bought it and it’s become a punch line.
And there are so many uses of AI that don’t have to be at the risk level of “Oops, AI left grandma at LaGuardia.” Apple should go back to its roots and provide high-quality LLM/MCP and other API SDKs to developers and let them go nuts. Then just clone or buy the apps that work, like they always do.
epolanski
Apple went from $170 to $220 after the Apple Intelligence bs promises.
It still sits there, despite revenue having long plateaued, and is still priced for some impressive revenue growth.
Go figure.
roughly
Two thoughts:
The first is that LLMs are bar none the absolute best natural language processing and producing systems we’ve ever made. They are absolutely fantastic at taking unstructured user inputs and producing natural-looking (if slightly stilted) output. The problem is that they’re not nearly as good at almost anything else we’ve ever needed a computer to do as other systems we’ve built to do those things. We invented a linguist and mistook it for an engineer.
The second is that there’s a maxim in media studies which is almost universally applicable, which is that the first use of a new media is to recapitulate the old. The first TV was radio shows, the first websites looked like print (I work in synthetic biology, and we’re in the “recapitulating industrial chemistry” phase). It’s only once people become familiar with the new medium (and, really, when you have “natives” to that medium) that we really become aware of what the new medium can do and start creating new things. It strikes me we’re in that recapitulating phase with the LLMs – I don’t think we actually know what these things are good for, so we’re just putting them everywhere and redoing stuff we already know how to do with them, and the results are pretty lackluster. It’s obvious there’s a “there” there with LLMs (in a way there wasn’t with, say, Web 3.0, or “the metaverse,” or some of the other weird fads recently), but we don’t really know how to actually wield these tools yet, and I can’t imagine the appropriate use of them will be chatbots when we do figure it out.
originalvichy
AI working with your OS is absolutely the letdown. I do not want to give my personal computer's data a direct feed into the hands of the same developers who lie about copyright abuses when mining data.
90% of the mass consumer AI tech demos in the past 2-3 years are the exact same demos that voice assistants used to do with just speech-to-text + search functions. And these older tech demos are already things only 10% of users probably did regularly. So they are adding AI features to halo features that look good in marketing but people never use.
Keep the OS secure and let me use an Apple AI app in 2-3 years when they have rolled their own LLM.
4ndrewl
"Apple made a rare stumble"
Auto.
Vision Pro.
AI.
Is there a pattern emerging here?
ichiwells
One of Apple’s biggest misses with “AI”, in my opinion, is not building universal search.
For all the hype LLM generation gets, I think the rise of LLM-backed “semantic” embedding search does not get enough attention. It’s used in RAG (which inherits the hallucinatory problems), but seems underutilized elsewhere.
The worst searches I’ve seen (and, coincidentally/paradoxically, the ones I use the most) are Gmail and Dropbox, both of which cannot find emails or files that I know exist, even when I use the exact email subject and file name keywords.
Apple could arguably solve this with a universal search SDK, and I’d value this far more than yet-another-summarize-this-paragraph tool.
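The embedding-search idea this comment describes can be sketched simply: embed every document once, embed the query, and rank by cosine similarity. A real system would use a learned embedding model; here a bag-of-words vector stands in so the example is self-contained, and the document names are invented for illustration:

```python
# Toy sketch of embedding-style semantic search. A learned embedding model
# would replace embed(); bag-of-words vectors stand in here so the example
# runs on its own. Documents are invented for illustration.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in "embedding": word counts. Real systems use dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Q3 budget spreadsheet final version",
    "photos from the team offsite",
    "flight itinerary for the Tokyo trip",
]

def search(query: str) -> str:
    # Rank every document by similarity to the query; return the best match.
    return max(documents, key=lambda d: cosine(embed(query), embed(d)))

print(search("tokyo flight"))  # -> "flight itinerary for the Tokyo trip"
```

Unlike generation, this kind of retrieval cannot hallucinate: it can only return documents that actually exist, which is what makes it attractive for a mail or file search SDK.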
LeoPanthera
AI might be disappointing, but Apple Intelligence is definitely a stumble. I've been playing with Gemini and it works shockingly well. I fully expect Apple to catch up, but it will take a while for them to recover from the reputational damage.
aaomidi
Yes and no.
Siri didn’t need to suck all these years. Even before the LLM craze.
seydor
Trough of disillusionment
andrewstuart
AI is at the Web 1.0 stage when people didn’t really know how to make the most of it.
It sounds ridiculous now but Web 1.0 was mostly about putting companies paper brochures onto websites.
It sounds doubly ridiculous that Web 1.0 came to an end when the market crashed because no one could figure out how to make money from the internet.
Web 1.0 started in 1994 and it would be ten years until Facebook arrived.
So AI has some really really big surprises in store that no one has thought of yet and when they do, fortunes will be made.
tallytarik
“Hey Siri open the curtains”
“I found some web results. I can show them if you ask again from your iPhone”
Nah, Apple is the letdown, and has been since before ChatGPT.
kittikitti
Why is a source about AI from CNN being taken seriously? Isn't their "journalism" just clickbait?
amelius
Only if you are susceptible to the RDF.
stavros
To me, "AI is the letdown" is the letdown. The sheer lack of imagination and wonder you must have to see what are almost virtual people, something that was _unthinkable_ five years ago, and to say it's a letdown, I will never understand.
We have programs, actual programs that you can run on your laptop, that will understand images and describe them to you, understand your voice, talk to you, explain things to you. We have experts that will answer your every question, that will research things for you, and all we keep saying is how disappointing it is that they aren't better than humans.
To me, this is very much the old joke of "wow, your dog can sing?!" "Eh, it's not that impressive, he's pitchy". To go from "AI that can converse fluently is impossible, basically science fiction" to "AI is a letdown" just shows me the infinite capability humans have to find anything disappointing, no matter how jaw-droppingly amazing it is.
bradgessler
Apple would be much better off saying to the world, "we're going to make Siri better". That's concrete, people get it, LLMs are good at it, and it's something we'd all appreciate.
Instead they're failing to build a bunch of stuff that nobody asked for under the banner, "Apple Intelligence".
Please Apple, just make Siri better.
throwawa14223
I've noticed that Siri has gotten far worse at playing a song based on a verbal request. Frequently Siri now assures me that songs are not downloaded to my phone only for me to discover that they have been the whole time.
deadbabe
We keep trying to find justifications for business use of LLMs.
We keep getting shut down by simpler, purpose built tools that work predictably.
LLM is just good for synthesizing vague inputs.
ohso4
> Apple’s obsession with privacy and security is the reason most of us don’t think twice to scan our faces, store bank account information or share our real-time location via our phones.
Uh, do you have any freaking idea what happens with your location data? Bank account information is a matter of security. So is Face ID data.
icu
Where exactly is the Apple Intelligence that was advertised? Siri absolutely cannot go into your phone's calendar and see who you bumped into at some bar or café. I've been using the Pixel 9 Pro as my daily driver and while I really wanted to install CalyxOS on it, I've found Gemini to be actually useful (and I'm generally biased against Google).
Apple is behind the curve like Google was prior to Gemini 2.5 Pro, but unlike Google, I cannot see Apple having the talent to catch up unless they make some expensive acquisitions and even then they will still be behind. I was shocked at how good Gemini 2.5 Pro is. The cost and value for money difference is so big that I'm considering switching away from my API usage of Claude Sonnet 3.7 to Gemini 2.5 Pro.
lvl155
This sort of takeaway is from people who do not have experience at the cutting edge. AI is developing at a rapid pace right now; I’ve seen some amazing things in the past three months.
I will say Apple AI completely sucks for a company with all the resources available to them.
ginkgotree
Right. CNN is absolutely the authority here.
ohgr
Hyper optimism in this thread.
Outside of tech users, it’s a novelty that lasts about a week or disappears in a puff of smoke the moment money is asked for it.
The whole industry is blind to the fact that the market doesn’t need it and it doesn’t really solve any problems. It’s not even a means to an end.
What consumers want is to be left the fuck alone and their stuff to last longer. But this doesn’t make numbers go up.
saagarjha
Nah, Apple just never implemented good AI. I think AI itself is a letdown, but let’s not forget that Apple claimed they were going to implement it and they didn’t. If someone tells you they’re going to eat a hamburger and then they just don’t eat lunch, you can feel they’re making bad decisions even if the thing they set out to do was also possibly a bad decision.
smallnix
> Apple’s obsession with privacy and security is the reason most of us don’t think twice to scan our faces, store bank account information or share our real-time location via our phones.
People do that because it's very useful, not because it's safe.
puppycodes
They just need more time to implement it.
Most people still have no idea how useful it can be.
I'm a firm believer it will be an absolute godsend to older folks who struggle to learn new interfaces and technologies.