The narrative for VR and AR has been pretty static for years. We have a dream of AR devices that look like a pair of glasses and can place things in the world around you, but we don’t seem close to the optics that would make that practical, beyond prototypes and proofs of concept. We do have practical VR devices that are far beyond prototypes or proofs of concept, and good enough for a passionate base of enthusiasts, and we have games and a few other ideas for use cases (fitness?), but we don’t have mass-market, ‘hockey-stick’ adoption. Perhaps 10m units were sold last year, with, apparently, a very high abandonment rate.
So the question has been this: we know that VR devices can and will get a lot better in the next five and ten years, but as they do, is there a breakout? How many people will care, and why? If we had the dream device with the amazing experience, is it the next smartphone – the next universal device and universal platform? Or will it look more like games consoles and be used by a couple of hundred million people at most for one or two things? Might it be much smaller even than that? Meta has bet over $35bn and counting that this will be The Thing, but at the moment we don’t know.
There’s a strong echo here of mobile 20 years ago. From the late 1990s to 2007, we had mobile internet devices that were OK but not great, and slowly improving, we knew they would eventually be much better, and we thought ‘mobile internet’ would be big – but we didn’t know that smartphones would replace PCs as the centre of tech, and connect five billion people. Then the iPhone came, and the timeline broke.
Apple’s Vision Pro isn’t an iPhone moment, or at least, not exactly. At $3,500, it’s very expensive in the context of today’s consumer electronics market, where the iPhone launched for $600 (without subsidy, and then rapidly switched to $200 at retail with an operator subsidy). And where the iPhone was a more-or-less drop-in replacement for the phone you already had, nine years after Meta bought Oculus, VR is still a new device and a new category for almost everyone. Indeed, the Vision Pro actually looks a bit more like the original Macintosh, which was over $7,000 (adjusted for inflation) when it launched in 1984, and most people didn’t know why they needed one.
I think the price and the challenge of category creation are tightly connected. Apple has decided that the capabilities of the Vision Pro are the minimum viable product – that it just isn’t worth making or selling a device without a screen so good you can’t see the pixels, pass-through where you can’t see any lag, perfect eye-tracking and perfect hand-tracking. Of course the rest of the industry would like to do that, and will in due course, but Apple has decided you must do that.
This is the opposite decision to Meta’s: indeed Apple seems to have taken the opposite decision to Meta in most of the important trade-offs in making this. Meta, today, has roughly the right price and is working forward to the right device: Apple has started with the right device and will work back to the right price. Meta is trying to catalyse an ecosystem while we wait for the right hardware – Apple is trying to catalyse an ecosystem while we wait for the right price. So the Vision is a device pulled forward from years into the future, at a price that reflects that. It’s as though Apple had decided to sell the 2007 iPhone in 2002 – what would the price have been?
Why is that the minimum spec? I think this is driven by another opposite decision, and conceptually a more interesting one: this isn’t primarily a VR device. It’s an AR device. When you put on a Quest you’re placed into another world, but when you put on the Vision you don’t go anywhere. As Apple puts it, you look through it.
This is only possible because of the hardware that makes it cost $3,500 – a screen with enough pixel-density (at least double anything else on the market) that you can’t see pixels and can read text, and the custom chips and sensors to drive it (and without tethering it to a Mac Pro). I don’t know if you could have done that at any practical price ten years ago.
I think this is a binary difference, and I think it’s the core of why Apple believes this is the minimum acceptable spec. Better VR screens produce a better VR experience, obviously, but that’s on a spectrum right back to the original Rift back in 2011 (or indeed the 1990s). Conversely, the proposition that you don’t think you’re looking at a screen at all is binary – it’s more like the difference between using nav keys or a stylus and using multitouch. For VR, better screens are merely better, but for AR Apple thinks this level of display system is a base below which you don’t have a product at all.
Hence, one of the things I wondered before the event was how Apple would show a 3D experience in 2D. Meta shows either screenshots from within the system (with the low visual quality inherent in the spec you can make and sell for $500) or shots of someone wearing the headset and grinning.