The problem is that these companies keep wanting to put the processing in the glasses. The glasses should only have the minimum circuitry needed to send, receive, and render very high-definition video. All the processing should be done in another device (like the arm band) that is carried nearby, like the wireless microphones they use on TV or at concerts: https://www.amazon.com.mx/UHF-Wireless-Microphone-System-Kit... That way you are not CPU constrained.
I would say have a "cache level"-like combination (see the sketch after this list):
* very light computation done at the glasses level
* medium computation done at the brick-unit level
* hard/intensive computation done in the cloud.
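To make the idea concrete, here is a toy sketch in Python. It is purely illustrative: the task categories, thresholds, and the assumed ~10x cloud speedup are made up, not taken from any real glasses SDK.

```python
# Illustrative sketch only: tiers, thresholds and the assumed 10x cloud
# speedup are hypothetical, not from any real glasses SDK.
from enum import Enum, auto

class Tier(Enum):
    GLASSES = auto()  # minimal on-device work: decode/render the incoming video
    BRICK = auto()    # companion brick/phone: tracking, local inference
    CLOUD = auto()    # heavy lifting: large-model inference, mapping, search

def pick_tier(est_compute_ms: float, latency_budget_ms: float,
              cloud_rtt_ms: float) -> Tier:
    """Route a task to the cheapest tier that can still meet its deadline."""
    # Tiny, latency-critical work (compositing, reprojection) stays on the glasses.
    if est_compute_ms < 2 and latency_budget_ms < 20:
        return Tier.GLASSES
    # If the round trip alone would blow the budget, keep it on the brick,
    # even assuming the cloud is ~10x faster at the raw compute.
    if cloud_rtt_ms + est_compute_ms / 10 > latency_budget_ms:
        return Tier.BRICK
    return Tier.CLOUD

# e.g. object recognition with a relaxed 500 ms budget over a 60 ms round trip
print(pick_tier(est_compute_ms=300, latency_budget_ms=500, cloud_rtt_ms=60))  # Tier.CLOUD
```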
> The problem is that these companies keep wanting to put processing in the glasses.
I agree. The obvious choice is to offload to our insanely powerful phones. Unfortunately WiFi is too disruptive on mobile OSes, and raw Bluetooth... well, does that even need an explanation? Apple are probably the only ones who could deliver a seamless, high-bandwidth link and decent pairing at the moment. But they spent their prototype billions on a headset instead.
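For a rough sense of why raw Bluetooth doesn't cut it, some back-of-the-envelope numbers (the bitrates and throughputs below are ballpark figures, not measurements):

```python
# Ballpark figures only: typical encoded-video bitrates vs. practical link
# throughput, just to show the order-of-magnitude gap.
streams_mbps = {
    "1080p60 H.264 video to the glasses (typical)": 8.0,
    "720p30 camera feed back to the phone": 4.0,
    "IMU, audio and control traffic": 0.5,
}
links_mbps = {
    "Bluetooth Classic (practical)": 2.0,
    "BLE 2M PHY (practical)": 1.4,
    "Local Wi-Fi link (conservative)": 200.0,
}

needed = sum(streams_mbps.values())
print(f"Total needed: ~{needed:.1f} Mbps")
for name, capacity in links_mbps.items():
    verdict = "enough" if capacity >= needed else "not enough"
    print(f"  {name}: {capacity} Mbps -> {verdict}")
```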
On the other hand, though... do we really need to run multiple cameras and a realtime image-processing pipeline to say that the cacao on your countertop is, in fact, cacao? These AR “experiences” make cool demos, but once the novelty wears off, nobody wants to play planetarium or anatomy class for hours a day.
Note that without the whole AR part of it, there’s still some really cool hardware for all kinds of purposes. It can be really handy when you want or need both hands free: POV video for sports, a HUD and voice interface for e.g. cycling, maybe watching videos while working, anything requiring gloves (cold, wet hands, gardening), a todo-list in the corner of your eye when shopping, and so on. You could reduce the form factor and increase battery life significantly, even if you keep the accelerometer, gyro, projector, light sensor, cameras, etc. But for some reason, utility is not even a priority with these companies.
Yeah, I know, but that’s with a puck, which isn’t feasible for consumers. A phone already has sufficient compute, but Apple and Google would need to provide or open up the radio and pairing protocols.
I understand your position. It's the architecture that a lot of players in the space (e.g. Qualcomm) are doubling down on, and it seems intuitively obvious that we should tether smart glasses to ubiquitous phones.
The problem is that by doing so we make glasses secondary, auxiliary devices. You wouldn't leave your house without your phone, but if you forget your tethered glasses, well, no big deal.
The only way we move past the phone as the personal computing form factor is to make the alternatives standalone devices. If you could do everything you do on your phone, but on glasses, you might drop the phone habit. If you can't drop the phone habit, and there are only so many things you can take with you when you leave your house, AR glasses will always be an afterthought.
Is it feasible to do wireless transmission of video at extremely low latency?
Wireless microphones are not a good comparison because they are probably analog, but even if they are digital, a few dozen milliseconds of delay is imperceptible for audio in a way that it is not for video.
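The budgets really are on different scales. A rough comparison (the ~20 ms motion-to-photon target is the commonly cited comfort threshold for head-mounted displays; the other numbers are illustrative):

```python
# Rough latency-budget arithmetic: a delay that is harmless for a wireless
# microphone eats most of the budget for head-tracked video.
def frame_time_ms(refresh_hz: float) -> float:
    """Time available to produce one frame at a given refresh rate."""
    return 1000.0 / refresh_hz

motion_to_photon_budget_ms = 20.0  # commonly cited comfort threshold for AR/VR
audio_delay_ms = 30.0              # "a few dozen milliseconds", imperceptible for audio
wireless_hop_ms = 12.0             # illustrative one-way cost of the radio link

for hz in (60, 90, 120):
    print(f"{hz} Hz display: {frame_time_ms(hz):.1f} ms per frame")

print(f"Audio delay of {audio_delay_ms} ms: fine for a microphone")
print(f"Video budget left after a {wireless_hop_ms} ms radio hop: "
      f"{motion_to_photon_budget_ms - wireless_hop_ms:.1f} ms for everything else")
```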