So VR is hard enough - to avoid jitter that makes users feel sick you have to respond to a user's head movement, render a new frame with the new information the user should see, respond to any button presses, then draw the frame, in under 14-20ms.
Magic Leap was always a much more difficult problem... they have to respond to a user's head movement, parse the scene the user is looking at (which could be anything), figure out what to draw and where to draw it on the user's environment, then render, all in the same 14-20ms window.
Compounding that they have to do it with a much weaker CPU/GPU/battery than Oculus and friends, which use a phone, or are tethered to a PC with a $1,000 GPU. You wear Magic Leap on your head, no cables.
On its face I was always sort of surprised to hear about the good demos; it always seemed like such a difficult problem, and I know the VR folks are having a tough enough time with it.
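(Where that 14-20 ms figure comes from, roughly: it's just the frame period at the refresh rates these headsets target, give or take some scanout slack.)

    # Frame budget is bounded by the display refresh period.
    for hz in (60, 72, 90, 120):
        print(f"{hz:3d} Hz -> {1000 / hz:.1f} ms per frame")
    # 60 Hz -> 16.7 ms, 72 Hz -> 13.9 ms, 90 Hz -> 11.1 ms, 120 Hz -> 8.3 ms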
That's not the big unsolved problem with augmented reality. Those are all problems VR systems can already solve with enough money and transistors behind them.
The AR big problem is displaying dark. You can put bright things on the display, but not dark ones. Microsoft demos their AR systems only in environments with carefully controlled dim lighting. The same is true of Meta. Laster just puts a dimming filter in front of the real world to dim it out so the overlays show up well.
Is there any AR headgear which displays the real world optically and can selectively darken the real world? Magic Leap pretended to do that, but now it appears they can't. You could, of course, do it by focusing the real scene on a focal plane, like a camera, using a monochrome LCD panel as a shutter, and refocusing the scene to infinity. But the optics for that require some length, which means bulky headgear like night vision goggles. Maybe with nonlinear optics or something similarly exotic it might be possible. But if there were an easy way to do this, DoD would be using it for night vision gear.
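Back-of-the-envelope on why that relay gets long (the focal length below is purely illustrative, not any shipping design): the scene has to be imaged onto the shutter plane and then re-collimated, which costs at least two focal lengths of straight-line path per eye before any folding.

    # Lens 1 images the distant scene onto the LCD shutter at its focal plane;
    # lens 2 sits one focal length behind the shutter to re-collimate to infinity.
    f_mm = 25                # assumed focal length of each relay lens
    path_mm = 2 * f_mm       # shutter plane sits between the two lenses
    print(f"~{path_mm} mm of optical path per eye, plus eye relief and lens thickness")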
> The AR big problem is displaying dark. You can put bright things on the display, but not dark ones.
There is actually a solution (EDIT: nope, see replies), but it's tricky: if you stack two sheets of polarizing filter on top of each other, then rotate one relative to the other, they pass nearly all of the (already-polarized) light at 0 degrees of rotation and block nearly all of it at 90 degrees. It's like special paper whose brightness depends on how much it's rotated relative to the sheet behind it. https://www.amazon.com/Educational-Innovations-Polarizing-Fi...
So you could imagine cutting a series of small circular patches of polarizing filter and using them as "pixels". If you had an 800x600 grid of these tiny filters, and a way to control them at 60 fps, you'd have a very convincing way of "displaying dark" in real time.
It'd require some difficult R&D to be viable. Controlling 800x600 = 480,000 tiny surfaces at 60fps would take some clever mechanics, to put it mildly. Maybe it won't ever be viable, but at least there's theoretically a way to do this.
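For what it's worth, the attenuation of one of those rotating-filter "pixels" is just Malus's law; a quick sketch (the angles are illustrative):

    import math

    def transmission(theta_deg):
        """Fraction of already-polarized light passed by the second sheet,
        rotated theta degrees from the first (Malus's law). The first sheet
        itself eats about half of unpolarized ambient light."""
        return math.cos(math.radians(theta_deg)) ** 2

    for theta in (0, 30, 45, 60, 90):
        print(f"{theta:2d} deg -> {transmission(theta):.2f} of the light gets through")
    # 0 -> 1.00, 30 -> 0.75, 45 -> 0.50, 60 -> 0.25, 90 -> 0.00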
A minor problem with this approach is that the polarizing filter may affect the colors behind it. But humans are very good at adapting to a constant color overlay, so it might not be an issue.
The problem with that solution is optical, I believe. It would work if you were able to put such a filter directly on your retina, but when you put it earlier in the path of the light, before images are focused, you cannot selectively block individual pixels as they appear on your retina. As a result, the dark spots will look blurry.
(Also, if the pixels are dense enough I imagine you'll get diffraction.)
Here is Michael Abrash's better explanation:
>“But wait,” you say (as I did when I realized the problem), “you can just put an LCD screen with the same resolution on the outside of the glasses, and use it to block real-world pixels however you like.” That’s a clever idea, but it doesn’t work. You can’t focus on an LCD screen an inch away (and you wouldn’t want to, anyway, since everything interesting in the real world is more than an inch away), so a pixel at that distance would show up as a translucent blob several degrees across, just as a speck of dirt on your glasses shows up as a blurry circle, not a sharp point. It’s true that you can black out an area of the real world by occluding many pixels, but that black area will have a wide, fuzzy border trailing off around its edges. That could well be useful for improving contrast in specific regions of the screen (behind HUD elements, for example), but it’s of no use when trying to stencil a virtual object into the real world so it appears to fit seamlessly.
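The size of that "translucent blob" falls straight out of the geometry: an occluder that close to the eye shadows rays over roughly pupil-diameter divided by occluder-distance radians. Rough numbers (the pupil size is assumed):

    import math

    pupil_mm = 4.0      # typical indoor pupil diameter (assumed)
    occluder_mm = 25.0  # LCD sitting roughly an inch in front of the eye
    blur_deg = math.degrees(pupil_mm / occluder_mm)
    print(f"penumbra of a blocked pixel: ~{blur_deg:.0f} degrees across")  # ~9 degrees

Which lines up with Abrash's "several degrees across".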
What about Near-Eye Light Field Displays[1][2]? From what I've seen those look to have promise in solving some focus problems and some of the problems with how cumbersome most VR/AR displays are. As a bonus, they can correct for prescriptions.
The answer is a higher-resolution screen plus some clever adaptive optics and software. The problem is that even 8K screens do not come close to the required resolution... And you also want a fast refresh rate.
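Rough numbers behind the "8K isn't close" claim, using the usual ~60 pixels-per-degree rule of thumb for eye-limited sharpness (the FOV figures are approximate, not from the article):

    ppd = 60                    # rough eye-limited sharpness, pixels per degree
    fov_h, fov_v = 160, 135     # approximate monocular field of view, degrees
    px_h, px_v = ppd * fov_h, ppd * fov_v
    print(f"~{px_h} x {px_v} per eye = ~{px_h * px_v / 1e6:.0f} MP")  # ~78 MP per eye
    print(f"8K UHD panel: ~{7680 * 4320 / 1e6:.0f} MP total")         # ~33 MP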
"We made black with light. That's not possible, but it was two artists on the team that thought about the problem, and all of our physicists and engineers are like "well you can't make black, it's not possible." but they forgot that what we're doing - you know the whole world that you experience is actually happening in [the brain], and [the brain] can make anything." - Rony Abovitz in 2015.
No, they're just putting out enough light to override the background, then showing dimmer areas for black. If they have a display with enough intensity and dynamic range, they can pull this off. Eye contrast is local, not absolute, so this works within the dynamic range of the human eye.
> No, they're just putting out enough light to override the background, then showing dimmer areas for black
Right, that's how all existing HMD systems work - but generally not with the whole FOV re-rendered so it's not so cut and dry.
Note that such a system doesn't give you black ever. It gives you muddy [insert lens color on the grey spectrum].
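A quick numeric illustration of the "muddy grey" point (all the luminance numbers below are made up): an optical see-through display can only add light, so its "black" is just the attenuated real world.

    background_nits = 250.0      # a typical indoor wall (assumed)
    combiner_transmission = 0.6  # fraction of the real world the optics pass (assumed)
    display_max_nits = 500.0     # light the display can add on top (assumed)

    seen_black = background_nits * combiner_transmission        # display adds nothing
    seen_white = seen_black + display_max_nits
    print(f"'black' pixel: {seen_black:.0f} nits, 'white' pixel: {seen_white:.0f} nits")
    # 'black' never drops below 150 nits here, so it reads as washed-out grey.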
The case you describe ends up with what is practically an opaque lens that is replicating the environment that is not virtual. So you might as well just use VR with camera pass through at that point.
I don't know. It's similar. I wonder what the problems are with using it, then?
One idea that comes to mind is that a regular screen leaks light. If you adjust the brightness on your laptop as low as it will go, then display a black image, there's still a massive difference between what you see there vs when the screen is completely powered off. But if you take two sheets of polarizing filter and stick them in front of your screen, you can get almost perfect blackness. That's why I thought it was a different idea, since the difference is so dramatic. You can block almost all the light, whereas a regular screen doesn't seem to be able to get that kind of contrast.
Let me be clearer: being able to show black is super important for AR. For one thing, it's a pain in the ass to read text on a transparent HMD, because you never know what colors will be behind the letters. You can make some educated guesses, and put up your own background colors for the text, but since everything has opacity, it'll always matter what's physically behind the "hologram".
Yes, yes, it's still not a hologram, but the popularized version of holograms (from things like Star Wars) is still the best way to think about AR displays.
If you can show SOME black, text legibility becomes a lot easier. Everything can look way better, even if the world always shines through enough to see it.
If you can show PURE black, enough to ACTUALLY obscure the world, now you can erase stuff. Like indicator lights, panopticon optics, and people.
Right. Pictures of what people see through the goggles seem to be either carefully posed against dark backgrounds (Meta [1]) or totally fake (Magic Leap [2], Microsoft [3]). It's amazingly difficult to find honest through-the-lens pictures of what the user sees. When you do, they're disappointing.
The part that's surprising to me is how instantly popular of a startup it became with so little information. Was this "demo" they gave so damn good that all the investors (some really reputable ones such as Google) started throwing money at it, without doing their research to see how real it was?
Perhaps growth just isn't a great metric for future growth. For example, tons of food-delivery and pet-product startups have exponential growth early on and evaporate a bit later when the product ceases to be sold at cost and a newer competitor appears.
>Is there any AR headgear which displays the real world optically and can selectively darken the real world?
They've said they have a solution, and it's more optical illusion than optics. They don't darken the real world, but will make you perceive something similar.
That's one approach - just add a camera, mix the camera image with the graphics, and feed it to a VR headset. Works, but it's more intrusive for the user than the AR enthusiasts want.
The main issue with this approach is that the video pipeline adds dozens of milliseconds of latency, and it becomes awkward to interact with the physical environment. You couldn't play AR pong for example.
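An illustrative pass-through budget (every number below is an assumption, not a measurement of any particular headset), just to show how it adds up to "dozens of milliseconds":

    # Rough additive latency budget for camera pass-through AR, in milliseconds.
    budget_ms = {
        "camera exposure": 8,        # ~half a 60 fps exposure window (assumed)
        "sensor readout + ISP": 6,
        "transfer to GPU": 2,
        "composite + render": 8,
        "display scanout": 8,        # ~half a 60 Hz refresh (assumed)
    }
    print(f"~{sum(budget_ms.values())} ms photon-to-photon")  # ~32 ms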
They are called LCD screens. The problem is having a high-resolution one. (Focus problems can be solved with adaptive optics and by measuring the refractive state of the eye's lens with an IR laser.)
Two of the major unsolved problems he talks about are latency and the ability to draw black - I would be surprised if magic leap had solved both of these alone and in secret.
Their vapid press releases didn't inspire confidence either.
> So VR is hard enough - to avoid jitter that makes users feel sick you have to respond to a user's head movement, render a new frame with the new information the user should see, respond to any button presses, then draw the frame, in under 14-20ms.
A bit of a tangent, but: for some people, it's impossible for VR to avoid making them feel sick. It's fundamental to them wearing a VR headset rather than a technical challenge to be overcome. It's related to the fact that VR headsets can't project photons from arbitrary positions with arbitrary angles towards your eyes (i.e. a screen is planar, but the real world is a 3D volume). Turns out, evolution has ensured our bodies are very good at determining the difference between a screen's projection and the real world, resulting in a sick feeling when there's a mismatch.
I think that when people accept it's inevitable some subset of users will get sick, the VR ecosystem will grow at a faster rate.
Technically this would be possible to overcome with a light field display. Light field displays are currently very far off from becoming commercially viable.
Out of curiosity, what would the tech stack look like?
What I was picturing was:
- LCD display
- Patterned IR image projected onto eye
- Camera to read the IR image reflected off the retina, to figure out the eye's current accommodation (the focal state of its lens)
- Realtime chip to adjust LCD display based on focal depth, to simulate light field image.
... The real issue is that response latency would have to be <10ms probably to avoid being disorienting.
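A hypothetical skeleton of that loop, just to show its shape; all of the callback names are made up, not any vendor's API:

    import time

    FRAME_BUDGET_S = 0.010  # the <10 ms target mentioned above

    def varifocal_loop(read_ir_autorefractor, render_for_focus, present):
        """All three arguments are hypothetical callbacks supplied by the
        device stack; this only shows the shape of the control loop."""
        while True:
            t0 = time.perf_counter()
            diopters = read_ir_autorefractor()   # current accommodation state
            frame = render_for_focus(diopters)   # adjust the LCD image for that depth
            present(frame)
            if time.perf_counter() - t0 > FRAME_BUDGET_S:
                print("missed the focus-tracking budget this frame")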
It's hotly debated, and the literature is often in conflict as to the exact cause of simulator sickness. Regardless of the exact reason, sickness of roughly half the population appears to be somehow fundamental to VR.
> In a study conducted by U.S. Army Research Institute for the Behavioral and Social Sciences in a report published May 1995 titled "Technical Report 1027 - Simulator Sickness in Virtual Environments", out of 742 pilot exposures from 11 military flight simulators, "approximately half of the pilots (334) reported post-effects of some kind: 250 (34%) reported that symptoms dissipated in less than 1 hour, 44 (6%) reported that symptoms lasted longer than 4 hours, and 28 (4%) reported that symptoms lasted longer than 6 hours. There were also 4 (1%) reported cases of spontaneously occurring flashbacks."
Expected incidence and severity of simulator sickness in virtual environments

> In an analysis of data from 10 U.S. Navy and Marine Corps flight simulators, Kennedy, Lilienthal, Berbaum, Baltzley, and McCauley (1989) found that approximately 20% to 40% of military pilots indicated at least one symptom following simulator exposure. McCauley and Sharkey (1992) pointed out that pilots tend to be less susceptible to motion sickness than the general population due to a self-selection process based on their resistance to motion sickness. Since VE technologies will be aimed at a more general population, such selection against sickness may not occur. Thus, McCauley and Sharkey suggested that sickness may be more common in virtual environments than in simulators.

Findings

> Although there is debate as to the exact cause or causes of simulator sickness, a primary suspected cause is inconsistent information about body orientation and motion received by the different senses, known as the cue conflict theory. For example, the visual system may perceive that the body is moving rapidly, while the vestibular system perceives that the body is stationary. Inconsistent, non-natural information within a single sense has also been prominent among suggested causes.

> Although a large contingent of researchers believe the cue conflict theory explains simulator sickness, an alternative theory was reviewed as well. Forty factors shown or believed to influence the occurrence or severity of simulator sickness were identified. Future research is proposed.

Vection

> One phenomenon closely involved with simulator sickness is that of illusory self-motion, known as vection. Kennedy et al. (1988) stated that visual representations of motion have been shown to affect the vestibular system. Thus, they conclude that the motion patterns represented in the visual displays of simulators may exert strong influences on the vestibular system. Kennedy, Berbaum, et al. (1993) stated that the impression of vection produced in a simulator determines both the realism of the simulator experience and how much the simulator promotes sickness. They suggested that the most basic level of realism is determined by the strength of vection induced by a stimulus. For a stimulus which produces a strong sense of vection, correspondence between the simulated and real-world stimuli determines whether or not the stimulus leads to sickness. Displays which produce strong vestibular effects are likely to produce the most simulator sickness (Kennedy, et al., 1988). Thus, Hettinger et al. (1990) hypothesized that vection must be experienced before sickness can occur in fixed-base simulators.

> While viewing each of three 15-minute motion displays, subjects rated the strength of experienced feelings of vection using a potentiometer. In addition, before the first display and after each of the three displays, the subjects completed a questionnaire which addressed symptoms of simulator sickness. Of the 15 subjects, 10 were classified as sick, based on their questionnaire score. As for vection, subjects tended to report either a great deal of vection or none at all. In relating vection to sickness, it was found that of the 5 subjects who reported no vection, only 1 became sick; of the remaining 10 subjects who had experienced vection, 8 became sick. Based on their results, Hettinger et al. concluded that visual displays that produce vection are more likely to produce simulator sickness. It is also likely that individuals who are prone to experience vection may be prone to experience sickness. It was mentioned earlier in this report that a wider field-of-view produces more vection and, thus, is believed to increase the incidence and severity of sickness (Kennedy et al., 1989). Anderson and Braunstein (1985), however, induced vection using only a small portion of the central visual field and 30% of their subjects experienced motion sickness.
Take these studies with a grain of salt though, as display technology, IMUs, external tracking and, more importantly, computing power are orders of magnitude better than what was available in 1995.
Certainly good points. It also might be a mistake to discount studies performed under controlled conditions, though. If the cause of simulator sickness is illusory motion, then the relatively crude technologies available in 1995 may have been sufficient to induce the same effect we're observing today.
I've never met anyone who gets sick based entirely off vergence-accommodation conflict. I've certainly talked to people who claimed it was an issue but would happily go to a 3d film so I assume they were misattributing the cause of their sickness. Maybe having the peripherals line up for 3d films is enough.
I'd be interested to hear some testimonials on how it affects people though. One of the interesting things about VR is finding out the myriad of different ways people are sensitive to small phenomena.
I own a Vive, and get sick using that after half an hour maybe (even though it's not too bad and I can keep using it), and it takes me a few hours to "reset" afterwards.
I don't get sick in 3D movies (but I don't enjoy them, and never choose to see one in 3D if there's an alternative).
The reason I don't get sick in 3D movies I assume is because it's still just a screen, and any head movement I make is still 100% "responsive", with no delay or mismatch.
Edit: Sorry, I read your response a bit lazily. Yes, it's not related to vergence-accommodation, at least not exclusively.
I worked on Hololens stuff at Microsoft. It does everything you describe, and IMO does it really well. It's fairly light and is wireless. The tracking is excellent. A lot of the scene parsing is done in special hardware, with low latency. It updates a persistent model of the environment, so it isn't rebuilding everything every frame.
You don't need to parse the environment from scratch 60 times every second. As long as you get head tracking right you can just focus on what's moving and assume it will keep moving the same way. Further, the demos all seem to be set in a fairly static environment. Remember, you don't need to render the background, so a simplified internal model works fine with AR.
If it works near a clothesline blowing in the wind, that's really hard and impressive.
The problem is even worse. In VR you're competing on latency with human muscle feedback and what your vestibular system is telling you.
In AR, you're competing on latency with the human visual system that you're trying to augment, which is a race you can't win.
The only thing you can do is try to be very fast (<= 10 ms) so the latency becomes unnoticeable. Unfortunately right now this isn't possible unless you custom-engineer everything in your vision path optimized for latency. Fun R&D project, but enormously time- and capital-intensive with no guarantees of success.
A trick I've heard at least one of the VR devices uses: instead of re-rendering the entire scene, since most of the movement is just head tracking, it renders into a larger buffer and scrolls around in that buffer between frames to get a quicker response.
That only works for very, very minimal differences. It's the Oculus that does this. It guesses where your head will be, renders the scene at that location, and distorts the image slightly to match its guess vs reality. It also introduces a bit of softness, but seems to work pretty well. But it does re-render the whole image each frame.
I wonder if it does it to create intra-frame changes from rotation and/or motion of the head. If it does then that could help a lot with the experience
I have no idea about the PSVR. It's probably the VR device I've heard the least about (technically). I own one, and it works pretty well, but I don't know much about it.
From my understanding this won't do anything to reduce power processing, it's all about reducing latency. That final transform happens at the last moment, with the most up-to-date head tracking information. I kind of figured it happened in the headset hardware itself in order to reduce the latency as much as possible.
That would only work if the scene was a single flat surface and you were only moving in parallel to it. If you were doing that you may as well not use VR and just look at a normal monitor. Otherwise it wouldn't match your motion, which is the problem you're trying to solve in the first place!
Actually, moving parallel to it would make the artifacts worse, as it'd distort the projection based on distance (i.e. parallax). With a rotation, if you're rendering onto a sphere, doing that rotation has comparatively little distortion. It still needs to be re-rendered to be correct, but this allows for better intra-frame changes (say the game runs at 60 Hz and you need 120 Hz) with less computation. As a sibling (to you) commenter pointed out, this makes the image slightly "fuzzy" or softer because you end up moving the image by a fractional amount of pixels. As I understand it though, that extra intra-frame can mean the difference between some people getting a VR migraine or not.
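For the curious, one common way to do the rotation-only correction on a flat eye buffer is a homography: re-map the already-rendered frame by K·R·K⁻¹ using the newest head orientation. A minimal NumPy sketch (the intrinsics and the 1-degree yaw are placeholders):

    import numpy as np

    def rotation_reprojection(K, delta_R):
        """Homography mapping pixels of an already-rendered frame to where
        they belong after a small head rotation delta_R. Pure rotation only,
        which is why translation/parallax errors remain."""
        return K @ delta_R @ np.linalg.inv(K)

    # Placeholder intrinsics for a 1080x1200 eye buffer and a 1-degree yaw.
    K = np.array([[1000.0, 0.0, 540.0],
                  [0.0, 1000.0, 600.0],
                  [0.0, 0.0, 1.0]])
    yaw = np.radians(1.0)
    delta_R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                        [0.0, 1.0, 0.0],
                        [-np.sin(yaw), 0.0, np.cos(yaw)]])
    H = rotation_reprojection(K, delta_R)
    p = H @ np.array([0.0, 0.0, 1.0])
    print(p[:2] / p[2])  # where the old top-left corner lands in the warped frame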
The latency / motion sickness issues probably aren't as bad, since you still see the surrounding environment to get a stable bearing. The display tech sounds very hard, though.
I could never really figure out how the variable focus lightfield worked.
I sort of assumed it was like a stack of images in the Z plane. So not only are they doing everything you mentioned, they are also rendering the 3D scene in slices?
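If it really is a small stack of focal planes (the Guttag piece mentioned elsewhere in this thread talks about two sequential ones), the slicing itself would be conceptually simple: bucket content into the nearest plane by optical depth. A toy sketch, not anything confirmed about their renderer:

    # Bucket content into a small set of fixed focal planes by optical depth.
    focal_planes_m = [0.5, 3.0]   # e.g. a near and a far plane (illustrative)

    def nearest_plane(depth_m):
        # Compare in diopters (1/distance), since focus error is roughly linear there.
        return min(focal_planes_m, key=lambda p: abs(1 / p - 1 / depth_m))

    for depth in (0.4, 1.0, 2.0, 10.0):
        print(f"object at {depth} m -> {nearest_plane(depth)} m plane")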
Isn't the head-tracking separate from the graphics rendering? I mean, you could just render onto an area that's somewhat larger than your fov, and select some area every couple of ms based on the head movement.
You want to integrate rendered objects onto the physical world, so you have to know exactly the pose these objects should be rendered at as if seen from your point of view. The objects transforms are piloted by the head tracking. Usually with pose prediction thrown in the mix to squash latency.
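The prediction part is usually just dead reckoning off the gyro: extrapolate the pose a frame or so ahead and render for that. A deliberately simplified 1D sketch (real implementations do this with quaternions):

    def predict_yaw(yaw_deg, yaw_rate_deg_s, lookahead_s=0.016):
        """Extrapolate head yaw one frame ahead, assuming the gyro-measured
        angular velocity stays constant (1D stand-in for the quaternion math)."""
        return yaw_deg + yaw_rate_deg_s * lookahead_s

    # Head turning at 200 deg/s, predicting one 60 Hz frame ahead:
    print(predict_yaw(30.0, 200.0))  # 33.2 -- render for where the head will be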
Not surprising, given that the hype train was running at full steam. A friend of a friend interviewed there and eventually walked away saying "Wow. That is a lot of Kool-Aid."
Meanwhile, teams like Oculus are reportedly trying to sidestep AR by investing in "mixed reality", which will bring live camera feeds into VR, with screens still covering both eyes.
Interesting for sure, but from what I've heard from people using the Rift, it's the fixed focus that gives them headaches.
I know of a company that makes cut scenes for AAA games; most of their people can't wear the Oculus for more than a few minutes. Big headaches, motion sickness, etc...
Before you jump on me, this could be due to the stuff they are trying to show on the Rift -- would content make a difference? Sudden shifts in scene from close to far off?
I do VR projects for work occasionally and previously was in film when they were pushing stereo. Content makes a huge difference. On a flat screen the most dramatic spatial contrast is left-to-right; in stereo it's near-to-far. If you're exaggerating something, you'd use those. It's way more fatiguing on your eyes to focus near to far than left to right -- it also takes like 10x as long for people to focus on and recognize something that changes near-to-far.
FPS games are awful for VR when you move around with a joystick. Also, a lot of the "dramatic" things that get used to try and sell games make you dizzy. There's also some acclimation. They say devs are the worst subjects for testing whether some VR demo will cause sickness.
I also think people underplay the motion sickness that film and FPS video games can cause in people who are unacclimated (FPS games didn't have big money pushing them in the early 90's and were on tiny, low-res screens). I know I've felt motion sickness playing flight simulators and certain FPS games, especially at first. Movies like Blair Witch and Cloverfield were criticised for making people feel sick.
To address the focus problem, I've been wondering why there's not much talk about light fields for VR (maybe because of patents?).
First there's plain old motion sickness. The first time I flew in Elite Dangerous in VR, I tried swooping into a landing pad slot. It was a maneuver I could do on autopilot from playing on a flat screen, but when I pitched up hard, my stomach dropped. My body was expecting G's it just wasn't feeling. I had to take the Rift off and take some deep breaths to ward off the nausea. I don't usually get motion sickness. I can read on shaky buses with no problems.
Second is simulator sickness, which I think comes from latency and resolution issues? This also hits people differently. I have friends who can only do minutes with the Rift on. I can last as long as my face can take the pressure of the headset, meaning an hour or two at a time.
All in all, yeah, some people just won't be able to handle current-gen VR for various reasons.
Does AR explicitly exclude camera+screen? I always thought that stuff like pokemon go and any similar "paste some 3d over a camera feed in real time" stuff counted as augmented reality.
Wow. So if Magic Leap is vaporware, does that leave Microsoft as the only ones publicly making progress in AR? Plenty of companies are shipping VR, but if AR ends up being the future, HoloLens is building up quite a lead.
Seems like SnapChat isn't doing too bad. Sure, maybe their tech isn't as advanced... but they've begun solving the equally hard problem of getting it into people's hands naturally and getting them to use it and even pay for it.
Their current-gen glasses don't do any AR - but they've been acquiring and hiring very aggressively in the space for a couple of years now. It's pretty transparent what their long-term goal is with their glasses.
I think it's actually a really smart approach. MS spent ~2 billion and multiple years of R&D to get the HoloLens built. Snapchat's coming at it from another angle: start simple and build up with each successive generation. The fact that they already have people clamoring for something this basic is telling...
There's also Meta, which is taking preorders for its second developer kit at $949. It has a significantly larger field of view (90 degrees) but must be tethered to a computer.
"Magic Leap is vaporware"
Where are you getting that? The article seems to imply that feature to feature they are matching the Hololens and that is the worst case scenario.
Edit: On second thought, they may be meeting the exact definition of vaporware in that they are not manufacturing it yet[1]. No indication that they won't be able to, though.
Well, it has gone a long time without shipping, and as it stands now it's worse than its competition, not to mention they have way fewer resources than their competition. And probably the main reason they haven't shipped yet is exactly because of all these reasons. Which means the more time they spend not shipping, the harder it will become for them to ship.
I think it's fair to speculate that they will end up as vaporware. Only alternative I can see is them pivoting to a more niche field so that they don't directly compete with hololens.
People started saying Duke Nukem Forever was a vaporware long before the parent company actually admitted it.
The fact that they're still far away from getting to market... their platform strategy of making devs work in Seattle/SF and timeshare the devices... makes it seem as if they're going to fail super hard. HoloLens is clearly out front. The device itself is really impressive. And I imagine V2 of the hardware is coming soon. Wait for more credible players to enter the space for real competition.
HoloLens has the lead, but whoever gets to the consumer market first will still have a huge advantage. The only thing stopping MS from getting to that point is having a consumer-friendly price.
I'm not sure that there will be a significant first mover advantage here. Everybody thought that the Oculus CV would be the only headset to get because the DK2 was pretty cool, then the Vive was released and people realized how important motion controllers were for immersion, and Oculus got creamed. I think that there will be bits of tech like this for each generation of VR/AR, one small set of features that's obviously better, which will win the generation.
ODG has had AR glasses on the market for longer than Hololens, with an actual glasses form factor, running on Android, and should be announcing their new model at CES2017.
I went to https://membership.theinformation.com/subscribe because it looks like something I'd pay for. I hate the trend of not putting pricing upfront. I'm just going to assume I can't afford it.
That said, $40/month is a bit much. I'm paying less for the IDE that I'm using every single day. I doubt I'd be getting the same value out of that website.
Yep. The Information did a ton of deep investigative work on this. If you find this article interesting you should consider supporting them by subscribing.
Sadly the way journalism works today is one outlet does a ton of time-consuming, expensive digging. Then within minutes of posting, every other outlet copies the story
Karl Guttag, one of the Magic Leap critics and analysts, has an interesting update on what he thinks ML will actually release [0]. The product they might release is actually different from what all the hype has been about: LCOS microdisplay, two sequential focus planes, variable focus, photonics chip.
> But at least one of these videos — showing an alien invader game that let the wearer of the supposed headset or glasses make use of real-world objects — was created by visual effects studio Weta Workshop. Prior to today, it was believed Weta had simply created the visual assets for the game. However, The Information reveals the entire video was created by the studio.
Was this ever really in doubt? You don't have to be a physicist or AR expert to note that throughout the video they occlude bright background colours with dark AR elements - somehow projecting 'black light'. Whilst I wouldn't expect the man on the street to pick this up it would be nice if journalists about to pen a breathless puff piece would at least give the subject matter 30 seconds of consideration.
Here is the video in question: https://www.youtube.com/watch?v=kPMHcanq0xM It doesn't have any disclaimers that this was faked -- some earlier videos from Magic Leap did have disclaimers.
It is obviously too high-resolution and pixel-perfect to be from a headset display, though. So it didn't fool me, but I think the way it is presented could be quite misleading to folks without a 3D background.
Yeah, it's funny how they think they got some scoop here. It was obviously a concept video. Totally obvious, and not just from what you mentioned, just the look and feel of the whole thing in general.
I always find it instructive to read glassdoor reviews about a company I'm interested in. There's obvious astroturfing and upset people, but lots of interesting information also leaks out there.
Further behind than expected. I have always suspected that this is a giant money pit with zero chance of success. Another decade or two and maybe technology will catch up to the desire to understand the world around you in real time, but today it's not possible. It's not magic, it's dreaming.
Hmm interesting. So the article makes it sound like one of their faked videos was used to bring in software developers but if they come in and it's simply not working anywhere near the level shown, wouldn't that seem suspicious?
I don't think this necessarily means the company is a scam or anything like that but it seems a little...suspect at least to me if I was interviewing with them.
Curious if the community has an issue with it or not.
I can't help but have similar feelings about this, and I think your question about whether "the community has an issue with it" is especially pertinent.
Many journalists and internet pundits suggest that Magic Leap's technology was obviously over-hyped. See for example the title of The Verge's article. Adding further evidence that it was too good to be true, they used faked videos to attract software developers, as you have pointed out.
So how is it that Google, Alibaba, and Andreessen Horowitz were convinced to hand over hundreds of millions of dollars each to this company? Was it fraud, over-promising on behalf of Magic Leap's founders, or could these huge VC firms not see what everyone else could see? Is it really that easy to secure a billion dollars in funding? The question as to "what's going on here" relates as much to the startup / VC community as it does to Magic Leap.
Nah, an engineer working in VR/AR/Computer vision would know the video was not a truthful representation of the actual thing, but more something they were "aiming at", or a presentation of a vision.
I don't think it really fooled anyone into working for them. Not so sure about investors.
There is a ton of AR/VR hype driven by Adobe After Effects and not real product results. Now whenever I see AR/VR tech that follows this pattern, I stop paying attention. It's unfortunate for the entire industry.
It's not surprising, because they are not basing it on any traditional, widely manufactured components. It would be as if you promised to ship flat-panel displays before LCD production lines had even been created, when LCDs only existed in labs.
So, either it's a scam, or, they legitimately plan to create a new industry which requires multiple years capital investment to get production off the ground.
>>The company has raked in $1.4 billion in funding
Let that sink in. Is it a record for startup funding with no product or service released? I don't recall how far along Uber was when they passed that mark.
And what exit number do they need to be considered a success for the VCs?
I thought they had given up a couple of years ago on the gimbal-mounted fiber with the piezo collar. They raised the last round to buy a MEMS fab which I assumed was to do a DLP-like thing down a fibre laminate.
And in fact the Guttag piece linked on this comment page also suggests this (it says the DLP demo was the most impressive). I doubt they could use actual DLP (because the TI DLP group is a pain to work with), but it would make sense that their MEMS chip would also be a micromirror array.
From Dalv-hick on reddit (Jack Hayes @Halo_AR_ltd):
> I presume they're going for something similar to what I'm building in one of the implementations, using switching polarisation-sensitive lenses in a stack. The other thing they mentioned, and which is feasible immediately, is using a liquid lens, if the input is small. This would be used for a 45-degree combiner but is too small for a waveguide exit or for Oculus-size lenses.

---

> Building a single FSD isn't difficult. Building a single FSD with wide deflection (more pixels) and fast oscillation (more frames, made necessary to counteract the wider deflection) is considerably more difficult. Building an FSD with the previous attributes and removing jitter/distortion is more difficult. Doing all the previous for an assembled array to line up perfectly is more difficult again. Finally, doing an array, particularly a 2D array at scale, and expecting a significant amount of them to actually work is crazy.

> The FSD improvements FredzL points to are at odds with each other. An FSD array can either use each unit to create a different focus plane/view or add extra pixels to the 2D resolution for any one instant of time.

> One of the University of Washington papers does hint at an alternative though. In the paper it shows a fibre array (line) being placed in a thick block for better alignment. This would prevent the fibres being vibrated to create the image. Instead, on a single line, in a different paper they mention trying acousto-optic modulators. Dennis Smalley had a good paper on the subject, in which ultrasonic waves travelling through a class of materials will locally change the refractive index. Light bends by different amounts in different refractive indices, thereby allowing the possibility of image formation.

> Solid-state scanning arrays or even multiple beams on a MEMS mirror are probably better than FSDs. They might be able to use FSD; I don't think it's a good idea.
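For context on why wide deflection and fast oscillation fight each other: in a spiral-scanned fiber display, the number of resolvable rings per frame is roughly the resonant scan frequency divided by the frame rate (minus retrace time), so more pixels means a faster fiber or a slower frame. The numbers below are illustrative, not Magic Leap's or UW's actual figures:

    resonant_hz = 11_000    # assumed fiber resonance
    frame_hz = 60           # target frame rate
    retrace = 0.3           # assumed dead time to damp and restart the spiral
    rings = resonant_hz / frame_hz * (1 - retrace)
    print(f"~{rings:.0f} spiral rings -> ~{2 * rings:.0f} 'pixels' across the image")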
> $1.4 billion in funding at a valuation of $4.5 billion
Man this number doesn't look good in more than one way. Valuations aside, I'm guessing the cumulative dilution for everyone in the company is super high
What is troubling about AR/MR which I feel a lot of people overlook is the limited FOV. From the demo videos shown by Magic Leap (and Hololens) on YouTube it feels as if the entire FOV is available, which is barely the case - the Hololens has an FOV of 120x120 and Magic Leap is said to have a lower angle than that.
Verge covered Hololens and they mention [1] the FOV as a limitation.
Unfortunately Hololens FOV is more like 30×20°. Other AR players are mostly in the same ballpark at the moment. The good thing is that this gives very high pixel density. The other thing is that you don't see the edges unless the 3D render is cut off, so it may not feel as tunnelled as in VR.
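The silver lining in those numbers: spreading even a modest panel over a ~30° field gives fairly high angular resolution. Rough math (the panel resolution is assumed, not quoted from either vendor):

    panel_px_h = 1280   # assumed horizontal resolution of the microdisplay
    fov_h_deg = 30      # roughly the horizontal FOV mentioned above
    print(f"~{panel_px_h / fov_h_deg:.0f} pixels per degree, vs ~60 ppd for eye-limited sharpness")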
Yes, that's the ugly truth. And their PR departments do their best to hide this almost deal-breaking fact. The current experience is underwhelming (limited FOV) in comparison to the faked PR demonstration videos.
It's interesting just how many Magic Leap stories get posted to HN. In the last month or so I must have seen at least 3 stories about Magic Leap, but no other VR/AR technologies.
I don't even remember the last time I saw a story on HN about Oculus, or Vive, or whatever the Sony Playstation VR product is called, nor about the Hololens. But the Magic Leap stories are popping up like mushrooms.
These things often come in clusters, though we usually penalize posts that are from a string of copycat or follow-up posts that don't add new information. I have the impression that this one does, though, albeit cribbed from a better source.
The video of the AR game embedded in the article looks very suspicious. It's hard to believe that's a game that actually exists, or at least exists anywhere except that specific room. We've got depth and occlusion of objects when the aliens come out from down the hallway, and the laser guns look like they're physical props.
I'd like it to be true, but it looks too good to be true.
All of Apple's most successful products were developed mainly rumour-free (and certainly hype-free) behind closed doors, popping suddenly into the world at WWDC or similar.
Having used Microsoft HoloLens for more than a month, I think Microsoft is doing a disservice to a great product by dropping the ball on HoloLens-compatible games & software. It has the potential to greatly transform gaming. It ought to start dropping the price and releasing more titles.
I have also used HoloLens, and Microsoft is not targeting gamers right now. They want engineers etc, corporations and universities with deep pockets. IMO the FoV is too narrow for the consumer market/gamers at this time. It is perfect for what they are aiming at right now though.
Microsoft is surely targeting gamers; they showed RoboRaid during the presentation, and even an unreleased version of Minecraft. RoboRaid & Fragments are way more polished and utilize significantly more capabilities of HoloLens than any other app.
Everything in The Information's article seems sourced to me. Some of the sources are "former employees" or similarly vague, but the reporter had an actual demonstration of Magic Leap's latest and greatest.
Jeez this sucks. I was always behind Magic Leap with the logic that "big investors like a16z google alibaba wouldn't get suckered" but god, :( this is disappointing.
> I was always behind Magic Leap with the logic that "big investors like a16z google alibaba wouldn't get suckered"
Here's a few clarifications that might change your thinking about funding announcements.
Google Ventures passed on Magic Leap. Google Capital passed on Magic Leap. Sergey was enamored so Google's own balance sheet provided most of the B round. Word is that A16Z put only $3MM in -- basically enough to get some info rights and not piss google off.
Nobody I will quote in a forum(!) but I live in silicon valley where secrecy is poor.
If you want to look on the web it's not hard to verify if you're willing to do more than just a google query. For example you can easily tell from the web sites of GV and GC that ML is not in either of their portfolios. I can't find any Form D for magic leap but you can see what some of their investors put in if you dredge some of those non-googleable-but-public databases.
Yes it's quite obvious GV and Gcap are not in Magic Leap. But it doesn't mean they passed on Magic Leap. It could've been that Google Corp told them to back out because they were doing it alone. Speculation of course, but hard to tell.
Or even better, change the title to that of the actual article: "Magic Leap is actually way behind, like we always suspected it was". That doesn't appear too long for a HN title.
So what happens to the CEO? Does he get fired? Charged with fraud? Tarred and feathered? Given a pistol with one round and expected to do the honorable thing?
Wow, people are quick to upvote news when it's something everyone likes to agree with. Kind of like Facebook. Personally, I found this article had as little substance as the ones hyping ML.
Sites on both sides are hunting clicks, and we are providing them.