Magic Leap’s technology may be years away from completion (theverge.com)
232 points by evilnig on Dec 9, 2016 | 170 comments



So VR is hard enough: to avoid the jitter that makes users feel sick, you have to respond to a user's head movement, render a new frame with the new information the user should see, respond to any button presses, and draw the frame, all in under 14-20ms.

Magic Leap was always a much more difficult problem... they have to respond to a user's head movement, parse the scene the user is looking at (which could be anything), figure out what to draw and where to draw it in the user's environment, then render, all in the same 14-20ms window.

Compounding that, they have to do it with a much weaker CPU/GPU/battery than Oculus and friends, which either use a phone or are tethered to a PC with a $1,000 GPU. You wear Magic Leap on your head, no cables.

On its face I was always sort of surprised to hear about the good demos; it always seemed like such a difficult problem, and I know the VR folks are having a tough enough time with it.
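For context on where that 14-20ms number comes from, here's the frame-budget arithmetic (a quick sketch, nothing vendor-specific): motion-to-photon latency has to fit within one display refresh interval.

    # Frame budget at common refresh rates: tracking, rendering, and
    # scan-out all have to fit inside one of these intervals.
    for hz in (60, 75, 90, 120):
        print(f"{hz:3d} Hz -> {1000 / hz:5.1f} ms per frame")
    # 60 Hz -> 16.7 ms, 75 Hz -> 13.3 ms, 90 Hz -> 11.1 ms, 120 Hz -> 8.3 ms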


That's not the big unsolved problem with augmented reality. Those are all problems VR systems can already solve with enough money and transistors behind them. The big problem with AR is displaying dark. You can put bright things on the display, but not dark ones. Microsoft demos their AR systems only in environments with carefully controlled dim lighting. The same is true of Meta. Laster just puts a dimming filter in front of the real world to dim it out so the overlays show up well.

Is there any AR headgear which displays the real world optically and can selectively darken the real world? Magic Leap pretended to do that, but now it appears they can't. You could, of course, do it by focusing the real scene on a focal plane, like a camera, using a monochrome LCD panel as a shutter, and refocusing the scene to infinity. But the optics for that require some length, which means bulky headgear like night vision glasses. Maybe with nonlinear optics or something similarly exotic it might be possible. But if there were an easy way to do this, DoD would be using it for night vision gear.


> The big problem with AR is displaying dark. You can put bright things on the display, but not dark ones.

There is actually a solution (EDIT: nope, see replies), but it's tricky: If you stack two sheets of polarizing filters on top of each other, then rotate them, you can get them to pass all light at 0 degrees rotation and block all light at 90 degrees rotation. It's like special paper that adjusts brightness depending on how much it's rotated relative to the sheet of paper behind it. https://www.amazon.com/Educational-Innovations-Polarizing-Fi...

So you could imagine cutting a series of circular polarizing filters and using them as "pixels". If you had a grid of 800x600 of these tiny filters, and a way to control them at 60 fps, you'd have a very convincing way of "displaying dark" in real time.

It'd require some difficult R&D to be viable. Controlling 800x600 = 480,000 tiny surfaces at 60fps would take some clever mechanics, to put it mildly. Maybe it won't ever be viable, but at least there's theoretically a way to do this.

A minor problem with this approach is that the polarizing filter may affect the colors behind it. But humans are very good at adapting to a constant color overlay, so it might not be an issue.
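For the curious, the physics here is Malus's law: an ideal analyzer passes cos²(θ) of already-polarized light, where θ is the relative rotation. A minimal sketch (assuming ideal lossless filters; a real first filter also halves unpolarized light, and real films leak a bit at 90 degrees):

    import math

    def transmission(theta_deg: float) -> float:
        """Malus's law: fraction of polarized light passed at relative angle theta."""
        return math.cos(math.radians(theta_deg)) ** 2

    for theta in (0, 30, 45, 60, 90):
        print(f"{theta:3d} deg -> {transmission(theta):.2f}")
    # 0 deg -> 1.00 (pixel fully "open"), 90 deg -> 0.00 (pixel fully dark)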


The problem with that solution is optical, I believe. It would work if you were able to put such a filter directly on your retina, but when you put it earlier in the path of the light, before images are focused, you cannot selectively block individual pixels as they appear on your retina. As a result, the dark spots will look blurry.

(Also, if the pixels are dense enough I imagine you'll get diffraction.)

Here is Michael Abrash's better explanation:

>“But wait,” you say (as I did when I realized the problem), “you can just put an LCD screen with the same resolution on the outside of the glasses, and use it to block real-world pixels however you like.” That’s a clever idea, but it doesn’t work. You can’t focus on an LCD screen an inch away (and you wouldn’t want to, anyway, since everything interesting in the real world is more than an inch away), so a pixel at that distance would show up as a translucent blob several degrees across, just as a speck of dirt on your glasses shows up as a blurry circle, not a sharp point. It’s true that you can black out an area of the real world by occluding many pixels, but that black area will have a wide, fuzzy border trailing off around its edges. That could well be useful for improving contrast in specific regions of the screen (behind HUD elements, for example), but it’s of no use when trying to stencil a virtual object into the real world so it appears to fit seamlessly.

http://blogs.valvesoftware.com/abrash/why-you-wont-see-hard-...
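A back-of-the-envelope way to put numbers on Abrash's blur argument (my own approximation: with the eye focused far away, a point occluder at distance d blurs to roughly pupil_diameter / d radians):

    import math

    pupil_mm = 4.0       # typical indoor pupil diameter (assumed)
    occluder_mm = 25.4   # LCD "pixel" about 1 inch from the eye

    blur_rad = pupil_mm / occluder_mm                # small-angle approximation
    print(f"~{math.degrees(blur_rad):.0f} degrees")  # ~9 degrees: a wide, fuzzy blob

which agrees with his "several degrees across".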


What about Near-Eye Light Field Displays[1][2]? From what I've seen those look to have promise in solving some focus problems and some of the problems with how cumbersome most VR/AR displays are. As a bonus, they can correct for prescriptions.

1: https://www.youtube.com/watch?v=uwCwtBxZM7g

2: https://www.youtube.com/watch?v=8hLzESOf8SE


That makes sense. Thank you for the explanation.


The answer is a higher-resolution screen plus some clever adaptive optics and software. The problem is that even 8K screens do not come close to the required resolution... And you also want a fast refresh rate.


"We made black with light. That's not possible, but it was two artists on the team that thought about the problem, and all of our physicists and engineers are like "well you can't make black, it's not possible." but they forgot that what we're doing - you know the whole world that you experience is actually happening in [the brain], and [the brain] can make anything." - Rony Abovitz in 2015.

https://www.youtube.com/watch?v=bmHSIEx69TQ&feature=youtu.be...


That was a nauseatingly obtuse way to say "We're trying to create a standing wave on the retina."

That approach is devastatingly hard, but probably the best way to do it.


No, they're just putting out enough light to override the background, then showing dimmer areas for black. If they have a display with enough intensity and dynamic range, they can pull this off. Eye contrast is local, not absolute, so this works within the dynamic range of the human eye.


> No, they're just putting out enough light to override the background, then showing dimmer areas for black

Right, that's how all existing HMD systems work - but generally without the whole FOV re-rendered, so it's not so cut and dried.

Note that such a system doesn't give you black ever. It gives you muddy [insert lens color on the grey spectrum].

The case you describe ends up with what is practically an opaque lens that replicates the non-virtual parts of the environment. So you might as well just use VR with camera pass-through at that point.


How would that be different from an LCD, which is something they've presumably looked at and not used?


I don't know. It's similar. I wonder what the problems are with using it, then?

One idea that comes to mind is that a regular screen leaks light. If you adjust the brightness on your laptop as low as it will go, then display a black image, there's still a massive difference between what you see there vs when the screen is completely powered off. But if you take two sheets of polarizing filter and stick them in front of your screen, you can get almost perfect blackness. That's why I thought it was a different idea, since the difference is so dramatic. You can block almost all the light, whereas a regular screen doesn't seem to be able to get that kind of contrast.


I don't think that level of black is a good thing for AR. If you can't distinguish the augmentation from reality, I'd argue that's a bad thing.


Let me be clearer: Being able to show black is super important for AR. For one thing, it's a pain in the ass to read text on a transparent HMD, because you never know what colors will be behind the letters. You can make some educated guesses, and put up your own background colors for the text, but since everything has opacity, it'll always matter what's physically behind the "hologram".

Yes, yes, it's still not a hologram, but the popularized version of holograms (from things like Star Wars) is still the best way to think about AR display.

If you can show SOME black, text legibility becomes a lot easier. Everything can look way better, even if the world always shines through enough to see it.

If you can show PURE black, enough to ACTUALLY obscure the world, now you can erase stuff. Like indicator lights, panopticon optics, and people.


Right. Pictures of what people see through the goggles seem to be either carefully posed against dark backgrounds (Meta [1]) or totally fake (Magic Leap [2], Microsoft [3]). It's amazingly difficult to find honest through-the-lens pictures of what the user sees. When you do, they're disappointing.

[1] http://media.bestofmicro.com/V/H/563597/original/Meta-Collab... [2] http://www.roadtovr.com/wp-content/uploads/2015/10/magic-lea... [3] https://winblogs.azureedge.net/devices/2016/02/MSHoloLens_Mi...


And that would necessarily be bad?


I was thinking the exact same thing.

Anyone seen a TFT "filter" mounted on the front of a transparent OLED "backlight"?

Yes I'm sure they must have thought of it. Wonder what the issue is.

[edit] I read the explanation further down now


You mean a TN LCD screen? They work exactly like this, using liquid crystal to change the polarisation angle. Pretty much every LCD ever.

It can give pure black if the polarising plates and liquid crystal are accurate enough.


The part that's surprising to me is how instantly popular a startup it became with so little information. Was this "demo" they gave so damn good that all the investors (some really reputable ones such as Google) started throwing money at it, without doing their research to see how real it was?

Sounds very suspicious to me.


Especially considering the nightmare of a funding climate we have. Those demos must have been truly life altering to warrant the $500m in investment.

Profitable startups with hockey-stick growth can't even raise a couple million... damn.


Why is this post greyed out? Did people downvote it? Is it not a reasonable question to ask why capital is wasted?


Perhaps early growth just isn't a great predictor of future growth. For example, tons of food-delivery and pet-product startups have exponential growth early on and evaporate a bit later, when the product ceases to be sold at cost and a newer competitor appears.


>Is there any AR headgear which displays the real world optically and can selectively darken the real world?

They've said they have a solution, and it's more optical illusion than optics. They don't darken the real world, but will make you perceive something similar.


Why can't you "just" use a camera and project the light with "standard" vr technology?


That's one approach - just add a camera, mix the camera image with the graphics, and feed it to a VR headset. Works, but it's more intrusive for the user than the AR enthusiasts want.


The main issue with this approach is that the video pipeline adds dozens of milliseconds of latency, and it becomes awkward to interact with the physical environment. You couldn't play AR pong, for example.


> The big problem with AR is displaying dark

What about those windows where, at a press of a button, they turn relatively opaque?


They are called LCD screens. The problem is having a high-resolution one. (Focus problems can be solved with adaptive optics and by measuring the refractive index of the eye's lens via IR laser.)


Michael Abrash wrote about this a while back and it made me suspect that Magic Leap wasn't where they were pretending to be.

http://blogs.valvesoftware.com/abrash/why-you-wont-see-hard-...

Two of the major unsolved problems he talks about are latency and the ability to draw black - I would be surprised if Magic Leap had solved both of these alone and in secret.

Their vapid press releases didn't inspire confidence either.


> So VR is hard enough: to avoid the jitter that makes users feel sick, you have to respond to a user's head movement, render a new frame with the new information the user should see, respond to any button presses, and draw the frame, all in under 14-20ms.

A bit of a tangent, but: for some people, it's impossible for VR to avoid making them feel sick. It's fundamental to them wearing a VR headset rather than a technical challenge to be overcome. It's related to the fact that VR headsets can't project photons from arbitrary positions with arbitrary angles towards your eyes (i.e. a screen is planar, but the real world is a 3D volume). Turns out, evolution has ensured our bodies are very good at determining the difference between a screen's projection and the real world, resulting in a sick feeling when there's a mismatch.

I think that when people accept it's inevitable some subset of users will get sick, the VR ecosystem will grow at a faster rate.


Technically this would be possible to overcome with a light field display, but light field displays are currently very far from becoming commercially viable.


Well, whether the product is viable or not, that's what Magic Leap is trying to deliver.


You could approximate this using a standard display if you could dynamically track and respond to focal depth.

That might be harder than getting a functional light field display, though.


Much easier; we have the tech in ophthalmology, and adaptive lenses are already used in cellphone cameras.


Out of curiosity, what would the tech stack look like?

What I was picturing was:

- LCD display

- Patterned IR image projected onto the eye

- Camera to read the IR image on the back of the retina, to figure out the current focal length of the cornea

- Realtime chip to adjust the LCD display based on focal depth, to simulate a light field image

... The real issue is that response latency would have to be <10ms probably to avoid being disorienting.
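Roughly the loop I'm picturing, as a sketch (every function here is a hypothetical stub, not a real driver API; the point is just that the whole round trip has to fit the budget):

    import time

    FRAME_BUDGET_S = 0.010  # the <10ms figure above

    def read_retinal_ir() -> bytes:
        return b""  # stub: capture the patterned IR reflection off the retina

    def estimate_focal_depth(ir_frame: bytes) -> float:
        return 2.0  # stub: infer the eye's current accommodation distance (m)

    def rerender_for_depth(depth_m: float) -> None:
        pass  # stub: re-blur/re-focus the LCD image for that depth

    for _ in range(1000):  # stand-in for "every frame"
        start = time.perf_counter()
        rerender_for_depth(estimate_focal_depth(read_retinal_ir()))
        if time.perf_counter() - start > FRAME_BUDGET_S:
            print("missed the budget -- the disorienting case")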


Isn't it more likely to be lag and inner ear related, not related to light angles? Why do you think it's light angles?


It's hotly debated, and the literature is often in conflict as to the exact cause of simulator sickness. Regardless of the exact reason, sickness of roughly half the population appears to be somehow fundamental to VR.

Here's some interesting reading:

https://en.wikipedia.org/wiki/Simulator_sickness

https://news.ycombinator.com/item?id=5265985

> In a study conducted by U.S. Army Research Institute for the Behavioral and Social Sciences in a report published May 1995 titled "Technical Report 1027 - Simulator Sickness in Virtual Environments", out of 742 pilot exposures from 11 military flight simulators, "approximately half of the pilots (334) reported post-effects of some kind: 250 (34%) reported that symptoms dissipated in less than 1 hour, 44 (6%) reported that symptoms lasted longer than 4 hours, and 28 (4%) reported that symptoms lasted longer than 6 hours. There were also 4 (1%) reported cases of spontaneously occurring flashbacks."

Simulator Sickness in Virtual Environments (PDF): http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA295861

Some interesting quotes from the report:

Expected incidence and severity of simulator sickness in virtual environments

> In an analysis of data from 10 U.S. Navy and Marine Corps flight simulators, Kennedy, Lilienthal, Berbaum, Baltzley, and McCauley (1989) found that approximately 20% to 40% of military pilots indicated at least one symptom following simulator exposure. McCauley and Sharkey (1992) pointed out that pilots tend to be less susceptible to motion sickness than the general population due to a self-selection process based on their resistance to motion sickness. Since VE technologies will be aimed at a more general population, such selection against sickness may not occur. Thus, McCauley and Sharkey suggested that sickness may be more common in virtual environments than in simulators.

Findings

> Although there is debate as to the exact cause or causes of simulator sickness, a primary suspected cause is inconsistent information about body orientation and motion received by the different senses, known as the cue conflict theory. For example, the visual system may perceive that the body is moving rapidly, while the vestibular system perceives that the body is stationary. Inconsistent, non-natural information within a single sense has also been prominent among suggested causes.

> Although a large contingent of researchers believe the cue conflict theory explains simulator sickness, an alternative theory was reviewed as well. Forty factors shown or believed to influence the occurrence or severity of simulator sickness were identified. Future research is proposed.

Vection

> One phenomenon closely involved with simulator sickness is that of illusory self-motion, known as vection. Kennedy et al. (1988) stated that visual representations of motion have been shown to affect the vestibular system. Thus, they conclude that the motion patterns represented in the visual displays of simulators may exert strong influences on the vestibular system. Kennedy, Berbaum, et al. (1993) stated that the impression of vection produced in a simulator determines both the realism of the simulator experience and how much the simulator promotes sickness. They suggested that the most basic level of realism is determined by the strength of vection induced by a stimulus. For a stimulus which produces a strong sense of vection, correspondence between the simulated and real-world stimuli determines whether or not the stimulus leads to sickness. Displays which produce strong vestibular effects are likely to produce the most simulator sickness (Kennedy, et al., 1988). Thus, Hettinger et al. (1990) hypothesized that vection must be experienced before sickness can occur in fixed-base simulators.

> While viewing each of three 15-minute motion displays, subjects rated the strength of experienced feelings of vection using a potentiometer. In addition, before the first display and after each of the three displays, the subjects completed a questionnaire which addressed symptoms of simulator sickness. Of the 15 subjects, 10 were classified as sick, based on their questionnaire score. As for vection, subjects tended to report either a great deal of vection or none at all. In relating vection to sickness, it was found that of the 5 subjects who reported no vection, only 1 became sick; of the remaining 10 subjects who had experienced vection, 8 became sick. Based on their results, Hettinger et al. concluded that visual displays that produce vection are more likely to produce simulator sickness. It is also likely that individuals who are prone to experience vection may be prone to experience sickness. It was mentioned earlier in this report that a wider field-of-view produces more vection and, thus, is believed to increase the incidence and severity of sickness (Kennedy et al., 1989). Anderson and Braunstein (1985), however, induced vection using only a small portion of the central visual field and 30% of their subjects experienced motion sickness.


Take these studies with a grain of salt though, as display technology, IMUs, external tracking, and more importantly computing power are orders of magnitude better than what was available in 1995.


Certainly good points. It also might be a mistake to discount studies performed under controlled conditions, though. If the cause of simulator sickness is illusory motion, then the relatively crude technologies available in 1995 may have been sufficient to induce the same effect we're observing today.


I've never met anyone who gets sick based entirely off vergence-accommodation conflict. I've certainly talked to people who claimed it was an issue but would happily go to a 3D film, so I assume they were misattributing the cause of their sickness. Maybe having the periphery line up for 3D films is enough.

I'd be interested to hear some testimonials on how it affects people though. One of the interesting things about VR is finding out the myriad of different ways people are sensitive to small phenomena.


I'll chime in on this:

I own a Vive, and get sick using that after half an hour maybe (even though it's not too bad and I can keep using it), and it takes me a few hours to "reset" afterwards.

I don't get sick in 3D movies (but I don't enjoy them, and never choose to see one in 3D if there's an alternative).

The reason I don't get sick in 3D movies I assume is because it's still just a screen, and any head movement I make is still 100% "responsive", with no delay or mismatch.

Edit: Sorry, I read your response a bit lazily. Yes, it's not related to vergence-accommodation, at least not exclusively.


I worked on Hololens stuff at Microsoft. It does everything you describe, and IMO does it really well. It's fairly light and is wireless. The tracking is excellent. A lot of the scene parsing is done in special hardware, with low latency. It updates a persistent model of the environment, so it isn't rebuilding everything every frame.


Any particular reason you stopped working on it?


It was a contract position.


He probably moved to another secretive project in the hardware division. That's what my friend did.


You don't need to parse the environment from scratch 60 times every second. As long as you get head tracking right, you can just focus on what's moving and assume it will keep moving the same way. Further, the demos all seem to be around a fairly static environment. Remember, you don't need to render the background, so a simplified internal model works fine with AR.

If it works near a clothesline blowing in the wind, that's really hard and impressive.


The problem is even worse. In VR you're competing on latency with human muscle feedback and what your vestibular system is telling you.

In AR, you're competing on latency with the human visual system that you're trying to augment, which is a race you can't win.

The only thing you can do is try to be very fast (<= 10 ms) so the latency becomes unnoticeable. Unfortunately, right now this isn't possible unless you custom-engineer everything in your vision path optimized for latency. Fun R&D project, but enormously time- and capital-intensive with no guarantees of success.


AR, though, poses much less of a simulator sickness problem, because your normal vestibular sense and motion perception still apply to everything else you see.

VR is a much tougher nut to crack.


A trick I've heard at least one of the VR devices uses: rather than re-rendering the entire scene, and since most of the movement is just head tracking, it renders a larger buffer and scrolls around in that buffer between frames to get a quicker response.


That only works for very, very minimal differences. It's the Oculus that does this. It guesses where your head will be, renders the scene at that location, and distorts the image slightly to match its guess vs reality. It also introduces a bit of softness, but seems to work pretty well. But it does re-render the whole image each frame.


I wonder if it does it to create intra-frame changes from rotation and/or motion of the head. If it does, then that could help a lot with the experience.


>It's the Oculus that does this.

And the PSVR, no? I think all the VR systems will (optionally) do this to reduce the processing power needed.


I have no idea about the PSVR. It's probably the VR device I've heard the least about (technically). I own one, and it works pretty well, but I don't know much about it.

From my understanding this won't do anything to reduce processing power; it's all about reducing latency. That final transform happens at the last moment, with the most up-to-date head tracking information. I kind of figured it happened in the headset hardware itself in order to reduce the latency as much as possible.


The technique is called asynchronous timewarp

https://developer3.oculus.com/blog/asynchronous-timewarp-exa...


That would only work if the scene was a single flat surface and you were only moving parallel to it. If you were doing that, you may as well not use VR and just look at a normal monitor. Otherwise it wouldn't match your motion, which is the problem you're trying to solve in the first place!


Actually, moving parallel to it would make the artifacts worse, as it'd distort the projection based on distance (i.e. parallax). With a rotation, if you're rendering onto a sphere, the warp introduces comparatively little distortion. It still needs to be re-rendered to be correct, but this can allow for better intra-frame changes (say the game runs at 60 Hz and you need 120 Hz) with less computation. As a sibling (to you) commenter pointed out, this will make the image slightly "fuzzy" or softer, because you end up moving the image by a fractional number of pixels. As I understand it though, that extra intra-frame update can mean the difference between some people getting a VR migraine or not.
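For the mathematically inclined, here's why pure rotation is the cheap case: with no translation there's no parallax, so the old frame can be re-pointed with a single homography H = K R K^-1, no depth information needed. A minimal numpy sketch with made-up intrinsics (real timewarp does this per-pixel on the GPU, on the distorted eye buffer):

    import numpy as np

    def yaw(deg: float) -> np.ndarray:
        """Rotation about the vertical axis (turning your head left/right)."""
        r = np.radians(deg)
        return np.array([[ np.cos(r), 0.0, np.sin(r)],
                         [ 0.0,       1.0, 0.0      ],
                         [-np.sin(r), 0.0, np.cos(r)]])

    # Assumed pinhole intrinsics for a 640x480 eye buffer (illustrative only).
    K = np.array([[600.0,   0.0, 320.0],
                  [  0.0, 600.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Homography that re-points last frame's pixels after 1 degree of yaw.
    H = K @ yaw(1.0) @ np.linalg.inv(K)
    print(H)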


The latency / motion sickness issues probably aren't as bad, since you still see the surrounding environment to get a stable bearing. The display tech sounds very hard, though.


Hello? HoloLens did it already, and tether-free. Here is my mixed reality capture taken on HoloLens. https://m.youtube.com/watch?v=Av3Fdx5RnUI


I could never really figure out how the variable focus lightfield worked.

I sort of assumed it was like a stack of images along the Z axis. So not only are they doing everything you mentioned, they are also rendering the 3D scene in slices?


Isn't the head-tracking separate from the graphics rendering? I mean, you could just render onto an area that's somewhat larger than your FOV, and select some area every couple of ms based on the head movement.


You want to integrate rendered objects into the physical world, so you have to know exactly the pose these objects should be rendered at, as seen from your point of view. The objects' transforms are driven by the head tracking, usually with pose prediction thrown into the mix to squash latency.
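A toy version of that prediction step (constant angular velocity; real trackers run fancier filters on quaternions, but the idea is the same - render for where the head will be when the photons arrive):

    def predict_yaw(yaw_deg: float, yaw_rate_dps: float, latency_s: float) -> float:
        """Extrapolate head yaw forward by the pipeline latency."""
        return yaw_deg + yaw_rate_dps * latency_s

    # With 15 ms of pipeline latency and the head turning at 200 deg/s,
    # render ~3 degrees ahead of where the tracker last saw it:
    print(predict_yaw(10.0, 200.0, 0.015))  # -> 13.0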


Can anyone name a company that raised significant money pre-launch in a party round and that turned out okay?

Color, Clinkle, Theranos, Magic Leap... they all have that in common. I'd be interested if there are counter examples.


What's a significant amount of money?

Solexa raised $130M USD pre-launch: http://41j.com/blog/2016/01/a-solexa-story/

Quite a few life science/biotech companies do.


I think biomedical companies are very different than technology companies in terms of product launches. So not really comparable.



Otto


Not surprising, given that the hype train was billowing at full steam. A friend of a friend interviewed there and eventually walked away saying "Wow. That is a lot of Kool-Aid."

Meanwhile, teams like Oculus are reportedly trying to sidestep AR by investing in "mixed reality", which will bring live camera feeds into VR, with screens still covering both eyes.


Interesting for sure, but from what I've heard from people using the Rift, it's the fixed focus that gives them headaches.

I know of a company that makes cut scenes for AAA games; most of their people can't wear the Oculus for more than a few minutes. Big headaches, motion sickness, etc...

Before you jump on me, this could be due to the stuff they are trying to show on the Rift -- would content make a difference? Sudden shifts in scene from close to far off?


I do VR projects for work occasionally and previously was in film when they were pushing stereo. Content makes a huge difference. On a flat screen the most dramatic spatial contrast is left-to-right; in stereo it's near-to-far. If you're exaggerating something, you'd use those axes. It's way more fatiguing on your eyes to focus near-to-far than left-to-right -- it also takes like 10x as long for people to focus on and recognize something that changes near-to-far.

FPS games are awful for VR when you move around with a joystick. Also, a lot of the "dramatic" things that get used to try and sell games make you dizzy. There's also some acclimation. They say devs are the worst subjects for testing whether some VR demo will cause sickness.

I also think people underplay the motion sickness that film and FPS video games can cause in people who are unacclimated (FPS games didn't have big money pushing them in the early 90s, and were on tiny, low-res screens). I know I've felt motion sickness playing flight simulators and certain FPS games, especially at first. Movies like Blair Witch and Cloverfield were criticised for making people feel sick.

To address the focus problem, I've been wondering why there's not much talk about light fields for VR (maybe because of patents?).


I don't know anything about this topic, but there was a demo/paper at SIGGRAPH 2015 of a light-field HMD. It was a pretty popular booth, for what that's worth: http://www.computationalimaging.org/publications/the-light-f...


It's a mix of a few things.

First there's plain old motion sickness. The first time I flew in Elite Dangerous in VR, I tried swooping into a landing pad slot. It was a maneuver I could do on autopilot from playing on a flat screen, but when I pitched up hard, my stomach dropped. My body was expecting G's it just wasn't feeling. I had to take the Rift off and take some deep breaths to ward off the nausea. I don't usually get motion sickness. I can read on shaky buses with no problems.

Second is simulator sickness, which I think comes from latency and resolution issues? This also hits people differently. I have friends who can only do minutes with the Rift on. I can last as long as my face can take the pressure of the headset, meaning an hour or two at a time.

All in all, yeah, some people just won't be able to handle current-gen VR for various reasons.


Does AR explicitly exclude camera+screen? I always thought that stuff like pokemon go and any similar "paste some 3d over a camera feed in real time" stuff counted as augmented reality.


We distinguish "video see-through" and "optical see-through" Augmented Reality.

The first one is mainly solved, the second one is way harder due to head tracking latency, computer vision latency, and user-dependent eye projection.


Wow. So if Magic Leap is vaporware, does that leave Microsoft as the only ones publicly making progress in AR? Plenty of companies are shipping VR, but if AR ends up being the future, HoloLens is building up quite a lead.


Seems like Snapchat isn't doing too badly. Sure, maybe their tech isn't as advanced... but they've begun solving the equally hard problem of getting it into people's hands naturally and getting them to use it and even pay for it.


Does Snapchat actually have any AR? I wasn't under the impression the goggles actually had any UI, just the "active" LED and camera input.


Their current-gen glasses do not do any AR - but they've been acquiring and hiring very aggressively in the space for a couple of years now. It's pretty transparent what their long-term goal is with their glasses.

I think it's actually a really smart approach. MS spent ~$2 billion and multiple years of R&D to get the HoloLens built. Snapchat's coming at it from another angle: start simple and build up with each successive generation. The fact that they already have people clamoring for something this basic is telling...


There's also Meta, which is taking preorders for its second developer kit at $949. It has a significantly larger field of view (90 degrees) but must be tethered to a computer.


"Magic Leap is vaporware" Where are you getting that? The article seems to imply that feature to feature they are matching the Hololens and that is the worst case scenario.

Edit: On second thought you maybe meeting the exact definition of Vaporware in that they are not manufacturing it yet[1]. No indication that they won't be able to though.

[1]: https://en.wikipedia.org/wiki/Vaporware


Well, it has gone unshipped for a long time, and as it stands now it's worse than its competition, not to mention they have far fewer resources than their competition. And probably the main reason they haven't shipped yet is exactly all of these reasons. Which means the more time they spend not shipping, the harder it will become for them to ship.

I think it's fair to speculate that they will end up as vaporware. The only alternative I can see is them pivoting to a more niche field so that they don't directly compete with HoloLens.

People started saying Duke Nukem Forever was vaporware long before the parent company actually admitted it.


"and as it stands now it's worse than its competition"

except it isn't, the information article is one person's opinion and other people that have used the production device have said differently


Microsoft has a product on sale; Magic Leap has a lot of big claims.


> The article seems to imply that feature for feature they are matching the HoloLens, and that is the worst case scenario.

But the article specifically says "as it stands now, is noticeably inferior to Microsoft’s HoloLens headset"

So I don't get how matching HoloLens is the "worst case scenario".


The fact that they're still far away from getting to market... their platform strategy of making devs work in Seattle/SF and timeshare the devices... makes it seem as if they're going to fail super hard. HoloLens is clearly out front. The device itself is really impressive. And I imagine V2 of the hardware is coming soon. Wait for more credible players to enter the space for real competition.


HoloLens has the lead, but whoever gets to the consumer market first will still have a huge advantage. The only thing stopping MS from getting to that point is having a consumer-friendly price.


I'm not sure that there will be a significant first mover advantage here. Everybody thought that the Oculus CV would be the only headset to get because the DK2 was pretty cool, then the Vive was released and people realized how important motion controllers were for immersion, and Oculus got creamed. I think that there will be bits of tech like this for each generation of VR/AR, one small set of features that's obviously better, which will win the generation.


I think it could apply; a significant part of the advantage is that they already have the OS and they're building the app ecosystem.

But then again, first mover advantage has been more of the exception than the rule for quite a while in tech.

Edit - Android's "auto add to dictionary" creates a lot of stupid capitalizations.


Check out http://castar.com launching 2017. I backed their Kickstarter so I'm excited to get one :-)


ODG has had AR glasses on the market for longer than HoloLens, with an actual glasses form factor, running on Android, and should be announcing their new model at CES 2017.


Not on the same scale, but castAR is definitely in the game.


and castAR


When will castAR support dogs and cats? Will dogs or cats be first? Will dogs and cats be able to use it together?


This is essentially a reblog of the (paywalled) The Information article: https://www.theinformation.com/the-reality-behind-magic-leap


I went to https://membership.theinformation.com/subscribe because it looks like something I'd pay for. I hate the trend of not putting pricing upfront. I'm just going to assume I can't afford it.

Edit: possibly $400USD/year from https://hunterwalk.com/2013/12/17/400-for-the-information-is...


The pricing is hidden somewhere in their FAQ (https://theinformation.zendesk.com/hc/en-us/articles/2188655...)

That said, $40/month is a bit much. I'm paying less for the IDE that I'm using every single day. I doubt I'd be getting the same value out of that website.


It's definitely not targeted at you and me, which is a pity, because it appears to be one of the sources for quite a few articles I've read.


Yep. The Information did a ton of deep investigative work on this. If you find this article interesting you should consider supporting them by subscribing.

Sadly, the way journalism works today is one outlet does a ton of time-consuming, expensive digging. Then, within minutes of posting, every other outlet copies the story.


Someone posted a pastebin in this HN submission.

https://news.ycombinator.com/item?id=13135371


Karl Guttag, one of the Magic Leap critics and analysts, has an interesting update on what he thinks ML will actually release [0]. The product they might release is actually different from what all the hype has been about: LCOS microdisplay, two sequential focus planes, variable focus, photonics chip.

[0]: http://www.kguttag.com/2016/12/06/magic-leap-when-reality-hi...


> But at least one of these videos — showing an alien invader game that let the wearer of the supposed headset or glasses make use of real-world objects — was created by visual effects studio Weta Workshop. Prior to today, it was believed Weta had simply created the visual assets for the game. However, The Information reveals the entire video was created by the studio.

Was this ever really in doubt? You don't have to be a physicist or AR expert to note that throughout the video they occlude bright background colours with dark AR elements - somehow projecting 'black light'. Whilst I wouldn't expect the man on the street to pick this up, it would be nice if journalists about to pen a breathless puff piece would at least give the subject matter 30 seconds of consideration.


Here is the video in question: https://www.youtube.com/watch?v=kPMHcanq0xM It doesn't have any disclaimers that this was faked -- some earlier videos from Magic Leap did have disclaimers.

It is obviously too high-resolution and pixel-perfect to be from a headset display, though. So it didn't fool me, but I think the way it is presented could be quite misleading to folks without a 3D background.

EDIT: Later when Magic Leap did post videos that didn't use post production to create the effects, they put in a disclaimer that it was real: https://www.youtube.com/watch?v=kw0-JRa9n94 https://www.youtube.com/watch?v=iCVd9ZDPjXU


Well yes, you do have to have an engineer's mindset to pick up this type of detail.


Yeah, it's funny how they think they got some scoop here. It was obviously a concept video. Totally obvious, and not just from what you mentioned, just the look and feel of the whole thing in general.


> “This is a game we’re playing around the office right now,” reads the video’s description — an assertion that could not have been true.

If by "game" they meant gaming potential employees and investors, then the assertion may well have been true.


I always find it instructive to read glassdoor reviews about a company I'm interested in. There's obvious astroturfing and upset people, but lots of interesting information also leaks out there.

https://www.glassdoor.com/Reviews/Magic-Leap-Reviews-E799754...


Further behind than expected. I have always suspected that this is a giant money pit with zero chance of success. Another decade or two and maybe technology will catch up to the desire to understand the world around you in real time, but today it's not possible. It's not magic, it's dreaming.


Hmm, interesting. So the article makes it sound like one of their faked videos was used to bring in software developers, but if they come in and it's simply not working anywhere near the level shown, wouldn't that seem suspicious?

I don't think this necessarily means the company is a scam or anything like that, but it seems a little... suspect, at least to me, if I were interviewing with them.

Curious if the community has an issue with it or not.


I can't help but have similar feelings about this, and I think your question about whether "the community has an issue with it" is especially pertinent.

Many journalists and internet pundits suggest that Magic Leap's technology was obviously over-hyped. See for example the title of The Verge's article. Adding further evidence that it was too good to be true, they used faked videos to attract software developers, as you have pointed out.

So how is it that Google, Alibaba, and Andreessen Horowitz were convinced to hand over hundreds of millions of dollars each to this company? Was it fraud, over-promising on behalf of Magic Leap's founders, or could these huge VC firms not see what everyone else could see? Is it really that easy to secure a billion dollars in funding? The question as to "what's going on here" relates as much to the startup / VC community as it does to Magic Leap.


Nah, an engineer working in VR/AR/Computer vision would know the video was not a truthful representation of the actual thing, but more something they were "aiming at", or a presentation of a vision.

I don't think it really fooled anyone into working for them. Not so sure about investors.


There is a ton of AR/VR hype driven by Adobe After Effects and not real product results. Now whenever I see AR/VR tech that follows this pattern, I stop paying attention. It's unfortunate for the entire industry.


It's not surprising, because they are not basing it on any traditional, widely manufactured components. It would be as if you promised to ship flat-panel displays before LCD production lines had even been created, when LCDs only existed in labs.

So, either it's a scam, or they legitimately plan to create a new industry, which requires multiple years of capital investment to get production off the ground.


But you could build a $30K prototype and show it to investors, saying that the cost would go down when mass production ramps up.


And then not deliver on the pie-in-the-sky promise, or delay long enough that somebody else solves the issues. Also known as a scam.


Well to be fair, it could take 10 years to go from prototype to full scale production. People are too impatient these days.


(Contains mirror of article in the comments) https://www.reddit.com/r/magicleap/comments/5ha6kr/the_reali...


>>The company has raked in $1.4 billion in funding

Let that sink in. Is it a record for startup funding with no product or service released? I don't recall how far along Uber was when they passed that mark.

And what exit number do they need to be considered a success for the VCs?


They hold the record for the largest Series C round, if that helps put things into perspective.


Abovitz, July 2016: "We have Class 100 cleanrooms running now. We're debugging a high production line this summer. We are in the go mode, soon-ish"

http://www.stereoscopynews.com/hotnews/3d-technology/vr-ar/4...

http://www.businessinsider.com/magic-leap-production-begins-...


I thought they had given up a couple of years ago on the gimbal-mounted fiber with the piezo collar. They raised the last round to buy a MEMS fab which I assumed was to do a DLP-like thing down a fibre laminate.


And in fact the Guttag piece linked on this comment page also suggests this (it says the DLP demo was the most impressive). I doubt they could use actual DLP (because the TI DLP group is a pain to work with), but it would make sense that their MEMS chip would also be a micromirror array.


From Dalv-hick on reddit (Jack Hayes @Halo_AR_ltd):

>I presume they're going for something similar to what I'm building in one of the implementations, using switching polarisation-sensitive lenses in a stack.

The other thing they mentioned and which is feasible immediately is using a liquid lens, if the input is small.

This would be used for a 45-degree combiner but is too small for a waveguide exit or for Oculus-size lenses.

---

>Building a single FSD isn't difficult.

Building a single FSD with wide deflection (more pixels) and fast oscillation (more frames and made necessary to counteract the wider deflection) is considerably more difficult.

Building an FSD with the previous attributes and removing jitter/distortion is more difficult.

Doing all the previous for an assembled array to line up perfectly is more difficult again.

Finally, doing an array, particularly a 2D array at scale, and expecting a significant number of them to actually work is crazy.

The FSD improvements FredzL points to are at odds with each other.

An FSD array can either use each unit to create a different focus plane/view, or add extra pixels to the 2D resolution at any one instant of time.

One of the University of Washington papers does hint at an alternative, though.

In the paper it shows a fibre array (line) being placed in a thick block for better alignment.

This would prevent the fibres from being vibrated to create the image.

Instead, in a different paper, they mention trying acousto-optic modulators on a single line.

Dennis Smalley had a good paper on the subject, in which ultrasonic waves travelling through a class of materials will locally change the refractive index.

Light bends by different amounts in different refractive indices, thereby allowing the possibility of image formation.

Solid-state scanning arrays or even multiple beams on a MEMS mirror are probably better than FSDs.

They might be able to use FSD; I don't think it's a good idea.


> $1.4 billion in funding at a valuation of $4.5 billion

Man this number doesn't look good in more than one way. Valuations aside, I'm guessing the cumulative dilution for everyone in the company is super high


What is troubling about AR/MR, and which I feel a lot of people overlook, is the limited FOV. From the demo videos shown by Magic Leap (and HoloLens) on YouTube it feels as if the entire FOV is available, which is barely the case - the HoloLens has an FOV of 120x120 and Magic Leap is said to have a lower angle than that. The Verge covered HoloLens and they mention [1] the FOV as a limitation.

1: https://youtu.be/4p0BDw4VHNo?t=3m


120x120°? That would be wider than VR headsets!

Unfortunately, HoloLens FOV is more like 30×20°. Other AR players are mostly in the same ballpark at the moment. The good thing is that this gives very high pixel density. The other thing is that you don't see the edges unless the 3D render is cut off, so it may not feel as tunnelled as in VR.
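To put numbers on that density trade-off (resolutions here are rough assumptions, just for scale):

    def pixels_per_degree(horizontal_pixels: int, fov_deg: float) -> float:
        return horizontal_pixels / fov_deg

    print(pixels_per_degree(1280, 30))   # narrow AR HMD: ~43 px/deg, crisp text
    print(pixels_per_degree(1080, 100))  # wide-FOV VR HMD: ~11 px/deg, screen-door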


Yes, that's the ugly truth. And their PR departments do their best to hide this almost deal-breaking fact. The current experience is underwhelming (limited FOV) in comparison to the faked PR demonstration videos.


It's interesting just how many Magic Leap stories get posted to HN. In the last month or so I must have seen at least 3 stories about Magic Leap, but no other VR/AR technologies.

I don't even remember the last time I saw a story on HN about Oculus, or Vive, or whatever the Sony Playstation VR product is called, nor about the Hololens. But the Magic Leap stories are popping up like mushrooms.


You'd be fine to post some :)

These things often come in clusters, though we usually penalize posts that are from a string of copycat or follow-up posts that don't add new information. I have the impression that this one does, though, albeit cribbed from a better source.


I'm not complaining. I just found it curious.


Oh good. I fear my neurons have developed complaint-shaped receptors.


> or whatever the Sony Playstation VR product is called

It is, in fact, called the Playstation VR.


The video of the AR game embedded in the article looks very suspicious. It's hard to believe that's a game that actually exists, or at least exists anywhere except that specific room. We've got depth and occlusion of objects when the aliens come out from down the hallway, and the laser guns look like they're physical props.

I'd like it to be true, but it looks too good to be true.


Can we please fix this headline to "further behind than expected"?


The whole thing sounds rather similar to everyone's favourite black turtleneck wearing entrepreneur.


Sounds like the opposite.

All of Apple's most successful products were developed mainly rumour-free (and certainly hype-free) behind closed doors, popping suddenly into the world at WWDC or similar.


I wasn't referring to Steve, but the CEO of a far more recent fallen unicorn.


Gotcha. Thanks for the correction


Having used Microsoft HoloLens for more than a month, I think Microsoft is doing a disservice to a great product by dropping the ball on HoloLens-compatible games & software. It has the potential to greatly transform gaming. They ought to start dropping the price and releasing more titles.


I have also used HoloLens, and Microsoft is not targeting gamers right now. They want engineers etc, corporations and universities with deep pockets. IMO the FoV is too narrow for the consumer market/gamers at this time. It is perfect for what they are aiming at right now though.


Microsoft is surely targeting gamers; they showed RoboRaid during the presentation, and even an unreleased version of Minecraft. RoboRaid & Fragments are way more polished and utilize significantly more capabilities of HoloLens than any other app.


An archive of the original report cited in this post: http://archive.is/DHVL1


Interesting to see that everyone here is so eager to assume that the article is right. The article might be right, but it's also just gossip.


The Information is a highly respected source, and the story was written by Reed Albergotti:

https://www.theinformation.com/reporters/reed-albergotti

It's investigative journalism, not gossip.


Maybe, but it's still full of speculation.


Everything in The Information's article seems sourced to me. Some of the sources are "former employees" or similarly vague, but the reporter had an actual demonstration of Magic Leap's latest and greatest.


I don't want my living room as the backdrop for games - it's kind of a mess at the moment.

I want to be taken OUT of the real world to a virtual world.

Seems a dumb idea to be displaying stuff onto my living room.

Outside things, maybe that's different - as Pokemon Go has proven.


Has Pokemon Go really?

The "AR" is extremely limit and I strongly suspect most players turn it off, other than to get occasional brag/joke photos.


> [Pokemon Go's] "AR" is extremely limit and I strongly suspect most players turn it off

Wandering around the real world with it gamified on your mobile phone is augmented reality. You can turn off the camera, but not AR.


Jeez this sucks. I was always behind Magic Leap with the logic that "big investors like a16z google alibaba wouldn't get suckered" but god, :( this is disappointing.


> I was always behind Magic Leap with the logic that "big investors like a16z google alibaba wouldn't get suckered"

Here's a few clarifications that might change your thinking about funding announcements.

Google Ventures passed on Magic Leap. Google Capital passed on Magic Leap. Sergey was enamored, so Google's own balance sheet provided most of the B round. Word is that A16Z put only $3MM in -- basically enough to get some info rights and not piss Google off.


This is very interesting. Do you have a source on this? I can't find anything online.


Nobody I will quote in a forum(!), but I live in Silicon Valley, where secrecy is poor.

If you want to look on the web, it's not hard to verify if you're willing to do more than just a Google query. For example, you can easily tell from the websites of GV and GC that ML is not in either of their portfolios. I can't find any Form D for Magic Leap, but you can see what some of their investors put in if you dredge some of those non-googleable-but-public databases.

They're a Theranos if you ask me.


Yes, it's quite obvious GV and GCap are not in Magic Leap. But it doesn't mean they passed on Magic Leap. It could've been that Google corporate told them to back out because it was doing the deal alone. Speculation of course, but hard to tell.


Do you know what chemicals fueled this moon shot?

https://www.youtube.com/watch?v=w8J5BWL8oJY


Don, you're an expert in that technology!


I don't kn2ow what you're talking about.


Well played, sir!


I call BS. How does A16Z piss Google off in this scenario? The more money for Magic Leap, the better. I doubt GV or GC even saw the deal.


The actual title should be "Magic Leap lied about a video demonstration of its tech". Or "Magic Leap still noticeably inferior to the Microsoft HoloLens".


I don't like pointing out typos, but this one is in the title and should be fixed: it should be "than".


Or even better, change the title to that of the actual article: "Magic Leap is actually way behind, like we always suspected it was". That doesn't appear too long for a HN title.


Not too long, but it's baity, so we need to change it as per the site guidelines (https://news.ycombinator.com/newsguidelines.html). We'll see what we can do.

Edit: ok, we changed it to language from the first sentence—often a source of lesser baitiness.


It doesn't really sound right still, even after that correction. "Magic Leap is actually way further behind than expected" would be more correct.


*than


Even then it doesn't sound like natural English.

I'm not sure why the original title didn't suffice.


"Magic Leap is Further Behind than Expected"

Better is: "Magic Leap is Further Behind than Previously Thought", but I don't know if it will fit.


An improvement, but still not good English.


So what happens to the CEO? Does he get fired? Charged with fraud? Tarred and feathered? Given a pistol with one round and expected to do the honorable thing?


Wow, people are quick to upvote news when it's something that everyone likes to agree with. Kind of like Facebook. Personally, I found this article has as little substance as the ones hyping ML.

Sites are hunting clicks on both sides, and we are providing them.


Indeed, but one of the issues is that HN has grown a lot. Five years ago an article with 100 votes was noteworthy, not so much today.



