Microsoft’s ‘mixed reality’ headsets are a bit of a mixed bag (techcrunch.com)
76 points by rbanffy on Aug 28, 2017 | 53 comments



I've built a few HoloLens prototypes for my present company, and a few Oculus ones for my former. I haven't stayed up to date with all of the internal hardware specs, but from my experience, building for the HoloLens was easier and a lot more fun. I haven't seen these new headsets yet, but I believe they use the same tools and frameworks. There's some really cool stuff that some Xbox developers have put out that you can build on top of. And the Skype app works amazingly well.

I'm just curious if these headsets are aimed at gaming as opposed to the HoloLens which is geared towards productivity.


>> And the Skype app works amazingly well.

That's a first. Let the downvotes begin, but Skype has always sucked. I'm hoping they do a better job with Teams.


I refuse to put Skype on my phone, I'm not a fan. But I've never seen anything like the Skype on HoloLens before. It works really well, and the person you're calling can interact with your 3D space from their computer. You can also share files and conference in with other HoloLens users.


I'm interested in getting into HoloLens development. What languages, frameworks, and tools would you recommend? From what I've read, it seems like I'd be using Unity and C#.


Sorry I didn't get back to you yesterday. I have a small blog I started just to help people with initial setup that you can see here: http://illuminationworks.blogspot.com (I'm not a blogger and it shows). But basically it's built in Unity, and you use Visual Studio for deployment to the device. The dependencies can be tricky to set up, but once they're in place it's very easy to deploy. There are a lot of articles about the general concepts, but you need to be on top of the HoloToolkit to really use the advanced features: https://github.com/Microsoft/MixedRealityToolkit-Unity


Unity for 3D applications. You can use UWP to make traditional 2D apps, which you can also use in HoloLens.


You can also use C++ and DirectX for 3D applications.


Why did they decide they could sacrifice frame rate when the first generation of PC VR, after extensive study, decided it was the one thing you should never sacrifice?

Did somebody in sales overrule the engineers?


> first generation of PC VR

What do you mean by this?

I'm only asking for clarification, because when I hear "first generation of PC VR" - I think "REND386".

I doubt, though, that many other people do, because for some reason, all of that old history and the lessons from it have seemingly had to be "re-discovered" with VR technology after the Rift came out.


Oculus CV1 and the Vive are commonly considered to be "first gen" headsets in the modern VR community, with anything older than that (including recent stuff like the Oculus DK1) being essentially just experiments leading up to the development of gen 1.

Though as with anything involving multiple "generations" of products, the exact definition is pretty fuzzy.


Yeah - I did wonder about clarifying that as I typed it - there's obviously a long history to VR (I'm a child of the 80s, so I'm not completely ignorant of the twists and turns of computing history).



I remember it well on BBC Tomorrow's World.

I'm always tempted to watch "The Lawnmower Man" although I suspect it hasn't aged terribly well.


60 fps is what mobile VR runs at, and that's on integrated graphics.


And you'll never get anywhere close to the realism of PC VR on mobile, nor will you have that experience for a long time. Just because it's "possible" to do VR with much lower graphics power, doesn't mean you'll have a similar experience. The trade-off is almost linear.


Realism is good, but not essential. Only some productivity uses need high polygon count and depend less on textures (because you are conveying physical form) while gaming depends on what the desired look is. I don't think that enjoyability is a linear function of polygon count and texture quality.

And realism will come. We still have a good 4x increase in transistor density in silicon before going for the next clever trick.


Yeah there are a lot of applications I can see that wouldn't need anything more than a flat square, maybe with a video overlaid on it. Not everything needs high-poly normal-mapped graphics with ambient occlusion and 4x AA and 10x AF.

I was fixing a fence last weekend, and man it would have been nice to have an AR overlay of where the next board should be placed and where to put the screw when I can't see the board it's sinking into. That doesn't need realistic graphics, it just needs to be mobile.

For VR, I mean Fruit Ninja was a really popular game for a long time.


Fruit Ninja would be a good example of something that needs 90Hz to be playable for any length of time without causing discomfort.
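For context on why 90 Hz is the target: the refresh rate fixes the per-frame render budget, and a single dropped frame doubles how long the stale image stays on screen. A quick back-of-the-envelope sketch (plain arithmetic, not figures from the thread):

```python
# Per-frame render budget at common VR refresh rates.

def frame_budget_ms(refresh_hz: float) -> float:
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / refresh_hz

for hz in (60, 90, 120):
    budget = frame_budget_ms(hz)
    # A dropped frame means the previous image stays up for two intervals.
    print(f"{hz} Hz: {budget:.1f} ms/frame; one dropped frame -> {2 * budget:.1f} ms")
```

At 60 Hz a single dropped frame leaves the old image up for over 33 ms, which is well past the commonly cited comfort thresholds for head-tracked displays.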


Although the tech to achieve both is somewhat similar, AR and VR solve completely different problems.

The core value of VR is to replace our real life. Graphics that convince you of this new reality are a hard requirement.

The core value of AR (or MR?) is to augment our real life. Being able to surface information about the world around you is the hard requirement. For example, while cooking, if it can tell you what the next ingredient in a recipe is and how much of it you need, it doesn't matter if the display of that information on top of the item is a bit laggy. The stepwise jump in utility is still there.


These aren't AR headsets though, they're VR headsets. Microsoft seems to have decided they're going to use the term "Mixed Reality" to refer to both.


Oh I didn't know. That's pretty confusing since I remember mixed reality being just a rebranding of AR...


"The core value of VR is to replace our real life"

That's ... debatable. It's no more that than a regular screen.

VR displays track head movement in 3D space and make this positioning information available when rendering a 3D scene.

This is the basic requirement for AR as well. Pokémon-level AR just gives an image of the real world onto which the 3D scene can be overlaid. ARKit-level AR uses whatever data inputs are available to guesstimate the 3D geometry surrounding the user and makes this 3D information available to the program context. I suppose the third step is when this data is fed to some machine learning algorithm to 1. detect features in the data, 2. label them (i.e. 'carrot', etc.).
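The head-tracking requirement described above can be sketched in miniature: the tracker reports a pose each frame, and the renderer applies the inverse of that pose to world geometry. A toy version assuming position plus yaw only (a real headset reports a full 6-DoF pose with quaternion orientation):

```python
import math

def view_transform(point, head_pos, head_yaw):
    """Map a world-space point into headset (view) space.

    Toy version: position + yaw only. A real tracker reports a full
    6-DoF pose (position plus quaternion orientation) every frame.
    """
    # Translate the world so the head sits at the origin...
    x = point[0] - head_pos[0]
    y = point[1] - head_pos[1]
    z = point[2] - head_pos[2]
    # ...then undo the head's yaw by rotating the x/z plane back.
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    return (c * x - s * z, y, s * x + c * z)

# Head at the origin, turned 90 degrees: a point ahead on +x swings
# around in view space rather than staying put.
print(view_transform((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), math.pi / 2))
```

VR, camera-overlay AR, and ARKit-style AR all share this step; they differ only in what else gets composited into the resulting view.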

Now, what to do with all this data is the business of the developer of the application.

It's the application and its users who decide what the value proposition is. It's not stamped on the underlying technology. The technology is the substrate and the facilitator for the end-user applications, but it alone does not provide any value. Owners of a Vive in the current ecosystem, where VR applications are scarce and unpolished, can probably understand this intuitively.

The potential of VR and AR to provide value are different but the value proposition depends wholly on the application developed for the substrates.

Sorry about the long rant. I just wanted to point out that it's non-value-adding and detrimental to innovation to slap labels and qualities on things that don't possess them.

Nobody[0] wants just a Ferrari engine. Everybody wants the whole car.

0: In the general consumer context, I know somebody would want it


Microsoft is messed up in many ways.


I wasn't able to tell from the video or the article... what does mixed reality mean?

I was assuming it was something like augmented reality lite. This just looks like VR-lite.


Who knows what the marketing blender will do to these terms but this is what they've traditionally been:

VR - Virtual Reality. Your field of view is completely blocked by a display that shows you something completely artificial. Pro: Complete visual virtual immersion. Cons: No idea what's going on outside.

AR - Augmented Reality. You look through glasses that have a transparent display of some kind. Virtual objects are placed into the real environment right in your field of view. Pro: No latency view of the real world. Con: Aligning objects and keeping latency low enough to blend well is hard!

HUD - Heads Up Display. You look through glasses that have a transparent display of some kind. Virtual objects are placed onto the display, but not into the real environment right in your field of view. Pro: Can display various pieces of data where you are looking anyway. Con: Not meant to align or register with anything you are looking at.

MR - Mixed Reality. Your field of view is completely blocked by a display. Cameras capture the real environment and present it on the display, where virtual objects are mixed into the capture. Pro: Mixing the environment and keeping things synced is easier than AR. Con: The display will be just a hair behind what you would normally see, so it can feel a bit off.


That MR description sounds about right, but I didn't see any of that in the video. Could have missed it, though... seems like there were lots of sub-second cuts in the editing...


MR is just Microsoft branding. The new headsets are VR headsets with integrated tracking cameras, so there is no need for Vive-style Lighthouse trackers or Rift-style external cameras.


Microsoft branding? This is the same thing Magic Leap has been touting, it's an actual term in the industry.


Magic Leap is creating glasses which overlay things onto the real world. These are VR headsets - your view is completely blocked.


It's an actual term to describe an entire spectrum from augmented to virtual reality. It's an incredibly confusing term to brand a product that very clearly falls on a specific part of that spectrum.

People also fling critique at Magic Leap for the same problem, since it is very clearly an AR device.


Because tracking is done via cameras (instead of more cumbersome room sensors like the Oculus/Vive use), those cameras could also be used to display the surroundings and augment them, like a lower-quality HoloLens.

So there is support for augmented reality, but it looks like most of the demos currently being made are for VR.


People in this set of threads keep jumping to the conclusion that because there are cameras on the headsets, you'll be able to integrate real-world visuals to create AR-style experiences.

It would make a lot of sense, but there is no support in the API for the low cost devices for doing any of these things. The cameras are purely used for tracking purposes at this point and none of the HoloLens-style AR experiences that would seem to make sense are possible with this generation of low cost MR headsets and APIs.


What's kind of frustrating is that MS did show concept videos of what's called "Spatial Capture" on the HoloLens running on the MR Display, so that you would at least have the gross shape of large objects in your VR space. But that hasn't been the case so far, and it's been difficult to suss out documentation that is specific to the VR headset, rather than just old HoloLens docs.


If they can, it's not a feature that has yet been released. There is no data about the outside world coming in to the headset.


> there is support for augmented reality

Unfortunately this isn't the case. It's just pure VR.


It's a cross between AR and VR in which a different virtual world is inserted into the real world [1]. It predates the term, but the HoloChess scene in Star Wars is to me a good example. The contrast is with AR, in which the real world is annotated with virtual info, and VR, in which the real world is hidden. But the terms are a bit blurred.

[1] https://en.wikipedia.org/wiki/Mixed_reality


This isn't how Microsoft is using the term. For them it's essentially a hollow marketing differentiator and a way to lump Hololens and inside-out tracking under the same banner.

To all intents and purposes just substitute "VR" whenever Microsoft says "Mixed reality".

I really hope their usage doesn't catch on.


Since when does augmented reality only annotate the world? This seems to be a meme spread by Google Glass, which wasn't even augmented reality since it didn't use reality in any way.


Agree - 'annotate' isn't rich enough. My bad.


I wrote an article about how Microsoft thinks about mixed reality in our developer documentation: https://developer.microsoft.com/en-us/windows/mixed-reality/.... The term comes from a paper published in 1994 by Paul Milgram and Fumio Kishino. Our usage of the term mixed reality derives from the concepts they introduced there, in particular the virtuality continuum.

It is fair to say that the headsets being introduced this holiday target VR experiences. That said, we are building a platform that covers the entire spectrum of devices (from HoloLens to all the new headsets coming this holiday). The platform is consistent between all of the devices. In fact, we made very few additions to the API set from the original APIs introduced with HoloLens to support these new devices. Since the platform supports the broad spectrum of devices, and it's a single API set for all of them, it only makes sense to refer to this category as mixed reality.


It's a platform. The same apps can work in AR headsets (like HoloLens) and VR headsets (like the ones made by Acer, Dell, and HP).


Your reaction makes me think "mixed" is just the wrong word entirely for this product. To me, mixed has a slightly-negative connotation (mixed reception, mixed feedback).


Technically speaking, the product is called an "occlusive display". It's in the "Microsoft Mixed Reality" product line, which included HoloLens.

It's a shitty bit of marketing legerdemain for sure, but if you dig into the docs you'll see they never literally claim these headsets are MR devices. I guess they are "internally consistent" is what I'm saying.

I have the HP headset, and I like it. I find it more comfortable than the Rift. I find text a lot easier to read in it. I'm not going to be playing hardcore games in VR, because to me that's just not what VR is for, that's what regular PC gaming is for.


The platform provides development tools for AR applications as well. Some demos have shown using the headset cameras to bring in real space elements for AR applications.


I wish the article would have gone into more detail about the tracking on the controllers. My biggest concern with those is that they might not work well in games where you have to reach and grab something not currently in your field of view, like guns holstered on your back or at your hips.


This article takes a deep dive on the controllers:

https://www.roadtovr.com/microsoft-mixed-reality-vr-motion-c...


Thanks, that _is_ a bit more detailed. Unfortunately, it seems like my concerns were well founded:

> And for times when your hands will go out of the camera’s field of view, Microsoft is doing its best to compensate. When that happens, the system relies purely on the controller’s on-board IMU to estimate positional movement until it reappears in the camera’s view. This works well enough for quick jumps in and out of the camera’s view, but after a second or two, the IMU-only tracking estimation is too unreliable, and it appears that the system will eventually freeze the location of the controllers in the air and only feed them the rotation data from the IMU

Not a lot of detail there on how well the IMU-only tracking works, but it doesn't sound like it's all that accurate. And I guess there's only one front-facing camera, so it'll be pretty easy to lose tracking if you ever need to interact with something outside your FOV.
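The freeze-after-a-couple-of-seconds behavior quoted above is inherent to IMU dead reckoning: position comes from integrating acceleration twice, so even a tiny sensor bias grows roughly quadratically with time. A toy simulation, where the 0.05 m/s² bias is purely an illustrative assumption, not a measured spec:

```python
def dead_reckon_drift(accel_bias, dt, duration):
    """Position error from double-integrating a constant accelerometer
    bias: velocity error accumulates linearly, position quadratically."""
    vel_err = 0.0
    pos_err = 0.0
    for _ in range(int(duration / dt)):
        vel_err += accel_bias * dt   # first integration: velocity
        pos_err += vel_err * dt      # second integration: position
    return pos_err

# An assumed 0.05 m/s^2 bias, sampled at 1 kHz:
for t in (0.5, 1.0, 2.0, 3.0):
    print(f"after {t:.1f} s: ~{dead_reckon_drift(0.05, 0.001, t) * 100:.1f} cm of drift")
```

The quadratic growth is why a second or two of IMU-only tracking is tolerable but anything longer forces the system to freeze position and fall back to rotation only.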


This same thing happens with the Vive controllers. The shape of the controller and how you use it make it much more susceptible to occlusion issues than the headset. It also has an internal IMU to attempt to compensate, but it's only good for about 3 seconds before you start to notice the drift. There are a lot of sub-3 second occlusion events that you just never notice, otherwise. So who knows, I'd have to see the MS controller in action to get a better idea. I wouldn't be surprised if MS could come up with a decent mitigation.


The Vive has two base stations which are supposed to be positioned in opposite corners of the room. In order for the controller to lose tracking, it has to be hidden from the view of _both_ of those base stations at the same time. Still possible, but I suspect that will happen much less frequently than with this system, which as far as I can tell will lose tracking whenever your controller goes outside the view of a single, front-facing camera on the headset itself.

For example, if you reach behind your back it loses tracking. If you look in one direction and aim a gun in the other, it loses tracking, if you look up and reach for pistols holstered at your hips, it loses tracking, etc. Someone please correct me if I'm wrong, because as-is it sounds to me like this system isn't going to work very well for a lot of the more action-based games in VR.
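The tracking condition being debated here is just a logical OR over per-sensor visibility: with two Lighthouse stations the controller must be hidden from both at once, while a single headset-mounted camera has no fallback. A toy line-of-sight sketch (spherical occluders are an illustrative simplification):

```python
import math

def visible(station, target, occluders):
    """True if the straight line from a tracking station (or camera)
    to the target misses every occluder, modeled as (center, radius)
    spheres."""
    sx, sy, sz = station
    dx, dy, dz = target[0] - sx, target[1] - sy, target[2] - sz
    seg_len2 = dx * dx + dy * dy + dz * dz
    for (cx, cy, cz), radius in occluders:
        # Closest point on the station-to-target segment to the sphere.
        t = ((cx - sx) * dx + (cy - sy) * dy + (cz - sz) * dz) / seg_len2
        t = max(0.0, min(1.0, t))
        closest = (sx + t * dx, sy + t * dy, sz + t * dz)
        if math.dist(closest, (cx, cy, cz)) < radius:
            return False
    return True

def optically_tracked(stations, controller, occluders):
    """Tracking survives as long as ANY station still sees the controller;
    a single headset-mounted camera is the one-station special case."""
    return any(visible(s, controller, occluders) for s in stations)
```

With two stations in opposite corners, a body-sized occluder that blocks one sightline usually leaves the other clear; reduce the list to one station and the same occluder kills optical tracking outright.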


There are definitely "shadow spaces" in even an ideal Vive setup where your body could be occluding one base station and your hand holding the controller could be occluding the other.

It's really going to depend on the application. You don't focus your attention uniformly through the 3D space. If the application design has important interaction points that just so happen to intersect with those shadow spaces, you'll find yourself in a very frustrating situation of never being able to get good tracking. There are times when I find I can't finish a drawing in Tilt Brush or a machine in Fantastic Contraption very easily because the detail I want to work on is in a shadow space. And for many reasons, I think gun games are quite far from the best uses of VR. I am sure a certain class of gamers will like it, but I don't think that class intersects well with current, 2D-display FPS gamers, and I don't think it will be anything close to the majority use of VR.

So yes, some UI designs will be ideal for one system and particularly buggy on another. The Vive is optimized for broad-stroke gesture control radiating out from the user with (roughly, minus the shadow spaces) equal precision 360 degrees around the user. The Windows MR headsets are optimized for fine-detail control centered in front of the user. If you like playing wave shooters and only wave shooters, I suppose you'll want a Vive. Otherwise, I don't think it's a completely cut-and-dried situation.


> There are definitely "shadow spaces" in even an ideal Vive setup where you're body could be occluding one base station and your hand holding the controller could be occluding the other.

Huh, I've never really experienced that. Or if I have, it doesn't happen frequently enough for me to notice. It does happen in some spaces outside my play area, beyond the guardian system, but inside it tracking is near perfect. I have a Rift, not a Vive though, so maybe this is an issue specific to the Vive? My Rift uses 3 sensors, not 2, so perhaps the occlusion issue is lessened by that. Or perhaps your Lighthouses are just positioned a bit strangely?

> I think gun-games are quite far from the best uses of VR. I am sure a certain class of gamers will like it, but I don't think that class intersects well with current, 2D-display FPS gamers

Agreed. There are other games which would suffer from this type of occlusion issue though. Echo Arena, for example, often requires you to grab onto a surface with one hand while turning your head in a completely different direction to look at your surroundings. Any game that requires you to interact with something not in your field of view would have this problem.

For non-game applications like Tilt Brush, Google Blocks, or Google Earth though I can see this headset working just fine. Casual games which involve primarily working with your hands might also work well. (Job Simulator, Rick and Morty VR, etc. Not Fantastic Contraption though; that one requires you to reach behind your head to grab new parts, if I recall correctly.)


Isn't Microsoft jumbled anyway? In my opinion, they just want to get their hands in everything...


I really feel like going through my post history on here and on reddit to point out all the people that said this was going to be a "game changer" that would effectively kill Oculus and Valve's VR hopes. I was skeptical back then and this only confirms that my skepticism was warranted. Everyone was saying that MS would be able to take a hit on the price and would effectively neuter the higher-price-point VR headsets. I kept making the point that some compromise would have to be made to keep the price point down and several people fell back to Microsoft's financial status as proof of their ability to deliver a competitive product at a lower cost. Glad to see that I wasn't crazy then and that this is exactly what's happening here.



