But I'm really happy Microsoft has chosen to create something similar - I had hoped they would. Their implementation seems quite solid. I wonder if it supports libraries like Three.js yet.
If anyone is wondering why you might use this, the reason is pretty simple. Before this, there were only two ways to build HoloLens apps: Unity, or C++/DirectX. DX is great but slow to develop; Unity is fast to develop but requires a ton of prerequisite knowledge, and it consumes a ton of RAM. The baseline for this project will probably be similar to mine (around 15MB), whereas Unity's baseline in my experience is around 10x that, at 150MB. This is substantial on HoloLens, because the maximum amount of RAM allotted to any application is 1GB.
I'm looking forward to seeing this bring web developers into the new world of MR. I'll be interested to see if this can be coupled with React VR for rapid prototyping.
This seems like a lot of work. Couldn't Microsoft just support the WebVR API in Edge, or some other kind of AR extension to Edge? That is all I would want. Then I could just use Three.js or Babylon.js with it from a normal website, and there would be no need to ship custom desktop applications with an embedded engine.
I guess I do not want to ship applications if I don't have to.
As far as I know, Microsoft plans to support WebVR and contributes to the spec work. Microsoft's involvement and requirements were the reason for the most recent (1.1) spec update. This was explained by Brandon Jones here: http://blog.tojicode.com/2016/09/update-on-webvr-spec-chrome...
This isn't aimed at regular desktop applications; it's aimed at creating applications for the HoloLens, Microsoft's Mixed Reality platform. You could use this to make desktop apps, but I'm not sure what the point of a holographic desktop app would be, or what that would even mean.
They're referring to Edge on HoloLens, and it's something I've thought a lot about too. Something like `canvas.getContext('webgl-holographic')` to harness holographic-space rendering from the browser.
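A rough sketch of how that could look in page script, assuming the hypothetical `'webgl-holographic'` context name from this comment (it is not a real API today): probe for the holographic context first, then fall back to a plain WebGL context.

```javascript
// Probe for a hypothetical holographic WebGL context, falling back to
// ordinary WebGL. 'webgl-holographic' is a made-up name from this thread,
// not a shipping API.
function getHolographicContext(canvas) {
  const candidates = ['webgl-holographic', 'webgl', 'experimental-webgl'];
  for (const name of candidates) {
    const ctx = canvas.getContext(name);
    if (ctx) return { name, ctx };
  }
  return null;
}
```

On a regular desktop browser this would simply hand back the normal WebGL context, so the same page could degrade gracefully outside the headset.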
One thing that is not clear with HoloJS: how do you acquire the head position and orientation on the JS side to be able to do gaze-centered selection (or, more accurately, nose-pointer selection, but people tend to call it gaze)? It seems to pass the view and projection matrices directly to the shader code, and I haven't seen where they are available in a scriptable API.
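If script code could read that view matrix back (an assumption - HoloJS may not expose it), the head pose falls out with a little linear algebra: for a rigid view matrix [R | t], the head position is -Rᵀt and the gaze direction is the world-space forward (-Z) axis. A sketch, using WebGL's column-major 16-element layout:

```javascript
// Recover head position and gaze direction from a rigid view matrix.
// m is a 16-element column-major view matrix (WebGL convention), i.e. [R | t].
function headPoseFromViewMatrix(m) {
  // Camera (head) position in world space: -R^T * t
  const position = [
    -(m[0] * m[12] + m[1] * m[13] + m[2] * m[14]),
    -(m[4] * m[12] + m[5] * m[13] + m[6] * m[14]),
    -(m[8] * m[12] + m[9] * m[13] + m[10] * m[14]),
  ];
  // Gaze (forward) direction: the world-space -Z axis, i.e. minus the third row of R
  const forward = [-m[2], -m[6], -m[10]];
  return { position, forward };
}
```

Casting a ray from `position` along `forward` against scene geometry would then give the gaze (nose-pointer) selection target.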
It would be better to include an extension to the WebVR API to provide the environment geometry. There is a lot of work going into WebVR applications and frameworks. It'd be a shame if that work couldn't be leveraged for AR.
Also, I feel pretty strongly that automatically applying the stereo rendering is a mistake. It prevents making any per-eye changes, which is necessary for displaying stereo photos, photospheres, or cubemaps. Those can be a huge visual-fidelity cheat on limited systems.
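To make the per-eye point concrete, here is a minimal sketch of something automatic stereo rendering cannot express: a top-bottom stereo photosphere, where each eye must sample a different half of the same texture. The `eyeIndex` hook (0 = left, 1 = right) is hypothetical, and the left-eye-on-top layout is just one common convention:

```javascript
// Per-eye UV window for a top-bottom stereo image: each eye reads a
// different half of the texture. Left eye on top is an assumed layout;
// stereo formats vary.
function uvWindowForEye(eyeIndex) {
  return { uOffset: 0, vOffset: eyeIndex * 0.5, uScale: 1, vScale: 0.5 };
}
```

A renderer that blindly replays identical draw calls for both eyes has nowhere to plug `eyeIndex` in, which is why an automatic stereo pass rules this technique out.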
> It would be better to include an extension to the WebVR API to provide the environment geometry.
This is on their radar by the looks of it: https://github.com/Microsoft/HoloJS/issues/4 I've also thought about this; it could be a bit tricky. The meshes are quite large, and getting them over the bridge could be a bottleneck.
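A back-of-envelope estimate of why the bridge could hurt (the per-element sizes are standard, but any triangle count you plug in is an illustrative guess, not a measured HoloLens number):

```javascript
// Raw binary size of an indexed triangle mesh: 3 float32 coordinates per
// vertex plus 3 uint32 indices per triangle, assuming the worst case of
// no vertex sharing.
function meshPayloadBytes(triangles) {
  const vertices = triangles * 3;         // no sharing: 3 unique vertices each
  const positionBytes = vertices * 3 * 4; // xyz as 32-bit floats
  const indexBytes = triangles * 3 * 4;   // 32-bit indices
  return positionBytes + indexBytes;
}
```

Even a 100,000-triangle room scan comes out around 4.8MB raw; serializing that to JSON strings for a JS bridge would multiply it several times over, so transferable typed arrays would likely be essential.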
> Also, I feel pretty strongly that automatically applying the stereo rendering is a mistake.
This happens to be a limitation that (I believe) is specific to their ANGLE fork, which may change at some point.
To be honest, I don't want a webapp. Webapps may have a good thirty-second use case, but after that, I want something that sits with all my other programs, that I can pin to a menu/quick-launch bar, that runs in a different process space so a bug in another program doesn't take down mine, and so on. I want the same UI/UX as a native app, and that's something webapps still fail terribly at.
Those sound like an odd bunch of criteria. I'd go for a list more like:
1. Performance (either speed of opening, if it's the kind of app you open and close a lot, or responsiveness in use)
2. Quality of user experience
3. Stability
4. Cost
5. Features
6. Interoperability
At the moment, across a range of app types, it's roughly 50/50 between web apps and native apps, and I occasionally switch allegiance.
I definitely agree (and prefer writing/using native apps), but I would say that UWP apps get all of the UX benefits of a native app (the JS apps for UWP get access to all the same libraries as the .NET/native apps; I don't know about default styling, but I'm sure someone has a template).
Also, UWP apps are full apps like any other windows app.
Totally. I'm writing an app myself using UWP, albeit in C#. I was addressing the wanting-it-to-be-a-webapp part, not the UWP or written-in-JavaScript parts, which are a totally different issue.
Agreed, this would be the best way forward IMO. There is a benefit to shipping applications if you need access to other system-level APIs. But for the general use case of rendering, well, it's a bit of a hassle to have to create an app.
How does this relate to the existing Windows support for JS apps via wwahost and the WinRT JS projection? Does that not work on HoloLens for some reason?
This uses the same engine, but that host doesn't provide any support for 3D; you wouldn't be able to use WebGL in it, for example. This library exposes a WebGL interface to the JS environment.
Interesting. With a little hackery, it should be fairly easy to get this working with PlayCanvas. I'm not sure why it needs to use a new element type though (canvas3D).
Canvas does not support all the 3D elements needed for spatial recognition and accessibility via 3D audio. I'm hoping to read through the code and see canvas3D support for those features.
What does Canvas have to do with audio? Web Audio has the necessary components to do spatialized audio. And spatial recognition is an input. Canvas is for output.