One thing that is not clear with HoloJS: how do you acquire the head position and orientation on the JS side to do gaze-centered selection (more accurately, nose-pointer selection, but people tend to call it gaze)? It seems to pass the view and projection matrices directly to the shader code. I haven't seen where it is available in a scriptable API.
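
For context, gaze selection usually boils down to a raycast from the head pose along its forward vector. A minimal sketch (pure functions; the head position/forward inputs are exactly the data I can't find an API for):

  // Ray/sphere intersection: returns the hit distance t, or null on a miss.
  // Assumes d is a unit vector.
  function raySphere(o, d, c, r) {
    const oc = [o[0] - c[0], o[1] - c[1], o[2] - c[2]];
    const b = oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2];
    const q = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
    const disc = b*b - q;
    if (disc < 0) return null;
    const t = -b - Math.sqrt(disc);
    return t >= 0 ? t : null;
  }

  // Gaze pick: nearest object whose bounding sphere the head ray hits.
  function gazePick(headPos, headForward, objects) {
    let best = null, bestT = Infinity;
    for (const obj of objects) {
      const t = raySphere(headPos, headForward, obj.center, obj.radius);
      if (t !== null && t < bestT) { bestT = t; best = obj; }
    }
    return best;
  }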

It would be better to include an extension to the WebVR API to provide the environment geometry. There is a lot of work going into WebVR applications and frameworks. It'd be a shame if that work couldn't be leveraged for AR.
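
Something like this, say (every member below beyond getVRDisplays() and capabilities is hypothetical; it's just the shape I'd want such an extension to take):

  // Hypothetical WebVR extension for environment geometry. Neither
  // hasEnvironmentGeometry nor getEnvironmentMeshes() exists today.
  navigator.getVRDisplays().then(([display]) => {
    if (display.capabilities.hasEnvironmentGeometry) {
      display.getEnvironmentMeshes().then((meshes) => {
        for (const m of meshes) {
          // Flat typed arrays, so existing WebGL frameworks could
          // upload them to vertex/index buffers unchanged.
          console.log(m.vertices.length / 3, 'verts,',
                      m.indices.length / 3, 'tris');
        }
      });
    }
  });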

Also, I feel pretty strongly that automatically applying the stereo rendering is a mistake. It prevents making any per-eye changes, which you need for displaying stereo photos, photospheres, or cubemaps; those can be a huge visual-fidelity cheat on limited systems.
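
With WebVR's model, where the app drives both eyes itself, per-eye content is straightforward. A sketch against the WebVR 1.1 API, assuming `gl`, `vrDisplay`, `leftTex`/`rightTex`, and a `drawQuad(texture, proj, view)` helper already exist:

  const frameData = new VRFrameData();
  function onFrame() {
    vrDisplay.getFrameData(frameData);
    // Left eye: left half of the canvas, left image of the stereo pair.
    gl.viewport(0, 0, gl.canvas.width / 2, gl.canvas.height);
    drawQuad(leftTex, frameData.leftProjectionMatrix, frameData.leftViewMatrix);
    // Right eye: right half, right image.
    gl.viewport(gl.canvas.width / 2, 0, gl.canvas.width / 2, gl.canvas.height);
    drawQuad(rightTex, frameData.rightProjectionMatrix, frameData.rightViewMatrix);
    vrDisplay.submitFrame();
    vrDisplay.requestAnimationFrame(onFrame);
  }
  vrDisplay.requestAnimationFrame(onFrame);

Auto-stereo makes that impossible: both eyes get the same draw calls.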




> I haven't seen where it is available in a scriptable API.

They're provided at `window.getViewMatrix()` and `window.getCameraPositionVector()`. See here: https://github.com/Microsoft/HoloJS/blob/master/HoloJS/HoloJ...
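
So the gaze ray falls out of those two calls; roughly (a sketch, assuming getViewMatrix() returns a column-major, WebGL-style flat 4x4, in which case the third row of the rotation block is the camera's backward axis):

  const m = window.getViewMatrix();
  const origin = window.getCameraPositionVector();
  // Negate the camera's backward axis to get the look direction,
  // normalizing in case the matrix carries any scale.
  const back = [m[2], m[6], m[10]];
  const len = Math.hypot(back[0], back[1], back[2]);
  const gazeDir = [-back[0] / len, -back[1] / len, -back[2] / len];
  // origin + t * gazeDir is the nose-pointer ray for hit testing.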

> It would be better to include an extension to the WebVR API to provide the environment geometry.

This is on their radar, by the looks of it: https://github.com/Microsoft/HoloJS/issues/4 I've thought about this too; it could be a bit tricky. The meshes are quite large, and getting them over the bridge could be a bottleneck.
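
One mitigation: ship the geometry across as flat typed arrays per surface patch rather than per-vertex objects, so the only real cost is a copy into GL buffers. The receiving side might look like this (the message shape is made up; `gl` is assumed):

  function onSpatialMesh(msg) {
    // msg.vertexBuffer / msg.indexBuffer: hypothetical ArrayBuffers
    // delivered over the native bridge.
    const vertices = new Float32Array(msg.vertexBuffer); // x,y,z triples
    const indices = new Uint32Array(msg.indexBuffer);
    const vbo = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
    const ibo = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
    // Note: 32-bit indices need OES_element_index_uint in WebGL 1.
    return { vbo, ibo, count: indices.length };
  }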

> Also, I feel pretty strongly that automatically applying the stereo rendering is a mistake.

I believe this is a limitation specific to their ANGLE fork, which may change at some point.



