
Thanks for the question!

It sounds like you have easy access to VR HMDs, so if you get a chance to plug into an Oculus Rift any time soon I recommend trying it out - and I'd definitely love your hands-on feedback.

We've done a fair bit of work to make sure the majority of web-based content is both legible and usable - and our UI has been built to be as intuitive as possible and to eliminate a lot of the stumbles that we, too, associate with many VR experiences.

It's a free download, and you don't have to create an account if you'd rather not - once you download it, you'll be presented with an account sign-up / log-in form where you can try out the keyboard. We also use Chromium for our entire login / account-creation flow in VR, so you can get a taste of what that feels like as well. If you want to try something like Trello, just create a throwaway account and never verify the email - then you can pull up a site like Trello or NYT and assess the usability and legibility.

I think that if you come at this comparing it to existing 2D-based collab tools like Skype / Zoom you'll have a hard time seeing the benefit, but if instead you look at how those tools fall short compared to a real-life meeting, you might see where we fit. Our goal is not to replace 2D-based methods, but to allow for a level of presence previously only possible with in-person interactions. This shines in particular when you're meeting with three or more people and content is at the core of the meeting.

Hope you get a chance to try it, and would love to hear what you think and how we can make it better!




Thanks for the reply. Incidentally, I've also worked on a bunch of the problems you guys must have had related to UX in VR. For example, I've also implemented a bunch of virtual keyboards ;-)

Are you using straight CEF, or have you modified the compositor to composite directly into a texture? IIRC CEF only provides the composited web page as a bitmap, and then you have to do repeated texture uploads, which is going to be a drag.

Does this support the Vive too, or only Oculus?


Awesome to hear that! Would love to check out your work if you have a link / video or anything like that!

We actually forked CEF and had to make a few changes to allow for the integration we needed. We do use OSR mode and update a texture that way - although we need this buffer anyway, since we're sending video frames across the peer-to-peer mesh. So even if we did go straight GPU, we would still have to read the buffer back from the texture.
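To make that flow concrete, here's a minimal sketch (not Dream's actual code - the names and the `FrameSink` type are hypothetical) of why the CPU-side buffer sticks around: CEF's off-screen rendering hands you the composited page as a BGRA pixel buffer, and one copy of those bytes can feed both the texture upload and the video encoder.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Hypothetical sink mirroring the flow described above: OSR delivers the
// composited page as a BGRA buffer, and the same CPU-side copy feeds both
// the GPU texture upload and the peer-to-peer video encoder - so a pure-GPU
// path would not remove the readback.
struct FrameSink {
    int width = 0, height = 0;
    std::vector<std::uint8_t> staging;  // BGRA, 4 bytes per pixel

    // Shaped loosely like CEF's CefRenderHandler::OnPaint (simplified).
    void onPaint(const void* buffer, int w, int h) {
        width = w;
        height = h;
        const auto* p = static_cast<const std::uint8_t*>(buffer);
        staging.assign(p, p + static_cast<std::size_t>(w) * h * 4);
        uploadToTexture();   // e.g. glTexSubImage2D with staging.data()
        queueVideoFrame();   // hand the same bytes to the video encoder
    }

    int texUploads = 0, framesQueued = 0;  // stand-ins for the real sinks
    void uploadToTexture() { ++texUploads; }
    void queueVideoFrame() { ++framesQueued; }
};
```

The key point is that `staging` is produced once per frame and consumed twice, so the CPU copy isn't wasted work even if the texture path went GPU-only.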

It's a drag, but there are a number of techniques to improve the performance. Resolution is one great lever - since the resolution of the HMD makes a high-resolution browser surface mostly wasted, reducing the browser resolution also reduces the pressure on the GPU. We can also limit frame rate based on the kind of content being shown, and leverage dirty rects to avoid re-uploading content that isn't changing. Since we're running multiple browser tabs, the latter technique isn't as useful for any single page, but it helps when a user is doing multiple things, like watching a video on the shared screen while scrolling through Wikipedia or a news site like NYT.
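The dirty-rect trick above can be sketched in a few lines - this is a self-contained illustration with a hypothetical `Rect` type, not the actual implementation. CEF reports the changed regions each paint, so only those rows need to be copied toward the texture instead of the whole frame:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstddef>
#include <vector>

// Hypothetical dirty-region type (CEF passes a list of rects to OnPaint).
struct Rect { int x, y, w, h; };

// Copy only the dirty rectangles from src into dst (both BGRA, row-major,
// `width` pixels wide) and return how many bytes were actually touched -
// typically far fewer than a full-frame re-upload when little has changed.
std::size_t applyDirtyRects(const std::vector<std::uint8_t>& src,
                            std::vector<std::uint8_t>& dst,
                            int width,
                            const std::vector<Rect>& dirty) {
    std::size_t bytes = 0;
    for (const Rect& r : dirty) {
        for (int row = r.y; row < r.y + r.h; ++row) {
            std::size_t off = (static_cast<std::size_t>(row) * width + r.x) * 4;
            std::size_t len = static_cast<std::size_t>(r.w) * 4;
            std::copy(src.begin() + off, src.begin() + off + len,
                      dst.begin() + off);
            bytes += len;
        }
    }
    return bytes;
}
```

For a static page with one blinking cursor, the dirty region is a few hundred bytes instead of megabytes per frame, which is why the technique pays off across many simultaneous tabs.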

Up until we consolidated the build for the Oculus release, we supported OpenVR - and we still do in our code, just not in the Oculus build. We've gotten a lot of interest in a Vive build since this initial release, so we might look to reintroduce it. Before pushing to Oculus, Dream would just launch off the desktop, detect which HMD you had plugged in, and launch the appropriate platform. Shouldn't be a ton of work to bring it to Steam!
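That detect-then-launch flow is simple enough to sketch - the function below is a hypothetical stand-in, not Dream's code. A real build would probe the actual runtimes (e.g. LibOVR's `ovr_Detect` and OpenVR's `vr::VR_IsHmdPresent`) and feed the results in:

```cpp
// Hypothetical sketch of the pre-consolidation launch flow: check which
// runtime/HMD is present, then pick the matching backend.
enum class Backend { Oculus, OpenVR, None };

Backend pickBackend(bool oculusPresent, bool openVrPresent) {
    if (oculusPresent) return Backend::Oculus;  // native Oculus path first
    if (openVrPresent) return Backend::OpenVR;  // covers the Vive via SteamVR
    return Backend::None;                       // no HMD: stay on desktop
}
```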



