Great work on this. I lead a remote team, and if the vision here were fully realized and available on the Oculus Go/Quest, I’d trade some travel budget to equip the entire team with it.
You asked what features it would need. My use cases are basically what we do when we bring the team to one location for face-to-face work. For typical distributed work, there isn't much we can't already do with existing tools: Slack, Google Docs, email, git, etc.
What is missing that only VR can solve?
Generally:
Space. No matter how many monitors we place on our desks, we run out of real estate quickly. Collaborative planning and problem solving often require space to visualize our thoughts and ideas.
Social presence. When human problems overshadow technical problems (i.e., all the time), the feeling of being fully present to one another is necessary.
Specifically:
A whiteboard. In VR, you could make something far better than the $10k devices on the market today. (I'm making that number up, since "request a demo" is the standard price.)
Emotionally engaging avatars. Seeing a head turn isn't enough. We need to be able to see another person's unique facial expressions in order to connect. With goggles covering half the face, I don't know if this is possible, but ML research in 3D facial reconstruction from 2D photos has shown a lot of promise. Perhaps in-painting combined with reconstruction would do the trick.
Beyond these two items, I see a large space for reimagining UI in VR. Unlike a monitor and mouse, VR knows roughly where you are looking, and in the case of Oculus, what you are saying. Voice commands contextual to gaze (true eye tracking would be better) could be combined with VR's version of a right-click context menu as a HUD. It doesn't need to be as crazy as Iron Man, but given a limited number of options, voice commands seem an ideal fit for VR.
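To make the idea concrete, here's a toy sketch of that dispatch logic: the gaze target narrows the voice grammar to a small context menu, exactly like a right-click. All the names and command sets here are invented for illustration.

```python
# Hypothetical sketch: resolve a spoken command against whatever object
# the user is currently looking at. Object names and commands are made up.

# Per-object command sets: the voice grammar shrinks to a handful of
# options depending on the gaze target, like a right-click context menu.
CONTEXT_MENUS = {
    "whiteboard": {"erase", "undo", "save snapshot"},
    "document": {"annotate", "next page", "share"},
    "avatar": {"mute", "message"},
}

def dispatch(gaze_target: str, utterance: str) -> str:
    """Map (what you're looking at, what you said) to an action."""
    commands = CONTEXT_MENUS.get(gaze_target, set())
    phrase = utterance.strip().lower()
    if phrase in commands:
        return f"{gaze_target}:{phrase}"
    return "ignored"  # utterance isn't valid for this gaze context

print(dispatch("whiteboard", "Undo"))   # whiteboard:undo
print(dispatch("avatar", "next page"))  # ignored
```

The nice property is that the recognizer only ever has to distinguish a few phrases at a time, which is much more forgiving of today's speech accuracy than open-ended dictation.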
A few years ago, I prototyped a HUD/Voice UI as the central feature in a VR browser, but I found resolution on my DK2 to be too limiting. Maybe the time is right to try again. I’d love to see someone try!
Thanks for the kind words! We are totally on board with getting Dream onto the upcoming Quest, and standalone VR headsets in general. We've had a bit of success getting businesses to install VR setups in a conference room, but when this stuff is a $400 standalone device that doesn't cannibalize your work computer or phone, I think it completely changes the viability of using VR for productivity / collab (especially on the go).
Thanks for the feedback on what kind of stuff you'd like to see! Deeply agree on the whiteboard thing, and it's one of the next features we want to tackle on our road map. In particular, we're excited about the ability to annotate either a white screen (normal whiteboard) or content that is being shared. The sensitivity of the controllers plays a big part here, but even for something like marking up a presentation or document this could go a long long way.
I agree that VR headsets have a ways to go in terms of detecting facial gestures and emotions. I've seen some really cool tech that puts galvanic response sensors in the facemask, which could do everything from detecting overall eye direction (up, down, etc.) to emoji-level gesture detection. I think over time these features will become standard in most HMDs. In the meantime, we tried to walk the line between uncanny / creepy and useful / engaging. Perhaps it doesn't come through as well in the video, but when you're in VR, just the head motion and hand position go a long long way. Also, we built our entire stack to send this data as quickly as we get it from the HW and keep our latency down; this makes for a surprisingly engaging experience, and we'd love to hear what you think if you get to try it sometime!
We are keen to explore adding voice and contextual gaze-based actions. We had a bit of this built before launch, but the feature set we launched with didn't really use much of it. With regards to voice, we are planning to integrate with something like Google's voice engine, but we wanted to make sure we first had a good text entry method for things voice has a really hard time with, like URLs, passwords, and usernames. These were in the critical user flow, since they're required for logging in / creating accounts as well as selecting what kind of content to share. We also added Dropbox / Google Drive integrations to make bringing in content more fluid and intuitive, so you can kind of see where we've been prioritizing, but we definitely have a long way to go and more to build!