Daydream Is Google’s Android-Powered VR Platform (theverge.com)
183 points by T-A on May 18, 2016 | 108 comments



We're less than 2 months away from the first real releases of VR equipment and already the ecosystem is fractured into at least 4 different platforms/SDKs/styles: Oculus, SteamVR/OpenVR/Vive, Google Daydream, PlayStation VR.

I fear the fracturing will make total adoption lower and slower, as developers will have to choose sides or spend way more time developing for all platforms.


Yes, a single ecosystem would help adoption, but it would also significantly reduce innovation. It's so early in the lifecycle that we're better off emphasizing innovation.

Anybody buying now has to recognize that anything they're buying will be obsolete in a year or two. Once you realize that, you realize it doesn't matter too much if you pick the 'wrong' one. You'll be replacing your headset with v2 or v3 of the winner anyways.


"It's so early in the lifecycle that we're better off emphasizing innovation."

Early in the lifecycle, when there is a lot of R&D overhead and risk, is where companies actually have incentive to collaborate. It's very unlikely that the ecosystem will "open" later, instead we'll have to (as you said) accept the ecosystem of whoever wins.


And later in the lifecycle there are more vertical integrations that companies monetize, incentivizing collaboration to get as many users as possible.

It's all a part of the race to the bottom, from cloud data services to internet browsers.


It's really three levels:

High End - Desktop ( Vive / Oculus ) $600-800

Console ( PSVR ) $400-500

Mobile ( Google / Oculus / ?? ) $99-??

Console vs Desktop vs Mobile -- that's the real issue. The leap from Mobile VR to Desktop VR is huge. PSVR -- remains to be seen what Sony does on the hardware front (PS4.5 with new ATI card, or ??).


It's a good point, and I don't really see an issue.

After all, to this day gaming is distributed already between console/desktop/mobile, with each audience being large enough to ensure good profitability in every environment.


There's also high-end mobile (Samsung Gear). 300 bucks? 400? Don't know exactly.


Gear VR is only an additional $99. You have to have one of the select few phones it works with, but I don't think it's any more fair to count that in the cost of the setup than it is to count the PC in the cost of the desktop VR system, or the house in the cost of the roomscale VR system.


My bad! I went by the old Samsung Note-based price range. Had no idea they lowered the price that much.


They also gave it away to a lot of S7 buyers. So most people have it for $0.


It's the price of the shell, you still need the phone to put in it.


Consumer VR is in its infancy. Embryonic, almost. In the early days of desktop computing, how many different OSes and hardware configurations were there? In the years of the automobile before the Model T, how many wildly different types of car were available to the public?

One platform to rule them all will come in time, but we should be happy that for now the ecosystem is competitive; it means the consumer gets a voice in deciding what shape the tech will take.


Embryonic VR was the Virtual iGlasses tethered to a Pentium tower PC. Worked surprisingly well. Alas, the $600 price tag and similarly embryonic graphics tech was beyond the interest of most developers. Still have mine...


And several more contenders as well that don't get talked about as much.

One major problem is that none of them are open, and none of them have pushed to create standards; they all want to unilaterally define themselves as the standard. I think we'll get a more unified ecosystem when someone starts looking to collaborate on standards rather than exclusivity. Perhaps Google will do that with Daydream, or perhaps someone else will.


The VR Ecosystem looks a lot like the console ecosystem, which has dealt with the problem by adopting SDKs like Unity or Unreal.


True. Even then, I'm still holding my breath for Unity's VR API to support Google Cardboard. Anyone know when that's coming?



Nope. I mean this: http://docs.unity3d.com/Manual/VROverview.html

The whole point is to use Unity's single high level device independent VR SDK (which enables many internal rendering optimizations), and to avoid plugging in yet another VR SDK in addition to a bunch of other separate GearVR, Oculus, Sony, etc SDKs, and having to write complex branching code (and build configuration scripts) in my app to support multiple APIs with my own ad-hoc layers of abstraction (which is costly to maintain and soon to become obsolete). That's Unity's job, and why I gladly paid them for a license.

Unity has a device independent VR API that supports Oculus, GearVR, Sony and other devices, but as far as I know it does not currently support Google's Cardboard SDK. They've promised to support it, but they haven't delivered yet, nor said when to expect it.

Unless Google wants to pay me for wasting my time, the last thing in the world I want to do is spend a lot of time and effort duct-taping Google's Cardboard SDK onto the side of my existing app that already uses Unity's built-in VR SDK, only to have Unity finally release the built-in support for Google Cardboard that they've been promising.

The potential and promise is there, but until I can just tick the "Virtual Reality Supported" checkbox in the Unity player settings to transparently support Google Cardboard, GearVR, Oculus, Sony, Vive, etc, the rubber hasn't hit the road.

Until then, you have to do short term dead end hacks like this, which will probably break every time anything it depends on releases a new version (which is often): https://github.com/ludo6577/VrMultiplatform


I'm waiting on that as well. But things are so early on, and some of the SDKs Unity has to integrate haven't even hit v1.0, so it's a bit hard to support.

Also, even if they do automate handling the rendering and input aspects of each VR SDK, you will still have to deal with the various different platform / store level aspects. Google Play versus Oculus Home versus Steam versus whatever else comes in. It's already a struggle maintaining separate Rift/GearVR builds in Unity, when I have to maintain 4 different environments and their platforms? Yeah...


It already works on Gear VR (Samsung's phone-based VR thing), so we know it's feasible for it to run on phones that way.


What I mean is being able to check "Virtual Reality Supported" and have it support Google Cardboard just as well as it supports Oculus and GearVR, using Unity's built-in device-independent VR API.


Availability of head tracking, movement tracking around a space ("room scale VR"), and input mechanisms varies enormously between these. Room scale VR on a Vive with Steam VR controllers is just not the same thing as google cardboard.


I agree, each brand of device is vastly different, especially when it comes to input. There's a common core that needs to be handled at a low level under the hood, but in an extensible manner. Unity has already announced their intention of tackling this terribly difficult problem, but now we're just waiting for them to deliver on their promises.

They're the only ones in the position to really solve the problem at the right level, since it interacts so deeply with their rendering and input system, which is all under the hood, so it's not possible for external developers to modify it, hook into it, or fix problems.

Unity needs to integrate the input system with their new UGUI user interface toolkit, and integrate it with VR devices so all the different kinds of input devices plug in and are handled consistently. So they need to get very friendly and cooperative with all the different VR and input device hardware manufacturers they support, and do whatever it takes including flattering, shaming or partying them into cooperating if they foolishly decide they'd rather lock their developers into writing code specifically for their device, so that it doesn't work with other manufacturer's devices on purpose. (Here's looking at you, Apple!)

Unity has announced a new input system [1] and published a prototype [2], and asked for developers to give them feedback [3], but it's way too early to use in a product.

I'm optimistic that they mean what they say, and glad they're publishing the prototype and asking for feedback from developers, but I appreciate it's a difficult problem that will take a while to get right.

I've read over the documentation, and it seems to take a nice modular approach that can abstract away many differences between input devices, but it's just a hard problem by its very nature, and there will always be special circumstances for particular input devices that you need to handle on a case-by-case basis. So both the VR API and the input API should support hooks and customizations and plug-ins for special purpose hardware, and ways of querying capabilities (like position tracking), enumerating devices, reflecting on and hooking into the model.

I'm disappointed that neither the new UGUI user interface system nor the new input system prototype seem to fully support multitouch input and gestures in a nice way, like the TouchScript library does [4], but I hope they eventually support that as well.

One frustrating example of a bad API that both the Oculus and Unity SDKs have is the "recenter" call [5], which resets some hidden state inside the tracking code so the current direction of your head is "forward". You can't read and write the current yaw nulling offset [6], you can just "recenter", whatever that means. So there's no way to smoothly animate to recenter -- it always jerks your head around. They should expose that hidden state, and let me read and write it, so I can recenter smoothly. It's never a good idea to have a bunch of magic and hidden state behind an API like that. Plus the documentation is terrible -- they don't define what any of the terms mean, what the model is, or what the actual effect is mathematically. (Does it reset the yaw around the neck, resulting in a discontinuous jump in eye position? How can I tell what those measurements are, or know what the model really is? Does recenter work differently with devices that track the actual absolute position of your head, as opposed to estimating it from a model of the eye position relative to the neck?)
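The fix described above can be sketched concretely. This is a hypothetical API (none of these names exist in Unity or the Oculus SDK): the yaw-nulling offset is exposed as plain readable/writable state, so recentering can be animated at a bounded rate instead of snapping:

```python
import math

class YawRecenter:
    """Hypothetical sketch of a 'recenter' API that exposes its state
    instead of hiding it: the yaw-nulling offset is a readable and
    writable field, so the app can animate toward the target rather
    than jerking the view around. Not any real SDK's interface."""

    def __init__(self):
        self.yaw_offset = 0.0  # radians subtracted from the raw sensor yaw

    def apply(self, raw_yaw):
        """World-space yaw the app should use for rendering."""
        return raw_yaw - self.yaw_offset

    def recenter_instant(self, raw_yaw):
        """The usual opaque behavior: current heading becomes 'forward'."""
        self.yaw_offset = raw_yaw

    def recenter_smooth(self, raw_yaw, dt, rate=math.pi):
        """Move the offset toward raw_yaw at a bounded angular rate
        (rad/s), so the recenter is continuous across frames."""
        # Wrap the error into (-pi, pi] so we always turn the short way.
        error = math.atan2(math.sin(raw_yaw - self.yaw_offset),
                           math.cos(raw_yaw - self.yaw_offset))
        step = max(-rate * dt, min(rate * dt, error))
        self.yaw_offset += step
```

Calling `recenter_smooth` once per frame with the frame's `dt` converges on the same end state as `recenter_instant`, without the discontinuous jump.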

The input system should also support other kinds of gesture recognition like motion tracking (staring, tilting, pecking, shaking and nodding your head), popular "quantified self" input devices like the ShakeWeight with its optional heart rate monitor [7], network and virtual device adaptation, emulating and debugging hardware devices in the editor, etc. But right now it's so early in the game that the new input system prototype isn't yet integrated with the new VR API or the (less) new UGUI toolkit.

Oculus has examples of how to integrate head pointed ray casting with UGUI using reticles and cursors, but that code is brittle and dependent on their particular SDK and app-specific utilities, and it copies and modifies a bunch of the Unity UGUI input tracking code instead of subclassing and hooking into it, because Unity didn't expose the public virtual methods and hooks that they needed. They've beseeched Unity to fix that by making their API more public and hookier, but for now, Oculus's example UGUI integration code is pretty hacky, complex, and not a long term solution.

All the input tracking stuff will work much better when Unity fixes it under the hood (and lets developers open up the hood and hot-rod the fuck out of it, or shop for competing solutions on the asset store), instead of having 6 different VR and multitouch SDKs that all solve it in 6 different but overlapping ways, which you can't integrate together.

[1] http://blogs.unity3d.com/2016/04/12/developing-the-new-input...

[2] https://sites.google.com/a/unity3d.com/unity-input-advisory-...

[3] http://forum.unity3d.com/threads/welcome-new-input-system-re...

[4] http://touchscript.github.io/

[5] http://docs.unity3d.com/ScriptReference/VR.InputTracking.Rec...

[6] see "nulling problem" -- this is a GREAT paper: http://www.billbuxton.com/lexical.html

[7] https://www.youtube.com/watch?v=JImGs2Xysxs


I highly recommend that anyone working with user interface input systems should read this classic paper by Bill Buxton:

Lexical and Pragmatic Considerations of Input Structures

http://www.billbuxton.com/lexical.html

PRAGMATICS & DEVICE INDEPENDENCE

"From the application programmer's perspective, this is a valuable feature. However, for the purposes of specifying systems from the user's point of view, these abstractions are of very limited benefit. As Baecker (1980b) has pointed out, the effectiveness of a particular user interface is often due to the use of a particular device, and that effectiveness will be lost if that device were replaced by some other of the same logical class. For example, we have a system (Fedorkow, Buxton & Smith, 1978) whose interface depends on the simultaneous manipulation of four joysticks. Now in spite of tablets and joysticks both being "locator" devices, it is clear that they are not interchangeable in this situation. We cannot simultaneously manipulate four tablets. Thus, for the full potential of device independence to be realized, such pragmatic considerations must be incorporated into our overall specification model so that appropriate equivalencies can be determined in a methodological way. (That is, in specifying a generic device, we must also include the required pragmatic attributes. But to do so, we must develop a taxonomy of such attributes, just as we have developed a taxonomy of virtual devices.)"

Also check out Proton, which is a brilliant regular expression based multi-touch gesture tracking system, which would work very nicely for VR and multi-device applications. Proton is to traditional ad-hoc gesture tracking as Relax/NG is to XML Schema.

http://vis.berkeley.edu/papers/proton/


Check out InstantVR [1] as one input integration solution. I'm experimenting with it for switching between Kinect, Leap, and Hydra for hand/arm support. A completely native solution would be much better, but this is a decent stop gap until we get that.

[1] https://www.assetstore.unity3d.com/en/#!/content/23009


That looks quite useful! Thanks for the recommendation.


Once you have tried the HTC Vive, you'll know that any other VR system is already obsolete, be it because of the experience or the hardware. I would venture to say that the Vive makes all other systems look ridiculous.

My bet is that in the future, VR will become more and more similar to what the Vive is offering now. Room-scale VR is a game changer, as much as VR itself.


But Oculus will have room scale soon too: Oculus Touch is coming soon. It'll give controllers that add finger control to the mix, and a second camera to enable room scale.

I'm waiting for reviews and availability of the GTX 1070, as well as better headset availability before pulling the trigger on a headset. Hopefully Oculus has their controllers out by then, otherwise I probably won't wait for them. (I won't wait for AMD either; if they want me to consider Polaris, they better release it ASAP).


> But Oculus will have room scale soon too: Oculus Touch is coming soon.

They haven't even shipped all their headset preorders yet, I doubt "soon" is the word you want there.


Well.. yeah?

New technologies always result in lots of competing platforms, then consolidation, then stagnation, then a new competitor entering, new features, market chaos, then consolidation again.

And people will complain at every stage.

And it has always been thus.


Let us also not forget Google funded company Magic Leap going head to head with MS Hololens in the Augmented Reality space.


People have differing needs and differing devices, so fracturing under the title "VR device" is perfectly normal. What developers need is a standardised API to access all of them with parts specialized to cater for each device's unique functionalities. Which will eventually happen. I cite ios+android libraries/frameworks as an example.


I'm afraid I don't understand the nuances between when an ecosystem has healthy competition vs when it is fractured.

Certainly if different systems support different amounts of a standard (eg browsers and the web standard), fractured seems like the right term.

Is the requirement for a fractured ecosystem just different APIs for the same thing? Are ML frameworks fractured? I suppose they would be.

What's the solution? Would you propose a VR standard? I imagine cardboard was an attempt at that. But standards are slow moving - not very effective at innovating.

I think a standard will naturally emerge once we know what should be standardized. But you don't want to kill innovation yet.


Or it's a bunch of ideas thrown out there... maybe 1 will stick and 3 will die. Sucks for early adopters with dead hardware, but if we in 2016 have the tech to make this work, our odds of hitting the money are pretty high right now.


That's what always happens. I'm glad those brave early adopters are out there, making difficult and expensive decisions and then regretting it so I don't have to.


Ignore the hype. Save your cash. Wait for version 2.0

You'd think we would have learned this by now, but I'm still way too excited about the vive.


The problem though is that the company able to push their product to win the market and the company with the best product probably won't be the same.


I think Android is a pretty good example of how "fracturing" is just FUD and not really that big of a deal.


Not even close to being the same thing. On Android there's a common codebase that's guaranteed to be there for all devices, even if you have to add a couple of additional layouts for the odd cases, which is what it mostly comes down to in this day and age. The same can't be said for VR right now.


It's not like there's a lot of freedom on how to express headset orientation. Every headset so far provides a quaternion you can poll. The word from developers like Northway Games with Fantastic Contraption is that even hand-motion controllers like the Vive and Oculus Touch are providing nearly identical interfaces.

No, you won't have literal one-to-one code, at least outside of WebVR. But I highly doubt porting is going to be that hard.

If anything, because you have to do the graphics on your own, there is an even greater chance of successfully building cross-platform systems. At the end of the day, it's all just GLSL.
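A rough sketch of the porting argument above, with all names hypothetical: if each SDK ultimately hands you an orientation quaternion, the per-platform code shrinks to a thin adapter per SDK, and everything downstream of the pose is shared:

```python
import math

# Hypothetical sketch: every headset runtime provides an orientation
# quaternion you can poll, so one adapter per SDK converges on a
# single interface. The adapter/callable names are placeholders,
# not any real SDK's API.

class HeadsetPose:
    def __init__(self, w, x, y, z):
        self.quat = (w, x, y, z)

    def forward(self):
        """Rotate the conventional -Z 'forward' vector by the quaternion."""
        w, x, y, z = self.quat
        # Negated third column of the rotation matrix, i.e. q * (0,0,-1) * q^-1
        fx = -(2 * (x * z + w * y))
        fy = -(2 * (y * z - w * x))
        fz = -(1 - 2 * (x * x + y * y))
        return (fx, fy, fz)

class HeadsetAdapter:
    """Per-SDK adapters differ only in how they poll the quaternion."""
    def __init__(self, poll_quat):
        self._poll = poll_quat  # callable returning (w, x, y, z)

    def pose(self):
        return HeadsetPose(*self._poll())

# Usage: at identity orientation the user looks down -Z.
hmd = HeadsetAdapter(lambda: (1.0, 0.0, 0.0, 0.0))
fx, fy, fz = hmd.pose().forward()
```

Game logic then consumes `HeadsetPose` without knowing which SDK produced it, which is the sense in which porting is mostly plumbing.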


...people commenting on this thread are using Mac, Windows, Ubuntu, Debian, CentOS, Android, iOS, and a few others...

Does that make total adoption lower and slower?


Well, it is all about games. My bet is on PSVR for now.


At the beginning it will be all about games, but once the technology is good enough (Gen 3/4) it will transition to be the universal computer interface (replacing standard monitors).


AR will. VR's immersiveness is actually a problem for non-entertainment uses.


There are more types of entertainment than games. It's easy to imagine VR Concerts, VR Sightseeing, and VR Movies / Television. There are also many practical VR uses that the immersiveness will add to: 3d modeling, real estate/architectural tours, teleconferences, remote debugging, systems design / modeling.


Actually, I expected to hear more about VR after the releases. Could it be that the hype is over before it began?


Wonder if they'll rename the screensaver in Android TV, also called 'Daydream'? [1]

[1] https://play.google.com/store/apps/details?id=com.google.and...


All Android devices had "Daydream" screensavers (Settings > Display > Daydream).

In the latest Android N preview, it's been renamed to simply "Screen saver".


Going to be awfully confusing for consumers searching for "daydream" looking for VR content or screen savers and finding both:

https://play.google.com/store/search?q=daydream&c=apps


On the bright side, my daydream app will get some accidental installs!


I assumed they called it Daydream because they already had a trademark for that name. Like how Microsoft re-appropriated the Surface trademark.


The headset doesn't look different from other mobile VR devices. I was hoping/expecting to see something with tango integration for head tracking.


I wonder if the headsets will ever get as small as a normal pair of glasses (or goggles)


I think that, due to how optics works, the screen must be a little bit away from the face; most of the bulk is empty space.


Perhaps not ready for prime time yet, hopefully later this year


>"A Daydream home screen will let people access apps and content while using the headset; an early look shows a whimsical forest landscape with the slightly low-poly look that Google has used in Cardboard apps."

Google Bob! ;)


Was anyone able to glean from the video if the controllers/HMD provided positional tracking, or just orientation tracking? Either way, a more open mobile VR platform sounds great (though a little disappointed in that Daydream feels like an "us-too" announcement). I was hoping they would announce solving positional tracking on a mobile device. Nothing all that earth shattering here at the moment. :/


It's definitely not positionally tracked; it looks like it's just a nice IMU + touchpad.


I don't know much about this space; how do you know it's definitely not positionally tracked? My reason is that they didn't talk about it, so I figure it isn't, but I'm not certain.


None of the controlled objects in VR responded to any of the positional movement. You can see that the wand / pole is just rotating around a center-point attached to the user.

Positional tracking is a really challenging problem with a lot of limitations. If they were using computer vision to track the controller it would need to be held within the device's FOV and have identifiable elements such as a marker or LEDs. There are other approaches, but nothing I've seen that would work well on an HMD.

I've been working with AR/VR for the better part of a decade, and controller tracking's come up several times. I can tell from my own work that the device they're showing off is IMU only. It's too bad, something equivalent to the Vive controllers for mobile VR would be fantastic.

And at the end of the day - you're right. If it was spatially tracked then they definitely would have mentioned it.
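The "rotating around a fixed pivot" tell described above can even be checked numerically. An illustrative sketch (pivot and radius figures are made up): an orientation-only controller's position is synthesized from an arm model, so its sampled tip positions all sit on one sphere around a fixed pivot, while a genuinely position-tracked controller's samples don't:

```python
import math

# Illustrative sketch of the pivot tell: if a controller is
# orientation-only, the SDK synthesizes its position from an arm
# model, so the tip keeps a constant distance to one pivot point.
# The 0.6 m radius and sample points below are made-up examples.

def fits_fixed_pivot(samples, pivot, tol=1e-3):
    """True if all sampled positions keep a constant distance to pivot."""
    dists = [math.dist(p, pivot) for p in samples]
    return max(dists) - min(dists) < tol

# Orientation-only controller: arm-model tip swings on a 0.6 m sphere.
arm = [(0.6 * math.cos(t), 0.0, 0.6 * math.sin(t))
       for t in (0.0, 0.4, 0.9, 1.3)]
print(fits_fixed_pivot(arm, (0.0, 0.0, 0.0)))      # True

# Positionally tracked controller: the hand also translates.
tracked = [(0.6, 0.0, 0.0), (0.7, 0.1, 0.0), (0.5, 0.0, 0.2)]
print(fits_fixed_pivot(tracked, (0.0, 0.0, 0.0)))  # False
```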


> Positional tracking is a really challenging problem with a lot of limitations. If they were using computer vision to track the controller it would need to be held within the device's FOV ...

First, I think positional tracking of the person within his/her environment is one of the main things that makes the difference between VR and AR. So there's value in that. If you can do it with computer vision maybe it can work in almost any environment.

Second, there are RF based positional tracking systems. They're more expensive, true (not if you can design chips yourself, so might even be cheaper for google), but you don't need the controller to have visible elements.

Third, hand tracking like this should be able to work usefully, shouldn't it? I know it doesn't do distance, but... just hang it off the viewer pointing downwards, with a large volume immediately in front of the user. Not great for gaming probably, but for a virtual keyboard it should work, no? http://www.ctxtechnologies.com/products/vk-200-keyfob-virtua...


Because good enough IMUs and optical systems for positional tracking are hella expensive compared to where in the market this thing is targeted.

There'd need to be one of:

    - a very expensive IMU in the controller (we're talking $300-500),
    - a camera and a hefty CPU in the controller, or
    - a camera and a dedicated image-processing CPU on the headset.
Vive positional tracking works because you control the environment. Oculus positional tracking works because you're already running on a hefty PC. You don't have either of those advantages on mobile, so you're left with either IMU sensor integration or SLAM, both of which are more expensive than $100 to do well and leave processing time for graphics.


Why do you think the IMU is expensive? I thought the Gear VR and Oculus were using the same chip. Care to shed some light?


The Gear VR/Oculus IMU is only good for orientation. Positional tracking with the Oculus Rift is done with an external camera, detecting a constellation of infrared LEDs on the headset itself. Gear VR thus does not have positional tracking.

To be able to integrate the acceleration data from an IMU to get positional data out of it is not something you can do with commodity IMUs, even the one made for Oculus. They're too imprecise and too inaccurate and too slow, and any analysis tricks to improve the results all introduce latency. Oculus' sensor has very little drift (though it still has drift, which is another thing the camera system on the desktop Rift can correct for) and the noise in the orientation data is not noticeable or can be filtered much more simply without introducing "too much" latency.

There are IMUs that are good enough to integrate acceleration data to position data, but they are prohibitively expensive for this application.
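A back-of-envelope sketch of why that integration fails, with an illustrative (not measured) bias figure: a constant accelerometer bias double-integrates into a position error that grows with t², so even a milli-g of bias is ruinous within seconds:

```python
# Sketch of why integrating commodity-IMU acceleration into position
# drifts: a tiny constant accelerometer bias double-integrates into a
# position error growing quadratically with time (~ 0.5 * b * t^2).
# The 0.01 m/s^2 bias is an illustrative figure, not a measured spec.

def drift_after(bias, seconds, hz=1000):
    """Double-integrate a constant acceleration bias at `hz` samples/s."""
    dt = 1.0 / hz
    vel = pos = 0.0
    for _ in range(int(seconds * hz)):
        vel += bias * dt  # acceleration -> velocity
        pos += vel * dt   # velocity -> position
    return pos

# Even a 0.01 m/s^2 bias (~1 milli-g) drifts about half a meter in 10 s:
print(round(drift_after(0.01, 10), 2))  # 0.5
```

And because the error is quadratic, waiting twice as long makes it four times worse, which is why commodity IMUs need an external reference (like the Rift's camera) to stay positionally honest.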

For what it's worth, I used to see a marked difference in the quality of orientation tracking between my Galaxy Note 4 in Google Cardboard vs. the Gear VR, but I see no difference between GC and GVR on my Galaxy S7.


That's what I figured. Still better than tapping my forehead on the GearVR.


The video gave the impression of controller position tracking. If it's actually just an IMU, that's going to be pretty jarring.


This is a big deal. It basically adds the two factors that made GearVR better than Cardboard to any VR-ready Android phone: OS integration and better IMU sensors. It puts high-quality (but not cutting-edge) VR in everyone's pocket; all you need is a dumb holder with lenses.

My prediction is that this will play a huge role in mass adoption of VR!


I completely agree. I thought Samsung/Oculus had gotten it right with the GearVR, but the performance improvements from a native, full-stack integration of VR functionality are giving me goosebumps. They even have a new spec which multiple vendors can get behind.

I'm going to start researching this platform now and will invest my time as a developer in it as soon as possible. I predict that this will be a good platform to get behind. Perhaps I'm being naive, but in a space with as much market potential as VR, this vision has me incredibly excited.

edit: One more thought: The hardware that they showed in the demo appears to be a step ahead of GearVR in usability as well. The headstrap isn't painfully assembled (my GearVR unit falls apart half the time I put it on) and it comes with input controllers. GearVR is woefully lacking in input options out of the box.


TLDR: Google's following in Samsung's Gear VR footsteps with OS optimizations/additional sensors in devices to improve VR experience.


Have they said anything about latency or asynchronous timewarp? It's a bit too technical to make it into end-user marketing content, but for game developers ATW is a huge deal and it's an area where Cardboard was lagging way behind.


Well, they have a certified device sticker, and that sticker requires certain latencies. They did not reveal the numbers, but in theory that should be a big step.


I remember them mentioning in the keynote something about 20ms latency wrt. Daydream


No, but I think Imagination hinted at that in its new post:

http://blog.imgtec.com/powervr/presence-in-virtual-reality-a...


The potential to improve Android's audio latency is the main reason I'm interested in this latest development; it could end up having positive knock-on effects for Linux audio in general, too.


In this specific case, it's not Linux here that's the problem but Android itself. The kernel has been capable of near-millisecond audio latency for years, even more so with an RT-patched kernel. Google just hasn't cared much about audio latency so far, and their VM probably makes things worse.


The kernel has been capable of sub 10ms audio latency for years, but the audio subsystems that are commonly used in Linux hold Linux back from achieving that level of performance (on general purpose Linux systems at least, embedded systems are a different matter).

ALSA isn't flexible enough, PulseAudio is too slow, JACK is too specialised, etc... CoreAudio on OSX shows what an audio stack can be capable of, all the Linux audio solutions currently available are subpar compared to CoreAudio.
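For context on those latency claims, the dominant term is simple arithmetic over buffer size and sample rate; the buffer configurations below are typical examples, not measurements of any particular stack:

```python
# Back-of-envelope arithmetic behind "sub 10 ms" audio latency claims:
# end-to-end output latency is roughly
#   (frames per period * period count) / sample rate.
# The buffer sizes below are typical illustrative configurations,
# not measurements of ALSA, PulseAudio, JACK, or AudioFlinger.

def output_latency_ms(frames_per_period, periods, sample_rate):
    return 1000.0 * frames_per_period * periods / sample_rate

# A 256-frame double buffer at 48 kHz is already over 10 ms:
print(round(output_latency_ms(256, 2, 48000), 1))  # 10.7

# A 64-frame double buffer at 48 kHz (JACK-style low latency):
print(round(output_latency_ms(64, 2, 48000), 1))   # 2.7
```

This is why the buffer sizes a stack lets applications request matter as much as raw kernel capability.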

AudioFlinger is Android's audio server. I'd be interested to see how far this can be pushed, but perhaps Google will route around it and provide a different solution for the type of low latency audio that immersive VR requires, IIRC using an alternative audio stack was Samsung's approach with GearVR.


I agree completely about the user-space tools on Linux. My doubt is that Google will come up with a solution that applies anywhere except Android. But given the hopeless situation there at the moment, I'd be okay even with that.


What's the usage scenario here? I'm just not seeing mobile VR as a usable thing. Does Google expect people to carry goggles around everywhere they go like they do their phone? Worse, the crummy graphics and low framerate in the video seem like a recipe for VR sickness.

I have a Vive at home and it's wonderful for gaming, if a bit undercooked and still unable to deliver a pixel density that makes me happy (Vive 2 perhaps?), but I can't imagine a phone remotely competing with that still underpowered experience.

I can see AR projection built into one's existing glasses a la Google Glass or perhaps what MS is doing with AR, but VR is a totally different beast. Comfort, performance, fov, pixel density, graphics quality, audio quality, "presence," etc really matter. I just don't see Google pulling that off and even if they got close, who exactly is clamoring for VR phones? I suspect Google has just become too mobile centric and is shoehorning in whatever is hot into its Android line and seeing what sticks (instead of refining Android to be a better experience it seems). I'm not sure if that's wise. VR seems to be more at home attached to a powerful computer in a safe indoor space where people can feel free to move around without injury and get a high quality VR experience. You shouldn't be doing VR at the bus stop.


For me, I would be most excited about the lecture applications.

For example, what if every TED talk had a VR broadcast, so you could slap on your goggles and "attend" it? This goes for conference keynotes or college lectures. Anything where physical attendance is a barrier and not a key component of the experience would, I think, benefit from a lightweight VR experience like this.


BlizzCon etc virtual tickets just got a lot more interesting in your scenario :)


Well, the entire pay-per-view field, really :). NBA finals are already on VR; imagine that for all other sports.

They're not ready yet:

http://techcrunch.com/2015/10/28/a-live-nba-game-is-cool-in-...

but it's definitely coming.


Why not just use a real VR headset then? When the prices come down on this stuff it'll just be seen as another peripheral for your computer. It won't be this enthusiast toy. Are you walking around with your VR mask everywhere you go just in case you see a TED talk you like? That seems incredibly inconvenient. Even a low profile mask is a fairly bulky item.

I think attaching a phone to your face is going to give a subpar experience compared to a dedicated device. I still don't see a use case here that's going to get people excited. Especially after the largely milquetoast reception smartwatches have gotten. VR masks are about 1,000x more geeky than those. Image conscious people aren't going to be putting them on their face to consume content in public.


For the same reason people bought iPods instead of audiophile stereos: yes, the quality was inferior, but there is a great value to "here, now".

Just minutes ago a co-worker grabbed the office Cardboard VR and slapped in his phone, and watched some of the Google I/O keynote. Cheap. Easy. Portable. Low res, but there he was virtually at the presentation.

People will get used to the oddity of VR just as they did with cell phones, hands-free earpieces, Siri, coffee-shop computing, etc., and they'll get over self-driving cars too.

My Dodo Case Cardboard VR folds up and fits nicely in my messenger bag. And I like my Apple Watch, thank you very much.

Yours is a naysaying template I've heard for nearly every new technology for decades, and here we are with most all those things now ubiquitous.


> people bought iPods instead of audiophile stereos

iPods fit in pockets. VR headsets don't. Did you even bother to look at the Google reference design? It's a huge headset and controller. Those aren't convenient mobile things. They're not remotely portable like a phone or iPod is.


Read the rest of my post.

They still have a place without having to be tied to a computer. Like I said, we have a Cardboard at the office -- same dimensions you object to -- easy to have around, easy to share, cheap, no dedicated computer needed. There are more phones in the office capable of using the head mount than there are people. Need more head mounts? Cheap: order from Amazon, they arrive in two days. No hassle of sharing a computer for it; just pop in your phone.

And like I said, some units do fold up convenient for travel.


> Why not just use a real VR headset then?

Because a "real VR headset" includes the cost of the display screen and requires an external control device, whereas a mobile VR headset uses a mobile device that you would probably have already anyway as both the control device and display, and so is significantly less expensive and less hassle to use, even if you never use it anywhere but inside your own house.

> When the prices come down on this stuff it'll just be seen as another peripheral for your computer.

Mobile VR is just another peripheral for your computer -- the little computer most people carry around in their pocket, not the big clunky ones that fewer and fewer people are bothering with.

EDIT: By "control device", above, I mean the processing center, not the user interface device (which will still need to be separate for a mobile VR device.) I see that that could be misunderstood.


Where do you get the idea that phones are only useful when a person is not at home? I use my phone for lots of things when I'm in bed or sitting out on my balcony, despite having a nice computer downstairs.


Because nobody[1] buys computers anymore.

[1] Yes, I'm aware the computer market is huge, and there will continue to be a huge market for computer based VR and I'm completely wrong about this. BUT the mobile market is bigger.


How is a VR TED talk different from watching it on YouTube? Aren't they essentially 2D events?


I find one of the biggest challenges of YouTube being the editing between presentations slides and videos of the presenter. Being able to look at a stage as if you were in attendance from the audience would alleviate this challenge greatly (at least for me).

This has utility for note taking as well.


Having used a Gear VR (briefly), I would not want to try to read text from a slide displayed in it. It would be better to have the video feed of the presenter and a video (or text!) feed of the slides and be able to show both and maybe switch between them -- I could imagine showing one in a big rectangle and the other in a small rectangle.

I'm pretty sure I've watched corporate internal presentations with this style of UI -- it's quite useful.


Well you don't need VR to solve that problem for sure. Either better editing or just recording slides and presenter separately and allow the viewer to switch between them (or see both at the same time) on a regular display would completely solve the issue.


> For example, what if every TED talk had a VR broadcast so you could slap on your goggles and "attend" it.

No thanks. I currently would rather read a paper than youtube a lecture. I would surely rather read a paper than strap a $600 device to my head.


Having completed my undergrad degree without attending most lectures, just by reading textbooks and papers, I wouldn't be so quick to dismiss the value of a lecture and a clear, concise presentation on a complex topic. Granted, at my university many of the lecturers were uninspired or spoke English as a second language, but there are presenters who can provide clarity and communicate understanding through lecture.

One of my favorite presenters, for example, is David Beazley who has several presentations/examinations on the Python GIL I found very useful. As another Python example, I also found Andrew Montalenti's overview of multi-processing in Python quite enjoyable as well (https://www.youtube.com/watch?v=gVBLF0ohcrE). Finally, Sal Khan was such an inspired presenter that he founded Khan Academy (https://www.khanacademy.org/) based on his great success teaching mathematics to his younger relatives and their classmates through YouTube videos.


Guess you are only thinking from the perspective of a power user who enjoys high-quality games in VR. But Google wants to make this a utility: connect one to your phone and enjoy YouTube VR content, small games, Netflix, etc. I don't think carrying a small headset with me is a problem, and I hope it evolves into something foldable/compact.


Yes, but VR needs something that's better than Google Cardboard, but more open than GearVR. I expect Android VR to become the "lowest common denominator VR experience".


The fact that there is a controller for input instead of having to tap my forehead will increase usability a ton. Though I hope they provide a way to clip the controller to the HMD when not using it.


People who want to play with it, but don't want to spend 800-2000 dollars on a "real" VR setup?


80% of the Vive experience at 1/10th the cost.


Screen-wise, maybe, but I would value positional tracking at more than 20%. Not being able to lean in for a closer look (much less move around) is a big immersion breaker on the Gear VR.


Mobile VR wins with cost, rapid hardware iteration, portability. VR isn't a totally separate realm from AR... "mixed reality" is a buzzword about mixing the two. As for who's clamouring for VR phones, Samsung's Gear VR is doing very well (and Samsung mobile hardware is already used in Oculus products).


Augmented Reality headsets will be the "every day use" thing. After trying out Microsoft Hololens, I am very very impressed with what these kind of headsets can bring to the table.


Not only that, but as technology improves and miniaturizes, I fully expect the distinction between "VR devices" and "AR devices" to disappear. In the future we will be able to switch between them at will and the worlds will blur.

Imagine walking through an AR doorway overlaid in your home, where you can see a beach on the other side. You walk through and magically are on said beach. You turn around and the door is gone--the illusion is complete.

More realistically, I see AR increasingly taking the form of your "main screen" with notifications, browser windows, etc. all accessible from there and just taking up visual space in your FOV. If you decide you want to focus on work or engage in another more immersive experience (movie watching, gaming, etc.), you slip into VR mode and suddenly the distractions are gone.

Again, lots needs to happen to get us there, but I see the blending of the devices as inevitable once the form factor is popularized. My guess is it will take something as sexy and revolutionary as the first iPhone to get this to go mainstream since you are right in that most people will not carry a giant headset around.


> More realistically, I see AR increasingly taking the form of your "main screen" with notifications, browser windows, etc. all accessible from there and just taking up visual space in your FOV. If you decide you want to focus on work or engage in another more immersive experience (movie watching, gaming, etc.), you slip into VR mode and suddenly the distractions are gone.

Yes. The games and immersive videos are cool, but I'm even more looking forward to replacing almost all dedicated displays with a pair of normal-looking glasses.


It's kind of disheartening to see how much negativity gets heaped towards VR on HN lately.


God seriously VR right now is so lame.

By now it should be way, way better.

Whoever is driving this shit off the cliff - please just stop while you're ahead, go back to designing hospital websites or whatever it was before you tried your hand at this, and let someone else do the whole newfangled "VR" thing instead.


Your comments have unfortunately been breaking the HN guidelines quite a bit. Please post civilly and substantively, or not at all.


So...only cheering is allowed from the peanut gallery?



