John Carmack is a good role model for any techies who move up to management. If you want to keep in touch with the base tech your devs are using, you have to dive into a project like this, where all your tech knowledge is brought to bear on a problem and you learn lots of new things.
None of this "well, I guess I knew a bit of C++, let me find another engineer to work on this Netflix app and offer micromanaging-style tips and tricks," which is what I've seen from a lot of managers who used to be technical.
He's also an amazing role model for regular programmers: figure out the requirements, take a crack at it with a prototype, and then iterate. The iteration doesn't have to be a week-of-overtime affair.
Why do you say that Carmack is in management? I don't get the impression that he's a people manager. A CTO does not necessarily have people reporting to them in a people management sense. From what I've read, I would assume he does not. (I could be wrong - I don't really know.)
High-judgment individual contributors often take on responsibilities that can be considered management, like deciding business and technical strategy, designing products, prioritizing roadmaps, etc. These are management functions, but taking on these responsibilities does not mean that one is in management.
From the blog post, it sounds like Carmack is a highly productive, high-judgment individual contributor with the responsibilities you'd expect of a CTO (technical strategy). I would say that someone is "in management" when their chief function is managing other people. From this post, Carmack seems to be delivering work as an individual contributor and (very senior) technical lead.
Along the same lines, I recommend we discourage phrasing like "move up to management". Management is a different job, not a better or superior one. In well-run technical companies there are managers and individual contributors at all seniority levels, such that one does not need to become a manager to "move up", even to CTO level.
In some of his recent keynotes (especially the later QuakeCon ones) he focuses a lot on how he's changed his mind: he now believes that software development has a lot of "social science" aspects to it. I think he's given a lot of consideration to how one manages a software team, and even though he sometimes disappears into his office to write something like this Netflix app, he still does manage people.
Management is a different job, not a better or superior one.
Oh please. Until management is no longer a more profitable career track than staying in engineering, could we stop lying to the younger folks that might believe the nonsense they read on HN?
In well-run technical companies there are managers and individual contributors at all seniority levels
Non-management contributors with C-suite pay (and occasionally titles) are still rare enough to be noteworthy. Unless they're on par in compensation and control of the company, pretending traditional corporate management doesn't consider itself their superior is just ego-stroking the engineers too dumb to be insulted.
>I recommend we discourage phrasing like "move up to management".
Word. At Etsy, we'd occasionally call out people (in a friendly way) for saying "I got a promotion to manager!" Someone (usually a more senior manager who used to be an IC) would say "not a promotion, lateral move." Drove the point home, at least at an IC level, that management is not a move up.
The problem is that most techies who move up to management don't have Carmack's brain. The guy is a machine! For a normal person, it is almost impossible to get through a workload comparable to Carmack's.
Especially combined with management tasks: I find coding and managing hard to mix. At least in my brain they feel very different, and as a CTO I feel I must do both. I do not want to end up as a CTO who is out of touch, or whose current tech knowledge comes from buzzwords in management magazines, like so many tech managers my age.
I wonder if part of it is that some techies who aspire to management do so because they don't actually like developing software.
As an individual contributor I can't imagine giving up my day-to-day coding for a management position. Management responsibilities, sure. I'd wager that most software developers who are given enough autonomy are doing a lot of micro-project management anyway.
Giving up on researching new technology and playing around with stuff? Not a chance.
I'm in this boat at the moment (one man band contractor of many years finally starting to take on employees). I'm finding that as I delegate more of the routine day-to-day work away, I'm actually finding myself with more time to play around with new stuff. Trouble is, in my line of work (corporate .NET stuff) customers aren't interested in anything new and exciting - they just want Windows applications and ASP.NET forms connected to SQL Server databases.
Of course, if you're lucky enough to be in a job (I'm thinking frontend web) where playing with exciting new tech is part of the job, I can absolutely see how the move to management and the loss of overall hands-on time with the code can be a bad thing. Thankfully for me it's been somewhat liberating.
I can imagine. I think you can do both by focusing on one for a while and then moving back to the other role. In the short term it is a setback, but in the long term you can develop both skills. Meanwhile you can share management responsibilities with others and find out who makes the team gel the most and who the great coders are. Eventually, if you can retain all this trained talent, I believe you'll have a really strong team. Of course this happens naturally as people are promoted and moved around, but I think if we could admit to ourselves that a manager once does not need to be a manager always, we'd all be so much better off.
In the Netherlands, where I am from (I think it is less so in the US?), you are considered a complete failure/loser if you are still a coder past 35 (I am 40); you are supposed to be a manager by then. My path was different, but I cannot help thinking this attitude, pushed in university and by (a lot of) parents, influenced some decisions I made. Coders over 45 are considered sad, and when you meet them in companies they are usually in the basement and treated like idiots. We were hired to give a mobile app development course at a large company in the Netherlands; it turned out this was just to fulfill the training obligations the company had toward its employees. All of the trainees were over 45, most over 50, and the company didn't care what we did and would never use it there, but apparently their employees had expressed interest in learning about app development. Sad.
Yikes, that is sad. Hopefully things like open source and GitHub will allow aging developers to show their value. If a business sidelines an employee based on their age, then that's the company's loss, right? And hopefully the employee can find a smarter employer.
I hope this 'tradition' is ending/will end, but the past 5 years working with medium to large companies in NL showed me it was still alive and kicking.
I struggle with this every day. The amount of juggling priorities and helping a dozen different people that a technical manager has to do is completely at odds with the focus required to write non-trivial code.
I wonder what Carmack's managerial responsibilities are like. I imagine if he's developing/designing for most of the week, how much time does he really devote to being a CTO/manager?
I would be interested in examples of people (besides Carmack) who are able to handle day to day management responsibilities and stay technically sharp in new technologies coming out. There's only so much time in a week, and your performance drops as you exceed 40-50 hrs/week.
I'd rather hire someone with management skill who knows how to delegate to those who have the required technical skills, rather than spin constantly trying to be the master of everything and failing at it.
"The reasonable thing to do was just limit the streams to SD resolution – 720x480. That is slightly lower than I would have chosen if the need for a secure execution environment weren't an issue"
> "The reasonable thing to do was just limit the streams to SD resolution – 720x480. That is slightly lower than I would have chosen if the need for a secure execution environment weren't an issue, but not too much"
Looks like the need for a "secure execution environment" led to the limitation.
And yet all HD content is still easily available for pirating.
When will content producers learn that they're losing money because of terrible distribution strategies and not because it's technically possible to pirate?
Content producers everywhere pretty much realize DRM is a non-starter. The reason it's still in there is:
1.) There are old 5-, 10-, and 15-year contracts that still stipulate that the content needs to be delivered over some reasonably secure DRM system, and no one wants to hire lawyers to renegotiate those contracts.
2.) For new contracts, content producers just reuse old contracts, and the side that wants to eschew DRM isn't usually the content producers. That said, when the other side brings up "how about we remove the DRM clause", the producers are pretty much "sure, for $100,000 more". So it's become a negotiating point the studios now hold to extort money from any platform that wants DRM-less content.
You could wonder "why don't producers care about the UX?" For most big studios, digital revenues are still nothing compared to physical sales and syndication rights, so they have no incentive to care.
Um, no. I just spent the last eight years of my life building a streaming service and fighting against DRM, and the last two years negotiating terms with several major studios and finally having to capitulate to DRM. They are most definitely not "just reusing old contracts" and you don't have a clue what you're talking about.
I wasn't talking out of my ass. Maybe my information is a little dated - but from talking to actual legal teams at Turner & Warner Bros., this is what I was told.
Admittedly, it was two years ago when I last talked to them - but given that it took you 8+2 years to finally get rid of DRM, there is some truth to what I was saying. It'd be helpful if you actually provided some counter-reasons as to why content producers are holding onto DRM, and enlightened the rest of us, especially the poster who asked the question, rather than just saying "You're wrong because I spent 10 years in the industry".
> for most big studios digital revenues are still nothing compared to physical sales and syndication rights
This begs the question. The revenues are nothing compared to physical sales and syndication because in many cases watching the content legally online is not an option at all.
Actually, the harsh reality is that we do not yet have a digital content ecosystem that makes consuming any digital media easy for the vast majority of the population (US or otherwise). This is why Netflix can do what they do: they have a walled garden where the entire experience is driven purely by their interface and delivery mechanisms (see also: Steam in the PC games space).
I've written specs for studio-approved DRM schemes and done pre-DVD content on in-flight entertainment systems. Please believe me when I say that it is a nightmare even if you DO know what you're doing with all the tech involved. But even setting the legal problems aside, there's one example that is pretty damning as a foundational assumption: we still don't know how to share a file.
I mean that sincerely: try to get your average non-tech worker to send a file to your grandmother on their own. I cannot overstate this. It does not work.
We have a lot of distance to cover before the digital content world cares about our niche whims.
You are describing turn-key solutions: services that provide complete end-to-end consumption methods, like Netflix. In content consumption (be it articles, movies/TV, or even games), even recent history has shown that it isn't the availability of content that is the limiting factor; it is the accessibility of content. By that I mean not whether you have authorization to consume the content, but whether you are capable of enacting your end of the transaction (in this case, pressing play).
The reason VHS and DVD sales are better than digital is that it is still vastly easier to put a thing in a magic box hooked up to the TV and press play on the remote. Even this is fraught with danger (how do I hook it up? how do I get to the right input? where exactly is the button to play the movie on this DVD menu?), which is also why you hear those complaints near constantly.
There are other factors as well (like people not feeling that they own a file, whereas a DVD is theirs), but again, the barrier to entry for consumers on computers is getting past having to think about any of the moving parts. This is true in a lot of places, like automatic transmissions and ATMs (read up on the history of ATM interfaces; it's fascinating if you're into that).
Currently on a PC if you download a file that is a piece of media to consume, you have several important barriers for most people:
- I have to put the file somewhere, and I don't understand filesystems
- I have to have software that uses the file format, that works on my machine
- I have to know how to use that software
- I have to know how to purchase the file, whatever that is supposed to mean
- I have to like how it's being displayed
- I have to know how to get it from the machine I downloaded it on to my TV or whatever
There are probably others in some scenarios, but you see the point. Something we tech users take for granted is that these things are very, very hard for most people, and they always have been (again, physical tech like cars or elevators follows this pattern). Even giving someone a link barely begins to scratch the surface of this process. So this is, bar none, the reason digital is going to have problems until Netflix or YouTube or whatever can just play everything.
This is also, as a related aside, why a web browser and mobile apps win at breaking down those barriers. Everyone already has them, and it's the "click icon to do thing" model, where the icon might be a bookmark or a simple typed address, but the point is the same. No installs, no understanding of local mechanics and differences. It's also why Apple products are viewed by consumers as more user-friendly: there's very little to understand about your machine on a MacBook. You don't have to take my word for it; watch your average user use a Mac, or read Apple's user interface guides.
I disagree. Without DRM it would be as simple as generating a temporary signed link to S3 upon clicking 'Buy'. I could implement it myself in an hour. Even grandma would be able to use it no problem (click buy -> double click file -> watch movie).
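For the curious, the "temporary signed link" idea boils down to embedding an expiry time in the URL and signing it with a server-side secret. Here's a minimal sketch of the general pattern; note it uses a generic HMAC scheme with a hypothetical cdn.example.com host, not S3's actual presigned-URL signing algorithm:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"server-side-secret"  # hypothetical key; never leaves the server


def make_signed_link(path, ttl_seconds=3600, now=None):
    """Return a URL embedding an expiry timestamp and an HMAC over path + expiry."""
    expires = int((time.time() if now is None else now) + ttl_seconds)
    msg = ("%s:%d" % (path, expires)).encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    query = urlencode({"expires": expires, "sig": sig})
    return "https://cdn.example.com%s?%s" % (path, query)


def verify_signed_link(path, expires, sig, now=None):
    """Reject expired links and links whose signature doesn't match."""
    if (time.time() if now is None else now) > expires:
        return False
    msg = ("%s:%d" % (path, expires)).encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, sig)
```

The server hands out the link after the "Buy" click; the CDN edge (or the same server) runs the verify step before streaming the bytes. Tampering with the path or the expiry invalidates the signature, and the link dies on its own after the TTL.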
The depth of thought that Carmack is able to apply to every detail of a problem is astounding. Granted, he's been working on VR for years now and pioneering realtime graphics before that, but his perception about what the end user wants out of a product is second-to-none.
(Expect to see the Virtual World idea that prompted that post pop up again for VR, too. Expect it not to work any better this time.)
It's cute but it's just demoware. Once the novelty wears off, the usual thing to do will be the "void theater". That's our subjective perception of a good movie anyhow, that the entire rest of the room is gone.
I'm not sure how much I agree with that... I'd much rather have headphones listening to music on a plane/train/bus than trying to listen to something in the open... likewise for watching video... watching on my phone isn't too bad, but being able to get a "big screen" view with a similar experience to headphones for your eyes, I'm happy to see it.
To me, it doesn't need to be so "VR" but that could lead to some cool UI/UX for interaction. I think that this could be something that takes off first for heavy commuters, and second for gamers... It will just come down to marketing this appropriately. If they can get a version for iOS and advertise in the airline magazines, for example, I think it could sell very well.
I'm not saying movies in VR is a bad idea. There's a lot of good optical reasons for that. I'm saying simulating a living room, but on a computer!, is silly. Just watch the movie.
Might be nice to watch a movie in a comfy room if you are stuck on a long flight. I think headsets ought to at least ship with a noise-cancelling headphone option built in. Plus, the room scene is easily interchangeable: with a decent room scanner it could be your own home, or a Hollywood screening room, or the Acropolis.
Yeah, I would think a big screen kind of floating in space, or perhaps simulating the backdrop glow that some higher end tv's have against a simple wall... Sorry, I thought you were calling out the whole thing, not just the implementation details.
Not to mention that this has to be one of the corniest things I've seen since the 90s. From a tech side it's interesting, but from an end-user side it's something I would make fun of with my friends. I wonder how much of that is some kind of uncanny-valley-esque problem. I laughed out loud when I saw the screenshot.
Have you actually tried a VR theater application yet? Even on my DK1 over a year ago, the experience[1] was pretty compelling. Add in resolution improvements and the fact that you can render stereo 3D with perfect fidelity (no hacky glasses in the equation) and it gets even better.
People don't quite get it until they've tried it. The most surprising thing is the way the 3D stereoscopy of the environment, combined with the head tracking in VR, conveys scale. The movie theater actually looks AS BIG AS A MOVIE THEATER SCREEN. It's not "strap this thing on your face and get kind of an illusion of a 3D movie floating in front of you"; it's "strap this thing on your face and see a massive screen in front of you that couldn't physically fit in the room you're currently sitting in". Not to mention that you'll ideally get virtual theater surround through headphones that is fixed in space, so that as you turn your head the sounds keep coming from their respective speaker positions in the room rather than rotating along with you.
As for 3D stereoscopy in movies: that's an inherently limited format (limited by the fixed viewer viewpoint and the edges of the screen). 3D stereo and VR are not comparable by any means; about the only thing they have in common is that you use two eyes to view them. Here, though, the VR cinema adds an advantage: 3D stereoscopic content can be shown perfectly, without any cross-talk between the images, which helps the integrity of the effect. Note that word: effect. IMHO 3D on a fixed movie screen is strictly a special effect. When used in such a way, it's great. When overused or used improperly, it sucks.
TL;DR: '3D movies' and VR shouldn't be uttered in the same sentence.
Very true. The key word is "compelling". Once you try a few VR devices, even low end ones like a decent phone and a Google cardboard, you can definitely sense the possible seed of something game changing here. Maybe VR will work out, maybe it won't - but that alone I think makes it worth exploring.
One can get a preview of the 3D without 3D glasses or a VR headset by zooming out and looking through the screen like it's a random dot stereogram (or "magic eye" picture).
I'm glad I was not the only one wondering at the value of this. It struck me as a lot of very impressive technical effort to achieve something nearly as good as I can achieve by propping a $200 Android tablet on my knee. And well under what I get with the projector on my coffee table.
And I think "novelty" is the magic word here. Stereoscopic 3D has had several waves of popular enthusiasm, including 1850s stereoscopes, the ViewMaster in the 1940s, 3D movies in the 1950s, VR in the 1990s, and 3D movies again recently. Every time it has been an amazing thing that will change the world right up until the time the bubble pops and it is recognized as a cute novelty.
It'll be very interesting to see if this wave makes it past the novelty stage. But so far I've seen nothing to make me think it will.
Right now, I prefer movies on GearVR to a home-theater system with full HD on a fairly close and large TV. For 2D content, the resolution isn't great (a little over 720p effective), and there is a very apparent "screen door" effect.
In terms of 3D, I watch people's minds get blown by simple apps, when much better stuff is in development. When my mom asks whether I brought that funny headset so she can see Paris again, you know this is something different.
This will be really great with 4K displays and improved content.
It could all work out, but my contention is that the "mind blown" feeling is exactly what drove the waves of 3D we've seen for the last 150 years. Novelty is fun and exciting, but it wears off with exposure. Only then do we find out whether something has lasting value.
On one hand I want it to succeed because it seems kinda cool, on the other hand it could be another Google Glass. At least you tend not to wear these in public, so it's got that advantage.
Am I the only one who wants to use a VR headset for watching videos while lying on the bed?
I've always dreamed of having a ceiling mounted TV for this purpose, but VR headsets can accomplish the same goal with much less hassle and have other uses.
Years ago I did exactly this. Had the then-excellent (but under-supported) Virtual iGlasses; lacking other apps to apply, I ended up ditching my TV monitor and watching videos with the head-mounted display while lying down.
Is there any practical use for this? My understanding is that movie theaters are preferred to home theaters because the distance from your eyes to the screen is far enough that you can focus to infinity, which is easier on the eyes. Could a vr environment make viewing Netflix 'easier' on the eyes? Anyone have an opinion or link?
This is along the lines of why I'm interested in it. I actually want a mock environment (e.g. a living room) in the virtual environment that I can watch tv on OR code on.
I'm extremely nearsighted – age and my already excessive use of computers are exacerbating this. Because of the design of the Gear VR, I can almost see clearly at the highest correction level (similar to what I would see if I wore my 2-4 year old glasses).
Since the preponderance of evidence supports the hypothesis that looking at "near" things (e.g. a computer screen, books, a TV across a small room) exacerbates myopia, I'm hoping that doing my normal activities on (a) a screen focused at infinite distance and (b) a device which allows me to change the correction of the lenses means I might be able to reverse some of my myopia. I don't think it will cure my myopia, but if I could halt or reverse the loss I've had over the last few years (or dare I hope, decades), it would be a blinkin' technological miracle! ;-)
You are still focused on a nearby screen. It just looks like a large screen at a distance due to the stereoscopic effect. The Oculus' lenses simulate a focal distance of 1.3 meters, which is not much better than a tablet in your lap. Another problem is that Oculus' focal distance is fixed, so you are not exercising your eye's ability to change focus. This isn't directly a problem that you wish to address in your comment above, but you may want to consider it. A technology that will better address these issues is the light field display. See: https://research.nvidia.com/publication/near-eye-light-field...
My understanding is that the optics are designed to actually be at an infinite focus.
---
Short question:
> Now I read about this HMD Oculus Rift, which claims that you are always focused on the "distance" which I assume is the same as infinity focus in photography.
The short answer:
> In the same way as a telescope eyepiece, they create a virtual image at infinity.
> In the HUD the objective lens focus the image from a display (on the left in the diagram) and the lens at the front of the HUD reimages it at infinity.
But the key for me is someone who isn't myopic noticing that ...
> I've been able to see far away objects much sharper than I was able before, as if my sight was getting trained at infinity focus (which makes sense, I guess).
Thank you, colordrops! I looked for official, or at least more definitive, information on the focal distance to no avail. This definitely qualifies!
That said, it is of the DK2, which is clearly different than the DK1. Which leads me to wonder what the focal distance is for the Gear VR (Note4) and the new Gear VR.
Sadly, I'm so nearsighted at this point, that I have a hard time reading anything more than ≈4 inches away (things are out of focus at ≈2 inches away, but it's good enough and there are typically enough clues in the 2~4 inch range that I can still read normal text). In other words, if I'm only training my eyes at a distance of 30.5 inches for the next half decade, I suspect it will still be enough to lead to an improvement, and there will be even more improvements in tech (both in the VR & optometry) during that time frame.
It's more about establishing a relationship with Netflix than about creating a theater experience. There are already cameras and video formats that support stereoscopic free-viewing in 360 degrees. Once Netflix starts posting these videos, Oculus will already be poised to support them.
Contrast that with watching Netflix by holding a tablet in front of your face for two hours. It tends to get awkward, tedious, even painful, and you're limited to wherever you can prop it up sort of comfortably.
Having tried it, nicer to have the screen attached to your head. At least until the doorbell rings.
Seems a little too anthropomorphic. Like the Virtual Reading SNL fake ad from 20 years ago. "It's like reading a book in your living room - only better!"
It's not actually all that surprising for people that use styled subtitles with antialiased edges. In extreme cases (lots of glyphs on the screen) you can end up with a noticeable framerate reduction.
It shouldn't need to hurt your framerate too much, considering that the font rendering only needs to happen once every few seconds. A new subtitle can be rendered once, kept in memory as a texture, and then just blended in by the GPU. The subtitles are also known ahead of time, so it's possible to set up a pipeline with no sudden spikes in processing load.
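The render-once-then-blend idea amounts to a tiny cache keyed on the current cue. A minimal sketch of that caching pattern (the render function here is a stand-in for an expensive font rasterizer, and the cached value stands in for a GPU texture the compositor blends every frame):

```python
class SubtitleCache:
    """Rasterize each subtitle cue once; reuse the cached result every frame.

    render_fn is a stand-in for the expensive font-rendering step. The
    returned value plays the role of a GPU texture that the compositor
    cheaply alpha-blends over the video at 60 fps.
    """

    def __init__(self, render_fn):
        self.render_fn = render_fn
        self._text = None       # the cue currently cached
        self._texture = None    # its rendered form
        self.render_count = 0   # how many expensive renders happened

    def texture_for(self, text):
        if text != self._text:
            # New cue: pay the rasterization cost exactly once.
            self._texture = self.render_fn(text)
            self._text = text
            self.render_count += 1
        # Every subsequent frame with the same cue is a cheap lookup.
        return self._texture
```

Calling texture_for sixty times a second costs one real render per cue change, which is why styled subtitles need not dent the framerate even on a mobile GPU.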
The context of that is that the Netflix GUI and the video are being composited together before 3D rendering happens. That's a surprising place to find subtitles, to me. Why are you putting them on the virtual screen? Why not let them float in space in front of your eye? They should be where the sound is, not where the picture is.
I agree conceptually, but our eyes can't focus on 2 things at once when they're at different distances (or appear to be). By putting the 2D screen at the same place in 3D as the subtitles, it'll be less tiring on the eyes.
That's making the assumption that we only have a static screen to place them into. In VR, we have the entire environment around the screen and potential for a HUD locked to the user's eyes in space. It would be cool to try different implementations and see which experience is best.
"noticeable" is probably the surprising part. I'm sure it also uses more power on a normal Android phone too, but not to the degree that it does in VR.
Why on earth simulate a living room with a TV on the wall? Why not simulate being in the best seat in a concert-hall sized movie theater with a massive screen? How about a little old fashioned movie theater, or a 50s drive-in?
I'm sure all these kinds of things can come later, it just seems odd to me that simulating being on a couch is the first take...
Reading John Carmack's thought process for approaching, addressing, and moving on from individual challenges is so fun and refreshing to me. Granted, the matter-of-fact tone is inherently humble, but it reads like making the complex sound simple for the audience's benefit. A line like "unfortunate waste of memory...but it gives me the timing control I need" is a succinct demonstration of trade-offs and reasoning. There's the pursuit of a functioning program, sure, but I get the sense the freedom in this environment is used studiously.
This is an interesting proof of concept, and it's cool to know that you can carry a "pocket living room" around with you.
That said, shouldn't virtual reality free us to do more than replicate real life environments? It looks like VR will have its own period of skeuomorphism until better UI is invented. I'd love to be a designer at Facebook right now.
I agree with what you're saying, but I don't think there's necessarily a concept of skeuomorphism in VR.
Skeuomorphism is a mapping of real world 3D physical objects to a more constrained space, like a 2D phone screen, ostensibly to aid usability in an otherwise unfamiliar space by invoking recognition and instinct.
In theory at least no mapping like this is necessary at all for VR, things just look and act however they do, in a recreated 3D space. Of course there will be 'skeuomorphism' insofar as objects from the real world will be copied 1:1 into the VR environment, but it's kind of a redundant term at that point.
It's indeed an interesting question if we'll discover better 'UX' for virtual 3D environments than the physical ones we've built for ourselves in the real world. I'd venture yes, including but not limited to discovering tweaks that break the laws of Physics to allow greater convenience. Like wormholes that act as hyperlinks for 3D space, or something.
It'll be fascinating to see if things like this are experimented with and accepted from the get-go, or if there will be a 'skeuomorphism-like' era of VR where we play it safe and just copy our existing world for a while, until we collectively 'find our feet' in VR and learn to make tweaks that expand the possibilities. I'd hope and actually somewhat expect the former.
Of course there's a concept of skeuomorphism. Everyone expects their POV about 2m off the "ground". Objects are of a common range of sizes, 3D space is even & regular in each direction, etc. "Floating" is kinda nifty but likewise just an extension of our real-world 3D experience.
People in general aren't ready for scaleable space (sizes changing orders of magnitude instantly), varying measurements (say: X is normal, Y is logarithmic, Z is sinusoidal), warping space (various Einstein's Dreams scenarios here like consequences of "speed of light is 15 MPH", or Interstellar scenarios), varying or nonexistent notions of "up" (see Ender's Game arena), absence of normal gravitational phenomena, etc.
Early on in the 3D game realm, game writers explored lots of variations on non-skeuomorphic scenarios (Descent II comes to mind, a 3D flying maze game devoid of any sense of "down"). Many years later, with endless technology & imagination available, 3D games are dominated by soldiers running around battlefields little different from reality.
People will have enough trouble with entering/exiting VR. Having seen other technologies bloom, I assure you it will be years before advancing beyond paradigms based in the real world.
I do wonder what the point is of spending lots of energy making alternate universes if we end up replicating the things we're doing now. In a world where we can only walk, we'd like to be able to fly. In a world where we can only fly, we'd like to teleport. But then we'd probably make the environment bigger to "keep it fun", so what's the point? Do we all just want to be floating points of light?
Maybe the only real benefit of VR is having the undo button.
All in all, maybe it's not so bad that we haven't had much choice in the design of our species and world so far. Super tangentially related: http://www.bbc.com/news/magazine-34151049
I think the VR paradigm will evolve as a social signal of detachment from availability for interaction. Like being on the phone. It's like closing the door to your office.
AR may become a commonplace, useful layer on social interaction, at least for people with compatible hardware. AR that could aid daily life will always signal social detachment to some, but for a younger generation it may be as natural a tool of rich expression as a whiteboard.
We have somehow managed to understand that wearing headphones and sunglasses means you don't want to talk to people, or that you don't want to be interrupted. Wearing headgear around will die off like Bluetooth headsets if it doesn't allow for clear distinction between local and digital interaction. The phone to ear gesture is well known. Talking to thin air remains dubious to those around you.
John Carmack overcame all of these technical issues and wrote up this blogpost in the past month with all of his other responsibilities at the same time. Humbling.
The virtual theater (or TV screen) was one of the first things I thought about when I heard about the power of new VR technologies. People don't want to switch devices if they don't have to and projecting output to a single rectangle in a virtual space makes a lot of sense on paper.
I would be the first to say that of course seeing people in real life would be superior, but with the internet I'm finding that some of my closest friends are sometimes thousands of miles away. This is a way to keep in contact that's as good or better than something like facetime and far, far superior to buying a plane ticket just to hang out.
A lot of people on HN (and general tech sites) seem to think that VR will be used primarily for something other than gaming.
If I were working on any kind of VR software in the next five years, it would be a game, absolutely no question about it. (A game that does not involve the character walking around, by the way.) As long as the hardware requirements for a good VR experience are high, anything else is a novelty.
Serious PC gamers are your audience. Period. Alternative hardware that doesn't rely on a beefy GPU will (for now) be extremely limited and uninteresting.
And even after a few years, you're still going to need a very good reason to spend at least a few hundred dollars on a VR set. I will be very surprised if home users are buying them for something other than games.
The one person I know making money with VR programming right now is not in the game, or even entertainment, business. Without giving away what he does, his clients buy VR gear and his software to experience a customer-specific simulated environment (and no, not porn).
Someone made a VR theater for the oculus rift (it's on Steam) which allows you to stream a movie or video to watch with other friends. That's pretty awesome.
You could share it with a friend who wasn't physically there if they had a similar setup, but I'm guessing they're talking about the fact that people who are by themselves like watching TV and movies in VR compared to on a standard TV screen in a real room.
At first I balked at the idea of watching a movie in VR, but the more I think about it the more I like it. I could easily watch it while lying down on a bed for instance. It would need to be very high resolution however.
Most people I know who watch movies at home do so with distractions. They have their phone out seeing tweets and other notifications. They might have a spouse or kids or guests. They have snacks and drinks. All of that makes movies in VR likely to remain a novelty for most. I'm not saying I won't try it. In fact I have tried it. But so far it hasn't been worth it. Maybe during a plane flight.
PS: Yes I get you could integrate the notifications stuff into VR. In fact if you want to get silly show a virtual tablet on your virtual coffee table in front of your virtual TV. Replying though will be an issue, at least for a while.
Replace "virtual living room" with "virtual IMAX theater" and you might have your answer.
EDIT:
But the real answer is that the question isn't that different from "why would you want to play a video game on a VR headset?" The answer, of course, is immersiveness. Existing video games (and film/TV) may not be optimized for VR, but future content might be.
Imagine being able to look around 360 degrees in a film. It will be a totally different experience from a traditional film where the director precisely frames every shot, and may or may not be more enjoyable for certain types of content.
It would also require a lot more thought put into scene changes. In VR video games, suddenly moving the player from one location to the other was found to be too disorienting and broke immersion.
A pretty low quality IMAX. I don't disagree with the concept. I love it. But I think we'll need at least a 4k helmet and even then you would probably only be able to watch videos at 1080p or so, with the rest of the pixels being dedicated to the "cinema" or other spaces outside of the "screen".
The resolution cap in the article is due to Netflix's DRM, but current VR movie theater/media playback applications let you load your own media.
When you do this with HD content, it actually looks quite good in spite of the resolution of the actual HMD due to the subtle movements of your POV. The in-game camera tracked by your head is always barely moving, so the pixels you actually see in the source media are always being transposed and blended differently.
What I meant is that you need a certain amount of pixels just to be able to fully represent the 1080p video pixels, and if you see "extra space" in the VR world around the screen, then you need 1080p video + extra space in pixels, so it probably should be at least 4k.
You need a certain amount of pixels to represent the 1080p video in one particular frame. When the next frame comes along, your head has moved enough that you'll be seeing a different subset of those total pixels. At sufficient framerates (and motion capture rates), this actually does a pretty good job of approximating the full resolution of the imagery (especially when this is happening 2 or 3 times per source video frame, as the case might be with NTSC/PAL content).
Even without the DRM resolution cap, you need somewhere greater than 4096x4096 pixels per eye to adequately represent a 1920x1080 display. VR is not going to be as crisp as a normal monitor for quite a long time.
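The rough arithmetic behind that kind of claim can be sketched in terms of angular pixel density. All the numbers below are illustrative assumptions, not the specs of any particular headset:

```python
import math  # not strictly needed here, but handy for extending to FOV trig

def pixels_per_degree(h_pixels, h_fov_deg):
    """Angular pixel density of a display spanning a given horizontal FOV."""
    return h_pixels / h_fov_deg

# Assumed, purely illustrative HMD: 1080 horizontal pixels per eye
# spread across a ~90 degree field of view.
hmd_ppd = pixels_per_degree(1080, 90)      # ~12 px/deg

# A virtual 1080p screen drawn to fill ~45 degrees of that view needs
# its 1920 source columns packed into those 45 degrees.
screen_ppd = pixels_per_degree(1920, 45)   # ~42.7 px/deg

# Ratio of what the source demands vs. what the headset can deliver.
shortfall = screen_ppd / hmd_ppd
print(f"HMD: {hmd_ppd:.1f} px/deg, source needs {screen_ppd:.1f} px/deg "
      f"(~{shortfall:.1f}x short)")
```

Under these toy numbers the panel would need several times its horizontal resolution before each source pixel maps to at least one display pixel, which is consistent with the "greater than 4096x4096 per eye" intuition above.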
Tons of interesting scenarios. You're sitting on your couch next to your best friend who recently moved to Singapore and he/she's doing the same.
You're sitting on your couch with the MST3K avatars and you can see them laugh as they deliver their Rifftrax.
And tons of other creative possibilities that we couldn't possibly think of today once people start getting used to consuming a movie in a simulated environment.
Not everyone lives in a great movie theater environment.
I saw a demo of the GearVR which was pretty believable. If this thing gets higher resolution, more immersive, etc., I might honestly find myself questioning, at some point in the not too far future, why I need to decorate my living room at all if I have VR goggles on all the time.
When VR becomes ubiquitous, and you can have shared experiences, expect that to be a very real thing.
For a while after a bad leg injury, I had to lie in bed, on my back, with my leg elevated. Either I watched tv and got a crick in my neck (from the odd angle) before a 30 minute show was done, or I tried holding a book up in the air and reading it, but I never made it more than a few pages before my arms started to feel like very heavy, looming weights. I ended up listening to a lot of podcasts and phoning a lot of friends. I would have loved to be able to watch tv from my back during that recovery period!
You jest, but when I moved three years ago, I watched about three seasons of Burn Notice lying in bed, holding my phone in an awkward position overhead about 8 inches from my face. It was less bad than it sounds, but this product would basically mean that I could do that much more comfortably, without light bleeding over and waking up my wife (or vice-versa).
The article states that most people prefer to have some distance between themselves and the medium that plays the movie. They don't want to be enveloped by the movie. Hence a virtual theatre was created for the Netflix VR app.
I do wonder if they will also add the full-screen experience. That could be interesting ...
Well, if they simulate a movie theater, and you burn some extra money to simulate the price of snacks, you can almost have the entire theater experience without the risk of being shot by some nutjob.
I know you're probably being sarcastic, but that's probably only true if you're a female in an abusive relationship, or particularly if you're a male with suicidal tendencies.
People spend far more time at home than at the movies, which increases the statistical chance of dying from a gun at home compared to elsewhere. The commenter was showing that comparing what is more or less likely is usually irrelevant because the statistics are often misleading. Ignoring "time spent somewhere" makes home look like one of the deadliest places to be.
When I think about it, I feel that this would mainly appeal to people like factory workers who live in horrible dorms (windowless closets), prisoners, paraplegics, the homeless; in other words people who are trapped in a specific environment. The problem is that most of them can't afford this setup... yet.
When I think about it, even low income individuals in the US have access to flat screen TVs cheap enough to have a nicer real-life living room than the virtual one Oculus affords.
That state is caused by addiction; that, and not VR, is the problem in the picture. The enabler can be anything: hard drugs, nicotine, coffee, TV, VR, video games, ...
Perhaps, but I also see it as a visualization of a future where the masses cannot afford a nice home, and spend the majority of their time in a VR world where they can imagine they live in a nice home.
Are you suggesting those people will literally forget they are living in the real world? I don't see this happening, since they will have to occasionally take the headset off to work, eat, and use the toilet.
What they will do is ignore the state they live in, and that is already happening today, mostly along with prescription pills, hard drugs and video games. This is what VR is capable of doing, nothing worse than what we already have. Therefore VR is not a concern by itself.
Initially, the tech simply won't be good enough to blur that line sufficiently or to have it be "always on." So it will be a temporary thing you put on and experience.
When it is more immersive, and devices are more like glasses or contacts, you'll likely start seeing people spend more time with some form of VR component being "always on." Things like artwork on the walls, lighting, upholstery, paint colors, etc.
Taken to the extreme, if you are a poor person living in a shipping container at some point in the future, and you can afford a device like this, I don't think it is unreasonable to expect that person to spend a disproportionate amount of their time in the virtual experience. That said, they will know it is virtual--it will still just be preferable to reality.
It all comes down to escapism and how that works. This is just a better form of it that will be more widely accessible down the line. I've never argued that VR is a new concern for that--it is just a new tool to facilitate existing approaches.
As others have mentioned a bit, while being a cute demo this really isn't advantageous. The constant brightness gets irritating with lights 3" in front of your face beaming directly into your eyes, and good luck trying to eat pizza and drink something while doing this. On the pro side, well, um.... not very clear what that would be over having an actual TV.
If you don't have a TV and all you have is the oculus, then why try to re-create such an environment at all? You lose the immersive qualities of the oculus to begin with.
To me the point of oculus is having content made directly for it... where it is 180-360 degrees, and you can't see it all without looking around. Repurposing standard Netflix movies doesn't give much an advantage.
I own the Oculus and have tried various bits of movie playing already. While I haven't tried this Netflix implementation, it is not difficult to imagine.
VR has some limited cool potential uses, but its utility is being overblown by the community, much as it was in the '90s.
I would really love to be able to watch a movie and look around the scene. Even if the camera was at a stationary point.
Imagine watching Batman while perched up on a ledge in Gotham... when the Joker comes flying down the street to the right and you look right to watch, while someone else viewing the movie looks left to see Batman flying down.
i wonder if john watched daredevil one episode at a time or if he binged on it
i was intrigued by the virtual theatre concept, but i found watching a film was too much for comfort's sake
ignoring resolution issues, or strain, the goggles pressing against my face was the source of the most discomfort
when i took off the goggles my face was hot and sweaty and my eyes had white circles around them where the blood had been kept out
these current vr headsets are goggles with elastic and as long as that is the case i think they will fail to attract a consistent, returning, user base
the skin around your eyes is unable to breathe or receive blood
imagine watching a movie while wearing ski goggles, then have those goggles shine bright lights into your eyes
it is an uncomfortable experience
MSFT's HoloLens is a set of glasses that sits off your face, allowing circulation below the lens. this sort of design is certainly the way forward
I tried this out and watched some Supernatural. My only gripe would be that I wish I could move further back from the virtual screen. At least I think that's what the problem is.
There's a lot of focus in this thread on 'Why would you want a fake living room environment?' but he makes this statement:
"You could even go all the way to a face-locked screen with no distortion correction, which would be essentially the same power draw as the normal Netflix application, but it would be ugly and uncomfortable."
This is what his post is about - the work to accomplish a screen surface that exists in 3d space independent from your head. What goes around the screen is a secondary issue.
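For illustration, the core of a world-locked screen can be sketched as counter-rotating the screen's geometry by the head pose every frame. This is a yaw-only toy model of my own, not code from Carmack's post:

```python
import math

# World-space (x, z) corners of the virtual screen edge, fixed once:
# 1 m to either side, 3 m in front of the seated viewer.
SCREEN = [(-1.0, -3.0), (1.0, -3.0)]

def to_view_space(point, head_yaw):
    """Rotate a world-space (x, z) point by the inverse of the head's yaw.

    Applying the inverse of the head pose each frame is what makes the
    virtual screen 'world-locked' instead of glued to the user's face.
    """
    x, z = point
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    return (c * x - s * z, s * x + c * z)

# Head facing straight ahead: the screen sits centered in view.
print([to_view_space(p, 0.0) for p in SCREEN])
# Head turned 90 degrees: in view space the screen swings off to the
# side, exactly as a real TV on the wall would.
print([to_view_space(p, math.pi / 2) for p in SCREEN])
```

A face-locked screen would skip this inverse transform entirely (the quad just stays centered in view), which is why Carmack describes it as cheap but ugly and uncomfortable.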
I don't get it. Why would you want your head forced to point in one direction when you could, for example, lie down and point your head in the most comfortable position?