Why You Won’t See Hard AR Anytime Soon (valvesoftware.com)
140 points by phenylene on July 21, 2012 | 77 comments



This seems to me a bit like saying we could never have a flat tablet computer because it's impossible to make a perfectly flat CRT that's thin enough, what with the magnetic yoke, the electron beam, the shadow mask, etc.

"Breakthrough" is used much too often these days. Breakthroughs are breakthroughs because you don't see them coming. It will be laugh-out-loud unexpected when it comes, seem obvious in hindsight, and be one of the few things ever invented that might deserve a patent monopoly to be granted for a small handful of years.


I agree, and I think the author would too - the key phrase in the headline being "Anytime Soon". I think he expects a breakthrough to happen - but remember that there's generally quite a bit of time between the initial breakthrough and its feasibility for large-scale consumer applications. The first LCD display was created in 1972, and the first patent on the underlying technology was issued in 1936 to the Marconi Wireless Telegraph company. I don't remember seeing them in Wal-Marts until the late '90s at least.


As we approach "the singularity" or whatever you want to call it, the idea -> prototype -> product timeline contracts quickly.


Do you have any evidence for this statement?


The time stays the same: as ambition grows, so does the difficulty of prototyping and producing a product. Any gains made in rapid prototyping are offset by higher goals.


That point is already addressed in the article; he says:

"Hard AR is tremendously compelling, and will someday be the end state and apex of AR.

But it’s not going to happen any time soon."

Then later:

"Of course, there could be a technological breakthrough that solves this problem [...] In fact, I actually expect that to happen at some point[...] But so far nothing of the sort has surfaced in the AR industry or literature[...]"


My TL;DR of this article: due to additive blending, the "augmented reality" of the immediate future is restricted to 2D HUD-like overlays and ghost-like 3D overlays.
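(To make the additive constraint concrete, a toy Python sketch; the function and names are mine, not from the article:)

    # Additive (see-through) blending: the display can only add light
    # on top of the real scene, never remove it.
    def additive_blend(real, virtual):
        # real, virtual: perceived intensities in [0, 1]
        return min(real + virtual, 1.0)

    # A "black" virtual pixel (virtual = 0.0) leaves reality untouched,
    assert additive_blend(0.8, 0.0) == 0.8
    # and no overlay can ever be darker than the background behind it -
    # hence HUDs and ghosts rather than solid virtual objects.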

True. Although, if you think about it, that's still pretty cool "sci-fi" tech, and opens up a lot of exciting futuristic possibilities. For example, you can still have a 2D HUD giving all the context-sensitive information you need, and you can still have ghost-like 3D images overlaying the real world to help you out.

In fact, I have no problem waiting for nonadditive AR glasses for everything except video-games, because in real-world use I want to be able to discern reality from augmented data.


Eleven ways to solve what you're trying to do:

1. Direct optic nerve connection.
2. Optogenetics (hack the ganglia for blue/yellow on/off control).
3. Holographic displays for close, focusable screens.
4. Eye drops that can be stimulated to block or darken light.
5. An AR that doesn't obsess over sight but uses soundscapes.
6. Invert/distort the image so the brain relearns what light vs. dark means.
7. High-res LCDs that use hemispherical lenses over groups of pixels to produce defocused light.
8. Simplify the problem by making AR windows, not goggles.
9. Embrace the imperfections, delays, and tears for artistic license.
10. Constrain the environment, so hard AR arcades precede portable devices.
11. Sponsor an X Prize.

Ultimately, feasibility and time to market are questions of money, not a lack of ideas or technology. Given a billion dollars, Abrash could have a hard AR system out in under 5 years.


Does anyone else feel that widespread use of "hard AR" isn't actually desirable?

I'm all for soft AR, and anything else that increases the convenience and bandwidth of human/computer interaction.

But hard AR - seamless with the real world - makes it possible for people to quite literally lie to themselves (or, more sinister, be lied to) about what's real around them. I have nothing against solipsism, but with that kind of technology, it seems like it'd make more sense to go full-on virtual reality, if you want that, rather than viewing the real world through rose-tinted glasses.


People already lie to themselves and are lied to. Millions of years of evolution have given us a brain with a remarkable ability to heal over any damage to its worldview. (See _Phantoms in the Brain._)

AR is not the gating factor here.


"People already lie to themselves"

The most compelling illustration I've seen of this is the McGurk effect [1].

[1] https://www.youtube.com/watch?feature=player_detailpage&...


That's not quite the same as a brain "healing over damage to its worldview" though. That's a side effect of the brain incorporating multiple input sources to overcome unreliable and noisy data. It's not the brain "lying to itself", but the brain recognizing (and I'm stretching the word "recognize" here because this is not a conscious effect at all) that auditory input is less reliable than visual input for differentiating certain consonants, and letting the visual interpretation dominate.

This is really a very different concept from self-deception, because it has evolved specifically because in practice it tends to provide a more accurate view of the world.

On the other hand, self-deception measures, by definition, function to decrease the accuracy of our worldview in order to help us in some other way. For example, realizing that the waterfall at the top of the mountain is not inhabited by a fertility spirit will make your worldview more accurate, but it may also get you executed by the rest of your village, and since lying convincingly is harder than sincerely professing a false belief, we have evolved to cultivate false beliefs when it is politically expedient.


Like in Serial Experiments Lain, where "The Wired" (the internet) eventually becomes indistinguishable from reality.


On drawing black: Maybe biologists will discover a way to use carefully timed pulses of light to cause rods and cones to emit a diminished signal, or maybe all incoming light will be polarized, with an inversely polarized all-wavelength laser that cancels out the incoming light. Or maybe simcop2387's holographic mask idea (https://news.ycombinator.com/item?id=4273811) on a contact lens would work.


Maybe we won't be using biological eyes any more. Maybe we'll be interfacing at the retina or neural level. If it's not any time soon, it may not be using any simple solution we can think of right now.


Indeed... and as hard as the problem seems, what we think we see is already unlike what hits our retina - the mind fills in blind spots, balances color and detail, patches retinal irregularities, smooths motion, and so forth. The solution will probably use these qualities to its advantage.

(In Oliver Stone's 1993 miniseries Wild Palms, the near-future AR/VR experiences are sometimes enhanced via a hallucinogen/hypnotic called mimezine.)


When speculating on this sort of thing, it's unnecessary to think in terms of biology and the whole two eyes thing.

A cybernetic implant or augment could produce the effect of having another set of eyes that was seeing something else entirely.

Even cybernetic eyes with a uniformly high-density perception area, instead of the acuity concentrated in a relatively small zone as in biological eyes, might have the advantage of situating things "off to the side" without requiring you to actually look at them to make sense of what's being conveyed.


Scott Westerfeld's novels The Risen Empire and The Killing of Worlds describe something like this - several characters have implants that allow them to have multiple "levels" of sight and hearing to interface with technology.


Recommendation duly noted. I'm a big Vinge fan, too.


Another big problem to solve is the decoding of the external world, so that the AR system knows what is empty space, what is floor, what is a wall, etc.


On drawing black:

LCD glasses that "darkened" at specific points do have the problem that those spots would be blurry, yes.

But if the spots are big enough, they will be solidly black at their middle. And the display can then "fill in" the darkened blurry region proportionally with the "same" pixels from the camera, altering the ones it wants.

It would look bizarre for people looking at the person wearing the glasses, with strange black dots flitting across their glasses... And the syncing would have to be perfect (very difficult).

But it's an engineering issue, not a fundamentally physical one.
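(A toy Python model of that fill-in idea, assuming a registered camera feed and a Gaussian blur standing in for the defocus; all names are mine, purely illustrative:)

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def occlude_and_fill(scene, camera, mask, virtual, blur_sigma=5.0):
        # scene:   light reaching the eye through the glasses (HxW)
        # camera:  registered camera view of the same scene (HxW)
        # mask:    1 where we want to block reality, 0 elsewhere
        # virtual: virtual pixels to draw inside the blocked region
        # An out-of-focus LCD pixel darkens a blurry blob, not a point:
        blob = np.clip(gaussian_filter(mask.astype(float), blur_sigma), 0.0, 1.0)
        darkened = scene * (1.0 - blob)
        # The additive display repaints the blob from the camera feed,
        # substituting virtual pixels where the mask is solid:
        fill = np.where(mask > 0, virtual, camera) * blob
        return darkened + fill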


The time cost of signalling and unit computation is fundamentally physical. Drift and error in inertial and position sensing is fundamentally physical.

You can make very good goggles if the money and time per unit isn't an issue. Registration to the point that there are no detectable flaws is very hard. You have a hard timing constraint from the human visual system on the order of 5 to 50 milliseconds. In that time, you need to sense the visual environment, determine position and its derivative vectors, figure out how your simulation needs to be updated, render the new data, and display it to the user.
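(To make that budget concrete, a back-of-the-envelope tally in Python; every number below is illustrative, not measured:)

    # Motion-to-photon latency budget for hard AR (illustrative numbers).
    BUDGET_MS = 20.0  # somewhere inside the 5-50 ms perceptual window

    stages_ms = {
        "camera exposure + readout":      8.0,
        "pose estimation (IMU + vision)": 3.0,
        "simulation / app update":        2.0,
        "rendering":                      5.0,
        "display scanout":                8.0,
    }

    total = sum(stages_ms.values())
    print("total: %.0f ms against a %.0f ms budget" % (total, BUDGET_MS))
    # -> 26 ms, already over budget with zero slack left for anything.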

Not something you can necessarily solve by throwing money and engineering hours at it until it works. In fact, there may be a few Nobel prizes in physics between here and there, or possibly biological integration well beyond our current horizon.


So, I have one issue with this: "hard AR" already has to solve all the mentioned problems of video passthrough if it wants to make virtual objects seem lifelike: dynamic range, field of view, lag, all of it. If you somehow solve all of them, which of course seems impossibly hard at present, then there's no point in worrying about see-through AR; just use video passthrough.


He mentions some even harder problems that are specific to video passthrough: mainly, having to focus exclusively on a screen close to your eyes all the time, which gets tiring fast.


Virtual retinal displays will be able to fix that; they display an image directly on the retina, using lasers, and will eventually be able to have a range of focal depths in a single scene.


It shows how fast computer technology moves these days that he spends the whole article talking about how hard and far-off this tech is, then casually mentions that it might be available in as little as 5 years.


Yeah, I see soft AR being somewhat common in 5 years, especially if Google Glass and its competitors take off. It'll be another 5-10 years after that until it is as ubiquitous as smartphones are today, IMO.

Hard AR might be an expensive toy at that point, though I'm not sure if the author is speaking about "it's possible and has been done" or "everybody is doing it."


I felt the same way. I think 5 years is way too soon if the problem is as hard as he describes. Until I can have a real-time conversation with Siri without any lag or a network connection and with her understanding nuanced colloquial American English, hard AR is still at least a decade away.


Couldn't you just use something like circular polarization (http://en.wikipedia.org/wiki/Circular_polarization)? Each point on the glasses is a TFT that polarizes the incoming light to the opposite handedness, thus creating a good black surface.

His second argument, about processing time, holds if the processing is done on the phone, but with cellular networks becoming better organized you could easily have a computing cluster do most of the work. Using that and some basic statistical inference (to fudge some of the processing), you can get pretty impressive response times.
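(One common flavor of that statistical fudging is client-side pose prediction to hide the round trip; a minimal sketch, where a real system would use something like a Kalman filter instead:)

    import numpy as np

    def predict_pose(position, velocity, acceleration, latency_s):
        # Dead-reckon where the head will be when the cluster's answer
        # arrives, and render against the predicted pose instead.
        return (position
                + velocity * latency_s
                + 0.5 * acceleration * latency_s ** 2)

    # e.g. a 60 ms round trip with the head moving at 0.5 m/s:
    p = predict_pose(np.zeros(3), np.array([0.5, 0.0, 0.0]),
                     np.zeros(3), 0.060)  # ~3 cm ahead of the stale pose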


“you can just put an LCD screen with the same resolution on the outside of the glasses, and use it to block real-world pixels however you like.” That’s a clever idea, but it doesn’t work. [..] a pixel at that distance would show up as a translucent blob several degrees across

The LCD solution was the first one that came to my mind too, and I'm surely missing something here, but why wouldn't the regular (additive) display pixels suffer from the same problem as described above?


Probably, which is why I think a better idea is to watch the person's eyes and try to guess their focus. That, combined with a higher-resolution screen (maybe 3x?), would let you shape a diffraction mask to match where light should be coming from for that part of the person's eye.

At least I think this should work; combined with being able to control the translucency of the pixels, it could be fairly accurate.


The additive pixels are taking a separate optical path, combined in at the last second by either a beamsplitter or a waveguide.

In current commercial HMDs, the additive pixels are statically lensed to a fixed focus distance (not infinity, as a sibling comment asks, but rather a best guess of where the user will be focusing most often, e.g. 1.25 meters).

Using dynamic lenses to put the additive pixels at any focus distance is an exciting topic of current research:

http://www.quora.com/Volumetric-3D

Dynamic lenses could also be put on the forward-facing / physical-reality optical path both before and after the subtractive pixel layer, to put the subtractive pixels at any distance as well, solving the problem. (This would incidentally also allow lensing physical reality itself to any distance too, creating perfect / hyper-real vision correction.)
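(For concreteness, the underlying relation is just the thin-lens equation. With the usual real-is-positive sign convention, a panel at distance d_o inside the focal length appears as a virtual image at distance d_v, where

    1/f = 1/d_o - 1/d_v

so a dynamic lens only has to sweep f to move the image plane, e.g. out to the 1.25-meter best-guess distance mentioned above.)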


I'd guess the display pixels are holographically projected into the field of vision at infinity?


Great piece. As an entrepreneur in the CV/AR industry, it's interesting to see how a larger company examines the problems faced by technology that is about to converge. As a smaller company, we have to come up with a product that will stick in a related industry that will hopefully position us well when the time comes for true AR (hard or soft).

"skate where the puck's going, not where it's been"


http://www.fastcompany.com/magazine/36/cdu.html

"You'd have to be a real idiot to skate to where the puck used to be"


http://www.fastcompany.com/magazine/36/cdu.html

That has to be one of the most useless articles I've ever read. So, yeah, if you take a metaphor for a generalization about anticipation, you can "debunk" it by ignoring that it's a metaphor and quibbling over minor issues of pedantry. But that article does nothing to share any actual useful information, so far as I can tell.

"You'd have to be a real idiot to skate to where the puck used to be"*

ROFL... yeah, that quote is hilarious, though.


The Gretzky Principle is nice to come across from time to time.


Notes on AR adoption: I believe AR will be introduced not through everyday life but via event contexts (similar to the dorky 3D glasses we use in the cinema today). In those scenarios the environment is much more controlled, and even processing lag can be forgiven. For example, take any enclosed stadium sport, like the Euro soccer championship: how amazing would it be to go to an empty soccer stadium and watch, via AR, the game happening in your hometown? I'm not saying there are no tech challenges, but it is definitely easier where the environment is controlled, and even being 3-5 minutes behind is still OK for a new kind of experience.


If I had to bet, I would say that AR glasses of the future will replace your vision entirely with a camera feed (called video passthrough in the article). Today's glasses have a narrow field of view and lag, but these are only quantitative problems. They can be solved with gradual improvements. They're not physical limitations. The camera is right next to the display, so there are no fundamental speed-of-light issues. It's easy to imagine that, over the years, the lag will be reduced enough to be imperceptible.

The problem of drawing black with see-through glasses looks like a much harder problem in comparison.


Aside from how obviously cool Hard AR would be (for us SF nerds particularly), and some interesting gaming/entertainment(/adult) applications, is there any real need for Hard AR? It seems like it would be used primarily to turn reality into fantasy, which I suppose has its merits, but is that enough of an impetus for the countless man-years of experimentation and development to get us there?


I can see two game changing practical use cases directly:

Releasing computer people (and most others) from slavery to a desk when working, including meetings. For personal interaction, you would need to stay within roughly 50-100 milliseconds of latency.

Giving information to people when they need it while repairing/building/driving/etc. a physical object.


I think your second example, and possibly your first, could definitely be accomplished with "soft" AR. Remote personal interaction would still be reliant on the latency of the network/medium, and without touch, I'm not sure how much more useful superimposing someone believably into my field of vision would be, as opposed to having a display or a holographic avatar of some sort that doesn't require full hard AR.


Does anyone know the author of the article? I can't find their name anywhere.



Awesome, thanks :)


I noticed the same thing and thought I was going crazy. It turns out that the blog link goes to a stripped-down alternate version of this page:

http://blogs.valvesoftware.com/abrash/why-you-wont-see-hard-...


It looks like the page has now been changed to include the full view, with the name in the header section. :) It's really eerie not being able to see who the author of a piece of writing is.


> how do you draw black?

emit antiphotons, duh.


Lol, using antimatter to destroy incoming photons just to get the color black would be seriously comical but cool over-engineering.


Must-control-physicist-inside.


No, please don't. Unless you're just going to object to me using the word "matter" to talk about a photon. It was a joke; I don't think it requires that kind of precision. I think it's loosely appropriate to refer to all antiparticles collectively as antimatter.


That was the reason (and the fact that the photon IS its own antiparticle, and, well, complicated stuff), don't worry. ;)


Are you an actual physicist?


Yes.


Question: have you had moments where something you just learned deeply and suddenly changed your understanding/intuition of reality? What were they, or the biggest one or two? (I only ask so as not to presume.)


You're thinking too hard. He's using "blend" mode and he needs to use "dodge". Or maybe it's "burn", or "dissolve".


Needs better definition of 'anytime soon': 3 years? 9 years? 20 years?

Also, arguing that something isn't possible seems risky for a practitioner in the field. It invests identity in the negative proposition, potentially biasing perception against new possibilities. (When a 'Hard AR' breakthrough does occur, it will come from a researcher who thinks it's possible and imminent.)


Putting a time frame on the commoditization of particular technologies is really, really, really hard. Most of the big ASIC design shops (I was at one for a few years) have to design a feature 2-3 years in advance and hope to god that it becomes standardized/commoditized in time to hit ROI. Even the big shops get it wrong sometimes, in which case they never ship it, or they just get surprised by someone who did it better or cheaper than they did.

I agree that arguing something isn't possible is pretty short-sighted, but he seems to have caveated these claims with something to the effect of "given today's technology and its trajectory." So in his opinion the tech isn't being actively developed, and I think it'd be appropriate to say "impossible without intervention."


At the end of the article:

> I’d be surprised if it was sooner than five years, and it could easily be more than ten before it makes it into consumer products.


Thanks, I'd missed that. "Hard AR" - with opaque and dark solid-seeming objects - within the next 10-20 years would feel very "soon" to me! So from my perspective the author's concluding paragraph shifts 180° from the tone of his headline and list of unsolved problems.


Agreed; I thought he was being wildly optimistic with his estimates at the end there.


Sensor person here. These problems are all solvable with enough sensors and compute power. But what good is it? I am sure there are specialist applications that make sense, like for doctors and mechanics who want to see schematics overlaid. I haven't seen a single consumer-level use case that makes sense. Games? Sure. Anything else?


I think the military use cases will pay for hard AR anyway (think Deus Ex: HR-style "see through walls to find targets" or "target leading" AR), and we'll start to discover the consumer use cases after the fact.

Actually, AR laserquest/paintball/airsoft games immediately fall out of that - now you need to be able to move silently, because a microphone array processing impulse noise superimposes your location on the blacked-out bunker walls. I'd pay to play.
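(The sound-localization part is standard signal processing; a two-microphone time-difference-of-arrival sketch in Python, with all parameters illustrative:)

    import numpy as np

    def bearing_deg(sig_a, sig_b, fs, mic_distance_m, c=343.0):
        # Cross-correlate the two mic signals to find the arrival-time
        # lag of the impulse (the footstep) between them:
        corr = np.correlate(sig_a, sig_b, mode="full")
        lag_s = (np.argmax(corr) - (len(sig_b) - 1)) / fs
        # Far-field geometry: the lag maps to the sine of the bearing.
        s = np.clip(lag_s * c / mic_distance_m, -1.0, 1.0)
        return np.degrees(np.arcsin(s))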


Here is a use case.

Suppose that you have face recognition + AR. Then you can have an application that keeps track of notes and tags them to people and objects.

So you make a shopping list, walk into a store that it knows, and there are arrows pointing to everything you have to remember to get. Or you are in a meeting, make a note to ask John a follow-up question, then when you meet John again you have a glowing reminder to ask him.


It is easier to just use Amazon and have a 3D hologram of the items (or just fake theater-style 3D). Less intrusive, too.


Amazon sells me milk? Amazon knows what I need to ask my co-worker?

Hey, I didn't say it was the most compelling use case. Just that it was a use case.



> I’m sure that one day we’ll all be walking around with AR glasses on (or AR contacts)

All light sources generate heat. Is it really a good idea to have a possibly intense heat source so close to your optic nerves?


Light sources generate heat primarily as a waste product from elements that radiate in the infrared spectrum - there is no rule that visible light must carry enough energy to significantly warm the surface it hits. The more efficient the source, the less heat is generated, so it's just a matter of using an efficient light source for AR technology. In addition, the human body is very good at eliminating waste heat, so even something running slightly above body temperature shouldn't pose a problem - we already dissipate the heat of the sun and metabolic heat very well.
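(Rough numbers, all illustrative: even a generous estimate for a near-eye display is tiny next to what the body already sheds:)

    display_power_w = 0.1    # a generous 100 mW microdisplay (assumed)
    resting_body_w = 100.0   # typical resting metabolic heat output
    print("display adds %.1f%% to the body's heat load"
          % (100.0 * display_power_w / resting_body_w))  # -> 0.1%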


All light sources generate heat.

Actually that is no longer true (!!)

http://www.wired.co.uk/news/archive/2012-03/09/230-percent-e...


And yet soft AR, e.g. Aurasma, may revolutionize many areas:

http://www.youtube.com/watch?v=frrZbq2LpwI

That is, do we actually need hard AR right now?


For a transitional period, it might even be desirable that the AR experience isn't lifelike: easier adoption, avoiding PR disasters, tuning customers in, etc.


You don't need AR, contact lenses etc. You just need a way to remove the screen and keyboard, i.e. a better way to interact with a computer.


Wouldn't a combination of multiplicative blending and additive blending give you the possibility of creating images with adjustable opacity?
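(In principle that's just per-pixel alpha compositing split across the two physical layers; a toy sketch, mine rather than the article's:)

    def blend(real, virtual, alpha):
        # Multiplicative (subtractive LCD) layer attenuates reality;
        # the additive layer injects the virtual pixel on top:
        return real * (1.0 - alpha) + virtual * alpha

    # alpha = 0.0 -> pure see-through; alpha = 1.0 -> full occlusion.
    # The article's catch: the subtractive layer is out of focus, so
    # the clean per-pixel (1 - alpha) term is exactly the hard part.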


TFA mentions focusing issues with LCD glasses (for blocking light). Would LCD contact lenses have the same issues?


Am I the only one who finds AR not that desirable or even scary?


No. Soft AR, maybe useful. Hard AR, very scary.



