Google Glasses are real, will use two 0.52-inch micro displays (geek.com)
77 points by ukdm on Feb 23, 2012 | 47 comments



The article concludes by asking what we'd do with a wearable computer, but that's not the right question to ask. We already have wearable computers, but we call them "smartphones".

Instead, we can think about what uses we have for new interfaces on these computers. Gesture-based interfaces have existed in browsers for a long time. And my friends from the subcontinent would say that they've had precursors for many centuries[1].

HUDs do represent something "new" (in terms of availability, not technology). So perhaps every time I look at an ad, I get a red/yellow/green indicator of the advertiser's BBB rating. Or I look at a printed URL, make a gesture, and it's bookmarked for me. Think of the data overlays that you could actually use and that could likely be monetized quickly.

What I fear, of course, is selling ads into my eyes based on what I'm looking at, because everybody (particularly Google) seems to think that advertising is the primary way to fund the web and mobile apps. I already hate billboards around town: this seems like an opportunity for a sea change in how we monetize our work.

[1]: https://en.wikipedia.org/wiki/Mudra


You seem to be drawing a line between mobile and wearable based on where the computing resource resides. [1] But no one really cares where the computing resource resides. And cloud computing seems poised to render that distinction largely moot anyway.

When most people draw a line between desktops and mobiles and wearables, they're talking about the interface and app model. That's why netbooks get lumped in with desktops and tablets get lumped in with mobiles, despite being essentially equivalent in size and mobility.

[1] "Is the computer on my desk, or in my bag, or in my pocket, or on my belt?" vs "Do I sit at a desk with a keyboard/pointer and manipulate many apps, or do I use it pretty much anywhere with a single app in focus, or do I use it by virtue of wearing it with some new app model that fits wearable use cases?"


The best case for Google ads on a wearable would be for them to use AR to replace existing ads... look at a billboard and, instead of seeing the ad on it, see a different one there. Google then effectively gets to "tax" all advertising (the ad campaign buying the billboard now needs to pay Google to let it through), and we don't have any more ads than before, just different ones.


Why replace the places where people have thought to place ads when so much of the world is empty of ads and has plenty of space?


> We already have wearable computers, but we call them "smartphones".

Smartphones are not wearable computers. I don't wear mine, but I always have it with me in my pocket. I don't wear my wallet - it's in my pocket. (Laptops are also not wearable computers, even though I always have mine with me.)


Do you wear your belt, or is it just always with you, hooked through some loops on your pants?

This isn't meant sarcastically -- the idea of "wearing" something is pretty tied up with the context of the object (e.g. is it "clothing") and less to do with how you're actually carrying it around, which can make "wearable" computing something of an oxymoron depending on how it's interpreted.


Whether or not the two things are equivalent in one way ("you have it with you") isn't what we're discussing. Am I wearing this book in my hand? No I am not. The ring on my finger? Yes. "Wearable" is a word that means a specific thing. It doesn't mean "something you have with you".


Right. So why are you wearing your belt (hooked through some fabric loops on your pants) and you're not wearing your phone (resting in a fabric pouch on your pants)?


No, but unlike a book, I can clip my phone to my belt (well, if it were 10 years ago and I still thought that didn't look terrible). Given Bluetooth and PANs, there's just not much difference between what's in my pocket and what's on my belt.


Perhaps a distinction can be drawn based on how you use the device - with the exception of audio and vibration notifications, phones have to be in your hand to be used. Wearables can be used without overly impeding whatever you're doing.

Still, having an always-accessible computing platform and internet terminal, even if it has to be held, is a pretty big deal :)


In your pocket is close enough for almost all purposes. Laptops that you have to carry are qualitatively different.


Use a strap like they do with iPods for joggers, and you are now wearing your computer. I think the point is the smartphone can be considered "wearable" simply because you can, not because people already do.


I have a friend who is almost blind. Nothing wrong with his eyes. It is his visual cortex that has the problem. Can't be corrected with lenses.

I think this type of technology will be of great use for people who have this kind of genetic deficiency.

As an example: currently my friend uses an iPhone to take a picture of a menu, holds the display a few inches from his eyes, and zooms in on the picture to read the menu.

I think he could use these glasses to stream zoomed in video and actually see what is going on around him.


I wish someone would make glasses using the Microvision laser retina displays. I worked in the MIT Media Lab wearables group as an undergrad a decade or so ago, and they were pretty awesome then, but aside from a couple of defense applications, I've never seen them ship multiple units. (http://www.microvision.com/wearable_displays/index.html)


They've had these almost available for so long that I now assume there's some major flaw that they're not talking about.


I've tried to get my hands on their stuff in the past, and one of the issues appears to be that it's considered "national strategic", which is code for "this stuff gives our military an advantage over the other guys". And that was as close as I got to getting an answer on general availability.

That being said, you can sign up as a developer and get a "kit" which has like two eyepieces and various support stuff for like $20,000, but that was not something I could invest in to satisfy my curiosity :-)


Is that the group that built a shoe-heel embedded computer, a one-handed keyboard, and some red monochrome one-eyed glass?

I saw this once on TV, can't stop dreaming about it since.


I think so -- there were other places doing this, too. The most interesting "in the wild" use of wearable computing was of course from UC Santa Cruz -- the Eudaemons (http://en.wikipedia.org/wiki/Eudaemons) who built a roulette-defeating computer. Apparently Claude Shannon and Edward Thorp built something similar in 1960 too.

I worked for Steve Mann (I was an undergrad research assistant; more a learning experience than actually doing anything useful beyond sysadminning), who mainly did "painting with light" and other digital photography things, using his WearComp platform as a tool (he'd been developing it since the early 1980s). He also wore a 5W radio transmitter on his head (to get 56Kbps data back in the late 1990s). He's now at U of Toronto. (http://en.wikipedia.org/wiki/Steve_Mann)

Thad Starner (now at GA Tech) and Lenny Foner were probably doing the most useful general wearable computing work at the time. There were one-handed chording keyboards ("HandyKey Twiddler", the new version is http://www.handykey.com/), PC104 based embedded computers (now you'd just use a cellphone), and probably the top display device was from MicroOptics -- there was another headmount display which was cheaper but not as good, and it fully occluded one eye. Stuff ran linux, and it wasn't too hard to bring up QVGA X11 or an 80x24 console.

The thing I wanted to build, given the tech at the time, was a purely auditory wearable computer. Even in 2012, displays are kind of inadequate, but we have had viable portable audio sources (headphones) for decades. Combine that with sensors and some kind of ubiquitous camera (either mounted on the person, or just using environmental cameras as inputs), and you could do something pretty good.


rdl, can you comment on how they're fixing the image on the retina as the eye saccades? (or are they doing that at all?)


I don't mean to be rude, but the author just made this up. The glasses will not be using "two .52 inch micro displays", unless Google happens to be working at the height of early 90's tech.

Current video glasses employ Holographic Optical Elements, which are pretty neat. It turns out that you can make a (limited) holographic representation of almost any lens. Since holographic films are flat, you can collapse big, difficult parts of an optical system into nothing more than a flat sheet of plastic. HOEs, though perhaps not in everyone's thoughts, are definitely present in your day-to-day life. There's one behind your LCD right now. It's a holographic representation of a diffuser, with a carefully designed (but somewhat limited) diffusion angle.

So the Google Glasses will almost certainly use holographic optical elements. That doesn't mean that Princess Leia is going to jump out of them, and it doesn't mean that they're going to display anything in 3D. It means that the technology used to bend and squish the light into your eye is diffractive, not refractive (the difference between a pinhole and a glass lens).

They will almost certainly take the form of the extant Vuzix goggles or the Lumus glasses, both of which are almost market-ready. In both cases, a micro/pico/nano-projector is fired into a holographic element that compresses the projection strongly in one axis. So you go from a rectangular projection, which doesn't fit in the temple of your glasses, to a line projection, which does, traveling along the temple of the glasses. As the beam travels, it is bounced into a lens with a mirror, or along the inside of a plastic conduit or holographic waveguide. The Vuzix lens and the Lumus lens take slightly different approaches, but essentially both have multipart holographic lenses embedded in the lens in front of your eye that uncompress the compressed axis section by section and split the projection out into your eye.

What you see, in the end, is a large image floating out in space. Note that without tracking hardware, this image does not track your eye; it tracks your head. So your floating screen will move with your head, not with your saccades. Please note that, optically speaking, the language I am using here is very coarse.
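
To put a rough number on "diffractive, not refractive", here's a back-of-envelope sketch using the standard grating equation. The grating pitch and wavelength are made-up illustrative figures, not anything from Vuzix or Lumus; the point is just that a completely flat structure can bend light through an angle that a flat slab of glass never could.

    import math

    # Grating equation at normal incidence: d * sin(theta_m) = m * lambda.
    # Pitch and wavelength below are illustrative guesses, nothing more.
    lines_per_mm = 600
    d = 1e-3 / lines_per_mm      # grating period: ~1.67 microns
    wavelength = 532e-9          # green light, 532 nm
    m = 1                        # first diffraction order

    theta = math.degrees(math.asin(m * wavelength / d))
    print(f"A flat grating bends this light through {theta:.1f} degrees")  # ~18.6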

The holographic optical elements, and their particular design, are the key technology here. They can be flat, so the glasses can be made reasonably thin and small. Micro projectors are almost small enough to fit in the temples of glasses already (no need for ".52" micro displays, wtf). You can see images of the Vuzix and Lumus lenses on Google Images and in videos of CES 2012.

Keep your eyes open for the rainbowy rectangles visible in the lens. That's the HOE. Lumus' HOE is in sections, bouncing the screen out in columns; Vuzix is either doing much finer columns or has some other approach. Lumus claims HD resolution; I think the Vuzix resolution is still unclear or varies.

http://www.kguttag.com/wp-content/uploads/2012/01/Vuzix-Holo...

http://www.instablogsimages.com/1/2011/12/15/lumus_see_throu...

There are two manufacturers producing this technology - Vuzix and Lumus.

Personally, I've never been so excited about a technology (and I've been watching this space for years) and I'm so totally jazzed that Google is doing this. I hope they're using the Lumus stuff because Lumus has been around forever but won't sell samples to the public. I'll buy these things the moment they come out and hack the hell out of them.


"unless Google happens to be working at the height of early 90's tech"

We are working at the height of 90s tech. For a consumer product cheap enough to mass-produce at a price people are willing to pay ($200-$600), you have to step backwards in time a bit in embedded system design.

I've built a wearable using this type of display (the half-inch micro display), and the biggest issue is that you cannot resolve an image well that close to your eye without getting an instant eyestrain headache. You need a lens system to create virtual distance between your eye and the display. This is also the big issue with the "project the display on/in the lens" approach. You need the display's focal length to be similar to the depth at which the user spends most of their time looking; otherwise switching from the real world to the display and back is rather headache-inducing.
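
To make "virtual distance" concrete, here's a minimal thin-lens sketch with made-up numbers (not from any real product): park the microdisplay just inside the focal length of a magnifier lens, and the virtual image the eye focuses on jumps out to a comfortable distance.

    # Gaussian thin lens: 1/v = 1/f - 1/u, with u the display distance.
    # A negative v means a virtual image on the same side as the display.
    # All numbers are illustrative assumptions, not real product specs.
    f = 25.0   # lens focal length, mm
    u = 24.0   # microdisplay sits 1 mm inside the focal length, mm

    v = 1.0 / (1.0 / f - 1.0 / u)   # -600 mm: virtual image 0.6 m out
    print(f"Virtual image appears {-v / 1000:.1f} m in front of the eye")
    # Push u toward f and the image recedes toward infinity, which is how
    # you match the display's apparent depth to where the user usually looks.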

I'm really excited about this too, but more for the mainstream attention on a field that has been marginalized for over 10 years now.


Wait, sorry, do you mean you (personally) or you (Google)?

I'm aware of other microdisplay-based goggles like the Recon Instruments stuff - where a microdisplay approach makes more sense, but it's not covering your FOV.

Have you worked with HOEs at all? Or worn these glasses? It's true that you need to place the image carefully, but, for example, in the case of the Lumus display it appears about 10 feet out from you, which was alright for me on the showroom floor.

At CES 2012 Vuzix claimed they'd have $600 goggles in Q2 2013. This fall they are doing a monocle for $2500. No pricing at all from Lumus. Of course, I am appropriately skeptical about these claims, but HOEs are eminently manufacturable, pico projectors are cheap, and as you say, the renewed attention on this space is pretty rad. I'm optimistic. If the Google Device doesn't use the future looking stuff, I'll just buy the Vuzix set and develop on that.


ok, pronoun breakdown because apparently I did poorly.

We=Engineers in general

I = Me personally

you = potential designer of a wearable display, could be replaced with "one"

If the displays work as well as you describe (I hope they do!) then they would be great for mass production in a product like this (a glasses-based wearable) in 5-10 years or so, when they are inexpensive.


Yeah, especially since you are working in this space, you really, really need to see these. They're not 5-10 years off like they were 5-10 years ago.


Put that tech in these and proceed to take my money: http://ak.buy.com/PI/0/350/216477416.jpg


Let me adjust my tinfoil hat here.

Let's assume the following technologies would all make sense in something like this:

- Forward-facing camera
- Eye tracking
- Object recognition (similar to Google Goggles)

So Google can tell what you're looking at, and for how long. Nope. Can't see any possible risks there from the company which tracks you across the web...


I'm still in the early phases of this and not that dogmatic about it, but I'm noticing in my life I'm starting to trend towards a "fight the cloud; own your personal computers" party line. Augmented reality can be a greatly empowering technology, but you should own it. And you don't own the cloud. He who owns the CPU calls the cycles; be someone who owns, not someone who is owned.


Software freedom becomes more and more important in this world. Let's hope Google follows their policy with the Nexus line and makes room for custom firmware.


I'd be more excited if these were being built by a company that didn't have a stake in the smartphone wars, but hopefully they'll be "open" enough to work with any OS via Bluetooth. Either way, this sounds kind of amazing. I look like a nerd already, I'll take the risk of looking like more of one to have a tiny screen in my goddamn glasses. This is the future.


There has been a lot of talk that these glasses will connect via 3g/4g and have different sensors to improve the augmented reality experience. Why not just have a display, power source, and bluetooth module? Then use your phone for the difficult computing.


Because the screen is the primary reason your phone is as big as it is.

A big, bright screen forced a big battery.

I don't know what the battery usage of a HUD screen is, but I suspect it's a lot less - they don't require backlighting, for a start.

That cuts down on the battery requirements, which cuts down on the size & weight. (Obviously this is speculation: it wouldn't be a huge surprise if you had to carry a battery pack wired to the glasses somehow.)

Secondly, the glasses themselves need to do a lot of fast, low-latency image processing. It isn't at all clear that Bluetooth is an appropriate low-latency, high-bandwidth connection mechanism for this.
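
As a rough sanity check (all numbers here are my own assumptions, not anything Google has announced): even a modest raw camera stream is far beyond what classic Bluetooth can move.

    # Back-of-envelope: can classic Bluetooth carry a live video stream?
    width, height = 640, 480   # a modest VGA-class stream
    fps = 30
    bits_per_pixel = 12        # raw YUV 4:2:0

    raw_mbps = width * height * fps * bits_per_pixel / 1e6
    bt_edr_mbps = 2.1          # rough practical throughput of Bluetooth 2.1 + EDR

    print(f"Raw stream: ~{raw_mbps:.0f} Mbps vs Bluetooth: ~{bt_edr_mbps} Mbps")
    # ~110 Mbps vs ~2 Mbps: you'd need ~50x compression, and the codec work
    # (plus the extra round trip) adds exactly the latency you were trying to avoid.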

Having said that, I suspect you'll be able to connect to your phone via bluetooth if you want - but the primary processing will be done on the glasses.


Can't wait to see what happens to people wearing these while they drive. And the laws that follow.


It will be even more fascinating when applications that could be used to assist in driving become commonplace, too. Potential hazard detection, in-eye GPS, etc., could be incredibly useful. ...Reading your friends' latest tweets... not so much.


Can I root my Google Glasses and add software that simulates the sunglasses in the John Carpenter classic 'They Live'?

(For example: http://peteashton.com/images/they-live-20100615-034749.jpg )


The grid. A digital frontier. I tried to picture clusters of information as they moved through the computer. What did they look like? Ships? Motorcycles? Were the circuits like freeways?

I kept dreaming of a world I thought I'd never see. And then... one day... I got in.


OK. It will be possible to see quite realistic 3D, but how will it affect my eyes?


Goggles will do nothing.


I can't imagine that this will do anything but make people look really nerdy.


Oh, come now; everyone here has more imagination than that.

I feel like our brains have already rewired themselves somewhat since the web got really ubiquitous. I find it fascinating to consider a greater magnitude of constant information, meta information, facts, distractions.

Technology is neutral. Some of us made lives we wouldn't have without the Internet. Others have gone missing, lost in cyberworlds devoid of human touch.

It will be incredible to see what happens when the next interface allows each of these directions to flourish. Google Goggles are a hop and a skip down the path if you ask me.


Yeah, and even if it's a 1.0 thing that they expect to perfect over a couple of versions, wearing something on your face does a lot to your sense of identity. It needs to look right, which pretty much means it needs to be well-designed - not Google's strong point.


It will be interesting if it catches on and everybody starts wearing glasses. Will we start hearing insults aimed at people who are only "two-eyes"?


Also, sign me up - I've been wanting something like this for years.


Well, I think this will be the first attempt ever to produce a portable HUD, and I believe the impact it has won't depend that much on its commercial success.


It's not in any way the first attempt to produce a portable HUD.

It might be the first attempt to produce a "low cost" mass market portable HUD, though.


Yes, I actually meant a marketable product.


So wait for the Apple designer glasses version :o)


I would buy it regardless.



