Your eyes suck at blue (nfggames.com)
68 points by siim on Nov 10, 2010 | 40 comments



Ugh. We don't have an entirely clear picture of how our eyes physically detect color, much less how we perceive it, but there are serious problems with the argument the author makes here. You cannot simply take a color photograph of a scene, split it into three channels, then point out that the blue channel is "dark and contains less detail" as evidence of our inability to perceive the color blue. The fact is that the blue channel really is darker because of the actual lack of blue light in the photo.

The trick here is that the areas that have a lot of detail (her face, for example) contain less blue. If you use a color meter to inspect the areas around the girl's face, you'll find that there is less blue light present. That makes sense, considering that our skin doesn't contain a lot of blue pigment. This is exacerbated by the way the author overlaps the channel samples, placing emphasis on the areas impacted the most.

Basically, the author fails to understand the additive color model. We don't notice the pixelation of the blue channel in this photo because the result of the alteration is to introduce a low-contrast color into the photo where the aberration overlaps: yellow. If you look closely, you'll see that the areas where you see cyan and magenta in the red and green channels are replaced by yellow in the corresponding blue channel alteration. The effects are diminished by two factors: there isn't much blue luminance present to influence the other colors, and yellow contrasts poorly with most of the colors in the photo where we notice it (the hood is white).

If you were to take a color-neutral photograph and split out the RGB channels, you'd perceive the same level of detail in all channels.

EDIT: I'd kind of like to take back that last statement about perceiving the same level of detail in all channels. I don't know that you would, but that's not the primary thing that bugs me about the author's argument. My main point is that his argument is flawed, not his assertion. I don't know enough about human color perception to make that argument.


I do know a little about human color perception. Although the author's example is flawed, his argument does stand. Human eyes are much less sensitive to details in blue compared to green and red. Here's the best illustration I can find in a minute's googling: http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/OWEN... from http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/OWEN...

This shows up in the Red-vs-Blue battle analytics in both Halo and Team Fortress 2: Blue wins measurably more often because blue players are harder to focus on. Red-vs-Green would be more fair, but that would screw over the large male population with red-green colorblindness.

This is why the standard conversion of RGB to greyscale is 30% red + 59% green + 11% blue; weighting each channel at 33% would give the blue too much influence after conversion. This is why BluBlocker glasses make the world seem sharper. It's why I try to minimize blue in my IDE color schemes.
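
Here's a rough sketch of that conversion (assuming NumPy and an 8-bit RGB array; the coefficients are the Rec. 601 values those percentages round to):

    import numpy as np

    def rgb_to_grey(rgb):
        """Weighted greyscale: ~30% red + 59% green + 11% blue (Rec. 601)."""
        weights = np.array([0.299, 0.587, 0.114])
        return (rgb[..., :3].astype(float) @ weights).round().astype(np.uint8)

    # Naive equal weighting, for comparison -- this lets blue punch above its
    # perceptual weight:
    # grey_naive = rgb[..., :3].mean(axis=-1).round().astype(np.uint8)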

If you are designing a purely pragmatic UX that requires seeing fine details, I'd recommend a yellow-on-black color scheme with some green and little blue. The classic green/amber terminal screens of yore were ugly, but effective.


> Human eyes are much less sensitive to details in blue compared to green and red

IIRC our strength with green is why many night-vision systems use only green. It has other benefits such as not killing your night vision, but when you take the darkness of the night and remove the red and blue components, you can see better. Once again, IIRC.


Furthering this, I've also read (but can't cite off hand) that it's evolutionary.

A good chunk of the world is green, ripeness of fruits and vegetables can be determined at a distance by detecting green, and moonlight reflecting off of stuff will likely be green more than other colors, enabling the cones to do some of the night vision work besides the rods. Conversely, the only major blue things in nature tend to be the sky and flowers, neither of which provides a significant survival advantage. Interestingly, some women may actually be tetrachromats, giving them an incredible ability to differentiate reds.

Also... I can't recall the exact reason, but we have trouble focusing on blues as well (something to do with wavelengths, maybe?). If you have one nearby, go park one night near the middle of the lot of a Petsmart, which has a bright red/blue sign. Look at the sign while moving your head left and right; the blue letters will appear to move while the red ones remain stationary.


While looking for the illustration above, I also found this page http://starizona.com/acb/ccd/advtheorycolor.aspx with this illustration of "Spectral response of the dark-adapted human eye. Note the lack of red sensitivity." http://starizona.com/acb/ccd/advimages/eyeqenight.jpg

Compared to the daylight-adapted eye: http://starizona.com/acb/ccd/advimages/eyeqe.jpg

This is interesting news to me.


This caught up with me once as a young teenager; I was driving home on Halloween (totally sober) and completely failed to see a stoplight. The red just blended in with the darkness.


You're wrong.

No, you can't just split a picture into three channels and say "Hey, blue looks dark," because blue might actually be dark.

You can, however, make a picture grayscale, then turn that same grayscale picture into redscale, greenscale, and bluescale. The luminance would be exactly the same for every pixel; the only difference would be the pixel's color.
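
Here's a minimal sketch of that transformation (assuming Pillow and NumPy; "photo.jpg" is just a placeholder filename, and "redscale" here means putting the grey value into the red channel only):

    import numpy as np
    from PIL import Image

    grey = np.array(Image.open("photo.jpg").convert("L"))  # single-channel greyscale

    def tint(grey, channel):
        """Copy the grey values into one channel of an otherwise-black RGB image."""
        out = np.zeros(grey.shape + (3,), dtype=np.uint8)
        out[..., channel] = grey
        return Image.fromarray(out)

    tint(grey, 0).save("redscale.png")
    tint(grey, 1).save("greenscale.png")
    tint(grey, 2).save("bluescale.png")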

I did that, in fact, and you know what? Your eyes really do suck at blue: http://www.flickr.com/photos/jemfinch/sets/72157617048178001...

We do have a clear enough picture of how our eyes work to know that the blue receptors are far fewer than the red and green receptors. Your eyes suck at blue. My eyes suck at blue. All of our eyes suck at blue.

Every time this story comes up, someone brings up this point. That's why I did it right: so I could reply to arrogant comments like yours that assume that because an experiment is flawed, the theory it tried to prove must be wrong.


Actually, your conclusion is incorrect. While there are fewer receptors for blue, each is much more sensitive. All you demonstrated is that the sRGB color model is blue deficient. See http://en.wikipedia.org/wiki/Color_vision and http://www.ecse.rpi.edu/~schubert/Light-Emitting-Diodes-dot-... for details.

In particular, I'd like to draw your attention to the CIE 1931 chromaticity diagram in the wikipedia link. This is supposed to represent the visible spectrum that the eye can see. The triangle is the sRGB colour space, i.e. what your monitor can reproduce. Notice how little blue the triangle contains? This is why your blue image looks so dark.

From the second link, it also turns out that CIE 1931 actually underestimates blue sensitivity. The book chapter discusses a corrected version called CIE 1978. It also has a plot of the eye sensitivity to various wavelengths. It turns out that our eyes are about as good at both blue and red, but more sensitive to green and yellow.

Experimentation is difficult. There are often a lot of factors you need to consider. Also, may I ask that you be a little less confrontational in the future? It's quite unnecessary. The majority of people here have good intentions.

edit: upon further research, it turns out it's even more complicated than just the sensitivity and cone numbers. Here: http://hyperphysics.phy-astr.gsu.edu/hbase/vision/rodcone.ht... it states that we should still have less sensitivity to blue. However, we do perceive it to be the same intensity despite this. It appears that we do have difficulty determining details from blue objects, though. The reason is that most of the blue receptors are on the outer areas of the retina. It is a complex topic apparently.


Agreed. If red=255 looks brighter on your monitor than blue=255, well, that's how the monitor was designed!


I edited my post within 15 minutes of posting, because I re-read it and realized that it came across as challenging the assertion that our eyes are less sensitive to blue, a fact I wasn't sure of either way (but have since read up on). See tensor's and corysama's excellent posts below, which contain some great links.

I said plainly in my edit, "My main point is that his argument is flawed, not his assertion." No need to get snarky.


My apologies, I missed that edit (somehow, despite my posting time).


Happens to me all the time :) I have to read everything twice.


Anyone curious about how I know there is less blue in the photo should open the photograph in an image editor and inspect the histogram for each channel. If you don't understand color histograms, read my dandy article on the topic:

http://upload.bradlanders.com/mycanikon/essays/histograms/ht...

The article focuses on average luminance, but histograms are interpreted the same way for all color channels when looked at individually.
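
If you'd rather do the inspection programmatically than in an image editor, something like this works (a sketch assuming Pillow and NumPy; "photo.jpg" is a placeholder):

    import numpy as np
    from PIL import Image

    rgb = np.array(Image.open("photo.jpg").convert("RGB"))
    for i, name in enumerate(("red", "green", "blue")):
        channel = rgb[..., i]
        hist, _ = np.histogram(channel, bins=256, range=(0, 256))
        print(f"{name}: mean value {channel.mean():.1f}, peak bin {hist.argmax()}")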


Here's a little experiment that I did. http://punchagan.muse-amuse.in/blog/do-our-eyes-suck-at-blue... I tried swapping channels to cancel any asymmetric effects like the use of a Bayer filter. And still I find that the green channel always looks the most pixelated. The difference between the red and blue channels is not all that perceptible.
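
For reference, the channel swap itself only takes a couple of lines (a sketch assuming Pillow and NumPy; "photo.jpg" is a placeholder and the permutation [2, 0, 1] is just one example):

    import numpy as np
    from PIL import Image

    rgb = np.array(Image.open("photo.jpg").convert("RGB"))
    # Put blue's data in the red slot, red's in green, green's in blue.
    swapped = rgb[..., [2, 0, 1]]
    Image.fromarray(swapped).save("swapped.png")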


Lots of text there, and you certainly sound confident, but I tried an experiment, and it confirmed the claims of the article. I used Paint.NET, loaded in an image, and copied it to three layers. I adjusted each layer to be a single colour channel, then changed each of the layers to 'Additive' mode.

Pixellating the blue layer - I could perceive at most some 'colour blotching', but no real loss of 'sharpness'.

Pixellating the green layer - pixellation was easily visible.

Pixellating the red layer - the effect was somewhere in between.

You should give it a try. Here's my test Paint.NET image file with the layers all set up for you:

http://dl.dropbox.com/u/714931/bluejay.pdn
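
For anyone without Paint.NET, here's roughly the same experiment in Python (a sketch assuming Pillow and NumPy; "bluejay.jpg" is a placeholder and the 8-pixel block size is arbitrary). It pixelates one channel in place instead of using additive layers, but the composite result is the same:

    import numpy as np
    from PIL import Image

    def pixelate_channel(rgb, channel, block=8):
        """Downsample one channel into block-sized squares; leave the others untouched."""
        out = rgb.copy()
        h, w = rgb.shape[:2]
        ch = Image.fromarray(rgb[..., channel])
        small = ch.resize((w // block, h // block), Image.NEAREST)
        out[..., channel] = np.array(small.resize((w, h), Image.NEAREST))
        return out

    rgb = np.array(Image.open("bluejay.jpg").convert("RGB"))
    for i, name in enumerate(("red", "green", "blue")):
        Image.fromarray(pixelate_channel(rgb, i)).save(f"pixelated_{name}.png")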


The additive color model isn't exactly the same as the layer modes you'll see in image editing apps. This "mode" affects how the values of the current layer are applied to the layers below it. The normal mode is to replace the values below. When you switch to additive, the RGB channels from the current layer are "added" to the values of the layer below. This is entirely different from the concept of the additive color model.

There are two broad color model types: additive and subtractive. Additive color models (like RGB) start from black and "add" light to arrive at white. Subtractive color models (like CMYK) start from white and "subtract" light to arrive at black. In the RGB additive color model, we most frequently refer to the primary colors, RGB, but the secondary colors (cyan, magenta, and yellow) are equally important. The primary colors are the result of raising only one channel to full luminance while all the others are at zero. The secondary colors are produced by raising all channels to the maximum, then dropping one channel to zero. The secondary color for the blue channel is yellow.
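
To make the primary/secondary relationship concrete (a trivial sketch, plain 8-bit RGB triples):

    # Additive primaries: one channel at full value, the rest at zero.
    RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

    # Secondaries: all channels at maximum, then drop one to zero.
    CYAN    = (0, 255, 255)    # white minus red
    MAGENTA = (255, 0, 255)    # white minus green
    YELLOW  = (255, 255, 0)    # white minus blue, i.e. blue's secondary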

The consequence of this is that you can't simply pixelate the blue channel in an additive model RGB image and claim this proves a lack of ability to perceive color in the blue light spectrum, because the alteration of the primary color will inevitably affect the distribution of the secondary color, depending upon the luminance of the other channels in the region.

A better test would display a test pattern in different colors, but matching luminosity. The trouble with testing this on your computer is that your display must be calibrated. On a properly calibrated display, the display of RGB[0,255,0] and RGB[0,0,255] should have identical luminance values. Very few people have calibrated displays, and even if you do, the chances that your display is accurate throughout the color gamut for a given luminance value are even lower.


> On a properly calibrated display, the display of RGB[0,255,0] and RGB[0,0,255] should have identical luminance values.

It does, and your eyes still suck at blue.

Why do you fight so vehemently against the scientific fact that your eyes have fewer blue receptors?


I'm not arguing for or against that fact. I'm arguing that these "testing" methodologies are flawed. I'm, apparently, doing a very poor job of expressing the distinction.

Let me state it as clearly as I can:

* A good test would ensure that the luminance values for all colors matched exactly throughout the test image.

* Said test would need to be displayed using a device that is calibrated to ensure displayed luminance matches encoded luminance.

* A test that pulls color data from a source image with mixed luminance values in each channel is flawed.

* This statement makes absolutely no claim as to the human ability to perceive any of these colors.


I'm sorry, but this is all irrelevant waffle.

The issue is that the human eye is less able to distinguish detail in the blue spectrum, as the article (and a quick test) shows.

(Blimey, I just noticed your first comment got 22 votes! Apparently irrelevant waffle gets upvoted on HN, if it sounds confident)


It's relevant because the testing method is flawed. Illustrating a fact using a flawed example/method is bad science.


Sorry, no. You've really missed the point I'm afraid.

A little bit of knowledge, as they say...


How do you know the unnoticeable pixellation of the blue isn't an artifact of the way blue is displayed by your screen?


If you have an LCD screen with a normal RGBRGB pixel layout, you really can't expect problems. Then again, you can always use a magnifying glass.


So, my screen doesn't display blue in a way that pixellation is visible to the human eye, and this is somehow not an issue with the human eye how?


Does the article mention that images taken with a digital camera (with a few exceptions) sample green at only half the pixel sites, and red and blue at a quarter each?

http://en.wikipedia.org/wiki/Bayer_filter

There will be more information in the green channel because that is how the camera is built. I'm sure somewhere there is proper research that was used in developing the Bayer filter indicating the human eye is more sensitive to green, but this looks like a case of bad methodology ending up with the right conclusion through luck.


To add to your point, a typical imager pixel can only sense one color - red, green or blue (caveat: there are now some imagers that can simultaneously sense multiple colours). The green value of a non-green pixel is interpolated from the surrounding pixels capable of sensing green. Thus, only 1/3 of your RGB image is "real" - the remaining 2/3 is interpolated.
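
To put numbers on it, here's a sketch of one common Bayer tile layout (RGGB; other layouts only rearrange the same ratios):

    import numpy as np

    # One 2x2 RGGB Bayer tile: each sensor site samples exactly one colour.
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    mosaic = np.tile(tile, (4, 4))      # an 8x8 patch of the sensor
    for colour in "RGB":
        print(colour, (mosaic == colour).mean())   # R 0.25, G 0.5, B 0.25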


> This is how DVDs work: a high res green image and two low-res images, one for red, one for blue.

Not true. MPEG-2 uses the YCbCr colorspace, consisting of a high resolution Luminance signal (brightness) and a low resolution Chrominance signal (color). So in fact, all color information is subsampled, green is not treated specially.


Green is absolutely treated specially in YCbCr; you just have to understand how YCbCr relates to RGB.

ITU-R BT.601 defines YCbCr as follows:

    Y ~= 0.30 R + 0.59 G + 0.11 B
    Cb ~= -0.17 R - 0.33 G + 0.5 B
    Cr ~= 0.5 R - 0.42 G - 0.08 B
Y is given the most bandwidth, and green makes up 60% of Y. Cb and Cr are allocated substantially less bandwidth, and green still makes up a sizable chunk of the value. In total, green occupies about 2/3 of the bandwidth in YCbCr. That's pretty much the whole point of doing it: RGB spends an unnecessary amount of bandwidth on R and B.

(reference: http://en.wikipedia.org/wiki/YCbCr )
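
A quick way to see green's share, using the coefficients quoted above (a sketch with RGB as floats in [0, 1] and Cb/Cr centred on zero):

    import numpy as np

    # Approximate BT.601 RGB -> YCbCr matrix (rows: Y, Cb, Cr).
    M = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]])

    def rgb_to_ycbcr(rgb):
        return M @ np.asarray(rgb, dtype=float)

    print(rgb_to_ycbcr((0.0, 1.0, 0.0)))  # pure green: Y ~0.59, most of the luma signal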


Indeed. The author of this article is pretty badly misinformed.


This plays into something that Robin Hanson was doing with colors. Blue = Far Mode = grainy, picture unclear and far away. http://www.overcomingbias.com/2010/05/color-meanings.html


I'm wondering how much of this isn't an artifact of that specific picture; how do we know the RGB distribution in the original pic isn't skewed away from blue? That might explain why there's little information in the blue channel, right?

edit: no -> little


Low blue sensitivity may be due to the eye's lens being opaque to UV (very bright blue). This is for good reason: otherwise daylight illumination would be painful and would cause retinal damage over time.

The blue color receptor can actually capture a wider range of blues and near-UV shades. People who had cataract surgery (which replaces the defective lens with an artificial one) may see into these deeper shades of blue if they received an older style of implant.

http://www.guardian.co.uk/science/2002/may/30/medicalscience...


Trivia: DXT compression (the texture compression used in virtually all modern games) stores its color endpoints in a 5:6:5 format, giving the green channel the extra bit of precision, for exactly this reason.
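
The endpoint packing looks something like this (a sketch; a real DXT encoder also does the per-block palette math on top of it):

    def pack_rgb565(r, g, b):
        """Pack 8-bit RGB into 16 bits: 5 bits red, 6 bits green, 5 bits blue."""
        return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

    def unpack_rgb565(v):
        """Expand back to 8 bits per channel via simple bit replication."""
        r = (v >> 11) & 0x1F
        g = (v >> 5) & 0x3F
        b = v & 0x1F
        return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

    print(hex(pack_rgb565(255, 255, 255)))           # 0xffff
    print(unpack_rgb565(pack_rgb565(200, 100, 50)))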


I wouldn't have thought so given that blue is the favorite color of most people[1].

[1] http://www.joehallock.com/edu/COM498/preferences.html


Not really. Favourite colours have more to do with social and cultural factors than with your ability to finely distinguish between different shades of said colour. I strongly believe that the proper phrasing for the results should not be "blue is the favorite color of most people" but rather "blue is the favourite colour of most White North Americans/Western Europeans".

I bet if this was done in China, it would be more heavily weighted towards red, and if done in ancient Phoenicia, it would probably be purple.


I think this is why people like blue.

http://www.flickr.com/photos/lighthearted/37225804/


Maybe also because `blue' (or `red' or `green') is easy to say. More so than more detailed descriptions of colors.


That is pretty neat, although I'm not sure I understand all the outrage. "THE DVD FORUM IS STEALING OUR PIXELS!!"

If your eye doesn't notice the difference, are you being "bilked"?


I think it's the same situation as so-called "audiophiles" who spend large amounts of money on gold cables and ultra-expensive headphones. They may or may not be able to tell the difference, but the knowledge that their 10000USD headphones produce a slightly larger range of sound than the 100USD pair makes them believe that the more expensive headphones are worth it. In the same way you could probably sell some sort of... DVD re...bluer? or something and make a mint even if no one could tell the difference.


I'm not sure it would work if no one could tell the difference, and I'll cite your example of the audiophiles.

Many people are overpaying for sound equipment that would sound no different to them than stuff half the price, but the fact that there are people that can tell the difference, who talk about that difference constantly, keeps the deluded part of the market in the dark. These people will never do some sort of double-blind test to identify whether or not they can tell the difference, but since there were originally people who could recognize it, others started to follow blindly. There has to be some starting off point before the masses buy into the hype.



