5K iMac Gets 10-Bit Color (cinema5d.com)
187 points by tambourine_man on Nov 3, 2015 | 96 comments



In some ways, this is Apple pushing things forward... but in other ways, it's just them catching up. Until this update, there was no way to get 10-bit displays working with Apple products at all, even third-party monitors that were 10-bit capable. The OS simply wouldn't support 10-bit display. I know you could get it in Windows, and almost certainly in Linux with the correct voodoo.

My post-production guys wanted to stay on the Mac platform, so we dealt with 8-bit displays (color wasn't a huge part of our workflow -- but 10-bit would've been nice). But it was one of those head-smacking moments for a platform that was supposedly media-friendly.

Note, there's some confusion in the comments so far -- 10-bit is a professional display feature, and is extremely uncommon. Pro cameras and software can usually handle higher bit depths -- 16 for DSLRs, 10-12+ for pro video cameras -- but you're probably not seeing it unless you've carefully set up your computer. Those bits are still useful: they're the raw material for generating your 8-bit final image, so you don't get banding when adjusting color or exposure. And they're essential for containing a wider range of possible values, letting pro cameras represent a wider dynamic range than is possible in 8-bit systems.

And 8 bits is more typical -- it's literally baked into many file formats, like JPEG. Some crappy screens on consumer electronics can't even represent the full 8 bits per channel; 6 bits + dithering is sadly really common, even in screens that advertise themselves as 8/24-bit. Also, color depth can be reported both in bits-per-channel and total bit depth; a 10 bit-per-channel display is 30 bits of color information per RGB pixel, 10 each for red, green, and blue.
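
For anyone keeping score, the arithmetic works out to:

  8 bits/channel  x 3 (RGB) = 24-bit color -> 2^24 ≈ 16.7 million colors
  10 bits/channel x 3 (RGB) = 30-bit color -> 2^30 ≈ 1.07 billion colors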

(Some of the confusion is probably intentional; I had a hardware partner brag about their '15 bit display,' which sounded very weird to me... until I realized it was really 5 bits per channel, which is roughly bad-ATM-screen quality.)


How much software fully supports the 10-bit Eizo or NEC displays on Windows these days? Do typical GPUs now support 10-bit color, or is it still only expensive workstation GPUs?

My impression was that only a few niche applications have bothered (notably, Photoshop has had support for a few years), since 10-bit displays are so rare/expensive, but I’m not an expert.

As for Linux, as far as I know the stable version of GIMP still only supports 8 bits/channel color, so displaying at higher bit depth is right out. I guess maybe some specialized video or 3d rendering software could use a 10-bit display? I doubt typical Linux apps support it.

* * *

Edit: In doing some web sleuthing, there seem to be endless issues with compatibility. (As you’d expect from a new technology that requires upgrading every part of the pipeline for support: applications, operating system, video card drivers, video cards, display interface, and the displays themselves)

I see this on a random web forum: “I got an answer from an engineer at Adobe, why Photoshop CC cannot display 10bit/color under Windows 8, 8.1 and 10: There is no OpenGL-path with 10bit/channel like in Windows 7 any more :(”

Adobe’s troubleshooting site says “Note: 30-bit display is not functioning correctly with current drivers. We are working to address this issue as soon as possible”


Linux is a first class platform for many of the atypical apps that would benefit from higher bit depths.


I’m seeing a lot of web discussions like this: https://devtalk.nvidia.com/default/topic/771081/30-bit-depth...

Can you find an example of someone talking about using 10 bits/channel color displays on Linux in practice, and having it behave properly? What kind of work are they using it for? (Maybe my google-fu is deficient, but all I can find are people complaining about compatibility problems and Nvidia marketing materials.)


In the last few months, I've used 30-bit color on Gentoo in the context of microscopy and data visualization.

Enabling it with a modern NVidia card is dead easy. Basically, in xorg.conf, replace occurrences of 24 with 30. However, not much makes use of the extra colors. Although Qt5 will correctly output 24-bit color in 30-bit mode, Krita is the only Linux application of any kind I found that can actually display additional colors. It's easy to test - show a full screen gradient.
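
For reference, here's roughly what that xorg.conf change looks like (a minimal sketch -- the Identifier/Device names are placeholders for whatever your config already uses; the substantive part is the two 30s):

  Section "Screen"
      Identifier   "Screen0"
      Device       "Device0"
      DefaultDepth 30
      SubSection "Display"
          Depth 30
      EndSubSection
  EndSection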

No OpenGL compositors (eg kwin, compiz) support 30-bit color to any extent. Total fail, here. Tearing like it's 1998.

Notably, with Qt5, it does work to make 10-10-10-2 and 10-10-10-0 GL contexts and render to them with your own old-school GL calls, nv_path_drawing, and compat & core profile shaders. However, you do have to manage the GL context yourself - QOpenGLWidget has no 10-bit support. You can still use Qt's GL abstractions, just not QOpenGLWidget. Easiest approach is to use QWindow in a widget container (created with that QWidget.createContainersomethingorother static member function), as was commonly done with Qt 5.3 (i.e., before QOpenGLWidget).
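
For the curious, this is roughly the shape of it (my own sketch, not the exact code -- QSurfaceFormat and QWidget::createWindowContainer are real Qt 5 API, but glWindow here stands in for your own QWindow subclass that creates its QOpenGLContext and does the drawing):

  #include <QSurfaceFormat>
  #include <QWidget>
  #include <QWindow>

  // Ask for a 10-10-10-2 surface, then wrap the QWindow in a widget container.
  QWidget *makeDeepColorContainer(QWindow *glWindow, QWidget *parent)
  {
      QSurfaceFormat fmt;
      fmt.setRedBufferSize(10);
      fmt.setGreenBufferSize(10);
      fmt.setBlueBufferSize(10);
      fmt.setAlphaBufferSize(2);   // or 0 for a 10-10-10-0 context
      glWindow->setFormat(fmt);
      // The static helper referred to above:
      return QWidget::createWindowContainer(glWindow, parent);
  }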


Back in 2002 I worked on some plugins for a color grading system that supported 4K.

Internally it processed color in 12 bits per channel and I think it supported 10bit color output.

This was running Linux with NVidia GPUs, and it was totally amazing to see 4K footage being graded in real time.

That system is now called Baselight and AFAIK it is one of the best: http://www.filmlight.ltd.uk/products/baselight/overview_bl.p...


Compositing and grading for VFX/film. Most high-end VFX studios use Linux.


With what software?


Davinci Resolve is what Company 3 uses (The Martian, Spectre, Jurassic World, etc). It's available on Mac and Linux, but the big shops use the Linux version as it can be expanded, allowing for more real-time nodes.

Baselight is another major platform for color grading that runs on Linux.


I worked for a company that performed stereoscopic conversion for Hollywood films. Their primary tool was NUKE [1] running on Linux. It's also available for Mac and Windows, but because NUKE encompassed their entire workflow, the underlying OS wasn't especially important.

[1] http://www.thefoundry.co.uk/products/nuke/


Resolve, Nuke, Flame, etc


If there's one problem this thread makes me want to solve, it's the undiscoverability of the unknown. 99.9+% of software is stuff most of us have never heard of, but we don't know what to search for because we don't know how to know what we don't already know. 99% of business opportunities we might have are invisible to us because we don't know how or where to look. 99+% of the dating population is out of reach of each other because of path dependence problems -- their lives simply won't ever intersect.

If I had my own startup accelerator, tonight (and only tonight ;-) ) this would be my RFS#1. Solve the knowledge bootstrapping problem. Google clearly failed to link 30-bit to VFX.


I've been thinking about this problem on and off for several years (including a startup with an alpha version a couple of years ago).

It's especially important in learning.

How do you organize and present information so that finding the things most important/natural/easy/interesting to you isn't limited by the dissipation of such information through old-fashioned network effects?


No matter what, I think you must first build a knowledge graph. One example that was recently posted to HN is https://news.ycombinator.com/item?id=10468732. Maybe it could be crowdsourced or generated by mining Wikipedia.

Then you need some way for people to find out where they are on that knowledge graph, and a recommendation engine for where to go next. I also think it would be useful to mix in a few random topic nodes that are some number of hops away from a person's current knowledge, plus occasionally a completely random topic of the day.

But that's just the most trivial beginning to an enormous problem.


I'm glad there are others who think the same way! Like always, probably more topic-specific solutions will lead the way to a hopefully universal collection of knowledge.


One way to think of it is as a very early precursor to the civilization bootstrapping kits included on starships in Vernor Vinge's Zones of Thought novels (particularly The Children of the Sky).


More or less what I thought reading down this thread. 'New' tech in the consumer market is tech from 10+ years ago in specialized trades (4K in 2002 from an earlier comment; touchscreen on your Win8 box? Had touchscreens on retail POS systems 10 years ago[1]).

So what is around now that I don't want to wait 10 years for?

[1] slight differences may occur


Linux workstations have been the preferred platform for high-end finishing applications like NUKE, Flame and Baselight for years.

Generally it's NVIDIA hardware on Red Hat.


Krita, a KDE project competing with GIMP, supports a lot more colorspaces than GIMP. Also, next to all KDE apps support it.

8 bit? 16 bit? 10 bit? CMYK? YPbPr? XYZ? Krita has them all.


A funny note: During Longhorn (PDC 2003), some MS devs indicated that the color pipeline for everything in Windows was moving to a 128-bit pixel struct. Some floating point part, I think. Course that never shipped but it sounds cool.

I think Carmack mentioned that in some extreme-but-possible cases, 64-bit isn't enough to represent the results correctly.


I have to assume that's 128-bit precision in 3D rendering -- partly because it's Carmack, but largely because 64-bit color would be... well beyond human perceptual limits.

This StackExchange post claims ~2.4 million colors for the limit of human color vision; it cites CIE color science: http://photo.stackexchange.com/questions/10208/how-many-colo.... 64-bit color in total would produce ~2e19 different colors; if it's 64 bits per channel, it would produce some even more absurd number like ~6e57. (Besides the ridiculous numbers, I assume "64-bit" would have to mean per-channel, because 64 isn't divisible by 3.)
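
Rough arithmetic behind those figures:

  2^24  ≈ 1.7e7   colors (24-bit, i.e. 8 bits/channel)
  2^64  ≈ 1.8e19  colors (64 bits total)
  2^192 ≈ 6.3e57  colors (64 bits per channel, 3 channels)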

And yes, higher bit precision is really important for drawing 3D graphics. Look at the price premium you pay for Nvidia's Quadro cards, which can process in higher bit precision than your stock cards... they're typically 5-10x a consumer card: http://www.nvidia.com/object/quadro.html

Mostly advertised for high end CAD or VFX, where bit precision errors can result in odd display artifacts or incorrect rendering.


I think the context was that after doing multiple layers of texture mapping on a 3D object, a 128-bit color space wasn't enough to be fully accurate. I'll see if I can find what I'm most likely misremembering.

Thanks for the SE link. 2.4M can't be right, or we wouldn't be able to see banding all over the place, right? The link says that including luminosity it might be up to 100M. So 24-bit wouldn't cut it but 30 would?


Like all things perceptual, an exact answer is maddeningly difficult to pin down (individual to individual variation, subjective issues like ambient light, etc).

Even at 10-bit you could in theory see banding if your color discrimination were good enough, I think. It depends on the material... but the 10-bit displays I've seen look awfully, awfully good. (And the ability to represent more dynamic range starts to matter -- you have to have a wide scale to show off the bit depth, too. A monitor I saw at NAB this year has two backlights and can show off a lot more light... campfires glowed, headlights looked like they were bright, very real... that's the real direction for displays, I think.)


Probably 4 32-bit floats per pixel for ARGB, it's what most GPUs use internally for shader computations since DirectX 10.
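
In other words, something like this (a sketch of the layout, not any particular API's struct):

  // One pixel as four 32-bit floats: 4 x 32 = 128 bits.
  struct PixelF128 {
      float a, r, g, b;   // sizeof(PixelF128) == 16 bytes
  };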


Bragging about a "10 bit display" sounds very weird to me, having grown up with the progression from 1-bit, 4-bit, 8-bit, 16-bit, up to 24-bit color! I suppose now everyone has gotten so accustomed to 24-bit color that the newbies have forgotten its name? I'm not sure how else to account for the moderately hilarious concept of "10-bit" color as an upgrade.


It's 10 bits per each of the RGB channels, 3 x 10 = 30 bits total.


Yeah, I figured that out, it's just weird - as though an ad for a new car were bragging about its 45 MPH top speed, or an ad for a new cell phone wanted you to be impressed by its 3-hour battery life.


Why did this support take so long? I remember handling "10 bit" files on SGI in the late 90's, and while most vfx houses are on Linux these days, I thought Mac OS was useful also.

p.s. I think this was the format: https://en.wikipedia.org/wiki/Cineon


You've been able to handle "10 bit" files on Macintosh or Windows since forever. What's new here is native video card driver support for allowing 10 bits per channel (30 bpp) content to be viewed in real time on the main display.

Let's be very clear here — being able to view 30 bpp is a very, very marginal benefit for most people most of the time, even if they're working on high bit depth material. Far more important has been the substantial improvements in display gamut and accuracy.

For content creators the important thing is for this additional bit depth to be captured in the first instance and preserved during manipulation.


You’re right that if you have small pixels (200+ DPI), and especially if your content doesn’t have too much dynamic range, you can use dithering to make your content look pretty smooth on an 8-bit display.

A non-dithered 10-bit display should look better than an 8-bit dithered display though, and if you add dithering to the 10-bit display, you should be able to get a pretty darn nice view of 12 or 14-bit content. For instance, 10-bit displays should be great for looking at smooth gradients (e.g. defined with CSS) over a large area of the screen, for which 8-bit displays still often get fairly posterized.

One thing about recent displays is that they’re very bright, and have impressively dark blacks (with a smooth glossy glass surface with some kind of anti-reflective coating on it).

For instance, the Macworld UK review measured the 5K iMac with max brightness of 445 lux, with a 1160:1 contrast ratio. That’s really damn impressive, better than you’ll find in a movie theater. (It’s hard to tell exactly what their test methodology was though. I can’t tell if they measured in a typical room, or in complete darkness.)

Such a high contrast ratio lets you start to look at images with more dynamic range than you could get in a typical photo print, or an order of magnitude more than you’d get in e.g. a magazine. But if you start looking at such images, you really start pushing up against the limits of an 8-bit display, and you are forced to choose what part of the brightness range you’ll posterize. A 10-bit display is very helpful for those types of images.


Erm, whoops. I meant 445 nits.


Is it very marginal? Shouldn't it reduce banding by a factor of 4 or so? I notice banding a lot on all sorts of content.


The banding is mostly caused by color compression rather than device support. 99.99% of the content one usually sees is compressed, and many times it's a second compression over something already compressed (I don't know any "uncompressed Youtube" or "raw Instagram").


It's possible to notice banding with 8bpc if you look at a color gradient, especially if it wasn't dithered. But if you add proper dithering (over all 3 channels) and you're looking at a natural image (yeah, not a 256-color web-optimized one)... I don't know. Dithering plus higher resolution has pretty much the same effect as a higher bit depth.
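
A toy illustration of that kind of dithering step (my own sketch, not anything from the article -- just quantizing a 10-bit value down to 8 bits with a little noise added before rounding, so the error isn't spatially correlated):

  #include <algorithm>
  #include <cstdint>
  #include <random>

  uint8_t dither_10_to_8(uint16_t v10, std::mt19937 &rng)
  {
      std::uniform_real_distribution<float> noise(-0.5f, 0.5f);
      float v8 = v10 * (255.0f / 1023.0f) + noise(rng);   // scale 0..1023 -> 0..255, add dither
      return static_cast<uint8_t>(std::clamp(v8 + 0.5f, 0.0f, 255.0f));
  }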


I have a few guesses.

I think video DACs used to ship with lookup tables, so they'd be 10-bit internally, but you could only feed them 8-bit data. SGI could always throw more money at things, they had expensive hardware.

The DVI standard only deals with 8-bit samples. HDMI and DisplayPort support 10-bit samples, but they're newer. Most desktops used DVI for digital video until not that long ago.

Software support would be a lot of work back in the day when applications drew to a single framebuffer. These days, you usually have the applications draw to dedicated buffers, and the compositor draws the screen. This makes it easier to mix 8-bit buffers with buffers of other depths.

Typically, you'll want to do 10-bit with hardware support, because it's just so much faster. Software is more geared towards 8-bit or 16-bit, but GPUs will support 10-10-10-2 in a single 32-bit word. They'll also be much better at working with 16-16-16-16 or 8-8-8-8, and mixing all of them together.
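
For illustration, the 10-10-10-2 packing looks something like this (one common convention -- the actual channel order differs between GPU formats, e.g. RGB10A2 vs BGR10A2):

  #include <cstdint>

  // Three 10-bit channels plus a 2-bit alpha in one 32-bit word.
  uint32_t pack_rgb10a2(uint32_t r, uint32_t g, uint32_t b, uint32_t a)
  {
      return ((a & 0x3u)   << 30) |
             ((b & 0x3FFu) << 20) |
             ((g & 0x3FFu) << 10) |
              (r & 0x3FFu);
  }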

Then there's all the software support.

(Note: I remember editing 16-bit files in Photoshop ages ago, but they'd be displayed at 8-bit, and the main difference was reduced banding after extended processing.)


From a comment in the article:

  Hey guys, it appears it’s not just for the 5K iMac, 
  I have tested this on 2013 Mac Pro with an Eizo CS230
  monitor and can confirm that you can get 10bit output.
http://www.lsdigi.com/2015/10/el-capitan-10bit-display-suppo...


Yeah - and it also works with last year's 5K imac. (source: I have one)


I also have a 2014 5K imac with El Cap, and my system report shows 30-bit / ARGB2101010

However, when I view the test image from the article's comments (http://www.imagescience.com.au/kb/getattachment.php?data=MTU...) I still see banding. Hmm.


What are you viewing it in? It opened in Pixelmator for me where it was showing banding. Then I tried again with preview where it was ok. So the app you use to open the file might not have proper 10bit support


Preview. The banding is pretty odd (the bands have gradients _within_ them, it's weird), and is also displaying artifacts from OS X's default window-transparency settings. Turning those "down" helps, a bit, but there's still banding. I don't see an option to turn it off completely. So I'm not sure where to point the finger.


On the downside, it appears that the newest iMacs still won't do Target Display Mode. This is a bummer -- I like the iMac but hate the idea of having this big beautiful display on your desk that you can't use to give your laptop more screen real estate when you want to.


Anyone care to explain in brief to someone who, reading "10 bit color", still thinks "hey, weren't we at 32-bit color already?"...


We are currently at 24-bit colour - 8 bits for each channel. 32-bit is used for ARGB, with an alpha channel, so it's still 8 bits per channel. This, however, moves us to 30-bit colour, which is nice.


What I don't understand is why 8-bit color isn't enough. Back when 8 bits per channel was introduced (yes, I'm old), we were told that it provided way more colors than the human eye could discern - 16.7 million, as opposed to the roughly 10 million the human eye can distinguish.

What changed?

Is 10 bit color smoke and mirrors or is it something that only a smaller fraction of the population can appreciate?


For seeing pictures and watching videos, 8 bit is good.

It's when you start manipulating images that it doesn't work out. For example, you have a nice sunset picture but it's dark and you can't see the people in front. As a result, you try to increase saturation, contrast and exposure. The people's faces are still dark so you try to change the exposure more aggressively. That's when the picture breaks down. The sunset sky becomes posterized, maybe the skin blows out to white, etc. If you use an 8-bit display you wouldn't know if it's because of the effects or because of the 8-bit monitor. With a 10-bit display, if it looks like crap, at least you know it's from the picture you took.

This might look like an extreme example but it happens often for photographers and videographers. It can also happen for consumers if they look at a picture with lots of minimal variations of the same color in the same frame. A good example is a picture of a white sky: it can appear posterized in a few parts.
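
A toy sketch of why this happens (my own illustration, not anything from a real tool): double the "exposure" directly on 8-bit values and every other output code goes unused, which is exactly the posterization described above; do the same math on the camera's higher-bit-depth data and only quantize at the end, and the in-between values survive.

  #include <algorithm>
  #include <cstdint>

  // 8-bit source: outputs only even values 0, 2, 4, ... -> visible banding.
  uint8_t boost_8bit(uint8_t v)
  {
      return static_cast<uint8_t>(std::min(255, v * 2));
  }

  // 14-bit source (0..16383): same 2x boost, quantized to 8 bits last.
  uint8_t boost_14bit(uint16_t v14)
  {
      int boosted = std::min(16383, v14 * 2);
      return static_cast<uint8_t>(boosted >> 6);
  }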


It's not smoke and mirrors. It's akin to the audio world, where the final product (a CD or MP3) is mastered in 16-bit audio but is mixed and edited at a higher bit depth, often 32 bit.

In the color world, most of one's job is about mapping what a sensor captures (most modern DSLRs are 12-16 bits per channel) onto the reduced palette offered by 8-bit color. Actually, most cheap displays are 6-bit color with some fancy techniques to help fool our eyes.

Research banding and dithering with respect to color to get a better idea of the subject.


We're gradually moving toward displays with wider color gamut and dynamic range for brightness. We need more bits to cover more area without reducing the precision with which we represent the more common subset.

Also, 10-bit grey-scale is really important in the medical imaging field. You don't want your xrays compressed down to just 256 levels.


The funny thing is that typical (old) LCD monitors have greyscale banding. If you think of it in HSL terms, on a typical monitor you effectively get 8-8-6. Monitor firmware usually tries to give the illusion of an effective 8 bits on the luminosity channel, making small colour changes to soften the banding. Interestingly, old CRT monitors didn't have this problem, since they didn't do any image processing beyond playing with hue/brightness/contrast (which could be achieved with a few op-amps and a few pots).

This is the case with my old Samsung monitor, though its quirks actually help a lot for spotting bad smoothing on 3D meshes, thanks to the weird banding/posterization when you look at it from extreme angles.


To add to this, with 10-bits per channel, you could essentially have a 40-bit desktop in ARGB.


Would that mean the RGB values change in the future?

It would go from 0-F to 0-K wouldn't it?

so... #KKK = white? ;(


Hilarious, but no, that's not how it's going to work

You get a number in front of ff from 0 to 3, so 3ff is the maximum instead of ff


A lot of software already use normalized RGB.

rgb(1.0, 1.0, 1.0) == white


And the idea is that you specify as many decimals as you need there?


Yes but that's an interface problem.

Of course you can't provide more decimals than the number of bits used by the float allows.

EDIT:

  0.999 * 255 (8 bit) = 255
  0.9999 * 255 (8 bit) = 255
  0.999 * 4095 (12 bit) = 4091
  0.9999 * 4095 (12 bit) = 4095
So for lower color depths the number of decimals isn't that important.


Look up CSS Color Module Level 4. It supports non-integer arguments for rgb() and rgba(), which supposedly solves the problem, though in a somewhat ugly way, because the total range is still 0..255.


And why, why, why can't I buy an external Apple monitor for my MacBook to get that? I need to buy a whole Mac to get it :(


A cinema5D reader reported that he got 10 bit on a Mac Pro with D500 graphics and an Eizo CS230 monitor. So it probably depends on your Mac and monitor, but there's a good chance you can get 10-bit output from your MacBook.


Give it some time, and you’ll probably need a new Macbook to get an Apple retina display working.

Support for Thunderbolt 3 (which should have enough bandwidth to drive a 5K display at 60 Hz) is coming with Intel Skylake chips, which should be arriving next year.

http://blogs.intel.com/technology/2015/06/thunderbolt3/


Thunderbolt 3 uses the USB-C connector. That explains why Apple didn't ship USB-C on their new iMacs.


If you have the new "MacBook", it can only drive 4K anyway. You'll likely get 10 bit color with a cheap Dell P2415Q display or similar.


limitations of Thunderbolt 2


Because Apple still isn't selling 5k monitors. I imagine it will happen eventually.


I think it's misleading that the title says "10-bit" instead of "30-bit".

8-bit graphics is normally 256 different colors (at least for DOS era programming). 24-bit graphics is 8-bits per RGB channel.


That's a great step, but isn't it still not ideal because at 5k resolution, you could still get banding on a continuous gradient covering the screen without dithering?


Theoretically, yes — 10 bpp increases the number of distinct shades from 256 per channel to 1024. So with a 5K display at 10 bpp, each shade would be repeated for five physical pixels.

In reality, no — the differences in brightness between each shade are going to be too subtle to notice.

Most egregious banding comes from 8-bit source material that has been manipulated or "colour managed" into an 8-bit output product. If the source material were 10+ bits, banding would be far less noticeable, even if the final output is still 8 bits.


I find that most significant banding comes from a large gradient that spans between two very similar shades. (E.g., a full screen gradient that goes from grey to slightly-darker-grey).

I fear that 10 bpp would hide the banding from you, so that when your users see it on their 8 bpp displays they notice the banding that you didn't see or get a chance to dither out with noise.


I notice this a lot with web media. Designers have really nice setups, so they don't have any problem designing sites with tiny thin gray text on gray backgrounds with little gray icons etc, not realizing that it looks like indecipherable mush on other people's hardware.


My system info says 32-Bit Color. Is this incorrect? http://imgur.com/PAk8K8e


ARGB8888 = 8 bits each of alpha, red, green, blue. If you had 10 bits per channel it would say ARGB2101010 http://i.imgur.com/ds1t6HV.png


This is really confusing.

ARGB8888 and it says Pixel depth: 32-bit Color

ARGB2101010 and it says Pixel depth: 30-bit Color


I'm waiting for the first real 32-bit colour monitor, the one that becomes transparent when alpha is set to zero.


So you'd actually see through the monitor to the internals or out the other side? That would be interesting :)


I wonder why it's not ARGB10101010 for 10-bit RGB + Alpha

Surely it's not actually 2-bit Alpha in ARGB2101010


The alpha part of the framebuffer isn't actually used because monitors don't support an alpha channel, it's just convenient to pad everything out to 32 bits anyway.


Interesting. Do APIs support 10- or 12-bit colors? Is it limited to "low level" rendering, similar to when you render a region of the screen using a 3D hw device, while the rest of the screen is being drawn at 8bpp using the graphics APIs of the OS?

I guess my question is: Has OS X always had graphics APIs that use floats instead of RGB bytes for everything?


At least iOS uses CGFloats for argb components in its UIColor class:

https://developer.apple.com/library/ios/documentation/UIKit/...:

Going beyond 8bit would be awesome; if you have large gradients and don't use dither, you get really heavy banding. Imagine a gradient on a web page background. If your screen is 1024 pixels high, that's 4 rows of pixels of the same color even if your gradient is going from 0 to 255 on a channel. If your range is smaller (dark gray to lighter gray), it gets even worse. Even older APIs could benefit from this; for example a CSS gradient specifying #rrggbb...#rrggbb with 8bits/channel could be rendered with more than 8 bits per channel to avoid banding.


As much as I dislike Apple, I do appreciate when they push things forward. They've done the opposite for laptop design (bad keyboards, hot chassis) but Retina has probably contributed a bit to other OEMs shipping not-totally-crap panels. Maybe this will push that a bit more?

What I'd pay for an X series ThinkPad with a 10-bit "Retina" display...


I also love Apple, but it's a bit of a stretch to call this "pushing things forward", when 10-bit color has been supported on Windows since Windows 7.


I thought support was sorta so-so, requiring special apps + driver support. Plus I've not seen any popular PC products shipping with it. If Apple makes this part of their flagship, it might help. Or maybe PC vendors will keep shipping low-res, ugly ass displays.


Kinda like support on Apple now - "only with certain models, certain versions of the OS, and only in two apps". I realize this will change, but it's not much different.


Have any of these devices included 10-bit displays? That's the interesting part.


There's been 10-bit support in AMD graphics cards since at least 2008. Dell, NEC, Eizo and others have shipped monitors with support. Combine the two and off you go. So, yes, PCs have been using 10-bit color for some time.


The "original" article is here, though in German: http://www.heise.de/mac-and-i/meldung/iMacs-lernen-10-Bit-Fa...


I've been hoping for a push to 10-bit monitors for a while. The first step would be for Nvidia to unlock 10-bit output on their GeForce series, and then for monitor manufacturers to lower prices / introduce more 10-bit panels... not to mention standardizing on a port for 10 bits.


Well, I know the first thing I'm checking when I get to work in the morning.


10-bit color. Retina-level DPI. 120 Hz refresh rate. External display. Pick... two?


10 bit color, 5400rpm laptop hard drive. 2 steps forward right?


Wait, seriously? Dude.


Sorry, I was wrong - the 5400 rpm laptop drive is in the 21.5" 4K iMac.


When you need that extra push over the cliff...


tldr; too much mac universe; I'm assuming 10 bit means 10 bit per channel instead of 8, so what the rest of the world would call perhaps 30 bit over 24 bit?


10-bit color and 8-bit color are common idioms in the video industry to refer to 30-bit pixels and 24-bit pixels.

It makes more sense in video, where one often uses the YCrCb color space and the chroma components (Cr, Cb) are sampled at half the rate of the luma component. In that case, 10 bits per component corresponds to 20 bits per pixel on average.
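
Assuming 4:2:2 chroma subsampling (which is the usual case being described), the per-pixel arithmetic is:

  Y: 10 bits  +  Cb: 10/2 bits  +  Cr: 10/2 bits  =  20 bits per pixel on average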


Sure looks like a lot of non-Apple uses of this term to me:

https://www.google.com/search?q=10-bit%20color%20-apple


In the article it shows a screenshot where Apple themselves also call it 30 bit.

And the text for the image is as follows:

    The graphics driver enables 30 bit pixel depth. 10 bit for each RGB color. Image courtesy Mac & I magazine.


I am wondering why you are getting downvoted when what you said about 30 bit is precisely correct; 10 bit means 10 bits per color plane, which has traditionally been called 30-bit color, as per AMD[1] and NVidia[2], which have supported it since 2006.

[1] (PDF) https://www.amd.com/Documents/10-Bit.pdf [2] (PDF) http://www.nvidia.ca/docs/IO/40049/TB-04701-001_v02_new.pdf


He’s getting downvoted because of the snark. If he read the article, he’d see that even Apple themselves refer to it as 30-bit:

https://www.cinema5d.com/wp-content/uploads/2015/10/iMac-10-...

http://i1.wp.com/www.LSdigi.com/wp-content/uploads/2015/10/S...



