
Why is the standard considered to be CD quality? In that way, the article shows its age. Today you wouldn't be talking about 44.1kHz 16 bit; it would be all about 24 bit 192kHz. If you're looking at spectrum plots, CD is very much at the low end of the spectrum of what's possible. Maybe we should be considering megahertz sampling rates and 32 bits; surely we have enough bandwidth.

Why not then? Because there is a ton of science and empirical evidence that humans cannot hear the difference[1]. Good engineering is about meeting the requirements at minimal cost. If the requirement is that it sounds good to humans, and the cost is the number of bits needed to encode (and thus store and transmit) the signal, then modern lossy codecs like Opus are clearly superior to uncompressed and losslessly compressed signals, let alone higher sampling rates.
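To put numbers on it, here's a rough sketch (assuming the textbook ~20kHz upper limit of hearing and the ideal-quantizer dynamic range of ~6.02*N + 1.76 dB for N bits):

    # Nyquist limit and ideal dynamic range for the formats mentioned above.
    formats = [("CD", 44_100, 16), ("hi-res", 192_000, 24), ("megahertz", 1_000_000, 32)]
    for name, fs, bits in formats:
        print(f"{name}: Nyquist {fs / 2 / 1000:.2f} kHz, "
              f"dynamic range {6.02 * bits + 1.76:.0f} dB")
    # CD: Nyquist 22.05 kHz, dynamic range 98 dB -- already beyond the
    # ~20 kHz hearing limit, and more dynamic range than a quiet
    # listening room leaves usable.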

If your goal is something other than good engineering, for example the aesthetic satisfaction that the bits are the same as what the mastering engineer put on the CD, or for some reason caring how clean spectrum plots of artificial signals look, then the arguments may have some merit. But let's be clear on the goals.

[1]: https://people.xiph.org/~xiphmont/demo/neil-young.html




>Why is the standard considered to be CD quality?

Because it can fully reproduce everything the human ear can hear. Higher sample rates and bit depths are only useful for production or archival.


The point is to have all of the information stored in your archive.

You can compress it for listening later, but you can never add information _back into_ the file. Store it in FLAC for archival purposes.

The equivalent would be archiving works of visual art as JPEGs rather than in a lossless format.
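As a sanity check that the archive really is lossless, a minimal round-trip sketch (assuming the soundfile library; master.wav is a hypothetical CD-resolution file):

    import numpy as np
    import soundfile as sf

    # Decode the original PCM, re-encode it as FLAC, decode again,
    # and confirm the samples came back bit-identical.
    data, rate = sf.read("master.wav", dtype="int16")
    sf.write("archive.flac", data, rate)
    decoded, rate2 = sf.read("archive.flac", dtype="int16")
    assert rate == rate2 and np.array_equal(data, decoded)  # lossless round trip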


Agreed: for archival purposes we should be using lossless codecs. Not because you can hear the difference, but because it makes it easier to reason about whether the compression introduced any distortion. And we can treat the original artifact, as created by the mastering engineer, as an authoritative source of truth, even if it's an imperfect representation of what the musicians performed.


What is the distortion if no human is able to hear it?


Just to give one example, if you want to do forensic analysis on the signal based on inaudible differences. That's a valid use case for an archive that doesn't apply to consumer (even audiophile) consumption of music streams.


Forensic analysis to answer which question?


What tools were used to create the audio? For example, the exact patterns of dither-based noise shaping[1] may reveal insight into the production chain, but are by definition inaudible. Or perhaps there's an ultrasound source: something recorded near old-school CRT monitors may have a 15.75kHz tone, ordinarily outside the threshold of hearing, that shows up as a clear peak on a spectrogram.

[1]: https://www.sweetwater.com/insync/hear-effects-dithering/
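For instance, a peak like that is easy to pull out of a power spectrum. A sketch with scipy on a synthetic signal (a real analysis would run on the decoded archive samples instead):

    import numpy as np
    from scipy.signal import welch

    # One second of noise plus a faint 15.75 kHz tone, standing in for
    # the horizontal-scan whine a CRT can leak into a recording.
    fs = 44_100
    t = np.arange(fs) / fs
    x = 0.01 * np.random.randn(fs) + 0.005 * np.sin(2 * np.pi * 15_750 * t)

    # Welch power spectral density: the tone shows up as a narrow peak
    # even though many listeners would never hear it.
    f, pxx = welch(x, fs=fs, nperseg=8192)
    mask = f > 10_000
    print(f"strongest component above 10 kHz: {f[mask][np.argmax(pxx[mask])]:.0f} Hz")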


> something recorded near old-school CRT monitors may have a 15.75kHz tone, ordinarily outside the threshold of hearing

Only for some people -- the upper limit of human hearing varies from roughly 15kHz to 20kHz, depending on the person and their age. For many children and younger adults (myself included), CRT coil whine is well within the audible range, as an incredibly annoying high-pitched squeal.

This comes up in speedrunning communities sometimes -- many runners prefer to play on CRTs due to their fast response time, and streamers who use CRTs need to remember to set up a notch filter on their microphone, or else their stream may be borderline unwatchable for younger viewers and the streamer might not even realize it.
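For anyone who wants to try it, that notch is a couple of lines with scipy (a sketch; fs and mic_samples are placeholders for whatever the streaming setup actually feeds in):

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 48_000                               # typical stream audio rate
    b, a = iirnotch(w0=15_750, Q=30, fs=fs)   # ~525 Hz-wide notch at the CRT whine
    mic_samples = np.random.randn(fs)         # stand-in for one second of mic audio
    cleaned = filtfilt(b, a, mic_samples)     # zero-phase filtering, no added delay

In practice you'd set this up in the streaming software's filter chain rather than in Python, but the center frequency and Q carry over directly.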


Interesting, thanks



