
The point is that because of sampling, the order of operations can matter: an 88k file -> apply an effect -> downsample to 44k can sound different from an 88k file -> downsample to 44k -> apply an effect.
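A minimal sketch of that difference, assuming numpy/scipy (the 15 kHz tone and tanh soft clip are arbitrary stand-ins for real material and a real effect):

    # Apply a non-linear effect before vs. after downsampling and compare.
    import numpy as np
    from scipy.signal import resample_poly

    fs_high = 88200
    t = np.arange(fs_high) / fs_high
    x = 0.8 * np.sin(2 * np.pi * 15000 * t)   # 15 kHz tone at 88.2 kHz

    def effect(sig):
        return np.tanh(3.0 * sig)             # stand-in non-linear effect (soft clip)

    a = resample_poly(effect(x), 1, 2)        # effect at 88.2k, then down to 44.1k
    b = effect(resample_poly(x, 1, 2))        # down to 44.1k, then effect

    print(np.max(np.abs(a - b)))              # non-zero: the two orders differ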



This is an important point. The main reason pro audio gear pushes bit depth and sample rate above 16-bit/44.1 kHz is that once you start doing the floating point math to mix and apply effects, you can end up with audible differences in multitrack recording. In that case (and I still think it's optional for all but the most demanding live performance recordings) higher sample rates can help, and to a lesser degree bit depth can give you more dynamic range.

I give that long preamble to say that once a record is done and mastered, anything above 16-bit/44.1 kHz is wasted bandwidth.
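Back-of-the-envelope for the dynamic range side of that claim (each bit is worth 20*log10(2) ≈ 6.02 dB):

    # Dynamic range of an n-bit fixed-point format, ~6.02 dB per bit.
    import math
    for bits in (16, 24):
        print(bits, round(20 * math.log10(2 ** bits), 1), "dB")  # 96.3, 144.5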


You can verify this by mixing to mono (or splitting the stereo channels), inverting the "after", and mixing it back into the "before".

If you get silence, they're perfectly identical.
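A rough sketch of that null test, assuming two equal-length numpy arrays at the same sample rate (names and the silence threshold are illustrative):

    import numpy as np

    def null_test(before, after, floor_db=-120.0):
        residual = before - after              # mix in the polarity-inverted "after"
        peak = np.max(np.abs(residual))
        level = 20 * np.log10(peak) if peak > 0 else -np.inf
        return level, level <= floor_db        # effectively silence => identical

    x = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
    print(null_test(x, x.copy()))              # (-inf, True)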


Being pedantic here, but since it's on topic: the order only matters for non-linear processing, which is most of what we want to do when mixing music, but not exclusively.
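To make the linear/non-linear split concrete, a small sketch (assuming numpy/scipy): a gain change commutes with resampling up to float rounding, while a tanh-style soft clip does not:

    import numpy as np
    from scipy.signal import resample_poly

    x = 0.3 * np.random.default_rng(0).standard_normal(88200)
    down = lambda s: resample_poly(s, 1, 2)    # 88.2 kHz -> 44.1 kHz

    gain = lambda s: 0.5 * s                   # linear
    clip = np.tanh                             # non-linear

    print(np.max(np.abs(down(gain(x)) - gain(down(x)))))  # ~0 (float rounding only)
    print(np.max(np.abs(down(clip(x)) - clip(down(x)))))  # clearly non-zero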


Where would “taking a 44k track and up-sampling it to 88k (audio DLSS?), applying effects, and then downsampling back to 44k” fall between those two points in the spectrum?
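That pipeline is essentially oversampling, which distortion/saturation plugins commonly do internally: the non-linearity generates harmonics above Nyquist, and running it at 88k lets those harmonics be filtered off on the way back down instead of folding into the audible band (though upsampling can't restore content above 22.05 kHz that the 44.1k file never had). A sketch under those assumptions (numpy/scipy; the 9 kHz tone and drive are arbitrary):

    import numpy as np
    from scipy.signal import resample_poly

    fs = 44100
    x = 0.9 * np.sin(2 * np.pi * 9000 * np.arange(fs) / fs)  # 1 s, 9 kHz tone
    effect = lambda s: np.tanh(4.0 * s)

    direct = effect(x)                                        # effect at 44.1 kHz
    ovs = resample_poly(effect(resample_poly(x, 2, 1)), 1, 2) # up, effect, down

    # The 3rd harmonic (27 kHz) folds to 17.1 kHz at 44.1 kHz; with a 1 s
    # signal each rfft bin is 1 Hz wide, so bin 17100 is the alias.
    spec_d = np.abs(np.fft.rfft(direct))
    spec_o = np.abs(np.fft.rfft(ovs))
    print(spec_d[17100], spec_o[17100])       # alias much weaker when oversampled

One caveat: the resampling filters aren't free or perfect, so oversampling trades CPU and a little passband ripple for less aliasing.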



