It actually does work exactly as I’m describing it, unless I did a poor job above. A 20 kHz sine wave sampled at 44.1 kHz gets exactly 2 samples to describe it. At 192 kHz it gets 8, so it roughly quadruples. But the bigger difference is in the dynamic range anyway. Meanwhile, frequency is a time-based phenomenon (cycles per second). The word length determines the bit depth, or dynamic range.
Again, you’re not understanding how Nyquist works. Nyquist says that, given an analog signal sampled at X samples per second, you can reconstruct the original analog signal up to X/2 Hz. In PCM audio, you also get a time-domain resolution of (I have googled) 1/(sample rate × 2^bit depth × 2π), not the commonly cited 1/(sample rate). This means that Redbook audio has a time resolution of 1/(44100 × 2^16 × 2π), or roughly 55 picoseconds, which is far below the smallest time delay our ears can detect anyway. Therefore, time resolution in Redbook audio is not an issue.
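For what it’s worth, you can check that figure yourself with a few lines of Python (this just evaluates the formula as quoted; I’m not vouching for its derivation):

```python
import math

sample_rate = 44100   # Red Book sample rate, in Hz
bit_depth = 16        # Red Book word length, in bits

# Time resolution per the quoted formula:
# 1 / (sample rate * 2^bit_depth * 2*pi)
t_res = 1.0 / (sample_rate * 2**bit_depth * 2 * math.pi)

print(f"{t_res:.2e} s")  # ~5.5e-11 s, i.e. roughly 55 picoseconds
```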
In real life, if you want to avoid aliasing, you can only reconstruct up to a point below the Nyquist frequency. In SoX, you can easily apply a low-pass filter with 99% bandwidth, which would let you reconstruct up to ~21830 Hz, beyond the limits of the human ear. If you’re worried that’s too steep a filter, then go to 95% bandwidth. That would be ~20950 Hz, still beyond our limits.
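As a sanity check on those cutoffs (trivial arithmetic, with the bandwidth figures expressed as a fraction of the 22050 Hz Nyquist frequency, the way SoX’s rate effect treats them):

```python
nyquist = 44100 / 2  # Red Book Nyquist frequency: 22050 Hz

for bandwidth in (0.99, 0.95):  # SoX-style bandwidth fractions
    cutoff = bandwidth * nyquist
    print(f"{bandwidth:.0%} bandwidth -> cutoff at {cutoff:.1f} Hz")
    # prints 21829.5 Hz for 99%, 20947.5 Hz for 95%
```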
A higher sampling rate only really means you can reconstruct higher frequencies. The “step” model typically used to depict digital signals is misleading: the sampled signal is mathematically equivalent to the band-limited continuous signal of the original analog source. You can believe that ultrasonic frequencies are somehow important for playback, fine (I disagree, but whatever), but saying that higher sampling rates are more accurate in the time domain is false. Or, better said: they may be, but Redbook is already far more accurate than anything our brains demand.
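To make the “steps are misleading” point concrete, here’s a rough pure-Python sketch of Whittaker–Shannon (sinc) reconstruction: sample a band-limited tone at 44.1 kHz, then evaluate the reconstruction *between* two samples, exactly where the staircase picture suggests there is no information. The tone frequency and window size are arbitrary choices of mine, and the truncated sum is only an approximation of the infinite one, so expect a tiny residual error:

```python
import math

fs = 44100.0  # Red Book sample rate
f = 1000.0    # test tone, well below Nyquist
N = 4000      # finite window of samples

# "Record" the analog signal
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]

def reconstruct(t, samples, fs):
    """Whittaker-Shannon interpolation, truncated to the available window."""
    total = 0.0
    for n, x in enumerate(samples):
        arg = t * fs - n
        # sinc(arg), with the removable singularity handled explicitly
        total += x if arg == 0 else x * math.sin(math.pi * arg) / (math.pi * arg)
    return total

# Evaluate halfway between two samples, near the middle of the window
t = (N // 2 + 0.5) / fs
err = abs(reconstruct(t, samples, fs) - math.sin(2 * math.pi * f * t))
print(f"reconstruction error between samples: {err:.2e}")
# a tiny fraction of the signal amplitude, due only to the truncated window
```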
A bigger bit depth gives you a bigger dynamic range, yes. But, as I said, Redbook audio can deliver more than 96 dB if it’s properly dithered (even old TPDF helps here). And I don’t understand why any music track would need a dynamic range of 120 dB, unless the producer really hates us and our ears.
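A toy numerical sketch of why dither matters (my own example with arbitrary amplitudes and sample counts; plain TPDF dither at the quantizer, no noise shaping): a tone whose amplitude is below one quantization step vanishes entirely under plain rounding, but survives as signal-plus-noise when TPDF dither is added first.

```python
import math
import random

random.seed(0)  # deterministic dither for the demo

amp = 0.4        # tone amplitude in LSB units: below one quantization step
n = 50000
signal = [amp * math.sin(2 * math.pi * 0.01 * k) for k in range(n)]

# Plain rounding: every sample rounds to 0, the tone is gone
truncated = [round(s) for s in signal]

# TPDF dither: sum of two uniform(-0.5, 0.5) sources, added before rounding
dithered = [round(s + random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5))
            for s in signal]

def correlation(a, b):
    """Normalized correlation; 0.0 if either sequence is all zeros."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den if den else 0.0

print(correlation(signal, truncated))  # 0.0 -- the sub-LSB tone vanished
print(correlation(signal, dithered))   # clearly positive -- the tone survives
```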
Anyway, the purpose of this thread was to ask about the hi-res versions. If you don’t believe that there are real benefits, I’ll refer you to the following website:
I was merely replying to your own reply to Density, and because I’m tired of people spreading the myth that “Hi-Res” is somehow a huge improvement because “there’s more data!!!”, when it’s not an improvement at all. Yes, there is more data, but no one is asking for ultraviolet information in home video formats, yet somehow ultrasonic information is very important because… reasons. You’re saying that “Hi-Res” is actually better for playback purposes (and therefore relevant to this thread’s topic), which it is not.
The mastering behind “Hi-Res” versions can still be better, though. But you can have that problem with Redbook releases too.
Speaking of releases and remasters, personally I think only comparisons between releases that involve loudness normalization and/or double-blind tests should be accepted. We humans have a very small, unreliable “audio cache”. But the typical “audiophile” hates double-blind tests. I wonder how well the Empire tracks in the “Ultimate Digital Collection” fare against ABC’s fanmade remaster.