True, the surround channel delay is supposed to be implemented by playback hardware only. And it is only supposed to happen during upmixing, not when playing 5.1 mixes.
In this case, the ‘hardware’ is the Dolby Media Decoder application that created the upmix.
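(For anyone wondering what "applying the delay in software" amounts to, here is a rough sketch, emphatically not the actual Dolby Media Decoder implementation. The 10 ms delay and -3 dB trim are illustrative defaults only, and the function name is made up:)

```python
import numpy as np

def delay_and_trim(surround, sample_rate, delay_ms=10.0, trim_db=-3.0):
    """Delay and attenuate one surround channel before it is muxed
    into the 5.1 file, the way a decoder would during an upmix.

    delay_ms and trim_db are placeholders; the real values depend on
    the decoder settings and the listening-room setup.
    """
    delay_samples = int(round(sample_rate * delay_ms / 1000.0))
    gain = 10.0 ** (trim_db / 20.0)  # -3 dB is roughly 0.708 linear
    # Prepend silence to shift the channel later in time, then scale.
    return np.concatenate([np.zeros(delay_samples), surround]) * gain
```

In practice you would also pad the other five channels at the end (or truncate the surrounds) so all six stay the same length.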
Ah, I believe I understand now. The receiver applies the delay only when it performs the upmix itself, so software that bakes the upmix into a 5.1 file has to apply the delay instead. What about the -3dB attenuation, though? If that is "baked in" to a 5.1 mix, does setting the surrounds to -3dB on the receiver have an effect only when upmixing?
(The LFE channel is, of course, the only reason for bothering to make it 5.1 in the first place; if not for that, I would have distributed my edits in stereo and just let people upmix them in their receivers like they would anything else.)
This statement surprises me a bit. If true, why not make a 2.1 track instead? I thought the process you use gave better results than a standard upmix performed by a receiver. Thanks for all the explanation, by the way!