Distance from the sound source, yes. Phase correction would be necessary as part of any compensation curve: magnitude shifts impose their own effect on the phase, and consequently on the group delay. Since you brought it up, I don't think group delay can meaningfully be defined beyond the envelope of the waveform, that is, beyond how the spectrum is superposed. If you assert group delay as a strict measure of information transfer, that runs into conflicts with causality.

CSDs are just STFTs with a rectangular window function applied to the impulse response. Group delay shifts the envelope of the lower-frequency content forward in time, and this includes both the onset and the end of the tail.

By free field, I meant facing the speaker directly; that is how the standards normally define the free-field condition, and it isn't much like ordinary two-channel listening. What's not clear to me is how the PRTF is being used to incorporate the directivity of the ear. If that were all that was needed, the amplitude shifts could simply be encoded into the signal. In a room, however, the early reflections scatter over the ear from different angles, yielding additional cues. Given the size of headphones, I don't find it plausible that the time intervals involved let the first arrival be easily isolated from the very-short-delay reflections.
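The magnitude-to-phase coupling is easy to demonstrate for a minimum-phase system. Here's a small sketch (Python with scipy; the 2nd-order Butterworth low-pass at 1 kHz is just an arbitrary illustration I picked, not anything from this discussion): the filter is specified purely by its magnitude target, yet that choice alone fixes the phase and hence the group delay.

```python
import numpy as np
from scipy.signal import butter, group_delay

fs = 48000
# 2nd-order Butterworth low-pass at 1 kHz: specified only by magnitude,
# but since the design is minimum-phase, the phase response (and so the
# group delay) follows directly from that magnitude shaping.
b, a = butter(2, 1000, fs=fs)
w, gd = group_delay((b, a), w=2048, fs=fs)

# Group delay (in samples) peaks around the corner frequency, so the
# envelope of low-frequency content is delayed relative to the highs.
gd_1k = gd[np.argmin(np.abs(w - 1000))]
print(f"group delay near 1 kHz: {gd_1k:.1f} samples")
```

For this filter the delay near the corner comes out around ten samples at 48 kHz, falling off well above it, which is the sense in which a pure magnitude shift "imposes" a group delay.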
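On the CSD point, a minimal sketch of what I mean, treating the CSD as nothing more than an STFT with a rectangular window slid along the impulse response (the function name, parameters, and toy decaying-mode input are mine, not from any measurement package):

```python
import numpy as np

def csd(ir, fs, n_fft=1024, step=256):
    """Cumulative spectral decay as a plain STFT: slide a rectangular
    window along the impulse response and FFT each segment."""
    times, slices = [], []
    for start in range(0, len(ir) - n_fft + 1, step):
        seg = ir[start:start + n_fft]            # rectangular window
        mag = 20 * np.log10(np.abs(np.fft.rfft(seg)) + 1e-12)
        times.append(start / fs)
        slices.append(mag)
    return np.array(times), np.array(slices)

# Toy impulse response: a 3 kHz mode decaying with a 10 ms time constant.
fs = 48000
t = np.arange(4800) / fs
ir = np.exp(-t / 0.01) * np.sin(2 * np.pi * 3000 * t)
times, slices = csd(ir, fs)
# The 3 kHz bin (3000 / fs * n_fft = bin 64) decays across successive
# slices, which is exactly the waterfall a CSD plot displays.
```

Each later slice sees less of the stored energy, so the level at the mode's frequency falls off slice by slice; stacking the slices over `times` gives the familiar waterfall.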