Guys, PCM is lossless in the audible band when you look at the frequency domain. That is a mathematical fact, and the information theory underpinning MQA hasn't changed it. Bit depth is just quantization error, which is just the noise floor, which is just dynamic range. Nothing complicated. We can hand-wave and repeat "information theory" over and over, but at the end of the day the frequency domain is a solved problem.

Where PCM is "lossy" is in the time domain. MQA calls this "temporal blurring" and points to the dreaded pre- and post-ringing on impulse response reconstructions as evidence. These artifacts introduce uncertainty about where an impulse actually occurs within the sampling interval, and about the phase of its harmonic components.

And by the way, this isn't a new idea. Minimum-phase filters, apodizing filters, and DSD have all been attempts to work around this limitation. MQA just seems to be the first to frame this time-domain uncertainty as "lossless vs. lossy" for marketing purposes.

Now, MQA claims to be lossless by "fixing" this time-domain uncertainty once and for all. This is literally part of their marketing: apparently the impulse response of MQA is better than that of air, which I'd guess is where the claims of lossless temporal resolution originate. Okay, fine, but how? MQA's answer is a combination of a non-rectangular sampling kernel and an encoding that, while proprietary, can be inferred to depart from the Fourier model in favor of something like wavelet theory. Why? Because that would let the format localize ("identify") both frequency AND time information without any uncertainty: timing could be fully localized, and amplitude detail below the resolution of the sampling interval could be recovered.

The problem, as I mentioned before, is that this technique relies on a fully proprietary MQA production chain, hardware and software, from recording to playback.
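The bit-depth-as-noise-floor point is easy to verify numerically. A minimal sketch in plain NumPy (`quantize` is my own toy helper, not any codec's actual quantizer): truncating a full-scale sine to N bits produces a measured SNR close to the textbook 6.02N + 1.76 dB, i.e. bit depth and dynamic range really are the same knob.

```python
import numpy as np

def quantize(x, bits):
    # Round to the nearest of 2**bits uniformly spaced levels in [-1, 1].
    step = 2.0 / (2 ** bits)
    return np.round(x / step) * step

fs = 48000
t = np.arange(fs) / fs
# Full-scale sine; 997 Hz is a non-integer-bin frequency, so the
# quantization error doesn't correlate neatly with the signal.
x = np.sin(2 * np.pi * 997 * t)

for bits in (8, 16, 24):
    err = quantize(x, bits) - x
    snr_db = 10 * np.log10(np.mean(x**2) / np.mean(err**2))
    print(f"{bits:2d} bits: measured SNR {snr_db:6.1f} dB, "
          f"theory 6.02*{bits}+1.76 = {6.02*bits + 1.76:6.1f} dB")
```

Nothing about this involves the time axis, which is exactly the point: the frequency-domain side of PCM is bog-standard and well understood.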
Moreover, the reduction in bit depth caused by MQA's magic-sauce folding process is supposedly offset by their special sampling method, which again implies that method has to be present in order to offset the loss in fidelity. This is all straight from the horse's mouth, not something I'm inventing. But who here thinks that's actually happening for any of the music being distributed as MQA, let alone a majority of the catalog? Even if it were, how is the consumer supposed to know? And should anyone even care? It has yet to be established whether pre- and post-ringing artifacts are audible at all.
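For reference, the ringing in question is trivial to reproduce. A rough sketch with NumPy (`windowed_sinc` is my own toy lowpass, not anyone's production reconstruction filter): a linear-phase brick-wall filter has nonzero taps before its main lobe, so a filtered impulse "rings" before its true location, which is the pre-ringing MQA's marketing fixates on.

```python
import numpy as np

def windowed_sinc(num_taps, cutoff):
    # Linear-phase FIR lowpass: an ideal sinc kernel truncated by a
    # Hann window; cutoff is in cycles/sample (0.25 = fs/4).
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * cutoff * n) * np.hanning(num_taps)
    return h / h.sum()  # normalize to unity DC gain

h = windowed_sinc(101, 0.25)
peak = np.argmax(np.abs(h))   # the tap where the impulse "lands"
pre = h[:peak]                # everything before it is pre-ringing
print(f"peak at tap {peak}; "
      f"{np.count_nonzero(np.abs(pre) > 1e-6)} taps ring BEFORE the impulse")
```

A minimum-phase version of the same magnitude response pushes all of that energy after the peak, which is exactly the trade the minimum-phase and apodizing filter crowd has been making for years; whether either kind of ringing is audible is the unanswered question.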