Pre and Post Effin Ringing and shit like that

Discussion in 'Blind Testing and Psychoacoustics' started by ultrabike, Aug 17, 2016.

  1. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    89,777
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    Oh lordy lordy. Do you actually know WTF you are talking about? Do you know what passband means? I could explain this to you (because other readers will benefit), but at this point, I really do not want to. HINT: frequency domain.

    Let's just stop while you are behind. You guys are bordering on magickal / alchemical beliefs masquerading as science because of the use of cool words like Nyquist and reconstruction. I don't know where the heck you guys learned or made up your stuff, but it most certainly is not grounded in the math.
     
    Last edited: Dec 11, 2018
  2. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    89,777
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    I suspect noise. Like -160 dB of noise. It's not going to be patterned into THD or IMD. Would be an interesting exercise to run the calculations.
     
  3. MRC01

    MRC01 New

    Joined:
    Dec 10, 2018
    Likes Received:
    21
    Trophy Points:
    3
    Location:
    Earth
    Let's assume the passband is 20 Hz - 20 kHz. The stopband or cutoff is Fs/2, i.e. Nyquist: 22,050 Hz, or 176.4 kHz if it's 8x oversampled. The transition band is then 20 kHz to either 22,050 Hz, which is only log2(22050/20000) = 0.14 octaves wide, or to 176.4 kHz, which is log2(176400/20000) = 3.14 octaves wide.

    True, but that's not what I meant. We actually have 3 waves:
    1. The output from the microphone.
    2. Wave (1) after applying an anti-aliasing filter, necessary for AD conversion.
    3. The wave reconstructed from the digital samples.
    Distortion can refer to comparing any 2 of these waves.

    As I understand it, the W-S theorem says that (2) and (3) can theoretically be identical. But it doesn't say that (1) and (2) can be identical. And that's what really matters: how closely does the wave reconstructed from the digital samples match the mic feed? It cannot match exactly, because whatever antialiasing filter you apply will have some passband distortion. It could let some higher-frequency components leak through and alias, or, in squashing them, shift phase or amplitude in the passband.

    I'm not saying that the antialiased mic feed differs from the raw feed simply because the raw feed has frequencies above Fs/2. That's trivially true. My point is that there's no perfect antialias filter; they all induce some passband distortion. Put differently: compute the DFT of waves (1) and (2). Each DFT is a list of "components" in the frequency domain, each having a frequency, amplitude and phase. Of course the DFT for (1) has frequencies outside the passband. Ignore them. If you compared only the remaining components (those with frequencies in the passband), they would not be identical. Some would have slightly different frequency, phase or amplitude.
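
    To put a rough number on that (a minimal sketch of my own in Python/SciPy, not anything posted in this thread; the 97-tap windowed-sinc FIR is just a stand-in for a real AA filter): measure how far a decent 20 kHz lowpass strays from unity gain inside the passband.

    Code:
    import numpy as np
    from scipy.signal import firwin, freqz

    fs = 44100
    h = firwin(97, 20000, fs=fs)             # linear-phase lowpass, cutoff at 20 kHz
    f, resp = freqz(h, worN=8192, fs=fs)     # frequency response on a fine grid

    passband = (f >= 20) & (f <= 18000)      # stay clear of the transition edge
    ripple_db = 20 * np.log10(np.abs(resp[passband]))
    print("passband gain deviation: %.4f dB to %.4f dB"
          % (ripple_db.min(), ripple_db.max()))
    # Phase is exactly linear here (symmetric FIR); an analog or minimum-phase
    # AA filter would also bend phase inside the passband.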
     
  4. ultrabike

    ultrabike Measurbator - Admin

    Staff Member Pyrate MZR
    Joined:
    Sep 25, 2015
    Likes Received:
    8,960
    Trophy Points:
    113
    Location:
    Irvine CA
    The stopband is not the cutoff. The stopband and the transition band are ranges of frequencies, not single frequencies.

    And if you have Fs/2 = 176.4 kHz, that better not be the cutoff.

    This is nonsense.

    WTF!!! All wrongness aside, let's humor your claim that (1) and (2) can't be identical from an oversimplified "theory" perspective. Bullshit.

    All non-idealities aside (and there are plenty), a mic is bandlimited, and therefore I could pick a sufficiently high sampling rate to capture it. In that case, you can think of the anti-aliasing filter as a means to remove noise, since noise will also alias and reduce your SNR.

    Furthermore, aliasing will not shift phase or amplitude or all that other bullshit. It will "map" frequencies above Fs/2 down into the passband in a predictable and periodic way. It's non-linear distortion, adding out-of-band components in-band. And who the f**k cares about ultrasonic shit north of 20 kHz? Why can't I define my passband cutoff there?
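
    To illustrate that mapping (a tiny sketch of my own, not ultrabike's; the frequencies and sample count are arbitrary): sampled at 44.1 kHz, a 30 kHz tone lands on exactly the same sample values as its 14.1 kHz image.

    Code:
    import numpy as np

    fs = 44100.0
    n = np.arange(64)                          # a handful of sample instants

    f_high = 30000.0                           # above Fs/2 = 22050 Hz
    f_image = fs - f_high                      # 14100 Hz, its in-band image

    high = np.cos(2 * np.pi * f_high * n / fs)
    image = np.cos(2 * np.pi * f_image * n / fs)

    print(np.allclose(high, image))            # True: the sampler can't tell them apart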

    As far as your other discussion about the DFT, maybe run something through MATLAB. You are not making sense.

    Yes, shit can be slightly different after a filtering operation. But you have to qualify that. Is it like -160 dB of who-the-hell-cares-ness?
     
    Last edited: Dec 11, 2018
  5. ultrabike

    ultrabike Measurbator - Admin

    Staff Member Pyrate MZR
    Joined:
    Sep 25, 2015
    Likes Received:
    8,960
    Trophy Points:
    113
    Location:
    Irvine CA
    LOL!

    I'm re-reading my latest posts, and I read them as "ultrabike lost his shit".

    Sorry guys. I'll try to do better.

    I do stand by what I said though: linear phase > minimum phase in terms of linear distortion.
     
    Last edited: Dec 12, 2018
  6. MRC01

    MRC01 New

    Joined:
    Dec 10, 2018
    Likes Received:
    21
    Trophy Points:
    3
    Location:
    Earth
    This debate motivated me to get out my pencil & spreadsheet and actually implement the W-S formula. And you're right: the impulse ripple frequency of sinc(t) is outside the passband. In fact, that ripple frequency is Fs/2, the Nyquist frequency, which is at the filter cutoff. It's in the output, but extremely attenuated, and its frequency is above the threshold of hearing. I assumed it was in the passband because it's in the output which is bandwidth limited, but I forgot the output also includes the transition band up to the cutoff frequency.

    The W-S theorem says sinc(t) is the functional Lego brick from which to build signals. It plants one centered at every sampling point, scales it by that sample's amplitude, then snaps them all together via superposition. I constructed some signals in a spreadsheet following the W-S formula and it's mathematically beautiful how it works. It also seems this is where the Gibbs effect originates: in the superposition of all these overlapping sinc functions, which is cool to be able to construct yourself. So the pre-ring ripple does exist, but it rings at Fs/2 and should be inaudible. Also, oversampling (or increasing bandwidth) increases Fs, which pulls that ripple frequency octaves above audibility.
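
    For anyone who would rather script it than spreadsheet it, here is a minimal sketch of the same construction (my own, in Python; the sample rate and window sizes are arbitrary): one sinc per sample, scaled and summed per the W-S formula x(t) = sum_n x[n]·sinc((t - nT)/T), evaluated on a fine grid around a sampled unit step so the pre/post ripple shows up.

    Code:
    import numpy as np

    fs = 44100.0                               # sample rate, T = 1/fs
    n = np.arange(-200, 200)                   # sample indices around the step
    x = (n >= 0).astype(float)                 # unit step sequence
    t = np.linspace(-10 / fs, 10 / fs, 2001)   # fine time grid around the edge

    # One sinc "Lego brick" per sample, scaled by that sample, then superposed.
    # np.sinc(u) = sin(pi*u)/(pi*u), so sinc((t - n*T)/T) = np.sinc(t*fs - n).
    recon = np.array([np.sum(x * np.sinc(ti * fs - n)) for ti in t])

    # The ripple before and after the edge is the Gibbs overshoot; it oscillates
    # with a period of two samples, i.e. at Fs/2, exactly as described above.
    print("max overshoot above 1.0:", recon.max() - 1.0)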

    Here are some snapshots from the spreadsheet.
    Impulse response: [image]
    Step response: [image]
    However, part of my original point still remains, which I will put into a separate post for clarity.
     
  7. MRC01

    MRC01 New

    Joined:
    Dec 10, 2018
    Likes Received:
    21
    Trophy Points:
    3
    Location:
    Earth
    I agree with you there. I never argued that minimum phase was better than linear phase. In fact, I said earlier that minimum phase is a cure that is worse than the disease.

    Back to the 3 waves I mentioned earlier:
    1. The original analog signal before AD conversion (mic feed)
    2. Wave (1) after AA filter applied before sampling
    3. The wave constructed from the samples

    Distortion can enter the picture in at least 2 ways, both related to the fact that AA filters aren't perfect. I mentioned one earlier: the AA filter applied during A->D always causes some passband distortion, in two possible ways. (A) It might let a small amount of energy at frequencies above Nyquist leak through, or (B) it might shift amplitudes or phases in the passband. There is no perfect AA filter, so some combination of (A) or (B) happens, however small.

    I described case (B) earlier and how it can make signals (1) and (2) different. Since then I thought of a way signals (2) and (3) can be different without violating the W-S theorem. If you follow the theorem, wave (3) is a perfect reconstruction of the bandwidth-limited signal described by the sampling points (at least for all practical purposes, summing each sinc(t) pulse over a wide enough range of surrounding samples). But read the theorem's caveat carefully: "bandwidth limited". In case (A) above, wave (2) is not that bandwidth-limited signal! If some energy above Nyquist leaks in, it gets aliased into passband frequencies, changing some sampling points (however slightly). So wave (3), which we reconstruct according to the theorem, is slightly different from wave (2). This doesn't violate W-S, because wave (3) is perfectly constructed; wave (2) simply didn't meet the conditions of the theorem.

    All that said, while my point is that AA filters aren't perfect, I believe that well engineered filters can push this distortion to negligible levels. What is far more important is microphone choice & placement, the room used for recording, position of musicians in the room, etc.
     
    Last edited: Dec 12, 2018
  8. MRC01

    MRC01 New

    Joined:
    Dec 10, 2018
    Likes Received:
    21
    Trophy Points:
    3
    Location:
    Earth
    PS: you were wondering earlier how many samples the DAC must consider before the influence of the decaying ripple from distant sample points on the current point drops below the noise floor. The spreadsheet makes that easy to compute, so here's your answer:

    1 second of redbook CD is 44,100 samples. That's 22,050 on each side of the current sample. The amplitude of the decaying ripple from those distant sinc(t) sample points (each 22,050 samples away) is at -97 dB, which is below the noise floor of 16 bit audio.

    Put differently: consider the impulse sinc(t) response where sample 0 amplitude is 1.0. The wave ripples decay as you move away. The peak amplitude of the ripple 22,050 samples later has magnitude 1.44e-5 which is 97 dB below 1.0.

    This is a worst-case scenario assuming max amplitude of all samples. For an actual musical signal you don't need to go so far.

    Such a reconstruction filter would consider 44,100 samples when constructing each point in the wave. I have no idea whether in practical applications that is high or low. But I think the C code to do that could run in microseconds so it seems feasible. And it seems quite a coincidence that since redbook is 44.1 / 16 bit, exactly 1 second of data puts the residual into the noise floor.

    PS: that sinc(t) ripple noise doesn't decay down to -144 dB (24-bit audio) until 5M samples! That's almost 2 minutes of CD audio - way too much to buffer or compute. Fortunately, it doesn't need to be anywhere near that extreme because noise from other sources is at much higher levels.
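
    If anyone wants to sanity-check those numbers without building the spreadsheet, here is a quick sketch (my own, assuming the plain sin(pi·x)/(pi·x) kernel, whose tail envelope falls off as 1/(pi·n) at n samples from the center):

    Code:
    import math

    def tail_db(n_samples: int) -> float:
        """Peak level (dB re. 1.0) of the sinc tail n_samples from its center."""
        return 20 * math.log10(1.0 / (math.pi * n_samples))

    def samples_for_floor(floor_db: float) -> int:
        """Distance in samples at which the sinc tail falls to floor_db."""
        return math.ceil(1.0 / (math.pi * 10 ** (floor_db / 20)))

    print(tail_db(22050))             # about -96.8 dB: the "-97 dB" figure above
    print(samples_for_floor(-96.33))  # 16-bit floor: roughly 21,000 samples per side
    print(samples_for_floor(-144.49)) # 24-bit floor: roughly 5.3 million samples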
     
    Last edited: Dec 12, 2018
  9. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    89,777
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    I love the Whittaker-Shannon stuff. It's like magic. Seriously, like magic. HTF does the math reconstruct stuff that is so heavily aliased to the human eye?

    It's like that line in the Thor movie: "Your ancestors called it magic, but you call it science. I come from a land where they are one and the same."
     
    Last edited: Dec 12, 2018
  10. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    89,777
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    Some things to consider:

    The realities of real-world signals. Unless you are using a dog whistle as an instrument, the energy past 20 kHz drops massively. For natural instruments it actually starts dropping off well before that.

    The limited bandwidth of microphones, and their decreasing sensitivity with increasing frequency.

    And finally the big one: the analog low-pass filter before the ADC. Yeah. This is why people don't die from screwy CT scan results.
     
  11. MRC01

    MRC01 New

    Joined:
    Dec 10, 2018
    Likes Received:
    21
    Trophy Points:
    3
    Location:
    Earth
    Speaking of natural instruments with extreme HF, they are rare but they do exist. This excerpt is from an extremely high fidelity recording of a flute quintet with a castanet player.
    Check out the spectrum when the castanets are being played: significant energy beyond 20 kHz.
    [image]
    Now check out the individual samples for the castanets:
    [image]
    Those rise times look jerky because they're ridiculously sharp - at the upper limit for CD, with content above 20 kHz, near Nyquist. Castanets are an awesome audiophile tool.

    That said, recordings like this are extremely rare. A real gem if you can find one. Invaluable for testing audibility of DAC filters, compression algorithms, etc.
    It's track 8 of Genuin GEN 87108.
    https://www.amazon.com/Tour-France-DEBUSSY-BIZET-SAINT-SAENS/dp/B000WM803A

    Here's where it gets really interesting. The highest pure tone I can hear is about 15 kHz. But in A/B/X testing I can reliably detect a parametric EQ cut of -3 dB, Q=2, at 18 kHz applied to this particular track. This suggests that the human ear is highly nonlinear, and that we can perceive the sharpness of a transient attack that requires frequency components we cannot hear as pure tones. IOW, I can't hear 18 kHz as a pure tone, but I can hear when it's missing from a transient, because the transient sounds ever so slightly smeared in time.
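
    For anyone who wants to try the same comparison, here is one way to generate the EQ'd file (a sketch of my own using the standard RBJ "Audio EQ Cookbook" peaking biquad; the file names are hypothetical and this is not necessarily the EQ MRC01 used):

    Code:
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import lfilter

    def peaking_biquad(fs, f0, gain_db, q):
        """RBJ cookbook peaking EQ: returns (b, a) for gain_db at f0 Hz, quality q."""
        a_lin = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
        a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
        return b / a[0], a / a[0]

    fs, x = wavfile.read("castanets.wav")        # hypothetical source file
    x = x.astype(np.float64) / 32768.0           # assuming 16-bit PCM input
    b, a = peaking_biquad(fs, f0=18000.0, gain_db=-3.0, q=2.0)
    y = lfilter(b, a, x, axis=0)                 # filter along time, mono or stereo
    wavfile.write("castanets_eq.wav", fs, (y * 32767.0).astype(np.int16))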
     
    Last edited: Dec 12, 2018
  12. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    89,777
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    Oh, I'm very well aware of castanets, going back to the early mp3 days.

    f**k castanets. Seriously f**k'em. They aren't an awesome audiophile tool - they are for tools who listen to gear (or algorithms) for the sake of such.

    I don't listen to castanets, just as I don't listen to opamps or tubes. I listen to music, and if from digital sources, preferably through linear phase filters. I have absolutely no music, whether on LP or CD, with castanets.
     
    Last edited: Dec 12, 2018
  13. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    89,777
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    This test makes absolutely no sense in the context of our discussions regarding 44.1 kHz PCM, analog reconstruction, and AD-DA aliasing. I feel like you are making arguments for the sake of making random arguments.
     
  14. crenca

    crenca Friend

    Pyrate
    Joined:
    May 26, 2017
    Likes Received:
    3,824
    Trophy Points:
    113
    Location:
    Southern New Mexico
    The filter MQA applies (within an MQA DAC) is an attempt to be as "filterless" as possible, doing away with so-called "ringing" but allowing massive in-band aliasing, which Bob S says is less of an audible issue than "ringing".

    Thanks for sticking this out, purr1n and ultrabike. The incessant talk/theory about "transients" and Dirac-like out-of-band impulses / "leading edges" that are supposed to be critical to a real high-fidelity recording is everywhere in Audiophiledom.
     
  15. MF_Kitten

    MF_Kitten Banned per own request

    Banned
    Joined:
    Jan 1, 2016
    Likes Received:
    181
    Trophy Points:
    43
    I had an idea recently, and this just happens to look like a good place to ask...

    If I have a pure Dirac impulse response, like a single short "burst" which sounds like a single click when played back, and I use a linear phase EQ on that, I will get pre- and post-ringing... But if I use the raw file as a guide and just chop away the ringing, or at least most of it, I would still be left with a linear phase processed signal, right? The ringing being an artifact of the process?

    I know a lot about using linear phase stuff when mixing audio, but my use of it is always purely artistic and pragmatic, and I never studied the mechanics of it in depth. I want to try linear phase processing of impulse responses, and since I could potentially chop away all the ringing before exporting, I just wanna know that I won't be negating the processing by doing it.
     
  16. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    89,777
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    Not sure I am following.

    Why do you want to apply linear phase EQ to an impulse response? Is this impulse response to be used to process a signal for room / EQ correction?

    In terms of chopping off the ringing of a raw file? What file, are you referring to the impulse response?
     
  17. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    89,777
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    OK. I think I get what you are saying.

    You can't do that. A real-world impulse response will have a long trail of uneven decay and ringing. You can't just chop "ringy" shit off after linear EQ processing, because we have no idea what parts of the "trail" are legit (part of the room or speaker) and what parts are the consequence of the linear EQ function. The ripple will be embedded in the impulse response, not nicely tagged on after it - it just doesn't work like that.
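
    A quick way to see this (a synthetic sketch of my own, not real cabinet data; the toy IR and the EQ shape are made up): convolve a decaying-noise "impulse response" with a linear-phase FIR EQ and look at where the samples changed. They change everywhere, not just before and after the original response.

    Code:
    import numpy as np
    from scipy.signal import firwin2

    rng = np.random.default_rng(0)
    fs = 48000
    ir = rng.standard_normal(4096) * np.exp(-np.arange(4096) / 400.0)  # toy "cabinet" IR

    # Linear-phase FIR EQ: a narrow cut around 3 kHz (arbitrary), 511 taps.
    h = firwin2(511, [0, 2900, 3000, 3100, fs / 2], [1, 1, 0.1, 1, 1], fs=fs)

    eqd = np.convolve(ir, h)                # full convolution, length 4096 + 511 - 1
    delay = (len(h) - 1) // 2               # linear-phase group delay in samples
    aligned = eqd[delay:delay + len(ir)]    # time-align with the original IR

    # Pre/post tails exist, but the samples inside the original span changed too,
    # so there is no clean "ringing" segment you could chop off.
    changed = np.count_nonzero(np.abs(aligned - ir) > 1e-6)
    print(changed, "of", len(ir), "samples inside the original span changed")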

    If you chopped anything off an impulse response, even the low level uneven ringy stuff, then you've basically fucked it up. The low level stuff is crucial and much more audible than we might think because our hearing is logarithmic.

    In any event, linear EQ ripple is far, far, far less consequential than what certain audiophile gear manufacturers, or people with master's degrees but no engineering experience, would have us believe.

    Linear phase filters are not a problem. Let's not make it one.
     
    Last edited: Jan 20, 2019
  18. MF_Kitten

    MF_Kitten Banned per own request

    Banned
    Joined:
    Jan 1, 2016
    Likes Received:
    181
    Trophy Points:
    43
    Okay, I'll elaborate.

    I can record an impulse response of, say, my guitar cabinet, by playing back a Dirac "pop" through it and recording that.

    Say I want to EQ the resulting recorded signal with a linear phase EQ and print it to .wav... This leaves a version of the recording with ringing pre and post "pop". If I line them up, I can see the original non-ringy signal, and if I chop away anything that isn't present in the dry recording so they look the same, does that leave me with the EQ'd signal without ringing, or does it leave me with a pile of garbage?

    I can try this out some day to check actually...
     
  19. SoupRKnowva

    SoupRKnowva Official SBAF South Korean Ambassador

    Pyrate Contributor
    Joined:
    Sep 26, 2015
    Likes Received:
    4,249
    Trophy Points:
    93
    Location:
    Austin, TX
    If you chop off the “ringing”, it isn't EQ'd anymore.
     
  20. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    89,777
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    If you record an impulse played back through your guitar cabinet, you are going to see ringy behavior, unless the guitar cabinet, drivers, room, and microphone all exhibit perfect behavior - which I highly doubt.

    If you actually did record a perfect or near-perfect impulse, with little or no under/overshoot and decay, then you probably recorded the wrong thing, not what the microphone picked up.
     
