Headphone imaging & soundstage

Discussion in 'General Audio Discussion' started by MuZo2, Feb 9, 2016.

  1. MuZo2

    What defines headphone imaging & soundstage? Different headphones & IEMs seem to differ in perceived soundstage and imaging. So let's make it specific to in-ear monitors, so we don't run into the open-vs-closed headphone argument.
    Does frequency response or phase contribute to imaging or soundstage?
     
  2. Cspirou

    For me soundstage doesn't come from phase but from hearing both channels at different time delays. With headphones I have never really experienced soundstage except on binaural recordings. I feel like it has less to do with the headphones and more with amps or sources allowing a small amount of crossfeed.
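The crossfeed idea above can be sketched in a few lines: each channel receives a delayed, attenuated copy of the opposite channel, mimicking the acoustic leakage between ears that speakers provide. This is a minimal sketch, not any particular product's circuit; the function name and the delay/attenuation values are illustrative.

```python
def crossfeed(left, right, sample_rate=44100, delay_us=300, atten_db=-6.0):
    """Feed a delayed, attenuated copy of each channel into the other.

    delay_us of roughly 250-320 us approximates the interaural time
    difference of a speaker ~30 degrees off-center; both parameter
    values here are illustrative, not a recommended tuning.
    """
    d = max(1, round(sample_rate * delay_us / 1_000_000))  # delay in samples
    g = 10 ** (atten_db / 20)                              # linear gain
    out_l, out_r = [], []
    for n in range(len(left)):
        bleed_r = right[n - d] if n >= d else 0.0
        bleed_l = left[n - d] if n >= d else 0.0
        out_l.append(left[n] + g * bleed_r)
        out_r.append(right[n] + g * bleed_l)
    return out_l, out_r
```

A hard-panned click in the left channel then shows up in the right channel too, quieter and slightly later, which is exactly the cue the post says headphones normally lack.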
     
  3. MuZo2

    Maybe soundstage is the wrong word for headphones, but there is definitely a headstage and imaging that some headphones and IEMs do better. What's the science behind it?
     
  4. Tyll Hertsens

    I think there is such a thing as "head stage" with straight-up headphone listening without crossfeed, but it's not like what we hear on speakers.

    I tend to think time coherence is one of the factors; I do think the FreqPhase JH Audio stuff did improve imaging on the JH13.

    Also, I think that a clean impulse response (without secondary transients and hash) is important to good imaging, because your ears get their psychoacoustic cues from the leading edges of the signal. When those leading edges are obscured with hash, there's less clear info to cue on.
     
  5. Kunlun

    I just posted in Marv's bully pulpit thread, but earphone makers have talked to me about adjusting their earphones to make use of the Haas effect to add space and depth. What exactly they do, I'm not qualified to say, but they are doing it (one did mention fine-tuning the FR in an unexpected way)...
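The Haas (precedence) effect mentioned here is easy to demonstrate: a copy of a sound delayed by roughly 1-30 ms and played a bit quieter is not heard as a separate echo; the ear fuses it with the original, and the combined event sounds wider and deeper. A minimal sketch, with illustrative parameter values (nothing here reflects what any earphone maker actually does):

```python
def haas_widen(mono, sample_rate=44100, delay_ms=12.0, level_db=-3.0):
    """Haas-effect widener sketch: dry signal to the left channel,
    a quieter copy delayed by ~1-30 ms to the right. Within that
    window the ear fuses both arrivals into one wider image instead
    of hearing an echo. delay_ms and level_db are illustrative.
    """
    d = round(sample_rate * delay_ms / 1000)   # delay in samples
    g = 10 ** (level_db / 20)                  # linear gain of the copy
    left = list(mono) + [0.0] * d              # pad so channels match
    right = [0.0] * d + [g * x for x in mono]
    return left, right
```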
     
  6. MuZo2

    Thanks for the information, I will try to check those factors.
    Time coherence: is this only an issue for multi-driver IEMs? For some single drivers I don't see that issue...
     
  7. Lurker

    Stop me if I'm talking nonsense, but could the impression of space be purely related to FR?

    I mean here are some earphones that are regarded as having a "good sound/headstage":
    Sennheiser IE800, Sony MDR-EX1000

    On Tyll's sheets they both show a more or less significant dip in the upper mids and a small spike at 5-6 kHz.
    On the other hand, you have headphones like the Shure SE530, which has a less impressive stage and shows a consistent downslope instead of the dip and spike.
    I think the dip gives the impression of things sounding distant while the spike gives spatial cues...

    If imaging were related to phase and/or a clean impulse response, headphones like the SE846 or the K812 would be terrible in that regard, which to my knowledge they aren't.
     
  8. MuZo2

    So recessed mids create perceived headstage. But that would mean you can change the headstage using EQ.
     
    Last edited: Feb 16, 2016
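The EQ experiment proposed above (recess the upper mids, see if the stage changes) can be tried with a standard peaking biquad. This is a sketch using the well-known Audio EQ Cookbook (Robert Bristow-Johnson) peaking-filter formulas; the center frequency, gain, and Q are illustrative, not a recommended target.

```python
import math

def peaking_cut(x, fs=44100.0, f0=2500.0, gain_db=-4.0, q=1.0):
    """RBJ-cookbook peaking biquad, here set to cut the upper mids,
    the region the thread associates with a more 'distant' image.
    f0/gain_db/q values are illustrative only.
    """
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    # Direct Form I with state variables for the last two samples
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = (b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        y.append(yn)
        x1, x2, y1, y2 = xn, x1, yn, y1
    return y
```

A peaking filter leaves DC and the extremes untouched and only dips the band around f0, so it isolates the "recessed upper mids" variable without changing overall level.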
  9. Klasse

  10. ultrabike

    Man! TL;DR all of it.

    But about the random stuff I read here and there, here are some of my thoughts:

    1) Headstage diz or dat definition or not, playing stereo sounds through headphones will sound like... playing stereo sounds through headphones.
    2) The RS-1 is colored. Specifically, it's bright. Meaning, it's too loud in the upper mids and treble. Simple.
    3) Kool that Darth Nuts likes his SR-007. They are OK.
    4) Don't get this "Four Depth Cues" concept. Maybe if he were a little more concise it would help.
     
  11. Klasse

    I read that a long time ago.

    I don't take those concepts as a reference by any means. I've enjoyed some of his analogies between sound and images. And I do think that the perception of soundstage is mostly in our mind, and not so much in the headphone.

    In my view we have to make a slight mental effort to put things together and give them spatial sense when listening to recordings through headphones (with binaurals this is easier, but still). Even at a live venue, if you can't see where the sounds are coming from, you'll need a slight hint of thinking to guess with some degree of precision where each sound source is located.

    That's why I always take very precise soundstage descriptions with a (big) grain of salt.
     
  12. ultrabike

    I'm not sure soundstage perception is mostly in our minds.

    We have two ears, and certain processing goes on in our brains to figure out location. But the brain needs data to do its number crunching.

    I believe our hearing system works as a triangulation system when it comes to soundstage perception. It is aided by visual cues, tone changes, and perhaps other things as well. The brain may do its Kalman-filter-like stuff (or something of the sort), but it needs data. You can overdetermine a Kalman filter, and even the brain, and maybe come up with a better solution. If you are underdetermined, you are pretty much out in the weeds AFAIK and IME.

    When using headphones, w/o the time deltas between left and right ear, w/o some credible cross channel information, w/o rate of change of tone, and w/o visual cues, we are pretty much screwed IMO as far as sound localization is concerned. And such is my experience with them unless some external to the brain processing is applied.

    To me "headstage" is a term to describe the randomness that our brains make out of sound localization through headphones. And such "headstage" is a fucked up soundstage, heavily dependent on how things were recorded among other things. I can only see it sort of working with binaurals, when aided by visuals and after some effort. With stereo, pretty much I feel you are on your own.
     
  13. Klasse

    Sure, you won't get the whole set of cues with stereo and headphones, but given the right recordings there's still a good number of identifiable cues.
    Of course, different headphones present the data in a different way, and thus some people might prefer one or the other.

    One of my main criticisms of the Sennheiser HD600 and HD650 with regard to soundstage is that they have that kind of mid-forward presentation that I find hard to interpret spatially, so I end up with a perceived soundstage that's clearly on the intimate side of things: clear and focused, but intimate, despite them being fully open. Recordings play a huge role here, so this is not always the case; it's just something that's comparatively more frequent when I'm using the HD600/HD650.

    On the other hand, I tend to prefer the DT880/HD800's presentation (some would say slightly recessed upper midrange) because they give more room to the sounds, so the headphone feeling disappears more frequently and I'm able to perceive depth more effortlessly.

    I don't know how this translates to IEMs, since I hardly use IEMs, but anyway, this is just my personal experience: my recordings, my ears, my brain. Subjective stuff.
    It's hard to aim for a particular frequency response when there's so much variance among recordings.

    Granted, headphones are not speakers.
    I had a pair of Focal 706Vs on Focal stands in a slightly treated room, and the soundstage was far easier, bigger, and more natural.
    And those are just entry-level speakers.
    Headphones are headphones after all; there are still plenty of physical limitations and engineering challenges to be solved.
     
  14. ultrabike

    Well, in the end if something works and makes one happy then that's that.

    I personally do not like the DT880s and the HD800s. But they have their strengths.
     
  15. ipm

    Try looking into how this effect is produced in a recording studio.
     
  16. Mdkaler

    In some cases I agree that pushing back the mids can elongate the headstage, but I don't perceive the effect with EQ.

    I'm rolling some tubes in my Vali 2 into a Jotunheim, polarity reversed, with HD6xx kiss balanced. No definite conclusions yet, so I am posting some of my findings here. With live performances/songs mastered with a wide headstage, vocals are pulled back (in volume and in perceived position) and the stage becomes narrower but much longer, with more depth.

    With the right songs/tracks, I enjoy tremendously how the tube sound/nuances become spatial cues and fall into the right places. For a second I thought the Jot had become slower, but soon I realized it is taking the Vali 2 to the next level. I can now open my eyes and still enjoy the soundstage. A glimpse of listening to a decent speaker system.
    I have read that some people don't perceive any benefits with the Vali 2 into the Jot. Mind you, this is done with polarity reversing, and I am one of the few who think the Jot itself has some depth.
     
  17. ipm

    Tubes color sound. This suggests that at least part of the effect is EQ.
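Part of that tube coloration is added harmonic distortion, which effectively changes the spectrum of the signal, so "EQ" in a broad sense. A crude, memoryless stand-in for that behavior is a tanh waveshaper; this is only a sketch of the principle, not a model of the Vali 2 or any real tube stage, and the function name and drive value are made up for illustration.

```python
import math

def soft_clip(x, drive=2.0):
    """Memoryless tanh waveshaper, a crude stand-in for tube-style
    saturation: fed a pure tone, it adds odd-order harmonics, so the
    output spectrum differs from the input's, not just its level.
    Normalized so a full-scale input still peaks at 1.0.
    """
    return [math.tanh(drive * s) / math.tanh(drive) for s in x]
```

Correlating the output of a sine against its third harmonic shows energy that the input did not have, which is the sense in which distortion doubles as coloration.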
     
  18. Mdkaler

    I forgot to mention the above was done with no software EQ.

    I'm not experienced with studios/mastering, but I know changing the decay of some notes can make a recording sound more spacious.
    I've heard @Out Of Your Head, and with some closed-backs the imaging is awesome. But when I tested it with flat music it sounded, well, flat.
     
  19. ipm

    Cool.

    It has to be a combination of things, including various filters, EQ, panning, time delays, and how the overall speaker or headphone interacts with the listener and the room, etc. The question is: what is the dominant effect in all of this?
     
  20. skem

    Reviving this dead thread. What causes sound stage from a DAC?

    Porkfriedpork and I are engaged in a protracted DAC shootout, and we have noticed that different DACs, using the same source material and driving the same amps and the same headphones, have different stage width. Like hugely different. This remains true even for DACs that seem to have very similar *perceptible* frequency responses and notionally similar reconstruction filters. FWIW, the narrow stage was on the Benchmark DAC3 with their "custom" linear-phase filter, and the wider stage was on the Gungnir Multibit with its custom "burrito" linear-phase filter. What else in the reconstruction should be considered here? Wonder if @ultrabike has insights?
     
    Last edited: Jan 7, 2018
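For reference on the filters being compared: the textbook shape behind a "linear phase" reconstruction filter is a windowed sinc, whose impulse response is symmetric and therefore rings both before and after the main tap (pre-ringing). The actual Benchmark and Schiit filters are proprietary; this sketch only shows the generic linear-phase shape, with illustrative tap count and cutoff.

```python
import math

def windowed_sinc_lpf(num_taps=63, cutoff=0.25):
    """Linear-phase windowed-sinc low-pass FIR. The tap vector is
    symmetric about its center, which is what 'linear phase' means in
    the time domain, and why such filters pre-ring. cutoff is a
    fraction of the sample rate; values here are illustrative.
    """
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        k = n - m / 2
        if k == 0:
            s = 2 * cutoff                     # sinc limit at k = 0
        else:
            s = math.sin(2 * math.pi * cutoff * k) / (math.pi * k)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming window
        taps.append(s * w)
    return taps
```

Two properties follow directly: the taps mirror around the center (linear phase), and they sum to roughly one (unity gain at DC), so two filters with this structure can still differ audibly in length, window, and cutoff placement.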
