Discussion in 'General Audio Gear Discussion' started by Mikoss, Sep 14, 2016.
I generally agree with this idea. A reference is needed as a basis of comparison.
If you are going to post in a year-old thread, please at least add SOMETHING to the discussion.
Castle is a troll, BTW. He was strongly encouraged to go away.
He is of a rare breed and one of two who have earned the unique distinction of being put on ignore.
Life is too short for a known S/N reduction.
But then I imagine I too am on a list or two as well.
And so it goes.
On timbre, there are a lot of good descriptions in this thread already, but one thing that always stuck with me is the relationship between (1) the harmonics and (2) their individual envelopes over time. It's the combination of those two that makes one instrument sound different from another.
I remember in uni doing a project where we had to synthesize a piano sound and a trumpet sound. Sure enough, a little Matlab routine to generate harmonic sine tones with a specific envelope and you got something that sounded like those particular instruments... Kind of... More like a Fisher-Price version of it. Your brain could interpret it as that instrument, but it sounded very "synthetic" *duh*.
The FR and/or harmonic structure is one thing, but what makes something really "real" is how those harmonics/overtones decay over time. And that's where it becomes increasingly difficult to capture that in a meaningful measurement in my opinion.
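To make that concrete, here's a toy sketch of the idea (Python standing in for the Matlab routine described above, with invented amplitude and decay numbers): each harmonic gets its own decay envelope, and changing the envelope set is what nudges the sound from one "instrument" toward another.

```python
import math

def synth_note(f0, harmonic_amps, decay_rates, dur=1.0, sr=8000):
    """Additive synthesis: a sum of harmonic sines, each with its own
    exponential decay envelope. All numbers here are made up for
    illustration, not derived from real instrument recordings."""
    n = int(dur * sr)
    out = [0.0] * n
    for k, (amp, rate) in enumerate(zip(harmonic_amps, decay_rates), start=1):
        for i in range(n):
            t = i / sr
            out[i] += amp * math.exp(-rate * t) * math.sin(2 * math.pi * k * f0 * t)
    return out

# "Piano-ish": strong fundamental, upper harmonics die off faster.
piano = synth_note(220.0, [1.0, 0.5, 0.3], [3.0, 5.0, 8.0])
# "Brass-ish": more energy up high, everything sustains longer.
brass = synth_note(220.0, [0.6, 0.9, 0.8], [1.0, 1.2, 1.5])
```

Same harmonic frequencies in both cases; only the amplitude/decay pairs differ, which is exactly the FR-versus-envelope distinction being made here.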
The term plankton in the audio context was coined by CS, correct? I've never heard anyone outside this community use the term.
Interesting. Do you recall if the code accounted for a decay component such as this? I suppose that it must have.
I did some Matlab-based projects recently that involved putting accelerometers on grinding equipment and looking at signals via an FFT and also using wavelets. The idea was to classify when the equipment was going to fail in some way via the frequency response in the vibrating equipment enclosure. This was a multivariate problem given the placement of several sensors.
My point is that perhaps the decay could be captured as it progresses in time in some way via wavelets. This may make a signal reconstruction work better than looking for basis functions that ignore time.
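For what it's worth, here is a rough Python sketch of what I mean, not a full continuous wavelet transform, just a single Gabor/Morlet-style atom at one frequency slid along the signal (all the numbers are invented): the magnitude trace shows *when* energy at that frequency occurs, which a plain FFT over the whole record would not.

```python
import math

def wavelet_magnitude(signal, freq, sr, cycles=5):
    """Slide a Gaussian-windowed complex sinusoid along the signal and
    return the magnitude of the inner product at each position. Large
    values mean energy at `freq` is present *at that time*."""
    width = int(cycles * sr / freq)          # atom length in samples
    half = width // 2
    out = []
    for c in range(half, len(signal) - half):
        re = im = 0.0
        for j in range(-half, half):
            w = math.exp(-0.5 * (j / (width / 6)) ** 2)   # Gaussian window
            ph = 2 * math.pi * freq * j / sr
            re += signal[c + j] * w * math.cos(ph)
            im += signal[c + j] * w * math.sin(ph)
        out.append(math.hypot(re, im))
    return out

sr = 2000
# A 100 Hz burst that only exists in the second half of the record:
sig = [0.0] * 1000 + [math.sin(2 * math.pi * 100 * t / sr) for t in range(1000)]
mag = wavelet_magnitude(sig, 100, sr)
```

The response stays at zero over the silent first half and jumps once the burst begins, which is the time-localization property that could, in principle, track a decaying harmonic.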
The original assignment was just to use exponential decays, and that sounded very fake, but still a piano was a piano and a trumpet was a trumpet. I went one step further and actually derived envelope functions from analysing real piano sounds. This was a long time ago, but what I remember is that I used a sliding FFT window over the piano sample and extracted the magnitude of the harmonics. I then used (I think) some kind of spline interpolation to be able to synthesise my own signals.
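Something like the following toy Python sketch captures the sliding-window idea (single-bin DFTs at each harmonic rather than a full FFT, and the "piano" here is a fake test signal with two decaying harmonics): at each hop you measure the magnitude of each harmonic, and the sequence of magnitudes is the envelope.

```python
import math

def harmonic_envelopes(signal, f0, n_harmonics, sr, win=400, hop=200):
    """Slide a window over the signal; at each hop, measure the magnitude
    of each harmonic of f0 via a single-bin DFT. The per-harmonic
    magnitude sequences are the decay envelopes."""
    envs = [[] for _ in range(n_harmonics)]
    for start in range(0, len(signal) - win, hop):
        for k in range(1, n_harmonics + 1):
            re = im = 0.0
            for j in range(win):
                ph = 2 * math.pi * k * f0 * j / sr
                re += signal[start + j] * math.cos(ph)
                im += signal[start + j] * math.sin(ph)
            envs[k - 1].append(2 * math.hypot(re, im) / win)
    return envs

sr = 4000
# Fake "piano" sample: two harmonics, the second decaying faster.
sig = [math.exp(-2 * t / sr) * math.sin(2 * math.pi * 200 * t / sr)
       + 0.5 * math.exp(-6 * t / sr) * math.sin(2 * math.pi * 400 * t / sr)
       for t in range(4000)]
env1, env2 = harmonic_envelopes(sig, 200.0, 2, sr)
```

Both envelopes come out decaying, and the second harmonic's envelope falls off faster than the fundamental's, matching how the signal was built; interpolating those sequences (splines, say) would give you envelope functions to resynthesise with.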
Part of a piano's sound is also realizing that the three strings per note (and really the whole piano) are not all perfectly in sync/tune. If they were, it would sound very artificial. There's a bit of give, and each string will be slightly off from the next.
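A quick Python toy shows the effect (the detuning amounts here are invented, not real piano numbers): three perfectly tuned "strings" sum to a static tone, while slightly detuned ones drift in and out of phase, so the combined amplitude slowly swells and fades (beating), which is part of what makes the sound feel alive.

```python
import math

def unison(freqs, dur, sr):
    """Average of several sine 'strings' at the given frequencies."""
    return [sum(math.sin(2 * math.pi * f * i / sr) for f in freqs) / len(freqs)
            for i in range(int(dur * sr))]

sr = 8000
perfect = unison([220.0, 220.0, 220.0], 1.0, sr)   # identical strings
detuned = unison([219.7, 220.0, 220.4], 1.0, sr)   # slightly off from each other

# Peak level early vs. later in the note:
p_early = max(abs(x) for x in perfect[:800])
p_late = max(abs(x) for x in perfect[4000:4800])
d_early = max(abs(x) for x in detuned[:800])
d_late = max(abs(x) for x in detuned[4000:4800])
```

The perfectly tuned version holds the same peak level throughout, while the detuned version's level has noticeably dropped half a second in as the three strings drift out of phase.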
Yeah. I borrowed it from Lampizator guy. Felt it was a more specific term than detail. There's detail, then there's plankton. For example, the Benchmark DAC1 is detailed, but doesn't reproduce much plankton.
Cool. Signal shaping is not easy.
The main thing I do is data analysis for fault detection. My stuff establishes a set of basis functions of some kind and looks to determine how a fault provides a signature along the selected basis functions. The idea is to find how 'bad' is different than 'good'. You establish the 'good' first, then see when the input set of signals differs from this.
The selection of the basis functions to reduce the dimensionality (number of necessary basis functions) of the data in a way that is still sensitive to faults is the thing. The point with this is not signal reconstruction, since much of the signal is ignored in order to focus on the fault; however, the methods do provide a means to rebuild a signal as needed. You would simply keep all the parts that you decomposed the signal into (minus the noise, I suppose).
The wavelets could provide something beneficial to this given that they not only account for signal shaping, but also provide a means of showing when the shape occurs in time.
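As a toy illustration of that good-versus-bad idea (Python, with an invented two-vector cosine basis standing in for whatever basis one would actually select): project the signal onto the "good" basis, reconstruct from the coefficients, and flag a fault when the residual (the part the basis cannot explain) is large.

```python
import math

n = 200

def unit_cos(k):
    """One orthonormal basis vector: k cycles of a cosine over n samples."""
    v = [math.cos(2 * math.pi * k * i / n) for i in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Hypothetical 'healthy' basis: just two low-frequency cosines.
basis = [unit_cos(1), unit_cos(2)]

def residual(signal):
    """Project onto the basis, reconstruct, and return the residual norm.
    A large residual means energy the 'good' basis cannot explain."""
    coeffs = [sum(s * b for s, b in zip(signal, vec)) for vec in basis]
    recon = [sum(c * vec[i] for c, vec in zip(coeffs, basis)) for i in range(n)]
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(signal, recon)))

# 'Good' signal lives entirely in the basis; 'fault' adds an
# out-of-basis component (a 7-cycle sinusoid, chosen arbitrarily).
good = [1.0 * basis[0][i] + 0.5 * basis[1][i] for i in range(n)]
fault = [good[i] + 0.3 * math.sin(2 * math.pi * 7 * i / n) for i in range(n)]
```

The good signal reconstructs almost exactly (residual near zero), while the faulty one leaves a clearly non-zero residual; swapping the cosines for wavelet atoms would additionally tell you *when* the unexplained energy occurs.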
This may or may not work in this music example, but someone could try it at some point to see how it fares in removing the 'unnaturalness' that you reported. It may work.
All the best to you.
It's a nice instrument for sure. I have a few friends who play very well. I never got past one finger on each hand.
I suppose that all of this signal processing talk leads to the conclusion that this can't be all that brutally complicated with the computing power that is available today. After all, a synth can emulate a piano fairly well. This means that even before Moog in the 1960s, people understood how to do this signal simulation work to a good degree. The easiest thing may be to find some open-source code used in a synth of some kind and see how it provides the desired effect.
Anyway, it's an old thread, but it's interesting to me.
I respectfully disagree. I've yet to hear a really convincing emulation. The most popular and convincing virtual pianos used in music production still rely on sampling hundreds of individual real piano strikes recorded at different velocities.
No problem at all. I'm sure you are right. I just did not want to sound like I had all the answers with my previous posts about wavelets, etc. I hope you understand that I'm not trying to push a viewpoint on anyone at all. This is just friendly conversation for me that is no different than sitting and having a tea and chatting with someone.
Regarding the piano emulation, do you have any credible links on that that I could look at? I like the data-processing angle on things, and this may be interesting reading/viewing. Algorithms and code would be even better.