Kyma Forum
  Kyma Sound Exchange
  The Haas Effect

Author Topic:   The Haas Effect
David McClain
Member
posted 23 January 2001 08:02

haas.kym

 
Here is a simple Sound for experimenting with the Haas Effect (loudness vs. time of arrival). The left channel has a short delay (1-10 samples) and the right channel has an attenuator. Playing a live sound, e.g., a recording, through this Sound produces some astonishing results.

I found that with as little as 20 to 40 microsec delay in the left, the sounds appear to come from the right, even when the amplitude is only 10% of the left amplitude. Of course this is a completely unnatural situation since sounds on one side are generally louder on that side as well.

I think it is amazing that one can discern time-of-arrival differences on the order of 1/50 to 1/30 of the wavelength of a 1 kHz tone.
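
A rough sketch of that routing for anyone who wants to try it outside Kyma, assuming Python with NumPy and SciPy; the 1-sample delay and the 10% gain below are illustrative values only, not necessarily the settings saved in haas.kym:

import numpy as np
from scipy.io import wavfile

sr = 44100
t = np.arange(sr) / sr                        # one second
mono = 0.5 * np.sin(2 * np.pi * 1000 * t)     # 1 kHz test tone

delay = 1                                     # samples of delay on the left (~23 us at 44.1 kHz)
gain_right = 0.1                              # right channel attenuated to 10%

left = np.concatenate([np.zeros(delay), mono])[:len(mono)]
right = gain_right * mono

stereo = np.stack([left, right], axis=1)
wavfile.write("haas_demo.wav", sr, (stereo * 32767).astype(np.int16))
# Despite the right channel being 20 dB quieter, its earlier arrival
# tends to pull the image toward the right.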

- DM

IP: Logged

Marcus Satellite
Member
posted 23 January 2001 19:47
it is amazing the degree to which we distinguish temporal info via our ears. there are some fascinating examples of exploring large data sets as sound instead of as visuals.

our pitch discrimination is awesome too. our eye cannot distinguish fine shades of value or hue to the degree that the ear can detect differences in pitch. but then light waves probably don't interfere to the degree that sound waves do.

our eyes cannot even come close to distinguishing events in time like our ears can. for simplicity, compare a typical sample rate to a film frame rate: 44100/24 ≈ 1838.

our eyes, however, process an enormous volume of information in the same time span. using another set of questionable numbers:
#filmResolution pixels * time / #audioframes -> 2048*1216*24/44100 ≈ 1355

uh...in advance, i'd like to say i recognize the complete lack of rigorous scientific backing to these figures. for fun only.

IP: Logged

David McClain
Member
posted 23 January 2001 23:33
What impresses me the most is that we can discern differences on the order of 20-40 microseconds. The wave propagation speed for nerve conduction is on the order of 10 m/sec. And I am accustomed to seeing effects visibly and acoustically blur when their temporal separation is on the order of 100 ms. So clearly, it doesn't seem to be the time of arrival per se, but some discernment of wave interference being processed in the brain.

I don't know offhand any way to test this hypothesis, because it takes two ears to discern this effect. So how would one distinguish between time of arrival and discernment of interference? Perhaps by using very short pulsed sounds?

- DM

IP: Logged

David McClain
Member
posted 24 January 2001 02:52

phfrq_haas.kym

 
Here is another variation on the Haas effect, performed only with phase alteration.

This Sound takes a stereo signal in and splits it into separate L and R paths. These signals are sent through All-Pass phasing filters whose frequencies are driven asymmetrically by an LFO, and their Q's are driven symmetrically from another LFO.

The phase shifts are quite large in comparison to the sample rate (10's to 100's of samples delay).

Since only the frequencies within the effective passband (f/Q) of the filters are delayed, it is interesting to hear entire sounds, from bass on up to highs, being shifted left to right and back as the LFO completes its cycle. But remember that only a few selected frequencies are being delayed here.

So this, I think, points up another very strong psychoacoustic effect -- namely that early arrival of higher partials effectively sends an entire sound (even bass sounds) to one side or another. A sound is apparently being identified and localized to one side or the other by the brain with only a few high partials being delayed.

I don't know the name of this effect -- maybe it is just a variation of the Haas effect.
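
For anyone without Kyma handy, here is a rough static snapshot of the idea in Python with NumPy/SciPy -- no LFOs, and the centre frequencies and Q below are example values only, not the ones stored in phfrq_haas.kym:

import numpy as np
from scipy.signal import lfilter

def allpass2(x, f0, q, sr=44100):
    # RBJ second-order all-pass: unity gain everywhere, frequency-dependent phase/delay
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = [1 - alpha, -2 * np.cos(w0), 1 + alpha]
    a = [1 + alpha, -2 * np.cos(w0), 1 - alpha]
    return lfilter(b, a, x)

sr = 44100
t = np.arange(sr) / sr
x = 0.3 * np.sign(np.sin(2 * np.pi * 220 * t))   # harmonically rich source (crude square)

left  = allpass2(x, f0=3000, q=2, sr=sr)         # phase kicked around 3 kHz on the left
right = allpass2(x, f0=5000, q=2, sr=sr)         # ...and around 5 kHz on the right
stereo = np.stack([left, right], axis=1)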

- DM

IP: Logged

pete
Member
posted 24 January 2001 07:13
Maybe, David, it is because with real-world sounds there is rarely enough delay to make bass frequencies shift much (in terms of phase angle) that our brains have decided to use only the higher frequencies to determine position.
What I find amazing is that we don't seem to be able to notice any difference in the phase relationship between harmonics. That is, if we listen to a square wave (in mono), then put it through a phase shifter that keeps all the harmonics at the same level but shifts them to different phases, and listen again, we can't tell the difference.
Does this mean that the sensor in our ear for deciding what the sound is is different from the sensor that determines where the sound is?
What the sound is needs only one ear.
Where the sound is needs two ears.
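
A quick way to hear this, assuming Python with NumPy: two signals with identical harmonic amplitudes (odd harmonics at 1/n) but different phases. Played in mono they sound essentially the same, although the waveforms look completely different:

import numpy as np

sr, f0, dur = 44100, 200.0, 2.0
t = np.arange(int(sr * dur)) / sr
rng = np.random.default_rng(0)
odd = range(1, 40, 2)                            # odd harmonics, all below Nyquist

def additive_square(phases):
    y = np.zeros_like(t)
    for k, n in enumerate(odd):
        y += np.sin(2 * np.pi * n * f0 * t + phases[k]) / n
    return 0.3 * y / np.max(np.abs(y))

square    = additive_square(np.zeros(len(odd)))                    # all phases zero
scrambled = additive_square(rng.uniform(0, 2 * np.pi, len(odd)))   # same spectrum, random phases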

any thoughts on this ?

pete

IP: Logged

David McClain
Member
posted 24 January 2001 17:51
While it is true that we are largely insensitive to absolute phase among the partials, you do hear a difference if you play a phase-shifted signal in one channel against the original signal in the other channel. (Of course, simply adding them in mono would wipe out many of the partials if the phase shift is large enough.)

Friends of mine are into "Very High" fidelity listening systems and they argue about the phase relationships among their speakers/drivers all the time. But just move a few inches from your favorite listening spot and you have substantially changed the phase relationships in the highest frequency bands. Yet the music doesn't generally sound noticeably different, does it?

I suspect the modified Haas effect results from our early (cave-man era) days when we needed to be on guard against potential predators. I do like the point you made about not having enough time with the lowest frequencies to ascertain spatial location. So your point, coupled with our primitive protective brains, seems enough to interfere with some musical effects; i.e., it is hard to shift the spatial location of just the highest partials with respect to the main body of a sound.

I find these effects quite fascinating. What if we could inject control signals more directly into the brain, as by electrical transducers applied to the scalp? Would we be able to overcome these ear-wired brain responses or not?

- DM

IP: Logged

David McClain
Member
posted 24 January 2001 18:16
I guess a large part of the lack-of-effect when shifting listening positions is that although the absolute phase of the higher frequencies has shifted, there is still a consistent phase relationship between the two ears.

I know that sounds can be identified by the harmonic interrelationship between upper partials -- just send a voice through a hi-pass filter like a telephone and you can still identify it even without the lower partials.

But suppose one were to shift the phase of only one or two high partials. What would be the effect? I think my modified Haas experiment modified entire clusters of high partials, and that may have led to the perception of entire sounds shifting in spatialization. The brain may have been identifying the entire sound on the basis of harmonic interrelationships among groups of higher partials.

If I just crank the Q in the experiment up to rather large values, like 100 or so, and use a sound source pitched high enough, I should be able to hit only one or two high partials...
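
Another way to hit just one partial, without needing an extremely high Q filter, is to rotate its phase on an FFT of a steady tone. A rough NumPy sketch (the harmonic number and the 90-degree shift are arbitrary choices):

import numpy as np

sr, f0, dur = 44100, 220.0, 2.0
n = int(sr * dur)
t = np.arange(n) / sr
x = 0.2 * sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 21))

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, 1 / sr)
target = 10 * f0                                  # pick the 10th harmonic (2200 Hz)
bins = np.abs(freqs - target) < 5.0               # bins within 5 Hz of it
Xl = X.copy()
Xl[bins] *= np.exp(1j * np.pi / 2)                # +90 degrees on that partial, left only

left = np.fft.irfft(Xl, n)
right = x                                         # right channel untouched
stereo = np.stack([left, right], axis=1)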

- DM

IP: Logged

pete
Member
posted 27 January 2001 14:10
Hey David,
This is where Kyma can help. In a real-time analysis/resynthesis there is a tool called the spectrum modifier, which can turn individual partials on and off, so that if there were two spectrum modifiers feeding two oscillator banks, individual partials could take a different route (i.e. oscillator bank two) and get modified and remixed.
You triggered some thoughts re my quest to find out how the brain decodes sound, and what info about the sound it is being given to work with.
As we know, a single sine wave is no good for telling us where in the room a sound is coming from (assuming the speakers are good and there is nothing buzzing to add extra harmonics). Maybe this is because, in terms of phase, the delay is ambiguous: say there was no phase shift between the left and right ears; this could really be 0 degrees, or 360 degrees, or 720 degrees, etc. But as soon as we add the info for another harmonic, the number of possible answers to what the above phase shift could be is halved (I think?). If we continue to add harmonics then the real delay is confirmed, as the next possible delay that was consistent with all the harmonics would be in terms of minutes or even hours and therefore not worth considering.
So what I think is that our brain is being sent L vs. R phase info for each partial, and not strict L vs. R time-delay info as such.
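
A quick numerical check of that ambiguity argument, assuming Python with NumPy; the 0.5 ms delay and the two harmonics below are arbitrary example values:

import numpy as np

true_delay = 0.0005                   # 0.5 ms interaural delay (seconds)
harmonics = [2000.0, 3000.0]          # Hz
max_delay = 0.0008                    # largest delay we consider plausible

def candidate_delays(f, phase):
    # every delay up to max_delay that would produce this interaural phase at f
    return {round(float((phase / (2 * np.pi) + n) / f), 7)
            for n in range(int(max_delay * f) + 2)
            if (phase / (2 * np.pi) + n) / f <= max_delay}

# the wrapped interaural phase each harmonic actually presents to the ears
phases = [(2 * np.pi * f * true_delay) % (2 * np.pi) for f in harmonics]
sets = [candidate_delays(f, p) for f, p in zip(harmonics, phases)]

print("candidates per harmonic:", [sorted(s) for s in sets])               # each is ambiguous
print("consistent with all harmonics:", sorted(set.intersection(*sets)))   # only the true delay
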
Maybe this could be proved by sending a mono sound but muting the first millisecond of sound in the left speaker, and then muting the right speaker for one millisecond just before the sound is cut.
Then trying the opposite, i.e. putting a delay of one millisecond on the left-hand signal but switching the sound on just after both the left and right signals have started, and switching it off before either has finished. I wonder which of the above would give us positional info. Of course the source sound would have to be constant, as any changes during the middle of the sound would give our ears clues about the dummy delays.
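
A rough NumPy rendering of those two conditions, with a steady 500 Hz tone standing in for the "constant" source (all durations arbitrary; short fades would be needed in practice so the gating clicks don't give the game away):

import numpy as np

sr = 44100
ms = sr // 1000                                            # samples per millisecond
x = 0.3 * np.sin(2 * np.pi * 500 * np.arange(sr) / sr)     # one second of steady tone

# Condition A: identical, simultaneous tones, but the left onset and the
# right offset are muted for 1 ms, so only the edges differ between the ears.
a_left, a_right = x.copy(), x.copy()
a_left[:ms] = 0.0
a_right[-ms:] = 0.0
cond_a = np.stack([a_left, a_right], axis=1)

# Condition B: the left channel genuinely delayed by 1 ms, but both channels
# gated on after both have started and off before either ends, hiding the edges.
b_left = np.concatenate([np.zeros(ms), x])[:len(x)]
b_right = x.copy()
gate = np.zeros(len(x))
gate[2 * ms : -2 * ms] = 1.0
cond_b = np.stack([b_left * gate, b_right * gate], axis=1)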

Also, in your experiments where your source sound was at 1 kHz but you could detect tiny phase shifts, I wonder if you could still detect them if the higher partials were cut.

As usual I'm just guessing, and I wonder if anyone has any thoughts.

pete

IP: Logged

David McClain
Member
posted 27 January 2001 14:39
Wow Pete! You have given me quite a list of things to try here. It will probably take me the rest of this weekend to try them out. But I really appreciate your feedback on these ideas! I think Kyma is a perfect tool for such investigations. It can handle just about everything you suggested in your post.

- DM

IP: Logged

David McClain
Member
posted 27 January 2001 15:01
Perhaps I should explain some of the motivation behind these experiments. Others might want to try as well...

I was listening to a swept oscillator the other day, through my hearing correction system, and I noticed that the tone, starting at low frequencies, seemed to be in the middle of the stereo field, where it should be. But as it swept up toward 4 kHz it veered completely to the right, and then at about 6.5 kHz it swept sharply to the left. The sound was applied equally to the left and right channels, and I verified this effect even when wearing the headphones reversed. So I can say it isn't due to an amplitude imbalance in my equipment. (If you are an audio pro then maybe you don't want to perform this experiment... I don't depend on my ears for my living.)
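
For anyone who wants to repeat the listening test, a dual-mono sweep along these lines will do, assuming Python with NumPy/SciPy (sweep range and duration are arbitrary):

import numpy as np
from scipy.signal import chirp
from scipy.io import wavfile

sr, dur = 44100, 20.0
t = np.arange(int(sr * dur)) / sr
sweep = 0.3 * chirp(t, f0=200.0, f1=8000.0, t1=dur, method="logarithmic")

stereo = np.stack([sweep, sweep], axis=1)          # identical in both channels
wavfile.write("centered_sweep.wav", sr, (stereo * 32767).astype(np.int16))
# With symmetric hearing and a matched playback chain, the image should stay centered.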

The sharp sweep at 6.5 kHz occurs in a very narrow band between 6.3 kHz and 6.6 kHz. I find this surprisingly narrow at such a high frequency. The veering at 4 kHz seems completely uncorrectable, as even selective amplification by gains as large as 90 dB fails to provide an audible signal in my left ear.

But regarding the 6.5 kHz shift, my thought was that perhaps by manipulating the phase of selected frequencies I could employ the Haas effect to realign the sound to the center of the stereo field all the way up the sweep, since the Haas effect provides spatialization despite amplitude imbalances (provided, of course, that you hear at least something in both ears...).

But my initial experiments with the modified Haas effect let me down. Perhaps some of Pete's suggestions of phase shifting in combination with drastic delays and amplitude modulation can provide a partial answer.

My hope is to restore "normal" hearing for critical listening needs, such as mixing music. After my modified Haas experiment, I began to realize that the old saying "You can never completely restore lost hearing" was true. I can restore a great deal, but some fine details seem to be uncorrectable.

Anyway, Pete, you have given me restored hope. I will try some of your ideas and see where they lead. Thanks!

- DM

IP: Logged

David McClain
Member
posted 27 January 2001 15:10
BTW, my hearing correction system runs currently on a 40 MHz SHARC DSP, and provides "loudness" compression in 64 frequency bands, independently for left and right ears. It is magnificent!

- DM

IP: Logged

pete
Member
posted 27 January 2001 19:41
Just another thought
I find it odd when people say how great our sensors are and that our eyes are so good compared to man-made cameras. I say no, it's our brains that are so great. Imagine if a TV camera were only capable of focusing on one point and had a blind spot not far off the center of the screen. What makes the eyes so great is that the brain can move the eyes and refocus on the point of interest, fill in the blind spot and any other missing bits, and compensate for all different types of light. I believe our ears change all through our lives, and that with experience and exposure our brains can compensate for most missing info. Sometimes helping the sensors out (like turning on the light if the room is dark) is necessary, but the brain is the best tool for fine tuning.

Although, contradicting everything I've just said, I've been reading for years and I still can't spell, so maybe the brain isn't all-powerful after all.

Or maybe it's just mine that isn't all-powerful?

IP: Logged

gelauffc
Member
posted 06 February 2001 06:10
quote:
Originally posted by David McClain:
I was listening to a swept oscillator the other day, through my hearing correction system, and I noticed ...
- DM

Hi David,

I take it your hearing correction system works with FIR filters! Is your impulse response length odd or even? I would expect odd (or even with a 0.0) to get an integer-sample delay. If it's even, you will get an x + 1/2 sample delay and you're in trouble.

With the Haas effect you already showed that if a sound arrives a very, very little bit earlier at one ear than the other, it will take on that direction. If you use a different impulse response for left and right, this will effectively change the arrival of the sound (and in a frequency-dependent way).

Spatial hearing is a nice subject.

IP: Logged

David McClain
Member
posted 06 February 2001 06:55
Gelauffc --

I use a hybrid of FIR and other things, but I don't get your point. With equal-length FIRs in both the left and right channels, regardless of the number of taps, the delay will be the same for each channel. It makes no difference whether I use an even number or an odd number of taps.

Am I missing something?

- DM

IP: Logged

David McClain
Member
posted 06 February 2001 06:57
Also... With FIR filters, all impulse responses have the same delay, regardless of their specific shape, as long as the number of taps remains constant...

- DM

IP: Logged

David McClain
Member
posted 06 February 2001 07:00
...and this is true at all frequencies as well. What isn't constant is the particular phase shift of any two frequency components. But the delay through the filter is independent of frequency...

Perhaps this is what you were trying to get me to see? If the phase shift of a higher frequency component is different from that of a lower frequency component, then maybe there is some kind of Haas effect? Hmm... I'll have to think of an experiment to test this idea...
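
One way to check delay-versus-frequency for a given set of taps, assuming Python with SciPy, is to compute the group delay numerically; the two 63-tap filters below are arbitrary examples, evaluated over 100 Hz to 3 kHz:

import numpy as np
from scipy.signal import firwin, group_delay

ntaps, sr = 63, 44100
h_sym = firwin(ntaps, 4000, fs=sr)              # symmetric (linear-phase) lowpass
h_asym = np.exp(-np.arange(ntaps) / 8.0)        # asymmetric, decaying impulse response
h_asym /= h_asym.sum()

w_hz = np.linspace(100.0, 3000.0, 200)
_, gd_sym = group_delay((h_sym, [1.0]), w=w_hz, fs=sr)
_, gd_asym = group_delay((h_asym, [1.0]), w=w_hz, fs=sr)

print("symmetric FIR delay (samples): min %.2f, max %.2f" % (gd_sym.min(), gd_sym.max()))
print("asymmetric FIR delay (samples): min %.2f, max %.2f" % (gd_asym.min(), gd_asym.max()))
# A symmetric FIR delays every frequency by (ntaps - 1) / 2 samples;
# an asymmetric one of the same length does not.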

- DM

IP: Logged

gelauffc
Member
posted 10 February 2001 07:10
quote:
Originally posted by David McClain:
...What isn't constant is the particular phase shift of any two frequency components. But the delay through the filter is independent of frequency...
- DM

Dear David,

Due to RSI, I am not able to write long stories here. For these subjects one really needs to, but I'll try a short version again:

1) I do understand FIR, I think. So yes, the overall delay for L and R will be the same. If it were not... what would we be discussing?
2) As I understand it, for most people with a hearing disorder the L and R ears differ in frequency response? So I could imagine you have two different impulse responses in your FIR filter for L & R?
3) If true, do not think in the frequency domain (as you are used to) but in the time domain (time lobes). The impulse responses will be different! That will cause the Haas effect to become effective. Because of the shape of the impulse, our ear will respond differently at each frequency. One gets backwards masking!
4) Let me also remind us all of the following:
Direction cues come from:
-1 sound pressure level
-2 interaural carrier delays (20 Hz ... 2 kHz)
-3 interaural envelope delays (200 Hz and up)
These cues are also somewhat non-linear; one cannot simply cancel one with another.
5) You mentioned having notches in your hearing (how long have you had them?). Keep in mind that notches in the 1 kHz ... 10 kHz range are very important for directional hearing. Think of HRTF curves! It is because of this that cheap 3D-sound effects work!

I have an extract of a book by J. Blauert called "Räumliches Hören". It could be a reference for some people who read German.

-Christiaan

IP: Logged

SSC
Administrator
posted 10 February 2001 14:01
For people who would find the English version easier to read, it is called "Spatial Hearing" by Jens Blauert, published by MIT Press, ISBN 0-262-02190-0.

This book is a very good survey of research activities into spatial hearing.

IP: Logged

Burton Beerman
Member
posted 10 June 2001 09:40
Hi:
I am new to the forum and to Kyma. Just a simple question: no problem downloading .zip files, but how does one download .kym files from the forum?
Burton Beerman


quote:
Originally posted by David McClain:
Here is a simple Sound for experimenting with the Haas Effect (loudness vs. time of arrival). ...
- DM



IP: Logged

babakool
Member
posted 11 June 2001 13:49
Hey Burton! This is Guy from the immersion. You've stumbled across one of the oddities of the forum. The files you seek from this topic are no longer there. There seems to be some termination point for this method of archiving uploads. This is one of the reasons I suggested a centralized location for permanent storage of this type of material, with just a filename link in the posts. It would also make it much easier to locate things, having them all in one place. Carla wasn't sure what the Infopop features were in this regard, so it may not be possible with this software. I may have these files but will have to look, so check back. Best regards

IP: Logged

babakool
Member
posted 14 June 2001 12:09

haas.kym


phfrq_haas.kym

 
Here are the missing files from this topic.

IP: Logged
