Kyma Forum
  Kyma Support
  developing some sounds

Author Topic:   developing some sounds
garth paine
Member
posted 03 September 2001 17:13
Hi all,

I posted a sound in the Sound Exchange a week or so ago that I am developing, in the hope of getting some advice about ways of creating sounds with different characteristics and more potential. No one has checked it out, so I am just asking here whether you have seen it. Perhaps it's too boring? If so, that's fine, but I thought maybe I had just posted it in the wrong place, so I thought I'd place a note here as well. The entry in the Sound Exchange is called 'Plant Sounds'.

Cheers


Bill Meadows
Member
posted 03 September 2001 20:27
Well, I did try it out the other day, but I wasn't quite sure I understood it. It seemed like you were creating a synthetic input for an inverse FFT.

I'll give it another go...

Perhaps you could follow-up with a brief explanation of how it works.


garth paine
Member
posted 03 September 2001 23:31
Bill wrote:
<b>Well, I did try it out the other day, but I wasn't quite sure I understood it. It seemed like you were creating a synthetic input for an inverse FFT.
I'll give it another go...
Perhaps you could follow-up with a brief explanation of how it works.</b>

Yep - I am interested in exploring a number of different ways of feeding an IFFT to generate as broad a palette of sounds as possible. I am particularly interested in using methods that I can control using realtime input - in this case I have 2 small weather stations that output wind speed/wind direction/solar radiation/temperature data, and I am interested in using that data and giving it a voice. I see this as a voice for the internal processes of the plant.

I don't have a lot of synthesis theory under my belt, so I am looking for ideas from members of the list that I can go and research, try out, etc. - something to point me in new directions. I have worked with the pulse as input to an IFFT, and I thought perhaps I could also look at using other sources (not realtime spectra at this stage) to generate sounds that have more depth and variability than the ones I have got happening in the sample sounds, which as you rightly point out are rather simplistic at this stage.

I hope that helps clarify my intentions.


David McClain
Member
posted 04 September 2001 20:45
Hi Garth,

I downloaded it too and it got me thinking about what you wanted to accomplish... So it was anything but boring...

Trying to feed an IFFT is tricky because to be meaningful in the time domain, you pretty much need to load up the first few samples of each frame with information -- that's the bass end where we hear most things. Once you get to the mid point of the sample period and later, you are working at frequencies above 10 KHz and this is tough to hear for some of us.
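To put rough numbers on this: bin k of an N-point frame at sample rate sr corresponds to k * sr / N Hz, so only the first handful of bins sit in the bass range where we hear most things. A small Python sketch (the frame size and sample rate here are illustrative assumptions, not values from the thread):

```python
# Frequency represented by each bin of an N-point (I)FFT frame:
# bin k <-> k * sr / N Hz, up to the Nyquist bin at k = N/2.

def bin_frequency(k, fft_size, sample_rate):
    """Frequency in Hz carried by bin k of an fft_size-point frame."""
    return k * sample_rate / fft_size

N = 1024          # assumed frame size
sr = 44100.0      # assumed sample rate

# The first few bins carry the bass end of the spectrum...
low_bins = [bin_frequency(k, N, sr) for k in range(4)]   # ~0, 43, 86, 129 Hz

# ...while by a quarter of the way through the frame you are
# already above 11 kHz, and the midpoint is the Nyquist frequency:
quarter = bin_frequency(N // 4, N, sr)   # 11025.0 Hz
nyquist = bin_frequency(N // 2, N, sr)   # 22050.0 Hz
```

So an input frame that is mostly energy in its later cells produces mostly high-frequency output, which matches what David describes hearing.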

The folks at Native Instruments have a tool that takes sample files, performs an FFT, and then sends each of the FFT cells into varying duration delay lines with feedback. Then they IFFT the results to get bizarre sounds. They call it their "Spektral Delay".

I have been toying around with making Kyma do something similar, but it requires some serious memory pipelines -- one delay per FFT cell. That's not exactly Kyma's forte, but I'll bet there is a way to do it. How about a second FFT on the FFT cells, convolve with a waveshape having several spikes and IFFT back to the frequency domain, then IFFT again back to time.

This is not exactly the same thing, but it is strongly reminiscent of something called the Cepstrum. Cepstral processing can be used to remove reverberation, for example. I'll bet it could be used to plant some in as well.
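For what it's worth, the cepstrum idea can be sketched numerically: the real cepstrum is the inverse transform of the log magnitude spectrum, and an echo at delay d shows up as a peak at "quefrency" d - which is why cepstral processing can locate (and in principle remove, or plant in) reverberation. A toy pure-Python illustration with a naive DFT and a made-up test signal:

```python
import cmath
import math

def dft(x):
    """Naive forward DFT (fine for toy sizes)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def real_cepstrum(x):
    """Real cepstrum: inverse transform of the log magnitude spectrum."""
    log_mag = [math.log(abs(Xk)) for Xk in dft(x)]
    return [c.real for c in idft(log_mag)]

# Toy signal: an impulse plus a single echo 8 samples later at half gain.
N, delay, gain = 64, 8, 0.5
x = [0.0] * N
x[0] = 1.0
x[delay] = gain

c = real_cepstrum(x)
# The echo appears as the dominant cepstral peak at quefrency == delay,
# so zeroing that region and resynthesizing is the classic dereverb trick.
peak = max(range(1, N // 2), key=lambda n: c[n])   # -> 8
```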

Just some thoughts here....

- DM


garth paine
Member
posted 05 September 2001 16:46

Reeds3.KYM

 
I am converting some SuperCollider code into Kyma. The SC code goes:

src = CombN.ar(Impulse.ar(in.poll(in5Args), in.poll(in5Args1)), dur, dur, 3, (amp3.kr*in.poll(in3Args1)))

out = IFFT.ar(fftsize, 0, cosineTable, nil, window, src,0)

CombN(input, MaxDelay, DelayTime, DecayTime to -60db)

IFFT(size, offset, costable, inputWindow, outputWindow, realInput, imaginaryInput)

So the CombN has a delay time - I have been looking at using a ReverbElement to replicate this, as suggested by SSC, and that certainly got closer to the SC code; the delay line makes the sound fuller. However, it seems to me that creating an artificial input for the IFFT limits the output to a certain aesthetic quality, and I am interested in widening this aesthetic.

As I am working on an installation that will gather data from some weather stations I have, it seems a nice conceptual model to use the incoming data (every 90ms) as the pulse data for the IFFT process. Using a PulseTrain allows me to put the weather station data in as pulse frequency - but when I vary the dutyCycle, which I thought just changed the width of the pulse, it just goes to noise. Anyway, I have tried a ReverbElement on the pulse before the IFFT - sound attached here.

So I am wondering about your idea, David - whether one could make the delay time in the ReverbElement random in some way. But as you comment, we really need a separate delay line per group of data input. I have been avoiding using samples as input because I really want to auralise the weather station data directly.

For more info on my use of the weather stations, and my Reeds installation, see http://www.activatedspace.com.au/Installations/Reeds/


David McClain
Member
posted 05 September 2001 18:56
Hi Garth,

Well, listening to your Web sounds, I can hear the effects of filling in the aft portions of the arrays feeding the IFFT. Lots of high frequency energy. But the pulses and gurgles sound great!

Since I am not a SuperCollider user (you Mac types!) I don't know what the Comb actually does. Can you describe it for me? I think I hear what you are using it for in the reverberant glassy pings. Is it simply a wavetable with spikes at several locations? Or is it a real comb filter with nice rounded humps for the nodes and loops?

One never realizes that a little pond could be so active!

- DM

Actually, after putting a recording of your ReedSounds through the spectrum analyzer, it looks quite unlike human created musical sound. There is nearly equal representation, over the long term, of every spectral frequency from 0 to 11 KHz (The recording from the Web site was 22.05 KHz sample rate).

I would like to understand better how you convert your weather instrument readings into sample values for use in the SC Comb and IFFT routines. I imagine these are 8 bit samples, but what sample rate, and how do you decide when to start a new FFT frame?

- DM

[This message has been edited by David McClain (edited 05 September 2001).]


garth paine
Member
posted 05 September 2001 19:30

Reeds020701_4.txt

 
Perhaps this helps.

Comb delay line. CombN uses no interpolation, CombL uses linear interpolation, CombA uses all-pass interpolation.

in - the input signal.
maxdelaytime - the maximum delay time in seconds; used to initialize the delay buffer size.
delaytime - delay time in seconds.
decaytime - time for the echoes to decay by 60 decibels. If this time is negative then the feedback coefficient will be negative, thus emphasizing only odd harmonics at an octave lower.
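Reading that documentation, the comb is only a few lines of arithmetic. Below is a plain Python rendering (not SC or Kyma code) as a minimal sketch, assuming the documented 60 dB decay convention to derive the feedback coefficient:

```python
import math

def comb_n(x, sample_rate, delay_time, decay_time):
    """Non-interpolating comb (CombN-style): y[n] = x[n] + g * y[n - D].

    g is chosen so the echoes decay by 60 dB (a factor of 1000) over
    decay_time, matching the documentation above. A negative decay_time
    gives negative feedback (odd harmonics an octave lower).
    """
    D = int(round(delay_time * sample_rate))
    g = math.copysign(0.001 ** abs(delay_time / decay_time), decay_time)
    y = [0.0] * len(x)
    for n in range(len(x)):
        feedback = y[n - D] if n >= D else 0.0
        y[n] = x[n] + g * feedback
    return y

# An impulse in gives an evenly decaying echo train out, one echo every
# D samples -- the "nice even decay slope" described in the SC example.
sr = 1000.0   # toy sample rate, arbitrary
x = [0.0] * 50
x[0] = 1.0
y = comb_n(x, sr, delay_time=0.01, decay_time=0.03)   # D = 10 samples
# y[0], y[10], y[20] come out as approximately 1.0, 0.1, 0.01
```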

So looking at
dur = fftsize/Synth.sampleRate;
src = CombN.ar(Impulse.ar(in.poll(in5Args), in.poll(in5Args1)), dur, dur, 3, (amp3.kr * in.poll(in3Args1))); //in5Args

// inverse transform
out = IFFT.ar(fftsize, 0, cosineTable, nil, window, src, 0);

The IFFT is fed an audio rate signal.

this example shows a basic use of CombN:
{ CombN.ar(Impulse.ar(300), 0.02, XLine.kr(0.0001, 0.02, 20) , 0.01) }.scope;

It illustrates a Comb filter with no interpolation, with an impulse as its input - the delays from the CombN.ar take on a nice even decay slope from the initial impulse. The XLine reduces the delay time, so the output moves closer and closer to the original pulse only.

This code seems to make a more mellow sound than the sound I am getting out of the Kyma, and one with more variation, but I can't see why.

I have uploaded the SuperCollider code so you can see more of what is happening in the SC patch - the instruments are contained in the sig1, sig3, sig5, sig6 functions. A lot of this is made from altered examples supplied with SC, so the process of converting it to Kyma makes a good opportunity to come to understand the algorithms in more detail, and develop my very limited synthesis knowledge. You will see a few other uses of IFFT in sig3, sig5, sig6.

I am keen to make some variations of the sounds more organic/rounded/mellow in some way, so they are not quite as intense to listen to.

Thanks for your ideas, help etc - It is really valuable to have a forum to discuss the ideas (as basic as they may be) so I can learn and develop these sounds into something useful.


David McClain
Member
posted 05 September 2001 19:48
Okay Garth,

So the Comb filter really is a comb filter in the classical sense. All it really seems to be is our usual Kyma VariableDelay with feedback.

Feed a signal into a Kyma delay with feedback, interpolated or not, and you get resonant peaks wherever the delay time allows for constructive interference at the input. If the feedback is positive, this means that the delay in samples equals a multiple of the period. For negative feedback, an odd multiple of the half-period. For a delay length of 441 samples at a sample rate of 44.1 kHz (a 10 ms delay), that means resonances at 100, 200, 300, 400, ... Hz for positive feedback, or 50, 150, 250, ... Hz for negative feedback.

But regardless, that's why it sounds more mellow. A ping into a comb filter should sound like rapping a tube.

So, you may have said as much already, but I see that you are simply comb filtering impulsive signals and feeding them as your real-input to the IFFT. Your imaginary input is zero, which is fine. Kyma takes anything for imaginary input and ensures that you get a real temporal signal as output (they are enforcing Hermitian symmetry, or put another way, the Kyma IFFT is a kind of inverse Hartley transform).
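The Hermitian-symmetry point can be illustrated numerically: forcing X[N-k] = conj(X[k]) on an arbitrary spectrum guarantees a purely real time signal from the inverse transform. A toy pure-Python sketch with a naive inverse DFT (the sizes and spectrum values are made up, and this is only a model of the behaviour, not Kyma's actual implementation):

```python
import cmath
import math
import random

def idft(X):
    """Naive inverse DFT (fine for toy sizes)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def hermitian_symmetrize(X):
    """Force X[N-k] = conj(X[k]) so the inverse transform is purely real.

    Bin 0 (DC) and bin N/2 (Nyquist) are their own mirror images,
    so their imaginary parts are dropped.
    """
    N = len(X)
    Y = list(X)
    Y[0] = complex(X[0].real, 0.0)
    if N % 2 == 0:
        Y[N // 2] = complex(X[N // 2].real, 0.0)
    for k in range(1, (N + 1) // 2):
        Y[N - k] = Y[k].conjugate()
    return Y

random.seed(1)
N = 16
X = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]

x_raw = idft(X)                        # generally complex in the time domain
x_sym = idft(hermitian_symmetrize(X))  # purely real in the time domain

worst_raw = max(abs(v.imag) for v in x_raw)   # noticeably nonzero
worst_sym = max(abs(v.imag) for v in x_sym)   # numerical noise only
```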

So one idea for you is to split your signal in some fashion, i.e., time delay or time reversal and feed that other component in as the imaginary component for the IFFT. I have no idea how this will sound! One thing it will do is create some phase shifts and temporal delays in the sounds for you. I think the reverb idea ahead of the IFFT is a neat one too.

You are really in uncharted waters here!

[Another idea is to try some time varying, i.e., modulated, filtering on the output of the IFFT, with a slope between 6 and 12 dB per octave. That will give it a more humanly recognized spectral character -- if that even matters here!...

One of the reasons I say this here is that our hearing is peaked around 2-3 KHz. So when the average spectrum is white as in your Web music, we tend to hear it as peaked in the 2 - 3 KHz range. You have representation at all frequencies, so by modulating a filter cutoff you emphasize different frequencies for our human hearing.]
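As a rough numerical illustration of that bracketed idea (not Kyma code): a one-pole lowpass rolls off at about 6 dB per octave above its cutoff, and feeding it a per-sample cutoff sequence gives the kind of modulated filtering described. The cutoff and LFO values below are arbitrary assumptions:

```python
import math

def one_pole_lowpass(x, sample_rate, cutoff_hz):
    """One-pole lowpass (~6 dB/octave rolloff above cutoff).

    cutoff_hz may be a single number, or a sequence with one value
    per sample to modulate the cutoff over time.
    """
    if isinstance(cutoff_hz, (int, float)):
        cutoff_hz = [float(cutoff_hz)] * len(x)
    y, state = [], 0.0
    for xn, fc in zip(x, cutoff_hz):
        a = 1.0 - math.exp(-2.0 * math.pi * fc / sample_rate)
        state += a * (xn - state)   # y[n] = y[n-1] + a*(x[n] - y[n-1])
        y.append(state)
    return y

def rms(sig):
    return math.sqrt(sum(v * v for v in sig) / len(sig))

sr = 44100.0
n = int(sr * 0.2)
low = [math.sin(2 * math.pi * 100.0 * i / sr) for i in range(n)]
high = [math.sin(2 * math.pi * 5000.0 * i / sr) for i in range(n)]

# With a fixed 500 Hz cutoff, the 5 kHz tone is attenuated far more
# than the 100 Hz tone.
ratio = rms(one_pole_lowpass(low, sr, 500.0)) / rms(one_pole_lowpass(high, sr, 500.0))

# An LFO sweeping the cutoff (here 200 Hz .. 2 kHz at 2 Hz) emphasizes
# different parts of a white-ish spectrum over time.
lfo_cutoff = [1100.0 + 900.0 * math.sin(2 * math.pi * 2.0 * i / sr) for i in range(n)]
swept = one_pole_lowpass(high, sr, lfo_cutoff)
```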

- DM


[This message has been edited by David McClain (edited 05 September 2001).]



garth paine
Member
posted 05 September 2001 19:55
OK, cool - I'll choof off then and have a go at that. One thing I can't see is where the imaginary component is - where do I find it? How do I put something in as an imaginary component?


David McClain
Member
posted 05 September 2001 20:05
You just feed them into the IFFT in Kyma as left channel = real input, right channel = imaginary input. Use a ChannelJoin Sound for this and feed separate signals to the left and right channels....

Good Luck, and let us hear what you develop!

- DM


All times are CT (US)


This forum is provided solely for the support and edification of the customers of Symbolic Sound Corporation.


Ultimate Bulletin Board 5.45c