Kyma Forum
Tips & Techniques
Topic: spectral processing / using FFT vs. LiveSpectralAnalysis & OscillatorBank
johannes (Member)

Hello, I'm new to Kyma. I'm trying to understand the basic concepts of spectral processing in Kyma and how they compare to similar processes in Max and SuperCollider. I have worked a lot with IRCAM's FTM library for Max, which offers nice ways to store, access, and process spectral data via matrices. In Kyma it seems to be a bit different. :)

From what I understand, in Kyma we can use the LiveSpectralAnalysis or SpectrumInRam Sound to get spectral data, process it arithmetically, and then feed it into an oscillator bank. Or we can use the FFT Sound to transform a signal from the time domain to the frequency domain or the other way around; here we work with real and imaginary data that can be converted to bin amplitudes and bin phases, right?

What would be the best approach when dealing with sound files that also contain broadband noise components, like the sound of crashing glass? Is it possible to treat tonal components separately from noise components? That would make sense for transposition. Of course I saw Pete's KISS 2011 presentation. Any papers, example Sounds, or general ideas? Thanks a lot.
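[Editor's note: the real/imaginary-to-amplitude/phase conversion asked about here is just the polar form of each complex FFT bin. A minimal NumPy sketch for readers following along outside Kyma; this is illustrative Python, not Kyma code, and the function names are made up for the example.]

```python
import numpy as np

def bins_to_amp_phase(real, imag):
    """Polar form of each complex FFT bin: per-bin amplitude and phase."""
    c = np.asarray(real) + 1j * np.asarray(imag)
    return np.abs(c), np.angle(c)

def amp_phase_to_bins(amp, phase):
    """Inverse conversion: back to real and imaginary parts."""
    c = np.asarray(amp) * np.exp(1j * np.asarray(phase))
    return c.real, c.imag
```

The round trip is lossless, which is why spectral arithmetic can be done in whichever representation is more convenient.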
SSC (Administrator)

You could try experimenting with the SpectrumModifier and the Spectrum File editor as ways to modify your spectra before resynthesis.
johannes (Member)

Thanks for the reply; I will look at the Spectrum File editor. I found the SpectrumVoicedUnvoiced prototype, but using it, it does not seem possible to control the amplitudes of the noise and sinusoidal components individually (as in the RX2 Deconstruct module or IRCAM's SuperVP phase vocoder). Is there a workaround?

I would also like to apply the phase-randomization effect (used within phase vocoders) to a LiveSpectralAnalysis & OscillatorBank combination. I wonder if we could emulate this effect by adding (frame by frame) small random offsets to the frequencies of the individual oscillators in our OscillatorBank? Any experience with that? Thanks, Johannes
SSC (Administrator)

That prototype switches on or off according to whether a segment is voiced or unvoiced, so there is no separate amplitude control for each component. You could use two copies of the module, one generating only the voiced segments and the other only the unvoiced, and put a Level on each of them to achieve your level control.
Yes, you could mix a Noise module in with the right channel of the spectrum. This would add a different random number to each partial on each frame. Are you coming to KISS in Brussels? I think Pete Johnston's talk is going to be on spectral manipulation again this year.
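[Editor's note: the frame-by-frame random-offset idea can be sketched outside Kyma. The following is illustrative Python, not the Noise-into-the-frequency-channel patch itself; `resynth_with_jitter` and its parameters are hypothetical names chosen for the example. Each frame, every partial of an additive resynthesis gets a fresh random frequency offset, with phase accumulated so partials stay continuous across frames.]

```python
import numpy as np

def resynth_with_jitter(freqs, amps, sr=44100, frame_len=512, depth=5.0, seed=0):
    """Additive resynthesis from per-frame partial tracks, adding a fresh
    random offset (uniform in +/- depth Hz) to every partial on every frame,
    as a rough stand-in for phase-vocoder phase randomization.

    freqs, amps: arrays of shape (n_frames, n_partials).
    """
    rng = np.random.default_rng(seed)
    n_frames, n_partials = freqs.shape
    phase = np.zeros(n_partials)          # running phase of each partial
    out = np.empty(n_frames * frame_len)
    t = np.arange(frame_len)
    for f in range(n_frames):
        jittered = freqs[f] + rng.uniform(-depth, depth, n_partials)
        inc = 2 * np.pi * jittered / sr   # phase increment per sample
        phases = phase[None, :] + inc[None, :] * (t[:, None] + 1)
        out[f * frame_len:(f + 1) * frame_len] = (amps[f] * np.sin(phases)).sum(axis=1)
        phase += inc * frame_len          # carry phase into the next frame
    return out
```

The audible "right range" for `depth` depends on the analysis frame rate, which matches Johannes's later question about finding the right randomization range.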
johannes (Member)

Ah, OK, I think we are talking about two different things; sorry if I was a bit imprecise. I want to control the amplitudes of the noise AND the sinusoidal components within a segment, and repeat this process over n successive segments. Regarding the phase randomization: yes, I should definitely think about coming to KISS 2013 to hear more about spectral processing from Pete.
SSC (Administrator)

Yes, there's a scale factor control built into the Noise module already.
johannes (Member)

"We don't separate the noise and the harmonics in each frame so you would have to detect the noise using some other criteria." Do you have an idea how I could do that in Kyma?

"Yes, there's a scale factor control built into the Noise module already." Perfect!
johannes (Member)

Is there an example Sound that demonstrates how the SpectrumVoicedUnvoiced prototype can be used? I do not understand where (into which Sound) to feed the output of the SpectrumVoicedUnvoiced prototype in order to control the amplitudes of the voiced and unvoiced segments individually. Besides that, I am still very curious about separating and controlling the amplitudes of the noise and the harmonics in each frame. Has someone already tried to achieve that and would like to share? Thanks, Johannes
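[Editor's note: since the analysis itself does not separate noise from harmonics, the split has to come from some other criterion, as SSC said. One common criterion, sketched here in illustrative Python (not a Kyma feature; both function names are hypothetical): treat bins that form sharp local spectral peaks as tonal and everything else as noise, then scale each group independently per frame.]

```python
import numpy as np

def split_tonal_noise(mag, ratio_db=8.0):
    """Per-frame boolean mask marking 'tonal' bins: local spectral maxima
    that stand ratio_db above the mean of their two neighbours.
    mag: magnitude spectrogram of shape (n_frames, n_bins)."""
    thresh = 10 ** (ratio_db / 20)
    left = np.roll(mag, 1, axis=1)
    right = np.roll(mag, -1, axis=1)
    tonal = (mag > left) & (mag > right) & (mag > thresh * (left + right) / 2)
    tonal[:, 0] = tonal[:, -1] = False    # edges wrapped by roll: exclude
    return tonal

def rebalance(mag, tonal_gain, noise_gain, ratio_db=8.0):
    """Scale the tonal and noise bins of each frame independently."""
    tonal = split_tonal_noise(mag, ratio_db)
    return np.where(tonal, mag * tonal_gain, mag * noise_gain)
```

This crude peak test is roughly what sinusoidal-plus-residual models do before refining the peaks; in a Kyma patch the equivalent decision would have to be built from comparisons on the spectral data.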
Peripatitis (Member)

Hey Johannes. I might be wrong, but I think the SpectrumInRam and LiveSpectralAnalysis Sounds output amplitude and frequency instead of the magnitude and phase that the FFT does in Max. I have experimented with placing an allpass filter or delays after the frequency output, but the result completely changes the 'pitch' of the sound. Adding a small amount of noise, though, works nicely. Perhaps limiting the output of a spectral modification of the frequency back to the 0 to 1 range might be needed as well (in Max there is the phasewrap object). I don't know how FTM handles FFTs, but matrix storage and processing is possible through Jitter as well. My guess is that you could probably program your own SpectrumVoicedUnvoiced Sound in Kyma too.
johannes (Member)

Hey Peripatitis, thanks for your feedback. "I might be wrong but I think the SpectrumInRam and LiveSpectralAnalysis Sounds output amplitude and frequency instead of the magnitude and phase that the FFT does in Max." Of course, but from what I know, the phases in the phase vocoder are used to estimate the frequency of each FFT bin, so adding noise to the frequency output of the SpectrumInRam and LiveSpectralAnalysis Sounds probably leads to a similar result: randomization of the frequencies within a specific range. But I guess the problem here is finding the right range. I will read Dolson's "Phase Vocoder Tutorial" and see if that helps: http://www.panix.com/~jens/pvoc-dolson.par Thanks, Johannes
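[Editor's note: the phase-to-frequency estimation referred to here is the central trick of Dolson's tutorial: the phase advance between two successive analysis frames, minus the advance expected for the bin's centre frequency, reveals each bin's true frequency. An illustrative NumPy sketch; `bin_frequencies` is a hypothetical name, and the frames are assumed to come from `numpy.fft.rfft` with a hop of `hop` samples.]

```python
import numpy as np

def bin_frequencies(frame_prev, frame_cur, hop, sr):
    """Estimate the true frequency in each FFT bin from the phase advance
    between two successive complex rfft frames (the phase-vocoder trick)."""
    n_bins = len(frame_cur)
    fft_size = 2 * (n_bins - 1)
    bin_freqs = np.arange(n_bins) * sr / fft_size           # nominal centres
    expected = 2 * np.pi * np.arange(n_bins) * hop / fft_size
    dphi = np.angle(frame_cur) - np.angle(frame_prev) - expected
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi             # wrap to [-pi, pi)
    return bin_freqs + dphi * sr / (2 * np.pi * hop)        # deviation in Hz
```

The wrap step plays the role of Max's phasewrap object mentioned above; without it, the estimated deviation can jump by whole cycles.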
All times are CT (US)
This forum is provided solely for the support and edification of the customers of Symbolic Sound Corporation.