Topic: How best to approach this
Larry Simon (Member)

Perhaps someone has already done this. I thought it might be fun to listen to the orbit of a fractal like the Lorenz in quad sound. The listener would sit in the middle of the complex plane and listen to a sound loop back and forth around him/her.

So my first thought was to have some kind of function generator produce (x, y, velocity) triplets at a predetermined rate, representing the position and speed of the orbiting sound source. (x, y) would be used to determine the left/right/front/back balance, and velocity could be used to add a whoosh or whatever. Adding doppler might be a next step.

I guess there are several ways to approach this from a design perspective, so I'm looking for advice. The (x, y, velocity) triplets could be generated in advance and stored as three samples. The values would then be sample-accurate and 24-bit, but I doubt I would need anywhere near that much accuracy. I also don't like the idea of precomputing the values, as it might be fun to, say, nudge the flying sound in real time into a different orbit or interact in some other way. Scripts are also precomputed, so that wouldn't work. I guess that leaves some kind of independent program running on the host generating MIDI control signals representing x, y, and velocity? Not sure, but I suppose I might be able to do that in MAX. Other ideas?

Larry
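As a rough sketch of the panning half of this idea (outside Kyma; the square speaker layout, the inverse-distance gain law, and the constant-power normalization are my assumptions, not anything specified in the thread), here is one way an (x, y) position could map to four speaker gains:

```python
import math

# Assumed quad layout: speakers at the corners of a unit square,
# listener at the origin.
SPEAKERS = {
    "front_left":  (-1.0,  1.0),
    "front_right": ( 1.0,  1.0),
    "rear_left":   (-1.0, -1.0),
    "rear_right":  ( 1.0, -1.0),
}

def quad_gains(x, y):
    """Distance-based gains for a source at (x, y).

    Nearer speakers get more level; gains are normalized so the sum of
    squared gains is 1 (a rough constant-power pan).
    """
    raw = {}
    for name, (sx, sy) in SPEAKERS.items():
        d = math.hypot(x - sx, y - sy)
        raw[name] = 1.0 / (d + 1e-6)      # closer speaker -> larger gain
    norm = math.sqrt(sum(g * g for g in raw.values()))
    return {name: g / norm for name, g in raw.items()}
```

A source sitting exactly on a speaker position gets essentially all of that speaker's channel; a source at the origin gets equal level in all four.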
Larry Simon (Member)

Having thought about it a little more, it would be nice if the position computation could be done at audio rates. Then delaying each sample by the time it would take the sound to travel from the object to the observer would (correct me if I'm wrong) automatically create all the appropriate doppler-like effects.
pete (Member)

Hi Larry. Doppler is one of those strange things that most beginners think is really complex and phasey, but then you hear a lot of experts saying it's simply a pitch shift and nothing more. You will hear the sound at two different pitches at the same time, albeit at different levels. Other buildings at different positions could produce different and varying amounts of pitch shift. Even if you had a reverb that emulated the effect all these buildings have on a static sound, putting that reverb on the output of a single pitch shifter would not have the same effect as true doppler. Also, each of your ears will hear different levels from different buildings' reflections depending on which way round your head is. Hope this makes sense.
Frank Kruse (Member)

GRM Tools has a pretty cool doppler plug-in; Waves also makes a cool one, but those don't add reverb, I guess...

frank.
Larry Simon (Member)

Thanks guys, but the doppler part isn't the main issue. The key question is: is there a way in Kyma to generate an arbitrary function (in this case a complex function) in real time at audio rates? If not, then how might I precompute it and stuff it into a sample (in this case several samples, because the output of the function is a vector, not a scalar) so I can use a Function Generator or Oscillator to read and send out the values at audio rates? If that isn't easy either, then I can write a little program in Smalltalk to generate the values at control rates, but I suspect that it wouldn't be as sonically interesting, i.e. the doppler etc. would have to be explicitly faked and added on top rather than arising naturally.
SSC (Administrator)

Yes, you can generate functions at the audio rate by using networks of modules like product, mixer, difference, attenuator, and the waveshapers (to do nonlinear functions like sin or cos). Then you can use MemoryWriters and Samples (memory readers) to connect the function output back to the function input as necessary.
Larry Simon (Member)

Great, thanks. That makes sense. Suddenly the use of a whole bunch of modules I'd been scratching my head about has snapped into place. I'll try putting together a network to generate the Lorenz function and see what happens.

Larry
Larry Simon (Member)

Here's where I'm at. To recap, clarify, and correct previous wrong assumptions, there are two basic components to what I want to do (at least for now).

(a) Implement a function generator which returns the value of the Lorenz equations over time. Now that I've bothered to look them up, they aren't 2D complex functions at all; they're three simple differential equations, yielding (x, y, z) as a function of t when solved using Euler's method. I've assumed that I need to generate the (x, y, z) at audio rates for part (b) below, but I'm not sure that it's really necessary.

I understand now how to build a network of Math Sounds to compute a function. It would be nice to turn this generator into a reusable object, but I'm not sure how one would do it. Kyma assumes all Sound objects are exactly that, generators of a single audio-rate stream, whereas this object would produce three. I suppose one could implement LorenzX, LorenzY, and LorenzZ, but the variables are so intertwined that each one would have to compute all three anyway.

My bigger problem is that I need to feed back values of x, y, and z to get the next set of values. If I understand correctly, the way to do this in Kyma would be to use MemoryWriters and a delay of 1 sample to pick up the previous values. Here's the rub: a 1-sample delay means all three MemoryWriters must be running on the same CPU, but MemoryWriters are unnamed, so there can only be one per CPU. Is this right?
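For reference, the three Lorenz equations Larry describes, stepped with Euler's method, look like this in plain Python (the classic sigma/rho/beta values and the step size are conventional textbook choices, not values from the thread):

```python
def lorenz_euler(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz system:
         dx/dt = sigma * (y - x)
         dy/dt = x * (rho - z) - y
         dz/dt = x * y - beta * z
    """
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def lorenz_orbit(n, x=1.0, y=1.0, z=1.0, dt=0.005):
    """Generate n successive (x, y, z) points of the orbit."""
    out = []
    for _ in range(n):
        x, y, z = lorenz_euler(x, y, z, dt)
        out.append((x, y, z))
    return out
```

This is exactly the feedback structure the MemoryWriter/Sample network has to express: each new (x, y, z) is computed from the previous sample's (x, y, z).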
SSC (Administrator)

For the first part, MemoryWriters have a field in which to put the name for the recording being made. You use the same name in the Sample in order to make the feedback connection.

For the second part, if you implement the spatial location in terms of both an attenuation factor for each speaker and a delay time for each speaker, the doppler should occur naturally as a result of the changing delay time. (The delay time should be linearly related to the distance between the sound source and the listener, along the axis of the speaker.) There are some examples of this in the Spatialization examples in the Sound Library.
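SSC's point that a distance-driven delay time produces doppler on its own can be sketched offline like this (the sample rate, speed of sound, and linear-interpolation read are my assumptions for illustration; Kyma's own delay modules handle this internally):

```python
SR = 44100.0   # assumed sample rate (Hz)
C = 343.0      # assumed speed of sound (m/s)

def doppler_delay(signal, distances):
    """Read `signal` through a delay whose length tracks source distance.

    distances[n] is the source-to-listener distance (metres) at sample n.
    A shrinking distance shortens the delay, which compresses the waveform
    and raises the pitch: doppler falls out with no explicit pitch shifter.
    Linear interpolation supports fractional-sample delays.
    """
    out = []
    for n in range(len(signal)):
        delay = distances[n] / C * SR   # delay in samples
        pos = n - delay
        if pos < 0:
            out.append(0.0)             # sound has not reached the listener yet
            continue
        i = int(pos)
        frac = pos - i
        s0 = signal[i]
        s1 = signal[i + 1] if i + 1 < len(signal) else s0
        out.append(s0 * (1.0 - frac) + s1 * frac)
    return out
```

With a constant distance this degenerates to a plain delayed copy; only a *changing* distance produces the pitch shift.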
pete (Member)

Would it be sufficient to do the calculations at a non-sample rate (parameter rate) and rely on the delay module's built-in smoothing to fill in the gaps? Distance could be calculated using (x source - x listener) and (y source - y listener), then using Pythagoras: the square root of the new x squared plus the new y squared.
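Pete's distance calculation is just the 2-D Euclidean distance; as a one-liner:

```python
import math

def source_listener_distance(xs, ys, xl, yl):
    """Pythagoras: distance from source (xs, ys) to listener (xl, yl)."""
    dx = xs - xl
    dy = ys - yl
    return math.sqrt(dx * dx + dy * dy)
```

This per-speaker distance is what would feed the delay time (and an attenuation factor) in the spatialization scheme SSC describes above.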
Larry Simon (Member)

This has been a terrific learning experience. Sorry, I didn't mean for this to turn into a personal tutorial.

Lorenz: OK, I put most of the network together last night. I originally looked at the tutorial on MemoryWriters but not the prototype definitions. I assumed (d'oh) that the name the Sample used was just the filename, not a wavetable name, and would get ignored when the MemoryWriter checkbox was checked.

Doppler: Wow! So DelayWithFeedback does maintain an individual delay time for each sample point. I expected the delay time fields to be fixed, not variable. Very cool. Now I need to go out and buy a second stereo amp and speakers (or, seeing as the Lorenz is actually 3D, an 8-speaker setup would be best...). I'll post the file when I finish it off.

Pete: Yes, I don't think the Lorenz really needed to be at the audio rate, but I wanted to see how it would be done, and also find out what it sounds like as a sound source in and of itself.
Larry Simon (Member)

The function has to be constrained to (-1, 1), right? I've been playing with the parameters of the function in a spreadsheet to try to pick values that will keep the function in that range, and haven't had much luck. It is chaotic, after all; I guess it'll wander where it wants. Any slick tricks for dealing with arithmetic on a larger range?
SSC (Administrator)

Yes, it is true that the signal values are in the -1 to +1 range. Usually we get around this by generating a scaled output value and then scaling back up by the same amount when the signal is used (in feedback, for instance).
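SSC's scaling trick, sketched in Python rather than as a Kyma network (the factor of 1/64 is my own guess at enough headroom, since the Lorenz attractor typically stays within roughly +/-50): the state is *stored* in scaled form so the signals fit in (-1, 1), and each *use* of a state variable multiplies back up by the inverse scale.

```python
SCALE = 1.0 / 64.0   # assumed headroom factor, not a value from the thread

def lorenz_step_scaled(xs, ys, zs, dt=0.005,
                       sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step where the stored state (xs, ys, zs) is pre-scaled
    into (-1, 1). Scale back up where the equations need true values,
    then scale the result back down before storing/feeding back."""
    x, y, z = xs / SCALE, ys / SCALE, zs / SCALE   # recover true range
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    # store the new state in scaled form, inside (-1, 1)
    return (xs + dt * SCALE * dx,
            ys + dt * SCALE * dy,
            zs + dt * SCALE * dz)
```

In the Kyma network the same idea means putting an Attenuator of SCALE on the value written to memory and a gain of 1/SCALE wherever the fed-back sample is read.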
This forum is provided solely for the support and edification of the customers of Symbolic Sound Corporation.