Kyma Forum
  Kyma Support
  evolving algorithms

Author Topic:   evolving algorithms
garth paine
Member
posted 17 March 2002 16:45
hi all,

I have been thinking about interactivity - much of my work is for interactive installations, and I am interested in making algorithms that evolve according to a pattern of input. I am trying to move towards a model more like conversation, where neither party is quite sure what they will be speaking about in a few moments, because it evolves in relation to both parties' input and their own experience. Hence I am thinking that in order for my pieces to be really interactive, the source algorithms themselves need to change over time based on the input patterns. How could one do this in Kyma? I have thought a bit about the Script objects etc., but they would start new algorithms, not alter existing ones - for instance, they would not make a new algorithm based on the current one but with some alterations. How would Kyma know what it could do to change the algorithm? I don't want it to be entirely chaotic.

Any thoughts welcome. BTW, I don't know Smalltalk, but if it were possible to deal with this through Smalltalk, then I would have a look at it - though I would need some guidance and probably some examples of how to approach it.


Burton Beerman
Member
posted 17 March 2002 18:24
Check out the Markov chain Sounds under the Script examples as a place to begin.
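For readers without the Kyma Script examples at hand, the idea behind a Markov chain of Sounds can be sketched outside Kyma. This is a minimal illustration in Python, not Kyma code; the state names and transition weights are invented for the example:

```python
import random

# A first-order Markov chain over named sound "states".
# The states and weights here are illustrative only.
TRANSITIONS = {
    "sine":  {"sine": 0.6, "noise": 0.3, "grain": 0.1},
    "noise": {"sine": 0.2, "noise": 0.5, "grain": 0.3},
    "grain": {"sine": 0.4, "noise": 0.4, "grain": 0.2},
}

def next_state(current):
    """Pick the next state according to the current state's weights."""
    states = list(TRANSITIONS[current])
    weights = [TRANSITIONS[current][s] for s in states]
    return random.choices(states, weights=weights, k=1)[0]

def walk(start, steps):
    """Generate a sequence of states: an evolving 'score'."""
    seq = [start]
    for _ in range(steps):
        seq.append(next_state(seq[-1]))
    return seq
```

The point of the Markov model is that what happens next depends on what is happening now, which is one simple way to get the "conversational" drift described in the original post.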


garth paine
Member
posted 17 March 2002 21:30
Thanks for the Markov suggestion - I looked at the scripts, and they seem to generate input for variables within the algorithm, which is all good and fine, but I want to construct new algorithms on the fly - actually make sound files which would then be played. These sound files would be an evolution of the ones they replace. I wish to set some conditions, i.e. if wind speed is over 30 knots, change the sine wave generator to a white noise input... I don't actually know the pragmatics of what I want to do yet, just the idea I am exploring; if I can see how to do it, I will then look for sonically interesting outcomes.
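The wind-speed condition in the post (over 30 knots switches sine to white noise) is essentially a mapping from a sensor reading to a choice of source. A hedged Python sketch of that mapping - the function names and the toy synthesis are invented for illustration, not how Kyma would do it:

```python
import math
import random

def source_for(wind_speed_knots):
    """Choose a sound source from a sensor reading.
    The 30-knot threshold follows the example in the post."""
    return "white_noise" if wind_speed_knots > 30 else "sine"

def render_sample(source, t, freq=440.0):
    """One sample of the chosen source (toy synthesis for illustration)."""
    if source == "sine":
        return math.sin(2 * math.pi * freq * t)
    return random.uniform(-1.0, 1.0)  # white noise
```

In Kyma terms, as the SSC reply below the original thread explains, this kind of real-time conditional belongs in CapyTalk parameter expressions rather than in a Script.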


SSC
Administrator
posted 18 March 2002 10:11
Although the Script module is good for algorithmically creating "patches" like the ones you make by hand in the Sound editor, it does all of its computation at "compile time", *before* the sound starts to play.
In order to create Sounds that you can interact with *while* they are playing, there are several approaches:

"CapyTalk" expressions (the event language you use in parameter fields) are evaluated in real time on the Capybara. So any logical expressions, random number generating, and parameter controls you create in CapyTalk will be evaluated *while* the Sound is playing and can depend upon audio and MIDI input.

You can also create logical operations using Sounds. For example, you can think of a 1 as true and a 0 as false. Then you can use things like the FrequencyDetector to trigger something to occur when a certain frequency is detected.
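The 1-as-true, 0-as-false idea means boolean logic becomes simple arithmetic on sample values. A minimal Python sketch of that correspondence (the `detector` function is a made-up stand-in for a gate output like the FrequencyDetector's, not its actual API):

```python
def AND(a, b):
    """Both gates high: multiplication acts as logical AND on 0/1."""
    return a * b

def OR(a, b):
    """Either gate high: addition, clipped back to the 0/1 range."""
    return min(1, a + b)

def NOT(a):
    """Invert a 0/1 gate."""
    return 1 - a

def detector(level, threshold=0.5):
    """Hypothetical gate: outputs 1 when a measured quantity
    crosses a threshold, else 0 (loosely analogous to using a
    FrequencyDetector's output as a trigger)."""
    return 1 if level >= threshold else 0
```

Because the "logic" is just multiplication and addition, it can run at audio rate inside the signal flow, which is what makes this trick useful.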

You can also use WaitUntils and Markers in the Timeline to control when to move between different combinations of algorithms (in a linear or nonlinear order).


garth paine
Member
posted 18 March 2002 15:38
I thought I might provide some background to this idea. One of my principal interests is making interactive realtime environments. I have been doing some thinking about "interactivity" and, following the conversation model, thinking about ways that, like a conversation between two people, the sounds can evolve - that is, they may go in one direction on the basis of a weighting in the conversation, then change when someone else comes to chat (not literally of course), or take off in another direction given a change in the weight of the first conversation. When we talk to someone - just a friendly chat - we really don't know what we will be saying in 10 minutes; it depends on the nature of the conversation. So I would like to find ways of making sounds like that, but the way I see it at the moment, that would need the construction of new algorithms on the fly. I get the impression from SSC's response that that is not possible in Kyma? I thought I could use scripts to instantiate and dispose of sound files while the Capy was running? What about adding functionality to an algorithm - patching in an additional element or removing it on the fly? Would this still need to be instantiated at compile time?

Could I get past the compile-time limitations using Smalltalk?


SSC
Administrator
posted 19 March 2002 09:36
You could create a Mixer (or a Timeline) with the "universe" of all possible elements. Then you could algorithmically control which elements are audible at any particular time. A single Sound object with different parameter settings can produce vastly different acoustic output, and the way the parameters evolve can be determined by the conversation algorithm.

In some ways, this is like defining the personalities and voices of the people in the room and then modulating their voices with the conversation.

It seems likely that you would want to draw, in real time, from a set of algorithms that you had pre-developed in the studio in non-realtime, since most algorithmically generated signal flow diagrams would not produce interesting results. So philosophically, it is the same as dropping them in and out in real-time (even though, computationally, having all of them in a mixer is less efficient). Having a mixer that runs continuously while elements are dropped in and out is easy to do on a single processor but a non-trivial problem to solve on multi-processor architectures.
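The mixer-of-all-elements approach can be sketched as a set of amplitude weights that a "conversation" algorithm steers over time. The class and method names below are invented for illustration; the drift rule is one arbitrary choice of how a conversation weighting might modulate the always-running elements:

```python
# A "universe" mixer: every element is always present; the
# conversation algorithm only changes its amplitude weight.
class UniverseMixer:
    def __init__(self, elements):
        # Start with everything silent except the first element.
        self.levels = {name: 0.0 for name in elements}
        self.levels[elements[0]] = 1.0

    def converse(self, favored, rate=0.1):
        """Drift each element's level toward 1 if currently
        favored by the conversation, else toward 0."""
        for name in self.levels:
            target = 1.0 if name == favored else 0.0
            self.levels[name] += rate * (target - self.levels[name])

    def mix(self, samples):
        """Weighted sum of one sample from each element."""
        return sum(self.levels[n] * s for n, s in samples.items())
```

This mirrors the point in the reply: nothing is instantiated or disposed of at run time; the "universe" is fixed at compile time, and only the weights evolve.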

A Timeline containing all possible elements (with a WaitUntil in the track above each one and a Marker at the start time of each), is the ideal solution right now for defining a universe of elements and then jumping at will between collections of algorithms and modulating their parameters.


All times are CT (US)


This forum is provided solely for the support and edification of the customers of Symbolic Sound Corporation.

