Chapter 3: Conversions

3.1 Working with sampleprecision

Here is the example from paragraph 1.4 again. We want a small feedback loop. The problem described in paragraph 1.4 was that the audiorate works with buffers, but sometimes we do not want to work with buffers. Sometimes we want a small feedback loop, which requires that we compute only one or two samples, and then feed them back into the same algorithm.

The second picture shows how a very small feedback loop should be made in CPS. We simply convert from audiorate to controlrate and back again with 'audioControl' and 'controlAudio'. With these two objects in place, every sample comes out at controlrate. 'controlAudio' is like a bag for samples; when it is full, the samples continue at audiorate again. The example above does not work in CPS; the example at the right does. The wire from the output of the '*' to the first input of the '+' runs behind both objects.
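Why this loop needs per-sample processing can be sketched in a few lines of plain Python (this is illustrative code, not CPS; the function name and parameters are made up). A feedback-FM oscillator needs its own previous output sample before it can compute the next one, so it cannot be computed a whole buffer at a time:

```python
# Sketch (not CPS code): a one-sample feedback loop, the classic
# FM oscillator whose own last output modulates its phase.
import math

SAMPLERATE = 44100

def fm_feedback(freq, beta, n_samples):
    """Compute n_samples of a feedback-FM oscillator, one at a time."""
    out = []
    phase = 0.0
    last = 0.0  # previous output sample, fed back immediately
    for _ in range(n_samples):
        last = math.sin(phase + beta * last)  # needs 'last' right away
        out.append(last)
        phase += 2.0 * math.pi * freq / SAMPLERATE
    return out

samples = fm_feedback(freq=220.0, beta=0.5, n_samples=512)
```

Because each sample depends on the one just computed, a graph that only exchanges whole buffers cannot express this; that is exactly what the audioControl/controlAudio pair works around.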

Many of the audio processing objects can be converted to controlrate by selecting them and pressing PAGE_UP; selecting them and pressing PAGE_DOWN does the reverse. Of course, this does not mean that all the objects are useful for processing MIDI. When you use 'lopass' to filter MIDI events, it won't work as expected, because MIDI is not continuous and audio is.

When using audioControl and controlAudio, you must realise that they take a lot of CPU time, because there is a big overhead in pushing the samples individually through all objects 44100 times a second (or whatever the current samplerate is; see paragraph 1.4 for more information). It is best to use sampleprecision only 'locally'; for example, only where the FM oscillator with feedback is created, and not any further.

Currently, the only objects that really depend on the buffersize are 'fft' and 'ifft' (because they return 1, 'true', when a buffer has been processed), 'upsamp' (which interpolates to the next value within one buffer), 'rms' (since it gives a new value after each buffer), the 'k-rate' object (it gives a value each buffer), 'sblock' (which returns a value after a buffer), and 'downsamp' (which also gives a new value after a buffer). These objects all still behave 'normally', but corrected for the new buffersize. All other objects function exactly the same with different buffersizes.

Working with sampleprecision is one of the requirements of MPEG-4 Structured Audio. To prevent wasting CPU time by doing everything at sampleprecision, CPS supports this 'local' sampleprecision. If a patch is saved as a .saol file (currently not supported), the audioControl and controlAudio objects are gone from the .saol file.


3.2 Scheduling in time at audiorate

It may seem as if everything in a patch is always static; all objects are only processing their input, and that's it. The opposite is true; sound (and music) is all about changes in time, so there must be a way to schedule things in time in CPS.

Scheduling in time can be viewed at two different levels: scheduling complete patches in time, or having one patch running all the time and letting certain parameters in that patch change in time.

The first 'kind' of scheduling is exactly what happens when MPEG-4 Structured Audio is running. The patch is now called an instrument (and can be represented by a .saol, or orchestra, file), and with another file, the .sasl (score) file, you can define when to start and stop the instruments in time, and which parameters they should get in time. Eventually, this complete process will probably be possible with CPS; currently you can create a patch, which is actually nothing more than a representation of a .saol file.

The second 'kind' of scheduling, within a patch, is supported by CPS in a very clear way, with the 'k-rate' object. The general principle is that when the buffersize and the samplerate are known in an object, it knows exactly how much time has gone by since the last call, provided the object is always triggered at regular intervals (or: continuously). For example (with buffersize 512 and samplerate 44100), if an object receives or produces an audio buffer continuously (or it receives a trigger each time a buffer has been processed), then it knows that 512/44100 = 0.0116 seconds have passed since the last call. Objects that work at audiorate (like aline) are in the scheduling process because they receive (or produce) an audio buffer all the time. Other objects (like kline) get a trigger from 'k-rate' each time a buffer is processed, and they use that to calculate their latest value. Note that 'k-rate' is meant to schedule objects that update their value once per audio buffer, not for scheduling MIDI (although that is possible).
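The timing arithmetic can be checked with a few lines of plain Python (not CPS code):

```python
# Time between two k-rate calls, for the values used in the text
# (buffersize 512, samplerate 44100), and the resulting k-rate.
buffersize = 512
samplerate = 44100

seconds_per_buffer = buffersize / samplerate   # time between calls
k_rate = samplerate / buffersize               # calls per second

print(round(seconds_per_buffer, 4))  # → 0.0116
print(round(k_rate, 2))              # → 86.13
```

So a kline-style object triggered once per buffer is updated about 86 times per second at these settings.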

In the example, the output of the kline is asked for each time a buffer is put into the k-rate input. The difference between using aline or kline (triggered by 'k-rate') is that with aline the output value is updated every sample, while with kline the output value is only updated once per buffer. Although kline is less precise than aline, it saves a lot of CPU processing time, because the new value only has to be calculated once per buffer instead of after each sample. In the example it doesn't matter much, because here we don't hear the steps that we introduce with kline.
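The aline/kline trade-off can be sketched in plain Python (hypothetical code, not CPS): ramping from 0 to 1 over 4 buffers of 512 samples, once with a new value per sample and once with a new value per buffer.

```python
# Sketch (not CPS code): per-sample updates ("aline"-style) versus
# per-buffer updates ("kline"-style, triggered once per buffer by k-rate).
BUFFERSIZE = 512
N_BUFFERS = 4
total = BUFFERSIZE * N_BUFFERS

# aline-style: one new value per sample (2048 calculations)
aline_out = [i / (total - 1) for i in range(total)]

# kline-style: one new value per buffer, held for the whole buffer
kline_out = []
for b in range(N_BUFFERS):
    value = b / (N_BUFFERS - 1)      # calculated only once per buffer
    kline_out.extend([value] * BUFFERSIZE)

# Both reach 1.0, but the kline output moves in 4 coarse steps
# while the aline output changes on every single sample.
```

This is where the CPU saving comes from: the kline version does 4 calculations instead of 2048, at the cost of a stepped output.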

Just as with the audioControl and controlAudio objects, when a CPS patch is saved as a .saol file, the 'k-rate' object cannot be found in the .saol file. The place where you connect the scheduling object to the audio is where it gets written to the .saol file; that is exactly as in a .saol file, where audiorate (a-rate) lines are also mixed with k-rate lines.

'k-rate' is also the correct MPEG-4 Structured Audio term for everything that is updated once per audio buffer. The k-rate is how often per second these objects get updated; this equals the samplerate divided by the buffersize. If you adjust the samplerate or the buffersize in the 'Options' - 'Globals' menu, you can see how the k-rate changes too. Notice that you can also drag the buffersize value to adjust the buffersize more easily.

3.3 Other conversions

Sometimes you just want to multiply an audio signal by a non-audio number. That is exactly what 'upsamp' is for; it takes a normal controlrate value and translates it into an audio buffer. If a new value is received by the upsamp, it gradually shifts from the last value to the new value, to prevent clicks in the audio.
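The gradual shift can be sketched in plain Python (hypothetical code, not CPS; the class name mimics the object it illustrates): an upsamp-style converter keeps the value its previous buffer ended on, and ramps linearly from there to each newly received value.

```python
# Sketch (not CPS code) of what an 'upsamp'-style object does:
# turn one controlrate value into a whole audio buffer, ramping
# from the previous value to the new one to avoid clicks.
BUFFERSIZE = 512

class Upsamp:
    def __init__(self, start=0.0):
        self.last = start  # value the previous buffer ended on

    def process(self, new_value):
        """Return one buffer sliding from self.last to new_value."""
        step = (new_value - self.last) / BUFFERSIZE
        buf = [self.last + step * (i + 1) for i in range(BUFFERSIZE)]
        self.last = new_value
        return buf

up = Upsamp()
buf = up.process(0.5)  # a controlrate value 0.5 arrives
```

The buffer ends exactly on the new value, so the next ramp starts from there and the audio never jumps.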

Note that when adding to or multiplying with an audiorate signal, you can use the specially formed '+' and '*', which have an upsamp built in. This saves processing time, because no buffer needs to be constructed.

There are two objects for the other way around, converting audio to a controlrate signal: downsamp and decimate. Downsamp calculates which number best represents the buffer it just received, by taking the mean value of the audio buffer. Decimate also receives an audio buffer and returns a controlrate value, but it does not calculate anything; it just passes one audio sample through at controlrate. The frequency at which a controlrate value is returned is the k-rate, because a number is sent out each time a buffer of audio is received.
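The difference between the two conversions can be sketched in plain Python (hypothetical helper functions, not CPS code):

```python
# Sketch (not CPS code): two ways to reduce one audio buffer to a
# single controlrate value, as 'downsamp' and 'decimate' do.
def downsamp(buffer):
    # best representative of the buffer: its mean value
    return sum(buffer) / len(buffer)

def decimate(buffer):
    # no calculation at all: just pass one sample through
    return buffer[0]

buf = [0.0, 0.2, 0.4, 0.6]
mean_value = downsamp(buf)    # about 0.3
picked_sample = decimate(buf) # 0.0
```

Both emit one value per buffer, i.e. exactly at the k-rate; only the amount of calculation differs.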

There are two other objects that receive audio and return a value only at k-rate: 'fft' and 'rms'.