Exercise 6: Triggering and Controlling Sounds

An expanding sub-field of electronic music is performance on data-driven instruments, in which a musician controls real-time sound production by interacting with devices that translate human movement into data streams for use by a computer.

In Project 1 you will design a data-driven instrument, compose a short piece for it, and perform the piece. Before that, though, we need to explore some basic techniques for triggering and controlling sounds in Max. We’ll be able to apply these techniques to our work with external controllers.

Goals

We’re learning how to...

  • trigger sounds from a computer or MIDI keyboard,
  • handle sound durations that are known in advance or only determined once a sound has started,
  • use a computer’s mouse or trackpad to produce two streams of continuous data,
  • redirect streams of data from one destination to another,
  • generate triggers from continuous data, and
  • trigger a stream of continuous data (automation).

How to Do This Exercise

Working on the assignment is a two-stage process.

  1. Download Exercise 6 Max Tips. Open these patches in order (part 1, then part 2, etc.), reading the comments and operating the patches.

    The patches show how to access data from the computer keyboard and trackpad (or mouse), as well as how to shape these data to control sound. The examples themselves are deliberately minimal; please think of your own ways to use these techniques musically in this exercise.

  2. Make a patch that plays sounds of any kind — sound files or synthetic notes — in response to actions you take on the computer keyboard and trackpad (or mouse). Try using one hand to type on the keyboard and the other hand to control the trackpad. Please consult the Tips files for ideas. Be ready to demonstrate your patch in tutorial by doing something musical with it.

    You may use native Max objects or Auzzie modules, or some combination of these, to create the sound that you control.

See the Suggestions section below.

Be sure you understand what each of these Max objects does:

  • key, keyup (parts 1 & 2)
  • select (sel) (part 1)
  • notein, stripnote (part 1)
  • delay (del) (part 2)
  • trackpad-xy abstraction (part 3)
  • gate, ggate (part 4)
  • past, int, change, speedlim (part 5)
  • line (part 6)
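Max is a visual language, so there is no textual code to study for these objects, but as a rough mental model, the part 5 objects `change` and `speedlim` can be sketched in Python. (Class and method names here are invented for illustration. Note one simplification: Max’s `speedlim` holds the most recent value and outputs it when the interval elapses, whereas this sketch simply drops values that arrive too soon.)

```python
class Change:
    """Pass a value through only when it differs from the previous one
    (like Max's [change]); return None when the value is suppressed."""
    def __init__(self):
        self.last = None

    def __call__(self, value):
        if value != self.last:
            self.last = value
            return value
        return None


class Speedlim:
    """Pass at most one value per interval_ms (a simplified [speedlim]).
    Values arriving inside the window are dropped, not rescheduled."""
    def __init__(self, interval_ms):
        self.interval_ms = interval_ms
        self.last_time = None

    def __call__(self, value, now_ms):
        if self.last_time is None or now_ms - self.last_time >= self.interval_ms:
            self.last_time = now_ms
            return value
        return None


change = Change()
print([change(v) for v in [3, 3, 5, 5, 5, 2]])  # [3, None, 5, None, None, 2]
```

This is exactly how continuous data can generate triggers: `change` turns a slowly varying stream into discrete events, and `speedlim` keeps those events from firing too rapidly.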

Suggestions

  • Typing keys can generate triggers that play sound files or notes; they can also fire off automation data streams that affect a parameter (e.g., playback speed or pitch bend).
  • Conversely, continuous data can act as a trigger (see the part 5 patch).
  • Use the trackpad to control two parameters at once. These could be volume controls (via one or more live.gain~ objects) or any two parameters at all, such as the ones in Auzzie modules.
  • Decide whether you want to control the duration of a sound in real time, so that your action stops, as well as starts, the sound. This is what we do when playing most sounds in a synthesizer from a MIDI keyboard. When we start playing the sound, we may not know how long it will last. It ends after our finger releases the key.

    But sometimes we want to trigger a sound to play back with a certain duration that we know in advance. This fixed duration can be the full length of a sound file, a selected region within the playlist~ waveform display, or a duration set by the argument of a delay object that stops playback after that much time has elapsed.

  • When using continuous controls, you must map the input range of your control to your desired output range using the scale object. There are no rules about how to do this; it depends entirely on your musical and performative purpose.

    But here is something that might help. Treat frequency (for oscillators and filters) and amplitude in a way that accounts for the roughly logarithmic nature of our perception of pitch and loudness.

    • If you control frequency continuously, do that in terms of MIDI note numbers. Then convert to frequency using mtof.
    • If you control amplitude continuously, do that in terms of decibels. Then convert to linear amplitude using dbtoa before feeding that to the right inlet of *~ to perform amplitude scaling. This is essentially what happens inside live.gain~ when you give it decibel values, making this object a good shortcut to producing convincing crescendi and diminuendi.
    • Note that some Auzzie parameters allow you to specify a value either as a range from 0 to 1 or as a range that makes more sense for that particular parameter. For example, you can specify filter cutoff frequency in Hz by writing a message like “exact 440” and sending it into the frequency inlet of REZFILTR.
    • If you’re itching to try external hardware now, go for it! Investigate the Project 1 Max Tips patches to find out how to interface with some types of hardware.
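The math behind the scale, mtof, and dbtoa objects is simple enough to sketch in Python (function names chosen to mirror the Max objects; the trackpad mapping at the end is just an illustrative assumption):

```python
def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linear mapping from one range to another, like Max's [scale]
    in its default (linear) mode."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def mtof(note):
    """MIDI note number -> frequency in Hz (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def dbtoa(db):
    """Decibels -> linear amplitude (0 dB -> 1.0)."""
    return 10.0 ** (db / 20.0)

# Example: map a trackpad x position (0..1) to a pitch range in MIDI
# note numbers, then convert to Hz -- rather than scaling Hz directly.
x = 0.5
note = scale(x, 0.0, 1.0, 48, 72)   # C3..C5 in note numbers -> 60.0
print(mtof(note))                    # 261.6255... Hz (middle C)
print(dbtoa(-6.0))                   # roughly 0.5 linear amplitude
```

Scaling in note numbers and decibels first, then converting with mtof and dbtoa, is what makes sweeps and fades sound even to the ear.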

Submission

  • Be sure you have satisfied the criteria listed above.
  • Make a folder for your exercise and put your main patch in it. Add any required sound files or abstractions (including trackpad-xy).
  • Zip your folder and submit it in Canvas.

    CAUTION: If you move your patch into a new folder with other files, test it again to make sure it still finds everything it needs.

Grading Criteria

This exercise is graded pass/fail. You must submit the exercise by Thursday midnight to be eligible for a pass.

Your patch must

  • operate correctly and
  • implement the functionality described in the “How to Do This Exercise” section above.