A data-driven instrument is a system comprising three parts: an external hardware device that encodes human gesture as data, software that maps those data to musical parameters, and a sound-synthesis engine that renders sound as a direct result of human movement.
In this project you will create your own data-driven instrument, develop a short composition, and perform it. The hardware device can be one of those mentioned below, or another device agreed upon with your instructor.
We can use all sorts of external hardware to control sound generation in Max. There are three main aspects of this endeavor:
- the mechanics of connecting a device and accessing its data in real time,
- the musical application of the data, that is, how we map the data to the parameters that control sound generation (a minimal sketch of such a mapping follows this list), and
- the performance gestures you use to interact with the hardware device.
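At the core of most mappings is a simple range conversion: the values a device reports are rescaled into the range a synthesis parameter expects, which in Max is typically the job of objects such as scale. The sketch below is written in Python rather than as a Max patch, purely to illustrate the idea; the input range of 0 to 127, the frequency bounds, and the variable names are assumptions for illustration, not part of the assignment.

```python
# A minimal sketch of the mapping step: converting raw controller data into a
# musical parameter. Assumed (not from the assignment): the device reports
# integers from 0 to 127, and we want an oscillator frequency of 110-880 Hz.

def scale(value, in_low, in_high, out_low, out_high, exponent=1.0):
    """Map value from [in_low, in_high] to [out_low, out_high].

    exponent = 1.0 gives a linear mapping; values > 1.0 devote more of the
    gesture's travel to the low end of the output range, which often feels
    more natural for parameters like pitch and loudness.
    """
    normalized = (value - in_low) / (in_high - in_low)
    return out_low + (normalized ** exponent) * (out_high - out_low)

# A fader at roughly three-quarters of its travel, mapped two different ways.
for exp in (1.0, 2.0):
    freq = scale(96, 0, 127, 110.0, 880.0, exponent=exp)
    print(f"exponent {exp}: {freq:.1f} Hz")
```

In a Max patch the same conversion would usually sit between the object that receives the device data and the signal chain that makes sound. The musical point is that the choice of ranges and curves is itself a compositional decision: it determines how much of the sound's space a given gesture covers.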
An unusual aspect of controlling sound with devices that are not recognizable musical instruments is that there is no fixed association between the actions taken on a device and the sound they produce. The freedom this allows can be liberating, but we always want those actions to be musically convincing.