Fall 2020 Concert

CECM Fall 2020 Concert - Program Notes

This concert was streamed by IUMusicLive! on January 21, 2021. We would like to thank Tony Tadey and Glenn Myers for encoding our stream and providing invaluable technical advice. Joe Jiang edited the video of Kinesthetic Modes of Enunciation. Jozef Caldwell of Russian Recording was the recording engineer for Lullaby. Patrick Lenz served as videographer for Kinesthetic Modes of Enunciation and assisted with audio and camera setup. The students edited their own videos, with assistance from Chi Wang and John Gibson, who together produced the program.

Chi Wang: Kinesthetic Modes of Enunciation

Kinesthetic Modes of Enunciation was realized as our center’s telemusical performance during the 2020 pandemic. A telemusical performance may occur in many different ways, but for us it meant that performers and audience did not inhabit the same physical space, and that rehearsals occurred in real time over the internet rather than in a shared physical space. Via a specially constructed remote system, our students accessed the studio’s software, hardware, and high-quality sound monitoring over the internet.

In Kinesthetic Modes of Enunciation, we explored vocal sounds. Instead of speaking or singing while performing, the musicians in the ensemble recorded themselves speaking different phrases and used the recordings as the initial sound materials for the composition. We configured data-driven instruments, each with its own digital controller: the Nintendo Wii Remote, the Wacom Tablet, the Arturia BeatStep Pro, the Korg nanoKONTROL2, and the Nintendo GameCube controller. Using these controllers, we developed new performance techniques in addition to designing our sounds. Our performance harnessed the power of the Symbolic Sound Kyma sound design hardware and software environment, housed in the CECM studio, which received performance data in real time from different places around the world. This allowed the musicians to observe the interaction of the data streams and, at the same time, to hear the sound produced in the CECM studio. The ensemble was able to practice and rehearse the composition online, and the documented live interactive performance was a version with six performers in Auer Hall and one performer in Taiwan. This arrangement of technology allowed us to stay connected and inspired by exchanging compositional ideas and performance activities, which proved to be a unique experience during the pandemic.
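
As a rough illustration of the data flow described above (and not the ensemble’s actual software), the sketch below shows one way a remote performer’s controller readings could be packaged as OSC messages and streamed over the internet to a receiving system in the studio. The IP address, port, and OSC address patterns are placeholders chosen for this example.

```python
# Minimal sketch: stream controller data to the studio as OSC over UDP.
# The address, port, and OSC paths below are hypothetical placeholders.
import time
from pythonosc.udp_client import SimpleUDPClient

STUDIO_IP = "203.0.113.10"   # placeholder address for the studio's OSC receiver
STUDIO_PORT = 8000           # placeholder listening port

client = SimpleUDPClient(STUDIO_IP, STUDIO_PORT)

def send_frame(device, values):
    """Send one frame of controller data, one OSC message per named control."""
    for name, value in values.items():
        client.send_message(f"/{device}/{name}", float(value))

# Example: stream a short ramp of values at roughly 60 frames per second.
for i in range(120):
    t = i / 119.0
    send_frame("gamepad", {"stick_x": t, "stick_y": 1.0 - t, "trigger": 0.5})
    time.sleep(1 / 60)
```

Sending each control as its own OSC message keeps the receiving end simple: the studio system can listen to one address per parameter rather than unpacking bundles.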

Anne Liao: Water, Bowls and Rocks

Water, Bowls and Rocks explores the metamorphosis from recordings of simple natural actions to a blanket of processed sounds. Performative gestures on a Wacom drawing tablet generate data streams that are sent to sound-producing algorithms, controlling musical parameters in real time. In the composition, a single struck rock turns into thousands of galloping horses, and a single water drop becomes the shaking of flexatones.
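
As a general illustration of the kind of mapping this describes, the sketch below scales raw tablet values (position and pressure) into ranges a sound-producing algorithm might expect. The parameter names and ranges are assumptions made for this example, not the mapping actually used in the piece.

```python
# Sketch: scale raw tablet data into illustrative synthesis-parameter ranges.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from [in_lo, in_hi] to [out_lo, out_hi]."""
    value = min(max(value, in_lo), in_hi)          # clamp to the input range
    norm = (value - in_lo) / (in_hi - in_lo)
    return out_lo + norm * (out_hi - out_lo)

def map_tablet_frame(x, y, pressure):
    """Turn one frame of tablet data into a dictionary of example parameters."""
    return {
        "grain_density": scale(pressure, 0.0, 1.0, 1.0, 200.0),  # grains per second
        "playback_rate": scale(x, 0.0, 1.0, 0.25, 4.0),          # speed of the source sound
        "filter_cutoff": scale(y, 0.0, 1.0, 200.0, 8000.0),      # Hz
    }

print(map_tablet_frame(x=0.5, y=0.8, pressure=0.9))
```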

Ben Cordell: Turbulence

Turbulence is a reflection on our current time in history. Some of the ideas I explored in creating this piece are: How can we create art during times of turbulence? How do we know what the impact will be of art that we create? How can we know which actions will lead to violence? This piece is a message of awareness and is dedicated to all of the lives lost or changed forever due to the Coronavirus pandemic and the Black Lives Matter movement.

Oliver Kwapis: Clock, Lightning Bolt, Volume, I Love You So Much

The last thing I do every night is check my phone. First, I check the clock icon in the upper-right-hand corner of the phone’s home screen: my alarm is set. Next, I look for the lightning bolt over the battery icon: my phone is charging. I press the volume button. A white bar appears on the screen: maximum volume. Last, I look at the photo on my home screen, a picture of my partner.

I stare at each image intently — the clock, the lightning bolt, the volume bar, then my partner — and recite the corresponding words: clock, lightning bolt, volume, I love you so much. I repeat this litany many times with many variations. Clock, lightning bolt, volume, volume, I love you so much, so much, so much. I continue until I feel that I’ve performed the ritual correctly, then fall asleep.

I am glad to report that this routine is just a vestige of an obsessive-compulsive disorder that cropped up in my teens. I’ve gotten a handle on my compulsions and even though they lurk just off-stage, I’ve begun to appreciate their absurdity.

Clock, Lightning Bolt, Volume, I Love You So Much is a performance of my nighttime ritual. The piece is an abstract representation of my dual experience of how these rituals can hold me in their grip. It’s ironic to experience something while standing outside of that experience — of being in something, but not of it. It is painful, strange, quirky, and even funny, to say the least.

Yuseok Seol: Lullaby

Lullaby, for gayageum and electronics, is my first attempt at writing for a Korean traditional instrument, despite my Korean background. The piece winds its way between two images. The first is lullaby-like: the gayageum plays a repetitive, hobbling rhythm that recalls the swinging gesture of a rocking chair, while the gayageum’s sound is recorded and processed in real time, creating a hazy, foggy sonic space. The second image is a contrasting one: the gayageum plays rhythmical, lifting melodies along with a noisy pulse generated by the electronics, and I distort the gayageum sound and distribute it densely in the stereo space, creating a sense of chaos. At the end, the distortion and the swinging rhythm are combined, and the piece sinks into a somewhat dizzy, but also somewhat dreamy, ocean.

This piece was composed for gayageum player Eunsun ‘Sunny’ Jung. It was very fortunate for me to meet this wonderful musician in the United States.

Shuyu Lin: Feather Mallet

I started with audio of a tuned wine glass struck by drumsticks. By applying multiple sound synthesis/resynthesis algorithms programmed in Kyma, I manipulated and developed the sound materials in real time. The performer uses a “feather mallet” — a feather attached to a Wii Remote game controller — to trigger and shape the sounds. The performer’s “touching” and “rubbing” of the wine glass with mallet gestures, juxtaposed with the sounds of drumsticks striking the glass, creates an audiovisual illusion and establishes the timing relationship between the two. During the performance, the audience is offered the musical journey of a single wine glass sample becoming a live audiovisual experience.
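
One simple way a mallet stroke could be detected from a motion controller is sketched below: watch the acceleration magnitude and fire a trigger when it crosses a threshold, then ignore input for a few frames so a single stroke does not retrigger. The threshold, timing, and data source (for example, a Wiimote-to-OSC bridge) are assumptions for illustration, not the method used in the piece.

```python
# Sketch: turn accelerometer frames into single "strike" triggers.
import math

class MalletTrigger:
    def __init__(self, threshold=1.8, refractory_frames=10):
        self.threshold = threshold           # acceleration magnitude, in g
        self.refractory = refractory_frames  # frames to ignore after a hit
        self.cooldown = 0

    def process(self, ax, ay, az):
        """Return True when a new strike is detected in this frame."""
        if self.cooldown > 0:
            self.cooldown -= 1
            return False
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > self.threshold:
            self.cooldown = self.refractory
            return True
        return False

trigger = MalletTrigger()
# Simulated frames: resting near 1 g, then a sharp stroke.
for frame in [(0, 0, 1.0), (0.1, 0, 1.1), (1.5, 0.4, 1.6), (0.2, 0, 1.0)]:
    if trigger.process(*frame):
        print("strike detected -> trigger sound")
```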

Yi-De Chen: The Changeable Weather

The Changeable Weather is an interactive electronic music composition for nanoKONTROL2, Max, and Kyma. In this piece, I constructed three sound algorithms and arranged them on a timeline. In each algorithm, I mapped and routed the nanoKONTROL2 faders’ data to different musical parameters via MIDI continuous controller messages, transforming and blending the pre-recorded and synthesized sounds in real time. For instance, in the performance timeline, faders are routed to control amplitude, frequency randomness, the density of sound elements, and duration. I also structured the timeline around playback of pre-recorded samples of birdsong, wind, and thunder. The video documents a live performance in which I operate the nanoKONTROL2 from my home in Taiwan while the sound is produced in the CECM studio in real time over the internet.
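
As a rough sketch of the fader mapping described above, the code below reads MIDI control-change messages from an input port and scales them onto parameters like the ones named here. The CC numbers and ranges are assumptions for illustration; in the piece itself the routing is handled in Max and Kyma.

```python
# Sketch: read nanoKONTROL2-style fader messages and scale them to parameters.
import mido

# Hypothetical fader-to-parameter assignments: CC number -> (name, low, high).
FADER_MAP = {
    0: ("amplitude", 0.0, 1.0),
    1: ("frequency_randomness", 0.0, 0.5),
    2: ("density", 1.0, 50.0),     # sound elements per second
    3: ("duration", 0.05, 2.0),    # seconds
}

def scale(value_0_127, lo, hi):
    """Map a 7-bit MIDI value onto the parameter's range."""
    return lo + (value_0_127 / 127.0) * (hi - lo)

with mido.open_input() as port:    # opens the default MIDI input port
    for msg in port:
        if msg.type == "control_change" and msg.control in FADER_MAP:
            name, lo, hi = FADER_MAP[msg.control]
            print(f"{name} = {scale(msg.value, lo, hi):.3f}")
```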

This piece is five minutes long and consists of four sections; its musical form is A-B-C-A’. Each section has a distinct character, with different timbres indicating different kinds of weather. In the A sections, I use birdsong to represent sunny weather because, in my experience, birds are easier to hear on sunny days. In the B section, wind and thunder sounds serve as a transition from sunny to rainy weather. In the C section, heavy rain plays the most crucial part. The last section, the coda, features the birdsong again and tells us that stable weather has returned.

Kevin Kopsco: Supriem

Supriem is the correct spelling for how 2020 has been. The economy has been supriem. The cooperation has been supriem. Taco supriem. Who’s the next supriem in American Horror Story’s coven? Chicken cutlet supriem. Supriem ninja warrior. Meows and cuddles and peeking our head out of scary bubbles. Chill and dopamine out, brah: shark week supriem, pizza supriem, supriem OG kush, supriem commander 2. Vibing, not vibing supriem. White supriemacy, dictator supriemacy, revamping naziism supriemacy, world is dying around us supriemacy, people are dying around us supriemacy, nothing about our society is working for anyone and we’re all just flailing to find normalcy supriemacy, half our population prefers an authoritarian cult to just being nice to people supriemacy.

What else to do, but dopamine out?

Joey Miller: Telesomnia

Telesomnia is a short film that tells the story of someone whose dreams are being manipulated. It was a collaboration with Sloan Welsch, a student in the Audio Engineering and Sound Production program, and a team led by director Kathryn Janko, a 2020 graduate of the Cinema and Media Production program at the IU Media School. Each of the performers, who are also characters in the film, uses instruments and devices that feed into a Max patch, which manipulates the onscreen video. This version of Telesomnia is in 2D, but you can watch the full 360-degree VR experience with ambisonic immersive audio at telesomnia.com.