In October 2012, Erin Gee announced a collaboration with neurophysiologist Vaughan Macefield (Australia) and roboticist Dr. Damith Herath (MARCS Institute) called ‘Bicameral Music’. Bicameral Music is a performance that Gee describes as combining robotics, technology and raw emotion. The team has been researching and mapping raw emotion, translating electric currents in the brain into a decipherable auditory experience. The end goal of their research is a symphony, to be performed live in Montreal in 2013.
We had the opportunity to speak with both Erin and Vaughan about their team's work. Here, in the first of a 2-part interview commentary, Erin gives some insight into the field of neural data and music. Inside: what inspired Bicameral Music, the team's relationship with chance in music and the logic of math, and possible future implications of cybernetics.
Your recent research and collaboration focuses on music created by raw emotions, derived from electrical currents in the human brain. What is the inspiration behind this project?
The inspiration came from my [Erin’s] direct engagement with materials presented to me by Vaughan. I personally have always had a very number-based relationship with music and its organization, as well as a fascination with technological devices. When Vaughan first told me about the capabilities of the materials in his laboratory, I mostly remember thinking that the science behind this project was already very fascinating; there didn’t need to be much narrative or concept put on top of it. It became a question of translation: I wanted to translate the emotional body through robotics and music, but I wasn’t satisfied by the solution of MIDI sounds or software. I wanted to return the emotions to robotic bodies, and I’m inspired and challenged by my work in making this happen. The world premiere of the work happens next year through Innovations en Concert, a Montreal-based group that programs contemporary chamber music. It is interesting to think of this project as a biotechnological extension of what “chamber music” could be.
How do we ‘perform’ emotion?
“Performance” begs the question of who the audience is.
From the tradition of the performing arts, emotion is performed by many means. For example, some performers may simply understand in their bodies how to hold their posture and alter their voice to simulate the effects of stress or relaxation; others may have a keen sense of using silence to evoke unsaid feelings in the tension between words. Some acting practitioners hold the belief that people cannot “perform” an emotion without being at least somewhat affected through their bodily engagement: our bodies play such an important role in how our brains register emotion that even a seemingly artificial “performance” might arouse psychological effects, if done in a way that registers with previous emotional experiences in the body.
As for everyday life, everyone’s physiology seems to react differently to emotional stimuli. This probably comes down to very primal levels of how individual bodies are set up to react to emotion through heart rate, breathing, sweat release, blood flow, even how tight certain sets of muscles become under emotional stress. There is a complex relationship between one’s body and one’s psychology that results in unique physiological reactions to emotion, yet we nonetheless seem intelligent enough to pick up on subtle emotional cues in the bodies of others, almost on a subconscious level. I’m not sure if feeling an emotion constitutes a “performance” when there is no play or plot to perform as such, but these are questions I’m working through in this project.
Music is very mathematical; here you’ve taken a collection of raw neural data and converted it into numbers to create sound, using specialized software. Yet your plans for future performances using this data are very aleatoric. How do the elements of chance, and the logic of math, complement each other in your research?
The word aleatoric has a rich musical tradition among the early composers who pioneered modern music, such as Boulez, Stockhausen, and Cage; Xenakis and Ligeti later used probability theory to compose their works. Stochastic and chance operations were often very earnest attempts to get beyond the mathematical constraints of Western harmonic systems and seek out ways of arranging sounds much as sound is arranged in nature. But nature brings its own complex mathematics, which can sound foreign or alienating when transmitted into musical form. I am curious whether humans simply weren’t physically constructed to understand certain elements of nature; our technologies can then step in and help us reach these understandings. Artists are continuously playing with and pushing the domains of representation, and the advent of big data has already provided a context for this to happen in a significant way.
I find working with medical technologies and data to be both challenging and inspirational, because there is a constant back and forth between a natural experience and its technological codification. I am writing software at the same time as I am developing the physical musical instruments, so I need to keep the physical constraints in mind: those of the motors, the weight of the aluminum, battery life. I suppose these physical realities are some of the most basic elements of “chance” that mess up the mathematics behind the software, but I like this. I think it is important to have elements of vulnerability, risk and spontaneity in math; why not?
While we are encoding emotional and physiological responses (which are themselves chance operations based on probability) into musical patterns, the most significant element of chance comes from returning these codes beyond the software and into real technological bodies that have their own possibilities for re-complexifying the sound. For example, the tuning of the bars that I use will determine overtones, lending differing qualities to the fundamental pitches of the mathematical renderings, which are further coloured by the movement of the robotics themselves. I am also finding the sound of the motors whirring to be a charming addition to the composition; the bodies of the robots implicate themselves as they perform the human emotions. I think it’s a very important element of the work to note the difference between the flesh-body of the human performer and the robotic body of her prosthesis, and to see them both as equals, rather than expecting one to stay quiet for the other as the music occurs.
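The general idea of encoding a stream of physiological data into musical patterns could be sketched roughly as follows. This is a hypothetical illustration only: the function name, the pentatonic scale, and the MIDI pitch range are assumptions for demonstration, not the project's actual software or mapping.

```python
# Hypothetical sketch: quantizing a raw physiological signal into pitches.
# The scale, pitch range, and names here are illustrative assumptions,
# not Bicameral Music's actual encoding.

def signal_to_midi_pitches(samples, scale=(0, 2, 4, 7, 9), low=48, high=84):
    """Map raw signal samples to MIDI note numbers snapped to a scale."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat signal
    pitches = []
    for s in samples:
        # Normalize the sample into the target pitch range.
        note = low + (s - lo) / span * (high - low)
        # Snap to the nearest scale degree within the octave.
        octave, degree = divmod(round(note), 12)
        nearest = min(scale, key=lambda d: abs(d - degree))
        pitches.append(octave * 12 + nearest)
    return pitches

# Example: a toy "emotion" signal (e.g. smoothed skin-conductance values).
print(signal_to_midi_pitches([0.1, 0.4, 0.9, 0.3]))
```

Even in a toy mapping like this, choices such as the scale and the pitch range shape the musical result as much as the data does, which is part of the translation question the interview describes.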
How could the electrical signals that are recorded – and the resulting performance – be affected by stimuli like stress, sleep, food, or the ability to focus and concentrate?
External factors are constantly modulating emotional performance; this is a normal part of emotions, and it’s not out of scope to represent that. So far the performers I have been in discussion with are excited by this challenge, and I’m going to be working with actors who are familiar with these kinds of challenges, so it’s possible this won’t be an issue to worry about too much. I would rather embrace this situation as part of the honesty of the performance; to me this is part of the intimacy of the work, the trust and risk associated with taking this chance.
Could this research have possible medical applications, or cross-over into other areas of neuroscience?
I think that scientists are continuously challenged by new technologies that can process data far beyond what was possible in previous decades. For example, I recently saw a graph displaying data gathered by NASA. It appears not only that NASA is gathering data about the natural world at an exponentially growing, unforeseen pace, but also that pattern recognition now makes it possible to build vast amounts of simulated data, exponentially larger than the “real data”. This says a lot about the intersections between knowledge, our technological instruments, and the “real world”: when simulations provide us with more information than we have ever previously processed, it changes the way that data will be represented. That being said, the physical forms that this data takes are what help us humans understand its implications and how to think about it. Developing challenging new systems of representation is something that artists regularly engage in, and these systems always carry scientific, political and cultural implications. While I don’t think I can yet claim to have accomplished something on this scale through this project (though I always remain open to being surprised), the challenges of a data-driven, technologically supported society will demand new ways of seeing, hearing, and experiencing through technological interfaces, whether that data be emotional, environmental, spatial, market-based or interpersonal. I am excited that artists and scientists are working together in this respect.