The idea of controlling and creating sounds by means of electroencephalography (EEG) is certainly not new. Contemporary attempts often seem humble compared to the visionary work of early explorers such as Alvin Lucier and David Rosenboom back in the 1970s. However, our understanding of neuroscience has advanced considerably since then, and so has the availability of computational power and affordable EEG technology. In that sense, we are only just beginning to explore the full potential of brain-music interfaces.
In fact, neuroscientific analysis and sound synthesis share many deep and largely unexplored connections. The two rely on mirrored principles: in synthesis, simple waveforms are combined and transformed into complex dynamics, while in neuroscience, complex signals are decomposed into fundamental dynamics that reflect underlying brain mechanisms. Modular synthesis can also be seen to exemplify an important organizational principle of the brain, namely that it can be understood as a complex system emerging from the interaction between separate computational modules consisting of (groups of) neurons.
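To make this duality concrete, here is a minimal NumPy sketch (not EEGsynth code; the sampling rate and component frequencies are illustrative assumptions). The synthesis direction combines two sinusoids into a complex waveform; the analysis direction recovers them from its spectrum.

```python
import numpy as np

fs = 250                      # assumed sampling rate in Hz
t = np.arange(0, 2, 1 / fs)   # two seconds of samples

# Synthesis direction: combine simple waveforms into a complex signal
complex_wave = (np.sin(2 * np.pi * 10 * t)           # 10 Hz "alpha-like" component
                + 0.5 * np.sin(2 * np.pi * 20 * t))  # 20 Hz "beta-like" component

# Analysis direction: decompose the complex signal back into its components
spectrum = np.fft.rfft(complex_wave)
freqs = np.fft.rfftfreq(len(complex_wave), 1 / fs)
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]     # two dominant frequencies
print(sorted(peaks))  # recovers the 10 Hz and 20 Hz components
```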
These and other observations sparked a long-term collaboration with other neuroscientists, musicians, and performance artists. This resulted in the EEGsynth: our open-source (Python) approach to translating brain, muscle, and heart measurements into real-time control signals such as CV/gate, MIDI, OSC, and ArtNet-DMX. In my presentation I will demonstrate how the EEGsynth can be used, by walking you through some of our performances and the artistic and scientific themes we encountered while exploring the connections between neuroscience, sound, attention, and self-awareness.
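As a rough illustration of the kind of translation the EEGsynth performs, here is a simplified sketch, not the actual EEGsynth pipeline: the mido library, the sampling rate, and the band limits are all assumptions made for the example. It estimates relative alpha-band power from one EEG window and maps it onto a MIDI control-change message.

```python
import numpy as np
import mido  # assumed MIDI library; the real EEGsynth uses its own modular pipeline

def relative_alpha_power(samples, fs=250):
    """Fraction of spectral power in the 8-12 Hz alpha band of one EEG channel."""
    power = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), 1 / fs)
    return power[(freqs >= 8) & (freqs <= 12)].sum() / power.sum()

port = mido.open_output()        # default system MIDI output
window = np.random.randn(500)    # stand-in for a 2-second EEG window

# Map relative alpha power (0..1) onto a 7-bit MIDI control value (0..127)
value = int(np.clip(relative_alpha_power(window) * 127, 0, 127))
port.send(mido.Message('control_change', control=1, value=value))
```

In a live setting such a loop would run continuously over a sliding window, so that changes in the performer's alpha activity modulate a synthesizer parameter in real time.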