Use the sections below to explore the various technical aspects of our project. The use of music as a diagnostic tool is in active development with UA Tech Launch and will remain proprietary for the duration of the project.
Dr. Ilayda Altuntas Nott, Assistant Professor of Arts Education for the UA College of Fine Arts, performed a project assessment for the February Proof of Concept installation. A copy of the final assessment document is provided below.
Dr. Michael Vince composed a five-part vocal piece performed at the February 2025 HTI Proof of Concept Installation. Each vocalist represented one of the five EEG bands, and their interwoven vocal ranges created a mystical, harrowing, meditative sound. He has since developed more direct compositions for each brain type -- derived from EEG signals and translated algorithmically into MIDI data -- which are provided below.
CONTROL
ALZHEIMER'S
DEMENTIA
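Dr. Vince's exact mapping remains proprietary, but the general technique of translating EEG into MIDI data can be sketched briefly. The example below scales one band's spectral power in successive one-second epochs to MIDI pitch; the 256 Hz sample rate, band edges, note range, and use of the open-source mido library are illustrative assumptions, not the project's actual pipeline.

```python
# Illustrative sketch: map per-band EEG power to a MIDI melody line.
# Assumptions: `eeg` is a 1-D NumPy array sampled at 256 Hz; band edges,
# the note range, and the mido library are stand-ins for the project's
# actual (proprietary) pipeline.
import numpy as np
import mido

FS = 256                                  # assumed EEG sample rate (Hz)
BANDS = {                                 # conventional band edges (Hz)
    "delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "gamma": (30, 45),
}

def band_power(epoch, lo, hi):
    """Mean spectral power of one epoch within [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(epoch), d=1 / FS)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    return power[(freqs >= lo) & (freqs < hi)].mean()

def eeg_to_midi(eeg, band="alpha", epoch_s=1.0, path="alpha.mid"):
    """Write one MIDI note per epoch, pitched by the band's power."""
    lo, hi = BANDS[band]
    step = int(FS * epoch_s)
    powers = np.array([band_power(eeg[i:i + step], lo, hi)
                       for i in range(0, len(eeg) - step + 1, step)])
    # Normalize power into a two-octave range centered near middle C.
    norm = (powers - powers.min()) / (np.ptp(powers) + 1e-12)
    notes = 48 + (norm * 24).astype(int)

    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)
    for n in notes:
        track.append(mido.Message("note_on", note=int(n), velocity=64, time=0))
        track.append(mido.Message("note_off", note=int(n), velocity=64,
                                  time=mid.ticks_per_beat))
    mid.save(path)
```

Each epoch becomes one note, so stronger band activity maps to higher pitch; any other mapping (power to velocity, peak frequency to pitch) drops in at the same spot.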
Our installation design prioritizes the individual experience of exploration. This means we want to put in users' hands the power to choose which brain, and which layers of EEG (alpha, delta, etc.), they listen to. We considered a design in which each node had a powered speaker that users could interact with to toggle the various pieces of audio. However, we foresaw that this would introduce friction for attendees -- not only by bleeding audio into nearby nodes, but by making users feel like they were interacting with someone else's property instead of curating their own personal experience. With nodes only 4 to 6 feet apart, it also did not feel realistic to have 19 interactive computers controlling 19 speakers placed so close together. The user should experience the installation as a whole, not get caught up in granular execution.
This is where the design team arrived at Augmented Reality (AR). After a quick download of a custom app from the Apple App Store, and after orienting the app's virtual environment to an image on the ground in the real world, attendees are ready to hold up their phones (or HTI-provided iPads) and experience the brain. The app lets users change which brain they walk through (Alzheimer's, frontotemporal dementia, or control) as well as toggle which of the five EEG bands they hear (see the images below; a conceptual sketch of the toggle logic follows the photos).
Image showing the Augmented Reality app's settings, which include toggling which EEG bands are audible and selecting which brain -- control, Alzheimer's, or dementia -- sources the audio.
Image showing the Augmented Reality app in action: the node closest to the viewer appears large, with the others smaller in the distance, updating with the real-life position of the user holding the device relative to the nodes of the installation.
A close-up of an attendee holding up their phone, running the event's augmented reality application and displaying EEG nodes and audio playback.
Photo Credit: KLH
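Conceptually, the in-app toggles behave like a channel mixer: each brain has one pre-rendered audio layer per EEG band, and only the enabled layers are summed into the output. The sketch below illustrates that logic; the class name, buffer layout, and averaging are hypothetical stand-ins, not the shipped app's implementation.

```python
# Illustrative sketch of the app's band-toggle behavior: keep one audio
# buffer per EEG band and mix only the layers the user has enabled.
# The class and buffer layout are hypothetical, not the shipped app.
import numpy as np

class BrainMixer:
    def __init__(self, layers):
        # layers: {"alpha": buffer, "delta": buffer, ...} -- equal-length
        # NumPy audio buffers, one per EEG band of the selected brain.
        self.layers = layers
        self.enabled = {band: True for band in layers}

    def toggle(self, band):
        """Flip a band's audibility, as the settings screen does."""
        self.enabled[band] = not self.enabled[band]

    def mix(self):
        """Average the enabled layers into a single output buffer."""
        active = [buf for band, buf in self.layers.items()
                  if self.enabled[band]]
        if not active:
            return np.zeros_like(next(iter(self.layers.values())))
        return np.mean(active, axis=0)    # average to avoid clipping
```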
While further work is in development, the music generated for the HTI February 2025 installation primarily drew on pitch-shifting algorithms combined with digital conversion of the signal into MIDI instrumentation. The range of meaningful EEG signal lies between 0.5 Hz and ~140 Hz (with some data up to 500 Hz being relevant, depending on the methodology used and the brain process being studied), but for this project the highest band recorded was gamma, with the recordings cut off as low as 45 Hz. The human hearing range spans roughly 20 Hz to 20,000 Hz -- for reference, middle C on a piano is 261.63 Hz -- so nearly all EEG activity sits below or at the very bottom of what we can hear and must be shifted upward in pitch to become audible. Meaningful differences between subject brains can be heard with simple analog pitch-shifting, making it a suitable jumping-off point for attendees to start learning about brain signal. The audio heard in the Augmented Reality app combined pitch-shifted signal with MIDI instruments unique to each brain region, augmenting regional signal differences.
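The simplest version of that pitch shift is varispeed playback: writing the raw EEG samples out at an audio sample rate multiplies every frequency by the same factor, lifting sub-audible bands into the hearing range while preserving the frequency relationships between brains. A minimal sketch, assuming a 256 Hz recording and a 44.1 kHz playback target (both illustrative choices):

```python
# Minimal sketch: "audify" an EEG trace by varispeed playback. Replaying
# 256 Hz samples at 44.1 kHz is a ~172x speedup, so every frequency is
# multiplied by ~172 (10 Hz alpha -> ~1.7 kHz). Rates are assumptions.
import numpy as np
from scipy.io import wavfile

EEG_FS = 256          # assumed recording rate (Hz)
AUDIO_FS = 44_100     # playback rate; speedup = AUDIO_FS / EEG_FS

def audify(eeg, path="brain.wav"):
    x = eeg - eeg.mean()                   # remove DC offset
    x = x / (np.abs(x).max() + 1e-12)      # normalize to [-1, 1]
    wavfile.write(path, AUDIO_FS, (x * 32767).astype(np.int16))
```

Note the trade-off: at a ~172x speedup, a full minute of EEG compresses to about a third of a second of audio, so longer recordings (or time-stretching) are needed for listenable clips.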
For a current literature review and algorithmic methodology on melody and audio generation from EEG, please see the Bibliography and Further Readings at the bottom of this page.
BIBLIOGRAPHY AND FURTHER READINGS
Destexhe, Alain, and Luc Foubert. "Composing music from neuronal activity: The Spikiss project." Exploring Transdisciplinarity in Art and Sciences. Cham: Springer International Publishing, 2018. 237-253.
Jain, Aditya, et al. "A comparative study of visual and auditory reaction times on the basis of gender and physical activity levels of medical first year students." International Journal of Applied and Basic Medical Research 5.2 (2015): 124-127.
Liang, Mingheng. "An improved music composing technique based on neural network model." Mobile Information Systems 2022.1 (2022): 7618045.
Miltiadous, Andreas, et al. "A dataset of scalp EEG recordings of Alzheimer’s disease, frontotemporal dementia and healthy subjects from routine EEG." Data 8.6 (2023): 95.
Noel, Gabriel D., Lionel E. Mugno, and Daniela S. Andres. "From signals to music: a bottom-up approach to the structure of neuronal activity." Frontiers in Systems Neuroscience 17 (2023): 1171984.