Affective Computing for Intelligent Music Playlists

This project examines how music playlists can be automatically generated and evolved to suit the user. Portable music players have evolved hugely over the last decade, and listeners are now able to carry vast music libraries in their pocket. These devices have become increasingly pervasive and are used in all sorts of situations, such as exercising, driving, relaxing and travelling. In this project, we are examining how sensors can be used to provide information to the music player about the environment the listener is in, what they are doing, and what their mood or emotional state is, with the aim of automatically picking music that best suits their current needs.
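As a rough illustration of the idea (not the project's actual implementation), the Python sketch below ranks tracks by how closely their mood tags match an estimated listener state; the track metadata and the sensor-derived targets are hypothetical placeholders.

    from dataclasses import dataclass

    @dataclass
    class Track:
        title: str
        energy: float   # 0 = calm, 1 = energetic
        valence: float  # 0 = sad/negative, 1 = happy/positive

    def rank_tracks(tracks, target_energy, target_valence):
        """Order tracks by distance to the listener state inferred from sensors."""
        def distance(t):
            return ((t.energy - target_energy) ** 2 +
                    (t.valence - target_valence) ** 2) ** 0.5
        return sorted(tracks, key=distance)

    library = [Track("Calm Piece", 0.2, 0.6), Track("Workout Anthem", 0.9, 0.8)]
    # e.g. the accelerometer suggests running and heart rate is elevated:
    playlist = rank_tracks(library, target_energy=0.9, target_valence=0.7)
    print([t.title for t in playlist])  # Workout Anthem first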
Similarity-based Audio Compression

Music generally relies heavily upon defined structures and repetition of musical sequences and elements. This is evident when symbolic music notation is inspected, and can be heard in many contemporary popular music tracks. This project explores the range of opportunities that repetition and self-similarity present within digital recorded music. In particular, it is noted that data compression techniques have, to date, relied upon exploiting psychoacoustic redundancies to make file sizes smaller. Our work in this area examines how perceptually redundant (repeated) information can be exploited to increase the amount of data compression that can be achieved in musical audio.
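The sketch below illustrates the general principle only (it is not the project's codec): audio frames that closely resemble an earlier frame are stored as references rather than as new data, so repeated material is not encoded twice. The similarity measure and threshold here are simplistic placeholders.

    import numpy as np

    def compress_by_similarity(frames, threshold=0.95):
        """frames: list of equal-length 1-D numpy arrays. Returns (stored, index)."""
        stored, index = [], []
        for frame in frames:
            match = None
            for i, ref in enumerate(stored):
                # normalised correlation as a crude similarity measure
                sim = np.dot(frame, ref) / (np.linalg.norm(frame) * np.linalg.norm(ref) + 1e-9)
                if sim > threshold:
                    match = i
                    break
            if match is None:
                index.append(("new", len(stored)))
                stored.append(frame)
            else:
                index.append(("ref", match))
        return stored, index

A real system would of course need a perceptually informed similarity measure and a way to encode the small differences between matched frames.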
Psych Dome

Psych Dome is an interactive visual music installation project. Psych Dome synthesises geometric visual patterns similar to those seen in experiences of hallucination, and provides an accompanying soundtrack. These are linked to the user's brain activity through a consumer-grade EEG headset, which provides control signals that affect various parameters of the graphics and sound in real time. The first presentation of Psych Dome was in an immersive dome environment (full-dome). Technically, the system is built using Processing and Max/MSP. As part of the project, we are also carrying out research via user testing, to establish the efficacy of consumer-grade EEG headsets as controllers for interactive artworks.
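The mapping from headset data to audiovisual parameters is conceptually simple; the Python sketch below shows the kind of smoothing and scaling involved, assuming a headset that reports a 0-100 'attention' value (the installation itself realises this in Processing and Max/MSP).

    def scale(value, in_lo, in_hi, out_lo, out_hi):
        """Linearly map a value from one range to another, clamped."""
        value = max(in_lo, min(in_hi, value))
        return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

    class SmoothedControl:
        def __init__(self, smoothing=0.9):
            self.smoothing = smoothing
            self.state = 0.0

        def update(self, raw_attention):
            # exponential smoothing avoids jittery visuals between EEG readings
            self.state = self.smoothing * self.state + (1 - self.smoothing) * raw_attention
            # map 0-100 attention onto, for example, pattern rotation speed
            return scale(self.state, 0, 100, 0.1, 5.0)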
Quake Delirium

Quake Delirium is a modification/hack for the first-person shooter game Quake, which alters the game in order to simulate altered states of consciousness (ASC). The modification works by manipulating game properties such as graphical parameters and sound, which are usually static.
By causing these properties to change over time, and adding a supporting soundtrack, the project aims to provide a prototype simulation of a hallucinatory experience. In a recent update to Quake Delirium, we also carried out some initial testing with an EEG headset as a control device for the ASC effects. The project forms part of ongoing research regarding 'ASC Simulations'.
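For illustration only (this is not the mod's actual code), the sketch below shows the basic idea of driving a normally static game parameter with a slow oscillator, optionally biased by an external control signal such as an EEG reading.

    import math

    def delirium_value(t, base, depth, period, control=0.0):
        """Oscillate a game parameter around its normal value.

        t       : time in seconds
        base    : the parameter's usual (static) value
        depth   : maximum deviation from base
        period  : oscillation period in seconds
        control : optional 0-1 signal (e.g. from an EEG headset) scaling the effect
        """
        wobble = math.sin(2 * math.pi * t / period)
        return base + depth * wobble * (0.5 + 0.5 * control)

    # e.g. a field of view drifting between roughly 82 and 98 degrees:
    fov = delirium_value(t=12.0, base=90, depth=15, period=30)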
Mezcal Animations

Mezcal Animations is a fixed-media piece of 'visual music' (a form of video art). The composition has been performed internationally at: Last Friday Listening Room (University of California San Diego), Seeing Sound 2013 (Bath Spa University), Sweet Thunder Festival (San Francisco), and Sound Sight Space and Play 2014 (DeMontfort University).
An associated article was featured in a special edition of the Canadian journal eContact!, which looks at video music as an emerging art form. The article discusses the creation of the piece from both artistic and technological perspectives, using a combination of 8mm film and modern digital technologies.
ACERemix

ACERemix is a project that grew out of earlier work on Similarity-Based Audio Compression, following observations that the compression system would produce glitch-type music when the settings were wrong or errors occurred in the similarity processing of musical audio.
The result is a method of taking an existing piece of music and producing quantised, beat-multiple sample 'grains' that can be rearranged in various ways to produce glitch music remixes.
ACERemix resulted in a Max/MSP patch that allows ACERemix versions of music to be manipulated, processed with effects, and switched between in real time, so it can also be used as a performance tool.
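As a simplified sketch of the grain idea (assuming the track's tempo is known, and not reflecting the actual ACERemix implementation), the Python below cuts a mono signal into beat-multiple grains and shuffles them into a glitch-style rearrangement.

    import random
    import numpy as np

    def make_grains(samples, sr, bpm, multiples=(0.5, 1, 2)):
        """Cut a mono signal into grains whose lengths are multiples of the beat."""
        beat_len = int(sr * 60 / bpm)
        grains, pos = [], 0
        while pos < len(samples):
            size = int(beat_len * random.choice(multiples))
            grains.append(samples[pos:pos + size])
            pos += size
        return grains

    def glitch_remix(grains, seed=0):
        """Shuffle the grains and join them back into one signal."""
        random.seed(seed)
        order = list(range(len(grains)))
        random.shuffle(order)
        return np.concatenate([grains[i] for i in order])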
Auditory Hallucinations

Auditory Hallucinations are the sounds people hear during experiences of hallucination, which have no acoustic origin in the external environment. This project investigates these types of sound by examining a large online database of experience reports, in order to form a classification system which is then used as a basis for designing these types of sounds in computer music. The research therefore forms part of our ongoing work in devising ways through which altered states of consciousness (ASC) can be represented using sound and game technologies.
Audio Easter eggs

This project investigates the use of hidden tracks and messages across a variety of music media, ranging from vinyl LPs to CDs, digital files and video games. Providing a contextual review of audio 'easter eggs', the project explores their meaning and purpose, and considers possible techniques for incorporating them into modern digital technologies. The project includes a small prototype, 'Egg Raid', which appears at first glance to be a regular keyboard/piano, but turns out to hide a variety of audio easter eggs.
CAT Synthesis

Much work has been done in both creative and scientific disciplines to explore the relationship between music and human emotion. Much less explored is the relationship between music and non-human animal emotion. Humans perceive animals as possessing and expressing emotions, particularly through the sounds they produce. In this work, we consider a series of recordings captured from several domestic cats, which are represented interactively through a prototype system that utilises concatenative synthesis techniques. The human user is able to manipulate the emotional parameters of arousal and valence for a virtual, musical cat, providing them with a mechanism with which to influence its emotion and effectively "play" the cat. In turn, we expect the emotion of the user to be affected by the sounds made by the virtual cat.
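A minimal sketch of the selection step in a concatenative approach is shown below, assuming each cat recording has been annotated with arousal and valence values; the corpus, filenames and annotations are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class CatUnit:
        filename: str
        arousal: float   # 0 = sleepy, 1 = highly aroused
        valence: float   # 0 = negative, 1 = positive

    def pick_unit(corpus, arousal, valence):
        """Return the recorded unit closest to the requested emotional state."""
        return min(corpus, key=lambda u: (u.arousal - arousal) ** 2 +
                                         (u.valence - valence) ** 2)

    corpus = [CatUnit("purr_01.wav", 0.2, 0.9), CatUnit("hiss_03.wav", 0.8, 0.1)]
    print(pick_unit(corpus, arousal=0.3, valence=0.8).filename)  # -> purr_01.wav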
Personal Constructs in Sound Design

A range of our projects utilise the investigative approach known as Repertory Grid Technique (RGT), which derives from research into personal constructs pioneered by the psychologist George Kelly. The approach offers a unique hybrid of qualitative and quantitative methods that can be applied to a domain of interest, allowing rating scales to be devised and used by study participants.
Our studies that utilise this technique have examined perceptions and understandings of audio and music as they apply to computer game evaluation, sound design for games, and descriptions of musical features and characteristics across a range of contemporary genres.
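To give a flavour of the kind of data RGT produces (the elements, constructs and ratings below are hypothetical, not taken from our studies), a grid can be treated as a small matrix of ratings whose rows can be compared statistically:

    import numpy as np

    # elements (here, game sounds) rated 1-5 on participant-elicited bipolar constructs
    elements = ["footsteps", "gunshot", "ambience", "UI click"]
    constructs = {
        "dull - bright":       [2, 5, 3, 4],
        "natural - synthetic": [1, 3, 2, 5],
        "calm - tense":        [1, 5, 2, 3],
    }

    # correlations between rows hint at how the participant's constructs relate
    ratings = np.array(list(constructs.values()))
    print(np.corrcoef(ratings).round(2))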
Project Holophonor

In the science-fiction TV show Futurama, the Holophonor is a musical instrument, somewhat like an oboe, that generates both sound and visual imagery. This project, currently in its early stages, fuses the fields of visual music, futurology and affective computing. Firstly, we explore the history of visual music in painting and early experimental animation, before examining the development of this field in modern equivalents such as VJ culture, projection-mapping artworks and music video games. Tracing this development towards interactive forms of visual music, we suggest that instruments like the Holophonor may be possible with technologies that are either available now or close to availability. In particular, we consider how affective computing technologies may enable the automatic recognition of emotion, which is one of the key features of the Holophonor.