A framework for computer-assisted sound design systems supported by modelling affective and perceptual properties of soundscape
Miles Thorogood, Jianyu Fan, Philippe Pasquier
Autonomously generating artificial soundscapes for video games, virtual reality, and sound art presents several non-trivial challenges. We outline a system called Audio Metaphor, built on the notion that sound design for soundscape composition is emotionally informed. We first define the problem space of generating soundscape compositions with reference to the sound design and soundscape literature. Next, we survey state-of-the-art soundscape generation systems and establish the characteristics of, and challenges in, evaluating such systems. We then describe the Audio Metaphor system, which models the soundscape generation problem using soundscape emotion recognition and segmentation based on perceptual classes, together with an autonomous mixing engine that applies optimisation and prediction algorithms to generate a soundscape composition. We evaluate the soundscape compositions generated by Audio Metaphor by comparing them with those created by a human expert and with those generated randomly. Our analysis of the evaluation study reveals that the proposed soundscape generation model is human-competitive with respect to semantic and emotion-based indicators.
Type: Journal Article
Publication: Journal of New Music Research, vol. 48, no. 3, pp. 1-17, 2019