Multiscale Auditory Displays - Interacting with the complexity level in Model-based Sonification

Thomas Hermann
Bielefeld University, Germany

The talk first introduces sonification as a non-visual means for the
exploratory analysis of high-dimensional data. The framework of Model-based
Sonification is then presented as an interactive and intuitive mediator
between high-dimensional data spaces and the multi-modal perceptual
spaces of the system user. Several sonification models will be presented,
with a focus on complexity parameters that allow the user to experience the
data set at different scales of resolution. Sound examples will be given to
illustrate the models.

A particular focus is put on Growing Neural Gas Sonification (GNGS), a
sonification model that adaptively grows interfaces for interacting with the
high-dimensional data by means of a reduced representation, obtained from a
topologically structured network of neurons. Auditory representations of the
growth process itself (exemplified by both GNGS and another sonification
model called 'Data Crystallization Sonification') are able to turn structural
changes on the scale of complexity into temporal changes of the sound
structure. Since the ear is highly suited to analyzing temporal structure,
including rhythm, it is ideal for processing this multi-scale "view" on the
data. In addition, auditory gestalts may be formed by the auditory system
that correspond to the multi-scale structuring of the data; such structures
become apparent only through multi-scale analysis and remain 'invisible'
(resp. inaudible) at any single level of resolution.
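To make the growth process concrete, the following is a minimal sketch of a Growing Neural Gas in the style of Fritzke's original algorithm, not the sonification model from the talk itself. All parameter names (`eps_b`, `eps_n`, `age_max`, `lam`, etc.) and the returned `growth_log` are assumptions for illustration; the log of node counts over time is the kind of "growth event" stream that a sonification could map onto temporal sound structure.

```python
import random

def gng(data, steps=2000, max_nodes=30, eps_b=0.05, eps_n=0.006,
        age_max=50, lam=100, alpha=0.5, d=0.995):
    """Minimal Growing Neural Gas sketch (hypothetical parameters).
    Returns node positions and a per-step log of the node count --
    a simple complexity signal that a sonification could render."""
    dim = len(data[0])
    nodes = [list(random.choice(data)) for _ in range(2)]  # start with 2 nodes
    error = [0.0, 0.0]                                     # accumulated error per node
    edges = {}                                             # frozenset({i, j}) -> age
    growth_log = []
    for t in range(1, steps + 1):
        x = random.choice(data)
        # find winner s1 and runner-up s2 by squared distance
        order = sorted(range(len(nodes)),
                       key=lambda i: sum((nodes[i][k] - x[k])**2 for k in range(dim)))
        s1, s2 = order[0], order[1]
        error[s1] += sum((nodes[s1][k] - x[k])**2 for k in range(dim))
        # move winner toward x, neighbours a little; age the winner's edges
        for k in range(dim):
            nodes[s1][k] += eps_b * (x[k] - nodes[s1][k])
        for e in list(edges):
            if s1 in e:
                j = next(i for i in e if i != s1)
                for k in range(dim):
                    nodes[j][k] += eps_n * (x[k] - nodes[j][k])
                edges[e] += 1
        edges[frozenset((s1, s2))] = 0          # connect winner and runner-up
        for e in [e for e, a in edges.items() if a > age_max]:
            del edges[e]                         # drop over-aged edges
        # every lam steps: insert a node near the highest-error region
        # (the "growth event" a sonification would turn into sound)
        if t % lam == 0 and len(nodes) < max_nodes:
            q = max(range(len(nodes)), key=lambda i: error[i])
            nbrs = [next(i for i in e if i != q) for e in edges if q in e]
            if nbrs:
                f = max(nbrs, key=lambda i: error[i])
                nodes.append([(nodes[q][k] + nodes[f][k]) / 2 for k in range(dim)])
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])
                edges.pop(frozenset((q, f)), None)
                edges[frozenset((q, len(nodes) - 1))] = 0
                edges[frozenset((f, len(nodes) - 1))] = 0
        error = [err * d for err in error]       # global error decay
        growth_log.append(len(nodes))
    return nodes, growth_log
```

Because nodes are only ever added in this sketch, `growth_log` is a non-decreasing step function; mapping each insertion to an audible event yields exactly the kind of temporal rendering of structural change described above. (A full GNG would also remove isolated nodes, which is omitted here for brevity.)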


Presented at MGA Workshop III: Multiscale structures in the analysis of High-Dimensional Data