The field of sonification has progressed greatly over the past twenty years and currently constitutes an established area of research. This article aims at exploiting and organizing the knowledge accumulated in previous experimental studies to build a foundation for future sonification works. A systematic review of these studies may reveal trends in sonification design, and therefore support the development of design guidelines. To this end, we have reviewed and analyzed 179 scientific publications related to sonification of physical quantities. Using a bottom-up approach, we set up a list of conceptual dimensions belonging to both physical and auditory domains. Mappings used in the reviewed works were identified, forming a database of 495 entries. Frequency of use was analyzed among these conceptual dimensions as well as higher-level categories. Results confirm two hypotheses formulated in a preliminary study: pitch is by far the most used auditory dimension in sonification applications, and spatial auditory dimensions are almost exclusively used to sonify kinematic quantities. To detect successful as well as unsuccessful sonification strategies, assessment of mapping efficiency conducted in the reviewed works was considered. Results show that a proper evaluation of sonification mappings is performed only in a marginal proportion of publications. Additional aspects of the publication database were investigated: historical distribution of sonification works is presented, projects are classified according to their primary function, and the sonic material used in the auditory display is discussed. Finally, a mapping-based approach for characterizing sonification is proposed.
Citation: Dubus G, Bresin R (2013) A Systematic Review of Mapping Strategies for the Sonification of Physical Quantities. PLoS ONE 8(12): e82491. https://doi.org/10.1371/journal.pone.0082491
Editor: Michael J. Proulx, University of Bath, United Kingdom
Received: May 30, 2013; Accepted: October 24, 2013; Published: December 17, 2013
Copyright: © 2013 Dubus, Bresin. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the Swedish Research Council, Grant Nr. 2010-4654. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
In this article we present a systematic review of results from studies in the field of sonification documented in 179 scientific publications representing 60 projects. The main idea is to draw an overview of a specific area of the relatively new research field of sonification in order to identify established methods and techniques. To this end, we have built up a database of sonification works currently comprising 734 entries. We focus in particular on mappings of physical dimensions of sonified data to psychophysical and physical dimensions of the resulting sound, which we call auditory dimensions. We first present the concept of sonification: its nature, existing techniques, and a brief historical overview. In Section 2 we introduce our systematic review by presenting its objectives and restrictions. The method used for building the publication database and extracting the data is described in detail in Section 3. In Section 4 we present the sixty projects analyzed for this study by providing a brief description, mentioning the sonic material that was used, and listing the mappings. These data are analyzed and discussed in Section 5. The article ends with conclusions, suggestions for future work, and a proposed mapping-based approach for characterizing sonification.
1.1 Nature of sonification
Several successive definitions of sonification have appeared since the concept was formally introduced in the 1990s. Although some earlier scientific works could qualify as genuine sonification (some are presented in Section 1.3), it seems that the term was first coined by William Buxton at a tutorial of the CHI conference in 1989, as:
“The use of sound for data representation [, being] the auditory counterpart of data visualization.” 
At first defined by analogy to scientific visualization, sonification rapidly gained significance as a research topic in itself, and the first conference dedicated to auditory display (International Conference on Auditory Display – ICAD) was founded in 1992 by Gregory Kramer. The numerous thoughts and findings resulting from this conference were summarized in Auditory display , published in 1994, where other definitions were proposed, the most elaborated being Scaletti's “working definition”:
“A mapping of numerically represented relations in some domain under study to relations in an acoustic domain for the purposes of interpreting, understanding, or communicating relations in the domain under study.” 
Sonification researchers gathered again at ICAD 1997 in order to report about the state-of-the-art of the field at that time and their ideas about future challenges. This led to the publication of the NSF Sonification report , where a new definition was formulated:
“Sonification is defined as the use of nonspeech audio to convey information. More specifically, sonification is the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation.”
The research community seems to have accepted this definition, as mentioned by Walker and Nees in the Sonification handbook  — the most recent effort to provide an exhaustive overview of the field.
Nevertheless, defining the boundaries of sonification is still a hot topic, with some researchers expressing the need for a somewhat stricter, systematic definition , , whereas others are willing to step over the border to data-driven music . This ambivalence is reflected in the ICAD program, where increasing significance is attached to artistic works: a concert centered around sonification has been included as a social event since 2000, a sonification contest has been organized since 2009, and the topics of interest listed in the ICAD call for papers selectively include references to art — “auditory display and art” in 2002, “sound as art” in 2010 and finally “sonification as art” in 2012 and 2013. Supper, in a sociological study of the ICAD community , reported the controversy created by Hermann's attempt to “narrow down the boundaries of the field”. Altogether, this janiform evolution indicates that a full consensus on the nature of sonification has not yet been reached.
When comparing the successive definitions, it appears that qualifying a work as sonification is fundamentally related to its purpose: indeed, one could not determine whether a sound is an emanation of sonification just by listening to it. This claim is in line with Scaletti's own reflections on her early definition: “That the sound be data-driven is necessary but not sufficient justification for calling it sonification; it must also have been done with the intent of understanding or communicating something about the original domain” . The significance of this aspect was recently supported by Varni et al. who claimed that, although not fitting into Hermann's definition, mapping to high-level auditory dimensions using music material should be allowed in sonification, provided that the main goal was “to optimise efficiency of information communication” and not “to be pleasant to hear or to arouse particular feelings for the participants”.
1.2 Character of sonification
The field of sonification is interdisciplinary by nature. Like visualization, it can be applied to any kind of data, interactively or not, making it potentially useful for a large set of different domains. Sonification as a research topic is itself at the junction of numerous scientific disciplines including human-computer interaction, psychoacoustics, engineering design, human factors and ergonomics, assistive technology, and cognitive sciences. This is nicely illustrated by the “interdisciplinary circle of sonification and auditory display” in the introduction to the Sonification handbook .
As for any sort of auditory display, the use of sound as a medium for communicating information in sonification makes it particularly well suited for time-related tasks such as monitoring or synchronization. Taking advantage of the strong relationship between auditory perception and motor control , sonification can also be a valuable assistance to the perception of movements, and more specifically to the perception of one's own body motion, i.e. kinesthesia. Combining these two aspects makes sonification an ideal candidate to support the design of applications related to physical training and rehabilitation, e.g. in sport . Other popular applications are in the fields of data exploration (e.g. ), data mining (e.g. ), and sensory substitution, e.g. for assisting visually impaired people , . All in all, sonification represents a good complement to visualization insofar as the strengths of hearing and vision lie in different areas.
Various sonification techniques have been elaborated and formalized since the 1990s. The most widely accepted of these among the research community are described in detail in the Sonification handbook: audification , auditory icons , earcons , parameter mapping sonification , and model-based sonification . Audification is the direct playback of data streams as sound waves, allowing only some minor processing for the signal to become audible. Auditory icons are based on an ecological approach to auditory perception, associating short environmental sounds with discrete events in the data in order to create metaphorical perceptual relationships, e.g. the mechanical “click” sound in digital cameras. Earcons are similar to auditory icons regarding how data are considered and with respect to brevity, but using entirely synthetic sounds with no prior metaphorical value, e.g. a melody indicating the battery level in mobile phones. Earcons create perceptual relationships that have to be learned from scratch, but can be easily parameterized and combined with each other to form hierarchical patterns of information.
Parameter mapping sonification consists in defining a set of mappings — the nature of which is discussed in Section 2.2 — between data dimensions and auditory dimensions. While simple to design, this technique has the potential to communicate information in a continuous manner, therefore being the most widely used sonification technique. Whereas it allows for a much greater flexibility than the previous techniques, the design of each mapping should, in return, be considered very carefully: an unfortunate choice can dramatically affect the usability of the whole system.
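As a concrete illustration, the core of a parameter mapping sonification can be sketched in a few lines. The following is a minimal, hypothetical example (the function names, data range, and frequency range are our own choices, not taken from any reviewed project) that maps a one-dimensional data series to the pitch of short sine tones:

```python
import math

def map_linear(value, d_min, d_max, a_min, a_max):
    """Linearly map a data value from [d_min, d_max] to an auditory range."""
    t = (value - d_min) / (d_max - d_min)
    return a_min + t * (a_max - a_min)

def sonify(samples, sr=8000, dur=0.1, d_range=(0.0, 40.0), f_range=(220.0, 880.0)):
    """Render each data sample as a short sine tone whose pitch tracks the data."""
    out = []
    for v in samples:
        f = map_linear(v, *d_range, *f_range)  # data value -> frequency in Hz
        n = int(sr * dur)
        out.extend(math.sin(2 * math.pi * f * i / sr) for i in range(n))
    return out

# Example: a rising data series (e.g. a temperature sweep) maps to rising pitch.
signal = sonify([0.0, 10.0, 20.0, 30.0, 40.0])
```

Changing `map_linear` to a non-linear or inverted function alters the mapping (and its polarity) without touching the rest of the system, which is precisely the flexibility, and the design risk, discussed above.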
Model-based sonification was introduced by Hermann and Ritter  in an attempt to move away from the simplicity of parameter mapping sonification. Specifically designed for interactive contexts, model-based sonification aims at benefiting from the learning abilities pertaining to everyday listening , . This technique is grounded in the human ability to associate a perceived sound and its characteristics with the source that generated it and its properties. For example we can distinguish between a broken table tennis ball and a new one by the different spectral characteristics of their impact sound, the sound of a broken ball having usually a higher centroid. Model-based sonification consists in defining a dynamic model representing a system that can evolve in time following a set of abstract laws, resulting in the creation of a virtual sounding object when data are injected into it. The sound is triggered when the user interacts with the system to activate the corresponding sounding object. The same model can be used with data coming from different domains, structurally different, and independently of their dimensionality. By analogy to the practice of a musical instrument, the model can be seen as a set of physical laws governing sound production and propagation, and the data as an instrument sounding only when manipulated by a player. Data from different domains could sound like different instruments, whereas structurally similar datasets would represent the same instrument with different qualities. To summarize, this approach allows the user to uncover relationships in the data in the same way that a musician would learn how to master an instrument.
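The contrast with parameter mapping can be illustrated with a deliberately simplified sketch. In the toy model below (our own construction for illustration, not Hermann and Ritter's implementation), each data point contributes a decaying sinusoidal "mode" whose frequency depends on its distance to the user's interaction point, so "striking" the model at different points yields different timbres:

```python
import math

def excite(dataset, probe, sr=8000, dur=0.5, base_f=200.0, decay=6.0):
    """Toy model-based sonification: each data point rings like a struck mode
    whose frequency depends on its distance to the interaction (probe) point."""
    n = int(sr * dur)
    # One mode per data point; frequency grows with distance from the probe.
    modes = [base_f * (1.0 + abs(x - probe)) for x in dataset]
    signal = []
    for i in range(n):
        t = i / sr
        env = math.exp(-decay * t)  # shared exponential decay envelope
        s = sum(math.sin(2 * math.pi * f * t) for f in modes)
        signal.append(env * s / len(modes))  # normalize by number of modes
    return signal

# Interacting ("striking") at different probe positions excites different spectra.
ring = excite([0.1, 0.5, 0.9], probe=0.5)
```

Note that no data dimension is mapped to an auditory dimension directly; the sound emerges from the model's response to interaction, which is the defining trait of the technique.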
1.3 Sonification in a historical perspective
History is rich with examples of the use of the auditory modality to represent phenomena from the physical world. The use of auditing in Mesopotamia as early as 3500 BCE to detect anomalies in accounts of commodities is currently regarded as one of the first known implementations of data sonification . Auditory displays have been exploited to perceive various physical dimensions such as temporal, physiological or kinematic variables long before sonification techniques were formalized: automatic alarm signals and striking clocks were used in ancient Greece (e.g. by combining a clepsydra with a water organ ) and medieval China to provide information about elapsed time. Pythagoreans reportedly defined a musical scale by associating different tones with various heavenly bodies according to their apparent velocity as seen from the Earth. Inspired by this approach in his treatise Harmonices Mundi (1619), Kepler transposed the Pythagorean concept of the harmony of the spheres onto a heliocentric model. He assigned to each planet a fundamental tone depending on its aphelion (maximum distance to the sun), which was then changed in pitch depending on the angular displacement of the planet as seen from the sun, thus covering a specific interval as the planet moved around its orbit. This led him to focus on a harmonic relationship between the mean distance and the orbital period of a celestial body, which he eventually discovered and formulated as his third law of planetary motion .
The stethoscope, a device performing the audification of heart rate, breath and blood pressure among others, was invented by Laënnec in 1816. Its design from the 1940s is still the one in use in everyday medical practice.
Probably one of the most well-known devices to integrate an auditory display system — popular among the public and emblematic for sonification researchers — is the Geiger counter, which translates ionizing radiation into clicks whose rate depends on the level of radiation. But what made it so popular? This particular auditory feedback was originally designed as a complement to the visualization performed on the earliest devices by an electrometer, since this tedious method of measurement was not entirely satisfying. The first use of an auditory Geiger counter was reported in 1917, when a sensitive telephone was incorporated in the electrical circuit in order to listen to the audification of electrical impulses due to the ionization of the gas in the tube of the counter . Already used nearly 40 years before for audifying a magnetically induced current in the nerves of frog legs  and in conducting wires subject to changes of molecular structure , this setup later evolved to include more advanced components for amplification and recording, loudspeakers, or headphones. By taking a step back and considering the Geiger counter as a device performing sonification of the level of ionizing radiation (instead of audification of electrical current), the issue of the mapping strategy emerges. Therein may lie the veritable key to its success: to transpose a physical quantity that is essentially non-visual — and pictured in everyone's imagination as very important because life-threatening — to the auditory modality through clicks with a varying rate.
More recent applications of auditory displays were sparsely introduced during the twentieth century (e.g. Pollack and Ficks  in 1954, Speeth  in 1961, Kay  in 1974, Yeung  in 1980), but the starting point for the rapid growth of research in this field was the first ICAD conference in 1992 and the subsequent seminal work reported in the proceedings edited by Kramer . Sonification, a particular case of auditory display, is therefore a relatively recent concern for scientists, yet it has gained a certain maturity in about twenty years of research. Even if sonification is a narrow niche of interdisciplinary applied sciences — e.g. as compared to scientific visualization — the community of researchers has grown significantly and is now producing a burgeoning number of practical applications.
2.1 Why a systematic review?
We see the need for drawing an overview of the field of sonification in order to understand what the most successful and promising strategies are when sonifying data, and provide researchers, designers, and practitioners in the field with a starting point with strong foundations that will allow the field of sonification to make a leap forward. Our aim is to provide answers to questions such as: “What are the domains of application of sonification?”, “What is the historical distribution?”, “What kind of sound is used?”, and “What are the most popular mappings?”
More in detail, we want to organize the knowledge accumulated in nearly 20 years of research, learn from previous research which mappings are natural, popular, successful, or unsuccessful, and build a foundation for future sonification works. The aim of our study is to look at previous sonification designs in order to perform a systematic review of the mappings between physical and auditory dimensions present in the literature. We should be able to identify whether some particular associations between physical quantities and sound parameters are more used than others. This would not imply that these associations are the most successful ones, but it will suggest which should be investigated first, for example when designing new sonification-based applications.
We have therefore decided to focus on publications dealing with sonification, by combining results of a large number of independent studies. This will enable the identification of patterns in the published results, common trends, and critical sticking points. We developed a method for identifying potentially interesting papers, for extracting scientific information from them, and at the same time avoiding bias between articles providing very detailed descriptions and more concise ones (see Section 3). This resulted in a large pool of papers (about 700) that we organized into a database, from which we randomly selected sixty sonification projects for the systematic review introduced in Section 4. These projects correspond to a total of 179 scientific publications, and constitute a sample of typical sonification works.
There have been previous overview works in the field of sonification. The closest work to that described in the present study was documented in the pioneering work by Walker and Lane  who proposed the design of a database for providing “a searchable online record of sonification mappings and auditory display techniques”. Other overview works include a review of electronic aids for blind people , an overview of auditory display of molecular information , a review of biofeedback technologies for neuromotor rehabilitation , a study of evaluation methods for sonification , a study of sound synthesis tools used for sonification applications , a review of methods for image sonification , a historical review of the use of sonification in a database of networked music and sound art , a review of aesthetic strategies in sonification , a recent large review of visual, haptic, auditory and multimodal display , and overviews of the whole field of sonification (e.g. by Worrall  and in the Sonification Handbook ).
2.2 Mapping information to the auditory domain
In our study we define a mapping in sonification as a function f from a subspace X of the data domain to a subspace Y of the auditory domain. No condition is required on f, which can be non-linear, or even discontinuous. However, Scaletti  indicates that, in order to qualify as sonification, the mapping should be neither completely arbitrary nor excessively complex (so that data relations remain decipherable). The domain of f, i.e. the data subspace X, can be multidimensional, making f multivariate. In the case where elements of X and Y can be ordered, the mapping is said to have a polarity if f is monotonic.
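Under this definition, polarity can be checked mechanically once a mapping is sampled on ordered inputs. The following minimal sketch (function names are ours, for illustration only) returns the polarity of a sampled mapping, or None when the mapping is non-monotonic:

```python
def polarity(f, xs):
    """Return '+', '-', or None for a mapping f sampled on ordered inputs xs."""
    ys = [f(x) for x in xs]
    diffs = [b - a for a, b in zip(ys, ys[1:])]
    if all(d > 0 for d in diffs):
        return '+'   # positive polarity: f monotonically increasing
    if all(d < 0 for d in diffs):
        return '-'   # negative polarity: f monotonically decreasing
    return None      # non-monotonic: no polarity defined

# A pitch mapping rising with the data has positive polarity:
assert polarity(lambda v: 220 * 2 ** (v / 12), range(10)) == '+'
```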
Among the sonification techniques presented in Section 1.2, parameter mapping sonification is the most widely used for representing multidimensional data as sound, being the simplest way to map data to sound continuously. Using parameter mapping sonification requires quantifying both the input data subspace and the output auditory subspace. Many dimensions can be displayed in parallel, and the mappings can be changed in real time, making this method suitable, for example, for interactive sonification applications. Whereas mappings constitute the bulk of the design of a sonification system in the case of parameter mapping sonification, they can also be identified when other sonification techniques are used.
As mentioned above, previous work on sonification mappings was initiated by Walker et al., and is summarized in Walker's doctoral dissertation . Walker and Lane's mapping database  was meant to organize sonification mappings according to three design components: nature, polarity, and scaling of the mapping (as documented in ). In doing so, they split the design process of parameter mapping sonification into three stages: choice of the mapping strategy (i.e., which auditory dimension to use to represent a specific data dimension), choice of polarity, and psychophysical scaling. This work relied on perceptual studies to guide the design process through these three successive stages, and dealt with a limited number of generic data dimensions (e.g. “Temperature”, “Pressure”, “Velocity”).
2.3 Restriction to physical quantities
In the present work, we restrict our investigation to mappings between physical and auditory dimensions. We believe that we will come up with a list of mappings that could be easily implemented using physics-based sound models such as those provided in the Sound Design Toolkit . This will allow for the design of a test bed for psychophysical experiments for the validation of the mappings and their properties, extending the pioneering study by Walker et al. .
We expect to extract a large variety of data from our database concerning domains, scales, and vocabulary. We will need to gather them into different categories at several levels, as presented in the next section. In this scope, the fact that physical quantities often correspond to concrete measures represents an advantage, the resulting categories being less subject to ambiguity than in the case of abstract data. Domains of mappings (X in Section 2.2) are subspaces of the considered data domain. Choosing the physical domain as data domain implies that the domains of the mappings we consider can most of the time be ordered (as these measures can be compared in the physical world). As a consequence, a polarity may be defined on this category of mappings.
3.1 Building up the publication database
3.1.1 Creating the publication database.
We started our study by collecting a large pool of scientific publications in order to initiate the filling of the publication database.
Any type of work about sonification may include descriptions of mappings and may therefore be included in the publication database, provided that some part of the sonified data can be qualified as physical quantities as described previously. In practice, sonification projects are most often described with an acceptable depth in articles from peer-reviewed journals and conference proceedings, doctoral theses, and patent applications. Articles are the most suitable format for our study: they are relatively short and often describe a single research project in a concise manner. We therefore chose to initiate the publication database by creating a pool of articles obtained by browsing several online journal databases (Springer Link , IEEE Xplore , ScienceDirect , PubMed , ACM Digital Library , ASA Digital Library , Ingentaconnect ), as well as proceedings from specialized conferences (ICAD , ISon , CHI , SMC , NIME , Audio Mostly ), and Google Scholar . We do not, however, limit entries of the publication database to research articles, and other types of documents have been inserted following the expansion process presented in the next subsection. Doctoral theses can include unpublished project developments. Patents are by necessity technically more comprehensive than research articles, and can be helpful whenever the description in a related article is sparse or ambiguous. Some interesting information could also be extracted from book chapters, technical reports, master theses, artistic project descriptions, and websites, though these do not represent the majority of the target documents.
The first step of the article selection was performed by filtering the online databases listed above using the single keyword “sonification”, which typically gave a few hundred results in one go. Articles employing this term in the sense commonly used in biochemistry — i.e. sonic stimulation or irradiation by sound or ultrasound waves — were immediately discarded. We were aware that this process alone would not allow us to include projects published earlier than the formalization of auditory display techniques in the beginning of the 1990s, but this issue was later resolved by the process of expanding the publication database, as presented in the next subsection. For each search result, the criterion for inclusion in the publication database was the following: the title or the abstract of the article had to foreshadow the implementation of a practical application of sonification. It should not be too general, like the presentation of a new software platform for sonification, nor too theoretical, like the introduction of a taxonomy or a design framework. Sonification of abstract data such as stock market data or web traffic was left aside since we were only interested in physical quantities.
3.1.2 Reading and expanding the publication database.
With the method described in the previous subsection, we created an initial pool of articles, from which we could start our analysis. Interesting works cited in the articles from this initial pool were progressively included into our publication database. A given work was considered “interesting” whenever it matched the criterion for article inclusion defined in the previous subsection, i.e. the implementation of a practical application of sonification of physical quantities. This could be deduced either from the title and abstract as previously, or from the description in the citing article. In this way, the publication database could be expanded by including significant works published before the 1990s (i.e. before the term “sonification” appeared).
It soon appeared that reading and analyzing all the articles collected in the publication database would take a considerable amount of time, given that the number of entries seemed to grow exponentially, at least in an initial phase. A shell script for randomly selecting the next article to read was implemented in order to keep an even distribution of topics, research groups and time frames among the articles considered for the systematic review.
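The selection step is straightforward to reproduce. A hypothetical equivalent of such a script (the entry names are invented for illustration; the original was a shell script whose details are not documented here) could be:

```python
import random

def next_article(database, already_read):
    """Pick an unread entry uniformly at random, so that no topic,
    research group, or time period is favored by reading order."""
    unread = [entry for entry in database if entry not in already_read]
    return random.choice(unread) if unread else None

# Hypothetical database entries:
pool = ["kramer1994", "walker2000", "hermann2002"]
pick = next_article(pool, already_read={"walker2000"})
```

Uniform random selection avoids the bias of reading one research group's output in a block before moving to the next.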
The systematic review is conducted on data extracted from projects, not from single articles. When an article is picked from the publication database, we first look for similar articles stored in the database in order to group them into a project. Two articles are considered “similar” when they share the same objective, e.g. when the same data are used, or when new data are collected in a resembling experiment using a resembling sonification algorithm. Similar articles are most of the time written by the same research group, often have the same funding source, and are usually published within a relatively short and homogeneous time frame. An article can form a project in itself when no similar articles can be found. More rarely, several projects can be tackled within a single article.
We tried to organize the reading of papers associated with a given project chronologically in order to follow and better understand developments and strategic choices. However, because of the backward-looking character of our searching strategy, we often found earlier references to be read at a later stage. These were either added to the group of articles associated with the project currently undergoing the reading process, or simply inserted into the publication database in case they belonged to another project.
The publication database was created according to the process described in the previous subsection in January 2011, encompassing therefore articles published in 2010 and before. In order to include more recent projects in the analysis, we repeated the creation process in January 2013 for articles published during the limited period 2011–2012. A total of 8 projects including one work from this period were included in the systematic review, representing 13.3% of the 60 projects. For future updates of the publication database, this operation can be reiterated for any period of interest.
Finally, review articles such as the ones mentioned in Section 2.1 allow both to include additional interesting references and to evaluate the progress state of the publication database.
3.2 Identifying mappings
3.2.1 Criteria for mapping inclusion.
Once the publication database was created according to the process presented above, all mappings of physical quantities to sound parameters were identified and considered for future analysis. However, some particular types of mappings were excluded from the analysis a priori.
At the beginning of the systematic review, we chose to consider audification as an absence of mapping, i.e. an absence of design strategy for the sonification system. Therefore we did not include works using audification in our publication database, and we did not count audification among the mappings to extract. This point of view was later revised, and we now believe that audification of data can be considered as a direct mapping of any data dimension to an elementary sound pressure level contributing to the creation of a waveform. Auditory graphs  were considered too abstract to be included in the analysis, as long as they did not correspond to a concrete sonification example (i.e. explicitly representing a given physical dimension). Although incorporating some data that can be classified as an objective physical dimension, e.g. the position of a cursor on a screen, auditory menus  were also judged too abstract to be included in this study.
Since the focus of the analysis was set on the design process of sonification systems, observations posterior to design (i.e. associations between physical and auditory dimensions that had not been consciously planned as part of the design but emerged from the use of the system as unexpected side-effects) were not considered as proper mappings. As an example inspired by a model-based sonification implemented by Sturm , one can consider a set of particles moving in a space subject to given physical laws of motion, each particle producing a pure tone whose frequency depends on its velocity. An increase in temperature of the whole system would give rise to a higher perceived pitch of the sound feedback due to an increased overall velocity, but if the sonification design does not specifically mention the mapping Temperature → Pitch, the only one to be retained is Velocity → Pitch. Conversely, both should be taken into account if the intention of the sonification designer to make use of this particular behavior of the system is explicitly expressed at the design stage, even if the mapping is indirect. This example shows that this criterion is particularly relevant for model-based sonification, where the “sound-link variables”  (the dimensions of the model being directly sonified) often act on the sonic result at a low level.
3.2.2 Mapping labels.
An interesting aspect of the systematic review is the possibility to determine which mappings have been assessed as successful or unsuccessful. In order to track the mapping evaluations performed in the different projects analyzed, we defined two corresponding labels: “assessed as good” (G) and “assessed as bad” (B). The label G was assigned to a mapping whose efficiency was found to be significantly better when tested against other mappings corresponding to the sonification of the same physical dimension. The label B was assigned similarly when the efficiency of a mapping was tested and found to be significantly worse than that of other mappings, or whenever a mapping was reported inefficient for performing a given task. It is important to note that we do not count the ability to perform a task as validation of the efficiency of a mapping if it has not been compared to another mapping. On the other hand, it seems reasonable to consider the inability to perform a task as proof of a mapping's inefficiency.
Another label (F) was used to characterize mappings mentioned as interesting for a future application, but not implemented at the time the work was published.
3.2.3 Classification process.
Due to the interdisciplinary nature of sonification, we expect many different types of physical quantities to be sonified. We conducted a classification process aimed at gathering similar data under intermediate-level conceptual dimensions. This process could be described as organizing chaotic information to form categories based on similarity and natural relationships. To this end, we used affinity diagrams, also known as the KJ method, a popular tool used in management and planning since the 1960s. Physical dimensions directly extracted from the projects were written on post-it notes that were grouped by similarity on blank pages (Figure 1). Several clusters emerged and were assigned a label representing an intermediate-level conceptual dimension. For instance, the intermediate-level dimension Density encompasses the following lower-level variables: bulk density, population density, density of footsteps, number of people, local blood flow density level, oxygen saturation in arterial blood, water or forest density on a map, local data density, end tidal carbon dioxide concentration measured in respiration, density of He++ ions, neutron density reflecting material porosity, and spatial period of period-based textured images. The resulting classification, presented in Section 4, is inevitably based on the authors' interpretation of the data, and therefore incorporates elements of subjectivity. The intermediate-level dimensions were gathered into five high-level categories in order to reduce the dependency of the results on this subjectivity.
Each low-level dimension was written on a post-it note. The notes were then moved to form clusters based on their degree of similarity, constituting the intermediate-level dimensions used to reference mappings in this systematic review for both physical and auditory domains.
A similar issue occurs with auditory dimensions due to variations in terminology. We used the same process in the auditory domain as for classifying the physical dimensions, gathering several words corresponding to the same notion into intermediate-level conceptual dimensions, which we subsequently grouped into five higher-level categories.
Arfib et al., defining a theoretical framework for mapping gesture data to sound, make a clear distinction between dimensions corresponding to the perceptual effect on the listener (belonging to the “sound perceptual space”) and dimensions relative to “synthesis model parameters”. As mentioned in Section 1.1, sonification is indivisible from its purpose, which is to communicate information to a human user. We therefore chose to align our classification with the sound perceptual space, focusing on perceptual effects rather than on sound synthesis techniques. A simple illustration is the distinction between Frequency and Pitch. Whereas it is well known that the two are directly related to each other, they belong to different spaces in the classification of Arfib et al., being respectively a synthesis model parameter and a dimension of the sound perceptual space. Sonification designers often use the two terms interchangeably to describe a given mapping. According to our interpretation of those design descriptions, the resulting perceptual effect corresponding to the use of either word is usually identical. Following our perceptual approach, all the mappings concerned are classified as associating a specific physical dimension with Pitch. In some other cases, however, the distinction is clearly made by the sonification designer and is a well thought-out part of the design. For example, Grond and Dall'Antonia map the distance between two atoms of a molecule to the center frequency of several superimposed resonant filters, which has the effect of modifying the timbre of an earcon rather than its pitch.
Various levels of description are expected to be found in the publication database, depending on the background and interests of the researchers designing the sonification system. For instance, the same mapping effect in the auditory dimension could be described rather approximately as a change in Timbre, more precisely as a change of Brightness, more specifically as an increase of the Frequency of the spectral centroid, more technically as an increase of the Cutoff frequency of a bandpass filter used to synthesize the sound, and so on. That being said, different levels of description can also reflect objective differences in the mapping design. To address this problem, the classification was built with enough flexibility to incorporate a multi-level hierarchy, taking into account the most detailed level of description available in the projects, for both physical and auditory dimensions.
Another source of disparity is the use of data sharing the same physical nature but on different scales. Gathering the dimensions according to their nature results in physical homogeneity, but also in mixing extremely different scales within the same dimension. One can wonder whether it is pertinent to consider, for instance, temperature measurements in daily weather records as having the same significance as the temperature measured inside a nuclear reactor, or the core temperature of a star. The three variables described above belong to the same category of the current classification (Temperature) but could be distinguished at a lower level if the need for a finer distinction emerges in the future.
In light of the foregoing, the best solution is probably a multi-level and multi-scale structure for the classification of both physical and auditory dimensions. We provide an example of each in our current classification of auditory dimensions, further developed in Section 4.3: Spatialization incorporates a detailed multi-level set of subcategories, whereas Duration includes several scales. It should be noted that the aim of the present article is not to present a kind of ultimate classification, if such a thing is even possible. We shaped our classification by ensuring plasticity, i.e. the possibility to evolve dynamically to adapt to context changes. A context change here can correspond to the appearance of new data categories, or to a hitherto unprecedented discrimination of a data dimension according to different subgroups (e.g. scales). As it turned out, gaining a better overview of the data resulted in the emergence of more stable categories. We believe that we have reached a relatively stable classification for auditory dimensions and for high-level categories in the physical domain. Future developments of the classification could be validated against the opinions of researchers from diverse fields, for example using a distributed and cooperative version of the KJ method over the World Wide Web, or by using coding schemes. The latter, combined with inter-coder reliability tests, could yield a more robust validation of current and future classifications.
Presentation of the Data
4.1 Publication database
The publication database currently comprises 739 entries. Sixty projects were analyzed, corresponding to 179 publications referenced in the present article, and are presented in Section 4.4. The remaining 560 entries of the database, selected as potentially interesting but not yet analyzed, can be browsed online.
4.2 Sonified physical dimensions
In the domain of sonification mappings, i.e. sonified physical quantities, 33 dimensions emerged from the classification process and are presented in Table 1 along with a label (from P01 to P33) for reference in the remainder of the article. These dimensions, whose names are self-explanatory, are distributed over five high-level categories: Kinematics, Kinetics, Matter, Time, and Dimensions. Kinematics refers to quantities used to characterize motion and position. Kinetics refers to quantities linked to the causes of motion, and by extension those related to energy. Matter refers to properties of matter. Time refers to characteristics of a signal in the time-frequency domain. Dimensions refers to the geometry of objects and spaces.
4.3 Auditory dimensions used in sonification
In the codomain of sonification mappings, i.e. sound parameters, 30 dimensions emerged from the classification process and are presented in Table 2 along with a label (from A01 to A30) for reference in the remainder of the article. These dimensions are distributed over five high-level categories: Pitch-related, Timbral, Loudness-related, Spatial, and Temporal. Six dimensions belong to at least two high-level categories. Most of the names of the dimensions are self-explanatory. In the following we describe those requiring further clarification.
Timbre is usually defined as comprising all the characteristics allowing us to distinguish between two sounds having identical pitch and loudness. Because this is a negative definition, it often appears judicious to describe a mapping more specifically than using the term “timbre”. The role of the high-level category Timbral (labels A03 to A14) is to cover this usual definition of timbre, whereas the intermediate-level dimension called Timbre (A03) in our classification corresponds to all the cases where no further precision on the mapping was given. Instrumentation (A04) refers to cases where musical instruments change depending on the sonified data, whereas Polyphonic content (A05) refers to the number of parts in a polyphonic piece, i.e. the number of instruments rendered in the playback of the piece. Spectral power (A08) encompasses operations performed on the sound spectrum that are not covered by the remaining dimensions listed in the high-level category Timbral.
Spatialization (A17) corresponds to the position of a sound source in space and time. Since it is often described through several interrelated aspects, it constitutes a good illustration of multi-level classification. These aspects include equipment (e.g. binaural earphones, stereo loudspeakers, array of loudspeakers), technique (e.g. Ambisonics, Vector Base Amplitude Panning, Wave Field Synthesis), as well as quantities centered on the perceptual effect on the listener (e.g. Interaural amplitude difference, Interaural time difference, Interaural frequency difference), or involving both technical and perceptual aspects (e.g. Head-related transfer function). While enumerated in Table 3, the different aspects are considered as a single mapping of the particular physical dimension to Spatialization. This applies even to cases where different aspects result in divergent assessments of efficiency, materialized by different mapping labels as described in Section 3.2.2. For instance, one could map the orientation of the listener towards a sounding object either to Interaural Time Difference or Interaural Amplitude Difference with varying success. However, from the point of view of the sonification designer, the goal remains unchanged: it is to map Orientation (P06) to Spatialization (A17).
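As an illustration of the perceptual quantities listed above, the interaural time difference for a source at a given azimuth can be approximated with Woodworth's classic spherical-head formula. This is a sketch, not code from any reviewed project; the head radius and speed of sound are typical assumed values.

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Interaural time difference in seconds for a source at the given
    azimuth (0 = straight ahead, 90 = fully lateral), using Woodworth's
    spherical-head approximation: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))
```

For a fully lateral source this yields roughly 0.66 ms, in line with the commonly cited maximum ITD for an average human head.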
Tempo (A19) should be understood in accordance with its definition for the MIDI format: it represents a high-level control on the playback speed independent of the density of sound events.
Duration (A20) corresponds to distance on the time axis, i.e. the time elapsed between two events. The quantity usually referred to as “tone duration” corresponds to the time elapsed between a tone onset and its offset. Another quantity commonly used in the study of music performance is Inter-Onset Interval (IOI), defined as the time elapsed between two successive tone onsets. Tone duration and IOI are therefore included in the same category. The reason for not distinguishing between these two quantities in our classification originates in the lack of precision found in many publications describing mappings, which often mention the “duration” of sound stimuli without specifying clearly whether it corresponds to tone duration or IOI.
Duration was chosen to illustrate the multi-scale classification due to the dependency of the perception of duration on time scale. According to Sethares, sonic events occurring at different time scales activate different cognitive structures calling on different types of memory (echoic, short-term, or long-term memory). This disparity of perceptual impressions was taken into account by Saue, who selected four time scales to be used in the context of sonification of large datasets: spectral, rhythmic, event, and ambient. These time scales were used to derive four elements of our classification.
The first element based on time scale is Spectral duration, corresponding to the smallest time scale (less than 50 ms). Sethares describes how echoic memory operates at this scale, performing the “fusion” of sonic events into coherent cognitive structures such as pitch and timbre. For his part, Saue explains that these sonic events are perceived as “variations in timbre and localization”. For the reasons explained above, in our bottom-up approach to classifying auditory dimensions, timbre is often described only indirectly by sonification designers, through low-level manipulation of the signal or the assembly of temporal elements belonging to the spectral time scale (e.g. grain duration in granular synthesis). On the other hand, both spatialization and pitch were found to be described more explicitly by sonification designers. As a consequence, Spectral duration was represented by an auditory dimension in itself (A28), belonging to the high-level categories Temporal (by nature) and Timbral (by design).
The three other elements based on time scale constitute the multi-scale dimension Duration (A20): Rhythmic duration (A201) corresponds to a duration between 100 ms and 2 s, calling on short-term memory, and described by Saue as “perceived as relative changes to events inside auditory streams”. Event duration (A202) corresponds to a duration of over 2 s, calling on long-term memory, and described by Saue as “perceived as irregularly spaced singular events” and by Sethares as “disconnected events”. Ambient duration (A203) refers to dynamic continuous auditory streams that are not perceived as events but, according to Saue, “as always present (or not perceived at all); a state of no-change or slow change”. Finally, as in the case of a multi-class dimension, a last element has to be added to handle cases where no specific scale is mentioned (A204).
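The time-scale boundaries above can be captured in a small helper (a sketch with names of our own choosing). Ambient duration (A203) describes continuous streams rather than a single event length, so it is omitted here; the 50–100 ms gap between the spectral and rhythmic scales, left open in the text, is returned as unclassified.

```python
def duration_scale(seconds):
    """Classify a duration according to the time scales reported from
    Sethares and Saue: spectral (< 50 ms), rhythmic (100 ms - 2 s),
    event (> 2 s)."""
    if seconds < 0.050:
        return "spectral"      # A28: fused into pitch/timbre percepts
    if 0.100 <= seconds <= 2.0:
        return "rhythmic"      # A201: short-term memory, within-stream changes
    if seconds > 2.0:
        return "event"         # A202: long-term memory, singular events
    return "unclassified"      # 50-100 ms transition region, not specified
```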
4.4 Description of the projects
The sixty projects analyzed in this systematic review are presented in Table 3. For each project, we provide a brief description of the work through the prism of sonification, focusing on this particular aspect rather than on the researchers' own research questions. Interactivity, an important characteristic of a sonification system highlighted by Hunt and Hermann, was indicated by the use of the words “interactive” and “real-time” in the description. We also describe the sonic material that was used, ranging from detailed descriptions of the sound synthesis to software and hardware platforms, in order to get a sense of the tools used by sonification researchers, a concern shared by Bearman and Brown in their recent review study. Finally, the list of mappings identified according to the process described in Section 3.2 is displayed. The list of abbreviations used in Table 3 is presented in Table 4.
Results and Discussion
5.1 Mapping frequencies
The principal measure considered in the systematic review is the frequency of use of mappings. In a preliminary study, we formulated three assumptions to be verified against a larger number of sonification projects. These assumptions, based on 54 publications representing 21 projects, constitute our preliminary hypotheses concerning sonification mappings and are summarized below.
A large proportion of sonification mappings follow the logic of ecological perception. Mappings often perform a sort of simulation of underlying physical phenomena, which can be implemented either directly or metaphorically. These natural associations between sound and its meaning regarding physics were called “universal relationships” by Hermann and Ritter and depicted as “deeply engrained in the way we — usually subconsciously — pick up meaning from sound events”.
Pitch is by far the most used auditory dimension in sonification mappings. Typically, the design process for a sonification system starts by mapping the most important data dimension to the frequency of a pure tone: it is, as Henkelmann puts it, the “Hello World” of sonification. Pitch is known to be the most salient attribute of a musical sound, described as “the most characteristic property of [musical] tones, both simple (sinusoidal) and complex” and “the most common dimension for creating a system of musical elements”. Although creating a sonification system is not equivalent to composing music, sonification designers are certainly influenced by music, its structural forms, and its aesthetic values.

Spatial auditory dimensions are almost exclusively used to sonify kinematic quantities. Since kinematic quantities describe position and motion in space, they find a natural counterpart in the perceived position and motion of a sound source.
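The “Hello World” mapping described above can be sketched in a few lines (an illustrative sketch, not code from any reviewed project; all names are ours). Here the data value is mapped exponentially to frequency, so that equal data steps yield equal pitch intervals, and the tone is rendered as raw samples.

```python
import math

def data_to_frequency(x, x_min, x_max, f_min=220.0, f_max=880.0):
    """Map a data value to the frequency (Hz) of a pure tone.
    Exponential interpolation: equal data steps -> equal pitch intervals."""
    t = (x - x_min) / (x_max - x_min)
    return f_min * (f_max / f_min) ** t

def pure_tone(freq, dur=0.5, sr=44100, amp=0.5):
    """Render the tone as a list of float samples."""
    n = int(dur * sr)
    return [amp * math.sin(2.0 * math.pi * freq * i / sr) for i in range(n)]
```

With the default two-octave range, the midpoint of the data range lands exactly on 440 Hz, one octave above the lower bound.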
5.1.2 Census of mapping occurrences.
A total of 495 occurrences of mappings were identified within the 60 projects analyzed. In order to determine the most popular mappings, i.e. those occurring the greatest number of times, we first performed a simple census by counting all the occurrences identified in this systematic review, which are listed in Table 3. We could then establish a ranking of the most used mappings, the fourteen most popular of which are presented in Table 5.
As explained previously, multiple occurrences of mappings of the same low-level physical dimension to subclasses of Spatialization (A17) were counted as a single occurrence, although all are referenced in Table 3. According to our classification method described in Section 3.2.3, independent low-level physical dimensions can be grouped together in the same intermediate-level conceptual dimension. Hence it is possible to identify two mappings of the same intermediate-level dimension to Spatialization, as in Projects 49, 50, and 52. Spatialization is the only auditory dimension belonging to the high-level category Spatial present among the fourteen most popular mappings. It is associated with the physical dimensions Location (P01) and Motion (P07), both belonging to the high-level category Kinematics, which supports Hypothesis 3.
It can be observed that more than half of the most popular associations between physical and auditory dimensions involve Pitch (A01), which supports Hypothesis 2.
All but one of the remaining mappings (i.e. those not involving Pitch) correspond to natural perceptual associations, which supports Hypothesis 1. The two mappings Location → Spatialization and Motion → Spatialization are easily understood: since Spatialization corresponds to the representation of a sound source in space and time, it is the natural representation of the Location and Motion of sounding objects. The mapping Distance → Loudness can be explained by the inverse distance law: as sound waves spread from the source through the transmission medium (e.g. air), sound pressure decreases in inverse proportion to the distance to the sound source, and loudness decreases accordingly. The mappings Energy → Loudness and Signal amplitude → Loudness can be explained in that more energy dissipation leads to a larger amplitude of sound waves and therefore to an increase in loudness. It should be noted that these considerations take into account the polarity of the mapping, which was not studied in our systematic review. Further studies are required in order to verify this assumption.
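The inverse distance law invoked above can be made concrete (a minimal sketch; the function names are ours): pressure amplitude falls as 1/d relative to a reference distance, which corresponds to a drop of about 6 dB per doubling of distance.

```python
import math

def inverse_distance_gain(distance, ref_distance=1.0):
    """Amplitude gain under the inverse distance law: pressure ~ 1/d.
    Distances inside the reference radius are clamped to avoid blow-up."""
    return ref_distance / max(distance, ref_distance * 1e-6)

def attenuation_db(distance, ref_distance=1.0):
    """Level drop relative to the reference distance, in dB
    (about -6 dB per doubling of distance)."""
    return 20.0 * math.log10(inverse_distance_gain(distance, ref_distance))
```

A Distance → Loudness mapping can then simply scale the signal by `inverse_distance_gain(d)`, reproducing the listener's everyday experience of distant sources being quieter.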
The only mapping in Table 5 verifying neither Hypothesis 1 nor Hypothesis 2 is Density → Duration.
5.1.3 Use of auditory dimensions.
In order to verify Hypothesis 2, we considered the frequency of use of auditory dimensions independently of the sonified physical quantities. The twelve most used auditory dimensions are presented in Table 6, together with their proportion of the total number of mapping occurrences. We performed pairwise Student's t-tests on this set of proportions in order to determine which auditory dimensions were used significantly more often than others. The third column in Table 6 shows how many auditory dimensions were used significantly less often than the one in the first column.
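As an illustration of testing whether one auditory dimension is used significantly more often than another, here is a sketch using a two-proportion z-test on occurrence counts. Note that the study itself reports pairwise Student's t-tests, and the counts below are hypothetical, not those of Table 6.

```python
import math

def two_proportion_z(count1, count2, total):
    """z statistic comparing two usage proportions count1/total and
    count2/total under a pooled-variance two-proportion test."""
    p1, p2 = count1 / total, count2 / total
    pooled = (count1 + count2) / (2.0 * total)
    se = math.sqrt(2.0 * pooled * (1.0 - pooled) / total)
    return (p1 - p2) / se

def p_value_one_sided(z):
    """Upper-tail probability of the standard normal, via erfc."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

With 495 total occurrences and hypothetical counts of 170 versus 60, `two_proportion_z(170, 60, 495)` is far above 1.96, so the difference would be significant at the 5% level.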
It can be observed that Pitch (A01) is the most used auditory dimension in sonification mappings, being used significantly more often than all 29 other auditory dimensions in our classification. Hypothesis 2 is thus verified for the set of publications included in the present systematic review. Other frequently used auditory dimensions are Loudness (A15), Duration (A20), and Spatialization (A17).
5.1.4 Distribution of mappings: high-level trends.
In order to examine significant discrepancies in the distribution of mapping occurrences, we performed statistical tests on the largest possible sample population by gathering physical and auditory dimensions into larger categories according to the classification presented in Section 3.2.3. The high-level categories corresponding to the classification of physical (respectively auditory) dimensions are presented in Table 1 (respectively Table 2). This has the advantage of reducing the subjective character of our classification, as the five high-level categories are relatively stable and the inclusion of a particular mapping is less subject to debate. On the other hand, this information sieve is probably too coarse to appropriately describe the design stage of a sonification system, which is more likely to involve intermediate- or low-level dimensions. The distribution of mapping occurrences at an intermediate level is examined in the next section.
Mapping occurrences were aggregated for all dimensions, for both physical and auditory domains, and summed over high-level categories. As previously, mappings referenced in Table 3 involving the multi-class dimension Spatialization (A17) were considered as a single mapping when corresponding to the same low-level physical dimension. Mappings of physical dimensions to subclasses of the multi-scale dimension Duration (A20) were considered as independent from each other and were therefore aggregated separately. In the case of a mapping of a given physical dimension to an auditory dimension belonging to two or more high-level categories, the mapping was counted once for each concerned high-level category. The resulting distribution of mapping occurrences is shown in Table 7.
Since we consider the choice of sonification mappings as a design problem, we set our focus on the typical issue for a sonification designer, i.e. establishing the type of auditory dimension to use in order to map a given physical dimension. To be able to compare mapping strategies for the different high-level categories in the physical domain, we normalized the data according to the number of mappings identified for these categories. That is, for each row corresponding to a high-level category in the physical domain, we computed the proportion of mapping occurrences corresponding to each high-level category in the auditory domain. These normalized proportions are presented in Table 7 and displayed in Figure 2.
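The row normalization described above can be sketched as follows; the category names and counts in the usage example are hypothetical, not the values of Table 7.

```python
def normalize_rows(counts):
    """Given a dict mapping each high-level physical category to a dict of
    occurrence counts per auditory category, return row-normalized
    proportions so that mapping strategies can be compared across physical
    categories regardless of how often each category was sonified."""
    out = {}
    for phys, row in counts.items():
        total = sum(row.values())
        out[phys] = ({aud: n / total for aud, n in row.items()}
                     if total else {})
    return out

# Hypothetical usage:
# normalize_rows({"Kinematics": {"Pitch-related": 3, "Spatial": 1}})
# gives {"Kinematics": {"Pitch-related": 0.75, "Spatial": 0.25}}
```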
It can be observed that Loudness-related auditory dimensions are used mainly to sonify physical quantities belonging to the high-level category Kinetics. Spatial auditory dimensions are used mainly to sonify physical quantities belonging to the high-level category Kinematics.
Our objective is then to determine which categories in the auditory domain were used significantly more often for sonifying a specific high-level category of physical dimensions. For each high-level category in the physical domain, we performed pairwise Student's t-tests between high-level categories in the auditory domain on the normalized percentages. The following significant differences were observed:
- For sonifying physical dimensions belonging to the high-level category Kinematics: Pitch-related and Temporal auditory dimensions were found to be used significantly more often than Loudness-related auditory dimensions.
- For sonifying physical dimensions belonging to the high-level category Kinetics: Spatial auditory dimensions were found to be used significantly less often than auditory dimensions belonging to all other high-level categories (Pitch-related, Loudness-related, Temporal, and Timbral).
- For sonifying physical dimensions belonging to the high-level category Matter: Spatial auditory dimensions were not used at all. All other high-level categories (Pitch-related, Loudness-related, Temporal, and Timbral) were used significantly more than 0%.
- For sonifying physical dimensions belonging to the high-level category Time: Timbral auditory dimensions were found to be used significantly more often than Loudness-related auditory dimensions. Spatial auditory dimensions were found to be used significantly less often than auditory dimensions belonging to the high-level categories Pitch-related, Temporal, and Timbral.
- For sonifying physical dimensions belonging to the high-level category Dimensions: Loudness-related auditory dimensions were found to be used significantly less often than auditory dimensions belonging to the high-level categories Pitch-related and Timbral. Spatial auditory dimensions were found to be used significantly less often than auditory dimensions belonging to the high-level categories Pitch-related, Temporal, and Timbral.
Taking the dual approach, one could investigate the use of auditory dimensions in sonification works, i.e. what types of physical dimensions have been sonified using specific auditory dimensions. As explained above, we normalized the number of mapping occurrences in each high-level category in the physical domain against the total number of mapping occurrences identified in this category. The percentages obtained in this way are independent of the volume of projects implementing the sonification of specific physical dimensions, which allows us to compare the type of auditory dimensions used in the sonification depending on the high-level category of the physical input data. Our objective is then to determine which categories in the physical domain were sonified significantly more often using a given high-level category of auditory dimensions. For each high-level category in the auditory domain, we performed pairwise Student's t-tests between high-level categories in the physical domain on the normalized percentages. The following significant differences were observed:
- Using Pitch-related auditory dimensions: no significant differences were found between high-level categories in the physical domain.
- Using Loudness-related auditory dimensions: physical dimensions belonging to the high-level category Kinetics were found to be sonified significantly more often than physical dimensions belonging to the high-level categories Kinematics and Dimensions.
- Using Temporal auditory dimensions: no significant differences were found between high-level categories in the physical domain.
- Using Timbral auditory dimensions: no significant differences were found between high-level categories in the physical domain.
- Using Spatial auditory dimensions: physical dimensions belonging to the high-level category Matter were never sonified this way. Physical dimensions belonging to the high-level category Kinematics were found to be sonified significantly more often than physical dimensions belonging to all other high-level categories (Kinetics, Time, and Dimensions). Hypothesis 3 is thus verified for the set of publications included in the present systematic review.
It could be considered surprising not to observe an intrinsically natural association, namely the sonification of physical dimensions belonging to the high-level category Time using Temporal auditory dimensions. Due to the temporal nature of sound, this association embodies a trivial case of mapping where input and output data share the same physical nature. The fact that this association was not highlighted by the present study can be explained by a bias in the mapping identification process, which leaves one of the most common mappings implicit. In fact, at the very least, every project described as “interactive sonification” or “real-time sonification” could be seen as including a mapping from the dimension Instant (as physical input data) to the dimension Instant (as auditory output).
5.1.5 Distribution of mappings: intermediate-level trends.
While in the previous section we investigated trends in sonification design at a high level of description, it is often essential for sonification designers to make a clear distinction between intermediate-level dimensions within the same high-level category — e.g. by choosing between mapping Velocity (P02) and Acceleration (P03) to dissimilar auditory dimensions. These distinctions do not appear in the high-level classification, and finer trends may also level out when grouped together. Similarly, high-level categories in the auditory domain do not provide detailed information on the expected perceptual effects. For instance, mapping a given physical dimension to Allophone (A07) or Brightness (A12) can be perceived differently by the listener, leading to variable efficiency. On the other hand, low-level dimensions, being often very specific to the domain of application, would not allow us to identify statistically significant differences in the use of mappings. Intermediate-level dimensions presented in Tables 1 and 2 represent a more suitable level of description for attempting to set up design guidelines, or for investigating the use of sonification as in the present study.
The method used for identifying trends within high-level categories is not well suited to the relatively small number of mapping occurrences for each association between a physical dimension and an auditory dimension. In many cases, no occurrence at all of a particular mapping was found. The proportions of mapping occurrences can be obtained in the same manner as in the previous section, computing percentages normalized by the total number of mapping occurrences identified for each physical dimension. However, when performing pairwise Student's t-tests on these proportions, significant differences could be obtained in only very few cases. We chose instead to focus on the mappings having a proportion of use significantly greater than zero. In Table 8 we present every physical dimension involved in at least one such mapping, together with the total number of identified mapping occurrences involving this dimension. The third column displays the number of auditory dimensions used at least once to sonify this physical quantity. Finally, the auditory dimensions used significantly more than 0% of the time are listed, together with the normalized percentage of use of the corresponding mapping. Physical dimensions not involved in such a mapping (i.e. for which no mapping was found to be used significantly more than 0% of the time) are not displayed in the table. In five cases, a particular auditory dimension was found to be used significantly more often than the other auditory dimensions used at least once to sonify the same physical dimension. These cases are highlighted in Table 8.
5.1.6 Example of multi-class dimension.
In our classification, we introduced an example of a multi-class auditory dimension by identifying different aspects (technical, theoretical, perceptual) of the implementation of Spatialization (A17). As explained previously, the classes defined in our classification correspond to these various aspects and are therefore not mutually exclusive (i.e., an occurrence of a mapping can belong to several classes simultaneously). An analysis of the distribution of mapping occurrences over these classes provides information about how sonification designers implement and use spatial sound. The proportion of mapping occurrences attached to each class relative to the total number of mapping occurrences involving Spatialization is presented in Table 9. It can be observed that Stereo panning, which can be considered a very basic implementation of spatial sound, represents more than half of the uses of Spatialization. Pairwise Student's t-tests between the proportions attached to the different classes show that Stereo panning is used significantly more often than all other classes.
We could potentially go further and conduct investigations similar to those in Sections 5.1.4 and 5.1.5, in order to examine how the distribution of mapping occurrences among the classes of Spatialization depends on the type of input physical dimension. However, at the current stage of the study, these investigations would probably not provide conclusive results: at an intermediate level, the small number of occurrences identified for each distinct mapping makes the identification of marked trends unlikely; at a high level, it has been shown previously that Spatial auditory dimensions are used almost exclusively to sonify physical dimensions belonging to the high-level category Kinematics.
5.1.7 Example of multi-scale dimension.
We also provided an example of a multi-scale dimension in our classification, namely Duration (A20), described in detail in Section 4.3. This auditory dimension was divided into three subclasses representing different time scales (rhythmic, event, ambient) and one subclass for cases where no time scale was specified. In the same way as in the multi-class example, a multi-scale structure allows us to study this dimension at a higher level of detail by investigating the use of each scale as a separate dimension, either regardless of the input physical dimensions (as in Section 5.1.3) or depending on them (at a high level as in Section 5.1.4, or at an intermediate level as in Section 5.1.5). Unlike the classes of Spatialization (A17) presented in the previous subsection, the different scales of Duration are mutually exclusive by definition. Although considered a separate dimension because it belongs to an additional high-level category, the auditory dimension Spectral duration (A28) was included in the multi-scale analysis. In this way, we could consider the full range of durations by entirely reproducing Saue's classification of time scales.
As in the multi-class example, the small number of mapping occurrences in each category did not allow us to observe significant differences related to intermediate-level physical dimensions. In Table 10 we show the proportion for each scale normalized by the total number of mapping occurrences in each high-level physical category, as well as the proportion of mapping occurrences for each scale regardless of the physical dimension. Pairwise Student's t-tests across high-level categories in the physical domain revealed no significant differences in the use of time scales. Pairwise Student's t-tests across the different scales showed that Rhythmic duration (A201) was used significantly more often than all other scales when sonifying physical quantities belonging to the high-level category Kinematics, as well as regardless of the physical dimension.
5.1.8 Assessed mappings.
In order to gain maturity, the field of sonification requires sound evaluation methods to be developed and extensively used by the community. Recent review studies pointed out that evaluation of sonification systems is not yet systematic, although crucial from a design perspective. In most cases where some kind of evaluation is conducted, it consists either of a functional qualification of the sonification (i.e., showing that the display enables the execution of a given task) or of an assessment of its efficiency (i.e., investigating to what extent it has a valuable effect). As a consequence, the majority of these studies focus on the assessment of the auditory display as a whole rather than investigating sonification mappings in detail, which means that mappings are often chosen in an ad hoc manner, or arbitrarily. The issue of mapping has been tackled by only a few studies specifically focusing on psychoacoustical aspects, such as Projects 15 and 27 in the present systematic review.
As described in Section 3.2.2, we considered a mapping to be assessed as good (respectively assessed as bad) when it was found significantly more effective (respectively less effective) than other mappings in objective tests. Mappings that were described as not functional were also assessed as bad. Assessment labels were assigned to a total of 30 mapping occurrences (15 assessed as good, 15 as bad), representing 6.1% of the 495 mapping occurrences identified in the systematic review. All the involved mappings were assessed only once, with the exception of Velocity → Tempo (P02 → A19, assessed as good twice) and Motion → Rhythmic duration (P07 → A201, assessed as bad twice). Seven projects, representing 11.7% of the 60 projects considered in the systematic review, included at least one mapping occurrence with an assessment label. These rather small proportions highlight the general tendency in sonification works to place little focus on the evaluation of individual mappings.
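The percentages above follow directly from the stated counts, and can be checked with a two-line computation:

```python
# Proportions reported in the text, recomputed from the stated counts.
assessed_occurrences = 30      # 15 assessed as good, 15 as bad
total_occurrences = 495
projects_with_assessment = 7
total_projects = 60

print(round(100 * assessed_occurrences / total_occurrences, 1))   # 6.1
print(round(100 * projects_with_assessment / total_projects, 1))  # 11.7
```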
5.1.9 Future mappings.
Whenever a mapping was mentioned as a potentially interesting application but was not implemented within the framework of the project, it was assigned a special label (F). In total, 17 mapping occurrences were labeled as “future application”, representing 3.4% of the 495 mapping occurrences identified in the systematic review. Even though these mappings remained virtual, the researchers had engaged in advanced reflection on the sonification design. For this reason, we decided not to distinguish these mapping occurrences from normal occurrences (i.e. those actually implemented) when performing the statistical tests presented previously.
5.1.10 Keyword-based analysis.
Beyond the classification into conceptual intermediate-level dimensions and the grouping into high-level categories introduced previously, it is possible to apply various filters to the low-level dimensions in order to look for specific information. As an example, we filtered the low-level physical dimensions according to two complementary keywords: Horizontal and Vertical. In the following we present the low-level dimensions included in the category formed by each keyword. Each of them belongs to an intermediate-level dimension in our original classification, which is specified via its label.
- For the keyword Horizontal: horizontal position (P01), x position (P01), map: longitude (P01), azimuth angle (P06), radial direction (P06), horizontal direction (P07), horizontal movement of mouth corners (P07), width (P32), texton width (P32).
- For the keyword Vertical: vertical position (P01), y position (P01), vertical location of a maximum (P01), map: latitude (P01), slope (P06), vertical displacement (P07), vertical movement of lips (P07), vertical direction (P07), vertical displacement deviation magnitude (P07), vertical force (P10), height (P32), map: altitude (P32), altitude deviation (P32).
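The keyword filter applied above can be sketched as a case-insensitive substring match over the low-level dimension labels. This is a simplification and an assumption about the procedure: the categories in the study appear to be curated by hand, since e.g. "width (P32)" belongs to the Horizontal category without containing the literal keyword. The labels below are drawn from the lists above.

```python
# Hedged sketch of a keyword filter over low-level physical dimension labels.
low_level_dims = [
    ("horizontal position", "P01"),
    ("azimuth angle", "P06"),
    ("vertical position", "P01"),
    ("vertical force", "P10"),
    ("width", "P32"),
    ("height", "P32"),
]

def filter_by_keyword(dims, keyword):
    """Return dimensions whose label contains the keyword, case-insensitively."""
    kw = keyword.lower()
    return [(label, code) for label, code in dims if kw in label.lower()]

print(len(filter_by_keyword(low_level_dims, "Vertical")))  # 2
```

A hand-curated category would simply extend the result of such a filter with the semantically related labels the substring match misses.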
The same statistical tests as those performed in the previous subsections can be applied to keyword-based categories. For the sake of illustration, we investigated intermediate-level trends for the two categories corresponding to the keywords, in the same manner as in Section 5.1.5.
Normalized proportions of mapping occurrences were computed. Mappings used significantly more than 0% of the time are shown in Table 11. For each keyword-based category, the total number of mapping occurrences identified is presented together with the number of auditory dimensions used at least once. Finally, auditory dimensions used significantly more than 0% of the time are listed, together with the normalized percentage of use of the corresponding mapping. Cases where an auditory dimension was found to be used significantly more often than the other auditory dimensions used at least once to sonify the same keyword-based category are highlighted in the table. We can observe that physical dimensions related to horizontality are most often sonified through Spatialization, while those related to verticality are most often sonified via changes in Pitch. This trend was not visible in the original classification because the low-level physical dimensions belonging to the two keyword-based categories were grouped into different intermediate-level dimensions.
Other interesting trends could be revealed by filtering physical or auditory data dimensions using carefully selected keywords. For instance, we could build up categories gathering low-level physical dimensions related to Uncertainty, e.g. including dimensions such as the deviation of various physical quantities from a reference value. We could also consider a specific domain of application, e.g. the sonification of EEG by defining a category gathering all low-level dimensions originating from that domain.
5.2 Other trends
In the previous subsections we focused on mapping frequencies in order to investigate associations between physical and auditory dimensions that have been used in past sonification works. Different approaches can be taken to extract other types of information from the sixty projects we have analyzed.
5.2.1 Project-related trends.
Instead of taking a mapping-centered approach as in Section 5.1, we can investigate project-related trends. The same type of statistical tests can be performed, considering the proportion of projects using a specific mapping or dimension. For the sake of illustration, we investigated the use of auditory dimensions throughout the sixty projects included in this study. In Table 12, we present the proportion of projects using specific auditory dimensions at least once. The eight auditory dimensions used by the largest number of projects are shown in the table. We performed pairwise Student's t-tests on this set of proportions in order to determine which auditory dimensions were used by significantly more projects than others. The third column in Table 12 shows how many auditory dimensions were used by significantly fewer projects than the one in the first column.
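The switch from mapping-centered to project-centered counting amounts to counting each (project, dimension) pair at most once. A minimal sketch, with invented project data (only the A codes follow the paper's labeling scheme):

```python
from collections import Counter

# Invented project-to-auditory-dimension data: sets ensure each dimension
# is counted at most once per project.
projects = {
    "proj1": {"A01", "A17"},
    "proj2": {"A01"},
    "proj3": {"A19"},
    "proj4": {"A01", "A19"},
}

# Proportion of projects using each auditory dimension at least once.
usage = Counter(d for dims in projects.values() for d in dims)
proportions = {d: 100.0 * n / len(projects) for d, n in usage.items()}

print(round(proportions["A01"], 1))  # used in 3 of 4 projects -> 75.0
```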
The same approach can be taken in order to investigate the proportion of projects sonifying given physical dimensions, or using specific associations between categories — both at an intermediate and at a high level.
5.2.2 Historical distribution.
We considered the distribution over time of sonification works from the publication database, according to year of publication. Publications included in the present study should be distinguished from the remaining entries: while the former correspond to practical applications of sonification of physical quantities, the latter are only considered as potentially interesting at this stage, and will be included in future developments of the systematic review provided that they match the criterion for inclusion defined in Section 3.1.1. The historical distribution of database entries — comprising both included and remaining publications — is displayed in Figure 3, together with the distribution of included works alone. The earliest entry in the database is a technical report published in 1946. The distribution of database entries is sparse until the 1980s, then shows a slow growth in the number of publications until the beginning of the 1990s, followed by an irregular but rapid increase since then.
The red curve corresponds to the publications considered for the present systematic review. The black curve corresponds to the works included in the publication database, including those considered for the present systematic review.
The historical component of a mapping could also be studied in the future by monitoring the evolution of its use over time. This could be a way of assessing the degree of success of a mapping.
5.2.3 Project classification.
The sixty projects included in the present systematic review represent a sample of typical sonification works, and can be used to initiate a function-based classification for applications of sonification. Relating the function of a sonification project to its utilization of characteristics of sonification defined in Section 1.2, we defined seven broad categories encompassing these characteristics: monitoring, motion perception (including kinesthesia, training, and rehabilitation), accessibility (including sensory substitution, and mobility aid), data exploration (including data mining), complement to visualization (including sonification of maps), art and aesthetics, and study of psychoacoustics. All projects were classified according to their function as expressed by the researchers. We chose to consider only the primary function of a given project, although secondary functions were also described in many cases. For instance, Project 58 corresponds to an art installation sonifying the trajectory of cosmic particles. It belongs to the category art and aesthetics in our classification, but also represents a kind of motion perception, which was considered as a secondary function and is therefore not reported here. The resulting classification is presented in Figure 4.
Not surprisingly (as this corresponds to one of the criteria for inclusion in the systematic review), most of the projects are associated with categories corresponding to a practical function: data exploration, accessibility, motion perception, and monitoring. Artistic works represent 20% of the projects, which is a relatively large share considering that the artistic nature of sonification is disputed. Only 2 projects out of 60 correspond to studies of psychoacoustics aiming at assessing perceptual effects of sonification mappings. This example of classification is based on a limited sample of projects, employs rather broad categories, and takes into account only the primary function of the projects. More advanced ways of classifying sonification projects could be studied in the future. Other sorts of project classification could also be conducted, e.g. according to the discipline attached to the sonified data.
5.2.4 Sonic material.
The sixty projects of the present systematic review also provide information about the types of sonic material used to implement sonification applications. The choice of sonic material is critical for sonification design, insofar as it can dramatically affect the efficiency of specific mappings, or even of the entire auditory display.
For each project, a detailed description of the sonic material is given in the fourth column of Table 3. Several perspectives can be taken to describe the sonic material, including the level of synthesis, the general category of sound, existing standard protocols, and the software that was used. Three different levels of synthesis were found among the projects: low-level synthesis, high-level synthesis, and sample-based displays. Low-level synthesis corresponds to cases where the auditory display is constituted by a waveform resulting from direct production and processing of a signal (e.g. pure tones, FM synthesis, filtered noise), whereas high-level synthesis corresponds to the use of more advanced pre-existing models (e.g. models for voice synthesis or physical interactions). Sample-based displays are formed by pre-recorded sound files that are played back, optionally with simultaneous processing. Three general categories of sounds were identified: musical sounds, voice or speech synthesis, and environmental sounds. Two standard protocols for information communication were used: MIDI and OSC. Finally, we investigated the use of several common software platforms for sound design and production. In Figure 5 we show the number of projects associated with each category.
Results are presented in groups corresponding to level of synthesis, general category of sound, standard protocols, and software.
The issue of the sonic material used in sonification applications was recently addressed in the review study by Bearman and Brown. Investigating the use of different “synthesis tools”, they found that the most popular software platforms were SuperCollider and PureData.
Conclusions and Perspectives
In this article we conducted a systematic review of sonification of physical quantities. The first step was to build up a database of publications related to practical applications of sonification, currently comprising 739 entries. Several aspects of this database were investigated: we presented the historical distribution of the entries of the database, providing a picture of the field of sonification since 1945. The publication database, constituting a resource for sonification researchers, could be extended in the future to include works dealing with audification, considered as a direct mapping of any physical dimension to instantaneous sound pressure. Theoretical works and projects involving sonification of more abstract (non-physical) data such as price or web traffic flow could also be incorporated. From the publication database, we randomly selected sixty sonification projects for the systematic review, corresponding to a total of 179 scientific publications. These projects constitute a sample of typical sonification works, and were classified according to their primary function. The sonic material used in the projects was analyzed from different perspectives such as the level of synthesis, the nature of the sound, and the software platform used.
We introduced a method for classifying mappings extracted from sonification projects. A list of conceptual dimensions was drawn up for both sonified physical quantities and auditory dimensions used to render auditory displays. These conceptual dimensions were obtained by taking a bottom-up approach; therefore, the list of physical dimensions depends on the domain of sonification mappings, i.e. the nature of the data that was sonified in the selected projects. This list will evolve gradually as additional projects are included in the analysis. On the other hand, the list of auditory dimensions, obtained by the same bottom-up approach, has reached a relatively stable state, due to the fact that the codomain of sonification mappings is always the same, namely the auditory domain. However, sharper focus can be given to specific auditory dimensions of interest through a separation into different scales or classes. We also provided an example of a multi-class dimension (Spatialization, A17), and one of a multi-scale dimension (Duration, A20).
For each project, associations between physical and auditory dimensions were identified, constituting a database of sonification mappings. A total of 495 mapping occurrences were identified. Additional information was attached to mappings in this database whenever their efficiency was assessed (as good or as bad), or if they were mentioned as an interesting future development but not implemented at the time of publication. We found that only a marginal proportion of mapping occurrences have been assessed, highlighting the lack of evaluation in sonification design. An analysis of the frequency of use of mappings was performed at the level of the conceptual dimensions previously described, as well as for high-level categories gathering these dimensions in both physical and auditory domains. This analysis confirmed the following prior hypotheses: Pitch is by far the most used auditory dimension in sonification mappings, and Spatial auditory dimensions are almost exclusively used to sonify Kinematic physical quantities. Results were also found to be consistent with a third hypothesis: the most popular mappings follow the logic of ecological perception. Indeed, the most often used mappings not involving Pitch correspond to natural perceptual associations. Nevertheless, the polarity of the involved mappings should be investigated in order to demonstrate this hypothesis. By normalizing the number of mapping occurrences against the total number of mapping occurrences identified for a given physical dimension, we could determine the most popular mappings independently of the domain of application. Being often used does not demonstrate that these mappings are the most efficient ones, but it suggests investigating them first when developing future guidelines for sonification design, both by examining their polarity and by assessing them, e.g. with the help of psychophysical tests.
6.1 Characterization of sonification via mappings
The concept of mapping is central in the “working definition” of sonification by Scaletti reported in Section 1.1, but did not appear in many of the later definitions. Throughout the reading process conducted in the framework of the present systematic review, we found its role significant when considering the potential inclusion of a given work in the analysis. When designing criteria for inclusion in the publication database prior to the reading process, as described in Section 3.1.1, we already considered the possibility of extracting at least one mapping as a qualifying factor. In fact, it proved to be a necessary and sufficient condition for a publication to be included in the analysis: all works that were included contain at least one description of a sonification mapping from a physical dimension to an auditory dimension, and any work that contains such a mapping was considered a relevant sonification application. Because we were interested in sonification of physical quantities, we identified the domain of mappings with the physical domain. By considering the possibility of extracting at least one such mapping as a necessary and sufficient condition for inclusion, we de facto developed a characterization of sonification of physical quantities.
This way of characterizing a subdomain of sonification can be extended to characterize sonification itself, by carefully considering the domain and the codomain of mappings. The various specificities of the nature of sonification presented in Section 1.1 can be expressed in line with this approach, e.g. by imposing restrictions on the domain and on the codomain. For instance, a part of the definition such as “the use of nonspeech audio” is ambiguous and might be interpreted erroneously as an exclusion of voice and speech synthesis from the sonic material used in the sonification. Using mappings to characterize this aspect amounts to restricting the codomain of mappings by excluding the semantics attached to speech. The purpose of sonification — to communicate information — is embedded in the condition that the mappings have to be a conscious design choice to be taken into account. This characterization process does not enable a distinction between scientific and artistic works, but the need for such a distinction is questionable. Further theoretical considerations are required to build up a robust characterization process, but we believe that an approach centered on mappings could constitute a good basis for a new definition of sonification.
Conceived and designed the experiments: GD RB. Performed the experiments: GD. Analyzed the data: GD. Contributed reagents/materials/analysis tools: GD RB. Wrote the paper: GD RB.
- 1. Reuter LH, Tukey P, Maloney LT, Pani JR, Smith S (1990) Human perception and visualization. In: Proceedings of the 1st conference on Visualization. Los Alamitos, CA, USA: IEEE Computer Society Press, pp. 401–406.
- 2. Kramer G, editor (1994) Auditory display: sonification, audification and auditory interfaces. Santa Fe, NM, USA: Addison Wesley Publishing Company.
- 3. Scaletti C (1994) Auditory display: sonification, audification and auditory interfaces, Addison Wesley Publishing Company, chapter 8: Sound synthesis algorithms for auditory data representations. pp. 223–251.
- 4. Kramer G, Walker BN, Bonebright TL, Cook P, Flowers JH, et al. (1999) Sonification report: status of the field and research agenda. Report prepared for the National Science Foundation by members of the International Community for Auditory Display. Technical report, International Community for Auditory Display (ICAD), Santa Fe, NM, USA.
- 5. Walker BN, Nees MA (2011) The sonification handbook, Logos Publishing House, chapter 2: Theory of sonification. pp. 9–40.
- 6. Hermann T (2008) Taxonomy and definitions for sonification and auditory display. In: Proceedings of the 14th International Conference on Auditory Display (ICAD 2008). Paris, France. CD-ROM.
- 7. Hermann T (2010). Sonification - A definition. URL http://sonification.de/son/definition. Accessed November 8, 2013.
- 8. Vogt K (2010) Sonification of simulations in computational physics. Ph.D. thesis, University of Music and Performing Arts, Graz, Austria.
- 9. Supper A (2012) The search for the ‘killer application’: drawing the boundaries around the sonification of scientific data. In: Pinch T, Bijsterveld KT, editors, The Oxford handbook of sound studies, Oxford University Press, chapter 10. pp. 249–270.
- 10. Varni G, Dubus G, Oksanen S, Volpe G, Fabiani M, et al. (2012) Interactive sonification of synchronisation of motoric behaviour in social active listening to music with mobile devices. Journal on Multimodal User Interfaces 5: 157–173.
- 11. Hermann T, Hunt A, Neuhoff JG (2011) The sonification handbook, Logos Publishing House, chapter 1: Introduction. pp. 1–6.
- 12. Gibet S (2010) Musical gestures, Routledge, chapter 9: Sensorimotor control of sound-producing gestures. pp. 212–237.
- 13. Kleiman-Weiner M, Berger J (2006) The sound of one arm swinging: a model for multidimensional auditory display of physical motion. In: Proceedings of the 12th International Conference on Auditory Display (ICAD 2006). London, UK, pp. 278-280.
- 14. Godbout A, Boyd JE (2010) Corrective sonic feedback for speed skating: a case study. In: Proceedings of the 16th International Conference on Auditory Display (ICAD 2010). Washington, DC, USA, pp. 23–30.
- 15. Hermann T, Ungerechts B, Toussaint H, Grote M (2012) Sonification of pressure changes in swimming for analysis and optimization. In: Proceedings of the 18th International Conference on Auditory Display (ICAD 2012). Atlanta, GA, USA, pp. 60–67.
- 16. Murgia M, Sors F, Vono R, Muroni AF, Delitala L, et al. (2012) Using auditory stimulation to enhance athletes' strength: an experimental study in weightlifting. Review of Psychology 19: 13–16.
- 17. Fröhlich B, Barrass S, Zehner B, Plate J, Göbel M (1999) Exploring geo-scientific data in virtual environments. In: Proceedings of the 10th IEEE Visualization conference (VIS '99). San Francisco, CA, USA, pp. 169–173.
- 18. Noirhomme-Fraiture M, Scholler O, Demoulin C, Simoff SJ (2008) Complementing visual data mining with the sound dimension: sonification of time dependent data. In: Simoff SJ, Bohlen MH, Mazeika A, editors, Visual Data Mining, Springer Berlin Heidelberg, volume 4404 of Lecture Notes in Computer Science. pp. 236–247.
- 19. Kay L (1974) A sonar aid to enhance spatial perception of the blind: engineering design and evaluation. The Radio and Electronic Engineer 44: 605–627.
- 20. Zhao H, Plaisant C, Shneiderman B, Lazar J (2008) Data sonification for users with visual impairment: a case study with georeferenced data. ACM Transactions on Computer-Human Interaction 15: 4:1–4:28.
- 21. Dombois F, Eckel G (2011) The sonification handbook, Logos Publishing House, chapter 12: Audification. pp. 301–324.
- 22. Brazil E, Fernström M (2011) The sonification handbook, Logos Publishing House, chapter 13: Auditory icons. pp. 325–338.
- 23. McGookin D, Brewster S (2011) The sonification handbook, Logos Publishing House, chapter 14: Earcons. pp. 339–362.
- 24. Grond F, Berger J (2011) The sonification handbook, Logos Publishing House, chapter 15: Parameter mapping sonification. pp. 363–397.
- 25. Hermann T (2011) The sonification handbook, Logos Publishing House, chapter 16: Model based sonification. pp. 399–428.
- 26. Hermann T, Ritter HJ (1999) Listen to your data: model-based sonification for data analysis. In: Proceedings of the International Symposium on Intelligent Multimedia And Distance Education (ISIMADE '99). Baden-Baden, Germany, pp. 189–194.
- 27. Gaver WW (1986) Auditory icons: using sound in computer interfaces. Human-Computer Interaction 2: 167–177.
- 28. Gaver WW (1993) What in the world do we hear? An ecological approach to auditory source perception. Ecological Psychology 5: 1–29.
- 29. Worrall D (2009) An overview of sonification. Chapter 2 in: Sonification and information: concepts, instruments and techniques. Ph.D. dissertation, University of Canberra, Canberra, Australia.
- 30. Humphrey JW, Oleson JP, Sherwood AN (1998) Greek and Roman technology: a sourcebook. Routledge, 522 pp.
- 31. Gaizauskas BR (1974) The harmony of the spheres. Journal of the Royal Astronomical Society of Canada 68: 146–151.
- 32. Kovaric AF (1917) New methods for counting the alpha and the beta particles. Physical Review 9: 567–568.
- 33. d'Arsonval JA (1878) Téléphone employé comme galvanoscope. Comptes rendus hebdomadaires des séances de l'Académie des Sciences 86: 832–833.
- 34. Hughes DE (1881) Molecular magnetism. Proceedings of the Royal Society of London 32: 213–225.
- 35. Pollack I, Ficks L (1954) Information of elementary multidimensional auditory display. Journal of the Acoustical Society of America 26: 155–158.
- 36. Speeth SD (1961) Seismometer sounds. Journal of the Acoustical Society of America 33: 909–916.
- 37. Yeung ES (1980) Pattern recognition by audio representation of multivariate analytical data. Analytical Chemistry 52: 1120–1123.
- 38. Walker BN, Lane DM (2001) Sonification mappings database on the web. In: Proceedings of the 7th International Conference on Auditory Display (ICAD 2001). Espoo, Finland, p. 281.
- 39. Kay L (1984) Electronic aids for blind persons: an interdisciplinary subject. Physical Science, Measurement and Instrumentation, Management and Education - Reviews, IEE Proceedings A 131: 559–576.
- 40. García Ruiz MA, Gutiérrez Pulido JR (2006) An overview of auditory display to assist comprehension of molecular information. Interacting with Computers 18: 853–868.
- 41. Huang H, Wolf SL, He J (2006) Recent developments in biofeedback for neuromotor rehabilitation. Journal of NeuroEngineering and Rehabilitation 3: 11:1–11:12.
- 42. Vogt K (2011) A quantitative evaluation approach to sonifications. In: Proceedings of the 17th International Conference on Auditory Display (ICAD 2011). Budapest, Hungary. CD-ROM.
- 43. Bearman NE, Brown E (2012) Who's sonifying data and how are they doing it? A comparison of ICAD and other venues since 2009. In: Proceedings of the 18th International Conference on Auditory Display (ICAD 2012). Atlanta, GA, USA, pp. 231–232.
- 44. Sarkar R, Bakshi S, Sa PK (2012) Review on image sonification: a non-visual scene representation. In: Proceedings of the 1st International Conference on Recent Advances in Information Technology (RAIT 2012). Dhanbad, India, pp. 86–90.
- 45. Joy J (2012) What NMSAT says about sonification. AI & Society 27: 233–244.
- 46. Grond F, Hermann T (2012) Aesthetic strategies in sonification. AI & Society 27: 213–222.
- 47. Sigrist R, Rauter G, Riener R, Wolf P (2013) Augmented visual, auditory, haptic, and multimodal feedback in motor learning: a review. Psychonomic Bulletin & Review 20: 21–53.
- 48. Hermann T, Hunt A, Neuhoff JG, editors (2011) The sonification handbook. Berlin, Germany: Logos Publishing House.
- 49. Walker BN (2000) Magnitude estimation of conceptual data dimensions for use in sonifications. Ph.D. thesis, Rice University, Houston, TX, USA.
- 50. Walker BN, Kramer G, Lane DM (2000) Psychophysical scaling of sonification mappings. In: Proceedings of the 6th International Conference on Auditory Display (ICAD 2000). Atlanta, GA, USA, pp. 99–104.
- 51. Delle Monache S, Polotti P, Rocchesso D (2010) A toolkit for explorations in sonic interaction design. In: Proceedings of the 5th Audio Mostly conference (AM '10). Piteå, Sweden, pp. 1–7.
- 52. Springer Link. URL http://link.springer.com. Accessed November 8, 2013.
- 53. IEEE Xplore. URL http://ieeexplore.ieee.org. Accessed November 8, 2013.
- 54. Science Direct. URL http://www.sciencedirect.com. Accessed November 8, 2013.
- 55. PubMed. URL http://www.ncbi.nlm.nih.gov/pubmed. Accessed November 8, 2013.
- 56. The ACM Digital Library. URL http://dl.acm.org. Accessed November 8, 2013.
- 57. Acoustical Society of America. URL http://scitation.aip.org/content/asa. Accessed November 8, 2013.
- 58. ingentaconnect. URL http://www.ingentaconnect.com. Accessed November 8, 2013.
- 59. International Community for Auditory Display. URL http://icad.org. Accessed November 8, 2013.
- 60. Interactive Sonification. URL http://interactive-sonification.org. Accessed November 8, 2013.
- 61. Special Interest Group on Computer-Human Interaction. URL http://www.sigchi.org/conferences. Accessed November 8, 2013.
- 62. Sound and Music Computing. URL http://smcnetwork.org. Accessed November 8, 2013.
- 63. New Interfaces for Musical Expression. URL http://www.nime.org. Accessed November 8, 2013.
- 64. Audio Mostly. URL http://www.audiomostly.com. Accessed November 8, 2013.
- 65. Google Scholar. URL http://scholar.google.com. Accessed November 8, 2013.
- 66. Mansur DL, Blattner MM, Joy KI (1985) Sound graphs: a numerical data analysis method for the blind. Journal of Medical Systems 9: 163–174.
- 67. Walker BN, Lindsay J, Nance A, Nakano Y, Palladino DK, et al. (2013) Spearcons (speech-based earcons) improve navigation performance in advanced auditory menus. Human Factors 55: 157–182.
- 68. Sturm BL (2000) Sonification of particle systems via de Broglie's hypothesis. In: Proceedings of the 6th International Conference on Auditory Display (ICAD 2000). Atlanta, GA, USA, pp. 87–92.
- 69. Hermann T, Ritter HJ (2004) Sound and meaning in auditory data display. Proceedings of the IEEE 92: 730–741.
- 70. Kokogawa T, Maeda Y, Ajiki T, Itou J, Munemori J (2012) The effect to quality of creativity with sampling partial data from a large number of idea cards. In: Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work Companion (CSCW '12). Seattle, WA, USA, pp. 147–150.
- 71. Arfib D, Couturier JM, Kessous L, Verfaille V (2002) Strategies of mapping between gesture data and synthesis model parameters using perceptual spaces. Organised sound 7: 135–152.
- 72. Stevens SS, Volkmann J (1940) The relation of pitch to frequency: a revised scale. The American Journal of Psychology 53: 329–353.
- 73. Grond F, Dall'Antonia F (2008) SUMO - A sonification utility for molecules. In: Proceedings of the 14th International Conference on Auditory Display (ICAD 2008). Paris, France. CD-ROM.
- 74. Munemori J, Nagasawa Y (1991) Development and trial of groupware for organizational design and management: distributed and cooperative KJ method support system. Information and Software Technology 33: 259–264.
- 75. Garrison DR, Cleveland-Innes M, Koole M, Kappelman J (2006) Revisiting methodological issues in transcript analysis: negotiated coding and reliability. The Internet and Higher Education 9: 1–8.
- 76. URL http://www.mendeley.com/groups/3612491/sonification/papers. Accessed November 8, 2013.
- 77. Sethares WA (2007) Rhythm and transforms, Springer London, chapter 1.2: Perception and time scale. pp. 6–9.
- 78. Saue S (2000) A model for interaction in exploratory sonification displays. In: Proceedings of the 6th International Conference on Auditory Display (ICAD 2000). Atlanta, GA, USA, pp. 105–110.
- 79. Hermann T, Hunt A (2005) An introduction to interactive sonification. IEEE MultiMedia 12: 20–24.
- 80. Dubus G, Bresin R (2011) Sonification of physical quantities throughout history: a meta-study of previous mapping strategies. In: Proceedings of the 17th International Conference on Auditory Display (ICAD 2011). Budapest, Hungary. CD-ROM.
- 81. Henkelmann C (2007) Improving the aesthetic quality of realtime motion data sonifications. Technical Report CG-2007-4, Universität Bonn.
- 82. Rasch R, Plomp R (1999) The psychology of music, Academic Press, chapter 4: The perception of musical tones. Series in Cognition and Perception. 2nd edition, pp. 89–112.
- 83. Patel AD (2008) Music, language, and the brain, Oxford University Press, chapter 2: Sound elements: pitch and timbre. pp. 7–93.
- 84. Vickers P, Hogg B (2006) Sonification abstraite/Sonification concrète: An 'æsthetic perspective space' for classifying auditory displays in the ars musica domain. In: Proceedings of the 12th International Conference on Auditory Display (ICAD 2006). London, UK, pp. 210–216.
- 85. Barrass S, Vickers P (2011) The sonification handbook, Logos Publishing House, chapter 7: Sonification design and aesthetics. pp. 145–172.
- 86. Walker BN, Godfrey MT, Orlosky JE, Bruce CM, Sanford J (2006) Aquarium sonification: soundscapes for accessible dynamic informal learning environments. In: Proceedings of the 12th International Conference on Auditory Display (ICAD 2006). London, UK, pp. 238–239.
- 87. Walker BN, Kim J, Pendse A (2007) Musical soundscapes for an accessible aquarium: bringing dynamic exhibits to the visually impaired. In: Proceedings of the International Computer Music Conference (ICMC 2007). Copenhagen, Denmark, pp. 268–275.
- 88. Pendse A, Pate M, Walker BN (2008) The accessible aquarium: identifying and evaluating salient creature features for sonification. In: Proceedings of the 10th International ACM SIGACCESS Conference on Computers and accessibility (ASSETS 2008). Halifax, Canada, pp. 297–298.
- 89. Jeon M, Winton RJ, Yim JB, Bruce CM, Walker BN (2012) Aquarium fugue: interactive sonification for children and visually impaired audience in informal learning environments. In: Proceedings of the 18th International Conference on Auditory Display (ICAD 2012). Atlanta, GA, USA, pp. 246–247.
- 90. URL http://cycling74.com. Accessed November 8, 2013.
- 91. Saue S, Fjeld OK (1997) A platform for audiovisual seismic interpretation. In: Proceedings of the 4th International Conference on Auditory Display (ICAD 1997). Palo Alto, CA, USA, pp. 47–56.
- 92. Cabrera D, Ferguson S, Maria R (2006) Using sonification for teaching acoustics and audio. In: Proceedings of ACOUSTICS 2006. Christchurch, New Zealand, pp. 383–390.
- 93. Bologna G, Vinckenbosch M (2005) Eye tracking in coloured image scenes represented by Ambisonic fields of musical instrument sounds. In: Proceedings of the 1st International Work-Conference on the Interplay Between Natural and Artificial Computation (IWINAC 2005). Las Palmas de Gran Canaria, Spain, pp. 327–337.
- 94. Bologna G, Deville B, Pun T, Vinckenbosch M (2007) Transforming 3D coloured pixels into musical instrument notes for vision substitution applications. EURASIP Journal on Image and Video Processing 2007: 76204:1–76204:14.
- 95. Bologna G, Deville B, Pun T, Vinckenbosch M (2007) Identifying major components of picture by audio encoding of colours. In: Proceedings of the 2nd International Work-Conference on the Interplay Between Natural and Artificial Computation (IWINAC 2007). La Manga del Mar Menor, Spain, pp. 81–89.
- 96. Bologna G, Deville B, Vinckenbosch M, Pun T (2008) A perceptual interface for vision substitution in a color matching experiment. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN 2008), part of the IEEE World Congress on Computational Intelligence (WCCI 2008). Hong Kong, China, pp. 1621–1628.
- 97. Bologna G, Deville B, Pun T (2008) Pairing colored socks and following a red serpentine with sounds of musical instruments. In: Proceedings of the 14th International Conference on Auditory Display (ICAD 2008). Paris, France. CD-ROM.
- 98. Bologna G, Deville B, Pun T (2009) On the use of the auditory pathway to represent image scenes in real-time. Neurocomputing 72: 839–849.
- 99. Bologna G, Deville B, Pun T (2009) Blind navigation along a sinuous path by means of the See ColOr interface. In: Proceedings of the 3rd International Work-Conference on the Interplay Between Natural and Artificial Computation (IWINAC 2009). Santiago de Compostela, Spain, pp. 235–243.
- 100. Deville B, Bologna G, Vinckenbosch M, Pun T (2009) See ColOr: seeing colours with an orchestra. In: Lalanne D, Kohlas J, editors, Human Machine Interaction, Springer Berlin Heidelberg, Lecture Notes in Computer Science. pp. 251–279.
- 101. Bologna G, Deville B, Pun T (2010) Sonification of color and depth in a mobility aid for blind people. In: Proceedings of the 16th International Conference on Auditory Display (ICAD 2010). Washington, DC, USA, pp. 9–13.
- 102. Gomez JD, Bologna G, Pun T (2010) Color-audio encoding interface for visual substitution: See ColOr Matlab-based demo. In: Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2010). Orlando, FL, USA, pp. 245–246.
- 103. Bologna G, Deville B, Gomez JD, Pun T (2011) Toward local and global perception modules for vision substitution. Neurocomputing 74: 1182–1190.
- 104. Gomez JD, Bologna G, Deville B, Pun T (2011) Multisource sonification for visual substitution in an auditory memory game: one, or two fingers? In: Proceedings of the 17th International Conference on Auditory Display (ICAD 2011). Budapest, Hungary. CD-ROM.
- 105. Effenberg AO, Mechling H (1999) Akustisch-rhythmische Informationen und Bewegungskontrolle - Von der rhythmischen Begleitung zur Sonification. Motorik 22: 150–160.
- 106. Effenberg AO (2001) Multimodal convergent information enhances perception accuracy of human movement patterns. In: Proceedings of the 6th Annual Congress of the European College of Sport Science. Cologne, Germany, p. 122.
- 107. Effenberg AO, Mechling H (2003) Multimodal convergent information enhances reproduction accuracy of sport movements. In: Proceedings of the 8th Annual Congress of the European College of Sport Science. Salzburg, Austria, p. 196.
- 108. Effenberg AO (2005) Movement sonification: effects on perception and action. IEEE MultiMedia 12: 53–59.
- 109. Effenberg AO (2007) Movement sonification: motion perception, behavioral effects and functional data. In: Proceedings of the 2nd International Workshop on Interactive Sonification (ISon 2007). York, UK. Online.
- 110. Scheef L, Boecker H, Daamen M, Fehse U, Landsberg MW, et al. (2009) Multimodal motion processing in area V5/MT: evidence from an artificial class of audio-visual events. Brain Research 1252: 94–104.
- 111. Jovanov E, Wegner K, Radivojević V, Starčević D, Quinn MS, et al. (1999) Tactical audio and acoustic rendering in biomedical applications. IEEE Transactions on Information Technology in Biomedicine 3: 109–118.
- 112. Jovanov E, Starčević D, Marsh A, Obrenović Ž, Radivojević V, et al. (1999) Multi modal presentation in virtual telemedical environments. In: Sloot P, Bubak M, Hoekstra A, Hertzberger B, editors, High-Performance Computing and Networking, Springer Berlin Heidelberg, volume 1593 of Lecture Notes in Computer Science. pp. 964–972.
- 113. Jovanov E, Starčević D, Samardžić A, Marsh A, Obrenović Ž (1999) EEG analysis in a telemedical virtual world. Future Generation Computer Systems 15: 255–263.
- 114. Jovanov E, Starčević D, Radivojević V, Samardžić A, Simeunović V (1999) Perceptualization of biomedical data. An experimental environment for visualization and sonification of brain electrical activity. IEEE Engineering in Medicine and Biology Magazine 18: 50–55.
- 115. Thompson J, Kuchera-Morin J, Novak M, Overholt D, Putnam L, et al. (2009) The Allobrain: an interactive, stereographic, 3D audio, immersive virtual world. International Journal of Human- Computer Studies 67: 934–946.
- 116. Nasir T (2009) Geo-sonf: spatial sonification of contour maps. In: IEEE International Workshop on Haptic Audio visual Environments and Games (HAVE 2009). Lecco, Italy, pp. 141–146.
- 117. Schaffert N, Gehret R, Effenberg AO, Mattes K (2008) The sonified boat motion as the characteristic rhythm of several stroke rate steps. In: Book of abstracts of the 8th World Congress of Performance Analysis of Sport (WCPAS VIII). Magdeburg, Germany, p. 210.
- 118. Schaffert N, Mattes K, Effenberg AO (2009) A sound design for the purposes of movement optimisation in elite sport (using the example of rowing). In: Proceedings of the 15th International Conference on Auditory Display (ICAD 2009). Copenhagen, Denmark, pp. 72–75.
- 119. Hermann T, Rath M, Barrass S, Murray-Smith R, Williamson J, et al. (2009) WG4 report of the Berlin sonification workshop. Technical report, COST-SID, Berlin, Germany.
- 120. Schaffert N, Mattes K, Barrass S, Effenberg AO (2009) Exploring function and aesthetics in sonifications for elite sports. In: Proceedings of the 2nd International Conference on Music Science. Sydney, Australia, pp. 83–86.
- 121. Schaffert N, Mattes K, Effenberg AO (2010) A sound design for acoustic feedback in elite sports. In: Ystad S, Aramaki M, Kronland-Martinet R, Jensen K, editors, Auditory Display, Springer Berlin Heidelberg, volume 5954 of Lecture Notes in Computer Science. pp. 143–165.
- 122. Schaffert N, Mattes K, Effenberg AO (2010) Listen to the boat motion: acoustic information for elite rowers. In: Proceedings of the 3rd International Workshop on Interactive Sonification (ISon 2010). Stockholm, Sweden, pp. 31–38.
- 123. Barrass S, Schaffert N, Barrass T (2010) Probing preferences between six designs of interactive sonifications for recreational sports, health and fitness. In: Proceedings of the 3rd International Workshop on Interactive Sonification (ISon 2010). Stockholm, Sweden, pp. 23–30.
- 124. Schaffert N, Mattes K, Effenberg AO (2011) An investigation of online acoustic information for elite rowers in on-water training conditions. Journal of Human Sport and Exercise 6: 392–405.
- 125. Schaffert N, Mattes K, Effenberg AO (2011) The sound of rowing stroke cycles as acoustic feedback. In: Proceedings of the 17th International Conference on Auditory Display (ICAD 2011). Budapest, Hungary. CD-ROM.
- 126. Schaffert N, Mattes K, Effenberg AO (2011) Examining effects of acoustic feedback on perception and modification of movement patterns in on-water rowing training. In: Proceedings of the 6th Audio Mostly conference (AM '11). Coimbra, Portugal, pp. 122–129.
- 127. Schaffert N, Gehret R, Mattes K (2012) Modeling the rowing stroke cycle acoustically. Journal of the Audio Engineering Society 60: 551–560.
- 128. Schaffert N, Mattes K (2012) Acoustic feedback training in adaptive rowing. In: Proceedings of the 18th International Conference on Auditory Display (ICAD 2012). Atlanta, GA, USA, pp. 83–88.
- 129. URL http://puredata.info. Accessed November 8, 2013.
- 130. Childs E, Pulkki V (2003) Using multi-channel spatialization in sonification: a case study with meteorological data. In: Proceedings of the 9th International Conference on Auditory Display (ICAD 2003). Boston, MA, USA, pp. 192–195.
- 131. Harding C, Kakadiaris IA, Loftin RB (2000) A multimodal user interface for geoscientific data investigation. In: Tan T, Shi Y, Gao W, editors, Advances in Multimodal Interfaces - ICMI 2000, Springer Berlin Heidelberg, volume 1948 of Lecture Notes in Computer Science. pp. 615–623.
- 132. Harding C, Kakadiaris IA, Casey JF, Loftin RB (2002) A multi-sensory system for the investigation of geoscientific data. Computers & Graphics 26: 259–269.
- 133. Barrass S, Zehner B (2000) Responsive sonification of well-logs. In: Proceedings of the 6th International Conference on Auditory Display (ICAD 2000). Atlanta, GA, USA, pp. 72–80.
- 134. Beilharz K (2004) (Criteria & aesthetics for) Mapping social behaviour to real time generative structures for ambient auditory display (interactive sonification). In: INTERACTION - Systems, Practice and Theory: A Creativity & Cognition Symposium. Sydney, Australia, pp. 75–102.
- 135. Beilharz K (2005) Gesture-controlled interaction with aesthetic information sonification. In: Proceedings of the 2nd Australasian Conference on Interactive Entertainment (IE 2005). Sydney, Australia, pp. 11–18.
- 136. Beilharz K (2005) Wireless gesture controllers to affect information sonification. In: Proceedings of the 11th International Conference on Auditory Display (ICAD 2005). Limerick, Ireland, pp. 105–112.
- 137. Beilharz K (2005) Responsive sensate environments: past and future directions. In: Martens B, Brown A, editors, Computer Aided Architectural Design Futures, Springer Netherlands. pp. 361–370.
- 138. Martins ACG, Rangayyan RM, Portela LA, Junior EA, Ruschioni RA (1996) Auditory display and sonification of textured images. In: Proceedings of the 3rd International Conference on Auditory Display (ICAD 1996). Palo Alto, CA, USA, pp. 9–11.
- 139. Rangayyan RM, Martins ACG, Ruschioni RA (1996) Aural analysis of image texture via cepstral filtering and sonification. In: Proceedings of the SPIE conference on Visual Data Exploration and Analysis III. San Jose, CA, USA, volume 2656, pp. 283–294.
- 140. Martins ACG, Rangayyan RM (1997) Experimental evaluation of auditory display and sonification of textured images. In: Proceedings of the 4th International Conference on Auditory Display (ICAD 1997). Palo Alto, CA, USA, pp. 129–134.
- 141. Martins ACG, Rangayyan RM, Ruschioni RA (2001) Audification and sonification of texture in images. Journal of Electronic Imaging 10: 690–705.
- 142. Walker BN, Kramer G (1996) Mappings and metaphors in auditory displays: an experimental assessment. In: Proceedings of the 3rd International Conference on Auditory Display (ICAD 1996). Palo Alto, CA, USA, pp. 71–74.
- 143. Walker BN, Lane DM (2001) Psychophysical scaling of sonification mappings: a comparison of visually impaired and sighted listeners. In: Proceedings of the 7th International Conference on Auditory Display (ICAD 2001). Espoo, Finland, pp. 90–94.
- 144. Walker BN (2002) Magnitude estimation of conceptual data dimensions for use in sonification. Journal of Experimental Psychology: Applied 8: 211–221.
- 145. Walker BN, Mauney LM (2004) Individual differences, cognitive abilities, and the interpretation of auditory graphs. In: Proceedings of the 10th International Conference on Auditory Display (ICAD 2004). Sydney, Australia. CD-ROM.
- 146. Walker BN, Kramer G (2005) Mappings and metaphors in auditory displays: an experimental assessment. ACM Transactions on Applied Perception 2: 407–412.
- 147. Walker BN (2007) Consistency of magnitude estimations with conceptual data dimensions used for sonification. Applied Cognitive Psychology 21: 579–599.
- 148. Walker BN, Mauney LM (2010) Universal design of auditory graphs: a comparison of sonification mappings for visually impaired and sighted listeners. ACM Transactions on Accessible Computing 2: 12:1–12:16.
- 149. Bearman NE, Lovett A (2010) Using sound to represent positional accuracy of address locations. The Cartographic Journal 47: 308–314.
- 150. Bearman NE (2011) Using sound to represent uncertainty in future climate projections for the United Kingdom. In: Proceedings of the 17th International Conference on Auditory Display (ICAD 2011). Budapest, Hungary. CD-ROM.
- 151. Bearman NE, Fisher PF (2012) Using sound to represent spatial data in ArcGIS. Computers & Geosciences 46: 157–163.
- 152. Brown E, Bearman NE (2012) Listening to uncertainty - Information that sings. Significance 9: 14–17.
- 153. Eslambolchilar P, Crossan A, Murray-Smith R (2004) Model-based target sonification on mobile devices. In: Proceedings of the 1st International Workshop on Interactive Sonification (ISon 2004). Bielefeld, Germany. Online.
- 154. Eriksson M, Bresin R (2010) Improving running mechanics by use of interactive sonification. In: Proceedings of the 3rd International Workshop on Interactive Sonification (ISon 2010). Stockholm, Sweden, pp. 95–98.
- 155. Eriksson M, Halvorsen KA, Gullstrand L (2011) Immediate effect of visual and auditory feedback to control the running mechanics of well-trained athletes. Journal of Sports Sciences 29: 253–262.
- 156. Hermann T, Meinicke P, Bekel H, Ritter HJ, Müller HM, et al. (2002) Sonifications for EEG data analysis. In: Proceedings of the 8th International Conference on Auditory Display (ICAD 2002). Kyoto, Japan, pp. 37–41.
- 157. Meinicke P, Hermann T, Bekel H, Müller HM, Weiss S, et al. (2004) Identification of discriminative features in the EEG. Intelligent Data Analysis 8: 97–107.
- 158. Hermann T, Baier G, Müller M (2004) Polyrhythm in the human brain. In: Proceedings of the 10th International Conference on Auditory Display (ICAD 2004). Sydney, Australia. CD-ROM.
- 159. Baier G, Hermann T (2004) The sonification of rhythms in human electroencephalogram. In: Proceedings of the 10th International Conference on Auditory Display (ICAD 2004). Sydney, Australia. CD-ROM.
- 160. Baier G, Hermann T, Sahle S, Stephani U (2006) Sonified epileptic rhythms. In: Proceedings of the 12th International Conference on Auditory Display (ICAD 2006). London, UK, pp. 148–151.
- 161. Baier G, Hermann T, Stephani U (2007) Event-based sonification of EEG rhythms in real time. Clinical Neurophysiology 118: 1377–1386.
- 162. Baier G, Hermann T, Stephani U (2007) Multi-channel sonification of human EEG. In: Proceedings of the 13th International Conference on Auditory Display (ICAD 2007). Montréal, Canada, pp. 491–496.
- 163. Hermann T, Baier G (2008) Die Sonifikation des menschlichen EEG. In: Polzer BO, editor, Katalog: Wien Modern 2008, Vienna, Austria: Verein Wien modern. pp. 25–27.
- 164. Hermann T, Baier G (2010) Sonic triptychon of the human brain. In: Proceedings of the 16th International Conference on Auditory Display (ICAD 2010). Washington, DC, USA, pp. 301–303.
- 165. Hermann T, Baier G, Stephani U, Ritter HJ (2006) Vocal sonification of pathologic EEG features. In: Proceedings of the 12th International Conference on Auditory Display (ICAD 2006). London, UK, pp. 158–163.
- 166. Hermann T, Baier G, Stephani U, Ritter HJ (2008) Kernel regression mapping for vocal EEG sonification. In: Proceedings of the 14th International Conference on Auditory Display (ICAD 2008). Paris, France. CD-ROM.
- 167. Hinterberger T, Mellinger J, Birbaumer N (2003) The Thought Translation Device: structure of a multimodal brain-computer communication system. In: Proceedings of the 1st International IEEE EMBS Conference on Neural Engineering. Capri Island, Italy, pp. 603–606.
- 168. Hinterberger T, Neumann N, Pham M, Kübler A, Grether A, et al. (2004) A multimodal brain-based feedback and communication system. Experimental Brain Research 154: 521–526.
- 169. Hinterberger T, Baier G, Mellinger J, Birbaumer N (2004) Auditory feedback of human EEG for direct brain-computer communication. In: Proceedings of the 10th International Conference on Auditory Display (ICAD 2004). Sydney, Australia, pp. 158–163.
- 170. Hinterberger T, Baier G (2005) Parametric orchestral sonification of EEG in real time. IEEE MultiMedia 12: 70–79.
- 171. Hinterberger T (2007) Orchestral sonification of brain signals and its application to brain computer interfaces and performing arts. In: Proceedings of the 2nd International Workshop on Interactive Sonification (ISon 2007). York, UK. Online.
- 172. MacVeigh R, Jacobson RD (2007) Increasing the dimensionality of a Geographic Information System (GIS) using auditory display. In: Proceedings of the 13th International Conference on Auditory Display (ICAD 2007). Montréal, Canada, pp. 530–535.
- 173. Alexander RL, Zurbuchen TH, Gilbert J, Lepri S, Raines J (2010) Sonification of ACE level 2 solar wind data. In: Proceedings of the 16th International Conference on Auditory Display (ICAD 2010). Washington, DC, USA, pp. 39–42.
- 174. Milios EE, Kapralos B, Stergiopoulos S (1999) Sonification of range information for 3-D space perception. Journal of the Acoustical Society of America 105: 980.
- 175. Milios EE, Kapralos B, Kopinska A, Stergiopoulos S (2003) Sonification of range information for 3-D space perception. IEEE Transactions on Neural Systems and Rehabilitation Engineering 11: 416–421.
- 176. Palomaki H (2006) Meanings conveyed by simple auditory rhythms. In: Proceedings of the 12th International Conference on Auditory Display (ICAD 2006). London, UK, pp. 99–104.
- 177. Pirhonen A (2007) Semantics of sounds and images - Can they be paralleled? In: Proceedings of the 13th International Conference on Auditory Display (ICAD 2007). Montréal, Canada, pp. 319–325.
- 178. Pirhonen A, Palomaki H (2008) Sonification of directional and emotional content: description of design challenges. In: Proceedings of the 14th International Conference on Auditory Display (ICAD 2008). Paris, France. CD-ROM.
- 179. Watson MO, Sanderson PM (1998) Work domain analysis for the evaluation of human interaction with anaesthesia alarm systems. In: Proceedings of the 8th Australasian Conference on Computer-Human Interaction (OzCHI '98). Adelaide, Australia, pp. 228–235.
- 180. Watson MO, Russell WJ, Sanderson PM (2000) Ecological interface design for anaesthesia monitoring. Australasian Journal of Information Systems 7: 109–114.
- 181. Watson MO, Sanderson PM, Anderson J (2000) Designing auditory displays for team environments. In: Proceedings of the 5th Australian Aviation Psychology Symposium (AAvPA 2000). Manly, Australia. CD-ROM.
- 182. Watson MO, Sanderson PM (2001) Intelligibility of sonifications for respiratory monitoring in anaesthesia. In: Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting. Minneapolis, MN, USA, pp. 1293–1297.
- 183. Crawford J, Watson MO, Burmeister O, Sanderson PM (2002) Multimodal displays for anaesthesia sonification: timesharing, workload, and expertise. In: Proceedings of the joint ESA/CHISIG Conference on Human Factors (HF 2002). Melbourne, Australia. CD-ROM.
- 184. Crawford J, Savill A, Sanderson PM (2003) Monitoring the anesthetized patient: an analysis of confusions in vital sign reports. In: Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting. Denver, CO, USA, pp. 1574–1578.
- 185. Sanderson PM (2003) Exploring auditory displays to support anaesthesia monitoring: six questions from a research program. In: Proceedings of the 39th Annual Conference of the Ergonomics Society of Australia (ESA 2003). St Lucia, Australia, pp. 48–53.
- 186. Watson MO, Sanderson PM, Woodall J, Russell WJ (2003) Operating theatre patient monitoring: the effects of self paced distracter tasks and experimental control on sonification evaluations. In: Proceedings of the 2003 Annual Conference of the Computer-Human Interaction Special Interest Group of the Ergonomics Society of Australia (OzCHI 2003). St Lucia, Australia, pp. 128–137.
- 187. Watson MO, Sanderson PM (2004) Sonification supports eyes-free respiratory monitoring and task time-sharing. Human Factors 46: 497–517.
- 188. Watson MO, Sanderson PM, Russell WJ (2004) Tailoring reveals information requirements: the case of anaesthesia alarms. Interacting with Computers 16: 271–293.
- 189. Sanderson PM, Crawford J, Savill A, Watson MO, Russell WJ (2004) Visual and auditory attention in patient monitoring: a formative analysis. Cognition, Technology & Work 6: 172–185.
- 190. Watson MO, Gill T (2004) Earcon for intermittent information in monitoring environments. In: Proceedings of the 2004 Annual Conference of the Computer-Human Interaction Special Interest Group of the Human Factors and Ergonomics Society of Australia (OzCHI 2004). Wollongong, Australia. CD-ROM.
- 191. Sanderson PM, Shek V, Watson MO (2004) The effect of music on monitoring a simulated anaesthetised patient with sonification. In: Proceedings of the 2004 Annual Conference of the Computer-Human Interaction Special Interest Group of the Human Factors and Ergonomics Society of Australia (OzCHI 2004). Wollongong, Australia. CD-ROM.
- 192. Sanderson PM, Tosh N, Philp S, Rudie J, Watson MO, et al. (2005) The effects of ambient music on simulated anaesthesia monitoring. Anaesthesia 60: 1073–1078.
- 193. Watson MO (2006) Scalable earcons: bridging the gap between intermittent and continuous auditory displays. In: Proceedings of the 12th International Conference on Auditory Display (ICAD 2006). London, UK, pp. 59–62.
- 194. Sanderson PM, Watson MO (2006) Method and means of physiological monitoring using sonifications. US Patent 7070570.
- 195. Watson MO (2006) Method and apparatus for physiological monitoring. WO Patent 2006/079148.
- 196. Watson MO, Sanderson PM (2007) Designing for attention with sound: challenges and extensions to ecological interface design. Human Factors 49: 331–346.
- 197. Sanderson PM, Watson MO, Russell WJ, Jenkins S, Liu D, et al. (2008) Advanced auditory displays and head-mounted displays: advantages and disadvantages for monitoring by the distracted anesthesiologist. Technology, Computing and Simulation 106: 1787–1797.
- 198. Martini J, Hermann T, Anselmetti D, Ritter HJ (2004) Interactive sonification for exploring single molecule properties with AFM based force spectroscopy. In: Proceedings of the 1st International Workshop on Interactive Sonification (ISon 2004). Bielefeld, Germany. Online.
- 199. Dozza M, Chiari L, Chan B, Rocchi L, Horak FB, et al. (2005) Influence of a portable audio-biofeedback device on structural properties of postural sway. Journal of NeuroEngineering and Rehabilitation 2: 13:1–13:12.
- 200. Dozza M, Chiari L, Horak FB (2005) Audio-biofeedback improves balance in patients with bilateral vestibular loss. Archives of Physical Medicine and Rehabilitation 86: 1401–1403.
- 201. Chiari L, Dozza M, Cappello A, Horak FB, Macellari V, et al. (2005) Audio-biofeedback for balance improvement: an accelerometry-based system. IEEE Transactions on Biomedical Engineering 52: 2108–2111.
- 202. Brunelli D, Farella E, Rocchi L, Dozza M, Chiari L, et al. (2006) Bio-feedback system for rehabilitation based on a wireless body area network. In: Proceedings of the 4th Annual IEEE International Conference on Pervasive Computing and Communications (PerCom 2006) - Workshop UbiCare. Pisa, Italy, pp. 527–531.
- 203. Giansanti D, Dozza M, Chiari L, Maccioni G, Cappello A (2009) Energetic assessment of trunk postural modifications induced by a wearable audio-biofeedback system. Medical Engineering & Physics 31: 48–54.
- 204. Krishnan S, Rangayyan RM, Bell GD, Frank CB (2000) Sonification of knee-joint vibration signals. In: Proceedings of the 22nd Annual EMBS International Conference. Chicago, IL, USA, pp. 1995–1998.
- 205. Krishnan S, Rangayyan RM, Bell GD, Frank CB (2001) Auditory display of knee-joint vibration signals. Journal of the Acoustical Society of America 110: 3292–3304.
- 206. Jones D (2008) AtomSwarm: a framework for swarm improvisation. In: Giacobini M, Brabazon A, Cagnoni S, Di Caro GA, Drechsler R, et al., editors, Applications of Evolutionary Computing, Springer Berlin Heidelberg, volume 4974 of Lecture Notes in Computer Science. pp. 423–432.
- 207. Ahmad A, Adie SG, Wang M, Boppart SA (2010) Sonification of optical coherence tomography data and images. Optics Express 18: 9934–9944.
- 208. Ng K, Weyde T, Larkin O, Neubarth K, Koerselman T, et al. (2007) 3D augmented mirror: a multimodal interface for string instrument learning and teaching with gesture support. In: Proceedings of the 9th International Conference on Multimodal Interfaces. Nagoya, Japan, pp. 339–345.
- 209. Larkin O, Koerselman T, Ong B, Ng K (2008) Sonification of bowing features for string instrument training. In: Proceedings of the 14th International Conference on Auditory Display (ICAD 2008). Paris, France. CD-ROM.
- 210. Hermann T, Krause J, Ritter HJ (2002) Real-time control of sonification models with a haptic interface. In: Proceedings of the 8th International Conference on Auditory Display (ICAD 2002). Kyoto, Japan, pp. 82–86.
- 211. Hermann T, Ritter HJ (2004) Neural gas sonification - Growing adaptive interfaces for interacting with data. In: Proceedings of the 8th International Conference on Information Visualisation (IV'04). London, UK, pp. 871–878.
- 212. Hermann T, Meinicke P, Ritter HJ (2000) Principal curve sonification. In: Proceedings of the 6th International Conference on Auditory Display (ICAD 2000). Atlanta, GA, USA, pp. 81–86.
- 213. Hermann T, Ritter HJ (2005) Crystallization sonification of high-dimensional datasets. ACM Transactions on Applied Perception 2: 550–558.
- 214. Pauletto S, Hunt A (2004) Interactive sonification in two domains: helicopter flight analysis and physiotherapy movement analysis. In: Proceedings of the 1st International Workshop on Interactive Sonification (ISon 2004). Bielefeld, Germany. Online.
- 215. Pauletto S, Hunt A (2004) A toolkit for interactive sonifications. In: Proceedings of the 10th International Conference on Auditory Display (ICAD 2004). Sydney, Australia. CD-ROM.
- 216. Pauletto S, Hunt A (2006) The sonification of EMG data. In: Proceedings of the 12th International Conference on Auditory Display (ICAD 2006). London, UK, pp. 152–157.
- 217. Pauletto S, Hunt A (2009) Interactive sonification of complex data. International Journal of Human-Computer Studies 67: 923–933.
- 218. Kopeček I, Ošlejšek R (2008) Hybrid approach to sonification of color images. In: Proceedings of the 3rd International Conference on Convergence and Hybrid Information Technology (ICCIT '08). Busan, South Korea, pp. 722–727.
- 219. O'Neill C, Ng K (2008) Hearing images: interactive sonification interface for images. In: Proceedings of the 4th International Conference on Automated solutions for Cross Media Content and Multi-Channel Distribution (AXMEDIS 2008). Florence, Italy, pp. 25–31.
- 220. Huang H, Ingalls T, Olson L, Ganley K, Rikakis T, et al. (2005) Interactive multimodal biofeedback for task-oriented neural rehabilitation. In: Proceedings of the 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Shanghai, China, pp. 2547–2550.
- 221. Chen Y, Huang H, Xu W, Wallis RI, Sundaram H, et al. (2006) The design of a real-time, multimodal biofeedback system for stroke patient rehabilitation. In: Proceedings of the 14th Annual ACM International Conference on Multimedia (MM '06). Santa Barbara, CA, USA, pp. 763–772.
- 222. Wallis I, Ingalls T, Rikakis T, Olsen L, Chen Y, et al. (2007) Real-time sonification of movement for an immersive stroke rehabilitation environment. In: Proceedings of the 13th International Conference on Auditory Display (ICAD 2007). Montréal, Canada, pp. 497–503.
- 223. Kay L (1962) Auditory perception and its relation to ultrasonic blind guidance aid. Journal of the British Institution of Radio Engineers 24: 309–317.
- 224. Kay L (1964) An ultrasonic sensing probe as a mobility aid for the blind. Ultrasonics 2: 53–59.
- 225. Kay L (1964) A new or improved apparatus for furnishing information as to position of objects. GB Patent 978742.
- 226. Kay L (1966) Ultrasonic spectacles for the blind. Journal of the Acoustical Society of America 40: 1564.
- 227. Kay L (1968) Blind aid. US Patent 3366922.
- 228. McMullen SC, Winkler F (2010) The Elocuter: I must remind you we live in Dada times… In: Proceedings of the 28th ACM Conference on Human Factors in Computing Systems (CHI 2010). Atlanta, GA, USA, pp. 3001–3006.
- 229. Kessous L, Jacquemin C, Filatriau JJ (2008) Real-time sonification of physiological data in an artistic performance context. In: Proceedings of the 14th International Conference on Auditory Display (ICAD 2008). Paris, France. CD-ROM.
- 230. Valenti R, Jaimes A, Sebe N (2010) Sonify your face: facial expressions for sound generation. In: Proceedings of the 18th Annual ACM International Conference on Multimedia (MM '10). Florence, Italy, pp. 1363–1372.
- 231. Williamson J, Murray-Smith R (2010) Multimodal excitatory interfaces with automatic content classification. In: Dubois E, Gray P, Nigay L, editors, The Engineering of Mixed Reality Systems, Springer London, Human-Computer Interaction Series, chapter 12. pp. 233–250.
- 232. Kazakevich M, Boulanger P, Bischof WF, Garcia M (2006) Multi-modal interface for a real-time CFD solver. In: Proceedings of the 5th IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006). Ottawa, Canada, pp. 15–20.
- 233. Taylor R, Kazakevich M, Boulanger P, Garcia M, Bischof WF (2007) Multi-modal interface for fluid dynamics simulations using 3-D localized sound. In: Butz A, Fisher B, Krüger A, Olivier P, Owada S, editors, Smart Graphics, Springer Berlin Heidelberg, volume 4569 of Lecture Notes in Computer Science. pp. 182–187.
- 234. Zhao H, Plaisant C, Shneiderman B, Duraiswami R (2004) Sonification of geo-referenced data for auditory information seeking: design principle and pilot study. In: Proceedings of the 10th International Conference on Auditory Display (ICAD 2004). Sydney, Australia. CD-ROM.
- 235. Zhao H, Smith BK, Norman K, Plaisant C, Shneiderman B (2005) Interactive sonification of choropleth maps. IEEE MultiMedia 12: 26–35.
- 236. Zhao H (2005) Interactive sonification of geo-referenced data. In: Proceedings of the 23rd ACM Conference on Human Factors in Computing Systems (CHI 2005). Portland, OR, USA, pp. 1134–1135.
- 237. Xu J, Fang ZG, Dong DH, Zhou F (2010) An outdoor navigation aid system for the visually impaired. In: Proceedings of the 4th IEEE International Conference on Industrial Engineering and Engineering Management (IEEM 2010). Macau, China, pp. 2435–2439.
- 238. Harada S, Takagi H, Asakawa C (2011) On the audio representation of radial direction. In: Proceedings of the 29th ACM Conference on Human Factors in Computing Systems (CHI 2011). Vancouver, Canada, pp. 2779–2788.
- 239. Winton R, Gable TM, Schuett J, Walker BN (2012) A sonification of Kepler space telescope star data. In: Proceedings of the 18th International Conference on Auditory Display (ICAD 2012). Atlanta, GA, USA, pp. 218–220.
- 240. El-Shimy D, Grond F, Olmos A, Cooperstock JR (2012) Eyes-free environmental awareness for navigation. Journal on Multimodal User Interfaces 5: 131–141.
- 241. Terasawa H, Takahashi Y, Hirota K, Hamano T, Yamada T, et al. (2011) C. elegans meets data sonification: can we hear its elegant movement? In: Proceedings of the 8th Sound and Music Computing Conference (SMC 2011). Padua, Italy, pp. 77–82.
- 242. Calvet D, Vallée C, Kronland-Martinet R, Voinier T (2000) Descriptif technique d'un Cosmophone à 24 voies. Technical report, CNRS.
- 243. Calvet D, Vallée C, Kronland-Martinet R, Voinier T (2000) Cosmophony or how to listen to cosmic rays. Technical report, CNRS.
- 244. Vallée C (2000) Cosmophonie et muséographie. Technical report, CNRS.
- 245. Vallée C (2002) The Cosmophone: towards a sensuous insight into hidden reality. Leonardo 35: 129.
- 246. Gobin P, Kronland-Martinet R, Lagesse GA, Voinier T, Ystad S (2004) Designing musical interfaces with composition in mind. In: Wiil UK, editor, Computer Music Modeling and Retrieval, Springer Berlin Heidelberg, volume 2771 of Lecture Notes in Computer Science. pp. 225–246.
- 247. Diennet J, Gobin P, Sturm H, Kronland-Martinet R, Voinier T, et al. (2004) Structure pour spectacles cosmophoniques. Artistic project description, Ubris Studio.
- 248. Diennet J, Calvet D, Kronland-Martinet R, Vallée C, Voinier T (2007) The Cosmophone - Playing with particles, the cosmos and sounds. In: Proceedings of MutaMorphosis: Challenging Arts and Science International Conference. Prague, Czech Republic. Online.
- 249. Kronland-Martinet R, Voinier T (2008) Real-time perceptual simulation of moving sources: application to the Leslie cabinet and 3D sound immersion. EURASIP Journal on Audio, Speech, and Music Processing 2008: Article ID 849696, 1–10.
- 250. Kronland-Martinet R, Voinier T, Calvet D, Vallée C (2012) Cosmic ray sonification: the Cosmophone. AI & Society 27: 307–309.
- 251. Adhitya S, Kuuskankare M (2011) The Sonified Urban Masterplan (SUM) tool: sonification for urban planning and design. In: Proceedings of the 17th International Conference on Auditory Display (ICAD 2011). Budapest, Hungary. CD-ROM.
- 252. Adhitya S, Kuuskankare M (2012) Composing graphic scores and sonifying visual music with the SUM tool. In: Proceedings of the 9th Sound and Music Computing Conference (SMC 2012). Copenhagen, Denmark, pp. 171–176.
- 253. Wilde D (2008) hipDisk: using sound to encourage physical extension, exploring humour in interface designs. International Journal of Performing Arts and Digital Media 4: 7–26.
- 254. Wilde D (2008) The hipdiskettes: learning (through) wearables. In: Proceedings of the 20th Australasian Conference on Computer-Human Interaction: Designing for Habitus and Habitat (OzCHI 2008). Cairns, Australia, pp. 259–262.
- 255. Wilde D (2011) Extending body and imagination: moving to move. International Journal on Disability and Human Development 10: 31–36.
- 256. Wilde D (2012) hipDisk: understanding the value of ungainly, embodied, performative, function. In: Proceedings of the 30th ACM Conference on Human Factors in Computing Systems Extended Abstracts (CHI EA 2012). Austin, TX, USA, pp. 111–120.