
Nick Bearman

School of Environmental Sciences, University of East Anglia, UK.

Evaluating whether sound can be used to represent uncertainty in spatial data
This project compares different visual and sonic methods of representing uncertainty in spatial data. When handling large volumes of spatial data, users are limited in the amount of data that can be displayed at once by visual saturation (the point at which no more data can be shown without obscuring existing data). While there are a number of approaches within the visual representation field for reducing this saturation, there is a definite limit to the amount of information that can be displayed at any one time.

Using sound in combination with visual methods may help to represent uncertainty in spatial data. This example uses the UK Climate Projections 2009 (UKCP09) dataset [1], where uncertainty has been included for the first time. In all previous versions of this dataset, users were given a single number as the prediction of a particular climate variable, under a specific emissions scenario, for a particular location and time. The dataset now provides users with a range of values and probabilities, which they need to integrate into their existing workflow and decision-making processes.
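To make this concrete, here is a minimal sketch (TypeScript, purely illustrative; the types, names and percentile values are invented for this example and are not from UKCP09 or the original study) of how such a probabilistic prediction might be reduced to a central estimate plus a range width before being passed to a display or sonification:

    // Illustrative only: UKCP09 supplies a probability distribution for each
    // climate variable, emissions scenario, location and time, rather than a
    // single number. One simple reduction for an existing workflow is a
    // central estimate plus a range width.

    interface ProbabilisticPrediction {
      // Map from probability level (e.g. 10, 50, 90) to predicted change (degrees C).
      percentiles: Map<number, number>;
    }

    function summarise(p: ProbabilisticPrediction) {
      const central = p.percentiles.get(50)!; // median (central) estimate
      const low = p.percentiles.get(10)!;     // 10th percentile
      const high = p.percentiles.get(90)!;    // 90th percentile
      return { central, rangeWidth: high - low };
    }

    // Hypothetical prediction for one grid cell (values invented):
    const prediction: ProbabilisticPrediction = {
      percentiles: new Map([[10, 1.5], [50, 2.7], [90, 4.0]]),
    };
    console.log(summarise(prediction)); // { central: 2.7, rangeWidth: 2.5 }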

While there has been considerable research into how people understand sounds and how to use sonifications, there has been little on using sound to represent attributes of spatial data, or on using sound in a spatial data (GIS) environment [2]. Fisher [3] provided one of the earliest examples of using sound to represent uncertainty in spatial data; his method worked in a complementary manner, allowing the user to 'see' the data and 'hear' the uncertainty associated with it, although it was quite limited by the technology available at the time (1994). Jeong and Gluck [4] compared haptic, sonic and combined display methods in a series of user evaluations (n = 51) and found that haptic alone was most effective; however, users preferred haptic and sonic combined even though their performance was lower. While there have been a small number of experiments of this kind using sound to represent some aspect of spatial data, the approach has yet to really 'take off'.

Participants took part in the evaluation via a web-based interface, which used the Google Maps API to show the spatial data and capture user inputs. Using sound and vision together to show the same variable may also be useful to colour-blind users. Previous awareness of the dataset appears to have a significant impact (p < 0.001) on participants' ability to use the sonification. Using sound to reinforce data shown visually resulted in increased scores (p = 0.005), and using sound to show some data instead of vision showed a significant increase in speed without reducing effectiveness (p = 0.033) with repeated use of the sonification.
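As a minimal sketch of what such a sonification might involve (hypothetical: it uses the browser's Web Audio API, which post-dates the 2011 study, so this is not a reconstruction of the actual interface), the code below maps the uncertainty range width of a selected map cell to the pitch of a short tone, letting the user 'see' the central estimate and 'hear' the uncertainty:

    // Sketch: encode a cell's uncertainty (e.g. the width of its 10th-90th
    // percentile range) as the pitch of a short sine tone. Names and the
    // pitch range are illustrative choices, not from the original study.

    const audioCtx = new AudioContext();

    // Linearly map value from [inMin, inMax] to [outMin, outMax], clamped.
    function mapRange(value: number, inMin: number, inMax: number,
                      outMin: number, outMax: number): number {
      const t = Math.min(Math.max((value - inMin) / (inMax - inMin), 0), 1);
      return outMin + t * (outMax - outMin);
    }

    // Play a short tone whose pitch encodes the uncertainty width.
    function sonifyUncertainty(rangeWidth: number, maxWidth: number): void {
      // Wider range (more uncertainty) -> higher pitch, 220-880 Hz.
      const freq = mapRange(rangeWidth, 0, maxWidth, 220, 880);
      const osc = audioCtx.createOscillator();
      const gain = audioCtx.createGain();
      osc.frequency.value = freq;
      osc.connect(gain).connect(audioCtx.destination);
      // Short decay envelope to avoid clicks.
      gain.gain.setValueAtTime(0.3, audioCtx.currentTime);
      gain.gain.exponentialRampToValueAtTime(0.001, audioCtx.currentTime + 0.4);
      osc.start();
      osc.stop(audioCtx.currentTime + 0.4);
    }

    // E.g. wired to a map click: a cell with a 2.5 degree C range against a
    // dataset-wide maximum of 5 degrees C plays a mid-range tone.
    sonifyUncertainty(2.5, 5);

Pitch is only one candidate mapping; loudness, timbre or duration could equally carry the uncertainty, and which mappings users can actually exploit is exactly the kind of question the evaluation above addresses.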

Issues to Discuss

  1. Why is sonification not widely used in Geographic Information / spatial data applications?
  2. Is there / why is there a gap between the sonification and the visualisation communities?
  3. Where can sound be used as a better medium than vision in a spatial data context?

References

[1] G. J. Jenkins, J. M. Murphy, D. S. Sexton, J. A. Lowe, P. Jones, and C. G. Kilsby, UK Climate Projections: Briefing Report. Exeter, UK: Met Office Hadley Centre, 2009.
[2] J. B. Krygier, “Sound and Geographic Visualization,” in Visualization in Modern Cartography, Oxford, UK: Elsevier Science, 1994, pp. 149–166.
[3] P. F. Fisher, “Animation and Sound for the Visualisation of Uncertain Spatial Information,” in Visualisation in Geographic Information Science, Chichester, UK: Wiley, 1994.
[4] W. Jeong and M. Gluck, “Multimodal geographic information systems: Adding haptic and auditory display,” Journal of the American Society for Information Science and Technology, vol. 54, no. 3, pp. 229–242, 2003.
