Comments and Feedback

General (anonymous) comments from the participants about their experience of the 2011 ThinkTank

  • Discussion about perception and fatigue was interesting and insightful. It helped me position my own stance on fatigue and its relation to my research/project.
  • Helpful advice about learning terminology from other disciplines so that we may answer calls to papers across multi-disciplinary conferences/journals.
  • In general the Think Tank session worked very well and for me specifically it gave a good overview of the area of sonification and showed how wide an area it covered! It also worked well as an icebreaker session for students who might be a bit unsure of ‘conferences’, if this was their first one.
  • The 5-minute talks about other students' work worked well and the staff feedback was useful. We never really got to look at the questions we were asked to come up with before the session, which was slightly disappointing, although we covered some of the general issues in the discussion session on Inputs / Doing It / Outputs.
  • I think the student think tank should definitely be run again next year, in a similar format to this one, but possibly extend it a bit to allow for more discussion and the questions (see above) if there are a similar number of people.
  • I found the ThinkTank to be a fun and helpful networking and learning opportunity, and I received some quite useful feedback on my project. Overall I thought the format was effective. However, I left the session feeling uncertain what to do next; I wished I had a better starting point on entering into the existing auditory display and sonification literature, and I was also hoping for more suggestions on people I should talk to and whose work I should read.

Nick Bearman

School of Environmental Sciences, University of East Anglia, UK.

Evaluating whether sound can be used to represent uncertainty in spatial data

This project compares different visual and sonic methods of representing uncertainty in spatial data. When handling large volumes of spatial data, users can be limited in the amount of data that can be displayed at once due to visual saturation (when no more data can be shown visually without obscuring existing data). While there are a number of approaches within the visual representation field to reduce this saturation, there is most definitely a limit on the amount of information that can be displayed at any one time.

Using sound in combination with visual methods may help to represent uncertainty in spatial data. This example uses the UK Climate Projections 2009 (UKCP09) dataset [1], in which uncertainty has been included for the first time. In all previous versions of this dataset, users were given a single number as the prediction of a particular climate variable, under a specific emissions scenario, for a particular location and time. The dataset now provides users with a range of values and probabilities, which they need to be able to integrate into their existing workflow and decision-making processes.

While there has been considerable research looking at how people understand sounds and how to use sonifications, there has been little on using sound to represent attributes of spatial data, or on using sound in a spatial data (GIS) environment [2]. Fisher [3] provided one of the earliest examples of using sound to represent uncertainty in spatial data; his approach worked in a complementary manner, allowing the user to 'see' the data and 'hear' the uncertainty associated with it, though it was quite limited by the technology available at the time (1994). Jeong and Gluck [4] compared haptic, sonic and combined display methods in a series of user evaluations (n=51) and found that haptic alone was most effective; however, users preferred haptic and sonic combined even though their performance was lower. While there have been a small number of these types of experiments using sound to represent some aspect of spatial data, it has yet to really 'take off'.

Participants took part in the evaluation via a web-based interface that used the Google Maps API to show the spatial data and capture user inputs. Using sound and vision together to show the same variable may also be useful to colour-blind users. Previous awareness of the dataset appears to have a significant impact (p < 0.001) on participants' ability to use the sonification. Using sound to reinforce data shown visually increased scores (p = 0.005), and using sound to show some data instead of vision significantly increased speed without reducing effectiveness (p = 0.033) with repeated use of the sonification.
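The abstract does not specify how uncertainty values were turned into sound. As an illustration only, one common parameter-mapping approach assigns the uncertainty of each map cell to pitch. The sketch below is a minimal example of that general technique; the function name, the frequency range, and the linear mapping are all assumptions for illustration, not the author's actual design.

```python
def uncertainty_to_pitch(uncertainty, u_min=0.0, u_max=1.0,
                         f_low=220.0, f_high=880.0):
    """Map a normalised uncertainty value to a frequency in Hz.

    Higher uncertainty sounds as a higher pitch (linear mapping).
    All parameter names and ranges are illustrative assumptions.
    """
    u = max(u_min, min(u_max, uncertainty))    # clamp into [u_min, u_max]
    fraction = (u - u_min) / (u_max - u_min)   # normalise to 0..1
    return f_low + fraction * (f_high - f_low)

# A fully certain cell maps to 220 Hz, a fully uncertain one to 880 Hz.
print(uncertainty_to_pitch(0.0))  # 220.0
print(uncertainty_to_pitch(0.5))  # 550.0
print(uncertainty_to_pitch(1.0))  # 880.0
```

In a web interface such as the one described, a function like this could run per map cell on hover or click, with the resulting frequency driving a simple tone generator.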

Issues to Discuss

  1. Why is sonification not widely used in Geographic Information / spatial data applications?
  2. Is there a gap between the sonification and visualisation communities, and if so, why?
  3. Where can sound be used as a better medium than vision in a spatial data context?


  1. G. J. Jenkins, J. M. Murphy, D. S. Sexton, J. A. Lowe, P. Jones, and C. G. Kilsby, UK Climate Projections: Briefing report. Exeter, UK, 2009.
  2. J. B. Krygier, “Sound and Geographic Visualization,” in Visualization in Modern Cartography, Oxford, UK: Elsevier Science, 1994, pp. 149-166.
  3. P. F. Fisher, “Animation and Sound for the Visualisation of Uncertain Spatial Information,” in Visualisation in Geographic Information Science, Chichester, UK: Wiley, 1994.
  4. W. Jeong and M. Gluck, “Multimodal geographic information systems: Adding haptic and auditory display,” Journal of the American Society for Information Science and Technology, vol. 54, no. 3, pp. 229-242, 2003.