In this entry I will be reviewing a NIME (New Interfaces for Musical Expression) paper I read, “ClimaSynth: Enhancing Environmental Perception through Climate Change Sonic Interaction”. The authors are Eleni-Ira Panourgia and Angela Brennecke from Film University Babelsberg KONRAD WOLF and Bela Usabaev from the Cologne Academy of Media Arts. The paper was published in 2024 at the International Conference on New Interfaces for Musical Expression.
The document describes ClimaSynth, an interactive application that aims to communicate climate change through sound, using interaction as a means to raise awareness about the changing environment. ClimaSynth explores the relationship between acoustic and climatic effects by allowing users to manipulate sound through an interface.
I believe it is relevant for New Interfaces for Musical Expression because it explores how we can use simple sonic parameters to communicate big issues. At the same time, it is relevant for my research because it shows a way to draw attention to an environmental cause.
The web-based application aims to enhance environmental perception and to explore how sound can express aridity and drought. It does this through contrast and through familiar sounds associated with climate conditions, and it also sets out to investigate the issue from non-human perspectives.
The sonic experience is meant to be accessible, with a blurred distinction between content and interface; the authors wanted a minimal UI that unites sound and vision. Other applications in this format already exist, such as Sound Canvas, Lines and GrainTrain. Another, Riverssounds, lets users navigate river ecosystems through mouse interaction.
Climate change also means that environments sound different. The application tries to communicate this using granular synthesis, a sound design technique that transforms audio by breaking it down into very short fragments called “grains”. In this case, granular synthesis is used to manipulate field recordings.
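To make the technique more concrete, here is a minimal sketch of grain playback using the Web Audio API in TypeScript. It is only my own illustration of granular synthesis in a web context, not the authors' implementation; the helper names (loadBuffer, playGrain) and the grain length are assumptions.

```ts
// Minimal granular-playback sketch (Web Audio API). Illustrative only,
// not the ClimaSynth source code.
async function loadBuffer(ctx: AudioContext, url: string): Promise<AudioBuffer> {
  const response = await fetch(url);
  const data = await response.arrayBuffer();
  return ctx.decodeAudioData(data);
}

// Play one short "grain" cut from the recording at a given offset.
function playGrain(
  ctx: AudioContext,
  buffer: AudioBuffer,
  startOffset: number,  // position in the recording the grain is sampled from (seconds)
  duration = 0.08       // assumed grain length: a few tens of milliseconds
): void {
  const src = ctx.createBufferSource();
  src.buffer = buffer;

  // Short fade-in/fade-out envelope so the grain does not click.
  const env = ctx.createGain();
  const now = ctx.currentTime;
  env.gain.setValueAtTime(0, now);
  env.gain.linearRampToValueAtTime(1, now + duration * 0.2);
  env.gain.linearRampToValueAtTime(0, now + duration);

  src.connect(env);
  env.connect(ctx.destination);
  src.start(now, startOffset, duration);
}
```

Calling playGrain repeatedly from a timer, with slightly varying offsets, turns a static recording into a continuously evolving granular texture.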
The user can select a soundscape from a drop-down menu: “birds near water”, “river water” and “tree bark”. Each of these corresponds to a field recording. They can then adjust two parameters with sliders, “areas” and “spread”, which are translated both visually and sonically. By modifying these parameters, it is possible to transition between two extremes: the birds singing alone or accompanied by insects, the river becoming drier and the tree bark readjusting its flexibility, all changes caused by drought.
The “spread” value is visually translated as a purple area around the mouse cursor, which can be more or less dense. With zero spread the grains are sampled from the same point, while a higher value allows grains to be drawn from a broader region of the recording.
The “area” value determines the number of black dots around the mouse cursor. Lower values let the sounds be perceived individually, while higher values make them blend into a connected texture.
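The paper does not spell out the exact mapping, but as a rough sketch of how such sliders could drive grain scheduling (reusing the playGrain helper from the sketch above; the ranges and the ±2 s window are my own assumptions):

```ts
// Hypothetical mapping of the two sliders onto grain scheduling. The names
// "spread" and "area" follow the UI described above; the mapping itself is
// an assumption for illustration only.
function scheduleGrains(
  ctx: AudioContext,
  buffer: AudioBuffer,
  centre: number,  // playhead position in the recording (seconds), e.g. driven by the mouse
  spread: number,  // 0 = every grain sampled from the same point, 1 = grains drawn from a wide region
  area: number     // 1 = sparse, individually audible grains; higher = a dense, connected texture
): void {
  const maxOffset = spread * 2.0; // at full spread, grains may come from up to ±2 s around the centre
  for (let i = 0; i < area; i++) {
    const jitter = (Math.random() * 2 - 1) * maxOffset;
    const offset = Math.min(Math.max(centre + jitter, 0), buffer.duration - 0.1);
    playGrain(ctx, buffer, offset);
  }
}
```

With spread at zero every grain repeats the same instant of the recording, and raising area simply layers more of those grains, which matches how the black dots go from being perceived individually to forming a connected mass.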
The conceptual originality of ClimaSynth lies in using parameters to communicate climate change and in identifying which ones convey the idea best. The non-human perspective of the application makes the user feel almost like a fish in the river or the tree itself.
The idea of merging interface and content is coherent and shows that sound and visual elements can communicate larger problems without needing any words.
The paper is missing information about the target audience, user testing and participants.
I also think that it is not very accessible to someone outside the field, because the way the parameters work is not intuitive. Their visual translation is clear, but I found the sonic changes difficult to grasp; before reading the paper, I was not sure what the interactions meant. I would have wished for clearer parameter names, or labels on the two ends of the sliders, as a cue to their effect on the sound.
Because the climate-related changes are not very evident on screen or in the sound, I think the result is limited, although after playing with the application for a while it is possible to perceive them.
As a consequence, I believe the mapping could be more thoughtful, but the instrument is still performable beyond the lab. It explores musical expression in a way that is closer to nature, and it certainly conveys a message.
I agree with what is stated in the conclusion of the paper: “This approach offers a promising direction for conveying the complexity of climate change through rich sonic encounters with changing environmental states.”
The conclusion also mentions the possibility of integrating more parameters and environmental or location-specific data into the UI, and I can imagine the interface changing for each recording to make the experience more immersive.
I reckon the application could be displayed in exhibitions about environmental issues, and it could also be used to promote behavioural change in interactive ads.
The method explored has potential to communicate the complex issue that is climate change, and it was a great inspiration for my research theme, as I am interested in how interaction can raise awareness about light pollution. Using and analysing parameters, whether visual or sonic, is a way to communicate environmental changes, and I will take it into consideration when working on my light pollution project.
