ClimaSynth: Enhancing Environmental Perception through Climate Change Sonic Interaction

In this entry I will be reviewing a NIME (New Interfaces for Musical Expression) paper I read, ClimaSynth: Enhancing Environmental Perception through Climate Change Sonic Interaction. The authors are Eleni-Ira Panourgia and Angela Brennecke from Film University Babelsberg KONRAD WOLF and Bela Usabaev from the Cologne Academy of Media Arts. The paper was published in 2024 at the International Conference on New Interfaces for Musical Expression.

The document describes ClimaSynth, an interactive application that aims to communicate climate change through sound, using interaction as a means to raise awareness about the changing environment. ClimaSynth explores the relationship between acoustic and climatic effects by allowing users to manipulate sound through an interface.

I believe it is relevant for New Interfaces for Musical Expression because it explores how we can use simple sonic parameters to communicate big issues. At the same time, it is relevant for my research because it shows a way to draw attention to an environmental cause.

The web-based application aims to enhance environmental perception and to explore how sound can express aridity and drought. It does so through contrast and through familiar sounds associated with climate conditions. The aim is also to investigate the issue through non-human perspectives.

The sonic experience is meant to be accessible, with a blurred distinction between content and interface. The authors wanted to create a minimal UI that unites sound and vision. Other applications in this format already exist, such as Sound Canvas, Lines and GrainTrain. Riverssounds, another example, allows users to navigate river ecosystems with mouse interaction.

Climate change also means that environments sound different. The interactive application tries to communicate this using a sound design technique called granular synthesis, which transforms audio by breaking it down into microscopic fragments called “grains”. In this case, granular synthesis is used to manipulate field recordings.
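The paper does not include code, but to make the technique concrete for myself, here is a minimal sketch of how a single grain could be played back with the Web Audio API. The function names, the 80 ms grain length and the triangular envelope are my own assumptions for illustration, not ClimaSynth's actual implementation.

```typescript
// Minimal granular-synthesis sketch using the Web Audio API.
// Grain length, envelope shape and function names are illustrative assumptions.
const ctx = new AudioContext();

function playGrain(buffer: AudioBuffer, offsetSec: number, when: number, durationSec = 0.08): void {
  const src = ctx.createBufferSource();
  src.buffer = buffer;

  // Short fade-in/fade-out so the grain does not click at its edges.
  const env = ctx.createGain();
  env.gain.setValueAtTime(0, when);
  env.gain.linearRampToValueAtTime(1, when + durationSec / 2);
  env.gain.linearRampToValueAtTime(0, when + durationSec);

  src.connect(env).connect(ctx.destination);
  // Play only a tiny slice ("grain") of the source recording.
  src.start(when, offsetSec, durationSec);
}

// A stream of overlapping grains sampled from one point of the recording
// reconstructs a continuous, but malleable, texture.
function playGrainCloud(buffer: AudioBuffer, centreSec: number, grains = 50): void {
  const now = ctx.currentTime;
  for (let i = 0; i < grains; i++) {
    playGrain(buffer, centreSec, now + i * 0.04); // a new grain every 40 ms
  }
}
```

The key idea is that by choosing where the grains are read from and how densely they overlap, the same field recording can be stretched, thinned out or thickened, which is what makes the technique suitable for expressing gradual environmental change.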

The user can select a soundscape from a drop-down menu: “birds near water”, “river water” and “tree bark”. Each of these corresponds to a field recording. They can then adjust two parameters with sliders, “areas” and “spread”, which are translated visually and sonically. By modifying these parameters, it is possible to transition between two extremes: the birds singing alone or accompanied by insects, the river becoming drier and the tree bark readjusting its flexibility, all changes caused by drought.

The “spread” value is visually translated as a purple area around the mouse cursor that can be more or less dense. With zero spread the grains are sampled from the same point, while a higher value allows for a broader selection of grains.

The “area” value determines the number of black dots around the mouse cursor. Lower values allow the sounds to be perceived individually, while higher values make them feel connected.
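To clarify for myself how these two sliders could translate into grain selection, here is a small sketch of the mapping as I understand it from the paper. The scaling factors and the function itself are my own guesses, not the actual ClimaSynth code.

```typescript
// Sketch of how I read the "spread" and "area" mappings; the paper does not
// give the actual formulas, so the scale factors here are illustrative guesses.
interface GrainParams {
  spread: number; // 0..1 slider: how far grain offsets scatter around the cursor
  area: number;   // 0..1 slider: how many grain streams ("black dots") play at once
}

// Map the cursor's horizontal position to a point in the recording,
// then scatter grain read positions around it according to "spread".
function grainOffsets(cursorX: number, canvasWidth: number, bufferDuration: number, p: GrainParams): number[] {
  const centre = (cursorX / canvasWidth) * bufferDuration;
  const maxJitter = p.spread * 2.0;            // up to ±2 s of scatter (assumed)
  const streams = 1 + Math.round(p.area * 15); // 1..16 simultaneous grains (assumed)

  const offsets: number[] = [];
  for (let i = 0; i < streams; i++) {
    const jitter = (Math.random() * 2 - 1) * maxJitter;
    // Clamp so we never read outside the recording.
    offsets.push(Math.min(Math.max(centre + jitter, 0), bufferDuration));
  }
  return offsets; // with spread = 0, every grain is sampled from the same point
}
```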

The conceptual originality of ClimaSynth lies in using parameters to communicate climate change and in identifying which ones convey the idea best. The non-human perspective of the application almost makes the user feel like they are a fish in the river or the tree itself.

The idea of blurring interface and content is coherent and shows that sound and visual elements can also communicate larger problems without needing any words.

The paper is missing information about the target audience, testing and participants.

I also think that it is not very accessible to someone outside the field, because the way the parameters work is not intuitive. Their visual translation is clear, but I found the sonic changes difficult to grasp. Before reading the paper, I was not sure what the interactions meant. I would have wished for clearer naming of the parameters, or labels at the two ends of the sliders, as a cue to their effect on the sound.

Since the climate-related changes are not very evident on screen or in the sound, I find the result limited, although after playing with the application for a while it is possible to perceive them.

As a consequence, I believe that the mapping could be more thoughtful, but the instrument is still performable beyond the lab. It explores musical expression in a way that is closer to nature and it surely conveys a message.
I agree with what is stated in the conclusion of the paper: “This approach offers a promising direction for conveying the complexity of climate change through rich sonic encounters with changing environmental states.” 

The conclusion also mentions the possibility of integrating more parameters and environmental or location-specific data into the UI, and I can imagine the interface changing for each recording, to make the experience more immersive.

I reckon the application could be displayed in exhibitions about environmental issues, but it could also be used to promote behavioural change in interactive ads.

The method explored has potential to communicate an issue as complex as climate change, and it was a great inspiration for my research theme, as I am interested in how interaction can raise awareness about light pollution. Using parameters, whether visual or sonic, and analysing them is a way to communicate environmental change, and I will take this into consideration when working on my light pollution project.

D&R2 SED D2 – Business Idea 2/2

What Problem are you solving?

Most game interfaces are static and cluttered, creating massive legibility and navigation barriers for visually impaired players. Designers often lack the time or specialized tools to implement complex WCAG-like standards and multi-sensory feedback.

Why should we care about it?

Accessibility isn’t just a “feature”, it is a fundamental right to play. When games like Black Myth: Wukong launch with unreadable text, it excludes millions of players and hurts the game’s reputation and reach.

What is the solution? How does it work?

An interactive design engine that acts as a “Canva for Game UI.” It allows designers to drag-and-drop modular HUD elements and automatically checks them against accessibility rules while providing a library of haptic and audio logic.
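As an illustration of the kind of rule such an engine could check automatically, here is a small sketch of the standard WCAG minimum-contrast test (a 4.5:1 ratio for normal-size text). The function names are hypothetical, but the luminance and contrast formulas are the ones defined by WCAG 2.x.

```typescript
// Example automated check: WCAG 2.x minimum contrast (4.5:1 for normal text).
// Function names are hypothetical; the formulas are the standard WCAG definitions.
type RGB = [number, number, number]; // 0..255 per channel

function relativeLuminance([r, g, b]: RGB): number {
  const lin = (c: number) => {
    const s = c / 255; // sRGB channel linearisation
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(fg: RGB, bg: RGB): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Flag a HUD text element that fails the AA threshold for normal-size text.
function passesAA(fg: RGB, bg: RGB): boolean {
  return contrastRatio(fg, bg) >= 4.5;
}

console.log(passesAA([128, 128, 128], [255, 255, 255])); // false: grey on white is too faint
```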

Who is the target audience/customer?

The target audience is the visually impaired gaming community, who need better tools to play. The paying customers are UX/UI designers and game studios looking to streamline their workflow and meet professional standards.

What is going to happen? (Change & Impact)

We will move from a “one-size-fits-all” UI to a Modular Era where games are playable by everyone from day one. This shift reduces “design guesswork” for studios and replaces frustration with independence and mastery for players.

Bonus: How can this make money?

The platform will operate on a Freemium SaaS model, offering a free basic toolkit for students and indies, while charging Enterprise Subscription fees to large studios for advanced simulation tools and custom haptic libraries.

Accessibility requirements and barriers

Breaking down the accessibility requirements of this project felt like one of the harder exercises of the whole research process, because it is easy to assume everything will be “easy and smooth” for a niche digital research product. There was much more behind it.

What the user should be capable of

On a physical and personal level, sight would be the most relevant sense, since the experience is largely text-based, but it can be made compatible with screen readers and text-to-speech tools from the start. Hearing is not a requirement at all. Movement needs are minimal too, limited to basic typing, clicking, or scrolling, with voice dictation and keyboard-only navigation as fallbacks.

The cognitive level is where it gets more demanding. Following an iterative loop of prompting, reviewing and editing requires sustained mental focus and a willingness to sit with a process that is genuinely not instantaneous. It also requires strong digital literacy to navigate AI tools and their terminology, and enough language confidence to work in what are predominantly English environments, though nowadays almost everything can be translated in real time.

Financially, the core methodology is designed to be a free-to-access tool, meant as a starting point or guideline for using AI tools. Infrastructure needs are also few: a device and an internet connection.

Who is it meant for and where does this happen

The methodology will be designed for anyone engaging with storytelling on a professional or exploratory level, from researchers and creatives to students and private users. It lives entirely in digital space, which means it can happen in any place and environment.

What it does require is probably the mindset: willingness and awareness to co-create with a machine, and enough critical thinking to question what the machine gives back.

The barriers

Two barriers kept coming up throughout the thinking process. The first was language: since AI tools lean heavily on English, nuance is often the first casualty of translation. The solution here could be to develop a simplified and even more visual version of the methodology that relies on basic English, diagrams and examples rather than dense theoretical language.

The second barrier was knowledge. The informatics-specific terminology involved is genuinely intimidating and not common knowledge (it was not for me either). The solution I thought of for this issue was to add a vocabulary directly into the framework itself, so all the basic knowledge can live in the same place as the product.

To conclude, cognitive overload is also worth mentioning. When the prompting loop feels endless and the output feels overwhelming, the step-by-step structure of the methodology becomes less of a nice-to-have and more of a way out.