I want to promote sustainable clothing consumption by showing people that it really can be quite easy to repair, tailor, and even make your own clothes. Studies show that taking part in making and maintaining your clothes makes you feel more attached to them, so you take better care of them and keep them longer.
Why should we care about it?
The fashion industry is one of the most environmentally harmful industries in the world. Fast fashion generates enormous amounts of waste, uses large quantities of water, and is one of the most exploitative businesses when it comes to labor rights.
What is the solution you are offering? How does it work?
Right now, my idea is a website for reusing old clothes and materials. It can generate sewing patterns based on the materials available, their quantity, and the user’s preferences. The user can customize the experience as much as possible by using filters to reflect factors such as their skill level, available time, tools, and workspace.
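To make the pattern-matching idea more concrete, here is a minimal sketch of how the filtering step could work. Every name, field, and value here is my own illustrative assumption, not an actual implementation:

```python
# Hypothetical sketch: filter a sewing-pattern catalog by the user's
# available fabric and constraints (skill, time, tools). All fields and
# values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Pattern:
    name: str
    fabric_m2: float      # fabric area the pattern requires
    skill: int            # 1 = beginner .. 5 = expert
    hours: float          # estimated sewing time
    tools: set            # e.g. {"sewing machine", "scissors"}

def matching_patterns(catalog, fabric_m2, skill, hours, owned_tools):
    """Return the patterns the user can realistically make with what they have."""
    return [p for p in catalog
            if p.fabric_m2 <= fabric_m2
            and p.skill <= skill
            and p.hours <= hours
            and p.tools <= owned_tools]   # required tools must be a subset

catalog = [
    Pattern("tote bag", 0.5, 1, 2.0, {"sewing machine"}),
    Pattern("patchwork jacket", 3.0, 4, 20.0, {"sewing machine", "overlocker"}),
]
print(matching_patterns(catalog, fabric_m2=1.0, skill=2,
                        hours=4.0, owned_tools={"sewing machine"}))
# -> only the tote bag matches this user's constraints
```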
Who is the target audience? Who is the customer?
I would say my target audience would be young people who are interested in living more sustainably and are motivated to keep their clothing consumption down.
Key words for my main target group: thrifters, crafty, practical, motivated, hobbies, sewing
What is going to happen? (Change & Impact)
People will realize how easy it is to take care of and customize their clothes. Early users will influence others, and hopefully the rest of society will follow their lead. This will decrease fast fashion demand and, in the long run, have a positive effect on the environment. 🙂
Recently, I came across the research “Entangling with Light and Shadow: Layers of Interaction with the Pattern Organ” [1] by Jasmine Butt, Nathan Renney, Benedict Gaster, and Maisie Palmer, developed within the Expressive Computer Interaction Research Group at UWE Bristol.
RESEARCH OBJECTIVE
Figure 1: Illustration of an interface pattern
This research explores the design and use of a camera-based digital musical instrument called the Pattern Organ. This visual-audio synthesis artifact investigates new ways of interacting through light and shadow.
Users can modify a waveform by placing their hands or objects in front of the instrument’s camera, creating shadows and patterns. Through this interaction, they can perceive how both the environment and the sound change in real time.
The project initially started as a digital tool to represent the process of optical sound technology. However, during the workshop sessions, this idea evolved further. The focus shifted from a purely visual-audio synthesis system to a more open, participatory, and exploratory process.
THEORETICAL BACKGROUND
A matter of finding the grain of the world
Bruna Goveia Da Rocha and Kristina Andersen [2]
Figure 2: Design of the original instrument
Drawing from analogue optical sound technologies used in early cinema, the research reinterprets these practices through a post-human perspective. Two main theoretical perspectives are considered. The first is from N. Katherine Hayles, who describes the world as a complex and highly interconnected system [3]. In this view, cognition is not limited to humans but moves dynamically across humans, animals, and technological systems. The second perspective is Karen Barad’s Agential Realism [4]. This theory describes reality as something continuously shaped by the interaction between material and meaning. Matter and information are not separate but constantly influence each other.
A strong emphasis is also placed on material thinking and hands-on experimentation with different materials.
CONSIDERATIONS
Figure 3: Images from the first workshop’s exploration
Throughout this process, there are two aspects that I find particularly interesting.
From a theoretical perspective, I find the idea of an entangled and participatory workshop very powerful. In this context, three elements—human, machine, and materials—are in constant dialogue. They continuously influence each other during the creative process. This approach is very effective in stimulating critical thinking, both in design, where each input can generate new ideas or solutions, and in educational contexts.
Figure 4: Experimentation using a rotating can
From a practical perspective, I was particularly interested in the use of raw data. This concept influenced the method of sonification used in the project. Rawness can be understood as a choice to avoid interpreting or transforming the data through complex digital processing. Instead, the data produced by the system is used more directly, without adding layers of interpretation.
This does not necessarily mean that raw data is more accurate or more realistic. Rather, it means that the measurements are not modified or filtered, allowing a more immediate connection with the original signal.
In the case of camera-based sonification, two main approaches can be identified (a minimal sketch of the second follows the list):
Extracting features from image data to control or modulate sound
Using a more direct method, where pixel brightness values are translated into sound signals with minimal processing
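To make the second, “raw” approach concrete, here is a minimal sketch of direct pixel-to-sound translation. It assumes OpenCV, NumPy, and SciPy; the scan-line choice, scaling, and sample rate are my own assumptions, not the Pattern Organ’s actual mapping:

```python
# Minimal "raw" sonification sketch: brightness values along one scan line
# of a webcam frame become audio sample amplitudes, with no feature
# extraction in between.
import cv2
import numpy as np
from scipy.io import wavfile

cap = cv2.VideoCapture(0)                            # default webcam
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    row = gray[gray.shape[0] // 2]                   # middle scan line, 0..255
    samples = row.astype(np.float32) / 127.5 - 1.0   # rescale to -1..1
    # Loop the scan line to fill one second of audio at 44.1 kHz.
    audio = np.tile(samples, 44100 // len(samples) + 1)[:44100]
    wavfile.write("raw_sonification.wav", 44100, audio)
```

A shadow cast over the camera darkens the scan line directly, so the waveform changes with the light itself rather than with any interpreted feature of the image.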
CONCLUSION
This research opens important questions about how data should be treated and interpreted. It challenges the idea that data always needs to be processed, optimized, or controlled.
It also highlights the role of human intervention and how our decisions shape the way systems behave. At the same time, it shows how the physical and material nature of interaction—light, shadow, objects—can influence digital processes in meaningful ways.
More broadly, it invites us to rethink the relationship between humans, technology, and the material world. Instead of separating them, this work suggests that meaningful interaction emerges from their continuous entanglement.
REFERENCES
[1] Jasmine Butt, Nathan Renney, Benedict Gaster, and Maisie Palmer. 2025. Entangling with Light and Shadow: Layers of Interaction with the Pattern Organ. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME ’25). Canberra, Australia, June 24–27, 2025.
[2] Bruna Goveia Da Rocha and Kristina Andersen. 2020. Becoming Travelers: Enabling the Material Drift. In Companion Publication of the 2020 ACM Designing Interactive Systems Conference. ACM, Eindhoven, Netherlands, 215–219. https://doi.org/10.1145/3393914.3395881
[3] N. Katherine Hayles. 2006. Unfinished Work: From Cyborg to Cognisphere. Theory, Culture & Society 23 (2006), 159–166. https://doi.org/10.1177/0263276406069229
[4] Karen Barad. 2007. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Duke University Press, Durham, NC and London.
In this entry I will review a NIME (New Interfaces for Musical Expression) paper I recently read: ClimaSynth: Enhancing Environmental Perception through Climate Change Sonic Interaction. The authors are Eleni-Ira Panourgia and Angela Brennecke from Film University Babelsberg KONRAD WOLF and Bela Usabaev from the Cologne Academy of Media Arts. The paper was published in the proceedings of the 2024 International Conference on New Interfaces for Musical Expression.
The document describes ClimaSynth, an interactive application that aims to communicate climate change through sound, using interaction as a means to raise awareness about the changing environment. ClimaSynth explores the relationship between acoustic and climatic effects by allowing users to manipulate sound through an interface.
I believe it is relevant for New Interfaces for Musical Expression because it explores how we can use simple sonic parameters to communicate big issues. At the same time, it is relevant for my research because it shows a way to draw attention to an environmental cause.
The web-based application aims to enhance environmental perception and explore how sound can express aridity and drought. It achieves this through contrast and familiar sounds associated with climate conditions. The aim is also to investigate the issue through non-human perspectives.
The sonic experience is meant to be accessible, with a blurred distinction between content and interface. The authors wanted to create a minimal UI that unites sound and vision. Other applications in this format already exist, such as Sound Canvas, Lines, and GrainTrain; another, Riverssounds, allows users to navigate river ecosystems with mouse interaction.
Climate change also means that environments sound different. The interactive application tries to communicate this using a sound-design technique called granular synthesis, which transforms audio by breaking it down into microscopic fragments called “grains”. In this case, granular synthesis is used to manipulate field recordings.
The user can select a soundscape from a drop-down menu: “birds near water”, “river water”, and “tree bark”. Each of these corresponds to a field recording. They can then adjust two parameters with sliders, “areas” and “spread”, which are translated both visually and sonically. By modifying these parameters, it is possible to transition between two extremes: the birds singing alone or accompanied by insects, the river becoming drier, and the tree bark readjusting its flexibility, all changes caused by drought.
The “spread” value is visually translated as a purple area around the mouse that can be more or less dense. With zero spread the grains are sampled from the same point, while a higher value allows grains to be drawn from a broader selection.
The “area” value is shown as a number of black dots around the mouse. Lower values let the sounds be perceived individually, while higher values make us perceive them as connected.
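From this description, one can sketch how the two controls might drive a granulator. The grain size, envelope, density range, and scaling below are my own assumptions, not ClimaSynth’s actual code:

```python
# Granular-synthesis sketch: "spread" jitters where grains are sampled
# from in the recording, "area" sets how many grains overlap in the output.
import numpy as np

def granulate(recording, sr, spread, area, grain_ms=80, out_s=2.0):
    grain_len = int(sr * grain_ms / 1000)
    env = np.hanning(grain_len)                  # smooth grain envelope
    out = np.zeros(int(sr * out_s))
    n_grains = int(10 + area * 190)              # "area" -> grain density
    center = len(recording) // 2
    rng = np.random.default_rng(0)
    for _ in range(n_grains):
        # "spread" -> how far from one point grains may be sampled
        offset = int(rng.uniform(-1, 1) * spread * (len(recording) - grain_len) / 2)
        start = min(max(center + offset, 0), len(recording) - grain_len)
        grain = recording[start:start + grain_len] * env
        pos = rng.integers(0, len(out) - grain_len)
        out[pos:pos + grain_len] += grain
    return out / max(1.0, np.abs(out).max())     # normalize to avoid clipping

# e.g. granulate(river_recording, 44100, spread=0.2, area=0.8)
```

With spread at zero every grain comes from the same instant of the recording; raising it lets grains sample a wider time range, matching the behaviour described above.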
The conceptual originality of ClimaSynth lies in using parameters to communicate climate change and in identifying which ones convey the idea best. The non-human perspective of the application makes the user almost feel like they are a fish in the river or the tree itself.
The idea of merging interface and content is coherent and shows that sound and visual elements can communicate larger problems without needing any words.
The paper is missing information about the target audience, testing, and participants.
I also think the application is not very accessible to someone outside the field, because the way the parameters work is not intuitive. Their visual translation is clear, but I found the sonic changes difficult to grasp; before reading the paper, I was not sure what the interactions meant. I would have liked clearer naming of the parameters, or labels on the two ends of the sliders, as a cue to their effect on the sound.
Since the climate changes are not very evident on screen or in the sound, I think the result is limited, although after playing with the app for a while it is possible to perceive them.
As a consequence, I believe that the mapping could be more thoughtful, but the instrument is still performable beyond the lab. It explores musical expression in a way that is closer to nature and it surely conveys a message. I agree with what is stated in the conclusion of the paper: “This approach offers a promising direction for conveying the complexity of climate change through rich sonic encounters with changing environmental states.”
The conclusion also mentions the possibility of integrating more parameters and environmental or location-specific data into the UI, and I can imagine the interface changing for each recording to make the experience more immersive.
I reckon the application could be displayed in exhibitions about environmental issues, but it could also be used to promote behavioural change in interactive ads.
The method explored has the potential to communicate the complex issue that is climate change, and it was a great inspiration for my research theme, as I am interested in how interaction can raise awareness about light pollution. Using and analysing parameters, whether visual or sonic, is a way to communicate environmental change, and I will take this into consideration when working on my light pollution project.
When browsing through the project and paper titles, this one instantly caught my attention, as it involves a concept similar to my own Virtual Foley Stage project in terms of interaction. This review is therefore a chance to investigate the approach used by the authors and compare it with my own ideas and processes.
The paper proposes an interactive, multi-functional virtual instrument based on computer vision. Hand-gesture recognition via the MediaPipe Hand Landmarker enables real-time music creation in Max/MSP. There are four different modes for manipulating the audio signal, which enables use either standalone or within accompanying applications.
In the first stage, the system uses the OpenCV library to capture and display real-time video of the player. This feed is analyzed by the Hand Landmarker to gather gestural movement data, which is then mapped to different functions and tasks within the audio environment. This is essentially the approach I want to take in my project, and it seems ideal.
The next step is sending the acquired information from the MediaPipe pipeline to the audio environment via OSC messages. While the paper’s chosen audio environment is Max/MSP, my own project will work within plugdata, to keep all parts of the project accessible and open source.
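As orientation for my own implementation, here is a minimal sketch of that pipeline in Python. The OSC address and port are my own choices, and plugdata (or Max/MSP) would need a matching receiver on that port; this is not the authors’ actual code:

```python
# Sketch of the pipeline: OpenCV captures video, MediaPipe tracks one hand,
# and the index-fingertip position is streamed to the audio environment
# as OSC messages.
import cv2
import mediapipe as mp
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)          # assumed OSC port
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        tip = results.multi_hand_landmarks[0].landmark[8]   # index fingertip
        client.send_message("/hand/index", [tip.x, tip.y])  # normalized 0..1
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:                  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```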
From an audio perspective, the reviewed project focuses on creating an interface to play and compose music. This can be done by playing specific notes or chords, while the gesture movement is mapped to specific musical expressions. As my project pursues a sound-effect and sound-design goal, these mapping strategies do not seem as useful to me. I will have to implement a system with more control in the time domain rather than in pitch, as timing is the most crucial part of foley work.
Even if the paper has not brought me many new insights on the details of my own project, it is good to see other work with a similar idea turning out well. My main concerns, playability and latency, do not seem to have been experienced or even considered in the paper. This leaves me on a positive note, anticipating my own device.
Most game interfaces are static and cluttered, creating massive legibility and navigation barriers for visually impaired players. Designers often lack the time or specialized tools to implement complex WCAG-like standards and multi-sensory feedback.
Why should we care about it?
Accessibility isn’t just a “feature”; it is a fundamental right to play. When games like Black Myth: Wukong launch with unreadable text, they exclude millions of players and hurt the game’s reputation and reach.
What is the solution? How does it work?
An interactive design engine that acts as a “Canva for Game UI.” It allows designers to drag-and-drop modular HUD elements and automatically checks them against accessibility rules while providing a library of haptic and audio logic.
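As an example of such a rule, the engine could verify the WCAG 2.x contrast ratio of every text element the moment it is dropped on the canvas. The formula below is from the WCAG specification; the API shape around it is hypothetical:

```python
# WCAG 2.x contrast check: one rule an interactive UI editor could run
# automatically on each text/background color pair.
def relative_luminance(rgb):
    """Relative luminance of an sRGB color (0-255 per channel), per WCAG."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# AA requires 4.5:1 for normal text; the editor could flag failures live.
print(contrast_ratio((255, 255, 255), (20, 20, 20)))     # ~18.4 -> passes
print(contrast_ratio((150, 150, 150), (120, 120, 120)))  # ~1.5  -> fails
```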
Who is the target audience/customer?
The target audience is the visually impaired gaming community who needs better tools to play. The paying customer is the UX/UI Designer and Game Studio looking to streamline their workflow and meet professional standards.
What is going to happen? (Change & Impact)
We will move from a “one-size-fits-all” UI to a Modular Era where games are playable by everyone from day one. This shift reduces “design guesswork” for studios and replaces frustration with independence and mastery for players.
Bonus: How can this make money?
The platform will operate on a Freemium SaaS model, offering a free basic toolkit for students and indies, while charging Enterprise Subscription fees to large studios for advanced simulation tools and custom haptic libraries.
While reading about PlaySoundGround, I was struck by how a seemingly simple idea—combining sound and play—can completely transform the playground experience. The project turns familiar playground equipment into interactive musical instruments, where physical movement directly produces sound.
At first glance, this idea feels almost obvious. It made me question why more playgrounds are not designed in this way. If play is already based on movement, rhythm, and interaction, connecting it to sound seems like a natural extension. Yet, in most traditional playgrounds, this potential remains unexplored.
What I found particularly interesting is how the project reveals the relationship between playing and play. As the authors describe, both involve creative interaction within physical and social constraints. By making this connection explicit, the playground becomes more than just a physical space—it becomes an interactive, expressive environment.
Another aspect that stood out to me was that the playground was scaled for adults. This shift challenges the common assumption that playgrounds are only for children. Extending such experiences to adults opens up new possibilities for interaction, creativity, and social engagement. It suggests that play is not limited by age, but rather by how spaces are designed.
Overall, this project made me reflect on how small design interventions can unlock entirely new experiences. Even a simple addition like sound can make playgrounds more engaging, interactive, and meaningful. It also reinforces my interest in designing participatory and playful systems that invite users to actively shape their own experiences.