How to Structure an Ecosystem

I’ve discussed the setting for my worldbuilding project, so today I’d like to get into the ecosystem: how life evolves, the rules it follows, and how it will change over time. I also want to explore what animals might evolve in this environment, based on the animals we know live around hydrothermal vents on Earth.

Animal Architecture – Repetition, Symmetry & Polarity

First, let’s get down the facts of animal architecture – how is life structured?

When looking at fossils, we can draw a lot of conclusions about evolution’s pervasive use of repeating parts and modular architecture in animal designs. Even individual body parts reflect this theme of modular design – the limbs of four-legged vertebrates are all made up of thigh, calf and ankle (or upper arm, forearm and wrist), and the hands and feet bear five similar digits – a pattern that can be found as far back as the Jurassic.

No matter how complex or bizarre the outward appearance of an animal may be – beneath it, they are all constructed along recognizable, similar themes. It’s all about repetition: repeated parts, and repetition within those repeating units. The most obvious differences between groups of animals are the number and kind of repeated structures. When comparing these parts, however, it’s important to discern whether two structures are the same body part that has been changed or adapted over time. Such a structure is referred to as a “homolog” – our fore- and hindlimbs, for example, or the different shapes of teeth that are specifically adapted for biting, tearing or grinding food. These are all structures that arose as a repeated series and over time differentiated to varying degrees in different animals.

Another point in animal architecture is symmetry – most animals are bilaterally symmetrical, meaning their left and right sides match: they have a central axis of symmetry running down the long axis of the body. This also gives them a front/rear orientation, which played an important part in the evolution of locomotion.

Thirdly, let’s talk about polarity, which is essential to how we are built from a purely structural point of view. Most animals possess three axes of polarity: head to tail, top to bottom (back to front/belly), and near to far from the torso (e.g. along the limbs).

Diversification of Life

Variation in shape also leads directly into evolution – changes in the number and kind of serial homologs are considered a principal theme here. Early groups of animals tended to have a large number of similar repeating parts, but later groups specialized these structures more and more. These specialized structures also wouldn’t revert to more generalized forms.

The Basis for Life in Europa

Researching primitive life during the last semester taught me that even with the animal itself long gone, we can draw conclusions about it by drawing parallels to animals known today. Similar features and structures are likely to have served a similar purpose, so it makes sense to look at what lifeforms live around hydrothermal vents on Earth and use them as a stepping stone for creating an alien ecosystem.

These animals would be the following:

  • Bristle Worms (e.g.: pompeii worm, sulfide worm)
  • Segmented Worms (e.g.: tube worms)
  • Jellyfish (e.g.: Lucernaria janetae)
  • Sea Anemones (e.g.: Relicanthus daphneae)
  • Crustaceans like shrimp, crabs, lobsters – many of which lack eyes (e.g.: Alvinocarididae, Bythograeidae, Yeti Crabs/Lobsters)
  • Molluscs like mussels, clams, snails (e.g.: Bathymodiolus thermophilus, Calyptogena magnifica, scaly-foot snail)
  • Tonguefish (e.g.: Symphurus thermophilus)
  • Ray-Finned Fish (e.g.: Eelpouts)
  • Cephalopods (e.g.: Vulcanoctopus)

Sources:

(1) Carroll, Sean B.: Endless Forms Most Beautiful. The New Science of Evo Devo. New York: W. W. Norton & Company, Inc. 2005 [E-Book]

(2) Wikipedia, The Free Encyclopedia (21.03.2017), s.v. Category: Animals living on hydrothermal vents, https://en.wikipedia.org/wiki/Category:Animals_living_on_hydrothermal_vents (last accessed 31.03.2025)

IRCAM Excursion Pt. 2

In this second part of my blog series from the IRCAM Forum, I will summarize two workshops and performances in which I found similarities and approaches that could be interesting for my own research project.

Concrete Motion

Concrete Motion is an experimental tool designed for educational settings that facilitates the study and creation of sound-based music through physical movement. The system integrates the flexible audio processing of Max/MSP with Google MediaPipe body tracking, within the TouchDesigner environment. By leveraging these technologies, it establishes an interactive digital space where listening and electroacoustic analysis are mediated through the user’s bodily gestures. This approach aims to bridge the gap between abstract musical concepts and tangible, physical interaction for learners and creators alike. While this project’s research objectives are educational, its use of gestural interaction to create an accessible environment translates directly into my own idea of an accessible interface for my project. A main difference in the technical construction is that the MediaPipe Hand Landmarker runs directly inside TouchDesigner. I am not sure whether this involves additional latency, or whether it could also be an approach for my own project – especially if I plan to add some visualisation of the gestures and processed audio.
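To make the gesture-to-audio idea concrete, here is a minimal sketch of the kind of mapping such a system performs: MediaPipe’s Hand Landmarker reports landmark positions normalized to 0.0–1.0, and one coordinate can drive an audio parameter. The function name, parameter ranges, and mapping are my own assumptions for illustration, not Concrete Motion’s actual code.

```python
# Hypothetical sketch: map a normalized vertical hand position (as reported
# by a hand-tracking system, 0.0 = top of frame, 1.0 = bottom) to a filter
# cutoff in Hz. The ranges and names here are invented for illustration.

def landmark_to_cutoff(y: float, lo_hz: float = 200.0, hi_hz: float = 8000.0) -> float:
    """Higher hand position -> brighter sound (higher cutoff)."""
    y = min(max(y, 0.0), 1.0)           # clamp to the normalized range
    return hi_hz - y * (hi_hz - lo_hz)  # invert: top of frame = high cutoff

print(landmark_to_cutoff(0.0))  # hand at top of frame -> 8000.0
print(landmark_to_cutoff(1.0))  # hand at bottom of frame -> 200.0
```

In a real patch, the per-frame landmark values would stream from TouchDesigner into a mapping like this, and the result would be sent on to Max/MSP.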

Liminal

Liminal is an interactive installation that moves beyond traditional control-based models to explore a “liminal” space where agency is shared between humans and AI. In this environment, human gestures do not function as direct commands but instead serve as contextual information that influences the system’s evolving behavior over time. The architecture uses real-time computer vision and a Python-based decision layer to ensure that audiovisual changes emerge through gradual modulation and probabilistic weighting rather than immediate cause-and-effect. By distributing authorship across both the participant and the machine, the work transforms interaction into a sustained, meditative dialogue shaped by accumulated history and continuous negotiation. I visited the performance and workshop because this, too, takes a similar approach of moving from gestural interaction to audio creation. Compared to other projects on this theme, what stood out to me was the bidirectional interaction and decision-making between the human input and the model’s system. Maybe this could also be read as a hint at how future work will, if not already, operate in many daily activities. On a technical level, the computer vision was likewise implemented in TouchDesigner, as in the project mentioned earlier.
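To illustrate the difference from direct control, here is a toy sketch of what “gradual modulation and probabilistic weighting” could mean in code: a gesture suggests a target value, but the system only sometimes accepts the suggestion, and even then drifts toward it slowly. This is my own guess at the general idea, not Liminal’s actual decision layer.

```python
import random

# Toy sketch of a probabilistic decision layer (invented for illustration):
# the gesture proposes a target, the system accepts it only with some
# probability, and accepted proposals move the state gradually, not instantly.

def step(state: float, gesture_target: float,
         accept_prob: float = 0.3, rate: float = 0.05,
         rng: random.Random = random) -> float:
    """Advance the system state by one frame."""
    if rng.random() < accept_prob:                # probabilistic weighting
        state += rate * (gesture_target - state)  # gradual modulation
    return state

rng = random.Random(42)
state = 0.0
for _ in range(200):                  # 200 frames of a sustained gesture at 1.0
    state = step(state, 1.0, rng=rng)
print(round(state, 3))                # drifts toward 1.0, but never jumps there
```

The key property is that no single gesture produces an immediate, deterministic change; behavior emerges from accumulated history, which matches the shared-agency philosophy described above.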

Both projects incorporate most of the technical elements I strive to achieve in my own project, which is why it was interesting to listen to the approaches and talk about the ideas involved. While I will head towards a different end goal with my product, they are still good examples for comparing technical feasibility and workflow. All in all, our days at the IRCAM Forum brought me exciting insights and takeaways from diverse fields of audio-focused research.

IRCAM Excursion Pt. 1

Last week we attended the IRCAM Forum in Paris as part of our semester excursion. Overall it was an interesting couple of days, where we had the chance to experience a lot of the latest research and techniques in various fields of audio and sound. Within this post I will try to summarise my main takeaways on some chosen talks, workshops and performances I have attended.

Partiels & ASAP Plugins

I visited a talk and workshop by Pierre Guillot, which presented updates and news on the tools Partiels and ASAP. Partiels is a software dedicated to analysing audio signals and retrieving useful data for signal processing and sound design applications. One interesting new development is a direct Python integration for accessing analysed data. The ASAP tools are more of a direct creative solution for manipulating audio in various applications. The three major tools mentioned were the Psycho Filter, which applies spectral filters directly to a source; Pitches Brew, an advanced pitch and formant manipulation via interactive frequency curve editing; and Stretch Life, a time manipulation tool for compressing and stretching sound dynamically. Further notable mentions were the Spectral Morphing and Spectral Crossing tools, which combine and ‘morph’ two audio sources in the spectral domain. Altogether these seem to be interesting tools, cleverly designed and, I would imagine, quite accessible for most users.
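I have not tried the new Python integration myself, but conceptually, working with analysed data from a tool like Partiels could look like this sketch, which parses a plain-text export of a pitch track. The file layout and function name are assumptions for illustration, not Partiels’ actual export format or API.

```python
import csv
import io

# Hypothetical sketch: read exported analysis data (time/frequency pairs of
# a pitch track) from CSV text. Column layout is assumed, not Partiels' own.

def load_pitch_track(text: str) -> list[tuple[float, float]]:
    """Parse 'time,frequency' rows into (seconds, Hz) tuples."""
    reader = csv.reader(io.StringIO(text))
    return [(float(t), float(f)) for t, f in reader]

sample = "0.00,440.0\n0.01,441.2\n0.02,439.8\n"
track = load_pitch_track(sample)
mean_hz = sum(f for _, f in track) / len(track)
print(len(track), round(mean_hz, 1))  # 3 analysis points, mean ≈ 440.3 Hz
```

Once the data is in Python, it can feed further signal-processing or sound-design steps directly, which is presumably the appeal of the integration.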

GRM Tools – Atelier

Another interesting workshop was the presentation of the GRM Tools Atelier software. It is a sound processing and synthesis environment for real-time and multichannel productions, in both live and studio applications. I really liked the modular approach and the quick, intuitive randomisation capabilities, which allow varying multiple parameters at once very quickly. This can be an interesting choice for sound artists who want to work with a single standalone piece of software and prefer more graphically intuitive controls than, for example, Pure Data or Max/MSP. As I already own synthesizer software with a similar modular system, albeit without the multichannel capabilities, I will stick to that for now, though.

VASE

I also experienced a performance of VASE by composer Yuval Seeberger. The device is a motorized music-box installation that performs a 12-minute, semi-algorithmic composition by integrating a physical punched paper score with advanced digital processing. The system utilizes a specialized “ensemble” consisting of an acoustic-mechanical music-box, an analog motor, Max/MSP synthesis, and RAVE neural audio models to create a rich, layered sonic environment. While the structure follows a formal organization, the irregular communication between the computer and the mechanical hardware ensures that each loop cycle contains subtle, unpredictable variations. Through the use of piezo and magnetic rail-coil pickups, the piece effectively bridges the gap between tactile mechanical movement and real-time AI-driven sound generation. I was quite fascinated by the various soundscapes it was able to produce, and also by how the composer’s spontaneous interaction influenced the installation.

[DesRes 2 @BirgitBachler] Entry 01: Employee and Consumer Protection

New semester – new me. More or less, at least. While I was extremely confident about the research topic I chose last semester – “Creating User-Centered Strategies that align with Business and IT Goals in an innovative Agile Environment. Use Case: Self Checkout Terminals in Supermarkets” – the tables have turned.

Rewind – initial idea

Reason No. 1: I initially chose this topic because ever since my internship at Bosch last year, I’ve realized how important it is to keep all three of feasibility, viability and desirability in mind.

Reason No. 2: The specific use case of self checkout terminals in supermarkets seemed like a reasonable extension of the topic, since it aligns with my tasks at my student job. On top of that, my company would’ve been interested in a collaboration for the thesis.

Long story short: I have come to realize that these matters have already been researched through and through.

Motivation – Focusing on employees and/or consumer protection

Instead, I would like to contribute something that’s actually innovative and has a social and ethical impact on top. Over the past few years, topics such as the protection of employees and consumer protection have been on my mind frequently.

Possible idea + 1st Prototype

About two years ago I had the opportunity to participate in an interdisciplinary project called Legal Design Sprint, endorsed by the Chamber of Labour Vienna. My team’s project (topic “Mobility”) was even given an award by a panel of experts.


Topic Mobility – How do people commute to work?

Today, most people still commute by car. To guarantee a sustainable future, this must change. There is a set of alternatives, such as riding a bike to work or using public transport. Nevertheless, these options are limited and raise a few pain points:

  • How much responsibility should employers carry?
  • Which obstacles do people in the countryside face?
  • How do we convince car drivers to let go of their habits?
  • Are possible alternatives tangible?

Our Solution

With our concept “Business Mobility Strategy”, employers are made responsible for the commuting options offered to their employees. To achieve this, a dialog between both sides – employers and employees – needs to happen.

  1. Interface: The service will be implemented in the “USP – Unternehmensserviceportal” (= the major online service for Austrian businesses).
  2. Data collection: Employers must enter data such as the distance to the nearest bus station or the number of parking spaces available for employees.
  3. Less Red Tape: Business owners are constantly exposed to endless and redundant paperwork. Usability tests will ensure a seamless and viable end-to-end experience.
  4. Data Evaluation: Once all the information is submitted, the business’s approach and actions towards sustainable mobility will be graded. This assessment scheme will be defined by experts to guarantee transparency.
  5. Recommended Actions: The evaluation also provides businesses with answers on how to improve in the area of sustainable mobility. Employers can set these recommendations as goals to improve their rating and help their employees with commuting to work.
  6. Spot Checks: An independent institution will make sure the businesses have submitted their mobility data correctly.
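As a thought experiment, step 4 (data evaluation) could be sketched in code. The criteria, weights, and thresholds below are entirely invented for illustration; in the actual concept, the assessment scheme would be defined by experts.

```python
# Illustrative sketch of how submitted mobility data might be graded.
# All criteria and thresholds here are invented, not the real scheme.

def mobility_grade(distance_to_transit_m: float, parking_spaces: int,
                   employees: int) -> str:
    score = 0
    if distance_to_transit_m <= 300:      # transit within short walking distance
        score += 2
    elif distance_to_transit_m <= 1000:   # still reachable on foot
        score += 1
    if parking_spaces < employees / 2:    # limited parking discourages driving
        score += 1
    return {3: "A", 2: "B", 1: "C"}.get(score, "D")

print(mobility_grade(250, 20, 100))    # close transit + little parking -> "A"
print(mobility_grade(2000, 120, 100))  # far transit + ample parking -> "D"
```

The point of such a transparent rule set is that businesses can see exactly which recommended actions (step 5) would raise their grade.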

Alternative options:

  • dark patterns in fast fashion e-commerce (e.g. SHEIN)
  • user-centered interaction of the “Arbeitnehmer:innenveranlagung” (the Austrian employee tax assessment)


Reference

[1] G. Kovacic, C. Andronic, and S. Kirchmayr-Novak, ÖV-Erreichbarkeit großer Arbeitsplatzstandorte in Österreich [Public-transport accessibility of large workplace locations in Austria], 2022.

Blog Post 5: Product Idea

A product or business idea is a structured proposal that identifies a specific problem, outlines a solution, and defines how value is created for users and stakeholders. In design-driven innovation, such ideas are grounded in real user needs and aim to create both functional and experiential improvements.

Understanding the underlying idea of a product is the first and most important step in its development. For the idea of a guiding system at German train stations, the exact parameters for the final product are not yet defined. But a closer look at the product idea is still a valuable step towards more clarity and understanding.

The core problem lies in the current experience of train platforms, which are often perceived as stressful, unorganized, and confusing environments. Boarding and exiting trains can be physically demanding, especially during peak times or for individuals with limited mobility. This creates friction in the interaction between passengers, trains, and the platform itself, ultimately reducing the overall quality of the travel experience.

Addressing this issue matters because improving the usability and comfort of train travel can make it a more attractive mode of transportation. A better experience could encourage more people to choose trains over cars, contributing to reduced traffic congestion and lower environmental impact.

The proposed solution is a physical guiding system integrated directly into train platforms. While still in development, the current idea is the use of light-based elements, such as illuminated pathways, signals, or dynamic indicators, to guide passengers intuitively. This system would enhance orientation, communicate real-time information, and support smoother boarding and alighting processes without adding visual clutter.

The target audience includes all users of the train system, with a primary focus on passengers. At the same time, organizations like Deutsche Bahn act as key stakeholders and customers, investing in and maintaining the system. The expected impact includes improved navigation, more efficient passenger flow, and a more structured and user-friendly platform environment.

From a business perspective, the model could involve an initial infrastructure investment by railway operators, followed by ongoing maintenance.

Ultimately, the idea combines user-centered design with systemic impact, aiming to transform train platforms into more intuitive, accessible, and enjoyable spaces.

Blogpost 4: The Value Proposition Canvas

The Value Proposition Canvas is a strategic tool used in design and innovation to ensure that a product or service aligns closely with user needs. It consists of two main components: the Customer Profile and the Value Map. The Customer Profile focuses on understanding the user by identifying their jobs (what they want to achieve), pains (challenges or frustrations), and gains (desired outcomes or benefits). The Value Map, on the other hand, outlines how a product or service responds to these needs through products and services, pain relievers, and gain creators. Together, these tools help designers create solutions that are both relevant and impactful. (Strategyzer, 2026)
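Since the canvas is essentially a pair of structured checklists, its two components can be sketched as simple data structures. The field contents below are placeholders following the Strategyzer structure described above, not the canvases produced for this project.

```python
from dataclasses import dataclass, field

# Minimal sketch of the canvas's two components as data structures.
# Example entries are placeholders, not the project's actual canvases.

@dataclass
class CustomerProfile:
    jobs: list[str] = field(default_factory=list)   # what the user wants to achieve
    pains: list[str] = field(default_factory=list)  # challenges and frustrations
    gains: list[str] = field(default_factory=list)  # desired outcomes and benefits

@dataclass
class ValueMap:
    products_and_services: list[str] = field(default_factory=list)
    pain_relievers: list[str] = field(default_factory=list)
    gain_creators: list[str] = field(default_factory=list)

passenger = CustomerProfile(jobs=["arrive on time"],
                            pains=["confusing platforms"],
                            gains=["stress-free boarding"])
print(len(passenger.jobs), len(passenger.pains))
```

Framing the canvas this way makes the fit check explicit: every pain should map to a pain reliever, and every gain to a gain creator.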

To get a better understanding of the anticipated product and its purpose for the user, two canvases were produced for two different players. The first one focuses on the train passenger as an end user. Their Customer Profile emphasizes practical goals such as arriving on time, navigating platforms easily, and boarding trains without stress. Gains include comfort, clarity, and reliability, while pains involve confusion, overcrowding, physical strain, and lack of accessible information. The Value Map responds with a physical support and guidance system, clearer information structures, and inclusive design features to accommodate diverse user needs.

The second example represents the Deutsche Bahn (DB) as a customer. Here, the Customer Profile highlights organizational goals such as transporting passengers efficiently from A to B, ensuring smooth system operations, and maintaining profitability. The identified gains include improved punctuality, enhanced public image, and increased customer satisfaction. However, DB also faces significant pains, such as technical failures, delays, and negative public perception. The corresponding Value Map proposes solutions like improved guidance systems, better information displays, and more structured platforms, all aimed at reducing inefficiencies and enhancing the overall service experience.

Overall, these two profiles demonstrate how the Value Proposition Canvas can bridge organizational objectives and user experiences, enabling more targeted and user-centered design solutions.

PROTOTYPING – Design & Research II (Birgit) – 1/6

Harmonix Series: Accessible Digital Musical Instruments for Mindfulness and Creativity


The “Harmonix Series” by Wing Hei Cheryl Hui and Patrick Hartono represents a visionary bridge between human-computer interaction and therapeutic art, moving beyond the technical novelty of Digital Musical Instruments to address a profound need for the democratization of creativity. What I find most compelling about this work is its commitment to radical accessibility; by shifting the focus from mastering a complex tool to simply experiencing a soundscape, the authors empower users of all physical and cognitive abilities to become creators. This is further elevated by the intentional integration of mindfulness into the interface design, which transforms the act of music making into a meditative process for emotional regulation rather than just a performance. The synergy between robust technical implementation and a sophisticated aesthetic sensibility is palpable, resulting in an instrument that doesn’t just function, but truly resonates on a human level. Ultimately, this project serves as a vital reminder that the future of music technology should prioritize deep human impact and digital wellbeing, treating the user as a whole person seeking connection and calm.

Tiny Touch Instruments: Composing for Collaborative Performance – NIME Paper Review

I chose this article because the title immediately caught my attention. I was curious what “Tiny Touch Instruments” actually are and what kinds of decisions and thoughts go into programming such instruments.

The author, Rebecca Abraham, is a researcher and composer working in the area of digital and collaborative music-making. In the paper, Abraham describes a project centered on Tiny Touch Instruments (TTIs), a set of mobile, web-based musical instruments that are played through touch gestures on a smartphone or similar device. The project is situated within a broader context of mobile music ensembles, such as the Stanford Mobile Orchestra, which explore how mobile technology can support collective music-making.

As part of this research, Abraham composed two pieces titled Skating and Skipping. Both works are performed using the TTIs that the author programmed. The instruments run on a webpage and are controlled using gestures such as tapping, swiping, or holding a finger on the screen. These interactions generate sound while also producing visual feedback, allowing performers to see and hear the effects of their gestures. One important aspect of the project is accessibility: the pieces are designed so that they can be performed without prior rehearsal and even by people without formal musical training.

The two compositions use different approaches to notation and performance. In Skating, performers follow a graphic score that includes visual shapes and brief text instructions. Participants draw certain gestures on their screens or interact with others in the group, for example by imitating nearby performers or responding to sounds they hear in the room. The focus of the piece is less on precise melodies and more on shared sonic textures that emerge through group interaction.

Skipping uses a different format. Instead of a static graphic score, performers follow an animated score projected on a large screen. This score combines graphics, animations, and text instructions that guide the performers’ actions over time—for example, indicating where on the phone screen to interact or encouraging them to increase the frequency of tapping. The piece gradually shifts from simple exploration of the instruments toward more intentional interaction between performers.

Through observations, interviews, and surveys with participants across several performances, Abraham analyzed how people experienced these pieces. One key finding was that performing without rehearsal encouraged exploration and experimentation. At the same time, performers gradually became more comfortable with the instruments as the piece progressed. Another important result concerns notation: a multimodal approach that combines graphics, animation, and text proved particularly effective, as the visual elements helped performers understand the relationship between their gestures and the sounds produced by the instruments.

An especially interesting observation was how the performances changed participants’ perception of their smartphones. During the performance, the phone was no longer experienced primarily as a device for communication or distraction, but rather as a creative musical tool that enabled collective expression.

These ideas resonate strongly with my own design interests. In my research, I am exploring the concept of “ear candy” and interactive sound design. Inspired by this article, I am considering developing my own small touch-based digital instruments that people could access online. My goal would be to design them in a way that is not only playful and engaging, but also educational, allowing users to learn something about sound or interaction through experimentation.

The Living Looper – NIME Paper Review

Evolving the Living Looper: Artistic Research, Online Learning, and Tentacle Pendula

The Living Looper: Rethinking the Musical Loop as a Machine Action-Perception Loop

I came across the concept of ‘living loops’ while going through the NIME papers. Naturally intrigued, I wanted to learn more about this interesting topic. It is a special music interaction project that can yield vastly different outputs, mainly due to its use of generative machine learning models. The implementation centers on the RAVE encoder created at IRCAM, which maps audio data into a compressed latent space; prediction is handled by an autoregressive model trained with partial least squares regression. The training data primarily captures the timbre of the sound rather than the overall musical structure.
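The core idea, as I understand it, is that each loop keeps an autoregressive predictor over a low-dimensional latent vector, so the next latent frame is predicted from recent ones instead of being replayed verbatim. The toy sketch below hand-rolls a fixed AR(2) step per latent dimension to show the mechanics; the actual system learns its coefficients (via PLS regression) and decodes the latents back to audio with RAVE.

```python
# Toy sketch: free-running autoregressive prediction in a latent space.
# Fixed AR(2) coefficients are invented for illustration; the Living Looper
# learns its predictor and decodes latents to audio, which this omits.

def predict_next(history: list[list[float]],
                 a1: float = 1.5, a2: float = -0.6) -> list[float]:
    """Predict the next latent frame from the two most recent frames."""
    prev, last = history[-2], history[-1]
    return [a1 * l + a2 * p for l, p in zip(last, prev)]

frames = [[0.0, 0.1], [0.2, 0.1]]  # two seed latent frames, 2 dims each
for _ in range(3):                 # free-run the predictor for 3 more frames
    frames.append(predict_next(frames))
print([round(v, 3) for v in frames[-1]])
```

Because the loop is generated by prediction rather than playback, it can continue, vary, and respond to new input, which is what makes the loops “living”.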

It is a unique approach to looping, as the artist has the ability to control the loops and train them for real-time audio synthesis, which helps create more unique textures and tones. Though the instruments used with the interface were primarily stringed and wind instruments like the violin and saxophone, I wonder how the results would vary if percussion were used. Was it possibly avoided due to the fast attack and low sustain times? A thought to ponder, for sure. I am also curious to learn how turntablists and DJs could use this tool in their performances.

The 2025 paper introduces a user-centric improvement to the previous model, which lacked a visual interface. The upgrade features a visualisation for each loop: a unique ‘tentacle pendulum’ that reacts when the loop is played. The design is intended to represent the RAVE latent dimensions in order of importance, where the value of each latent dimension determines the angular displacement and hue of each segment. The introduction of incremental algorithms, which distribute the computation across audio frames, potentially allows more computational resources to be brought to bear on each loop.
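To make the visualisation idea tangible, here is a small sketch of the mapping described above: each latent dimension’s value drives the angular displacement and hue of one tentacle segment. The squashing function and scaling constants are my own assumptions, not the paper’s actual values.

```python
import math

# Sketch of a latent-to-visual mapping (ranges and constants are invented):
# one latent value -> (angular displacement, hue) for one tentacle segment.

def segment_params(latent_value: float,
                   max_angle_deg: float = 60.0) -> tuple[float, float]:
    """Map a latent value (roughly -1..1) to (angle in degrees, hue in 0..1)."""
    squashed = math.tanh(latent_value)  # keep extreme values bounded
    angle = squashed * max_angle_deg    # displacement from the rest position
    hue = (squashed + 1.0) / 2.0        # -1..1 -> 0..1 around the hue wheel
    return angle, hue

angle, hue = segment_params(0.0)
print(angle, hue)  # 0.0 0.5 – a zero latent value leaves the segment at rest
```

Applying such a mapping per dimension, ordered by importance along the tentacle, would let a performer read a loop’s latent state at a glance.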

One of the other things I really like about this project is its compatibility and open-source accessibility. Any model following the RAVE API can now be packaged into a Living Looper instance using a Python CLI available on PyPI. Since these instances are now themselves nn~ models, the core functions can be loaded directly into Pure Data and Max, and the new graphical version can be easily installed as a SuperCollider extension, making it very accessible for most sound and music enthusiasts.

One issue I assume they would face is latency from the signal processing, which makes the timbre of the sound a crucial indicator for the looper. The parallel processing of effects like delays and reverb could prove quite a challenge as well, and I would be interested to know how they might overcome it.

Overall, I thought it was a super interesting project and especially useful for artists and musicians to experiment, compose and express themselves. Excited to see what the next follow-up to this project is.