Draw Calls: The Invisible Bottleneck

Building performant 3D experiences on the web requires understanding how browsers, GPUs, and JavaScript interact. Even with optimized models and textures, poor draw call management causes stuttering.

What Is a Draw Call and Why Is It Important?

A draw call is a single command from the CPU to the GPU, essentially an instruction saying "draw this". Each visible mesh typically generates at least one draw call. Before issuing one, the CPU prepares the render state: binding vertex buffers, setting shaders, configuring textures and managing memory. GPUs themselves are extremely efficient and can rasterize the triangles that make up a model almost instantly, but the CPU often cannot prepare commands fast enough. Every draw call carries communication overhead, a fixed cost paid just to make the CPU-to-GPU handoff happen. With too many draw calls, the CPU is overwhelmed by this preparation work while the GPU sits idle, waiting for instructions.

This is why a model's polygon count matters less than its draw call count. A single mesh with 200,000 triangles can render smoothly, while 200 small meshes with 1,000 triangles each can overload the CPU and cause stuttering, purely because of the per-call overhead. Three.js projects usually maintain 60 fps at around 100 draw calls per frame; at 500+ calls, even powerful hardware starts to struggle.
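The trade-off above can be sketched with a toy cost model in plain JavaScript. The per-call overhead figure is an illustrative assumption, not a measurement (in a real Three.js app, the actual per-frame count is exposed as renderer.info.render.calls):

```javascript
// Toy model: per-frame CPU cost grows with the number of draw calls,
// regardless of how many triangles each call submits.
const OVERHEAD_MS_PER_CALL = 0.04; // assumed figure, purely illustrative

function cpuTimeMs(drawCalls) {
  return drawCalls * OVERHEAD_MS_PER_CALL;
}

// Same 200,000 triangles in total, very different CPU cost:
const oneBigMesh = cpuTimeMs(1);    // one 200k-triangle mesh, one call
const manySmall = cpuTimeMs(200);   // 200 meshes of 1,000 triangles each

console.log(oneBigMesh, manySmall, cpuTimeMs(500));
// With these assumed numbers, 500 calls already burn a large slice
// of the ~16.7 ms frame budget required for 60 fps.
```

The point of the sketch is only that the cost scales with calls, not triangles; the real overhead per call depends on driver, browser and how much state changes between calls.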

By focusing on draw calls, web developers can fix many performance issues that are not obvious from looking at mesh density or material count alone. Keeping the call count low through the techniques below ensures that the CPU can keep up with the GPU, resulting in smoother interaction and a more responsive 3D experience.

How to Reduce Draw Calls?

Merging

One of the most effective optimizations is merging static geometry. Objects in the environment that never need to move, for example the pieces of a building such as floor tiles, wall segments and furniture, can often be combined into a single larger mesh. This simple step turns many small draw calls into one large one. Even though the total amount of geometry has not changed, the scene runs much smoother because the communication overhead is paid once for the combined mesh instead of once per individual piece.

The main drawback of this method is that the parts need to be fully static, so it is not suitable for pieces that move individually or can be interacted with. After merging, only the combined mesh as a whole can be transformed, not its smaller parts.

Instanced Mesh

Another powerful tool is instancing. If a scene features, say, 200 identical small meshes, these can be instanced instead of duplicated. The CPU then sends a single draw call, and the GPU handles the positioning of each copy. This technique is ideal for repeated objects such as trees, chairs, street lamps and bolts that share the same mesh and material but appear at different positions and rotations. A real estate visualization demo reduced draw calls from 9,000 to 300 by converting chairs and props to instances, improving performance from 20 to 60 frames per second.

Batched Mesh

Draw calls are not only about meshes; materials and textures matter too. Every time the renderer needs to switch materials, it disrupts batching and usually triggers a new draw call. Sharing materials across meshes and using texture atlases where possible helps keep the call count low. For example, several props that fit on a single atlas can be drawn together with the same material, with their UVs selecting the appropriate region of the texture for each object. This reduces both material state changes and draw call counts, especially in engines like Three.js that can batch geometry sharing a material, combining multiple different geometries with a single material into a single draw call.

Visibility

Another often forgotten reduction method sits on the visibility side, with techniques like frustum culling. Most engines automatically skip objects outside the camera's view frustum (the area currently visible to the camera), but manually culling or grouping specific objects can reduce calls further, for example by hiding entire sections of a scene when the user is in a different area. This is especially useful in large scenes with different rooms or zones that the user cannot see all at once.

SOURCES

https://www.utsubo.com/blog/threejs-best-practices-100-tips
https://velasquezdaniel.com/blog/rendering-100k-spheres-instantianing-and-draw-calls/
https://stackoverflow.com/questions/41783047/how-many-webgl-draw-calls-does-three-js-make-for-a-given-number-of-geometries-ma
https://discourse.threejs.org/t/three-js-instancing-how-does-it-work/32664

Finding our way without rushing 10/10

Review and Outlook

At the beginning of this blog series, I had only set myself a thematic frame: an investigation of hybrid animation in the grey zone between a hand-drawn look and modern computer graphics. The precise direction of the research, however, only emerged gradually.

Over the course of the following posts, the various puzzle pieces came together into a complete picture. Looking at hybrid animation as a technical interplay of 2D and 3D, the observable trend away from realism towards a "handmade" look, and the analysis of non-photorealistic rendering and cel shading made one thing clear: hybrid animation is more than a technical challenge; it is a deep field of perceptual psychology.

A decisive turning point in the research was the insight that questions of abstraction, stylization and the uncanny valley phenomenon are inseparably linked to emotional believability. Between iconic reduction and photographic accuracy there is no aesthetically neutral zone. Rather, it is a balance that decides whether a character is perceived as alive and approachable or as unsettling. Hybrid animation operates precisely in this zone in order to harness the advantages of both worlds.

The synthesis of these thoughts finally led to the "formula of immersion". It makes clear that immersion does not rest on a single foundation but emerges from several building blocks. From this, the central working hypothesis for the master's thesis took shape:

Story (core) + Animation (liveliness) + Stylization (amplifier) + Sound (catalyst) = Immersion.

Within this equation, stylization does not act as the primary source of emotion, but as a decisive cognitive amplifier that significantly controls the intensity and clarity of the feelings conveyed to the audience.

In the coming semester, this equation will be deepened and made more concrete. Planned are investigations into how different degrees of stylization influence the perception of emotions, and how iconic versus semi-realistic characters trigger different empathy responses. Using selected hybrid films, the aim is to show how gaze guidance, visual composition and sound design interlock.

The Formula of Immersion

In the world of hybrid animation, it is easy to get lost in technical details. We discuss shaders, frame rates and line art while often overlooking the most important question: why do we feel anything at all?

At the beginning of my research, I was convinced that the degree of stylization was the direct key to empathy, but the reality is more complex. A visually perfect character leaves us cold if the narrative substance is missing. From this insight and the analysis of media-psychological studies, I derived a formula that forms the framework of my master's thesis:

Story (core) + Animation (liveliness) + Stylization (amplifier) + Sound = Immersion

1. The Core: Story as the Emotional Foundation

Without a strong narrative basis, every animation remains a mere technology demonstration. Research shows that empathy does not arise from appearance, but from goals, conflicts and vulnerability.

Empirical studies support this: using eye tracking, Lee et al. showed that test subjects feel the same level of empathy for an identical story, regardless of whether the character was drawn iconically or realistically. The story is the "core" that determines which emotions we are supposed to feel. It primes the brain to interpret visual stimuli as meaningful.

2. Animation: The Principle of Liveliness

Once the foundation is in place, animation breathes life into the idea. This is not about realism, but about psychological plausibility. In hybrid animation, we use 2D principles such as "squash and stretch" to make emotions physically tangible.

In media-psychological terms, this addresses our mirror neurons. Studies on 3D characters show that it is not the style but the quality of movement that determines believability. How a character hesitates or flinches translates the narrative core.

3. Stylization: The Cognitive Amplifier

This is where my central research question comes in: if the story is the engine, then stylization is the amplifier. The degree of abstraction acts as a filter.

Scientifically, this can be explained by lower gamma activity in the brain: less visual noise means more focus on the essence, such as the emotional expression of the eyes. Hybridization offers the golden middle ground here.

4. Sound: The Emotional Catalyst

The equation would be incomplete without the auditory layer. Studies show that sound design can increase the perceived immersion of animations by a factor of 4.4. Sound couples visual information to bodily reactions such as goosebumps or a racing heart and "ignites" the narrative emotion that is already present.

Sources:

Tan, E. S. (1996). Emotion and the Structure of Narrative Film: Film as an Emotion Machine. Routledge.

Lee, Y. I., Choi, Y., & Jeong, J. (2017). Character drawing style in cartoons on empathy induction: an eye-tracking and EEG study. PeerJ, 5, e3988.

Kock, M., & Louven, C. (2018). The power of sound design in a moving picture: An empirical study with emoTouch for iPad. Empirical Musicology Review, 13(3–4), 132–148.

Slowness in Navigation: How Maps Shape the Way We Move 9/10

Disadvantages:
➖ As a rule, it supports a functional rhythm focused on daily travel: fast, straightforward, problem-solving.

Google Maps
Yandex Maps
Mapy.cz

Sound Design and Scoring as Emotional Architecture

In film, sound is often perceived as a supportive layer to the image. Yet in practice, sound design and music are central to how a film feels alive. Long before viewers consciously interpret narrative or visual composition, they respond to rhythm, texture, tension and release created through sound. Film sound does not merely accompany images; it animates them, gives them weight and shapes how time, space and emotion are perceived.

Film sound operates on multiple levels simultaneously. Dialogue conveys explicit information, sound design establishes environment and physical presence, and music shapes emotional interpretation. What makes film feel alive is not the presence of these elements individually, but their precise coordination. Subtle shifts in texture, timing and dynamics can transform a static image into a living moment. A nearly imperceptible low-frequency drone can create unease, while a slight delay between image and sound can suggest disorientation or emotional distance.

The book Creative Strategies in Film Scoring published by Berklee Press emphasizes that effective film music is not about illustrating what is already visible, but about revealing what is unseen. Music can express internal states, foreshadow events or connect scenes across time and space. Rather than reacting directly to visual action, contemporary film scoring often works against the image, creating contrast or tension. This approach prevents redundancy and allows sound to function as an interpretive layer rather than a decorative one.

This philosophy is particularly evident in the work of Hans Zimmer, whose approach to film scoring has reshaped contemporary sound aesthetics. Zimmer frequently blurs the boundary between music and sound design, integrating synthesized textures, processed orchestral elements and rhythmic pulses into a single sonic system. His scores are often built around evolving textures rather than traditional melodic themes, allowing sound to function as atmosphere, momentum and emotional pressure at once.

In films such as Dunkirk or Blade Runner 2049, sound becomes inseparable from the visual experience. Time-based structures like ticking clocks, accelerating pulses or continuous drones create a bodily sense of urgency. These sonic elements do not simply underscore action; they condition how the viewer’s body responds to the image. Breathing, heart rate and attention are subtly guided by sound, creating a visceral sense of immersion.

What is especially relevant for design-oriented research is the way film sound operates as a system rather than a sequence of isolated cues. Sound designers and composers often work with modular elements that can expand, contract or transform depending on narrative context. This systemic thinking parallels approaches in audiovisual design and live visuals, where parameters are defined and relationships are established rather than fixed outcomes produced. Sound becomes adaptive, responsive and temporally fluid.

Another key aspect discussed in film sound theory is the idea of “invisible work.” When sound design functions well, it often goes unnoticed. Silence, restraint and reduction play a crucial role in making moments feel alive. Removing sound can heighten attention, while minimal sonic gestures can carry more emotional weight than complex compositions. This sensitivity to absence and space reinforces the idea that liveliness does not depend on constant stimulation, but on carefully designed contrast.

Examining film sound production highlights how deeply sound shapes perception and meaning. It demonstrates that sound is not an accessory to image, but a structuring force that animates narrative, space and emotion. For audiovisual design beyond cinema, this perspective suggests that making visuals feel alive may depend less on visual complexity and more on how sound and image are choreographed as a unified emotional architecture.

Sources:

Berklee Press. (2016). Creative strategies in film scoring. Berklee College of Music.

Karlin, F., & Wright, R. (2004). On the track: A guide to contemporary film scoring (2nd ed.). Routledge.

Lehman, F. (2018). Hollywood harmony: Musical wonder and the sound of cinema. Oxford University Press.

Designing Against Synchrony

Audiovisual systems are often built on rules. In many live visuals, animations follow amplitude, brightness responds to frequency, and rhythm maps directly to motion. These mappings feel intuitive and readable, especially in environments such as clubs or concerts where immediacy is crucial. Rule-based systems allow designers to translate sound into visuals efficiently, creating coherence and predictability in complex sensory environments.

However, this reliance on synchrony also introduces a limitation. When audiovisual systems consistently reinforce what is already present in the sound, they risk becoming illustrative rather than expressive. Sound is no longer interpreted, but mirrored. Over time, these conventions solidify into expectations: a bass drop must explode visually, intensity must equal brightness, silence must equal darkness. At this point, the system no longer produces meaning, but confirms it.

This is where breaking the system becomes not a failure, but a design strategy.

Theorists of film sound have long argued that sound does not need to align with image to be effective. Michel Chion describes how sound can function as an independent narrative force, shaping perception even when it contradicts what is seen. Rather than redundancy, disjunction creates tension, ambiguity and emotional depth. This principle extends beyond cinema into audiovisual performance and live visual systems.

A clear example of this can be found in the film Dunkirk (2017), where sound and image deliberately resist synchronization. The persistent ticking motif in the soundtrack does not correspond to visible clocks or actions, yet it dominates the viewer’s bodily perception of time. Moments of visual stillness are accompanied by intense sonic pressure, while moments of action are sometimes stripped of musical emphasis. The result is not confusion, but heightened immersion. Sound does not explain the image; it destabilizes it. The film feels alive precisely because its audiovisual system refuses to resolve into a single, coherent rhythm.

In live audiovisual contexts, similar effects emerge when systems are designed to allow contradiction. Calm visuals beneath aggressive sound, delayed reactions, or moments where visuals remain unchanged despite sonic escalation all interrupt expectation. In these moments, sound effectively “lies.” It suggests one emotional direction while the image proposes another. Rather than canceling each other out, these competing signals produce a third layer of meaning that the audience must actively negotiate.

This negotiation is crucial. When systems behave exactly as expected, audiences can disengage perceptually while remaining physically present. When a system breaks its own rules, attention is reactivated. The audience becomes aware of the audiovisual relationship as something constructed and contingent. This awareness does not diminish immersion; it often deepens it by introducing tension and unpredictability.

Importantly, breaking rules only works when rules exist in the first place. A system provides orientation; deviation creates emphasis. Silence has power only when sound is anticipated. Stillness is expressive only when movement has been established. From this perspective, systems and their disruption are not opposites, but interdependent design elements.

This leads to a stronger design claim: audiovisual systems should not aim for perfect synchronization, but for expressive flexibility. Rather than asking how accurately visuals can follow sound, designers might ask when visuals should resist, lag behind or remain indifferent to sonic cues. Such resistance introduces friction, and friction is often what makes an experience feel alive.

For practice-based audiovisual design, this reframes error and failure. A visual response that appears “incorrect” within a system may generate more meaning than a technically flawless reaction. Especially in live contexts, moments of breakdown can signal presence, risk and authorship. They remind the audience that the system is being performed, not executed.

Ultimately, liveliness in audiovisual work does not emerge from control alone. It emerges in the unstable space between structure and rupture. Designing systems that can be bent, contradicted or temporarily broken allows audiovisual experiences to move beyond automation and toward expression. In this space, sound and image do not simply align.

THEY ARGUE.

THEY HESITATE.

THEY COLLIDE.

And it is precisely there that they begin to feel alive.

Sources:

Chion, M. (1994). Audio-vision: Sound on screen. Columbia University Press.

Chion, M. (2009). Film, a sound art. Columbia University Press.

Karlin, F., & Wright, R. (2004). On the track: A guide to contemporary film scoring (2nd ed.). Routledge.

Lehman, F. (2018). Hollywood harmony: Musical wonder and the sound of cinema. Oxford University Press.

Systematic Evaluation vs. Research Through Design

Formulating a research question is not a neutral or universal act. Different disciplines propose different frameworks that reflect their underlying assumptions about knowledge, rigor and validity. This becomes especially visible when frameworks developed for scientific research are applied to creative or practice-based fields. In this post, I examine the stepwise approach to research question formulation by Ratan, Anand and Ratan, and critically compare it with Christopher Frayling’s concept of Research through Design.

Ratan et al. define a research question as a response to an existing uncertainty within a defined area of concern. Their framework emphasizes that a research question must be carefully evaluated before research begins, as it guides the entire investigative process. Central to their approach is the evaluative acronym FINERMAPS, which defines the characteristics of a good research question: feasible, interesting, novel, ethical, relevant, manageable, appropriate, of potential value, publishable and systematic. Rather than focusing on how a question emerges, the framework primarily addresses how a question can be justified, validated and assessed for quality.

From a methodological standpoint, this approach offers several strengths. FINERMAPS provides a clear checklist that helps prevent overly broad, vague or impractical questions. It foregrounds feasibility and manageability, encouraging researchers to align ambition with available resources and constraints. The framework also emphasizes relevance and potential value, ensuring that research questions are not formulated in isolation but contribute meaningfully to an existing field of knowledge. In this sense, the approach supports accountability and clarity, qualities that are often expected in academic research contexts.

However, the framework also reflects assumptions rooted in positivist and applied research traditions. The classification of research questions into types such as descriptive, relational, comparative or causal reveals a preference for questions that aim to explain, measure or establish relationships. While this is appropriate for many forms of scientific inquiry, it becomes limiting when applied to design research, where questions often address experience, interpretation and meaning rather than causality or classification. Additionally, the emphasis on systematic formulation and early evaluation assumes a level of stability that does not always exist in exploratory or practice-led research.

This contrasts strongly with Christopher Frayling’s model of Research through Design. Frayling distinguishes between research into, for and through design, with the latter positioning practice itself as a mode of inquiry.

In this framework, research questions are not necessarily fully formed at the outset. Instead, they may emerge, shift or even dissolve through making, testing and reflecting. Knowledge is generated not only through analysis but through the act of designing, with artifacts functioning as sites of investigation rather than final answers.

Where Ratan et al. emphasize evaluation and validation prior to research, Frayling emphasizes emergence and iteration within research. Ambiguity is not treated as a flaw but as a productive condition. Research questions in a research through design context may remain open-ended or provisional for extended periods, allowing insights to surface through practice rather than through predefined structures. This approach aligns closely with creative disciplines, where understanding often develops through material engagement rather than linear problem-solving.

The tension between these frameworks reveals a deeper epistemological difference. Ratan et al. prioritize questions that can be systematically assessed and potentially generalized, whereas Frayling accepts situated, subjective and practice-specific knowledge as valid research outcomes. The inclusion of criteria such as “publishability” within FINERMAPS further highlights this divide, as design research may generate value through process, experience or localized insight rather than through traditional publication metrics.

Rather than viewing these frameworks as incompatible, their comparison highlights the need for adaptation. The stepwise approach by Ratan et al. can function as a valuable evaluative lens, helping to test whether a research question is feasible, relevant and appropriately scoped. Frayling’s framework, on the other hand, offers a conceptual foundation for embracing uncertainty, iteration and making as legitimate forms of inquiry. When combined critically, these frameworks allow research questions to be both reflective and rigorous, structured yet open to transformation.

Sources:

Frayling, C. (1993). Research in art and design. Royal College of Art Research Papers, 1(1), 1–5.

Ratan, S. K., Anand, T., & Ratan, J. (2019). Formulation of research question – stepwise approach. Journal of Indian Association of Pediatric Surgeons, 24(1), 15–20.

Developing a Research Question and Possible Outcomes

Finding a research question is often presented as a single moment of clarity. In practice, however, it is an iterative process shaped by curiosity, observation, conversation and experimentation. Rather than starting with a fixed thesis statement, I approached my research direction as something that would emerge through learning, practice and reflection.

My initial curiosity came from experience rather than theory. Spending time in clubs, concerts and audiovisual environments, I repeatedly noticed how visuals influence the story that sound conveys. Often, a new combined narrative emerges, shaping how people move, connect or disengage within a shared space. This observation led to an early realization: visuals are never neutral. They always influence how a space is felt and experienced.

To explore these questions, I began with open-ended brainstorming. I collected associations between sound qualities such as rhythm, reverb, equalizers, distortion and tempo, and visual attributes such as movement, shape, density and colour. These early mappings revealed patterns. They are not intended to function as strict rules, but as starting points for developing a visual language for the relationship between sound and image. Similar to spoken language, where a word can hold multiple meanings depending on context, a visual interpretation of sound can also remain open to interpretation.

Another important method was conversation-based research. I spoke with media designers, VJs, sound designers and people involved in club culture to gather perspectives beyond my own practice. These discussions reinforced my understanding that audiovisual design is relational. It is shaped by people, space and time, rather than by tools alone.

Alongside these methods, reflective writing helped me articulate why certain audiovisual moments stayed with me. Recurring themes emerged from this process, including collectivity, embodiment, rhythm and atmosphere. Writing about these experiences clarified that my interest does not lie in visuals for their own sake, but in understanding how they shape shared emotional experience.

Through this process, my focus gradually narrowed. Rather than asking how visuals can enhance music, I became increasingly interested in how designed systems for translating sound into visuals can shape collective emotional experience in live environments. Based on this focus, several candidate research questions emerged. Each approaches the topic from a slightly different angle while remaining intentionally open-ended.

Option A
How can a visual language for sound be designed to influence collective emotional experience in temporary cultural spaces such as clubs and raves?

Option B
How can sound be translated into a consistent visual language that shapes collective emotional experience in live music environments?

Option C
How can a visual language for translating sound be developed to shape collective experience in temporary live music spaces?

Option D
Can a designed visual language for sound influence how collective emotion is experienced in live audiovisual performance?

These questions map a research landscape rather than define a single outcome. One question may become dominant, or the thesis may synthesize elements from multiple options. At this stage, the openness of these questions reflects the exploratory nature of the research.

If one or more of these directions is pursued, the thesis would define a system for translating sound into visuals. This system is explicitly understood as a designed language, not a universal truth. Its purpose would be to explore consistency, interpretation and meaning rather than objective correctness.

The visual language could be applied through practice-based outcomes such as live and reactive visuals using tools like TouchDesigner, Resolume or Arkestra, prerendered animation tests exploring rhythm and timing, or staged audiovisual performances functioning as experimental scenarios rather than final artworks.

The final layer of the research would focus on reflection. This could include audience feedback, informal responses, personal reflection on the design process and comparisons between rule-based visual systems and intuitive or improvised approaches. Through this process, the thesis aims to understand how designed visual systems can shape experience in live audiovisual contexts while remaining open to ambiguity, interpretation and ongoing development.

Sources:

Chion, M. (1994). Audio-vision: Sound on screen. Columbia University Press.

Leerberg, M., Riisberg, V., & Boutrup, J. (2010). Design responsibility and sustainable design as reflective practice: An educational challenge. Sustainable Development, 18(5), 306–317.

Marks, L. E. (1978). The unity of the senses: Interrelations among the modalities. Academic Press.

Ratan, S. K., Anand, T., & Ratan, J. (2019). Formulation of research question – stepwise approach. Journal of Indian Association of Pediatric Surgeons, 24(1), 15–20.


Visual Identity, Design Management and Responsibility

From a design management perspective, the combination of sound and image has the opportunity to become a branding tool. DJs, collectives and labels use visual identity to express values, whether political, aesthetic or emotional. Visuals become a statement, extending sound language into design culture.

As a media designer and club culture enthusiast, I am fascinated by how visual artistry can evoke the essence of music. A bouncing sphere might represent percussion, while abstract gradients might express the warmth of a synth pad. Beyond aesthetics, creating such experiences allows designers to make political statements, engage and connect with people, and support causes through charity initiatives.
One example of such responsibility in practice is the collaboration between the music label Curieux Dilettanti (CXD) and the charity initiative ELPIDA e.V. An album was produced ahead of a joint music event, and all of its revenue was donated to the ELPIDA project to support its mission for the right to asylum for all, once and for all.

So to me the question is not only how music can be visualized, but how it can be interpreted through design. Design is not neutral; it shapes emotions, values and collective experience. This becomes particularly important in club and festival environments, which function not only as entertainment spaces but also as cultural and political arenas.

Sources

Best, K. (2015). Design management: Managing design strategy, process and implementation. Bloomsbury.

Faulkner, J. (2013). VJ: Audio-visual art and VJ culture. Laurence King Publishing.

Giera, L., & Eller, C. (2025, October 22). Community driven cultural works (Interview).