I chose this article because the title immediately caught my attention. I was curious what “Tiny Touch Instruments” actually are and what kinds of decisions and thoughts go into programming such instruments.
The author, Rebecca Abraham, is a researcher and composer working in the area of digital and collaborative music-making. In the paper, Abraham describes a project centered on Tiny Touch Instruments (TTIs), a set of mobile, web-based musical instruments that are played through touch gestures on a smartphone or similar device. The project is situated within a broader context of mobile music ensembles, such as the Stanford Mobile Orchestra, which explore how mobile technology can support collective music-making. The Tiny Touch Instruments themselves are accessible online through a web browser.
As part of this research, Abraham composed two pieces titled Skating and Skipping. Both works are performed using the TTIs that the author programmed. The instruments run on a webpage and are controlled using gestures such as tapping, swiping, or holding a finger on the screen. These interactions generate sound while also producing visual feedback, allowing performers to see and hear the effects of their gestures. One important aspect of the project is accessibility: the pieces are designed so that they can be performed without prior rehearsal and even by people without formal musical training.
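To make the interaction model concrete, here is a small Python sketch of how touch gestures might be mapped to sound parameters. This is my own hypothetical illustration, not Abraham's code (the actual TTIs are web-based); the gesture names come from the paper, but the mapping logic and values are invented:

```python
# Hypothetical gesture-to-sound mapping, loosely inspired by the
# TTIs described in the paper (not the actual implementation).
from dataclasses import dataclass

@dataclass
class SoundEvent:
    frequency: float  # Hz
    duration: float   # seconds
    sustained: bool

def gesture_to_sound(gesture, y_position):
    """Map a touch gesture and its vertical screen position
    (0.0 = bottom, 1.0 = top) to a simple sound event."""
    base = 220.0 + 660.0 * y_position      # higher on screen -> higher pitch
    if gesture == "tap":
        return SoundEvent(base, 0.1, False)   # short pluck
    if gesture == "swipe":
        return SoundEvent(base, 0.5, False)   # gliding tone
    if gesture == "hold":
        return SoundEvent(base, 2.0, True)    # sustained drone
    raise ValueError(f"unknown gesture: {gesture}")

event = gesture_to_sound("hold", 0.5)
```

Even a mapping this simple makes the cause-and-effect of gesture and sound legible, which seems central to why the instruments can be played without rehearsal.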
The two compositions use different approaches to notation and performance. In Skating, performers follow a graphic score that includes visual shapes and brief text instructions. Participants draw certain gestures on their screens or interact with others in the group, for example by imitating nearby performers or responding to sounds they hear in the room. The focus of the piece is less on precise melodies and more on shared sonic textures that emerge through group interaction.
Skipping uses a different format. Instead of a static graphic score, performers follow an animated score projected on a large screen. This score combines graphics, animations, and text instructions that guide the performers’ actions over time—for example, indicating where on the phone screen to interact or encouraging them to increase the frequency of tapping. The piece gradually shifts from simple exploration of the instruments toward more intentional interaction between performers.
Through observations, interviews, and surveys with participants across several performances, Abraham analyzed how people experienced these pieces. One key finding was that performing without rehearsal encouraged exploration and experimentation. At the same time, performers gradually became more comfortable with the instruments as the piece progressed. Another important result concerns notation: a multimodal approach that combines graphics, animation, and text proved particularly effective.
Visual elements helped performers understand the relationship between their gestures and the sounds produced by the instruments. An especially interesting observation was how the performances changed participants’ perception of their smartphones. During the performance, the phone was no longer experienced primarily as a device for communication or distraction, but rather as a creative musical tool that enabled collective expression.
These ideas resonate strongly with my own design interests. In my research, I am exploring the concept of “ear candy” and interactive sound design. Inspired by this article, I am considering developing my own small touch-based digital instruments that people could access online. My goal would be to design them in a way that is not only playful and engaging, but also educational, allowing users to learn something about sound or interaction through experimentation.
Recently, I came across the research “Entangling with Light and Shadow: Layers of Interaction with the Pattern Organ” by Jasmine Butt, Nathan Renney, Benedict Gaster, and Maisie Palmer, developed within the Expressive Computer Interaction Research Group at UWE Bristol.
RESEARCH OBJECTIVE
Figure 1: Illustration of an interface pattern
This research explores the design and use of a camera-based digital musical instrument called the Pattern Organ. This visual-audio synthesis artifact investigates new ways of interacting through light and shadow.
Users can modify a waveform by placing their hands or objects in front of the instrument’s camera, creating shadows and patterns. Through this interaction, they can perceive how both the environment and the sound change in real time.
The project initially started as a digital tool to represent the process of optical sound technology. However, during the workshop sessions, this idea evolved further. The focus shifted from a purely visual-audio synthesis system to a more open, participatory, and exploratory process.
THEORETICAL BACKGROUND
“A matter of finding the grain of the world” — Bruna Goveia Da Rocha and Kristina Andersen [2]
Figure 2: Design of the original Instrument
Drawing from analogue optical sound technologies used in early cinema, the research reinterprets these practices through a post-human perspective. Two main theoretical perspectives are considered. The first is from N. Katherine Hayles, who describes the world as a complex and highly interconnected system [3]. In this view, cognition is not limited to humans but moves dynamically across humans, animals, and technological systems. The second perspective is Karen Barad’s Agential Realism [4]. This theory describes reality as something continuously shaped by the interaction between material and meaning. Matter and information are not separate but constantly influence each other.
A strong emphasis is also placed on material thinking and hands-on experimentation with different materials.
CONSIDERATIONS
Figure 3: Images from the first workshop’s exploration
Throughout this process, there are two aspects that I personally find particularly interesting.
From a theoretical perspective, I find the idea of an entangled and participatory workshop very powerful. In this context, three elements—human, machine, and materials—are in constant dialogue. They continuously influence each other during the creative process. This approach is very effective in stimulating critical thinking, both in design, where each input can generate new ideas or solutions, and in educational contexts.
Figure 4: Experimentation using a rotating can
From a practical perspective, I was particularly interested in the use of raw data. This concept influenced the method of sonification used in the project. Rawness can be understood as a choice to avoid interpreting or transforming the data through complex digital processing. Instead, the data produced by the system is used more directly, without adding layers of interpretation.
This does not necessarily mean that raw data is more accurate or more realistic. Rather, it means that the measurements are not modified or filtered, allowing a more immediate connection with the original signal.
In the case of camera-based sonification, two main approaches can be identified:
Extracting features from image data to control or modulate sound
Using a more direct method, where pixel brightness values are translated into sound signals with minimal processing
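As an illustration of the second, more direct approach, here is a minimal Python sketch of “raw” brightness-to-amplitude sonification. This is my own example, not the Pattern Organ’s actual implementation; the sample rate, frame rate, and carrier frequency are arbitrary assumptions:

```python
import numpy as np

def sonify_frames(frames, sample_rate=8000, frame_rate=25):
    """Map the mean pixel brightness of each camera frame directly to
    the amplitude of a fixed-frequency tone -- a 'raw' sonification
    with minimal interpretation of the data."""
    samples_per_frame = sample_rate // frame_rate
    t = np.arange(samples_per_frame) / sample_rate
    carrier = np.sin(2 * np.pi * 220.0 * t)   # fixed 220 Hz tone
    chunks = []
    for frame in frames:
        brightness = frame.mean() / 255.0     # normalize to 0..1
        chunks.append(brightness * carrier)   # brightness -> amplitude
    return np.concatenate(chunks)

# Two synthetic 8x8 grayscale "frames": one in shadow, one fully lit.
dark = np.zeros((8, 8))
bright = np.full((8, 8), 255.0)
signal = sonify_frames([dark, bright])
```

A shadow cast over the camera silences the tone; removing it restores full amplitude. The appeal of this rawness is that the causal chain from light to sound stays audible, rather than being hidden behind feature extraction.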
CONCLUSION
This research opens important questions about how data should be treated and interpreted. It challenges the idea that data always needs to be processed, optimized, or controlled.
It also highlights the role of human intervention and how our decisions shape the way systems behave. At the same time, it shows how the physical and material nature of interaction—light, shadow, objects—can influence digital processes in meaningful ways.
More broadly, it invites us to rethink the relationship between humans, technology, and the material world. Instead of separating them, this work suggests that meaningful interaction emerges from their continuous entanglement.
REFERENCES
[1] J. Butt, N. Renney, B. Gaster, and M. Palmer, “Entangling with light and shadow: Layers of interaction with the pattern organ,” in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME ’25), Canberra, Australia, June 24–27, 2025.
[2] Bruna Goveia Da Rocha and Kristina Andersen. 2020. Becoming Travelers: Enabling the Material Drift. In Companion Publication of the 2020 ACM Designing Interactive Systems Conference. ACM, Eindhoven, Netherlands, 215–219. https://doi.org/10.1145/3393914.3395881
[3] N. Katherine Hayles. 2006. Unfinished Work: From Cyborg to Cognisphere. Theory, Culture & Society 23 (2006), 159–166. https://doi.org/10.1177/0263276406069229
[4] Karen Barad. 2007. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Duke University Press, Durham and London.
While reading about PlaySoundGround, I was struck by how a seemingly simple idea—combining sound and play—can completely transform the playground experience. The project turns familiar playground equipment into interactive musical instruments, where physical movement directly produces sound.
At first glance, this idea feels almost obvious. It made me question why more playgrounds are not designed in this way. If play is already based on movement, rhythm, and interaction, connecting it to sound seems like a natural extension. Yet, in most traditional playgrounds, this potential remains unexplored.
What I found particularly interesting is how the project reveals the relationship between playing (an instrument) and play. As the authors describe, both involve creative interaction within physical and social constraints. By making this connection explicit, the playground becomes more than just a physical space—it becomes an interactive, expressive environment.
Another aspect that stood out to me was that the playground was scaled for adults. This shift challenges the common assumption that playgrounds are only for children. Extending such experiences to adults opens up new possibilities for interaction, creativity, and social engagement. It suggests that play is not limited by age, but rather by how spaces are designed.
Overall, this project made me reflect on how small design interventions can unlock entirely new experiences. Even a simple addition like sound can make playgrounds more engaging, interactive, and meaningful. It also reinforces my interest in designing participatory and playful systems that invite users to actively shape their own experiences.
What problem are you solving? Externalizing one’s own internal proprioception and bodily and spatial awareness is not easy, and varies from person to person depending on their mental model.
Why should we care about it? Through sports (specifically aerial silks, in this case), people get adrenaline, maintain good physical and mental well-being, and create community. However, there is a barrier to entry in the fact that people believe it’s harder and less safe than it actually is, leading to people potentially not finding a hobby that could bring them joy.
What is the solution you are offering? How does it work? I am offering an aerial silks teaching kit consisting of colorful sleeves and a physical 3D model of a human with articulating joints, plus an apparatus add-on (starting with aerial silks). This model makes it possible to demonstrate aerial silks figures at a smaller scale and with lower stakes. In addition, a premium model would have digital capabilities: sensors embedded in the sleeves would track the person’s movements and reproduce them autonomously on the model. The kit would also come with an accompanying app functioning as a personal digital aerial diary, used to log and track your own progress and build community.
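The premium kit’s data flow could be sketched roughly as follows. This is purely my own illustration of the concept—no such firmware or sensor protocol exists yet, and all joint names and angle ranges are invented assumptions:

```python
# Hypothetical data flow for the "premium" kit: per-joint angle
# readings from the sleeve sensors are clamped to the articulated
# model's mechanical range and sent as target joint angles.

JOINTS = ("shoulder", "elbow", "hip", "knee")

def clamp(angle, low=-150.0, high=150.0):
    """Keep a joint angle (degrees) within the model's assumed
    mechanical range so the small figure never over-rotates."""
    return max(low, min(high, angle))

def sensor_to_model_pose(sensor_readings):
    """Map sleeve-sensor angles to target angles for the model's
    articulating joints; unreported joints default to neutral."""
    return {joint: clamp(sensor_readings.get(joint, 0.0))
            for joint in JOINTS}

# A reading where the shoulder exceeds the model's range.
pose = sensor_to_model_pose({"shoulder": 200.0, "elbow": 45.0})
```

Clamping to the model’s range is one way the miniature stays “low-stakes”: a pose the hardware cannot safely reproduce is simply limited rather than forced.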
Who is the target audience? Who is the customer? The target audience is the aerial silks student who wants to learn new figures. However, the customer is both the student and the aerial silks teacher who wants to be able to demonstrate figures in more classes without expending copious amounts of physical energy.
What is going to happen? (Change & Impact) Aerial students will feel more mentally secure that they will be safe trying out a specific new figure. They will gain a deeper understanding of their own body and spatial awareness. The learning curve for new tricks will be shallower, allowing students to learn and retain information more quickly. Students will be able to, on their own, see whether a figure is appropriate for their level, decipher individual moves, try it out in a low-stakes environment, and identify which points need extra care in order to be safe (e.g. distinguishing the essential and nonessential parts of a figure).
The following are representations of the 2 customer profiles, the value proposition map, and the business model canvas of my prototype.
Own image.
Through this exercise, I realized the possibility of expanding my low-level prototype into a digital tool and environment. It would be interesting to explore how a tailor-made app could elevate the experience of using my tool and learning aerial silks, working as a sort of personal diary for logging your learnings and new tricks (in progress/mastered) and creating a community where aerialists from all over the world can help each other. Beyond that, sensors could be added to the wearable sleeves to communicate with a “premium” version of the little 3D aerialist model (premium because it would have digital capabilities in addition to its analog ones) in order to observe what an aerialist did right or wrong.