Designing for Interrupted Experiences

Across my previous research and posts, interruption has appeared repeatedly as a central condition of contemporary interaction. From notifications and social media to cognitive load, emotional cost and recovery, interruption is not an exception to interaction but a structural feature of it. This final blog brings these strands together and reframes interruption as a design material rather than a problem to eliminate.

One of the most consistent findings across HCI research is that when an interruption occurs matters as much as that it occurs. Adamczyk and Bailey’s work on interruption timing demonstrates that interruptions placed at structurally meaningful moments within a task, such as boundaries between subtasks, produce significantly less frustration, annoyance, and cognitive effort than interruptions that occur mid-action.¹ This supports the idea that interruption cost is not uniform, but highly sensitive to task structure and temporal context.

From a design perspective, this challenges the dominant notification model used in many smart devices and platforms, where interruption timing is driven by system priorities rather than user activity. Treating all moments as equally interruptible ignores how users mentally segment tasks and weakens recovery. Designing for interrupted experiences therefore requires an understanding of how users perceive time, progress, and task continuity.

Liikkanen and Gómez argue that interaction design actively shapes users’ experience of time, not just efficiency or usability.² Interfaces that fragment attention, accelerate pace or constantly reset context distort temporal experience and increase the subjective cost of interruption. This aligns with earlier discussions in my research on flow and recovery: interruptions are not only breaks in attention but breaks in temporal coherence.

Recent design research responds to this by shifting focus from preventing interruption to supporting attention. Monge Roffarello et al. introduce digital attention heuristics that prioritize continuity, predictability and cognitive respect in interface behavior.³ Rather than maximizing engagement, these heuristics aim to reduce unnecessary attentional demand and help users maintain control over their focus. This approach contrasts sharply with attention capture patterns identified in deceptive interface designs, where interruption is deliberately used to redirect behavior.⁴

Designing for interrupted experiences therefore has an ethical dimension. When interruption is used strategically to capture attention, it externalizes cognitive cost onto the user. In contrast, attention supportive design acknowledges limits, supports recovery and reduces friction. This distinction becomes particularly relevant in educational and blended environments, where users report feeling constantly interrupted yet unable to disengage from digital systems. Pattermann et al. show that students experience digital interruption as both disruptive and unavoidable, reinforcing the need for design strategies that support regulation rather than escalation.⁵

Several applied design approaches address this challenge directly. Rydén’s user-centered work on designing for distraction emphasizes understanding interruption from the user’s lived experience rather than abstract performance metrics.⁶ By mapping when, why and how users feel interrupted, designers can identify points where systems should step back rather than intervene. This aligns with earlier discussions in my research on polite and adaptive systems, where responsiveness replaces control.

Taken together, these studies suggest that designing for interrupted experiences means accepting interruption as inevitable but designing its consequences. This includes supporting recovery, preserving context, respecting task boundaries, and making attention visible as a shared responsibility between user and system.

As a concluding position, my research does not argue for interruption-free design. Instead, it proposes a shift in design intent: from capturing attention to caring for it. Designing for the interrupted means designing systems that understand timing, support memory, respect emotional cost and help users return, not just react.

This framing sets the foundation for future thesis work (hopefully) that explores interruption not as a usability flaw, but as a core interaction condition that demands deliberate, human-centered design responses.

References

  1. Adamczyk, P. D., & Bailey, B. P. (2004). If not now, when?: The effects of interruption at different moments within task execution. Proceedings of CHI 2004.
  2. Liikkanen, L. A., & Gómez, R. (2013). Designing interactive systems for the experience of time. Proceedings of CHI 2013.
  3. Monge Roffarello, A., et al. (2025). The digital attention heuristics: Supporting the user’s attention by design.
  4. Monge Roffarello, A., et al. (2023). Defining and identifying attention capture deceptive designs in digital interfaces.
  5. Pattermann, M., et al. (2022). Perceptions of digital device use and accompanying digital interruptions in blended learning.
  6. Rydén, J. (Year). Designing for the distracted: A user-centered approach to explore and act on the user experience of distraction.

AI Assistance Disclaimer:
AI tools were used at certain stages of the research process, primarily for source exploration, grammar refinement and structural editing. All conceptual development, analysis and final writing were done by the author.

Calm UX in AI-Driven Products

How Google’s and IBM’s AI Guidelines Help Reduce Cognitive Load

Artificial intelligence has become foundational in modern digital products, powering everything from search and recommendations to analytics and automation. But when AI is integrated carelessly, it doesn’t feel “helpful”; it feels unpredictable, opaque, or intrusive. That’s where Calm UX intersects directly with practical AI design.

To create AI experiences that feel reassuring rather than stressful, we need both behavioral guidelines and design patterns that embed calmness into interaction. Leading design frameworks from companies like Google and IBM articulate such principles, explicitly tying usability, transparency, and control to trustworthy AI experiences.

Why Calm UX Matters in AI Systems

AI systems are fundamentally probabilistic: they make predictions, not certainties. Yet users instinctively seek clarity, control, and predictability when interacting with digital products. When an AI recommendation appears without context, or when a system acts before the user has given explicit consent, the interface can quickly feel noisy or demanding. The result is increased cognitive load: users must expend mental effort to interpret what the system did, why it did it, and whether they are still in control.

A familiar example is autocorrection. When it quietly suggests a word and allows the user to accept or ignore it, it feels helpful and unobtrusive. When it automatically replaces words without explanation or easy reversal, it creates friction, uncertainty, and frustration. The difference is not the intelligence of the system, but how its behavior is communicated and constrained.

Calm UX addresses this tension by deliberately reducing the mental work required to understand and manage AI behavior. It does so by:

  • clearly indicating when AI is active,
  • explaining why a suggestion or prediction is being made,
  • making it obvious how users can intervene, override, or undo an action,
  • and keeping AI signals in the periphery until the user chooses to engage.

This approach aligns closely with the core idea of Calm Technology: technology should inform without demanding attention. AI should participate quietly in the background, stepping into focus only when its input is meaningful, actionable, and invited.

How Google’s People + AI Guidebook Supports Calm UX

Google’s People + AI Guidebook provides a concrete set of principles and patterns for AI-enabled interfaces, emphasizing user understanding and control. Key patterns include:

1. Model Status and Confidence Indicators

Instead of presenting AI output as a definitive outcome, designers should surface confidence levels or uncertainty ranges (for example, “83% confidence”). Making uncertainty visible helps users predict system behavior, build appropriate trust, and feel less anxious when outcomes are not certain.

2. Recommendations with Rationale

AI suggestions should clearly communicate why they are offered, for example by referencing past behavior or recent activity. Making this underlying logic visible provides essential context, reduces cognitive load, and helps avoid the “black box” effect that often undermines trust in AI systems.

3. Human-in-the-Loop Controls

Allowing users to accept, reject, edit, or refine AI suggestions keeps agency firmly with the user rather than with an opaque automated system. This sense of control builds confidence and reduces anxiety about unintended or irreversible outcomes.

Google’s guidance pitches these patterns not as optional add-ons but as core UX requirements when embedding AI into workflows, because clarity and control directly reduce cognitive demand.
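To make these patterns a bit more tangible, here is a minimal Python sketch of an AI suggestion that carries a confidence score and a rationale, and that only takes effect through explicit user action with an undo path. The class names, thresholds and wording are my own illustrative assumptions, not patterns taken verbatim from the Guidebook.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AISuggestion:
    """An AI output surfaced as a suggestion, never as a silent action."""
    text: str
    confidence: float           # 0.0-1.0, surfaced to the user
    rationale: str              # why the system is suggesting this

    def render(self) -> str:
        # Model status + rationale: show uncertainty instead of hiding it.
        return (f"Suggested: '{self.text}' "
                f"({self.confidence:.0%} confidence; because {self.rationale})")

class SuggestionController:
    """Human-in-the-loop: the user accepts, edits, rejects, or undoes."""
    def __init__(self, apply_fn: Callable[[str], None], undo_fn: Callable[[], None]):
        self.apply_fn, self.undo_fn = apply_fn, undo_fn
        self.last_applied: Optional[AISuggestion] = None

    def accept(self, s: AISuggestion, edited_text: Optional[str] = None):
        self.apply_fn(edited_text or s.text)   # nothing happens without consent
        self.last_applied = s

    def reject(self, s: AISuggestion):
        pass                                   # explicitly doing nothing is a valid outcome

    def undo(self):
        if self.last_applied:
            self.undo_fn()                     # every applied suggestion stays reversible
            self.last_applied = None

# Usage sketch
doc = []
ctrl = SuggestionController(apply_fn=doc.append, undo_fn=doc.pop)
s = AISuggestion("Schedule follow-up for Friday", 0.83,
                 "you usually follow up within three days")
print(s.render())   # shows the suggestion, its confidence and its rationale
ctrl.accept(s)      # applied only after explicit consent
ctrl.undo()         # and it remains reversible
```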

IBM’s Approach to AI UX — Transparency, Trust, and Shared Agency

IBM’s AI design practice likewise emphasizes understanding and human-centered automation: explainability helps users grasp both the process and the limitations of AI. In IBM’s guidelines, this approach is summarized in two key concepts:

1. Explainability as a UX Function

When systems articulate how and why they reached a specific conclusion—even at a high level—users can form a clear mental model of the AI’s behavior. This predictability reduces mental effort and helps prevent frustration caused by uncertainty.

2. Role Clarity Between User and AI

Users should always understand where human responsibility begins and where AI assistance ends. Clearly demarcating these boundaries in the interface minimizes anxiety by removing uncertainty about whether the system is acting autonomously or on the user’s behalf.

This emphasis echoes research suggesting that AI designers must address both model transparency and user understanding if they want trust and low friction in human-AI interaction.

Calm UX, Cognitive Load & Calm Technology

At its core, Calm UX in AI interfaces is about managing the mental effort users invest in understanding system behavior. It uses patterns that reduce ambiguity, promote transparency, and preserve control, all of which are directly supported by Google’s and IBM’s AI guidelines.

That alignment is not coincidental. Calm UX and Calm Technology principles converge around the same goal: Design systems that support human thinking — not overwhelm it.

When AI interfaces follow clear guidelines, from explainability to human-in-the-loop design, they become not just smarter, but calmer, more trustworthy, and easier to use.

References:
  • Weiser, M., Seely Brown, J. (1995): “Designing Calm Technology”, Xerox PARC
  • Weiser, M., Seely Brown, J. (1996): “The Coming Age of Calm Technology”, Xerox PARC
  • Case, A. (2015): “Calm Technology: Principles and Patterns for Non-Intrusive Design”
  • Google PAIR – People + AI Guidebook https://pair.withgoogle.com/guidebook/
  • IBM – Explainable AI Design Guidelines https://www.ibm.com/design/ai/

AI Assistance Disclaimer:

AI tools were used to improve grammar and phrasing. The ideas, examples, and content remain entirely the author’s own.

Interruption in Smart Devices and Social Media

As I also mentioned in some of my previous posts, interruptions in digital systems are no longer limited to isolated notification events. In smart devices and social media platforms, interruption has become a persistent interaction condition shaped by continuous connectivity, algorithmic attention capture and social expectations. Rather than being occasional disruptions, interruptions are increasingly embedded into everyday interaction flows, influencing how users allocate their attention and switch between tasks.

Research on social media distraction consistently shows that interruptions operate through both external and internal mechanisms. External interruptions include notifications, alerts, and interface prompts, while internal interruptions emerge as urges, thoughts or habitual checking behaviors triggered by platform design.¹ This distinction is important for interaction design, as it shifts the problem from simply “reducing notifications” toward understanding how interfaces create conditions that sustain attentional vulnerability even in the absence of explicit prompts.

Several studies demonstrate that social media interruptions negatively affect task performance and cognitive efficiency. Experimental work by Marotta and Acquisti⁵ shows that even brief social media interruptions can reduce performance on cognitively demanding tasks, particularly when users resume work without structural support. Similarly, Okoshi et al.⁶ found that frequent smartphone notifications increase cognitive load and disrupt task continuity, reinforcing the idea that interruption cost is cumulative rather than momentary.

At the same time, interruptions persist because they fulfill social and psychological needs. Koessmeier and Büttner¹ identify social connection and fear of missing out as central drivers of social media distraction, alongside task avoidance and self-regulation failure. This aligns with findings from Tams et al.⁷, who show that restricting smartphone access can increase stress and social threat perceptions, suggesting that interruption is not only a usability issue but also an affective and relational one. From an HCI perspective, this reinforces the idea that interruptions cannot be evaluated solely in terms of efficiency loss.

Smart devices intensify this dynamic by extending interruption beyond the smartphone. Wearables, smart assistants and ambient displays introduce new channels through which attention can be captured or fragmented. Light and Cassidy³ frame this condition as one where disconnection itself becomes a socially and economically charged act, making uninterrupted interaction increasingly difficult to sustain. In such environments, interruption becomes a structural property of interaction ecosystems rather than a design flaw in a single interface.

Recent work has begun to explore design interventions that do not simply suppress interruptions but reshape how and when they occur. Weber et al.⁸ examine user-defined notification deferral, showing that allowing users to postpone interruptions can reduce perceived disruption without eliminating access to information. Okoshi et al.’s Attelia⁶ system similarly demonstrates that context-aware notification management can lower cognitive load by aligning interruptions with moments of lower demand.
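As a rough sketch of what such deferral logic could look like, the snippet below queues non-urgent notifications and releases them either when the user signals a task breakpoint or once a user-defined deferral window has expired. The rules, thresholds and class names are simplified assumptions of mine, not the actual mechanisms of Snooze! or Attelia.

```python
import time
from collections import deque

class DeferredNotifier:
    """Queue non-urgent notifications and deliver them at task breakpoints,
    or once a user-chosen deferral window has expired (a simplified sketch)."""
    def __init__(self, max_defer_s: int = 15 * 60):
        self.queue = deque()
        self.max_defer_s = max_defer_s
        self.at_breakpoint = False           # e.g. set when the user closes a document

    def notify(self, message: str, urgent: bool = False):
        if urgent:
            self._deliver(message)           # urgent items still interrupt immediately
        else:
            self.queue.append((message, time.time()))

    def mark_breakpoint(self):
        # The user (or the app) signals a natural pause between subtasks.
        self.at_breakpoint = True
        self.flush()

    def flush(self):
        now = time.time()
        while self.queue:
            message, queued_at = self.queue[0]
            expired = now - queued_at > self.max_defer_s
            if self.at_breakpoint or expired:
                self._deliver(self.queue.popleft()[0])
            else:
                break
        self.at_breakpoint = False

    def _deliver(self, message: str):
        print(f"[notification] {message}")

n = DeferredNotifier()
n.notify("3 new likes on your post")              # queued quietly
n.notify("Meeting moved to 14:00", urgent=True)   # delivered immediately
n.mark_breakpoint()                               # queued items arrive at the pause
```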

More recent approaches focus on changing attention capture patterns at a system level. Lee et al.² introduce the concept of “Purpose Mode,” which reduces distraction by altering how social media interfaces surface content during goal-directed activities. Rather than blocking access, such systems attempt to weaken damaging attention loops while preserving user goals. This reflects a broader shift away from binary solutions toward adaptive interaction strategies.

Taken together, these studies suggest that interruption in smart devices and social media should be understood as a “design tradeoff” rather than a problem to be eliminated. Interruptions support connection, awareness and engagement, but they also fragment attention and increase cognitive strain. The challenge for interaction design is not to remove interruptions, but to shape them in ways that respect user capacity, context, and recovery.

This positions interruption as a central concern for contemporary interaction design. As smart devices and social platforms increasingly mediate everyday activity, designers must consider how systems distribute attention over time, how interruptions accumulate, and how users regain control after disruption. Rather than asking how to stop interruption, the more productive question becomes how to design interactions that acknowledge interruption as an inevitable condition and respond to it responsibly.

References

  1. Koessmeier, C., & Büttner, O. B. (2021). Why are we distracted by social media? Distraction situations and strategies, reasons for distraction, and individual differences. Frontiers in Psychology, 12, 711416.
    https://doi.org/10.3389/fpsyg.2021.711416
  2. Lee, M., et al. (2025). Purpose Mode: Reducing distraction through toggling attention capture damaging patterns on social media.
  3. Light, A., & Cassidy, E. (2014). Strategies for the suspension and prevention of connection: Rendering disconnection as socioeconomic practice.
  4. Liu, Y. (Year). The attention crisis of digital interfaces and how to consume media more mindfully.
  5. Marotta, V., & Acquisti, A. (2018). Interrupting interruptions: A digital experiment on social media and performance.
  6. Okoshi, T., et al. (2015). Attelia: Reducing users’ cognitive load due to interruptive notifications on smartphones.
  7. Tams, S., et al. (2018). Smartphone withdrawal creates stress: A moderated mediation model of nomophobia, social threat, and stress.
  8. Weber, F., et al. (2018). Snooze! Investigating the user-defined deferral of mobile notifications.

AI Assistance Disclaimer:
AI tools were used at certain stages of the research process, primarily for source exploration, grammar refinement and structural editing. All conceptual development, analysis and final writing were done by the author.

Memory and Recovery: Designing for Resumption After Interruption

Up to this point, my research has focused on how interruptions disrupt attention and flow. However, interruptions do not end when the disruption occurs. What follows (the process of resuming a task) is often where the real cost appears. This brings memory into focus, not as a cognitive abstraction, but as a practical interaction design concern.

When a user is interrupted, they do not simply return to where they left off. They must remember what they were doing, why they were doing it and what the next step was supposed to be. This resumption process relies on short-term memory, contextual cues and sometimes external support from the interface. If these elements are weak or missing, recovery becomes slow, error-prone and frustrating.

Research on memory for goals shows that interrupted tasks remain mentally active, but their activation decays over time. The longer and more demanding the interruption, the harder it becomes to recall the original goal state. From an interaction design perspective, this means that poor recovery is often not a user failure but a predictable outcome of how memory works under interruption.
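To make the decay idea concrete, here is a small illustrative calculation using the ACT-R-style base-level activation formula that the memory-for-goals model builds on. The rehearsal times and decay parameter are arbitrary values chosen only to show the shape of the curve, not empirical estimates.

```python
import math

def goal_activation(rehearsal_times, now, decay=0.5):
    """ACT-R-style base-level activation: ln of the summed, decaying traces of past
    rehearsals. Higher values mean the suspended goal is easier to retrieve."""
    return math.log(sum((now - t) ** -decay for t in rehearsal_times if now > t))

# A goal rehearsed at t = 0, 5 and 10 seconds, then interrupted.
rehearsals = [0, 5, 10]
for elapsed in (12, 30, 120, 600):  # seconds since the task started
    print(f"t={elapsed:>4}s  activation={goal_activation(rehearsals, elapsed):+.2f}")

# Activation keeps dropping as the interruption grows longer, which is why
# resumption cues that re-expose the goal (a highlighted last action, a visible
# task state) make such a difference.
```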

This is where I think interface design plays a critical role. Interfaces can either support memory during recovery or actively work against it. Continuous feeds, disappearing context and forced state changes increase the cognitive effort required to resume the task. In contrast, stable visual cues, persistent task states and meaningful markers can act as external memory aids, reducing the mental burden placed on the user.

Several studies on interruption recovery that I have examined show that even small cues, such as highlighting the last action, preserving task structure or offering lightweight reminders, can significantly improve resumption performance. These cues do not need to explain everything. Their value lies in reactivating the user’s memory by reconnecting them with the task context they previously constructed.

From a UX perspective, this reframes memory as an interaction problem rather than an internal process. Memory is distributed across the user and the interface. When interfaces erase context, reorder information or prioritize immediacy over continuity, they shift the entire recovery burden onto the user. This is especially visible in environments shaped by constant notifications, multitasking, and fragmented attention.

Design research on memory supplementation further supports this view. Instead of assuming users will remember, these approaches treat the interface as a partner in recall. By externalizing task state, progress and reasoning traces, systems can support problem solving and reduce the cost of interruption. This does not mean eliminating interruptions but designing for their aftermath.
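As a sketch of what treating the interface as a partner in recall could look like, the snippet below stores a small task context at the moment of interruption and turns it into a lightweight resumption cue when the user returns. The fields and wording are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TaskContext:
    """External memory for an interrupted task: what, why, and the next step."""
    task: str
    goal: str
    last_action: str
    next_step: str
    saved_at: datetime = field(default_factory=datetime.now)

def on_interruption(task: str, goal: str, last_action: str, next_step: str) -> TaskContext:
    # Persist the context the user would otherwise have to hold in working memory.
    return TaskContext(task, goal, last_action, next_step)

def resumption_cue(ctx: TaskContext) -> str:
    # A lightweight "you were here" marker shown on return.
    away_min = (datetime.now() - ctx.saved_at).seconds // 60
    return (f"Resuming '{ctx.task}' (away {away_min} min): "
            f"you last {ctx.last_action}; next, {ctx.next_step}.")

ctx = on_interruption(
    task="Quarterly report", goal="summarize Q3 figures",
    last_action="finished the revenue table", next_step="write the summary paragraph")
print(resumption_cue(ctx))
```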

There is also a temporal dimension to memory and recovery. Fast systems are often optimized for immediate response, not for long-term comprehension. However, memory formation and recall require time, repetition and moments of reflection. Interfaces that constantly refresh, replace, or overwrite information can undermine these processes. In this sense, recovery is not only about returning to a task but about preserving meaning over time.

Seen through this lens, memory and recovery become central to interaction design in interrupted environments. The question shifts from “How do we prevent interruptions?” to “How do we help users return?” Designing for recovery means acknowledging that interruption is inevitable but disorientation does not have to be.

My research positions memory not as a background cognitive function, but as a design material. If interaction design shapes how users remember, forget and resume, then recovery is not a side effect, it is a responsibility. This perspective directly informs the next stage of my research, which moves toward designing explicitly for interrupted experiences.

References

Altmann, E. M., & Trafton, J. G. (2002). Memory for goals: An activation-based model. Cognitive Science, 26(1), 39–83.

Bruya, B., & Tang, Y. Y. (2018). Is attention really effort? Revisiting Daniel Kahneman’s influential 1973 book Attention and Effort. Frontiers in Psychology, 9, 1133.

Chen, X., Li, Z., & Wang, Y. (2025). The effects of cues on task interruption recovery in a concurrent multitasking environment. International Journal of Human–Computer Studies.

Yang, S. (2019). UX design for memory supplementation to support problem-solving tasks in analytic applications (Master’s thesis).

Zannoni, M., & Pollini, A. (2022). Are memories an interaction design problem? PAD Pages on Arts and Design, 15(23).

AI Assistance Disclaimer:
AI tools were used at certain stages of the research process, primarily for source exploration, grammar refinement and structural editing. All conceptual development, analysis and final writing were done by the author.

Application of calm technology principles in Digital Product Design

Many digital products today are technically well designed. They pass usability tests, follow established patterns, and allow users to complete tasks efficiently. And yet, they still feel stressful to use. This tension points to a common misunderstanding in UX:

Usability alone does not guarantee a calm experience; closing that gap is what Calm UX is about.

What users often struggle with is not failure, but mental strain — the quiet effort required to interpret, decide, remember, and stay oriented while interacting with an interface.

Cognitive Load Is the Invisible Friction

I realized that a key driver of user stress is cognitive load: the amount of mental effort required to process information and make decisions. Human working memory is limited. When interfaces demand too much attention, comparison, recall, or interpretation, users become fatigued and error-prone — even if nothing is technically “broken”.

Research by Nielsen Norman Group shows that cognitive load increases when users are forced to:

  • hold information in memory instead of recognizing it
  • make too many decisions at once
  • decode unclear labels or system states
  • recover from interruptions without guidance

Reducing cognitive load is not about removing functionality. It’s about removing unnecessary mental work.

Calm UX Goes Beyond Usability

Calm UX builds on classic usability principles but extends them into the emotional and psychological domain. As described in recent UX research and writing, calm experiences are those that reduce anxiety, uncertainty, and hesitation, especially in moments where users are unsure what the system is doing or what is expected of them.

According to UXmatters, much of the most damaging friction in digital products is not physical or functional, but psychological. Interfaces that rush users, provide ambiguous feedback, or escalate situations unnecessarily create stress — even when users ultimately succeed.

Calm UX asks different questions than traditional UX:

  • Do users feel in control?
  • Does the system behave predictably?
  • Is uncertainty acknowledged or ignored?
  • Does the interface reassure, or does it pressure?

Design Principles That Create Calm

Research from NN/g, UXmatters, and Calm Technology literature points to a small set of recurring principles that consistently reduce cognitive strain and user anxiety.

Minimize cognitive effort by default
Calm interfaces prioritize recognition over recall, limit information to what is immediately relevant, and use familiar, consistent patterns. Clear visual hierarchy and progressive disclosure help users stay oriented without unnecessary mental effort.

Communicate with clarity, not urgency
System messages are emotionally charged moments. Calm UX avoids alarmist language and explains what happened, why it matters, and what comes next—without blame, pressure, or artificial urgency.

Make system behavior visible
Uncertainty increases stress. Loading states, background processes, and validations should clearly communicate progress and outcomes, even when no action is required from the user.

Respect attention as a scarce resource
Notifications should interrupt only when they provide clear, timely value. Calm UX is quiet by default and intentional when asking for attention.

Introduce complexity gradually
Complex systems don’t need to feel complex upfront. Calm UX reveals detail only as it becomes relevant, reducing initial overwhelm and supporting user confidence.

These principles are not new rules. They are a reframing of established UX heuristics through the lens of Calm Technology—shifting the focus from efficiency alone to cognitive and emotional ease.

Design Patterns That Create Calm

In practice, these principles materialize through a set of recurring design patterns that can be used as tools to create calmer products.

Progressive Disclosure
Calm UX avoids presenting all information and options at once. Instead, complexity is revealed gradually, as it becomes relevant. This helps users orient themselves quickly and reduces initial cognitive load, especially in complex systems.

Recognition Over Recall
Rather than relying on users’ memory, calm interfaces surface choices, defaults, examples, and familiar patterns directly in the UI. This reduces mental effort and minimizes the anxiety that comes from uncertainty or second-guessing.

Visible System Status
Calm UX avoids silent systems. Loading states, background processes, and validation feedback clearly communicate what is happening and what to expect next, even when no action is required from the user.

Gentle Confirmation
Success and completion are communicated through subtle, inline feedback instead of disruptive modal dialogs. This reassures users without interrupting their flow or escalating the interaction unnecessarily.

Forgiving Interactions
Undo options, editable states, and non-destructive defaults make mistakes recoverable. When users know they can correct an action, they interact with greater confidence and less hesitation.

Predictable Interaction Patterns
Consistent layouts, control placement, and feedback behavior reduce the mental effort required to re-orient across screens. Calm interfaces prioritize familiarity over novelty.

Descriptive Microcopy
Clear, outcome-focused language replaces vague labels and technical jargon. Users understand what will happen before they act, reducing hesitation and cognitive strain.

Status Over Alerts
Whenever possible, calm systems communicate information through passive status indicators rather than interruptive alerts. Information remains available without demanding immediate attention.

Notification Gating
Notifications are used sparingly and intentionally. Calm UX is quiet by default and interrupts only when timely user action truly matters, treating attention as a limited resource.

Clear Exit Paths
Users can cancel, go back, or pause processes at any time. Knowing there is always a way out significantly reduces pressure and perceived risk.
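As a small, purely illustrative sketch of how two of these patterns can show up in code, the snippet below combines progressive disclosure (advanced options stay hidden until requested) with a gentle, inline confirmation instead of a modal dialog. The option names and copy are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SettingsPanel:
    """Progressive disclosure: essential options up front, detail only on request."""
    essential: dict = field(default_factory=lambda: {"Language": "English", "Theme": "System"})
    advanced: dict = field(default_factory=lambda: {"Cache size": "200 MB", "Telemetry": "Off"})
    show_advanced: bool = False

    def render(self) -> list:
        rows = [f"{key}: {value}" for key, value in self.essential.items()]
        if self.show_advanced:
            rows += [f"{key}: {value}" for key, value in self.advanced.items()]
        else:
            rows.append("> Advanced settings")  # a visible path to detail, not a wall of options
        return rows

def save(panel: SettingsPanel) -> str:
    # Gentle confirmation: a quiet inline status line instead of a blocking dialog.
    return "Saved. You can keep editing."

panel = SettingsPanel()
print("\n".join(panel.render()))   # only the essentials, plus an entry point to more
print(save(panel))
panel.show_advanced = True         # complexity appears only when the user asks for it
print("\n".join(panel.render()))
```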


Together, these patterns don’t eliminate complexity — they structure it, pace it, and communicate it with care. They shift UX from demanding attention to supporting orientation, from pushing users forward to helping them stay grounded.

As digital products increasingly incorporate AI-driven predictions, recommendations, and automation, these patterns become even more critical. When systems begin acting on users’ behalf, clarity, control, and calm are no longer optional — they are the foundation of trust. In the next article, I’ll explore how Calm UX principles apply specifically to AI-driven products, and how thoughtful design can make intelligent systems feel supportive rather than intrusive.

References:
  • Weiser, M., Seely Brown, J. (1995): “Designing Calm Technology”, Xerox PARC
  • Weiser, M., Seely Brown, J. (1996): “The Coming Age of Calm Technology”, Xerox PARC
  • Case, A. (2015): “Calm Technology: Principles and Patterns for Non-Intrusive Design”

AI Assistance Disclaimer:

AI tools were used to improve grammar and phrasing. The ideas, examples, and content remain entirely the author’s own.

Drink Smart and Keep Calm: Technology that Stays in the Background – Part III

From Concept to Prototype: Planning a Calm, Tangible Drinking Reminder

After introducing ubiquitous computing, tangible user interfaces, and calm technology through the example of a smart water glass, the next step is to explore how such a concept could be translated into a physical prototype. Rather than focusing solely on technical feasibility, the planned smart coaster is intended as a design-driven experiment — one that combines physical prototyping with a human-centered design (HCD) process.

The goal is not to build a “perfect” product, but to create a functional artifact that allows the underlying interaction principles to be examined, questioned, and refined.

Framing the Problem in Its Usage Context

The initial motivation for the project stems from a common everyday situation: forgetting to drink water while working or studying. Existing solutions, such as hydration reminder apps, typically rely on push notifications, sounds, or vibrations. While effective in theory, these mechanisms often interrupt users at inopportune moments and shift attention away from the current task toward a screen.

Before committing to a specific technical solution, I would normally begin with a usage context analysis: observing when and where drinking usually happens, how glasses are positioned in work environments, and how people react to reminders during focused tasks. Since the design proposal has already been introduced, however, I move directly to the concept rather than conducting a full exploratory phase. The underlying assumption is that drinking is already embedded in physical routines and object interactions, making it a promising candidate for a tangible, environment-based interface.

Planned Human-Centered Design Approach

The development of the smart coaster is intended to follow a simplified human-centered design (HCD) process:

  1. Empathize & Understand: The process would begin with self-observation and informal conversations to gain insight into why drinking is often forgotten and how existing reminder systems are perceived in everyday situations.
  2. Define: Based on these initial insights, the core design challenge can be formulated as: How might a drinking reminder support hydration without interrupting or demanding attention?
  3. Ideate: The ideation phase would focus on identifying calm forms of feedback. Different modalities, such as light, sound, or subtle movement, would be explored and evaluated in terms of intrusiveness, social acceptability, and perceptibility in the periphery of attention.
  4. Prototype: A low- to mid-fidelity prototype of a smart coaster is planned as a tangible representation of these concepts, allowing interaction principles to be examined in a physical form.
  5. Evaluate: Short, qualitative user testing sessions are intended to help validate assumptions and inform iterative refinement of the interaction and feedback design.

Technical Implementation as Design Medium

The planned prototype combines accessible digital fabrication and physical computing tools:

  • A 3D-printed coaster, designed to visually blend into everyday environments.
  • A pressure sensor to detect the presence or absence of a glass.
  • A Raspberry Pi Pico as the microcontroller handling timing and state logic.
  • Subtle ambient feedback, such as low-intensity light, to communicate reminders without explicit alerts.

Importantly, the technical setup is intentionally kept minimal. This aligns with calm technology principles by reducing complexity and ensuring that the coaster remains usable even if the digital components fail.
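To give a sense of how little logic the coaster needs, below is a rough MicroPython sketch of the intended behavior on the Raspberry Pi Pico. The pin assignments, pressure threshold and reminder interval are placeholder assumptions, not final hardware decisions.

```python
# Rough MicroPython sketch of the coaster logic (Raspberry Pi Pico).
# Pin numbers, the pressure threshold and the reminder interval are placeholders.
from machine import ADC, PWM, Pin
import time

pressure = ADC(26)            # pressure sensor under the coaster surface (GP26 / ADC0)
led = PWM(Pin(15))            # low-intensity ambient LED
led.freq(1000)

GLASS_THRESHOLD = 20_000      # raw ADC reading above which a glass is assumed present
REMIND_AFTER_S = 30 * 60      # remind after 30 minutes without the glass being lifted

last_lift = time.time()

def glass_present():
    return pressure.read_u16() > GLASS_THRESHOLD

while True:
    if not glass_present():
        # The glass was lifted: treat this as "the user drank" and reset the timer.
        last_lift = time.time()
        led.duty_u16(0)       # stay dark, there is nothing to signal
    elif time.time() - last_lift > REMIND_AFTER_S:
        # Gentle peripheral cue: slowly pulse the LED instead of beeping or vibrating.
        for level in list(range(0, 20_000, 500)) + list(range(20_000, 0, -500)):
            led.duty_u16(level)
            time.sleep(0.02)
    time.sleep(1)
```

Even this simple loop already encodes the core calm-technology decision: the reminder lives in the periphery as a slow light pulse and disappears entirely as soon as the glass is lifted.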

Planned User Testing and Evaluation

Rather than large-scale usability testing, the project is intended to rely on small, qualitative user tests. Participants would use the coaster in desk-based work scenarios and reflect on their experience afterward.

The evaluation would focus less on performance metrics and more on experiential questions:

  • Was the reminder perceived as intrusive?
  • Did it remain in the periphery until needed?
  • How did it compare emotionally to phone-based reminders?

These observations are expected to inform whether the concept successfully embodies calm interaction.

Conceptual Comparison: Coaster vs. App

As part of the analysis, the smart coaster will be conceptually compared to traditional drinking reminder apps. While apps centralize interaction on a screen, the coaster distributes interaction into the environment. This comparison serves to highlight how tangible interfaces and ubiquitous computing shift responsibility from the user to the surrounding system.

Outlook

By planning the smart coaster as both a technical prototype and a research artifact, the project aims to explore how calm technology principles can be operationalized in everyday objects. The focus remains on how interaction feels, rather than how much functionality is added — reinforcing the idea that sometimes, the most effective technology is the one that stays quietly in the background.

References:
  • Weiser, M., Seely Brown, J. (1995): “Designing Calm Technology”, Xerox PARC
  • Weiser, M., Seely Brown, J. (1996): “The Coming Age of Calm Technology”, Xerox PARC
  • Case, A. (2015): “Calm Technology: Principles and Patterns for Non-Intrusive Design”
  • https://calmtech.com
  • Human-Centered Design according to ISO 9241-210:2019

AI Assistance Disclaimer:

AI tools were used to improve grammar and phrasing. The ideas, examples, and content remain entirely the author’s own.

User Interfaces in Video Games 10/10

User Interfaces in Video Games: The quest for genre-appropriate and usable game UI

Over the last nine blog posts, I’ve gone from the oscilloscope screens of 1958 to visual representations of game UI to the modular, accessible hardware of today. Now I’d like to wrap up my journey with a short reflection.

This research diary started with a fairly open topic that I had no idea how to navigate. After my research, I can safely say that I’ve learned a lot about games, their interfaces and the interaction design behind them.

I learned that game UI has a history worth respecting. Going back to the oscilloscope screens of the 50s and the early arcade days of Space Invaders was interesting because I realised that the simple high score was an innovation at the time and was the start of the complex feedback loops we have now.

I learned how to categorise the visual representations of game UI. Breaking down the four types of UI completely changed how I look at game screens. I now see how a UI can either be overlaid on top of a game or be woven directly into the world, with the player character being aware of it.

I learned that style can actually drive usability. Exploring the Aesthetic-Usability Effect showed me that when a game’s UI is aesthetically pleasing, it isn’t just for show. I learned that if a menu feels like it belongs in the game’s world, players are more likely to find it intuitive and engaging, which matches my own gaming experience.

I learned that accessibility is a fundamental responsibility. From my struggle with tiny subtitles to the impact of the QuadStick, I learned that game UI design isn’t just an aesthetic choice but also a matter of inclusion. This coincides with the fact that I’d like games to be enjoyable for as many people as possible, since they have made my life much better.

In the end, through all these learnings and this, to be honest, hard journey, I realised that designing a game user interface is a way, way more complicated and diverse topic than I anticipated. When I picked this topic I was mostly just focusing on the simple thought of “cool, stylish UI that also respects users” and kept a narrow focus on the visual part, the user interface design of it all.

However, through this research diary and through a conversation with the Senior UX/UI Designer at Bongfish, I’ve realised that game user interface designers are responsible for way more than the graphical menus and HUDs. The 60 accessibility options of The Last of Us Part II kind of blew me away with their use of haptics, audio cues and difficulty settings.

If I continued with this (now daunting) topic, I’d have to consider narrowing down the research to specific devices (PC, console, mobile or VR/AR etc.). Placing emphasis on just the visuals doesn’t really work for this topic, as evidenced by this extensive journey of many sub-topics, so finding a focus area could be hard. Either way, I’d say it was a valuable journey and I’ve collected some actual knowledge on my newfound love: games.

User Interfaces in Video Games 9/10

User Interfaces in Video Games: The quest for genre-appropriate and usable game UI

To continue with the accessibility topic of my last blog post, in this one I would like to dive deeper into how the complex interaction of playing video games works for people with disabilities.

Beyond the screen and graphical user interface, we have to consider the physical interface players control the game with. While reading the Saunders and Novak book Game Development Essentials: Game Interface Design, I was really moved by the story of Robert Florio, a quadriplegic artist. He uses a “mouth stick” to play games like Devil May Cry 3, a fast-paced action game with complex combos [1]. It made me realise that an accessible interface isn’t just about ease of use, but it’s also about giving someone control over a world they can’t physically interact with anymore. When a designer adds the option of remappable buttons, they aren’t just making a “setting”; they’re opening doors for people who wouldn’t be able to interact with the product at all otherwise.

The “mouth stick” in question was an early model of the QuadStick, pictured in Figure 1. This is a mouth-operated controller produced by an independent manufacturer. It acts as an “add-on” to existing consoles or PCs, using sip-and-puff sensors to translate breath and lip movements into complex game inputs [2].

Figure 1: QuadStick. Source: [3]

A major sign that the industry is finally taking this seriously is that console manufacturers are now building these solutions themselves. The PlayStation Access Controller is a modular kit designed specifically to be accessible out of the box. It moves away from the “fixed” shape of a standard controller, allowing players to create a layout that works for their specific hand strength or range of motion. This further emphasises the importance of customisable and remappable inputs in games.

Figure 2: PlayStation 5 Access Controller. Source: [4]

This is where one really sees how interacting with games goes beyond the user interface. It’s also about the user experience and overall interaction design. I already mentioned The Last of Us Part II in my last blog post, focusing on the vast variety of subtitle adjustment options. This is just one out of over 60 different accessibility options [5].

Their design philosophy follows a “sensory redundancy” model. This means that if a player can’t see the path, Navigation Assistance uses haptic pings and 3D audio cues to guide them. If a player can’t hear an enemy, Awareness Indicators and Combat Vibration Cues translate sound into visual and tactile data. This really showed me how expansive this theme can get once we look at the broader spectrum of the interaction between the game and the player.

User Interfaces in Video Games 8/10

User Interfaces in Video Games: The quest for genre-appropriate and usable game UI

The question posed in my last blog post is a big can of worms with many aspects influencing it. One big aspect of interfaces being usable is accessibility, which I take a look at in this blog post.

In my research, I’ve found that many people treat accessibility as a “bonus feature,” but as Saunders and Novak point out in Game Development Essentials: Game Interface Design, it’s a fundamental responsibility. Since there are no strict government regulations for games, it’s up to developers to self-regulate to meet the needs of those with disabilities [1].

In my introductory blog post, I mentioned the frustration of games having subtitles but them being too small to read, often with bad contrast. Subtitles are a perfect example of where game UI often has issues. In many modern AAA games, the text is optimised for someone sitting in front of a high-resolution monitor. But for a console player sitting on a couch 3 meters from a TV, that text becomes unreadable.

I noticed this in many games but want to point out Black Myth: Wukong as an example, pictured in Figure 1. The text is so tiny that even on my monitor I could barely read it, especially on white backgrounds where it lacked contrast in addition to its small size. It really dampened my experience because I played the game with the Chinese dub, but this would be an even worse experience for someone who’s, for example, deaf.

Figure 1: Black Myth: Wukong. Source: [2]

To combat this, the choice of typeface is important. Sans Serif fonts (like Arial or Verdana) are preferred for difficult viewing conditions because they don’t have the tiny “cross strokes” (serifs) that can blur together at low resolutions [1]. Simply testing the legibility on different devices and positions during development would already make a huge difference.

A best practice example for dealing with subtitles can be seen in The Last of Us Part II. They provide incredibly adjustable subtitle options where players themselves can massively increase the text size, change the color of the names to identify speakers, and add a dark semi-transparent backing box behind the text. This means that no matter how bright the game world is, the text is still legible.

Figure 2: The Last of Us Part II. Source: [3]

Another aspect to consider is colour-blindness. Around 8% of men (1 in 12) and 0.5% of women (1 in 200) are affected [4]. Considering this data, it’s vital to never use colour as the only way to give information. A health bar shouldn’t just change from green to red; it should also change in length so a colour-blind player can still read the state of the game [1]. Likewise, if a game uses only red and green to signal “enemy” versus “friend”, a significant portion of the audience is excluded.
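As a tiny illustration of this kind of redundant encoding, the sketch below derives a health bar’s length, colour and a numeric label from the same value, so no single channel carries the information alone. The thresholds and colour names are arbitrary example values.

```python
def health_bar_style(health: float, max_health: float) -> dict:
    """Encode health redundantly: length AND colour AND a numeric label,
    so the state stays readable for colour-blind players."""
    fraction = max(0.0, min(1.0, health / max_health))
    colour = "green" if fraction > 0.5 else "orange" if fraction > 0.25 else "red"
    return {
        "width_pct": round(fraction * 100),   # length changes with health
        "colour": colour,                     # colour is only a secondary cue
        "label": f"{int(health)}/{int(max_health)} HP",
    }

print(health_bar_style(30, 120))
# {'width_pct': 25, 'colour': 'red', 'label': '30/120 HP'}
```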

User Interfaces in Video Games 7/10

User Interfaces in Video Games: The quest for genre-appropriate and usable game UI

So far I’ve introduced some history, UI elements, their visual representations and common game genres. Now I’d like to take a look at the “Should games sacrifice functionality for style and vice versa? Do accessibility options affect the art being made?” question that popped up in my introductory blog post.

One of the most difficult parts of game UI design is the “battle” between aesthetics and functionality. In my gaming journey so far, I’ve seen both sides of the coin: games that are beautiful but hard to navigate, and games that are perfectly functional but look sterile, uninspired and out of place.

“Form follows function” is a famous phrase coined by Louis Sullivan that has been applied to many different types of design dealing with this topic [1]. This means that the way something looks is influenced by what it’s supposed to do. The function of game UI is to communicate states, so it should adapt itself to what users actually need to function within the game. The common questions are “Where am I?”, “How much health do I have?” and “Am I winning?”. These are answered with mini-maps, health bars and scores, all of which have evolved through necessity to communicate status.

So can style actually improve function? Why do I enjoy stylish UI in games if minimalist UI also does the job? Thinking about this led me to the Aesthetic-Usability Effect, which is defined in the book Universal Principles of Design. The Aesthetic-Usability Effect is described as “a phenomenon in which people perceive more aesthetic designs as easier to use than less-aesthetic designs – whether they are or not.” [2] This means that if a player loves the look of a menu, they’re more inclined to keep using it, and thus learn how to use it.

A personal example I’d like to showcase is the difference between the Metal Gear Solid: Peace Walker (2011) and Metal Gear Solid V (2015) staff management menus. In Metal Gear Solid: Peace Walker I learned to navigate the menu thanks to the simpler information and the “military file” aesthetic, which fit the game world, being set in the 70s.

Figure 1: Metal Gear Solid: Peace Walker HD. Source: [3]
Figure 2: Metal Gear Solid V. Source: [4]

Metal Gear Solid V, which was released 4 years later, features a virtually identical menu, but goes for an angled look that is supposed to be a hologram projected from the device the character holds in his hand. This takes away screen real estate for the sake of diegetic immersion, while also cluttering the UI with more displayed information. I would have been overwhelmed by this menu had I not already “trained” myself with the previous game. I knew which information to ignore and what the actual function of the menu is. The aesthetic is also lost within this blue, minimalist hologram look, which clashes with the fact that the game is set in the 80s.

This leads me to believe that style shouldn’t be sacrificed for function or vice versa.

A visual style is first determined for the game experience overall. Then, the information is made to come across in the most immediate and understandable way. Finally, both form a framework for the user interface aesthetics. The visuals shouldn’t drive the function, but they can certainly bend and influence it. – Stieg Hedlund [5]

In my next blog post, I want to dive deeper into the usability aspect of this debate by exploring the topic of Accessibility.

  • [1] L. H. Sullivan, The Tall Office Building Artistically Considered. Philadelphia, PA, USA: J. B. Lippincott, 1896.
  • [2] W. Lidwell, K. Holden, and J. Butler, Universal Principles of Design: A Cross-Disciplinary Reference. Gloucester, MA, USA: Rockport Publishers, 2003.
  • [3] Game UI Database, “Metal Gear Solid: Peace Walker HD,” Game UI Database. Accessed: Feb. 06, 2026. [Online]. Available: https://www.gameuidatabase.com/gameData.php?id=530
  • [4] Game UI Database, “Metal Gear Solid V: The Phantom Pain,” Game UI Database. Accessed: Feb. 06, 2026. [Online]. Available: https://www.gameuidatabase.com/gameData.php?id=98
  • [5] K. Saunders and J. Novak, Game Development Essentials: Game Interface Design. Clifton Park, NY, USA: Thomson Delmar Learning, 2007.