#14 All Eyez On Me

Eye-tracking sounds deceptively simple: you track where someone looks, and suddenly you “understand” perception. Unfortunately (or fortunately, depending on your tolerance for complexity), it is not that straightforward. There isn’t just one way to track eyes. There are multiple methods, each producing different kinds of data, each shaping what you think you are observing. So before interpreting gaze behaviour, it is essential to understand how that gaze was captured in the first place.

1. Screen-Based Eye-Tracking 

Screen-based eye-tracking represents one of the most established and controlled methods for analyzing visual attention. Often referred to as the “lab classic,” it provides a highly precise way of measuring how users engage with visual content on a screen. In this setup, participants are positioned in front of a stationary device that tracks their eye movements while they observe a digital stimulus. This could be anything from a website interface to a poster layout or typographic composition. The controlled environment allows researchers to isolate visual variables and focus specifically on how attention is distributed across a given design.

Technically, the system operates through infrared light, which is projected onto the eye and reflected off the cornea. Cameras capture this reflection and calculate the direction of the gaze. This data is then mapped onto a two-dimensional surface, allowing researchers to reconstruct exactly where and for how long a participant looked at specific elements within the design. The strength of this method lies in its precision. It allows for detailed analysis of visual behavior, making it particularly suitable for areas such as graphic design, UI and UX interfaces, as well as typography and layout studies. Designers can identify whether key elements such as headlines, images, or navigation structures are effectively capturing attention, or whether they are being overlooked entirely.
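As a loose illustration of that final mapping step, here is a minimal Python sketch that converts normalized gaze estimates into screen pixel positions. The screen dimensions and the clamping behaviour are assumptions made for the example, not the output of any particular tracker.

```python
# Sketch: mapping normalized gaze coordinates onto a 2D screen surface.
# Screen size and clamping are illustrative assumptions, not taken from
# any specific eye-tracking system.

def gaze_to_pixels(gaze_x, gaze_y, screen_w=1920, screen_h=1080):
    """Convert normalized gaze coordinates (0.0-1.0) to pixel positions."""
    # Clamp to the screen in case the gaze estimate drifts off-surface.
    x = min(max(gaze_x, 0.0), 1.0) * screen_w
    y = min(max(gaze_y, 0.0), 1.0) * screen_h
    return round(x), round(y)

print(gaze_to_pixels(0.5, 0.25))  # a point in the centre-top region
```

Once every sample is expressed in pixels, it can be matched against the elements of the layout being studied.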

However, this level of control also introduces certain limitations. The artificial nature of the lab setting reduces real-world context. Participants remain stationary, and natural body movement is restricted. As a result, the findings reflect focused screen-based interaction rather than more dynamic, environmental forms of perception. To visualize how this data is typically interpreted, heatmaps are often used. These maps translate gaze data into areas of intensity, highlighting where attention is concentrated and where it fades.

In this example, the viewer’s gaze is primarily drawn to the image and title, while the body text receives significantly less attention. The footer is nearly ignored. This kind of visualization makes it immediately clear how hierarchy, scale, and placement influence perception. To extend this further into a more dynamic visualization, gaze sequences can also be mapped as a path across the layout:

START → Title → Image → Image → Text → Image → (exit)

This sequence reflects not just where attention occurs, but how it moves over time. It reveals patterns such as repeated focus on dominant elements or skipping behavior across less engaging areas. Ultimately, screen-based eye-tracking becomes a tool for making the invisible visible. It translates subconscious visual behavior into measurable data. If the goal is to understand whether a design communicates effectively at first glance, or whether certain elements fail to engage, this method offers a direct and reliable way of observing that interaction.

It answers a very specific question with clarity: where do people actually look?
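Once such a sequence is recorded, it can also be summarised numerically. A minimal Python sketch that counts gaze transitions, using the element names from the example above (the tallying logic itself is my own illustration, not a standard tool):

```python
# Sketch: counting gaze transitions in a recorded sequence like
# START -> Title -> Image -> Image -> Text -> Image.

from collections import Counter

def transitions(sequence):
    """Count ordered pairs of consecutive, distinct gaze targets."""
    pairs = Counter()
    for a, b in zip(sequence, sequence[1:]):
        if a != b:  # repeated fixations on one element are not a transition
            pairs[(a, b)] += 1
    return pairs

seq = ["Title", "Image", "Image", "Text", "Image"]
print(transitions(seq)[("Image", "Text")])  # → 1
```

Counting ordered pairs like this makes patterns such as repeated returns to dominant elements, or skipping over less engaging ones, quantifiable rather than anecdotal.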

2. Mobile Eye-Tracking Glasses 

Mobile eye-tracking glasses represent a shift from controlled laboratory conditions to real-world observation. Instead of viewing static stimuli on a screen, participants move freely through physical environments while their gaze behaviour is recorded in real time. In this setup, participants wear lightweight glasses equipped with multiple cameras. A forward-facing scene camera captures the surrounding environment, while inward-facing eye cameras track pupil movement. The collected data is then combined, allowing gaze points to be overlaid onto a continuously moving field of view.

This method is particularly well suited for contexts where spatial interaction plays a central role: exhibitions, spatial design environments, wayfinding systems, and retail spaces. Unlike screen-based setups, it captures how attention unfolds dynamically in relation to movement, orientation, and environmental context.

However, this increased realism comes with certain trade-offs. Compared to stationary eye-tracking systems, the level of precision is lower. The data itself is also more complex to analyse, as it combines movement, shifting perspectives, and changing visual stimuli. In addition, calibration requires careful setup and can introduce variability into the results. To visualize this type of data, gaze paths are often mapped within a spatial layout, showing how attention unfolds in space and time, not just on a flat surface.
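The basic step behind overlaying gaze on a moving scene video is pairing each camera frame with the gaze sample recorded closest in time. A minimal sketch of that matching step; the timestamps and sampling rate are invented for illustration:

```python
# Sketch: pairing a scene-camera frame with the nearest gaze sample by
# timestamp, the basic step behind overlaying gaze on moving video.
# Timestamps (ms) and the 50 Hz gaze rate are illustrative assumptions.

def nearest_gaze(frame_ts, gaze_samples):
    """Return the gaze sample whose timestamp is closest to the frame's."""
    return min(gaze_samples, key=lambda s: abs(s[0] - frame_ts))

# (timestamp_ms, x, y) with normalized scene coordinates
gaze = [(0, 0.4, 0.5), (20, 0.42, 0.51), (40, 0.6, 0.3)]
print(nearest_gaze(33, gaze))  # the sample at t=40 is closest
```

Real systems do considerably more (compensating for head movement, parallax, and calibration drift), but time alignment like this is where the overlay starts.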

3. Fixation Mapping vs. Gaze Plotting 

Once eye-tracking data has been collected, the focus shifts from observation to visualization. The central question becomes how this data can be translated into a format that reveals meaningful patterns of visual attention. Two primary methods are commonly used for this purpose: fixation maps and gaze plots.

Fixation maps, often referred to as heatmaps, display the intensity of attention across a visual surface. They aggregate data from multiple users and highlight areas where gaze is concentrated over time. Warmer zones indicate higher levels of attention, while cooler areas suggest that elements are either overlooked or only briefly viewed. This type of visualization is particularly useful when evaluating the overall effectiveness of a design, as it reveals which components consistently attract attention across a group of participants.

In contrast, gaze plots, also known as scanpaths, focus on the sequence of eye movements. Instead of showing intensity, they map the order in which visual elements are explored. Individual fixations are numbered and connected by lines that represent the movement of the gaze from one point to the next. This allows for a more detailed understanding of how a design is navigated over time.

In this representation, the numbers indicate the order of attention, while the connecting lines illustrate the direction of gaze movement. Each method answers a different question. Heatmaps reveal what is seen, highlighting areas of focus and neglect. Gaze plots, on the other hand, explain how attention unfolds, showing the path users take as they move through a design.
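Before either visualization can be drawn, raw gaze samples are usually grouped into fixations. One common family of approaches is dispersion-based detection (often called I-DT); the sketch below is a simplified version, and the thresholds are illustrative values that would need tuning for a real device.

```python
# Sketch of dispersion-based fixation detection (simplified I-DT):
# consecutive gaze samples whose bounding box stays small are grouped
# into one fixation. Thresholds are illustrative, not device-tuned.

def detect_fixations(samples, max_dispersion=25, min_samples=3):
    """Return fixation centroids from a list of (x, y) gaze samples."""
    fixations, window = [], []

    def centroid(points):
        return (sum(p[0] for p in points) / len(points),
                sum(p[1] for p in points) / len(points))

    for point in samples:
        window.append(point)
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        # Dispersion = width + height of the window's bounding box.
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            if len(window) - 1 >= min_samples:
                fixations.append(centroid(window[:-1]))
            window = [point]  # start a new window at the outlier
    if len(window) >= min_samples:
        fixations.append(centroid(window))
    return fixations

samples = [(100, 100), (102, 101), (99, 103),
           (300, 300), (301, 299), (302, 301)]
print(len(detect_fixations(samples)))  # two stable clusters → 2
```

The resulting centroids (and their durations, in a fuller implementation) are what heatmaps aggregate and what gaze plots number and connect.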

For a comprehensive understanding of visual behavior, both approaches are necessary. Heatmaps provide an overview of attention distribution, while gaze plots introduce temporal structure. Without one, the analysis remains incomplete, either lacking insight into sequence or missing the broader patterns of engagement.
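As a rough sketch of what a fixation map aggregates under the hood, fixation points can be binned into a coarse grid and weighted by dwell time. The fixation data and cell size here are invented for illustration:

```python
# Sketch: aggregating fixations into a coarse intensity grid, the raw
# form behind a heatmap. All data values are invented for illustration.

from collections import Counter

def intensity_grid(fixations, cell=100):
    """Sum dwell time per (cell x cell)-pixel bin; 'warmer' = larger sum."""
    grid = Counter()
    for x, y, duration_ms in fixations:
        grid[(x // cell, y // cell)] += duration_ms
    return grid

fixations = [(120, 80, 300), (130, 90, 250), (700, 400, 120)]
grid = intensity_grid(fixations)
print(grid[(1, 0)])  # combined dwell time in one busy cell → 550
```

Rendering then amounts to colouring each cell by its summed dwell time; a gaze plot, by contrast, would keep the fixations in recorded order instead of collapsing them into bins.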

4. Remote vs. Head-Mounted Tracking 

A fundamental distinction in eye-tracking lies between remote systems and head-mounted approaches, each offering different advantages depending on the research context. Remote eye-tracking is non-intrusive and typically screen-based. Participants remain stationary while their gaze is recorded with a high level of precision. This setup allows for controlled conditions, making it particularly effective when detailed and accurate measurements are required. In contrast, head-mounted eye-tracking involves wearable devices that participants use while moving through real-world environments. This approach captures visual behavior within natural contexts, where movement, surroundings, and situational factors influence perception. While this increases ecological validity, it also reduces the level of control and precision compared to lab-based systems.

Understanding this distinction is essential when designing experiments. A poster observed in a controlled laboratory setting may produce very different results than the same poster experienced within a crowded exhibition space, where distractions, movement, and environmental complexity shape how attention is distributed.

Choosing the Right Method (or: It Depends, Obviously) 

Each method answers different questions: 

Method           | Strength               | Best For
---------------- | ---------------------- | ---------------------------
Screen-based     | Precision              | UI, graphic design
Mobile glasses   | Real-world behaviour   | Exhibitions, spatial design
Gaze plots       | Sequence of perception | Understanding navigation
Dynamic tracking | Temporal behaviour     | Motion design, video

Final Thought 

Eye-tracking does not show you everything. It shows you where attention is directed, which is only one layer of perception, but a crucial one. And perhaps the most important realisation: the method you choose does not just measure reality, it frames it.

Sources:

  • Holmqvist, K. et al. (2011) Eye Tracking: A Comprehensive Guide to Methods and Measures. Oxford University Press. 
  • Duchowski, A.T. (2017) Eye Tracking Methodology: Theory and Practice. Springer. 
  • Wedel, M. & Pieters, R. (2008) Eye Tracking for Visual Marketing. Foundations and Trends in Marketing. 
  • Poole, A. & Ball, L.J. (2006) Eye tracking in human-computer interaction and usability research. 
  • Goldberg, J.H. & Wichansky, A.M. (2003) Eye tracking in usability evaluation: A practitioner’s guide. 
  • Holmqvist, K. & Andersson, R. (2017) Eye Tracking: A Comprehensive Guide to Methods and Measures (2nd ed.). 

#13 Structuring the Chaos: Integrating the Double Diamond into a Neurodesign Research Process

After establishing a general research direction around neurodesign and eye-tracking, the next step is less glamorous but significantly more important: structure. Or, more precisely, preventing this project from turning into an endless collection of “interesting observations” without a clear outcome. Following a strong recommendation from Prof. Baumann, I am integrating the Double Diamond model (Design Council, 2005) into my research process, adapting it to fit an empirical, neurodesign-driven approach. The goal is to align creative exploration with methodological rigour, especially in preparation for the exhibitions taking place in early June. The Double Diamond framework divides the design process into four phases: Discover (diverging), Define (converging), Develop (diverging), Deliver (converging). While traditionally used in design practice, it is increasingly applied in research contexts to structure problem framing and solution validation (Design Council, 2005). 

    For a neurodesign project, this is particularly useful because the field itself sits between disciplines. Without a clear framework, it is very easy to: 

    • read everything, 
    • test everything, 
    • and conclude… nothing specific. 

    Trust me, I’ve been there…

    Phase 1: Discover

    The objective at this stage is to build a broad theoretical and contextual foundation.

    To do so, the research begins with an extensive review of existing literature across multiple fields. This includes neurodesign and cognitive processing (Posner, 1980; Auerhammer, 2020), offering insight into how visual information is received and interpreted in the brain. In parallel, methodologies such as eye tracking (Holmqvist et al., 2011) provide a measurable way of observing attention in real time. Further perspectives on visual attention and cognitive load (Sweller, 1988; Spinks & Mortimer, 2016) help frame how and why certain elements are noticed, ignored, or cognitively processed. Alongside this theoretical groundwork, existing exhibition formats and visual environments are analyzed to understand how design operates within spatial and contextual settings. These observations help translate abstract concepts into applied scenarios.

    From this, a set of initial variables begins to emerge. Among them, the distinction in viewer expertise, comparing designers and non-designers, becomes particularly relevant, as well as the type of visual stimuli presented. At this stage, the focus remains intentionally broad. The aim is not immediate clarity, but rather to create a state of informed confusion, an expanded field of understanding from which more precise questions can later develop.

    Phase 2: Define

    The objective at this stage is to narrow the research focus into a testable direction before the exhibitions begin. This process is guided through continuous refinement of the research question, supported by weekly discussions with Prof. Baumann. These conversations help translate broader interests into a more precise and actionable framework. Based on this, key parameters are defined. This includes identifying the target groups, as well as establishing clear measurement criteria such as fixations, gaze paths, and areas of interest. In parallel, a structured experimental setup is developed, specifically adapted to exhibition contexts, where environmental factors play a significant role in how visual information is perceived. To ensure feasibility, initial pilot tests are conducted. These include calibration processes and technical validation, allowing potential issues to be identified and resolved early on. This phase is critical because it transforms the intention to study neurodesign into a research approach that can be systematically observed, tested, and measured in practice.

    Phase 3: Develop

    The objective at this stage is to collect and expand empirical data within real-world contexts. Eye-tracking studies are conducted during exhibitions, with early June marking a key milestone in the data collection process. Participants are drawn from both designers and non-designers, allowing for a comparative perspective on how visual information is processed across different levels of expertise. The experimental setup remains flexible throughout this phase. Adjustments and iterations are made where necessary in order to respond to practical challenges and ensure the reliability of the collected data. In addition to quantitative measurements, observational notes are recorded to capture contextual factors that may influence participant behavior but are not directly measurable through eye tracking alone. This phase represents the core stage of data generation. While grounded in a defined structure, it remains intentionally exploratory, balancing controlled conditions with the variability of real-world environments.

    Phase 4: Deliver

    The objective at this stage is to consolidate the findings and translate collected data into meaningful outcomes. The process begins with the analysis of eye-tracking data, including heatmaps, fixation durations, and gaze sequences. These metrics provide insight into how visual attention is distributed and how information is processed over time. Based on this, comparisons are drawn between different participant groups, particularly between designers and non-designers. This allows patterns in visual attention and perception to emerge, highlighting both similarities and differences in how visual stimuli are interpreted. The identified patterns are then translated into research insights, forming a foundation for further reflection. In addition, potential implications for design practice are developed, connecting empirical findings back to applied design contexts. At this stage, the project shifts from collecting data to actively interpreting and applying it, turning observations into structured knowledge.
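As a deliberately simplified sketch of the group comparison described here, mean fixation durations for the two groups could be contrasted like this. All values are invented, and a real analysis would of course use proper statistics rather than raw means:

```python
# Sketch: comparing mean fixation durations between the two participant
# groups. All numbers are hypothetical, invented for illustration only.

def mean_duration(fixation_durations_ms):
    """Mean fixation duration in milliseconds for one group."""
    return sum(fixation_durations_ms) / len(fixation_durations_ms)

designers = [220, 180, 260, 240]      # hypothetical durations (ms)
non_designers = [340, 410, 300, 390]  # hypothetical durations (ms)

print(mean_duration(designers))      # → 225.0
print(mean_duration(non_designers))  # → 360.0
```

Even a toy comparison like this makes the analytical question concrete: the interesting result is not either number alone, but the pattern of differences between groups across stimuli.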

    The exhibitions in early June function as a fixed point within the process. Instead of treating them as just another data collection opportunity, they become: 

    • a deadline for having a functional methodology, 
    • a contextual test environment for real-world perception, 
    • a transition point between the defining and developing phases. 

    In other words, if the setup is not ready by then, the entire timeline politely collapses. This structured approach also aligns with key ideas in neurodesign research. Auerhammer (2020) emphasises the importance of linking design decisions to measurable cognitive and emotional responses, rather than relying on subjective interpretation. 

    Similarly, attention research highlights that perception is selective (Posner, 1980), capacity-limited (Sweller, 1988), and expertise-dependent (Lohmeyer et al., 2014). By embedding these principles into the Double Diamond framework, the process becomes not just structured, but scientifically grounded.

    The Double Diamond does not reduce complexity—it organises it. And for a project dealing with neurodesign, eye-tracking, and human perception, that is probably the closest thing to control we are going to get. 

    Sources: 

    • Auerhammer, J. (2020) Neurodesign: Perspectives on an emerging discipline. 
    • Design Council (2005) The Double Diamond: A universally accepted depiction of the design process. 
    • Holmqvist, K. et al. (2011) Eye Tracking: A Comprehensive Guide to Methods and Measures. Oxford University Press. 
    • Lohmeyer, Q., Matthiesen, S. & Meboldt, M. (2014) Task-dependent visual behaviour of engineering designers – an eye-tracking experiment. DESIGN Conference. 
    • Posner, M.I. (1980) Orienting of attention. Quarterly Journal of Experimental Psychology, 32(1), pp. 3–25. 
    • Spinks, J. & Mortimer, D. (2016) Lost in the crowd? Using eye-tracking to investigate information processing in choice experiments. BMC Medical Informatics and Decision Making. 
    • Sweller, J. (1988) Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), pp. 257–285. 


    #12 Building a Research Practice Around Eye-Tracking in Design

    Design education often trains us to justify decisions conceptually or aesthetically. However, across my earlier work, a recurring question has emerged: How reliable is our intuition about where people look? We are holding on to principles that were established well before recent technological advancements took off. So how can we truly know how our eyes and brains experience our designs?

    Eye-tracking offers a way to approach this question empirically. By recording gaze behaviour, specifically fixations, it becomes possible to reconstruct how viewers navigate visual material in real time (Scene Grammar Lab, 2023). This allows for a shift from speculative reasoning to evidence-based analysis of perception. Visual attention is not arbitrary; it is shaped by task, context, and prior knowledge (Eisma, Eijssen & de Winter, 2022). This suggests that design cannot be understood independently of its viewers. Or, put less diplomatically: a design without an audience is just a very confident arrangement of pixels. 

    Expertise and the Problem of “Seeing Differently” 

    Another key aspect I want to research further is the distinction between designers and non-designers as viewing groups. Existing research indicates that expertise significantly influences visual behaviour. Designers, due to training, tend to process layouts more strategically, while non-experts rely more on saliency and immediate visual cues (Lohmeyer et al., 2014). This in turn raises an uncomfortable but necessary question for all design practice: Are we designing for ourselves, or for the people who will actually engage with the work? Because if these groups fundamentally see differently, then evaluating design solely within expert circles risks missing how it functions in real-world contexts. Even when personas and target groups are, of course, researched, there is still a barrier we simply cannot foresee: the individuality of each person viewing a design. This circles back to the fact that we all live in constant progress, meaning each age, social, and target group adapts differently to trends, animations, or context. To build a structured research foundation, I will focus on eye-tracking in applied design contexts, particularly exhibitions and curated visual environments.

    Planned approach: 

    • Conduct observational studies using eye-tracking technology 
    • Compare two primary groups: 
        • Designers (trained visual literacy) 
        • Non-designers (general audience) 
    • Analyse: 
        • Fixation duration 
        • Gaze paths 
        • Areas of interest (heatmaps) 
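The area-of-interest analysis in the plan above boils down to hit-testing fixations against labelled regions of the stimulus. A minimal sketch; the AOI names and rectangle coordinates are hypothetical placeholders, not my actual exhibition layouts:

```python
# Sketch: assigning fixations to rectangular areas of interest (AOIs),
# the basis for per-region fixation metrics. AOI names and coordinates
# are hypothetical placeholders for illustration.

AOIS = {
    "title": (0, 0, 800, 150),      # (x, y, width, height) in pixels
    "image": (0, 150, 800, 500),
    "body_text": (0, 650, 800, 300),
}

def aoi_for(x, y):
    """Return the name of the AOI containing the point, or None."""
    for name, (ax, ay, w, h) in AOIS.items():
        if ax <= x < ax + w and ay <= y < ay + h:
            return name
    return None

print(aoi_for(400, 300))  # lands inside the image region → image
```

Summing fixation durations per returned AOI name then yields exactly the per-element metrics (fixation duration, areas of interest) listed in the plan.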

    The aim is to generate empirical data on how different audiences engage with design, rather than relying on assumptions or post-rationalised explanations. Alongside data collection, I will have weekly consultations with Professor Baumann to refine the research direction. At this stage, the topic is intentionally broad, perhaps too broad. The goal of these meetings is to iteratively narrow the focus into a clearly defined research question. Because right now, the working title could easily be:  “Everything About Eye-Tracking, Everywhere, All at Once.”

    Semester Plan & Why This Matters

    To keep this project from dissolving into beautifully organised chaos, I am going to need a more narrowly defined timetable. Since the beginning of the semester can get slightly hectic, I will start the research proper a month in. So I propose the following structure:

    Weeks 4–6:  Methodological framework and technical setup (including inevitable calibration struggles) 

    Weeks 7–10: Data collection in exhibitions and design environments 

    Weeks 11–12:  Data analysis (heatmaps, gaze plots, mild existential doubt) 

    Weeks 13–14:  Synthesis and refinement of research focus 

    This project is ultimately about repositioning design as an evidence-informed practice. By integrating eye-tracking data, we can begin to understand not just what design communicates, but how it is actually perceived.

    Because if design is a form of communication, then attention is its most fundamental currency. 

    Sources:

    • Eisma, Y.B., Eijssen, D. & de Winter, J.C.F. (2022) What attracts the driver’s eye attention as a function of task and environment. Information (Switzerland), 13(7). 
    • Lohmeyer, Q., Matthiesen, S. & Meboldt, M. (2014) Task-dependent visual behaviour of engineering designers – an eye-tracking experiment. DESIGN Conference. 
    • Rodemer, M. et al. (2022) Dynamic signals in instructional videos support students to navigate through complex representations. Applied Cognitive Psychology. 
    • Scene Grammar Lab (2023) Eye-tracking research overview. 
    • Spinks, J. & Mortimer, D. (2016) Lost in the crowd? Using eye-tracking to investigate information processing in choice experiments. BMC Medical Informatics and Decision Making. 

    #11 Let’s catch up!

    This semester, my research is not just about design itself; it’s about how the brain processes what we see and how we can actually visualize this data. Do people visually engage with design differently?

    Since a lot of you might not be familiar with my research, I thought it would be nice to do a quick catch-up on the matter. Last semester, I spent my time further deepening my understanding of neurodesign. Neurodesign sits at the intersection of design, cognitive science, and neuroscience. So instead of evaluating design purely through aesthetics or intention, we as designers ask:

    What happens in the brain when we experience design?  

    This includes processes such as attention, perception, and decision-making, all of which influence how visual information is interpreted. Importantly, these processes are largely automatic and unconscious (Posner, 1980). This means that what we think we see and what we actually process can differ significantly. For designers, this could create a shift: from designing based on intuition to designing based on measurable cognitive responses. In a world where artificial intelligence becomes more and more advanced, it could mean immense progress for designers to dive deeper into human cognitive responses in order to make designs more relatable. Users being able to experience designs that feel tailored specifically to them could mean a new way of connecting.

    In neurodesign research, perception is understood as context-dependent and shaped by prior knowledge (Eisma, Eijssen & de Winter, 2022). This directly connects to my central comparison: Designers (trained visual literacy, pattern recognition) & Non-designers (intuitive, less structured viewing behaviour). Research suggests that expertise fundamentally alters how visual information is processed (Lohmeyer et al., 2014). Designers often scan strategically, while non-experts rely more on visual salience. 

    Attention, Cognitive Load, and Ignored Design

    Another important concept within neurodesign is cognitive load. As we all know by now, the brain has limited processing capacity, which means not all visual information receives equal attention. When designs become too complex, users may engage in selective attention, ignoring parts of the visual field entirely (Spinks & Mortimer, 2016). At the same time, research shows that guiding attention, for example through hierarchy, contrast, or motion, can significantly improve comprehension (Rodemer et al., 2022). 

    More design ≠ more understanding 

    However, this raises an important question: if attention can be guided, to what extent can perception actually be controlled? While design strategies such as hierarchy and contrast allow designers to direct visual flow, they do not guarantee uniform interpretation. Individual differences such as prior experience, cultural background, and emotional state continue to influence how information is processed. This suggests that design operates within a space of probability rather than certainty. Designers can increase the likelihood that specific elements are noticed or understood, but they cannot fully determine how a visual message will be received.

    This becomes particularly relevant when considering the role of artificial intelligence in contemporary design processes. AI systems are highly effective at optimizing visual output based on existing data patterns. They can predict where users are likely to look, which compositions perform best, and how to structure information for maximum clarity. In this sense, AI aligns closely with principles of cognitive efficiency, often reducing cognitive load by streamlining visual complexity. However, optimizing for efficiency does not necessarily equate to optimizing for experience.

    From a neurodesign perspective, engagement is not solely driven by clarity or ease of processing. Elements such as ambiguity, surprise, and even minor inconsistencies can capture attention and sustain interest. These factors introduce a level of cognitive tension, encouraging deeper processing rather than immediate recognition. While AI tends to minimize such irregularities in favor of optimized outcomes, human designers may intentionally incorporate them as part of a more nuanced design strategy. This highlights a fundamental distinction: AI operates primarily through pattern recognition and prediction, whereas human designers integrate interpretation, intuition, and contextual awareness. As a result, the integration of AI into design workflows does not eliminate the need for human input, but rather shifts its focus. Designers are no longer only responsible for producing visual outcomes, but increasingly for evaluating, selecting, and contextualizing them.

    In this evolving landscape, understanding cognitive processes becomes even more critical. By grounding design decisions in knowledge about perception, attention, and cognitive load, designers can engage more deliberately with both human users and computational systems. This creates the potential for a hybrid approach, where AI supports efficiency and scalability, while human designers maintain responsibility for meaning, relevance, and experiential quality.

    Ultimately, neurodesign does not seek to replace intuition with data, but to expand it. By making cognitive processes more visible and measurable, it allows designers to reflect on their decisions in new ways, bridging the gap between subjective experience and objective analysis. In this sense, the future of design may not lie in choosing between human or machine-driven approaches, but in understanding how both can operate together within the same cognitive and perceptual frameworks that shape how we see, interpret, and connect with the world.

    Sources:

    Eisma, Y.B., Eijssen, D. & de Winter, J.C.F. (2022) What attracts the driver’s eye attention as a function of task and environment. Information (Switzerland), 13(7). 

    Lohmeyer, Q., Matthiesen, S. & Meboldt, M. (2014) Task-dependent visual behaviour of engineering designers – an eye-tracking experiment. DESIGN Conference. 

    Posner, M.I. (1980) Orienting of attention. Quarterly Journal of Experimental Psychology, 32(1), pp. 3–25. 

    Rodemer, M. et al. (2022) Dynamic signals in instructional videos support students to navigate through complex representations. Applied Cognitive Psychology. 

    Scene Grammar Lab (2023) Eye-tracking research overview. 

    Spinks, J. & Mortimer, D. (2016) Lost in the crowd? Using eye-tracking to investigate information processing in choice experiments. BMC Medical Informatics and Decision Making.