Design & Research II – System, Impact, and Inclusion

Design & Research 2 | For: Katerina Sedlackova

Following my prototypes, I am now looking at how my project fits into the bigger world. I have broken this down into three parts: the system, the change it creates, and who can actually use it.

This diagram illustrates the broader ecosystem surrounding my camera-AI guidance system. I have mapped it from the core outwards to show how the project connects to the world.

The Core: The interaction between the Photographer and the Manual Camera.
Direct User Context: Students, hobbyists, and “Nostalgic Gen Z” looking for a creative rhythm.
External Ecology: The heavy hitters—Nikon/Sony (Hardware), Adobe/Midjourney (AI), and Instagram (Social). I also included E-waste, as the sustainability of our gear is part of the system.

This comparison highlights the shift from automation-first snapping to learning-aware photography.

The Goal: The goal is to move the user from being a passive passenger of an automated process to an active “Pilot” who understands their tools.

Accessibility in photography is not just about “talent”; it is a systemic issue. Using the floating barrier map, I identified the physical and cognitive hurdles that stop people from mastering manual photography.

Design & Research II – Lo-Fi Prototypes 1/6

Following my research on “Automation in Photography,” I have spent this week diving deeper into my project by creating three different prototype scenarios. Even though I haven’t tested these with real users yet, the act of making them helped me see points I was missing and gave me a better direction for my Master’s thesis.

In the first scenario, when the user opens the camera, they have to choose between two options: a Raw Mode where the user has full control, and an AI Automation mode.

The Goal: To see if forcing the user to pick a mode at the start makes them more intentional about how they want to take the photo.

The second scenario is a digital assistant that pops up on the screen while you are shooting. It explains what is happening based on the scene. For example, it might say “increase shutter speed because you are shooting action” or “reduce ISO because there is too much light.”

The Goal: To see if giving the user a “why” helps them stay in control instead of the camera just fixing the settings automatically.
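To make the prototype concrete for myself, I sketched what the assistant’s first rules might look like in code. Everything here is an assumption for the sketch: the scene metrics (`motion_score`, `lux`), the thresholds, and the function name are invented, not part of any real camera pipeline.

```python
# Hypothetical sketch of the on-screen assistant's rule logic.
# The scene metrics and thresholds below are assumptions for illustration.

def explain_suggestions(motion_score: float, lux: float,
                        shutter_s: float, iso: int) -> list[str]:
    """Return human-readable suggestions, each with a 'why'."""
    tips = []
    # Fast-moving subject but a slow shutter -> risk of motion blur.
    if motion_score > 0.5 and shutter_s > 1 / 250:
        tips.append("Increase shutter speed: you are shooting action.")
    # Bright scene but high ISO -> unnecessary noise.
    if lux > 10_000 and iso > 400:
        tips.append(f"Reduce ISO: there is too much light for ISO {iso}.")
    return tips

print(explain_suggestions(motion_score=0.8, lux=20_000,
                          shutter_s=1 / 60, iso=800))
```

The point of the rule table is that every suggestion carries its reason; the “why” is not an afterthought but the payload itself.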

The third scenario is for professional cameras. A separate device (like a phone) is attached to the camera to guide the user. It shows suggestions on which physical dials to turn to get the right settings.

The Goal: To see if the AI can act as a teacher that helps the user learn how to use the manual settings on their professional camera.
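Because this scenario talks about physical dials, the guide has to translate a suggested setting into dial clicks. A minimal sketch of that translation, assuming the common (but not universal) 1/3-stop click dials, and using an invented function name:

```python
import math

# Hypothetical sketch of the tethered guide's output: converting a
# suggested shutter speed into physical dial clicks. The 1/3-stop
# click assumption is common on many bodies but not guaranteed.

def shutter_clicks(current_s: float, target_s: float,
                   clicks_per_stop: int = 3) -> int:
    """Positive result = turn the dial toward a faster (shorter) exposure."""
    stops = math.log2(current_s / target_s)
    return round(stops * clicks_per_stop)

# Example: the user is at 1/60 s and the guide suggests 1/250 s.
clicks = shutter_clicks(current_s=1 / 60, target_s=1 / 250)
direction = "toward faster" if clicks > 0 else "toward slower"
print(f"Turn the shutter dial {abs(clicks)} clicks {direction}")
```

Working in stops rather than raw values matters here: the dials on a manual camera are logarithmic, so the guidance has to speak that language too.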

Creating these scenarios helped me see which directions I might follow, but it also left me with a big question about the design process. I understand that if you have a clear vision, prototyping early can save a lot of time. But when you are still in the early stages of defining and understanding the problem, I found prototyping extremely difficult.

To be honest, it doesn’t make total sense to me to build a solution when I haven’t even fully decided what the actual problem is yet. While I know it is supposed to be beneficial, I personally didn’t find it that helpful at this stage. It felt a bit like guessing. However, the exercise did at least show me which side of the camera-AI idea has the most potential, even if the final direction is still a bit blurry.

ID1 – NIME Article

Authors: Hugh Aynsley, Pete Bennett, Dave Meckin, Sven Hollowell, and Thomas J. Mitchell

For this NIME research task, I chose a paper that sits exactly at the intersection of my Master’s research and the future of interaction design. While many NIME papers focus on sensors or sound synthesis, this 2025 study explores the psychology of the design process when using Generative AI.

The authors conducted workshops to see how Text-to-Image (TTI) tools like Midjourney change how we brainstorm. Instead of the traditional “slow” process of sketching by hand, designers used AI to “materialize” their thoughts instantly.

Visualizing the Abstract: Turning vague feelings (like “granular” or “metallic” sounds) into concrete visual shapes.

The Power of the Pivot: Using AI “hallucinations” or mistakes as a spark for a new, unplanned design direction.

High-Speed Variation: Generating dozens of different “vibes” for a controller in seconds to see what sticks.

Style Mapping: Forcing the AI to blend two unrelated worlds—like a “violin” and a “space station”—to find a new aesthetic.

Boundary Objects: Using the AI images as a bridge to help team members understand a complex concept without long explanations.

As someone who has spent the last semester investigating whether automation “steals the joy” of creativity, this paper gave me a new perspective. I’ve often seen AI as a “thief of the mistake,” but Aynsley et al. argue that the AI’s mistakes are actually its biggest strength in the ideation phase. It provides a “surprise” factor that a human designer might never think of on their own.

What I find missing in this research, however, is the tactile reality. It’s easy to generate a beautiful, “instant” image of a musical instrument, but the paper doesn’t address the massive gap between a 2D AI dream and a functional, ergonomic 3D interface. As interaction designers, we know that how something looks is only half the battle; how it feels in the hand is where the real design happens.

Overall, I think “Instant Design” is a powerful look at how our tools are evolving. It confirms my belief that the future isn’t about the machine replacing the artist, but about the designer becoming a “Curator of Possibilities.” We are still the pilots; the AI is just helping us navigate the “Fog” of the early design phase much faster.

References:
[1] H. Aynsley, P. Bennett, D. Meckin, S. Hollowell, and T. J. Mitchell, “‘Instant Design’: Five Strategies for the use of Generative AI in NIME Ideation Workshops,” in Proc. Int. Conf. on New Interfaces for Musical Expression (NIME), 2025.

The End of the Beginning – What I’ve Learned So Far

Design & Research | Master Thesis Log 10

In research, there is rarely a clean “The End.” There are just checkpoints.

So, this is my checkpoint.

When I started this journey, I was looking for data. I wanted to know how many people use Auto Mode vs. Manual Mode. I wanted to know technical details about sensors and algorithms. But over the last few weeks—through the interviews, the failed experiments, and the late-night confusion—I found something much more important.

I found the emotional core of the problem.

The Thief of Joy

My biggest realization so far is not about technology; it’s about psychology.

I have come to believe that AI is a thief. It doesn’t steal our jobs (at least, not yet). It steals something more subtle. It steals the joy of the mistake.

In my experiments, I realized that when the camera makes everything perfect, it robs us of the curiosity in the process. It takes away that “happy accident”—the blurry, imperfect, messy shot that somehow captures the feeling better than a sharp image ever could. When we remove the struggle, we remove the satisfaction.

Where I Am Going Next

So, where does that leave me?

I am not done. I still have more research to do. I need to dig deeper into how we can bring that struggle back without making photography impossible. I need to talk to more designers and photographers and maybe even build some prototypes.

But I do have a compass now.

My direction for the next phase of my research is the concept of the “Co-Pilot.” I don’t have the solution built yet. I don’t know exactly what it looks like. But I know that the future shouldn’t be about the machine taking over. It should be about a partnership where the human stays in charge of the art, and the machine just helps us get there.

The blog series for this session ends here, but the work is just getting started.

Thank you for reading my messy, imperfect thoughts. Now, I’m going back to the research.

The Trap of Perfection: Why “Easy” is the Enemy

Design & Research | Master Thesis Log 09

In my last post, I told you I was going to spend some time experimenting with my smartphone camera—really pushing the AI settings to see what they could do. I wanted to see if I could find a way to love the automation.

Well, I tried. And I found something interesting: I hated it.

The Experiment

I went out with just my phone. No heavy gear, no lenses, just the device in my pocket. I took pictures of architecture, people, and shadows.

Technically? The photos were incredible. The AI balanced the highlights perfectly. The “Night Mode” saw things my eyes couldn’t even see. The colors were vibrant and sharp. I didn’t have to think about shutter speed or ISO. I just tapped the screen.
It was effortless. It was perfect.
And that is exactly the problem.

The Missing Ingredient

I realized that when the camera does everything, the satisfaction disappears.

When I use my manual camera, I am constantly solving problems. Is the light too harsh? Do I need to lower the shutter speed? Is the focus right? When I finally get the shot, I feel a rush of dopamine because I solved the puzzle.

With the AI phone camera, there was no puzzle. It was just… consumption. I wasn’t making an image; I was just collecting one.

The “Happy Accident”

I also realized that automation kills the “happy accident.”

Some of my best photos happened because I made a mistake. Maybe the shutter was too slow and created a beautiful blur. Maybe the exposure was dark and created a moody silhouette.

My phone refused to let me make those mistakes. It “fixed” everything instantly. It sanitized the creativity right out of the process.

The Realization

This experiment taught me more than any interview could. It taught me that friction is necessary for art.

We don’t play video games that are impossible to lose. We don’t watch movies where everything goes perfectly for the hero. We need the struggle.

So, as I move toward my final design concept, I know one thing for sure: My solution cannot just be “easier.” It has to be “harder” in the right way. We need to bring the struggle back.

Missed Connections and Surprise Conversations

Design & Research | Master Thesis Log 08

Research rarely goes according to plan.

In my last post, I told you I was hitting the “pause” button on the pressure. I promised myself I would stop forcing results and just let the process happen. And honestly? It’s working.

I had planned to share a deep-dive interview this week with a “Hybrid Shooter”—someone who mixes film and digital workflows. Unfortunately, due to scheduling conflicts, we couldn’t make it happen yet. A few weeks ago, that would have panicked me. I would have scrambled to find a backup or faked a conclusion.
But today? I’m okay with it.

Testing Without Pressure

Instead of stressing about the missing interview, I’ve been using this time to experiment on my own. I’ve been walking around with just my phone, playing with the AI settings I usually ignore. I’m trying to see exactly what the software is doing to my images—where it helps, and where it takes over. It’s different when you are just “playing” versus “researching.” You notice more.

A Random Encounter: Donnie Jacob

Then, something serendipitous happened.

I hopped onto an Instagram Live with Donnie Jacob, the content creator known for approaching strangers and taking their portraits. It wasn’t planned, but I got the chance to ask him directly about his take on AI in photography.

His answer was incredibly grounding.
He reminded me that “AI” isn’t actually new. He pointed out that we’ve had tools like the Magic Brush and content-aware fill in Photoshop for years. The technology has been here a long time; only the terms have changed.

He admitted that while we can’t run from the change—it’s inevitable—it might be too soon to make a final judgment on where it’s all going. But he shared one strong belief that really stuck with me:

He believes we have to embrace the change—we can’t hide from it—but we must never let it take control over us. The photographer has to remain the one in the driver’s seat.

It confirms what I’ve been feeling: The future isn’t about fighting the technology. It’s about knowing who is in charge.

Why I’m Hitting Pause

Design & Research | Master Thesis Log 07

I sat down tonight to write a very different blog post.

My plan was perfect. I was going to show you the charts from my latest interviews. I was going to explain the difference between “active” and “passive” users. I was going to act like I had everything figured out.

But if I am being completely honest with you? I don’t.
Right now, I am stuck.

They tell you that research is a straight line. You have a question, you find data, and you get an answer. But nobody tells you about the “Fog.” The Fog is where I am right now. It is that messy, confusing middle part where you have too much information and no idea where to put it.

Drowning in Data

Over the past few weeks, I have collected so much. I have hours of conversations with photographers. I have folders full of notes about AI, automation, and the history of the camera.

But instead of making things clearer, the data has made everything harder.
Should I focus on the art itself?
Should I focus on the psychology of the photographer?
Should I focus on the interface design of the camera?

Every time I look at my notes, I see a million different paths I could take. It feels like standing in the middle of a busy intersection with traffic coming from every direction. I am paralyzed by the possibilities.

Losing the Joy

Somewhere along the way, I think I lost the fun of this project.

When I started, I was excited. I loved the question: “Does automation kill the artist?” It felt important. But lately, the pressure to produce “results” has taken over. I found myself rushing through the research just to get to the finish line. I stopped listening to what the data was telling me because I was too busy trying to force a solution.

I was trying to design the final product before I even understood the problem.

The Power of the Pause

So, this blog post is my stop sign.

I am giving myself permission to stop running. I realized that if I keep sprinting in the dark, I am just going to hit a wall. I need to stop frantically searching for the “right” direction and just let the information sink in.

I need to go back and listen to those interviews again—not to extract quotes for a presentation, but to actually hear the emotions in their voices. I need to look at the photos again. I need to remember why I cared about this topic in the first place.

I don’t know exactly what my next step is. I don’t know if the final result will be a new camera mode, a manifesto, or a physical prototype. And to be honest, that uncertainty is really scary. It feels like I am failing.

But maybe feeling lost is just proof that I am actually exploring something new. If I knew the answer already, it wouldn’t be research, right?

For now, I am going to turn off my “analyst brain” and just breathe. The answers will come, but only if I give them space to arrive.

    Who is in Control? The Battle for Agency

    Design & Research | Master Thesis Log 06

    For the last few weeks, I have analyzed photography through the lens of philosophy. But as an Interaction Designer, I need to understand the user.

    This week, I interviewed two distinct photographers. My goal was to investigate a core design problem: When the machine (AI) takes control of the image, do we lose the art?

    The data I collected was surprising. One photographer sees a new evolution of tools, while the other sees a moral battle for truth.

    The first subject is a working digital photographer who uses modern tools. In our discussion, we talked about features like Generative Expand—where AI creates the background for you.

    For him, this isn’t about “faking” reality; it’s about utility. He explained that sometimes you don’t have the budget for a studio or the right location, so the AI helps you “fix” the background. He is willing to give up control of those pixels to solve a problem.

    I pushed him on the question of Agency: If the camera is digital, is the computer doing the work?

    He clarified a crucial distinction across several of his answers. For him, the human is absolutely still in control, provided one condition is met: Manual Settings.

    He emphasized that as long as the photographer is manually managing the technical variables—White Balance, Shutter Speed, Aperture, ISO—the human is the “Pilot.” Even if the image is digital, the decisions are human.

    This is a vital finding for my thesis. It suggests that for digital natives, “Agency” is located in the Settings Menu, not the film roll.

    However, he admitted that as AI improves, this balance might shift. He expressed a real uncertainty about where the line will be drawn in the future:

    “I don’t think it will ever die honestly it’s a form of art that’s been around forever I think it may change in ways I hope it doesn’t get so reliant on AI but who truly knows.”

    Then, he offered a profound prediction. He believes that the definition of “Authenticity” is about to shift. Just as Film became the “vintage” alternative to Digital, he believes standard Digital cameras will become the “Authentic” alternative to AI:

    “I think you will always have it around even if one day ai takes over you will have those who will still shoot film and those who will use digital as the new form of film vs AI which scary to think about but true”

    This suggests that “Agency” is relative. In 2030, holding a digital camera and manually setting the White Balance might be seen as the ultimate act of human control, because it proves a human was there.

    On the other side, I interviewed a legend in the New York film photography scene. He is known for capturing the “Madness” of NYC—raw, unedited, and chaotic.

    I asked him if the perfection of AI images offends him. His answer ignored the utility argument entirely. He focused strictly on Value.

    “I ignore it. Work done by a human will always be worth more”

    He believes that the “Apparatus” (the machine) cannot create value. Only human labor creates value. When I asked if the public will eventually be fooled by the shiny look of AI, he gave a final verdict:

    “The truth always prevails”

    For the Purist, “Control” is binary. You either have it, or you don’t. He refuses to let the AI fix his backgrounds or clean up his noise, because those imperfections are where the “Truth” lives.

    This field research has clarified the conflict I am studying. We have two user groups with opposing definitions of “Control”:

    • Group A (Evolution): Believes in Selective Control. As long as they control the technical settings (Manual Mode), they feel like the artist—even if AI helps generate the background.
    • Group B (Resistance): Believes in Total Control. They reject machine intervention entirely because they believe value comes from physical truth and human labor.

    Refining the Question:
    If we allow AI to take over parts of the process (as Group A accepts), do we eventually destroy the “Value” that Group B cherishes? Or is controlling the “Settings” enough to keep the human soul alive?

    Next Steps

    To answer this, I need to find the middle ground. Next week, I am interviewing “Hybrid” creators, people who use both manual film cameras and high-tech digital cameras, to see how they navigate the balance between Control and Automation.

    The Authenticity Paradox

    Optical Truth vs. Emotional Truth: Why a blurry photo often tells the truth better than a sharp one.

    Design & Research | Master Thesis Log 05

    In computer science, “noise” is an error. In art, “noise” is texture.

    In my last blog post, I discussed how the lack of “anticipation” is killing our creativity. Now, I want to drill down into the definition of Authenticity. If we are going to design a camera that resists AI perfection, we need to understand exactly what we are trying to preserve.
    I propose that photography serves two opposing masters: Optical Truth and Emotional Truth.

    Optical Truth is objective. It is data. It asks: “Did I capture every photon correctly?”

    Modern smartphones are obsessed with this. They want zero noise, maximum sharpness, and perfect white balance. The result is what we see below: technically flawless, but emotionally sterile.

    Optical Perfection: Clean, sharp, and cold. The AI removed all the shadows where the mystery used to hide. (Photo: Joel Filipe)

      The problem is that memory doesn’t work like a 4K sensor. Memory is blurry. Memory is warm. Memory has vignetting. When an AI “cleans up” a photo, it often cleans away the feeling of the memory itself.

      The Glitch is the Gift: The blur creates the sensation of spinning. An AI would try to “fix” this face, destroying the moment. (Photo: William Klein, 1955)

      Emotional Truth is subjective. It is messy. It asks: “Does this feel like it felt?”

      Consider the work of Daido Moriyama or William Klein. Their photos are often grainy, out of focus, or tilted. By the standards of an AI Algorithm, these are “bad photos.” The AI would try to fix them.

      But the “badness” is the point. The blur is the motion. The grain is the grit of the street.

      The Crisis of Code: The fundamental issue in Interaction Design is that we have trained our machines to view human imperfection as a “bug” to be squashed. But in art, the imperfection is often the “feature.”


      This leads me to the Japanese concept of Wabi-Sabi—the acceptance of transience and imperfection.

      How do we code Wabi-Sabi into a camera?

      If I am building an “Honest Interface,” it cannot just be a “Raw Mode” (which is still just data). It needs to be a “Mood Mode.” We need controls that allow the user to tell the system: “Do not fix this. I want the blur.”

      Currently, “Portrait Mode” fakes a blur (bokeh) to look expensive. I am interested in a mode that allows Motion Blur to look alive. I want to design an interface where the user can prioritize Atmosphere over Resolution.
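One way to imagine such a “Mood Mode” is as an explicit contract between the user and the processing pipeline: a settings object that declares which “flaws” are protected from correction. The class, field names, and pipeline step names below are all my own invention, a sketch rather than any real camera API.

```python
from dataclasses import dataclass

# Hypothetical "Mood Mode" contract: the user declares which
# imperfections the pipeline must NOT correct. All names are invented.

@dataclass
class MoodMode:
    keep_motion_blur: bool = True       # do not stabilize or deblur
    keep_grain: bool = True             # skip noise reduction
    keep_vignetting: bool = True        # skip lens corrections
    atmosphere_over_resolution: bool = True

    def allowed_corrections(self, pipeline_steps: list[str]) -> list[str]:
        """Filter the AI pipeline down to steps the mood permits."""
        blocked = set()
        if self.keep_motion_blur:
            blocked.add("deblur")
        if self.keep_grain:
            blocked.add("denoise")
        if self.keep_vignetting:
            blocked.add("lens_correction")
        if self.atmosphere_over_resolution:
            blocked.add("super_resolution")
        return [step for step in pipeline_steps if step not in blocked]

mood = MoodMode()
print(mood.allowed_corrections(
    ["denoise", "deblur", "white_balance", "super_resolution"]))
# With the defaults above, only "white_balance" survives.
```

The design choice I care about is that the protections are opt-out, not opt-in: the default state of the mode is “do not fix this,” which inverts the assumption every current smartphone pipeline makes.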

      I have now established a strong theoretical framework:
      1. AI creates Zombie Formalism.
      2. Screens kill Anticipation.
      3. Algorithms prioritize Optical Truth over Emotional Truth.

      But this is all just my opinion. To turn this into a Master’s Thesis, I need to get out of the library and into the field. Next week, I will be conducting Qualitative Interviews with photographers to see if they actually feel this loss of agency, or if I am just a nostalgic romantic yelling at a cloud.

      References & Reading List

      [1] R. Barthes, Camera Lucida: Reflections on Photography. Hill and Wang, 1981.
      [2] L. Koren, Wabi-Sabi for Artists, Designers, Poets & Philosophers. Stone Bridge Press, 1994.

      AI Declaration: This blog post reflects my own research, writing, and arguments. An LLM was utilized solely to assist with the structure and organization of the content.

      The Death of Anticipation

      From “Mental Construction” to Digital Consumption: How the ‘Live View’ screen killed our ability to see.

      Design & Research | Master Thesis Log 04

      “A photograph is not created in the camera. It is created in the mind.”

      This concept, famously articulated by Stephen Shore [1], is known as Mental Construction. Shore argues that the physical act of pressing the shutter is just the final step of a long psychological process. The photographer looks at the chaos of the world, organizes it mentally into a frame, and then uses the machine to capture that thought.
      But today, this order of operations has been reversed.

      In my research into camera interfaces, I have identified a critical shift in how we interact with the image: the shift from the Viewfinder to the Screen.

        The Viewfinder (Traditional): When you look through an optical viewfinder, you are looking at reality. The camera is just a window. You have to imagine (Pre-visualize) how the film will interpret that reality. You are active.

        The Screen (Modern): When you look at a smartphone screen, you are looking at a processed simulation. The HDR is already applied. The colors are already boosted. You don’t need to imagine the photo because the computer has already finished it for you.

        This interface design encourages Post-rationalization instead of Pre-visualization. We shoot first, and ask questions later. We treat the world as raw data to be harvested, rather than a subject to be understood.

        Active Seeing: The restriction of the viewfinder forces the eye to focus. (Source: Unsplash)

        Ansel Adams wrote extensively about “visualization”—the ability to see the final print in your mind’s eye before the exposure is made [2].

        Digital interfaces have killed this skill. Because the feedback loop is instant (0.01 seconds), there is no gap for the imagination to live in. In film photography, there was a “Latent Image”—the invisible period between shooting and developing. That invisibility forced the photographer to trust their vision.

        By removing the latency, we removed the anxiety. But we also removed the intent. If I can take 1,000 photos in a minute and delete 999, I stop caring about the 1.

        This leads to a radical question for my thesis: Can we design for blindness?

        If the screen is the problem, maybe the solution is to take it away. I am beginning to conceptualize an interface that re-introduces “digital latency.”

        Imagine a camera app that doesn’t show you the photo immediately. Imagine a tool that forces you to define your parameters (Mood: Melancholy? Lighting: High Contrast?) before it opens the shutter.

        By delaying the gratification, we might restore the “Mental Construction.” We might force the user to become an architect of the image again, rather than just a consumer of it.
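The flow I am imagining can be sketched as a capture object that refuses to open the shutter until intent is declared, and then embargoes the image for a fixed delay before it can be “developed.” The class name, the intent fields, and the embargo mechanism are illustrative assumptions, not an existing API.

```python
import time

# Hypothetical "digital latency" capture flow: intent before exposure,
# and a delay before the image becomes visible. All names are invented.

class LatentCamera:
    def __init__(self, embargo_seconds: float = 24 * 3600):
        self.embargo = embargo_seconds
        self._roll = []  # (capture_time, intent, image) triples

    def capture(self, image: bytes, *, mood: str, lighting: str) -> None:
        """The shutter only opens once intent is declared."""
        if not mood or not lighting:
            raise ValueError("Declare your intent before shooting.")
        intent = {"mood": mood, "lighting": lighting}
        self._roll.append((time.time(), intent, image))

    def develop(self) -> list[bytes]:
        """Images become visible only after the embargo has passed."""
        now = time.time()
        return [img for t, _, img in self._roll if now - t >= self.embargo]

cam = LatentCamera(embargo_seconds=0.0)  # zero delay just for the demo
cam.capture(b"...", mood="melancholy", lighting="high contrast")
print(len(cam.develop()))
```

A real version would obviously tune the embargo length, but the mechanism is the point: the gap between shooting and seeing is a deliberate design parameter, a software reconstruction of the film era’s “Latent Image.”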

        If we strip away the instant gratification and the AI perfection, what is left? Next week, I will finally tackle the definition of “Authenticity.” I will look at the debate between “Optical Truth” (what the lens sees) vs. “Emotional Truth” (what the human feels), and how we can code that difference into a system.

        References (IEEE)

        [1] S. Shore, The Nature of Photographs. Phaidon Press, 2007.
        [2] A. Adams, The Camera. Little, Brown and Company, 1980.

        AI Declaration: This blog post was drafted with the assistance of an LLM to explore the psychological concepts of ‘Mental Construction.’ The connection to Interface Design and the ‘Latent Image’ theory are my own research.