The Trap of Perfection: Why “Easy” is the Enemy

Design & Research | Master Thesis Log 09

In my last post, I told you I was going to spend some time experimenting with my smartphone camera—really pushing the AI settings to see what they could do. I wanted to see if I could find a way to love the automation.

Well, I tried. And I found something interesting: I hated it.

The Experiment

I went out with just my phone. No heavy gear, no lenses, just the device in my pocket. I took pictures of architecture, people, and shadows.

Technically? The photos were incredible. The AI balanced the highlights perfectly. The “Night Mode” saw things my eyes couldn’t even see. The colors were vibrant and sharp. I didn’t have to think about shutter speed or ISO. I just tapped the screen.
It was effortless. It was perfect.
And that is exactly the problem.

The Missing Ingredient

I realized that when the camera does everything, the satisfaction disappears.

When I use my manual camera, I am constantly solving problems. Is the light too harsh? Do I need to lower the shutter speed? Is the focus right? When I finally get the shot, I feel a rush of dopamine because I solved the puzzle.

With the AI phone camera, there was no puzzle. It was just… consumption. I wasn’t making an image; I was just collecting one.

The “Happy Accident”

I also realized that automation kills the “happy accident.”

Some of my best photos happened because I made a mistake. Maybe the shutter was too slow and created a beautiful blur. Maybe the exposure was dark and created a moody silhouette.

My phone refused to let me make those mistakes. It “fixed” everything instantly. It sanitized the creativity right out of the process.

The Realization

This experiment taught me more than any interview could. It taught me that friction is necessary for art.

We don’t play video games that are impossible to lose. We don’t watch movies where everything goes perfectly for the hero. We need the struggle.

So, as I move toward my final design concept, I know one thing for sure: My solution cannot just be “easier.” It has to be “harder” in the right way. We need to bring the struggle back.

Missed Connections and Surprise Conversations

Design & Research | Master Thesis Log 08

Research rarely goes according to plan.

In my last post, I told you I was hitting the “pause” button on the pressure. I promised myself I would stop forcing results and just let the process happen. And honestly? It’s working.

I had planned to share a deep-dive interview this week with a “Hybrid Shooter”—someone who mixes film and digital workflows. Unfortunately, due to scheduling conflicts, we couldn’t make it happen yet. A few weeks ago, that would have panicked me. I would have scrambled to find a backup or faked a conclusion.
But today? I’m okay with it.

Testing Without Pressure

Instead of stressing about the missing interview, I’ve been using this time to experiment on my own. I’ve been walking around with just my phone, playing with the AI settings I usually ignore. I’m trying to see exactly what the software is doing to my images—where it helps, and where it takes over. It’s different when you are just “playing” versus “researching.” You notice more.

A Random Encounter: Donnie Jacob

Then, something serendipitous happened.

I hopped onto an Instagram Live with Donnie Jacob, the content creator known for approaching strangers and taking their portraits. It wasn’t planned, but I got the chance to ask him directly about his take on AI in photography.

His answer was incredibly grounding.
He reminded me that “AI” isn’t actually new. He pointed out that we’ve had tools like the Magic Brush and content-aware fill in Photoshop for years. The technology has been here a long time; only the terms have changed.

He admitted that while we can’t run from the change—it’s inevitable—it might be too soon to make a final judgment on where it’s all going. But he shared one strong belief that really stuck with me:

He believes we have to embrace the change—we can’t hide from it—but we must never let it take control over us. The photographer has to remain the one in the driver’s seat.

It confirms what I’ve been feeling: The future isn’t about fighting the technology. It’s about knowing who is in charge.

The Tyranny of the Perfect Image

Design & Research | Master Thesis Log 03

There is a common phrase repeated in tech reviews today: “Everyone is a photographer.”

The logic goes like this: We all have 200-megapixel sensors in our pockets. We have stabilization that defies gravity and Night Modes that turn midnight into noon. Therefore, because the output is technically high-quality, the act must be photography.

I disagree. In fact, for my thesis, I am proposing the opposite: As cameras get “better,” photography is getting worse.

We are not witnessing a renaissance of creativity; we are witnessing the rise of “Zombie Formalism”—images that look alive (sharp, colorful, perfectly exposed) but are internally dead because they lack human intent.

To understand why this is happening, I turned to the media philosopher Vilém Flusser. In his seminal work Towards a Philosophy of Photography [1], Flusser distinguishes between the “tool” and the “machine.”

A tool (like a paintbrush) serves the human. The human decides every stroke.
A machine (like a camera) has a “program.” It has pre-set rules.

The “Black Box”: When the camera makes 90% of the decisions, the user becomes a functionary, not an artist. (Source: Unsplash)

Flusser argues that most photographers are not artists; they are “Functionaries.” They simply press a button to trigger the machine’s program. In 2025, this is more true than ever. When I lift my phone to take a picture of a sunset, the AI:

• Identifies the scene (“Sunset”).
• Balances the exposure (HDR).
• Sharpens the edges.
• Boosts the saturation.

I did not make those choices. The algorithm did. I simply authorized the calculation.
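To make the point concrete, here is a minimal pure-Python sketch of just one of these pre-set steps, the saturation boost. This is my own illustration, not any vendor’s actual pipeline, and the 1.3 factor is an arbitrary default I chose for the example:

```python
def boost_saturation(pixel, factor=1.3):
    """Push a pixel's colors away from its grayscale value.

    Toy stand-in for one pre-set step in an automated pipeline;
    the default factor is illustrative, not a real vendor setting.
    """
    r, g, b = pixel
    gray = 0.299 * r + 0.587 * g + 0.114 * b  # ITU-R 601 luma weights
    clamp = lambda v: max(0, min(255, round(gray + factor * (v - gray))))
    return (clamp(r), clamp(g), clamp(b))

# A muted sunset orange becomes warmer without anyone choosing "warmer".
print(boost_saturation((120, 90, 60)))
```

The point is not the arithmetic but the authorship: the same fixed formula runs on every pixel of every image, and no decision in it belongs to the photographer.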

Perfection vs. Emotion: Sometimes the blurry shot tells the truth that the sharp shot hides. (Source: Unsplash)

The result of this automation is a homogenization of our visual culture. We are drowning in what I call the “Aesthetic of Least Resistance.”

Look at Instagram. The images are stunningly clear, but they all look the same. They lack the “friction” of reality. In Interaction Design, we are taught to remove friction—to make things seamless. But in art, friction is essential.

Film photography was full of friction. You had to measure light. You had to focus manually. You could fail. And because you could fail, your success meant something.

Wim Wenders recently critiqued this phenomenon, noting that the inflation of images leads to a deflation of meaning [2]. When a camera cannot take a “bad” picture, the “good” picture loses its value. It becomes a commodity, not a memory.

In my initial research plan, I considered conducting a visual audit of smartphone interfaces this week. However, as I dove into Flusser’s theories, I realized that analyzing the surface of the interface (the icons and buttons) is premature if we don’t first question the structure beneath it.

The core issue isn’t just how the buttons look, but how they shape our thinking. If modern AI cameras are designed to provide answers, my research is now shifting to understand how we can preserve the user’s ability to ask questions.

Closing Thought: The Search for Friction

We are building cameras that solve problems we didn’t have. The problem of “focus” was never just technical; it was artistic. When we remove the struggle, we remove the satisfaction.

As I continue this research, I am looking for the “sweet spot”—where the tool helps us, but doesn’t replace us. The goal isn’t to destroy the technology, but to find the human heartbeat buried underneath the algorithm.

References (IEEE)

[1] V. Flusser, Towards a Philosophy of Photography. London: Reaktion Books, 2000.
[2] W. Wenders, “The Act of Seeing,” in The Pixels of Paul Cézanne: And Reflections on Other Artists, 2018.

AI Declaration: This blog post was drafted with the assistance of an LLM to structure the theoretical analysis. The research selection and final arguments are my own.

The Moon is a Lie: A Case Study in Ontological Deception

Design & Research | Master Thesis Log 02
#InteractionDesign #AIPhotography #HumanInTheLoop #ResearchJourney #ComputationalPhotography

Since its invention, photography has held a unique promise: the promise of truth. Unlike a painting, which is an interpretation, a photograph was historically seen as an “index”—a physical trace left by light hitting a sensor.

But what happens when the sensor stops recording light and starts predicting it?

In my previous post, I asked if photography is dead. This week, I conducted a deep dive into the Samsung “Space Zoom” controversy. This event is not just a consumer tech scandal; for my thesis, it serves as “Ground Zero” for the ontological shift in image-making. It proves we have moved from capturing the world to generating a statistical average of it.

The controversy erupted when Reddit user u/ibreakphotos designed a clever stress test for Samsung’s “100x Space Zoom.” The user hypothesized that the camera wasn’t actually optically powerful enough to see the moon’s craters.

The Methodology:

• They downloaded a high-res image of the moon.
• They downsized it and blurred it until it was an unrecognizable, glowing white blob.
• They displayed this blob on a monitor in a dark room.
• They stood back and photographed the monitor using the Samsung S23 Ultra.
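The downsizing step is what makes the test airtight. Here is a small pure-Python sketch (my own reconstruction, not u/ibreakphotos’ actual code) showing that averaging large blocks of pixels destroys detail rather than merely hiding it:

```python
import random
import statistics

random.seed(0)

# Stand-in for a detailed moon photo: a 64x64 grid of noisy "crater" pixels.
moon = [[random.randint(0, 255) for _ in range(64)] for _ in range(64)]

def downsize_and_blur(img, block=16):
    """Replace each block-sized region with its average brightness."""
    out = []
    for by in range(0, len(img), block):
        row = []
        for bx in range(0, len(img[0]), block):
            vals = [img[y][x]
                    for y in range(by, by + block)
                    for x in range(bx, bx + block)]
            row.append(sum(vals) // len(vals))
        out.append(row)
    return out

blob = downsize_and_blur(moon)
flat = [v for row in blob for v in row]
# The spread of brightness values collapses toward uniform grey:
# any "craters" a camera later shows cannot have come from this blob.
print(round(statistics.pstdev(flat), 1))
```

Once the averaging has run, the texture information is gone from the data itself, so anything detailed in the final photo had to come from somewhere other than the scene.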

The hardware limitation: a tiny smartphone sensor cannot defy physics, yet the software claims it can. (Source: Reddit)

The Results:

The phone produced a sharp, detailed image of the moon, complete with craters and surface textures.

This was physically impossible. The source image (the blurred blob on the screen) contained zero texture data. The camera had effectively “hallucinated” the craters because its AI recognized the shape of a moon and overlaid a texture map from its internal database.

Why does this matter for Interaction Design? Because it breaks the fundamental contract between the user and the tool.

In media theory, Charles Sanders Peirce defined the photograph as an “Index”—a sign that has a physical connection to its object (like a footprint in the sand). When you look at a traditional photo, you know that the light actually touched the subject.

The Samsung Moon is no longer an Index. It is a Simulacrum. As the philosopher Jean Baudrillard argued, a simulacrum is a copy without an original. The image on the user’s phone is “hyperreal”—it looks more real than the blurry reality the user actually saw with their eyes, but it has no connection to the physical moment.

The friction lies here:

The User thinks: “I captured this.”
The System knows: “I generated this.”

This creates a gap in agency. The user believes they are the creator, but they are merely the “prompter.” The camera is no longer a tool for documentation; it is a tool for optimization. It prioritizes a “beautiful lie” over an “ugly truth.”

After analyzing this case, I do not believe the solution is to ban AI. Most users do want a clear photo of the moon, even if it is fake. However, from an Interaction Design standpoint, the failure here is not technological—it is ethical.

The Failure of “Silent Substitution”

The interface lied. It presented a generated image as a captured one. My take is that we need to redesign the camera interface to be “Honest.”

My Proposal for Future Research:

We need a UI that distinguishes between “Documentation Mode” (optical truth, flaws included) and “Simulation Mode” (AI-enhanced).

If the user knows they are painting with data, the agency is restored. They become a “Director” rather than a duped consumer. The current design trend of hiding these choices behind a single “Shutter Button” is what I call “Agency Laundering”—the machine takes the credit, but lets the user feel like the artist. My thesis aims to challenge this specific pattern.
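As a rough sketch of what such an interface contract could record (all names and fields here are my own hypothetical illustration, not an existing camera API), every image could carry an explicit provenance mode and a named list of interventions:

```python
from dataclasses import dataclass, field
from enum import Enum

class CaptureMode(Enum):
    DOCUMENTATION = "documentation"  # optical truth, flaws included
    SIMULATION = "simulation"        # AI-enhanced or generated content

@dataclass
class CaptureRecord:
    """Hypothetical provenance tag a camera app attaches to each image."""
    mode: CaptureMode
    enhancements: list = field(default_factory=list)  # each step named, not silent

    def disclosure(self) -> str:
        """Human-readable honesty label derived from the record."""
        if self.mode is CaptureMode.DOCUMENTATION and not self.enhancements:
            return "Captured optically; no generative content."
        return ("Contains AI-enhanced or generated content: "
                + ", ".join(self.enhancements or ["unspecified"]))

# The moon shot, labeled honestly instead of silently substituted.
shot = CaptureRecord(CaptureMode.SIMULATION, ["moon texture overlay"])
print(shot.disclosure())
```

The design choice that matters is that the disclosure is derived from the record, not written by the marketing copy: the system cannot claim documentation while performing simulation.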

Key Questions Arising from this Case:

1. Transparency: Should AI-enhanced photos carry a visible watermark or metadata tag indicating “Generative Content”?
2. The “Raw” Mode: Is “Pro Mode” the last bastion of authenticity, or is AI seeping into the raw data as well?
3. User Consent: Did the user consent to having their blurry moon replaced? Or did the interface assume their intent?

References (IEEE)

[1] u/ibreakphotos, “Samsung ‘Space Zoom’ moon shots are fake,” Reddit, 2023.
[2] J. Vincent, “Samsung’s moon photos are fake—but so is a lot of mobile photography,” The Verge, 2023.
[3] J. Baudrillard, Simulacra and Simulation. Ann Arbor: University of Michigan Press, 1994.

AI Declaration: This blog post was drafted with the assistance of an LLM to structure the theoretical analysis. The research selection, case study choice, and final arguments regarding ‘Indexicality’ are my own.

Is Photography Dead? Rethinking Creative Authenticity in the Age of AI

Design & Research | Master Thesis Log 01

The mechanical eye vs. the digital brain. (Source: Unsplash)

I still remember the first time I developed a roll of film. There was a specific anxiety in waiting to see if the shot came out right—the grain, the slightly missed focus, the “happy accidents.”

Today, that anxiety is gone. We are witnessing the death of the “snapshot” and the birth of the “computed image.” With the release of tools like Google’s Magic Editor and Adobe’s Generative Fill, the definition of photography has shifted from capturing light to processing data.

As an Interaction Design student coming from a background where photography was about documenting reality, this shift fascinates and terrifies me. If an algorithm frames the shot, adjusts the lighting, and even generates missing details, who is the creator? The user or the system? My Master’s research topic, “Rethinking Creative Authenticity,” investigates this exact tension.

The Visual Conflict

This image has “noise.” It has grain. It captures a fleeting moment that might never happen again. It feels human because it is flawed. (Source: Unsplash)

Computed Perfection

Clean, optimized, and statistically average. AI tools push us toward this aesthetic—images that look “correct” but feel empty. (Source: Unsplash)

The Research Framework

Central Research Question

How can interaction design redefine or preserve creativity within automated camera systems and AI-enhanced photography tools?

To answer this, I am breaking the problem down into three sub-areas:

1. Perception: Do users perceive a “technically perfect” AI image as less authentic than a flawed human image? Where is the threshold?
2. Agency: Can we design interfaces that force the user to make creative decisions rather than relying on auto-pilot?
3. Collaboration: How can AI act as a “Creative Coach” (guiding composition) rather than a “Servant” (fixing mistakes)?

Why This Matters for Design

In Interaction Design, we often talk about removing “friction.” We want apps to be easy, fast, and seamless. However, in creative tools, friction is often where the art happens. The struggle to get the focus right, or the decision to underexpose a shot for mood—that is creative intent.

If we design cameras that remove all struggle, we risk atrophying human creativity. We create a “Push Button, Get Art” culture [1]. My goal is to find the “sweet spot” where automation supports the user without replacing them.

My Approach: Research through Design

I don’t just want to write about this; I want to build a solution. My approach involves “Speculative Prototyping.” I intend to design a camera interface that resists total automation—a tool that asks you “Why?” before you shoot, rather than just fixing the “How.”

Early phase: Sketching interfaces that bring the human back into the loop. (Source: Unsplash)

My first steps:

1. Literature Review: Deep dive into “Computational Photography” ethics.
2. Interviews: Conducting qualitative sessions with photographers to understand their fears regarding AI.

References (IEEE)

[1] L. Manovich, “AI Aesthetics,” Manovich.net, 2018. [Online]. Available: http://manovich.net/index.php/projects/ai-aesthetics
[2] A. Agarwala et al., “Photographic stills from video,” ACM Transactions on Graphics (TOG), vol. 23, no. 3, pp. 585-594, 2004.
[3] H. Steyerl, “In Defense of the Poor Image,” e-flux journal, no. 10, 2009.

AI Declaration: This blog post was drafted with the assistance of an LLM to structure my initial thoughts and ensure academic formatting. The personal motivation, image selection, and research direction are entirely my own.