The Authenticity Paradox

Optical Truth vs. Emotional Truth: Why a blurry photo often tells the truth better than a sharp one.

Design & Research | Master Thesis Log 05

In computer science, “noise” is an error. In art, “noise” is texture.

In my last blog post, I discussed how the loss of “anticipation” is killing our creativity. Now I want to drill down into the definition of Authenticity. If we are going to design a camera that resists AI perfection, we need to understand exactly what we are trying to preserve.
I propose that photography serves two opposing masters: Optical Truth and Emotional Truth.

Optical Truth is objective. It is data. It asks: “Did I capture every photon correctly?”

Modern smartphones are obsessed with this. They want zero noise, maximum sharpness, and perfect white balance. The result is what we see below: technically flawless, but emotionally sterile.

Optical Perfection: Clean, sharp, and cold. The AI removed all the shadows where the mystery used to hide. (Photo: Joel Filipe)

The problem is that memory doesn’t work like a 4K sensor. Memory is blurry. Memory is warm. Memory has vignetting. When an AI “cleans up” a photo, it often cleans away the feeling of the memory itself.

The Glitch is the Gift: The blur creates the sensation of spinning. An AI would try to “fix” this face, destroying the moment. (Photo: William Klein, 1955)

Emotional Truth is subjective. It is messy. It asks: “Does this feel like it felt?”

Consider the work of Daido Moriyama or William Klein. Their photos are often grainy, out of focus, or tilted. By the standards of an AI algorithm, these are “bad photos.” The algorithm would try to fix them.

But the “badness” is the point. The blur is the motion. The grain is the grit of the street.

The Crisis of Code: The fundamental issue in Interaction Design is that we have trained our machines to treat human imperfection as a “bug” to be squashed. But in art, the imperfection is often the “feature.”


This leads me to the Japanese concept of Wabi-Sabi: the acceptance of transience and imperfection [2].

How do we code Wabi-Sabi into a camera?

If I am building an “Honest Interface,” it cannot just be a “Raw Mode” (raw files are still just data). It needs to be a “Mood Mode”: controls that let the user tell the system, “Do not fix this. I want the blur.”

Currently, “Portrait Mode” fakes background blur (bokeh) to look expensive. I am interested in a mode that allows motion blur to look alive: an interface where the user can prioritize Atmosphere over Resolution.
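As a thought experiment, the “Mood Mode” idea can be sketched in code. This is purely illustrative: every name here (`MoodSettings`, `plan_pipeline`, the step names) is invented for this post and does not correspond to any real camera API. The point is only that preservation flags would *remove* correction steps from the processing plan, rather than add effects on top.

```python
# Hypothetical sketch of a "Mood Mode" configuration for a camera
# processing pipeline. All names are invented for illustration.
from dataclasses import dataclass


@dataclass
class MoodSettings:
    preserve_motion_blur: bool = True    # skip deblurring / stabilization
    preserve_grain: bool = True          # skip denoising
    preserve_vignette: bool = True       # skip lens-shading correction
    atmosphere_over_resolution: float = 0.8  # 0 = Optical Truth, 1 = Emotional Truth


def plan_pipeline(mood: MoodSettings) -> list[str]:
    """Return the processing steps the camera would actually run."""
    steps = ["demosaic", "white_balance"]
    # Each "preserve" flag removes a correction instead of adding an effect.
    if not mood.preserve_grain:
        steps.append("denoise")
    if not mood.preserve_motion_blur:
        steps.append("deblur")
    if not mood.preserve_vignette:
        steps.append("lens_shading_correction")
    # Sharpening strength falls as the user leans toward atmosphere.
    sharpen = round(1.0 - mood.atmosphere_over_resolution, 2)
    steps.append(f"sharpen:{sharpen}")
    return steps


print(plan_pipeline(MoodSettings()))
# → ['demosaic', 'white_balance', 'sharpen:0.2']
```

The design choice worth noticing: an honest interface is defined by what it refuses to do. With the defaults above, denoising, deblurring, and vignette correction simply never enter the plan; the user opts *in* to perfection rather than fighting to opt out of it.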

I have now established a theoretical framework:
1. AI creates Zombie Formalism.
2. Screens kill Anticipation.
3. Algorithms prioritize Optical Truth over Emotional Truth.

But this is still just my opinion. To turn it into a Master’s Thesis, I need to get out of the library and into the field. Next week, I will conduct qualitative interviews with photographers to see whether they actually feel this loss of agency, or whether I am just a nostalgic romantic yelling at a cloud.

References & Reading List

[1] R. Barthes, Camera Lucida: Reflections on Photography. Hill and Wang, 1981.
[2] L. Koren, Wabi-Sabi for Artists, Designers, Poets & Philosophers. Stone Bridge Press, 1994.

AI Declaration: This blog post reflects my own research, writing, and arguments. An LLM was utilized solely to assist with the structure and organization of the content.