On Evidence

Images are interesting because, in the digital world, it’s trivial to alter them, and yet we still tend to treat images—photographs, video—as a kind of proof. If you see a photograph in a news article, most people don’t really stop to ask whether the thing actually happened. The photograph is the thing, or at least it stands in for it. Light hit something, a camera was there, therefore this event occurred. That association feels almost automatic, and it’s been reinforced culturally for a long time.

You see this most clearly with video. Think about police shootings, body-cam footage, surveillance clips. People argue endlessly about interpretation—what someone intended, what they should have done, what context is missing—but almost nobody questions that the footage itself is a record of reality. The disagreement lives on top of the video, not underneath it. Whatever you think about what happened, the assumption is that what you’re seeing did, in fact, happen in roughly that way.

Sports offer an even cleaner example. In close races—cycling, running, horse racing—you get a photo finish. Cameras fire frames at a ridiculous rate, synced to a clock, and officials line everything up to determine who won. It’s treated as essentially irrefutable evidence. The camera doesn’t lie. Even though, in very close cases, there’s still interpretation involved—what frame counts, which body part matters, whether motion blur obscures something—the underlying assumption remains intact. The image is the authority.

Photography has been truth-adjacent for a long time, and that made sense when photography was hard: when taking a sharp, well-exposed photograph of something real required effort, and convincingly altering it required even more. Photoshop has existed for decades, but major manipulations were often obvious, especially to anyone who knew what to look for. Most images, particularly those published by institutions, were trusted by default.

The digital era quietly broke that assumption.

Now images are just bits. Perfectly copyable. Trivially editable. You can have two nearly identical versions of the same photograph circulating online at the same time, differing only in some critical detail. In one, a person is crying. In the other, they aren’t. One implies fear, guilt, weakness. The other doesn’t. And if the software is good enough—if the alteration is clean enough—there may be nothing in the image itself that allows you to resolve which one is “real.”

There was a recent example where an image circulated showing a woman being arrested, apparently crying, and later the original image surfaced showing that she wasn’t. The details of the story don’t really matter, and I don’t care much about who admitted what or when. What matters is the structure of the problem. If an authority distributes an altered image, and another authority disputes it with a slightly different version, the photograph itself stops functioning as evidence and starts functioning as rhetoric. At that point, you’re no longer evaluating the image—you’re evaluating who you trust.

Right now, computer-generated images are often still easy to spot if you know what to look for. There’s glitchiness, weird details, things that don’t quite line up. But that’s an implementation detail, not a fundamental limitation. If the tools get good enough—and they are getting better very quickly—then staring harder at the pixels won’t help. The medium itself can’t resolve the dispute.

Even before intentional manipulation, digital images are already interpretations. Light hits a sensor, gets converted into data, demosaiced, tone-mapped, color-corrected, sharpened, denoised, compressed. Even if the photographer does nothing, the camera does something. None of this is malicious, and I’m certainly not suggesting there’s a conspiracy where camera manufacturers are secretly lying to us, but it does mean the image was never a pure capture of reality to begin with. It’s always already been processed, massaged, interpreted by physics, by engineering, by software defaults.
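
To make that concrete, here's a toy sketch in Python of just two of those steps, demosaicing and tone mapping. Everything in it is made up (random "sensor" values, a crude nearest-neighbor demosaic, a bare gamma curve) and bears no resemblance to any real camera's pipeline; the point is only that the image you see was computed before anyone "edited" anything.

```python
import numpy as np

def demosaic_nearest(bayer: np.ndarray) -> np.ndarray:
    """Crude demosaic of an RGGB Bayer mosaic: each full-color pixel is
    reconstructed by interpolation, not captured directly by the sensor."""
    rgb = np.zeros((bayer.shape[0] // 2, bayer.shape[1] // 2, 3), dtype=np.float32)
    rgb[..., 0] = bayer[0::2, 0::2]                            # red sites
    rgb[..., 1] = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2  # average the two green sites
    rgb[..., 2] = bayer[1::2, 1::2]                            # blue sites
    return rgb

def tone_map(linear: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """A bare gamma curve: one of many interpretive choices baked into every image."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

# Stand-in for a raw sensor readout: an 8x8 RGGB mosaic with values in [0, 1].
raw = np.random.default_rng(0).random((8, 8)).astype(np.float32)
image = tone_map(demosaic_nearest(raw))
print(image.shape)  # (4, 4, 3): already a computed artifact
```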

We were fine with that as long as the edits felt bounded. As long as altering an image in a meaningful way required skill, time, and intent. That boundary may now be gone.

So what does proof even mean when the medium itself can’t be trusted?

This is where film becomes interesting—not because it’s purer or morally superior or some kind of antidote to modernity, but because it forces you to think about proof differently. When you shoot on film, light physically alters a silver-halide emulsion, forming a latent image. Development converts that latent image into metallic silver, and fixing washes away the unexposed halide, leaving an image encoded in matter. You can hold it. Touch it. Scratch it. Sniff it. Taste it if you want. And of course, look at it under a loupe. The negative is not a file describing an image; it’s the physical imprint of an event—photons hitting a focal plane at a particular place and time.

Yes, film can be manipulated. Prints can be dodged and burned. Negatives can be altered. Scenes can be staged. None of that is new. Ansel Adams famously altered his prints extensively. But the negative itself exists as an object in the world. It has a history. A chain of custody. Grain structure. Edge markings from a specific film stock. Dust, scratches, fingerprints. A contact sheet shows what came before and after the frame you’re looking at in a way that’s extremely difficult to alter. Someone with enough expertise can interrogate it. The object resists abstraction in a way that bytes being copied and pasted across the internet cannot.

Faking a negative convincingly is possible in theory, but hard in practice. You could project an image onto film, but it would almost certainly leave artifacts—strange color responses, odd grain behavior, inconsistencies that a trained eye could spot. The effort required matters. The object pushes back.

A digital image doesn’t. Once an image exists only as data, copies are indistinguishable from originals. Metadata can be stripped or fabricated. There is no “original” in any meaningful sense, just identical instances floating around. With film, the negative is the original. The film was actually physically present at the moment the image was captured.
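
A minimal sketch of that point in Python, with hypothetical filenames: a byte-for-byte copy hashes identically to its source, so nothing in the data itself can tell you which file came first.

```python
import hashlib
import shutil

def sha256_of(path: str) -> str:
    """Digest a file's bytes; identical bytes always give an identical digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

shutil.copyfile("photo.jpg", "copy.jpg")  # hypothetical files
assert sha256_of("photo.jpg") == sha256_of("copy.jpg")
# The bytes carry no marker of which file came first. Metadata (EXIF, etc.)
# is just more bytes inside the file and can be rewritten like any others.
```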

That physicality gives analog images a strange relevance now. Not because they’re truer, but because they’re traceable. They act like witnesses. They’re relics. They’re pieces of evidence you can actually hold in your hand and examine yourself.

You could imagine exhibiting prints alongside their negatives—not as a gimmick, but as a quiet assertion. This image has a body. This happened somewhere, sometime, and here is the material trace it left behind. You could hand someone a loupe. You could even do the silly but effective trick of inverting an iPhone screen and holding it up to a negative to see a rough positive right there on the spot.
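
For a black-and-white negative, digitally at least, that inversion is just arithmetic, roughly what the phone's inversion setting does in real time. A sketch with made-up pixel values:

```python
import numpy as np

# A scanned black-and-white negative as an 8-bit grayscale array (0-255).
# Dark silver becomes light and vice versa. Color negatives also carry an
# orange mask, so a real color inversion takes more than one subtraction.
negative = np.random.default_rng(1).integers(0, 256, size=(4, 6), dtype=np.uint8)
positive = 255 - negative
```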

None of this solves anything at scale. Film doesn’t fix misinformation. It doesn’t save us from propaganda. And romanticizing photography as truth has always been a mistake. Even if you think film looks better—and I often do—it’s technically inferior to modern digital cameras in most ways that matter.

But the contrast is useful.

In a world where any image can be synthesized, regenerated, or subtly altered, proof no longer lives in what we see. It lives in provenance, in process, in context, in whether an image has receipts.
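
What receipts might look like in practice: efforts like C2PA's content credentials attach a cryptographic signature to an image at capture, so its provenance can be checked later. Here's a minimal sketch of the idea using the third-party cryptography package; the keys and bytes are stand-ins, and real schemes sign much richer metadata than a bare hash.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Imagine the camera holds the private key and signs each frame at capture.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

image_bytes = b"...stand-in for the captured file..."
receipt = signing_key.sign(hashlib.sha256(image_bytes).digest())

# Anyone with the public key can later check whether the bytes still match.
try:
    verify_key.verify(receipt, hashlib.sha256(image_bytes).digest())
    print("receipt verifies: these bytes are what was signed")
except InvalidSignature:
    print("the image or its receipt has been altered")
```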

Film isn’t more honest than digital. It’s just harder to lie with convincingly.