All tech is not created equal. As well-intentioned as developers, engineers, and inventors may be, the biases they carry tend to find a way to seep into the things they create. That includes search engines, virtual assistants, and even algorithms that determine who can take out a bank loan. It even affects “older” technologies like cameras.
These days, we’re far past the point where anyone would question why you’d put a camera on a phone—they’re a ubiquitous and essential part of smartphones and laptops. Cameras are simple on the surface. They work a bit like our eyes do: the iris (like a camera’s aperture) lets in light, and the retina registers the image, much as film (or, nowadays, your phone’s sensor) does. Here’s the problem: the idea that someone’s eyes would only allow them to see people with lighter skin tones would be utterly ridiculous. When it comes to cameras, that actually isn’t always too far off.
Ultimately, cameras are technology, so it tracks that they carry their own bias. This was true even back in the 1950s, when cameras were calibrated in a way that prioritized light skin tones. And even in the pre-internet era, the ripple effects of that were massive. It resulted in cameras that did a better job of photographing white people than Black people, which affected who’d actually look good on TV, on a magazine cover, or even in a family photo. Fast forward to the internet era, where facial recognition algorithms can be more likely to mistake one Black person for another. You can imagine what flubs like this could mean when such software is used by law enforcement.
Seeing is understanding; we say “I see” when we finally comprehend something. What message does it send when people create a camera and that camera can’t see you? What does it mean when a piece of glass is connected to a supercomputer and the entire internet, and it still can’t discern your selfie at the bar? This reading list is mostly a viewing list, in hopes that you’ll see what I mean. —Xavier Harding