It’s a core function of journalism to hold decision-makers accountable. But what happens when the choice is made by an algorithm? Automated systems running in the background of modern life make decisions every millisecond about what news people see, who they should date, and what jobs they qualify for, and are increasingly used by policymakers to determine everything from which benefits people receive to who law enforcement investigates.
But getting answers about how these systems actually work can be nearly impossible. So when Lighthouse Reports approached WIRED about partnering on an investigation into risk-scoring algorithms used to predict welfare fraud in Europe, the magazine jumped at the opportunity. Lighthouse managed to gain unprecedented access to one of these algorithms, including the machine learning model’s code, the data it was trained on, and the technical documentation, which allowed the team to reconstruct the tool and test how it works.
“These systems are almost always kept secret from the public under the guise of protecting the intellectual property of the companies that create them,” says Andrew Couts, a senior editor at WIRED. “To have the opportunity to shine a light on how this technology works and how it impacts the lives of vulnerable people is a rare thing indeed.”
The result was “Suspicion Machine,” a major joint investigation with Lighthouse Reports that shows how the welfare-fraud detection systems used by governments are both inaccurate and unfair. Here, Couts and Dhruv Mehrotra dive into how they went about their reporting, past investigations that have inspired their work, and their favorite tools for reporting on the technology that shapes our lives.
From our partners: WIRED & Lighthouse Reports
Andrew Couts: I like to say that journalism is a team sport, and this piece is proof of that. It took dozens of people to make this story possible, with countless hours spent reporting, writing, editing, designing, and engineering. The result, I hope, is a clear look at the risk-scoring algorithm the city of Rotterdam, in the Netherlands, used to flag people for potential welfare fraud investigations. It was a heavy lift, but well worth it.
Dhruv Mehrotra: I’m really proud to have played a part in getting this story out there. As this kind of technology gets embedded into more and more government systems, I hope this investigation can be a model for how journalists can work together to investigate discriminatory algorithms. It’s a technical deep dive into how Rotterdam’s algorithm works, who it impacts, and who is to blame.
From our partners: WIRED
AC: On the other side of Rotterdam’s risk-scoring algorithm are the people whose lives are upended by welfare fraud investigations. Several of the people we spoke to for this story experienced extreme stress and hardship as a result of being investigated—only to have their names cleared months later.
DM: When you’re deep in a data story, it can be easy to forget that there are real people behind the numbers. This is a story about those people and how their lives were upended by fraud investigations.
From our partners: WIRED
AC: For this story, we traveled to Denmark, where for the past decade conservative politicians have increasingly pushed for crackdowns on benefits fraud. The result is that one of the world’s most famous welfare states has become an apparatus of mass surveillance, with the country’s Public Benefits Administration collecting everything from people’s tax records to details of their romantic relationships. This, despite flimsy evidence that fraud is anywhere near as big a problem as some politicians claim.
DM: This story gives the broader political context of the algorithms built to profile benefits recipients by focusing on Denmark, where the perception of rampant welfare fraud has led to the creation of a sophisticated fraud detection system.
From our partners: WIRED
AC: The flip side of the political pressure to combat welfare fraud is the businesses that sell governments on proprietary algorithms they claim are a silver bullet for catching cheaters. The problem with these public-private partnerships, we found, is that they lead to secrecy around the technology, keeping people in the dark about how these systems work and why they were flagged for investigation in the first place.
DM: There’s big money in fraud-hunting, and around the world contractors are peddling software to help governments recoup public funds. Interrogating these algorithms is essential, but getting access can be impossible.
DM: This is one of the earliest examples of a technical investigation into an algorithm. In Machine Bias, reporters at ProPublica took apart a tool meant to predict the likelihood of defendants committing future crimes. They analyzed the risk scores of more than 7,000 people and found that the algorithm was biased against Black defendants.
AC: It’s no accident that the work of two of the reporters behind this story, Julia Angwin and Surya Mattu, shows up repeatedly on this list. They’re in many ways pioneers of the new generation of data journalism that emerged following the Machine Bias piece. Angwin later founded The Markup, listed below, and Mattu was one of its earliest hires. Dhruv and I have had the opportunity to work with both of them, and I highly recommend following everything they do.
DM: This is one of the stories that made me want to be a reporter. Using flight data from previously identified FBI and DHS spy planes, BuzzFeed News trained an algorithm that could sift through commercially available flight data and flag likely surveillance operations. It was the first time I had ever seen machine learning used in journalism.
AC: I’d already been in journalism for more than a decade when this story was published, but it still sticks with me thanks to the technical complexity of what the reporters accomplished. It made me realize what’s possible when you combine technical abilities with traditional journalism skills.
DM: I think about this story often. BuzzFeed News was able to enumerate Uighur detention camps in Xinjiang by programmatically finding censored satellite imagery on Baidu Maps. I’m always on the hunt for censored satellite imagery, and this story blew my mind.
AC: As with so many of the stories on this list, the first thing I thought when I read it was, “Why didn’t I think of this?” I’m still impressed with what this reporting team was able to pull off while investigating one of the biggest humanitarian crises of the 21st century.
DM: This is the first story I ever contributed to. In it, Kashmir Hill and I built a VPN that would block millions of IP addresses belonging to Google, Facebook, Amazon, Apple, and Microsoft. We then connected all of her devices to it and watched her world fall apart.
AC: When Kashmir pitched me this story, it was in the midst of the backlash against Facebook and major tech companies more broadly, and people were increasingly “boycotting” these companies. We wanted to find out what it would take to really cut these companies out of your life, and this series is the result. Turns out, it’s virtually impossible to use the internet without coming into contact with the Big Five in one way or another.
DM: The Markup is a newsroom that has set the standard for how to be transparent when investigating big tech. It’s hard to include just one link here, but each big investigation they put out comes with an even bigger methodology. You can read about how they designed their analyses, where the dataset came from, and what they found. Each methodology is reviewed by experts before it’s published.
AC: There’s a reason I try to read every story The Markup publishes: They’re always great, and they change how I think about technology’s role in society. Bookmark this one.
AC: Like The Markup, Bellingcat is simply in a league of its own. The publication’s journalists use open source intelligence, or OSINT, to conduct their investigations, consistently finding major stories by piecing together information that’s available online if you know where—and how—to look.
DM: Every time I read a Bellingcat story, I learn some kind of new and clever OSINT technique. They are transparent about their methods and consistently go after big targets.
DM: This is an incredible tutorial written by Leon Yin, a reporter at The Markup, that walks you through how to use your browser’s developer tools to quite literally get the most out of a website. It covers a specific example of monitoring your web traffic to find and exploit hidden features of a given website in order to investigate how it works. It’s a little technical, but easy enough to follow for someone just getting started.
AC: Despite being a tech journalist for more than a decade, I’m not a particularly technical person. It’s resources like this that make it possible for anyone with the desire to learn how to conduct investigations into everyday technology.
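To give a flavor of the technique the tutorial describes, here is a minimal, hypothetical sketch in Python. The endpoint and field names are invented for illustration, but the workflow is the core idea: copy a JSON response spotted in the browser’s Network tab, then probe it for fields the page never actually displays.

```python
import json

# Hypothetical example: suppose the Network tab revealed a hidden JSON
# endpoint (e.g. /api/v2/listings) behind a site's search page. Saving
# one response body lets you explore its structure offline.
sample_response = """
{"results": [{"id": 101, "risk_score": 0.87},
             {"id": 102, "risk_score": 0.34}],
 "total": 2}
"""

data = json.loads(sample_response)

# Fields that never surface in the page's UI (here, "risk_score")
# are often exactly the hidden features worth investigating.
flagged = [r["id"] for r in data["results"] if r["risk_score"] > 0.5]
print(flagged)  # -> [101]
```

From there, a reporter might replay the original request with different parameters to see how much of the underlying dataset the site quietly exposes.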
DM: This is a useful curated database of algorithms in use across the US government at the federal, state, and local levels. It’s updated frequently, and you can set alerts for keywords.
AC: I’d actually not heard of this before Dhruv pointed it out. But now that I know about it, I can guarantee it will be one of the next rabbit holes I dive into. It’s also a reminder of just how pervasive algorithms are in government, and why it’s important to keep digging into them to better understand the forces at play in our lives.
Andrew Couts & Dhruv Mehrotra
Andrew Couts is Senior Editor, Security at WIRED, overseeing cybersecurity, privacy, policy, politics, national security, and surveillance coverage. Prior to WIRED, he served as executive editor of Gizmodo and politics editor at the Daily Dot. He is based in New York’s Hudson Valley.
Dhruv Mehrotra (he/him) is an investigative data reporter for WIRED. He uses technology to find, build, and analyze datasets for storytelling. Before joining WIRED, he worked for the Center for Investigative Reporting and was a researcher at New York University's Courant Institute of Mathematical Sciences. At Gizmodo, he was on a team that won an Edward R. Murrow Award for Investigative Reporting for their story Prediction: Bias. Mehrotra is based in New York.