But often we don’t know enough about how ADM systems work to know whether they are fairer than humans would be on their own. In part because the systems make choices on the basis of underlying assumptions that are not clear even to the systems’ designers, it’s not necessarily possible to determine which algorithms are biased and which ones are not. And even when the answer seems clear, as in ProPublica’s findings on COMPAS, the truth is sometimes more complicated.

What should we do to get a better handle on ADMs? Democratic societies need more oversight of such systems than they have now. AlgorithmWatch, a Berlin-based nonprofit advocacy organization that I cofounded with a computer scientist, a legal philosopher, and a fellow journalist, aims to help people understand the effects of such systems. “The fact that most ADM procedures are black boxes to the people affected by them is not a law of nature. It must end,” we assert in our manifesto. Still, our take on the issue differs from that of many critics, because our fear is that the technology could be undeservedly demonized. What’s important is that societies, and not only algorithm makers, make the value judgments that go into ADMs.

Inspecting Algorithms for Bias (technologyreview.com)

It was a striking story. “Machine Bias,” the headline read, and the teaser proclaimed: “There’s software used across the country to predict future criminals. And it’s biased against blacks.”