The other problem? The results of the research were wrong. Those sad tweets didn’t come from Hunter—where Twitter is both banned and known as a social network only for old people—but from a single, very sad Twitter user nearby. From premise to algorithm, the case shows how data can fail at every level, even as such studies coldly lecture people about something as intimate as how they feel. “It’s also a reflection of the kinds of big data stories that we’re so eager to believe: where a large data set combined with novel algorithms shows us some secret we would not otherwise have seen,” writes Thorp. In response, Thorp calls for human data over big data: for thinking empathetically about both the people supplying information and those consuming it, and, ultimately, for allowing people to give feedback to systems that prescribe truth, often without listening.
In her essay, Sarah Groff Hennigh-Palermo goes back to basics. She examines how the early conceptual frameworks that accompanied the rise of the computer in the 1950s ultimately shaped how we think about both the world and data itself. From the outset, data was treated as powerful precisely for what it excludes, namely anything except universal mathematics. Data as it exists today ignores the fact that information is actually transmitted in a subjective context, favoring instead binary, on/off, yes/no understandings of the world. But, as Hennigh-Palermo writes, “information as we think of it is not natural, neutral, or inevitable. The universe is not a computer. That we think it might be is, in fact, a product of the context in which information was invented.”