"What World Are We Building?"

danah boyd
2015 Everett C Parker Lecture: October 20, 2015

This talk was written for the 2015 Everett C Parker Lecture, in honor of his work. Being able to give this lecture is an honor, but it is also deeply humbling in light of all that he accomplished.

Citation: boyd, danah. 2015. "What World Are We Building?" Everett C Parker Lecture. Washington, DC, October 20.

INTRODUCTION

I am both honored and humbled to be with you today. Today is a day of celebration and mourning, a reminder that life and death are deeply connected and that what we do with our time on this earth matters. We are here today because Dr. Parker spent much of his life fighting for the rights of others - notably the poor and people of color - recognizing that access to new technologies for communicating and learning wasn't simply a privilege, but a right. He challenged people to ask hard questions and to ignore the seemingly insurmountable nature of complex problems. In the process, he paved a road that enables a whole new generation of activists to rally for media rights.

I’m here today to talk with you about battles underway around new internet-based technologies. I’m an ethnographer, which means that I’ve spent the bulk of my professional life trying to map cultural practices at the intersection between technology and society. It’s easy to love or hate technology, to blame it for social ills or to imagine that it will fix what people cannot. But technology is made by people. In a society. And it has a tendency to mirror and magnify the issues that affect everyday life. The good, bad, and ugly.

…..

I grew up in a small town in Pennsylvania, where I struggled to fit in. As a geeky queer kid, I rebelled against the hypocritical dynamics in my community. When I first got access to the internet - before the “World Wide Web” existed - I was like a kid in a candy store. Through early online communities, I met people who opened my eyes to social issues and helped me appreciate things that I didn’t even understand. Transgender activists who helped me understand gender. Soldiers who helped me understand war. Etc. Looking back, I often think of the internet as my saving grace because the people that I met - the *strangers* that I met - helped me take the path that I’m on today. I fell in love with the internet, as a portal to the complex, interconnected society that we live in.

I studied computer science, wanting to build systems that connected people and broke down societal barriers. As my world got bigger, I quickly realized that the internet was a platform and that what people did with that platform ran the full spectrum. I watched activists leverage technology to connect people in unprecedented ways while marketers used the same tools to manipulate people for capitalist gain. I stopped believing that technology alone could produce enlightenment.

In the late 90s, the hype around the internet became bubbalicious and it was painfully clear that economic agendas could shape technology in powerful ways. After the dot-com bubble burst in 2000, I was a part of a network of people determined to build systems that would enable people to connect, share, and communicate. By then, I was also a researcher trained by anthropologists, curious to know what people would do with this new set of tools called social media.

In the early days of social network sites, it was exhilarating watching people understand that they were part of a large global network. Many of my utopian-minded friends started dreaming again of how this structure could be used to break down social and cultural barriers. Yet, as these tools became more popular and widespread, what unfolded was not a realization of the idyllic desires of many of the early developers, but a complexity of practices that resembled the mess of everyday life.

INEQUITY GETS BAKED IN

Let’s talk youth for a second. As social media was being embraced, I was doing research, driving around the country talking with teenagers about how they understood technology in light of everything else taking place in their lives. I watched teens struggle to make sense of everyday life and their place in it. And I watched as privileged parents projected their anxieties onto the tools that made visible the lives of less-privileged youth.

Not surprisingly, as social media exploded, our country's struggle with class and race got entwined with technology. I will never forget sitting in small-town Massachusetts in 2007 with a 15-year-old white woman I call Kat, talking about her life, when she made a passing comment about how her friends had all quickly abandoned MySpace and moved to Facebook because Facebook was safer and MySpace was boring. Whatever look I gave her at that moment made her squirm. She looked down and said, “It’s not really racist, but I guess you could say that. I’m not really into racism, but I think that MySpace now is more like ghetto or whatever.”

I was taken aback and started probing to learn more, to understand her perspective. “The people who use MySpace—again, not in a racist way—but are usually more like ghetto and hip-hop rap lovers group.” As we continued talking, she became more blunt and told me that black people use MySpace and white people use Facebook.

Fascinated by Kat’s explanation and discomfort, I went back to my fieldnotes. Sure enough, numerous teens had made remarks that, when read with Kat’s story in mind, made it very clear that a social division had unfolded between these two sites during the 2006-2007 school year. I started asking teens about these issues and heard many more accounts of how race affected engagement. After I posted an analysis online, I got a response from a privileged white boy named Craig.

“The higher castes of high school moved to Facebook. It was more cultured, and less cheesy. The lower class usually were content to stick to MySpace. Any high school student who has a Facebook will tell you that MySpace users are more likely to be barely educated and obnoxious. Like Peet’s is more cultured than Starbucks, and Jazz is more cultured than bubblegum pop, and like Macs are more cultured than PC’s, Facebook is of a cooler caliber than MySpace.”

A white girl from Westchester, NY, explained: “My school is divided into the “honors kids,” (I think that is self explanatory), the “good not-so-honors kids,” “wangstas,” (they pretend to be tough and black but when you live in a suburb in Westchester you can’t claim much hood), the “latinos/hispanics,” (they tend to band together even though they could fit into any other groups) and the “emo kids” (whose lives are allllllways filled with woe). We were all in MySpace with our own little social networks but when Facebook opened its doors to high schoolers, guess who moved and guess who stayed behind.”

This was not the first time that racial divisions became visible in my research. I had mapped networks of teens using MySpace from single schools, only to find that, in supposedly “integrated” schools, friendship patterns were divided by race. And I’d witnessed and heard countless examples of the ways in which race configured everyday social dynamics that bubbled up through social media. In our supposedly post-racial society, social relations and dynamics were still configured by race. But today’s youth don’t know how to talk about race or make sense of what they see.

And so, in 2006-2007, I watched a historic practice reproduce itself online. I watched a digital white flight. Just as in US cities in the 1970s, MySpace got painted as a dangerous place filled with unsavory characters, while Facebook was portrayed as clean and respectable. And with money, media, and privileged users behind Facebook, it became the dominant player that attracted everyone. The racial divisions didn’t go away; they simply shifted onto newer technologies - consider Instagram and Vine, for example.

Teenagers weren’t creating the racialized dynamics of social media; they were reproducing what they saw everywhere else and projecting it onto their tools. And they weren’t alone. Journalists, parents, politicians, and pundits gave them the racist language that they reiterated. And today’s technology is valued - culturally and financially - based on how much it’s used by the most privileged members of our society.

STATISTICAL PREJUDICE

Let’s now shift focus.

Thirteen years ago, when a group of us were sitting around a table trying to imagine how to build tools that would support rich social dynamics, none of us could’ve imagined being where we are now. Sure, there were those who wanted to be rich and famous, but no one thought that a social network site would be used by over a billion people and valued in the hundreds of billions of dollars. No one thought that every major company would have a “social media strategy” within a few years or that the technologies we were architecting would reconfigure the political and cultural landscape. None of us were focused on what we now know as “big data.”

“Big data” is a fuzzy amorphous concept, referencing a set of technologies and practices for analyzing large amounts of data. These days, though, it’s primarily a phenomenon, promising that if we just have more data, we can solve all of the world’s problems. Of course, the problem with “big data” isn’t whether or not we have the data, but whether or not we have the ability to make meaning from and produce valuable insights with data. And this is often trickier than one might imagine.

One of the perennial problems with the statistical and machine learning techniques that underpin “big data” analytics is that they rely on data entered as input. And when the data you input is biased, what you get out is just as biased. These systems learn the biases in our society. And they spit them back out at us.
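To make that mechanism concrete, here is a minimal sketch in Python - with made-up data, not any real system - of how a “model” trained on historical decisions simply reproduces whatever pattern those decisions contain:

    from collections import defaultdict

    # Hypothetical historical records: (neighborhood, was_approved).
    # The past decisions were biased against neighborhood "B".
    history = [
        ("A", True), ("A", True), ("A", True), ("A", False),
        ("B", False), ("B", False), ("B", False), ("B", True),
    ]

    # "Training": estimate the approval rate per neighborhood from past decisions.
    counts = defaultdict(lambda: [0, 0])  # neighborhood -> [approvals, total]
    for neighborhood, approved in history:
        counts[neighborhood][0] += int(approved)
        counts[neighborhood][1] += 1

    def predict_approval(neighborhood: str) -> float:
        """Return the learned probability of approval for a new applicant."""
        approvals, total = counts[neighborhood]
        return approvals / total

    print(predict_approval("A"))  # 0.75
    print(predict_approval("B"))  # 0.25 - the historical bias, faithfully learned

Nothing in that code mentions race or class, and nothing in it is malicious; the bias arrives entirely through the data it is given.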

Consider the work done by Latanya Sweeney, a brilliant computer scientist. One day, she was searching for herself on Google when she noticed that the ads displayed were for companies offering criminal record background checks, with titles like “Latanya Sweeney, Arrested?” - thereby implying that she might indeed have a criminal record. Suspicious, she started searching for other, more white-sounding names, only to find that the advertisements offered alongside those names were quite different. She then set out to test the system more formally and found that, indeed, searches for black-sounding names were much more likely to produce ads for criminal justice products and services.

This story attracted a lot of media attention. But what the public failed to understand was that Google wasn’t intentionally discriminating or selling ads based on race. Google’s system wasn’t even aware of the content of the ads. All it knew was that people clicked on those ads for some searches but not for others, so it served the ads up whenever a search query had statistical properties similar to the queries where clicks had happened. Because racist users were more likely to click on these ads when searching for black-sounding names, Google’s algorithm quickly learned to serve up those ads for names that are understood as black. In other words, Google was trained to be racist by its very racist users.
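For those curious about the mechanics, here is a toy sketch - hypothetical, and in no way Google’s actual code - of how a click-driven ad server can “learn” a pattern like this without knowing anything about race or about an ad’s content:

    import random
    from collections import defaultdict

    ads = ["background_check_ad", "neutral_ad"]  # hypothetical ad inventory

    impressions = defaultdict(lambda: defaultdict(int))  # query_group -> ad -> times shown
    clicks = defaultdict(lambda: defaultdict(int))       # query_group -> ad -> times clicked

    def choose_ad(query_group: str) -> str:
        """Serve the ad with the best observed click-through rate for this query group."""
        def ctr(ad: str) -> float:
            shown = impressions[query_group][ad]
            return clicks[query_group][ad] / shown if shown else 0.0
        if random.random() < 0.1:          # occasionally explore
            return random.choice(ads)
        return max(ads, key=ctr)           # otherwise exploit the learned pattern

    def record_feedback(query_group: str, ad: str, clicked: bool) -> None:
        """Log what was shown and whether it was clicked; this is all the system 'knows'."""
        impressions[query_group][ad] += 1
        clicks[query_group][ad] += int(clicked)

If users click the background-check ad more often when one group of names is searched, the system converges on serving that ad for those names. No notion of race appears anywhere in the code, yet the outcome is racialized.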

Our cultural prejudices are deeply embedded in countless datasets, the very datasets that our systems are trained to learn on. Students of color are much more likely to have disciplinary school records than white students. Black men are far more likely to be stopped and frisked, arrested for drug possession, or charged with felonies, even when their white counterparts engage in the same behaviors. Poor people are far more likely to have health problems, live further away from work, and struggle to make rent. Yet all of these data are used to fuel personalized learning algorithms, risk-assessment tools for judicial decision-making, and credit and insurance scores. And so the system “predicts” that people who are already marginalized are higher risks, thereby constraining their options and making sure they are, indeed, higher risks.
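That feedback loop can be sketched in a few lines. The numbers below are invented, but the ratchet they illustrate is the point: once a score triggers extra scrutiny, the scrutiny itself generates more recorded “flags,” which drive the score higher still:

    def risk_score(prior_flags: int) -> float:
        """Hypothetical score: more recorded flags -> higher predicted risk."""
        return min(1.0, 0.1 + 0.2 * prior_flags)

    def simulate(prior_flags: int, years: int = 5) -> int:
        """Each year, a high score brings extra scrutiny, and scrutiny produces more flags."""
        flags = prior_flags
        for _ in range(years):
            if risk_score(flags) > 0.4:  # flagged as "high risk"
                # Extra scrutiny surfaces and records incidents that would go
                # unrecorded for a lower-scored person behaving the same way.
                flags += 1
        return flags

    print(simulate(prior_flags=0))  # stays at 0
    print(simulate(prior_flags=2))  # climbs to 7

Two people behaving identically end up with very different records, and the prediction gets to congratulate itself on being right.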

This was not what my peers set out to create when we imagined building tools that allowed you to map who you knew or enabled you to display interests and tastes. We didn’t architect for prejudice, but we didn’t design systems to combat it either.

Lest you think that I fear and despise “big data”, let me take a moment to highlight the potential. I’m on the board of Crisis Text Line, a phenomenal service that allows youth in crisis to communicate with counselors via text message. We’ve handled millions of conversations with youth who are struggling with depression, disordered eating, suicidal ideation, and sexuality confusion. The practice of counseling is not new, but the potential shifts dramatically when you have millions of messages about crises that can help train a system designed to help people. Because of the analytics we do, counselors are encouraged to take specific paths to suss out how they can best help the texter. Natural language processing allows us to automatically bring up resources that might help a counselor, or to encourage a counselor to pass the conversation on to a different counselor who may be better suited to help that particular texter. In other words, we’re using data to empower counselors to better help youth who desperately need our help. And we’ve done more active rescues during suicide attempts than I like to count. So many youth lack access to basic mental health services.
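To give a sense of what that looks like, here is a schematic sketch - not Crisis Text Line’s actual system, just an illustration of the general technique - of how simple text analysis can surface resources or suggest routing a conversation to a specialist:

    RESOURCE_KEYWORDS = {
        "eating_disorder_resources": {"eating", "food", "purge", "weight"},
        "suicide_protocol": {"suicide", "kill myself", "end it"},
    }

    SPECIALTIES = {
        "suicide_protocol": "counselor trained in active-rescue protocol",
    }

    def suggest(message: str):
        """Return (resources to show the counselor, suggested specialist or None)."""
        text = message.lower()
        matched = [name for name, keywords in RESOURCE_KEYWORDS.items()
                   if any(k in text for k in keywords)]
        specialist = next((SPECIALTIES[m] for m in matched if m in SPECIALTIES), None)
        return matched, specialist

    print(suggest("I stopped eating days ago and I just want to end it"))
    # (['eating_disorder_resources', 'suicide_protocol'],
    #  'counselor trained in active-rescue protocol')

Real systems are far more sophisticated than keyword matching, but the shape is the same: the texts train and trigger the tools, and the tools put better options in front of a human counselor.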

But the techniques we use at CTL are the exact same techniques that are used in marketing. Or personalized learning. Or predictive policing. Let’s examine the latter for a moment. Predictive policing involves taking prior information about police encounters and using it to make a statistical assessment about the likelihood of crime happening in a particular place or involving a particular person. In a very controversial move, Chicago has used such analytics to make a list of people most likely to be a victim of violence. In an effort to prevent crime, police officers approached those individuals and used this information to try to scare them into staying out of trouble. But surveillance by powerful actors doesn’t build trust; it erodes it. Imagine that same information being given to a social worker. Even better, to a community liaison. Sometimes, it’s not the data that’s disturbing, but how it’s used. And by whom.
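For the place-based version of predictive policing, the core computation can be as bare-bones as the sketch below (the data is invented). Notice what the input actually is: recorded police encounters, not crime itself, so areas that were historically over-policed come out “riskier” by construction.

    from collections import Counter

    # Hypothetical prior encounters: (area, year)
    encounters = [("Area 1", 2013), ("Area 1", 2014), ("Area 1", 2014),
                  ("Area 2", 2014)]

    def area_scores(records, current_year=2015, decay=0.5):
        """Score each area by a recency-weighted count of recorded encounters."""
        scores = Counter()
        for area, year in records:
            scores[area] += decay ** (current_year - year)
        return scores

    print(area_scores(encounters).most_common())
    # [('Area 1', 1.25), ('Area 2', 0.5)]

The same scores could route patrol cars, or they could route social workers and community liaisons; the arithmetic doesn’t care, but the people on the receiving end certainly do.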

THE WORLD WE’RE CREATING

Knowing how to use the data isn’t easy. One of my colleagues at Microsoft Research - Eric Horvitz - can predict with startling accuracy whether someone will be hospitalized based on what they search for. What should he do with that information? Reach out to people? That’s pretty creepy. Do nothing? Is that ethical? No matter how good our predictions are, figuring out how to use them is a complex social and cultural issue that technology doesn’t solve for us. In fact, as it stands, technology is just making it harder for us to have a reasonable conversation about agency and dignity, responsibility and ethics.

Data is power. And, increasingly, we’re seeing data being used to assert power over people. It doesn’t have to be this way, but one of the things that I’ve learned is that, unchecked, new tools are almost always empowering to the privileged at the expense of those who are not.

Dr. Parker understood that. He understood that if we wanted less privileged people to be informed and empowered, they needed access to the same types of quality information and communication technologies as those who were privileged. Today, we’re standing on a new precipice. For most media activists, unfettered internet access is at the center of the conversation. And that is critically important. But I would like to challenge you to think a few steps ahead of the current fight.

We are moving into a world of prediction. A world where more people are going to be able to make judgments about others based on data. Data analysis that can mark the value of people as worthy workers, parents, borrowers, learners, and citizens. Data analysis that has been underway for decades but is increasingly salient in decision-making across numerous sectors. Data analysis that most people don’t understand.

Many activists will be looking to fight the ecosystem of prediction, to regulate when and where it can be used. That is all well and good when we’re talking about technologies designed to do harm. But more often than not, these tools will be designed to be helpful, to increase efficiency, to identify people who need help. And they will be used for good alongside uses that are terrifying. How can we learn to use this information to empower?

One of the most obvious issues is that the diversity of people who are building and using these tools to imagine our future is extraordinarily narrow. Statistical and technical literacy isn’t even part of the curriculum in most American schools. In our society, where technology jobs are high-paying and technical literacy is needed for citizenship, less than 5% of high schools even offer AP computer science courses. Needless to say, black and brown youth are much less likely to have access, let alone opportunities. If people don’t understand what these systems are doing, how do we expect them to challenge them?

We must learn how to ask hard questions of technology and of those making decisions based on its analysis. It wasn’t long ago that financial systems were total black boxes and we fought for fiduciary accountability to combat corruption and abuse. Transparency of data, algorithms, and technology isn’t enough; we need to make certain that assessment is built into any system we roll out. You can’t just put millions of dollars of surveillance equipment into the hands of the police in the hope of creating police accountability. Yet, with police body-worn cameras, that’s exactly what we’re doing. And we’re not even trying to assess the implications. This is probably the fastest roll-out of a technology driven by hope, and it won’t be the last. So how do we get people to look beyond their hopes and fears and actively interrogate the trade-offs?

More and more, technology is going to play a central role in every sector, every community, and every interaction. It’s easy to screech in fear or dream of a world in which every problem magically gets solved. But to actually make the world a better place, we need to start paying attention to the different tools that are emerging and learn to ask hard questions about how they should be put into use to improve the lives of everyday people. Now, more than ever, we need those who are thinking about social justice to understand technology and those who understand technology to commit to social justice.

Thank you!