Just Tech: Centering Community-Driven Innovation at the Margins Episode 3 with Dr. Sasha Costanza-Chock
Episode 135 | April 13, 2022
In “Just Tech: Centering Community-Driven Innovation at the Margins,” Senior Principal Researcher Mary L. Gray explores how technology and community intertwine and the role technology can play in supporting community-driven innovation and community-based organizations. Dr. Gray and her team are working to bring computer science, engineering, social science, and communities together to boost societal resilience in ongoing work with Project Resolve. She’ll talk with organizers, academics, technology leaders, and activists to understand how to develop tools and frameworks of support alongside members of these communities.
In this episode of the series, Dr. Gray and Dr. Sasha Costanza-Chock, scholar, designer, and activist, explore design justice, a framework for analyzing design’s power to perpetuate—or take down—structural inequality and a community of practice dedicated to creating a more equitable and sustainable world through inclusive, thoughtful, and respectful design processes. They also discuss how critical thinkers and makers from social movements have influenced technology design and science and technology studies (STS), how challenging the assumptions that drive who tech is built for will create better experiences for most of the planet, and how a deck of tarot-inspired cards is encouraging radically wonderful sociotechnical futures.
Transcript
[MUSIC PLAYS UNDER DIALOGUE]
MARY GRAY: Welcome to the Microsoft Research Podcast series “Just Tech: Centering Community-Driven Innovation at the Margins.” I’m Mary Gray, a Senior Principal Researcher at our New England lab in Cambridge, Massachusetts. I use my training as an anthropologist and communication media scholar to study people’s everyday uses of technology. In March 2020, I took all that I’d learned about app-driven services that deliver everything from groceries to telehealth to study how a coalition of community-based organizations in North Carolina might develop better tech to deliver the basic needs and health support to those hit hardest by the pandemic. Our research together, called Project Resolve, aims to create a new approach to community-driven innovation—one that brings computer science, engineering, the social sciences, and community expertise together to accelerate the roles that communities and technologies could play in boosting societal resilience. For this podcast, I’ll be talking with researchers, activists, and nonprofit leaders about the promises and challenges of what it means to build technology with rather than for society.
[MUSIC ENDS]
My guest for this episode is Dr. Sasha Costanza-Chock, a researcher, activist, and designer who works to support community-led processes that build shared power, dismantle the matrix of domination, and advance ecological survival. They are the director of research and design at the Algorithmic Justice League, a faculty associate with the Berkman Klein Center for Internet & Society at Harvard University, and a member of the steering committee of the Design Justice Network. Sasha’s most recent book, Design Justice: Community-Led Practices to Build the Worlds We Need, was a 2021 Engineering and Technology PROSE Award finalist and has been cited widely across disciplines. Welcome, Sasha.
SASHA COSTANZA-CHOCK: Thanks, Mary. I’m excited to be here.
GRAY: Can you tell us a little bit about how you define design justice?
COSTANZA-CHOCK: Design justice is a term—you know, I didn’t create this term; it comes out of a community of practice called the Design Justice Network. But I have kind of chronicled the emergence of this community of practice and some of the ways of thinking about design and power and technology that have sort of come out of that community. And I’ve also done some work sort of tracing the history of different ways that people have thought about design and social justice, really. So, in the book, I did offer a tentative definition, kind of a two-part definition. So, on the one hand, design justice is a framework for analysis about how design distributes benefits and burdens between various groups of people. And in particular, design justice is a way to focus explicitly on the ways that design can reproduce or challenge the matrix of domination, which is Patricia Hill Collins’ term for white supremacy, heteropatriarchy, capitalism, ableism, settler colonialism, and other forms of structural inequality. And also, design justice is a growing community of practice of people who are focused on ensuring more equitable distribution of design’s benefits and burdens, more meaningful participation in design decisions and processes, and also recognition of already existing, community-based, Indigenous, and diasporic design traditions and knowledge and practices.
GRAY: Yeah. What are those disciplines we’re missing when we think about building and building for and with justice at the center of our attention?
COSTANZA-CHOCK: It’s interesting. I think for me, um, so design and technology design in particular, I think, for me, practice came first. So, you know, learning the basics of how to code, building websites, working with the Indymedia network. Indymedia was a kind of global network of hackers and activists and social movement networks who leveraged the power of what was then the nascent internet, um, to try and create a globalized news network for social movements. I became a project manager for various open-source projects for a while. I had a lot of side gigs along my educational pathway. So that was sort of more sort of practice. So, that’s where I learned, you know, how do you run a software project? How do you motivate and organize people? I came later to reading about and learning more about sort of that long history of design theory and history. And then, sort of technology design stuff, I was always looking at it along the way, but started diving deeper more recently. So, my—my first job after my doctorate was, you know, I—I received a position at MIT. Um, and so I came to MIT to the comparative media studies department, set up my collaborative design studio, and I would say, yeah, at MIT, I became more exposed to the HCI literature, spent more time reading STS work, and, in particular, was drawn to feminist science and technology studies. You know, MIT’s a very alienating place in a lot of ways and there’s a small but excellent, you know, community of scholars there who take, you know, various types of critical approaches to thinking about technology design and development and—and sort of the histories of—of technology and sociotechnical systems. And so, kind of through that period, from 2011 up until now, I spent more time engaging with—with that work, and yeah, got really inspired by feminist STS.
I also—parallel to my academic formation and training—was always reading theory and various types of writing from within social movement circles, stuff that sometimes is published in academic presses or in peer-review journals and sometimes totally isn’t, but, to me, is often equally or even more valuable if you’re interested in theorizing social movement activity than the stuff that comes sort of primarily from the academy or from social movement studies as a subfield of sociology.
GRAY: Mm-hmm.
COSTANZA-CHOCK: Um, so I was like, you know, always reading all kinds of stuff that I thought was really exciting that came out of movements. So, reading everything that AK Press publishes, reading stuff from Autonomia, and sort of the—the Italian sort of autonomous Marxist tradition. But also in terms of pedagogy, I’m a big fan of Freire. And I didn’t encounter Freire through the academy; it was through, you know, community organizing work. So, community organizers that I was connected to were all reading Freire and reading other sort of critical and radical thinkers and scholars.
GRAY: So, wait. Hold the phone.
COSTANZA-CHOCK: OK. [LAUGHS]
GRAY: You didn’t actually—I mean, there wasn’t a class where Pedagogy of the Oppressed was taught in your training? I’m just, now, like, “Really?” That’s—
COSTANZA-CHOCK: I don’t think so. Yeah.
GRAY: Wow.
COSTANZA-CHOCK: Yeah, because I didn’t have formal training in education. It was certainly referenced, but the place where I did, you know, study group on it was in movement spaces, not in the academy. Same with bell hooks. I mean, bell hooks, there would be, like, the occasional essay in, like—I did undergraduate cultural studies stuff. Marjorie Garber, you know, I think—
GRAY: Yeah.
COSTANZA-CHOCK: had like an essay or two on her syllabus, um—
GRAY: Yeah.
COSTANZA-CHOCK: —of bell hooks. Um, so, I remember encountering bell hooks early on, but reading more of her work came later and through movement spaces. And so, then, what I didn’t see was a lot of people—although, increasingly now, I think this is happening—you know, putting that work into dialogue with design studies and with science and technology studies. And so, that’s what I—that’s what I get really excited by, is the evolution of that.
GRAY: And—and maybe to that point, I feel like you have, dare I say, “mainstreamed” Patricia Hill Collins in the computer science and engineering circles that I travel in. Like, to hear colleagues say “the matrix of domination,” they’re reading it through you, which is wonderful. They’re reading—they’re reading what that means. And design justice really puts front and center this critical approach. Can you tell us how you came to that framework and put it at the center of your work for design justice?
COSTANZA-CHOCK: Patricia Hill Collins develops the term in the ’90s. Um, the “matrix of domination” is her phrase. Um, she elaborates on it in, you know, her text, uh, Black Feminist Thought. And of course, she’s the past president of the American Sociological Association. Towering figure, um, in some fields, but, you know, maybe not as much in computer science and HCI, and other, you know, related fields. But I think unjustly so. And so, part of what I was really trying to do at the core of the Design Justice book was put insights from her and other Black feminist thinkers and other critical scholars in dialogue with some core, for me, in particular, HCI concepts, um, although I think it does, you know, go broader than that. The matrix of domination was really useful to me when I was learning to think about power and resistance, how do power and privilege operate. You know, this is a concept that says you can’t only think about one axis of inequality at a time. You can’t just talk about race or just talk about gender—you can’t just talk about class—because they operate together. Of course, another key term that connects with the matrix of domination is “intersectionality,” from Kimberlé Crenshaw. She talks about it in the context of legal theory, where she’s looking at how the legal system is not set up to actually protect people who bear the brunt of oppression. And she talks about these, you know, classic cases where Black women can’t claim discrimination under the law at a company which defends itself by saying, “Well, we’ve hired Black people.” And what they mean is they’ve hired some Black men. And they say, “And we’ve also hired women.” But they mean white women.
And so, it’s not legally actionable. The Black women have no standing or claim to discrimination because Black women aren’t protected under anti-discrimination law in the United States of America. And so that is sort of like a grounding that leads to this, you know, the conversation. The matrix of domination is an allied concept. And to me, it’s just incredibly useful because I thought that it could translate well, in some ways, into technical fields because there’s a geometry and there’s a mental picture. There’s an image that it’s relatively easy to generate for engineers, I think, of saying, “OK, well, OK, your x-axis is class. [LAUGHS] Your y-axis is gender. Your z-axis is race. This is a field. And somewhere within that, you’re located. And also, everyone is located somewhere in there, and where you’re located has an influence on how difficult the climb is.” And so when we’re designing technologies—and whether it’s, you know, interface design, or it’s an automated decision system—you know, you have to think about if this matrix is set up to unequally distribute, through its topography, burdens and benefits to different types of people depending on how they are located in this matrix, at this intersection. Is that correct? You know, do you want to keep doing that, or do you want to change it up so that it’s more equitable? And I think that that’s been a very useful and powerful concept. And I think, for me, part of it maybe did come through pedagogy. You know, I was teaching MIT undergraduates—most of them are majoring in computer science these days—and so I had to find ways to get them to think about power using conceptual language that they could connect with, and I found that this resonated.
GRAY: Yeah. And since the book has come out—and, you know, it’s been received by many different scholarly communities and activist communities—has your own definition of design justice changed at—at all? Or even the ways you think about that matrix?
COSTANZA-CHOCK: That’s a great question. I think that one of the things that happened for me in the process of writing the book is I went a lot deeper into reading and listening and thinking more about disability and how crucial, you know, disability and ableism are, how important they are as sort of axes of power and resistance, also as sources of knowledge. So, like, disability justice and disabled communities of various kinds being key places for innovation, both of devices and tools and also of processes of care. And just, there’s so much phenomenal sort of work that’s coming, you know, through the disability justice lens that I really was influenced by in the writing of the book.
GRAY: So another term that seems central in the book is “codesign.” And I think for many folks listening, they might already have an idea of what that is. But can you say a bit more about what you mean by codesign, and just how that term relates to design justice for you?
COSTANZA-CHOCK: I mean, to be entirely honest with you, I think that when I arrived at MIT, I was sort of casting around for a term that I could use to frame a studio course that I wanted to set up that would both signal what the approach was going to be while also being palatable to the administration and not scaring people away. Um, and so I settled on “codesign” as a term that felt really friendly and inclusive and was a broad enough umbrella to enable the types of partnerships with community-based organizations and social movement groups, um, that I wanted to provide scaffolding for in that class. It’s not that I think “codesign” is bad. You know, there’s a whole rich history of writing and thinking and practice, you know, in codesign. I think I just worry that, like so many things—I don’t know if it’s that the term is loose enough that it allows for certain types of design practices that I don’t really believe in or support or that I’m critical of, or if it’s just that it started meaning more of one thing, um, and then, over time, it became adopted—as many things do become adopted—um, by the broader logics of multinational capitalist design firms and their clients. But I don’t necessarily use the term that much in my own practice anymore.
GRAY: I want to understand what you felt was useful about that term when you first started applying it to your own work and why you’ve moved away from it. What are good examples of, for you, a practice of codesign that stays committed to design justice, and what are some examples of what worries you about the ambiguity of what’s expected of somebody doing codesign?
COSTANZA-CHOCK: So, I mean, there—there’s lots of terms in, like, a related conceptual space, right? So, there’s codesign, participatory design, human-centered design, design justice. I think if we really get into it, each has its own history, and sort of there are conferences associated with each. There are institutions connected to each. And there are internal debates within those communities about, you know, what counts and what doesn’t. I think, for me, you know, codesign remains broad enough to include both what I would consider to be sort of design justice practice, where, you know, a community is actually leading the process and people with different types of design and engineering skills might be supporting or responding to that community leadership. But it’s also broad enough to include what I call in the book, you know, more extractive design processes, where what happens is, you know, typically a design shop or consultant working for a multinational brand parachutes into a place, a community, a group of people, runs some design workshops, maybe—maybe does some observation, maybe does some focus groups, generates a whole bunch of ideas about the types of products or product changes that people would like to see, and then gathers that information and extracts it from that community, brings it back to headquarters, and then maybe there are some product changes or some new features or a rollout of something new that gets marketed back to people. And so in that modality, you know, some people might call an extractive process where you’re just doing one or a few workshops with people “codesign” because you have community collaborators, you have community input of some kind; you’re not only sitting in the lab making something. But the community participation is what I would call thin. It’s potentially extractive. The benefit may be minimal to the people who have been involved in that process. And most of the benefits accrue back either to the design shop that’s getting paid really well to do this or ultimately back to headquarters—to the brand that decided to sort of initiate the process. And I’m interested in critiquing extractive processes, but I’m most interested in trying to learn from people who are trying to do something different, people who are already in practice saying, “I don’t want to just be doing knowledge extraction. I want to think about how my practice can contribute to a more just and equitable and sustainable world.” And in some ways, people are, you know, figuring it out as we go along, right? Um, but I’m trying to be attentive to people trying to create other types of processes that mirror, in the process, the kinds of worlds that we want to create.
GRAY: So, it seems like one of the challenges that you bring up in the book is precisely that design at—at some point is thinking about particular people and particular—often referred to as “users’”—journeys. And I wanted to—to step back and ask you, you know, you note in the book that there’s a—a default in design that tends to think about the “unmarked user.” And I’m quoting you here. That’s a “(cis)male, white, heterosexual, ‘able-bodied,’ literate, college educated, not a young child, not elderly.” Definitely, they have broadband access. They’ve got a smartphone. Um, maybe they have a personal jet, I don’t know. That part was not a quote of you. [LAUGHTER] But, you know, you’re really clear that there’s this—this default, this presumed user, ubiquitous user. Um, what are the limits, for you, of designing for an unmarked user, but then how do you contend with the fact that thinking so specifically about people can also be, to your earlier point about intersectionality, quite flattening?
COSTANZA-CHOCK: Well, I think the unmarked user is a really well-known and well-documented problem. Unfortunately, it often, it—it applies—you don’t have to be a member of all those categories as an unmarked user to design for the unmarked user when you’re in sort of a professional design context. And that’s for a lot of different reasons that we don’t have that much time to get into, but basically hegemony. [LAUGHTER] So, um—and the problem with that—like, there’s lots of problems with that—one is that it means that we’re organizing so much time and energy and effort in all of our processes to kind of, like, design and build everything, from, you know, industrial design and new sort of, you know, objects to interface design to service design, and, you know, if we build everything for the already most privileged group of people in the world, then the matrix of domination just kind of continues to perpetuate itself. Then we don’t move the world towards a more equitable place. And we create bad experiences, frankly, for the majority of people on the planet. Because the majority of people on planet Earth don’t belong to that sort of default, unmarked user that’s hegemonic. Most people on planet Earth aren’t white; they’re actually not cis men. Um, at some point most people on planet Earth will be disabled or will have an impairment. They may not identify as Disabled, capital D. Most people on planet Earth aren’t college educated. Um, and so on and so forth. So, we’re really excluding the majority of people if we don’t actively and regularly challenge the assumption of who we should be building things for.
GRAY: So, what do you say to the argument that, “Well, tech companies, those folks who are building, they just need to hire more diverse engineers, diverse designers—they need a different set of people at the table—and then they’ll absolutely be able to anticipate what a—a broader range of humanity needs, what more people on Earth might need.”
COSTANZA-CHOCK: I think this is a “yes, and” answer. So, absolutely, tech companies [LAUGHS] need to hire more diverse engineers, designers, CEOs; investors need to be more diverse, et cetera, et cetera, et cetera. You know, the tech industry still has pretty terrible statistics, and the further you go up the corporate hierarchy, the worse it gets. So that absolutely needs to change, and unfortunately, right now, it’s just, you know, every few years, everyone puts out their diversity numbers. There’s a slow crawl sometimes towards improvement; sometimes it backslides. But we’re not seeing the shifts that we—we need to see, so it’s like hiring, retention, promotion, everything. I am a huge fan of all those things. They do need to happen. And a—a much more diverse and inclusive tech industry will create more diverse and inclusive products. I wouldn’t say that’s not true. I just don’t think that employment diversity is enough to get us towards an equitable, just, and ecologically sustainable planet. And the reason why is because the entire tech industry right now is organized around the capitalist system. And unfortunately, the capitalist system is a resource-extractive system, which is acting as if we have infinite resources on a finite planet. And so, we’re just continually producing more stuff and more things and building more server farms and creating more energy-intensive products and software tools and machine learning models and so on and so on and so on. So at some point, we’re going to have to figure out a way to organize our economic system in a way that’s not going to destroy the planet and result in the end of Homo sapiens sapiens along with most of the other species on the planet. And so unfortunately, employment diversity within multicultural, neoliberal capitalism will not address that problem.
GRAY: I could not agree more. And I don’t want this conversation to end. I really hope you’ll come back and join me for another conversation, Sasha. It’s been unbelievable to be able to spend even a little bit of time with you. So, thank you for—for sharing your thoughts with us today.
COSTANZA-CHOCK: Well, thank you so much for having me. I always enjoy talking with you, Mary. And I hope that, yeah, we’ll continue this either in a podcast or just over a cup of tea.
[MUSIC PLAYS UNDER DIALOGUE]
GRAY: Looking forward to it. And as always, thanks to our listeners for tuning in. If you’d like to learn more—wait, wait, wait, wait! There’s just so much to talk about. [MUSIC IS WARPED AND ENDS] Not long after our initial conversation, Sasha said they were willing to have more discussion. Sasha, thanks for rejoining us.
COSTANZA-CHOCK: Of course. It’s always a pleasure to talk with you, Mary.
GRAY: In our first conversation, we had a chance to explore design justice as a framework and a practice and your book of the same name, which has inspired many. I’d love to know how your experience in design justice informs your current role with the Algorithmic Justice League.
COSTANZA-CHOCK: So I am currently the director of research and design at the Algorithmic Justice League. The Algorithmic Justice League, or AJL for short, is an organization that was founded by Dr. Joy Buolamwini, and our mission is to raise awareness about the impacts of AI, equip advocates with empirical research, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to mitigate AI harms and biases, and so we like to talk about how we’re building a movement to shift the AI ecosystem towards more equitable and accountable AI. And my role in AJL is to lead up our research efforts and also, at the moment, product design. Uh, we’re a small team. We’re sort of in start-up mode. Uh, we’re hiring various, you know, director-level roles and building out the teams that are responsible for different functions, and so it’s a very exciting time to be part of the organization. I’m very proud of the work that we’re doing.
GRAY: So you have both product design and research happening under the same roof in what sounds like a superhero setting. That’s what we should take away—and that you’re hiring. I think listeners need to hear that. How do you keep research and product design happening in a setting where, in a nonprofit, you usually have to pick one or the other? How are you making those come together?
COSTANZA-CHOCK: Well, to be honest, most nonprofits don’t really have a product design arm. I mean, there are some that do, but it’s not necessarily a standard, you know, practice. I think what we are trying to do, though, as an organization—you know, we’re very uniquely positioned because we play a storytelling role, and so we’re influencing the public conversation about bias and harms in algorithmic decision systems, and probably the most visible place that that, you know, has happened is in the film Coded Bias. It premiered at Sundance, then it aired on PBS, and it’s now available on Netflix, and that film follows Dr. Buolamwini’s journey from, you know, a grad student at the MIT Media Lab who has an experience of facial recognition technology basically failing on her dark skin, and it follows her journey as she learns more about how the technology works, how it was trained, why it’s failing, and ultimately is then sort of, you know, testifying in U.S. Congress about the way that these tools are systematically biased against women and people with darker skin tones, skin types, and also against trans and gender nonconforming people, and that these tools should not be deployed in production environments, especially where it’s going to cause significant impacts to people’s lives. Over the past couple years, we’ve seen a lot of real-world examples of the harms that facial recognition technologies, or FRTs, can create.
These types of bias and harm are happening constantly, not only in facial recognition technologies but in automated decision systems of many different kinds, and there are so many scholars and advocacy organizations and, um, community groups that are now kind of emerging to make that more visible and to organize to try and block the deployment of systems when they’re really harmful, or at the very least try and ensure that there’s more community oversight of these tools, and also to set some standards in place, best practices, external auditing and impact assessment, so that, especially as public agencies start to purchase these systems and roll them out, you know, we have oversight and accountability.
GRAY: So, April 15 is around the corner, Tax Day, and there was a recent bit of news around what seems like a harmless use of technology and use of identification for taxes that you very much, um, along with other activists and organizations, uh, brought public attention to the concerns over sharing IDs as a part of our—of our tax process. Can you just tell the audience a little bit about what happened, and what did you stop?
COSTANZA-CHOCK: Absolutely. So, um, ID.me is a, uh, private company that sells identity verification services, and they have a number of different ways that they do identity verification, including, uh, facial recognition technology where they compare basically a live video or selfie to a picture ID that’s previously been uploaded and stored in the system. They managed to secure contracts with many government agencies, including a number of federal agencies and about 30 state agencies, as well. And a few weeks ago, it came out that the IRS had given a contract to ID.me and that people were going to have to scan our faces to access our tax records. Now, the problem with this—there are a lot of problems with this, but one of the problems is that we know that facial recognition technology is systematically biased against some groups of people who are protected by the Civil Rights Act, so, uh, against Black people and people with darker skin tones in general, uh, against women, and the systems perform least well on darker-skinned women. And so what this means is that if you’re, say, a Black woman or if you’re a trans person, it would be more likely that the verification process would fail for you in a way that is very systematic and has—you know, we have pretty good documentation about the failure rates, both in false positives and false negatives. The best science shows that these tools are systematically biased against some people, and so for them to be deployed in contracts by a public agency for something that’s going to affect everybody in the United States of America and is going to affect Black people and Black women specifically most, uh, is really, really problematic and opens the ground to civil rights lawsuits, to Federal Trade Commission action, among a number of other, you know, possible problems.

So when we at the Algorithmic Justice League learned that ID.me had this partnership with the IRS and that this was all going to roll out in advance of this year’s tax season, uh, we thought this is really a problem and maybe this is something that we could move the needle on, and so we got together with a whole bunch of other organizations like Fight for the Future and the Electronic Privacy Information Center, and basically, all of these organizations started working with all cylinders firing, including public campaigns, op-eds, social media, and back channeling to various people who work inside different agencies in the federal government like the White House Office of Science and Technology Policy, the Federal Trade Commission, other contacts that we have in different agencies, kind of saying, “Did you know that this system—this multi-million-dollar contract for verification that the IRS is about to unleash on all taxpayers—is known to have outcomes that disproportionately disadvantage Black people and women and trans and gender nonconforming people?” And in a nutshell, it worked, to a degree. So the IRS announced that they would not be using the facial recognition verification option that ID.me offers, and a number of other federal agencies announced that they would be looking more closely at the contracts and exploring whether they wanted to actually roll this out. And what’s happening now is that at the state level, through public records requests and other actions, um, you know, different organizations are now looking state by state and finding and turning up all these examples of how this same tool was used to basically deny access to unemployment benefits for people, to deny access to services for veterans. There are now, I think, around 700 documented examples that came from public records requests of people saying that they tried to verify their access, um, especially to unemployment benefits, using the ID.me service, and they could not verify, and when they were told to take the backup option, which is to talk with a live agent, the company, you know, was rolling out this system with contracts so quickly that they hadn’t built up their human workforce, so when people’s automated verification was failing, there were these extremely long wait times, like weeks or, in some cases, months for people to try and get verified.
GRAY: Well, and I mean, this is—I feel like the past always comes back to haunt us, right, because we have so many cases where, in hindsight, it seems really obvious that we’re going to have a system that will fail because of the training data that might have created the model. We are seeing so many cases where training datasets that have been the tried-and-true standards are now being taken off the shelf because we can tell that there are too many errors and too few theories to understand the models we have to keep using the same models the same way that we have used them in the past, and I’m wondering what you make of this continued desire to keep reaching for the training data and pouring more data in or seeing some way to offset the bias. What’s the value of looking for the bias versus setting up guardrails for where we apply a decision-making system in the first place?
COSTANZA-CHOCK: Sure. I mean, I think—let me start by saying that I do think it’s useful and valuable for people to do research to try and better understand the ways in which automated decision systems are biased, the different points in the life cycle where bias creeps in. And I do think it’s useful and valuable for people to look at bias and try and reduce it. And also, that’s not the be all and end all, and at the Algorithmic Justice League, we are really trying to get people to shift the conversation from bias to harm because bias is one but not the only way that algorithmic systems can be harmful to people. So a good example of that would be, we could talk about recidivism risk prediction, which there’s been a lot of attention to that, you know, ever since the—the ProPublica articles and the analysis of—that’s come out about, uh, COMPAS, which is, you know, the scoring system that’s used when people are being detained pre-trial and a court is making a decision about whether the person should be allowed out on bail or whether they should be detained until their trial.

And these risk scoring tools, it turns out that they’re systematically biased against Black people, and they tend to overpredict the rate at which Black people will recidivate or will—will re-offend during the, you know, the period that they’re out and underpredict the rate at which white people, you know, would do so. So there’s one strand of researchers and advocates who would say, “Well, we need to make this better. We need to fix that system, and it should be less biased, and we want a system that more perfectly—more perfectly does prediction and also more equitably distributes both false positives and false negatives.” You can’t actually maximize both of those things. You kind of have to make difficult decisions about do you want it to, um, have more false positives or more false negatives. You have to sort of make decisions about that. But then there’s a whole nother strand of people like, you know, the Carceral Technology Resistance Network, who would just say, “Hold on a minute. Why are we talking about reducing bias in a pre-trial detention risk-scoring tool? We should be talking about why are we locking people up at all, and especially why are we locking people up before they’ve been sentenced for anything?” So rather than saying let’s build a better tool that can help us, you know, manage pre-trial detention, we should just be saying we should absolutely minimize pre-trial detention to only the most extreme cases that—where there’s clear evidence and a clear, you know, present danger that the person will immediately be harming themselves or—or—or someone else, and that should be something that, you know, a judge can decide without the need of a risk score.
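The tradeoff described here can be sketched with a toy example (hypothetical scores and labels, not real COMPAS data): once a set of risk scores is fixed, moving the decision threshold can only trade one error type for the other—lowering false positives raises false negatives, and vice versa.

```python
# Hypothetical (score, outcome) pairs: 1 = re-offended, 0 = did not.
scored = [(0.2, 0), (0.3, 0), (0.4, 1), (0.5, 0),
          (0.6, 1), (0.7, 0), (0.8, 1), (0.9, 1)]

def error_counts(threshold):
    """Count both error types for a given detain/release threshold."""
    fp = sum(1 for s, y in scored if s >= threshold and y == 0)  # flagged, but did not re-offend
    fn = sum(1 for s, y in scored if s < threshold and y == 1)   # released, but did re-offend
    return fp, fn

for t in (0.35, 0.55, 0.75):
    fp, fn = error_counts(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

Sliding the threshold up from 0.35 to 0.75 moves the counts from (2, 0) to (0, 2): no threshold drives both errors to zero, so someone has to decide which kind of mistake the system should make more often—which is exactly the value judgment being discussed.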
GRAY: When you’re describing the consequences of a false positive or a false negative, I’m struck by, um, how cold the calculation can sound, and then when I think about the implications, you’re saying we have to decide do we let more people we might suspect could create harms leave a courtroom or put in jail people we could not possibly know how many more of them would not versus would commit some kind of act between now and when they’re sentenced. And so, I’m just really struck by the weightiness of that, uh, if I was trying to think about developing a technology that was going to try and reduce that harm and deliberate which is more harmful. I’m just saying that out loud because I—I feel like those are those moments where I see two strands of work you’re calling out and two strands of work you’re pointing out that sometimes do seem in fundamental tension, right? That we would not want to build systems that perpetuate an approach that tries to take a better guess at whether to detain someone before they’ve been convicted of anything.
COSTANZA-CHOCK: Yeah, so I think, like, in certain cases, like in criminal, you know, in the criminal legal system, you know, we want to sort of step out from the question that’s posed to us, where people are saying, “Well, what approach should we use to make this tool less biased or even less harmful,” if they’re using that frame. And we want to step back and say, “Well, what are the other things that we need to invest in to ensure that we can minimize the number of people who are being locked up in cages?” Because that’s clearly a horrible thing to do to people, and it’s not making us safer or happier or better, and it’s systematically and disproportionately deployed against people of color.

In other domains, it’s very different, and this is why I think, you know, it can be very tricky. We don’t want to collapse the conversation about AI and algorithmic decision systems, and there are some things that we can say, you know, at a very high level about these tools, but at the end of the day, a lot of the times, I think that it comes down to the specific domain and context and tool that we’re talking about. So then we could say, well, let’s look at another field like dermatology, right? And you would say, well, there’s a whole bunch of researchers working hard to try and develop better diagnostic tools for skin conditions, early detection of cancer. And so it turns out that the existing datasets of skin conditions heavily undersample the wide diversity of human skin types that are out there in the world and overrepresent white skin, and so these tools perform way better, um, you know, for people who are, uh, raced as white, uh, under the current, you know, logic of the construction of—of racial identities.

So there’s a case where we could say, “Well, yeah, here inclusion makes sense.” Not everybody would say this, but a lot of us would say this is a case where it is a good idea to say, “Well, what we need to do is go out and create much better, far more inclusive datasets of various skin conditions across many different skin types, you know, should be people from all across the world and different climates and locations and skin types and conditions, and we should better train these diagnostic tools, which potentially could really both democratize access to, you know, dermatology diagnostics and could also help with earlier detection of, you know, skin conditions that people could take action on, you know.” Now, we could step out of that logic for a moment and say, “Well, no, what we should really do is make sure that there’s enough resources so that there are dermatologists in every community that people can easily see for free because they’re always going to do, you know, a better job than, you know, these apps could ever do,” and I wouldn’t disagree with that statement, and also, to me, this is a case where that’s a “both/and” proposition, you know. If we have apps that people can use to do self-diagnostic and if they reach a certain threshold of accuracy and they’re equitable across different skin types, then that could really save a lot of people’s lives, um, and then in the longer run, yes, we need to dramatically overhaul our—our medical system and so on and so forth. But I don’t think that those goals are incompatible, whereas in another domain like the criminal legal system, I think that investing heavily in the development of so-called predictive crime technologies of various kinds, I don’t think that that’s compatible with decarceration and the long-term project of abolition.
GRAY: I love that you’ve reframed it as a matter of compatibility ’cause I—what I really appreciate about your work is that you’re—you keep the tension. I mean you—that you really insist on us being willing to grapple with and stay vigilant about what could go wrong without saying don’t do it at all, and I’ve found that really inspiring. Um …
COSTANZA-CHOCK: Well—
GRAY: Yeah, please.
COSTANZA-CHOCK: Can I—can I say one more thing about that, though? I mean, I do—yes, and also there’s a whole nother question here, right? So, you know, is—is this tool harmful? And then there’s also—there’s a democracy question, which is, were people consulted? Do people want this thing? Even if it does a good job, you know, um, and even if it is equitable. And because there’s a certain type of harm, which is, uh, a procedural harm, which is if an automated decision system is deployed against people’s consent or against people’s idea about what they think should be happening in a just interaction with the decision maker, then that’s a type of harm that’s also being done. And so, we really need to think about not only how can we make AI systems less harmful and less biased, among the various types of harm that can happen, but also more accountable, and how can we ensure that there is democratic and community oversight over whether systems are deployed at all, whether these contracts are entered into by public agencies, and whether people can opt out if they want to from the automated decision system or whether it’s something that’s being forced on us.
GRAY: Could you talk a little bit about the work you’re doing around bounties as a way of thinking about harms in algorithmic systems?
COSTANZA-CHOCK: So at the Algorithmic Justice League, one of the projects I’ve been working on over the last year culminated in a recently released report, which is called “Bug Bounties for Algorithmic Harms? Lessons from cybersecurity vulnerability disclosure for algorithmic harms discovery, disclosure, and redress,” and it’s a co-authored paper by AJL researchers Josh Kenway, Camille François, myself, Deb Raji, and Dr. Joy Buolamwini. And so, basically, we got some resources from the Sloan and Rockefeller foundations to explore this question of could we apply bug bounty programs to areas beyond cybersecurity, including algorithmic harm discovery and disclosure? In the early days of cybersecurity, hackers were often in this position of finding bugs in software, and they would then tell the companies about it, and then the companies would sue them or deny that it was happening or try and shut them down in—in various ways. And over time, that kind of evolved into what we have now, which is a system where, you know, it was once considered a radical new thing to pay hackers to find and tell you about bugs in your—in your systems, and now it’s a quite common thing, and most major tech companies, uh, do this. And so very recently, a few companies have started adopting that model to look beyond security bugs. So, for example, you know, we found an early example where Rockstar Games offered a bounty for anyone who could demonstrate how their cheat detection algorithms might be flawed because they didn’t want to mistakenly flag people as cheating in game if they weren’t.
And then there was an example where Twitter basically observed that Twitter users were conducting a sort of open participatory audit on Twitter’s image saliency and cropping algorithm, which was sort of—when you uploaded an image to Twitter, it would crop the image in a way that it thought would generate the most engagement, and so people noticed that there were some problems with that. It seemed to be cropping out Black people to favor white people, um, and a number of other things. So Twitter users kind of demonstrated this, and then Twitter engineers replicated those findings and published a paper about it, and then a few months later, they ran a bounty program, um, in partnership with the platform HackerOne, and they sort of launched it at—at DEF CON and said, “We will offer prizes to people who can demonstrate the ways that our image crop system, um, might be biased.” So this was a bias bounty. So we explored the whole history of bug bounty programs. We explored these more recent attempts to apply bug bounties to algorithmic bias and harms, and we interviewed key people in the field, and we developed a design framework for better vulnerability disclosure mechanisms. We developed a case study of Twitter’s bias bounty pilot. We developed a set of 25 design lessons for people to create improved bug bounty programs in the future. And you can read all about that stuff at ajl.org/bugs.
GRAY: I—I feel like you’ve revived a certain, um, ’90s sentiment of “this is our internet; let’s pick up the trash.” It just has a certain, um, kind of collaborative feel to it that I—that I really appreciate. So, with the time we have left, I would love to hear about oracles and transfeminism. What’s exciting you about oracles and transfeminist technologies these days?
COSTANZA-CHOCK: So it can be really overwhelming to constantly be working to expose the harms of these systems that are being deployed everywhere, in every domain of life, all the time, to uncover the harms, to get people to talk about what’s happened, to try and push back against contracts that have already been signed, and to try and get, you know, lawmakers that are concerned with a thousand other things to pass bills that will rein in the worst of these tools. So I think for me, personally, it’s really important to also find spaces for play and for visioning and for speculative design and for radical imagination.

And so, one of the projects that I’m really enjoying lately is called the Oracle for Transfeminist Technologies, and it’s a partnership between Coding Rights, which is a Brazil-based hacker feminist organization, and the Design Justice Network, and the Oracle is a hands-on card deck that we designed to help us use as a tool to collectively envision and share ideas for transfeminist technologies from the far future. And this idea kind of bubbled up from conversations between Joana Varon, who’s the directoress of Coding Rights, and myself and a number of other people who are in kind of transnational hacker feminist networks, and we were kind of thinking about how, throughout history, human beings have always used a number of different divination techniques, like tarot decks, to understand the present and to reshape our destiny, and so we created a card deck called the Oracle for Transfeminist Technologies that has values cards, objects cards, bodies and territories cards, and situations cards, and the values are various transfeminist values, like autonomy and solidarity and nonbinary thought and decoloniality and a number of other transfeminist values. The objects are everyday objects like backpacks or bread or belts or lipstick, and the bodies and territories cards, well, that’s a spoiler, so I can’t tell you what’s in them.
GRAY: [LAUGHS]
COSTANZA-CHOCK: Um, and the situations cards are kind of scenarios that you might have to confront. And so what happens is basically people take this card deck—and there’s both a physical version of the card deck, and there’s also a virtual version of this that we developed using a—a Miro board, a virtual whiteboard, but we created the cards inside the whiteboard—and people get dealt a hand, um, and either individually or in small groups, you get one or several values, an object, a people/places card, or a bodies/territory card and a situation, and then what you have to do is create a technology rooted in your values and—that somehow engages with the object that you’re dealt that will help people deal with the situation, um, from the future. And so people come up with all kinds of really wonderful things that, um—and—and they illustrate these. So they create kind of hand-drawn blueprints or mockups for what these technologies are like and then short descriptions of them and how they work. And so people have created things like community compassion probiotics that connect communities through a mycelial network and the bacteria develop a horizontal governance in large groups, where each bacteria is linked to a person to maintain accountability to the whole, and it measures emotional and affective temperature and supports equitable distribution of care by flattening hierarchies. Or people created, um, a—
GRAY: [LAUGHS] Right now, every listener is, like, Googling, looking feverishly online for these—for the, the Oracle. Where—where do we find this deck? Where—please, tell us.
COSTANZA-CHOCK: So you can—you can just Google “the Oracle for Transfeminist Technologies” or you can go to transfeministech.codingrights.org. So people create these fantastic technologies, and what’s really fun, right, is that a lot of them, of course, you know, we could create something like that now. And so our dream with the Oracle in its next stage would be to move from the completely speculative design, you know, on paper piece to a prototyping lab, where we would start prototyping some of the transfeminist technologies from the future and see how soon we can bring them into the present.
GRAY: I remember being so delighted by a very, very, very early version of this, and it was the tactileness of it was just amazing, like, to be able to play with the cards and dream together. So that’s—I’m so excited to hear that you’re doing that work. That’s—that is inspiring. I’m just smiling. I don’t know if you can hear it through the radio, but, uh—wow, I just said “radio.” [LAUGHTER]
[MUSIC PLAYS UNDER DIALOGUE]
COSTANZA-CHOCK: It is a radio. A radio in another name.
GRAY: I guess it is a radio. That’s true. A radio by another name. Oh, Sasha, I could—I could really spend all day talking with you. Thank you for wandering back into the studio.
COSTANZA-CHOCK: Thank you. It’s really a pleasure. And next time, it’ll be in person with tea.
GRAY: Thanks to our listeners for tuning in. If you’d like to learn more about community-driven innovation, check out the other episodes in our “Just Tech” series. Also, be sure to subscribe for new episodes of the Microsoft Research Podcast wherever you listen to your favorite shows.
[MUSIC ENDS]