A researcher, writer, and policy advisor, Alondra Nelson is the Harold F. Linder Professor at the Institute for Advanced Study and a distinguished senior fellow at the Center for American Progress. She was formerly deputy assistant to President Joe Biden and acting director of the White House Office of Science and Technology Policy. A scholar of science, technology, medicine, and inequality, Nelson was selected by Nature as one of the ten people who shaped science in 2022. She is the author, most recently, of The Social Life of DNA, an award-winning exploration of the implications of direct-to-consumer genomics.
Algorithms already operate at scale in the health care system, affecting life-and-death decisions for hundreds of millions of patients. Auditing these algorithms, in terms of both overall performance and fairness, presents some unique problems and opportunities for impact. I'll review some of these, emphasizing how engagement with concrete use cases can inform abstract notions of fairness and how to achieve it.
Ziad trained as an emergency doctor, and he still gets away as often as he can to work in the ER of a hospital in rural Arizona. But these days, he spends most of his time on research and teaching at Berkeley. Inspired by his clinical practice, he builds machine learning algorithms that help doctors make better decisions. He also studies where algorithms can go wrong, and how to fix them: his work on algorithmic bias has been highly influential both in public debate about algorithms and in regulatory oversight and civil investigations. He is a Chan Zuckerberg Biohub Investigator and a Faculty Research Fellow at the National Bureau of Economic Research, and has been named an emerging leader by the National Academy of Medicine. His work has won numerous awards and has appeared in a wide range of venues, including Science, Nature Medicine, the New England Journal of Medicine, and leading computer science conferences. He is a co-founder of Nightingale Open Science, a non-profit that makes massive new medical imaging datasets available for research, and Dandelion, a platform for AI innovation in health. Before coming to Berkeley, he was an Assistant Professor at Harvard Medical School and a consultant at McKinsey & Co.
Dr. Alex Hanna is Director of Research at the Distributed AI Research Institute (DAIR). A sociologist by training, she studies the data used in new computational technologies and the ways in which these data exacerbate racial, gender, and class inequality. She also works in the area of social movements, focusing on the dynamics of anti-racist campus protest in the US and Canada. Dr. Hanna has published widely in top-tier venues across the social sciences, including the journals Mobilization, American Behavioral Scientist, and Big Data & Society, and in top-tier computer science conferences such as CSCW, FAccT, and NeurIPS. Dr. Hanna serves as a co-chair of Sociologists for Trans Justice and as a Senior Fellow at the Center for Applied Transgender Studies, and sits on the advisory board of the Human Rights Data Analysis Group and the Scholars Council for the UCLA Center for Critical Internet Inquiry. Fast Company included Dr. Hanna in its 2021 Queer 50 list, and she has been featured in the Cal Academy of Sciences' New Science exhibit, which highlights queer and trans scientists of color. She holds a BS in Computer Science and Mathematics and a BA in Sociology from Purdue University, and an MS and a PhD in Sociology from the University of Wisconsin-Madison.
Brook Hansen is an organizer with Turkopticon, a worker-led non-profit organization dedicated to fighting for the rights of Amazon Mechanical Turk (AMT) workers, and a crowdsource worker on multiple online platforms. She also serves as an elected School Board Trustee for a Michigan public school district and is an advocate for public education, especially for those who are most vulnerable and often marginalized. Prior to her crowdsource work, she worked for the Michigan Department of Natural Resources. Brook graduated from Northern Michigan University with a degree in Environmental Conservation/Biology.
Krista Pawloski is a crowdworker with a background in print, prepress, graphic design, and mail merge coding. She currently works with Turkopticon in its fight to bring fair treatment to crowdworkers using Amazon Mechanical Turk. She is the mother of two amazing teenagers, one of whom has special needs. As the parent of a child with special needs, she relies on crowd work for a lifestyle that allows her to support her children and remain financially independent. She spends her free time volunteering at her local nature center and land conservancy group.
Krystal Kauffman is a gig worker and the lead organizer of Turkopticon, a nonprofit organization dedicated to fighting for the rights of gig workers, specifically those using Amazon's Mechanical Turk (AMT) platform. She started working on AMT in 2015 and is well-versed in the types of microtasks presented to crowdsource workers. Krystal joined Turkopticon in 2020 and recently started as a research fellow with the DAIR Institute. She is committed to working alongside others to build a community of workers who are united in righting the wrongs of the big-tech marketplace platforms. Prior to her work with Turkopticon and DAIR, Krystal worked as an organizer on political and issue campaigns for a decade before pursuing a degree in geology.
We introduce the calculus of performative prediction as a tool for reasoning about the effects of algorithmic predictions on human populations. Performative prediction reveals a distinction between two mechanisms fundamental to prediction. One is to discover patterns in a population. The other, less recognized, is to steer the population through predictions. Building on performative prediction, we develop a notion of power tailored to digital platforms operating predictive systems. From here, we examine the power of platforms in digital markets through theory, observational causal inference, and randomized experiments. We end with a discussion of collective algorithmic strategies to effectively resist the power of platforms.
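For readers new to the framework, a minimal sketch of the core definitions helps make the two mechanisms precise. The notation below follows the standard formulation in the performative prediction literature and is an illustration, not material taken from the talk itself:

```latex
% Performative risk: the loss of a model \theta evaluated on the
% data distribution D(\theta) that deploying \theta itself induces.
\[
  \mathrm{PR}(\theta) \;=\; \mathbb{E}_{Z \sim \mathcal{D}(\theta)}\bigl[\ell(Z;\theta)\bigr]
\]
% A performatively stable model is a fixed point of retraining:
% it is optimal on the very distribution it induces.
\[
  \theta_{\mathrm{PS}} \;\in\; \arg\min_{\theta}\; \mathbb{E}_{Z \sim \mathcal{D}(\theta_{\mathrm{PS}})}\bigl[\ell(Z;\theta)\bigr]
\]
```

The dependence of the distribution D(θ) on the deployed model θ is what formalizes "steering": two models with identical accuracy on a fixed dataset can induce very different populations once deployed.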
Moritz is a director at the Max Planck Institute for Intelligent Systems. Prior to joining the institute, he was Associate Professor for Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research contributes to the scientific foundations of machine learning and algorithmic decision making with a focus on social questions.
The FAIR principles of Findability, Accessibility, Interoperability, and Reusability remain an essential guide for organizations in the handling of data systems. However, as global labor becomes increasingly digital, workers' concerns about being replaced, alienated, or undermined by data systems are escalating. What if we were to shift our focus from data-centric to worker-centric design of work platforms? What kinds of principles are needed to ensure that data works for us to foster equitable and inclusive work platforms across the global value chain? This talk builds on the voices and experiences of workers in the Majority World to inspire us to rethink data systems for an inclusive future.
Payal Arora is a digital anthropologist, a Professor in Technology, Values, and Global Media Cultures, and Director of UX and Inclusive Design at Erasmus University. She is the co-founder of FemLab, a feminist future-of-work initiative. Her expertise draws from two decades of studying user experiences among low-income communities worldwide to shape inclusive designs and policies. She is the author of award-winning books, including The Next Billion Users with Harvard University Press, which Engadget called one of "the most interesting, thought-provoking books on science and technology we can find." Forbes named her the "next billion champion" and "the right kind of person to reform tech." About 150 international media outlets have covered her work, including the BBC, Financial Times, 99% Invisible, The Economist, Quartz, TechCrunch, The Boston Globe, F.A.Z., The Nation, and CBC. She has consulted on tech innovation for diverse organizations such as IDEO, Adobe, Spotify, Google, KPMG, GE, UNHCR, and HP. She has given more than 300 talks, including TEDx talks on the future of the internet and innovation. She sits on several boards for organizations such as Columbia University's Earth Institute, UNICEF, UNESCO, and the World Women Global Council in New York. She has held fellow positions at Rockefeller Foundation Bellagio, ITSRio, GE, ZEMKI, and NYU. She earned her MA and PhD in international policy and technology studies from Harvard and Columbia University. Her upcoming book on designing systems to include the Majority World is under contract with MIT Press and HarperCollins. She currently lives in Amsterdam.
Charlotte A. Burrows was designated Chair of the U.S. Equal Employment Opportunity Commission (EEOC) by President Biden on January 20, 2021. Chair Burrows has advocated for strong workplace civil rights protections and robust cooperation between the Commission, employees, civil rights advocates, and employers. She is particularly interested in the impact of new and emerging technologies on civil rights and employee privacy. In 2021, she launched the EEOC’s initiative to examine the use of algorithmic decision-making tools, including artificial intelligence, in employment.
Beneath the Surface is a project that uses narrative justice and data science to interrogate the intersection between gender-based violence and police misconduct. In 2020, journalists at the Invisible Institute gained access to investigative documents regarding over 27,000 complaints against the Chicago Police between 2011 and 2015. Within these documents were the unstructured stories behind the spreadsheet-level complaint data that was previously available in Chicago. Trina assembled the Beneath the Surface team to sift through this important, but unwieldy, collection and recover the testimonies of survivors of gender-based violence.
In 2021, the BTS team recruited and trained volunteers to read narratives from complaints filed between 2011 and 2015. The volunteers generated over 4,000 records of training data, which helped to identify allegations of gender-based violence at the hands of the Chicago Police Department. These records became the training data for Judy, a machine learning tool the team built to identify categories of misconduct.
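As an illustration only, and not the team's actual pipeline (whose implementation details are not described here), the sketch below shows one common way hand-labeled narratives can be used to train a classifier that flags categories of misconduct. The file name, column names, label values, and model choice are all hypothetical assumptions:

```python
# Minimal illustrative sketch: training a text classifier on
# volunteer-labeled complaint narratives. This is NOT the actual
# "Judy" pipeline; the file name, columns, and labels are
# hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Hypothetical input: one row per complaint narrative, with a
# volunteer-assigned label such as "gender_based_violence" or "other".
df = pd.read_csv("labeled_narratives.csv")  # columns: text, label

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=0, stratify=df["label"]
)

model = Pipeline([
    # Bag-of-words features over unigrams and bigrams.
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=5)),
    # Balanced class weights help when flagged allegations are rare.
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])
model.fit(X_train, y_train)

# Evaluate on held-out labeled narratives before scoring unlabeled complaints.
print(classification_report(y_test, model.predict(X_test)))
```

A held-out evaluation step like the one above is important before applying any such model to the much larger pool of unlabeled complaint narratives.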
This Community Keynote highlights our experiences and the tools we employed, and makes space to discuss more strategies to sustain human rights work that builds community.
Trina Reynolds-Tyler is a data analyst at the Invisible Institute. She co-founded the Chicago Chapter of Data 4 Black Lives and is an organizer with BYP100 and the Black Abolitionist Network. Before receiving her Master of Public Policy from the University of Chicago, she worked closely with the Citizens Police Data Project and the Youth / Police Project. She is active as a community organizer in Chicago.
Tarak Shah is a data scientist at the Human Rights Data Analysis Group, where he leads the team's U.S. projects. He partners with local journalists and grassroots human rights organizations in places including Chicago, New Orleans, Puerto Rico, Boston, Berkeley, and San Francisco to identify questions that can be answered and arguments that can be strengthened through quantitative analysis.
Community Town Hall hosted by the 2023 General Chairs