Most news discovery and reading now takes place on social media platforms. This means that the ranking of news stories is determined by platform algorithms, which rely on engagement metrics like views, clicks, and shares rather than journalistic conventions. This talk will cover some of the social issues raised by algorithmic amplification, including news ranking, clickbait, misinformation, and fact-checking bias. I will also share South Korea's experience confronting the propagation of misinformation during COVID-19 and the Facts Before Rumors campaign that our group launched.
About the Facts Before Rumors campaign (https://ibs.re.kr/fbr/): During the early days of the pandemic, we launched an online campaign to debunk COVID-19 rumors, disseminating accurate coronavirus-related information to over 50,000 individuals in 151 countries. The campaign aimed to collect fact-checked information from regions that had already suffered from the infodemic and spread it to regions where the infodemic was still in its infancy. Alongside the campaign, we conducted a series of research projects to understand what kind of coronavirus-related information was being shared online. Focusing on misinformation, we quantified the spread of COVID-19 misinformation through survey studies.
Meeyoung Cha is an associate professor at the Korea Advanced Institute of Science and Technology (KAIST). Her research is on data science, with an emphasis on modeling socially relevant information propagation processes. Her work on misinformation, poverty mapping, fraud detection, and long-tail content has gained more than 17,000 citations. She worked on Facebook's Data Science Team as a Visiting Professor and is a recipient of the Korean Young Information Scientist Award and the AAAI ICWSM Test of Time Award. She is currently jointly affiliated as a Chief Investigator at the Institute for Basic Science (IBS) in Korea.
The AI “arms race” has reached a point where organizations in different countries are competing to build ever larger “language” models in text, speech, images, and so on, trained on ever larger collections of data. These pre-trained models have so far proven extremely powerful in enabling zero-shot and few-shot learning of new tasks. Meanwhile, with great power comes great responsibility. Society in general, and our users in particular, are demanding that AI technology be more responsible: more robust, fairer, more explainable, more trustworthy. Natural language processing technologies built on top of these large pre-trained language models are expected to align with these and other human “values” in deployment because they impact our lives directly.
The core challenge of “value-aligned” NLP (or AI in general) is twofold: 1) What are these values and who defines them? 2) How can NLP algorithms and models be made to align with these values?
In fact, different cultures and communities may take different approaches to ethical issues. Even when people from different cultures agree on a set of common principles, they may disagree on how to implement those principles. We should therefore anticipate value definition being dynamic and multidisciplinary. I propose that we modularize value definitions as external to the development of NLP algorithms and of large pretrained language models, and encapsulate the language model to preserve its integrity. I also argue that value definition should not be left in the hands of NLP/AI researchers or engineers: at best, we can be involved at the stage of value definition, but engineers and developers should not be the ones deciding what the values should be. In addition, some values are now enshrined in legal requirements. This argues further that value definition should be disentangled from algorithm and model development.
The history of NLP applications has so far focused on clearly defined task completion objectives: answering factoid questions with unambiguous answers; summarizing articles with salient information; translating faithfully to the source; classifying texts into distinct categories; fulfilling user queries and commands in task-oriented dialog systems; etc. I propose that it behooves us to develop NLP systems with the ability to explicitly align with human values. In this talk, I will present initial experiments on value-based NLP, where we allow the input to an NLP system to include human-defined values or ethical principles that lead to different output results. I propose that many NLP tasks, from classification to generation, should output results according to human-defined principles, for better performance and explainability. I will introduce (1) AiSocrates, a new task that answers ethical quandary questions according to different moral philosophy principles; (2) initial experiments on sexism classification with different sexism definitions; and (3) NeuS, a multi-document news summarization system that filters out framing bias in the source news stories in order to provide the reader with an additional neutral perspective on events and news.
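As a rough sketch of how value definitions might be kept external to, and swappable around, a sealed language model (a minimal illustration under my own assumptions; the principle texts and the `generate` stub are hypothetical, not code from AiSocrates or the talk):

```python
# Sketch: value definitions live in an external, human-editable module,
# and the language model is treated as a sealed component that is only
# conditioned on them at input time.

# External, human-curated value definitions (hypothetical examples),
# editable without touching the model or the NLP pipeline.
PRINCIPLES = {
    "utilitarian": "Choose the answer that maximizes overall well-being.",
    "deontological": "Choose the answer that respects duties and rights, "
                     "regardless of outcomes.",
}

def generate(prompt: str) -> str:
    """Stand-in for a pretrained language model's generation call."""
    return f"[model output conditioned on: {prompt[:60]}...]"

def answer_with_principle(question: str, principle_key: str) -> str:
    """Condition the same sealed model on different ethical principles."""
    principle = PRINCIPLES[principle_key]
    prompt = f"Principle: {principle}\nQuestion: {question}\nAnswer:"
    return generate(prompt)

if __name__ == "__main__":
    question = "Should I report a friend who cheated on an exam?"
    for key in PRINCIPLES:
        print(key, "->", answer_with_principle(question, key))
```

Because the principles live in plain data rather than inside the model or the pipeline, ethicists, regulators, or communities could revise them without retraining or re-engineering anything, which is the point of disentangling value definition from model development.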
Pascale Fung is a Professor in the Department of Electronic & Computer Engineering and the Department of Computer Science & Engineering at The Hong Kong University of Science & Technology (HKUST), and a visiting professor at the Central Academy of Fine Arts in Beijing. She is an elected Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) for her “significant contributions to the field of conversational AI and to the development of ethical AI principles and algorithms”, an elected Fellow of the Association for Computational Linguistics (ACL) for her “significant contributions towards statistical NLP, comparable corpora, and building intelligent systems that can understand and empathize with humans”, a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for her “contributions to human-machine interactions”, and an elected Fellow of the International Speech Communication Association for “fundamental contributions to the interdisciplinary area of spoken language human-machine interactions”. She is the Director of the HKUST Centre for AI Research (CAiRE), an interdisciplinary research centre spanning all four schools at HKUST, and co-founded the Human Language Technology Center (HLTC). She is an affiliated faculty member of the Robotics Institute and the Big Data Institute at HKUST, and the founding chair of the Women Faculty Association at HKUST. She is an expert on the Global Future Council, a think tank for the World Economic Forum, and represents HKUST on the Partnership on AI to Benefit People and Society. She is on the Board of Governors of the IEEE Signal Processing Society and a member of the IEEE working group developing an IEEE standard, Recommended Practice for Organizational Governance of Artificial Intelligence. Her research team has won several best and outstanding paper awards at ACL and at ACL and NeurIPS workshops.
This talk will explore some of the key possibilities, risks, and questions associated with what could be called “hypersocial” artificial intelligence: increasingly sophisticated AI systems capable of shaping and carrying on complex social relationships between and with humans, across a variety of social contexts. From its earliest days, AI had the potential to help us understand and affect a major facet of the human experience facilitated by natural intelligence: our linguistic and social interactions at work, in our personal and economic lives, and in our civic engagement. Today our technology infrastructure is increasingly shaping social life given the convergence of machine learning techniques, internet platforms, growing computational power, norms supporting our routine use of computers in daily life, and vast troves of data. Given the progress achieved even in the last few years with foundation models, the world will likely experience further advances in the capabilities of AI systems and greater deployment of these systems in social, economic, and political spheres. We may eventually interact with and depend on AI technology that becomes increasingly fluent in those social domains –– of conversation and camaraderie, parenting, collective deliberation, citizenship, diplomacy, and critical decision-making about value-laden issues –– that (for many people) help define what it means to be human. Hypersocial AI can spur improvements in the human condition but also cultural and political conflict, and ultimately, choices among goals that will sometimes be difficult or impossible to reconcile.
Beyond the possible impacts of hypersocial AI on our daily lives, what trade-offs and governing dilemmas should we bear in mind as we consider how much to incorporate hypersocial AI into the web of human relationships? How might hypersocial AI shape geopolitical dynamics? Most fundamentally, hypersocial AI may force us to ponder the implications of differing visions for the future of AI that draw from the work of scholars who have prioritized AI as a means for optimizing decision-making and that of thinkers who have focused more on AI research as an intellectual domain for understanding the complexities of human discourse, collaboration, and value-choices.
Mariano-Florentino (Tino) Cuéllar is the president of the Carnegie Endowment for International Peace –– the oldest think tank in America and the only one dedicated to pursuing global security and peace through its operations in the United States and India, China, Belgium, Lebanon, and Russia. Cuéllar previously served for nearly seven years as a justice on the Supreme Court of California, the highest court of America’s largest judiciary, and led the courts’ efforts to better meet the needs of millions of limited-English speakers. Before that, he was the Stanley Morrison Professor at Stanford Law School and director of Stanford University’s Freeman Spogli Institute for International Studies. He served two U.S. presidents in a variety of roles in the federal government, including as special assistant to the president for justice and regulatory policy at the White House in the Obama administration. He chairs the board of the William and Flora Hewlett Foundation. Born in Matamoros, Mexico, he grew up primarily in communities along the U.S.-Mexico border. He graduated from Harvard and Yale Law School, and obtained a Ph.D. in political science from Stanford.
Conversations around the spread of mis/disinformation often revolve around the role of algorithms in disseminating and radicalizing racialized conspiracies to “naive” internet users. This is most immediately obvious in reporting and research on the viral spread of QAnon conspiracy theories on Instagram, YouTube, and other social media services. I offer instead that the concept of weak tie racism can help us to understand how racism becomes encoded into algorithmic and other computational modes (e.g., neural nets and Twitter’s recent image crop “problem”), in the process stimulating libidinal economies of white supremacist ideology latent in Western online populations.
Weak tie racism, briefly stated, is a concept I introduced in Distributed Blackness (2020). It builds on Granovetter's (1973) hugely influential argument that weak ties are enormously productive in spreading information. My contribution to this theory, drawing from social informatics, is that instead of conceptualizing the ties between individuals as based on comity, we should consider racism as the glue between enculturated social nodes. Moreover, the role of computation in sharing weak-tie information (e.g., how we have come to rely on search engines for authoritative information because of their algorithms) leads to my argument that the computer itself is an agent and practitioner of racist activity and belief, as it uncritically shapes the ways discourses around race and racism are presented, shared, and discussed.
André Brock is an associate professor of media studies at Georgia Tech. He writes on Western technoculture and Black cybercultures; his scholarship examines race in social media, video games, blogs, and other digital media. His book, *Distributed Blackness: African American Cybercultures* (NYU Press, 2020), winner of the 2021 Harry Shaw and Katrina Hazzard-Donald Award for Outstanding Work in African-American Popular Culture Studies and the 2021 Nancy Baym Book Award, theorizes Black everyday lives mediated by networked technologies.
Intersectionality has become an important area of research in the detection and mitigation of algorithmic bias. Often, the ways in which discrimination manifests in sociotechnical systems can be hidden when fairness is evaluated on distinct demographic features in isolation, without considering how identities and experiences might intersect in unique ways. For example, in her seminal work on intersectionality, Kimberlé Crenshaw highlighted the ways in which U.S. antidiscrimination law failed to protect Black women from discrimination in contexts where their intersectional experience was not just the sum of racism and sexism. Reflecting intersectionality in practice, however, can be difficult given the many possible identities of interest and the nuanced ways in which they can interact. This panel will explore different possible approaches for implementing intersectionality in algorithmic fairness.
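To make the hiding effect concrete, here is a minimal sketch (synthetic data and hypothetical attribute labels of my own, not material from the panel): every single-attribute group below receives positive outcomes at the same 0.5 rate, yet an audit of the intersectional subgroups reveals rates ranging from 0.2 to 0.8.

```python
from collections import defaultdict
from itertools import product

# Hypothetical model decisions: (race, gender, positive_outcome).
# Counts are synthetic, chosen so every single-attribute group looks
# fair while the intersectional subgroups do not.
records = (
    [("A", "M", 1)] * 2 + [("A", "M", 0)] * 8 +
    [("A", "W", 1)] * 8 + [("A", "W", 0)] * 2 +
    [("B", "M", 1)] * 8 + [("B", "M", 0)] * 2 +
    [("B", "W", 1)] * 2 + [("B", "W", 0)] * 8
)

def positive_rate(rows):
    return sum(r[2] for r in rows) / len(rows)

# Marginal audit: each race and each gender has a 0.5 positive rate.
for index, name in [(0, "race"), (1, "gender")]:
    groups = defaultdict(list)
    for record in records:
        groups[record[index]].append(record)
    for group, rows in sorted(groups.items()):
        print(f"{name}={group}: {positive_rate(rows):.2f}")

# Intersectional audit: subgroup rates range from 0.2 to 0.8.
for race, gender in product("AB", "MW"):
    rows = [r for r in records if r[0] == race and r[1] == gender]
    print(f"({race},{gender}): {positive_rate(rows):.2f}")
```

Auditing only the marginals would report perfect parity here; the disparity is visible only at the intersections.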
Dr. James Foulds is an Assistant Professor in the Department of Information Systems at the University of Maryland, Baltimore County (UMBC). His research aims to improve the role of artificial intelligence in society by addressing issues of fairness, bias, and privacy, and by promoting the practice of computational social science. His master's and bachelor's degrees were earned with first-class honours at the University of Waikato, New Zealand, where he also contributed to the Weka data mining system. He earned his Ph.D. in computer science at the University of California, Irvine, and was a postdoctoral scholar at the University of California, Santa Cruz, followed by the University of California, San Diego. His research in socially conscious artificial intelligence and machine learning has been supported by the NSF CAREER award, the NSF CISE Research Initiation Initiative (CRII) Award, and several other grants from NSF and NIST. He has served in organizing roles for the AISTATS conference, the ITA workshop, and the Ethics in Data Science Pedagogy workshop (EDSP).
Youjin Kong will be an Assistant Professor in Philosophy at the University of Georgia, starting in August 2023. She is currently a Visiting Assistant Professor in Philosophy at Oregon State University. Located at the nexus of ethics of AI, social-political philosophy, and feminist philosophy, her research critically analyzes how AI reproduces gender and racial injustice, and develops philosophical frameworks for promoting fairness in AI. She is also committed to advancing the field of Asian American feminist philosophy, which remains underrepresented in the philosophy literature. She teaches courses on ethical issues arising in emerging technologies, as well as courses in social and moral philosophy.
Dr. Yolanda A. Rankin is an Assistant Professor in the School of Information at Florida State University. As the Director of the DEsigning TechnOlogies for the UndeRserved (DETOUR) Research Lab, she merges Black feminist epistemologies with participatory design practices to understand the diverse perspectives and information needs of minoritized populations who are often denied access to technology or given only limited access as consumers. Leveraging Black feminist thought and intersectionality as critical frameworks, her research reveals (1) how intersecting identities and systems of power impact Black women’s ability to persist in the field of computing and (2) how centering the lived experiences of Black people and other historically excluded populations contributes to more equitable design practices. A McKnight Fellow (2020-2021) and a Woodrow Wilson Fellow (2016), Dr. Rankin has published more than 40 peer-reviewed publications, including journal articles, conference papers, and books. Prior to entering academia, she accumulated more than twelve years of industry experience while employed at IBM Research Lab – Almaden in San Jose, CA and Lucent Technologies Bell Labs in Naperville, IL. Yolanda completed her Ph.D. in Computer Science at Northwestern University, her M.A. in Computer Science at Kent State University, and her B.S. in Mathematics at Tougaloo College, a historically Black college in Jackson, Mississippi.
Dr. Olga Russakovsky is an Assistant Professor in the Computer Science Department at Princeton University. Her research is in computer vision, closely integrated with the fields of machine learning, human-computer interaction, and fairness, accountability, and transparency. She has been awarded AnitaB.org's Emerging Leader Abie Award in honor of Denice Denton in 2020, the CRA-WP Anita Borg Early Career Award in 2020, the MIT Technology Review's 35-under-35 Innovator award in 2017, the PAMI Everingham Prize in 2016, and Foreign Policy Magazine's 100 Leading Global Thinkers award in 2015. In addition to her research, she co-founded and continues to serve on the Board of Directors of AI4ALL, a nonprofit dedicated to increasing diversity and inclusion in Artificial Intelligence (AI). She completed her PhD at Stanford University in 2015 and her postdoctoral fellowship at Carnegie Mellon University in 2017.
Cynthia Dwork, Gordon McKay Professor of Computer Science at Harvard, Affiliated Faculty at Harvard Law School and Department of Statistics, and Distinguished Scientist at Microsoft, is renowned for placing privacy-preserving data analysis on a mathematically rigorous foundation. She has also made seminal contributions in algorithmic fairness, cryptography, and distributed computing. Her current focus is on the theory of algorithmic fairness. Dwork is the recipient of numerous awards including the IEEE Hamming Medal, the Gödel Prize, and the ACM Paris Kanellakis Theory and Practice Award. Dwork is a member of the US National Academy of Sciences and the US National Academy of Engineering, and is a Fellow of the American Academy of Arts and Sciences and the American Philosophical Society.
In recent years, accelerated by the global pandemic, employers in all economic sectors, from low-wage to high-tech, have introduced computational tools that reduce workers' digital and physical autonomy, transferring value from employees to employers, and risk in the other direction. This keynote panel, featuring scholars from across STS, law, computer science, and philosophy, will explore the rise of bossware and algorithmic management, highlighting not only the key empirical trends, but also their underlying causes, and how workers, researchers, and regulators can fight back.
Prof. Frederik Zuiderveen Borgesius is Professor of ICT and Law at Radboud University Nijmegen, where he is affiliated with Radboud's interdisciplinary research hub on digitalization and society, the iHub. His research mostly concerns human rights, such as the rights to privacy, to the protection of personal data, and to non-discrimination, in the context of new technologies.
In 2019, he wrote a report on discrimination, artificial intelligence and algorithmic decision-making for the Council of Europe.
Min Kyung Lee is an assistant professor in human-computer interaction in the School of Information at the University of Texas at Austin. Dr. Lee has conducted some of the first studies that empirically examine the social implications of algorithms’ emerging roles in management and governance in society. She has extensive expertise in developing theories, methods and tools for human-centered AI and deploying them in practice through collaboration with real-world stakeholders and organizations. She developed a participatory framework that empowers community members to design matching algorithms that govern their own communities. Dr. Lee is a Siebel Scholar and has received the Allen Newell Award for Research Excellence, research grants from NSF and Uptake, and six best paper awards and honorable mentions and two demo/video awards in venues such as CHI, CSCW, DIS and HRI. She is an Associate Editor of Human-Computer Interaction and a Senior Associate Editor of ACM Transactions on Human-Robot Interaction.
Dr. Wilneida Negron is the Director of Research and Policy at Coworker.org. She most recently worked at the Ford Foundation, where she led cross-thematic strategy development among the Gender, Race, and Ethnic Justice; Technology and Society; Mission Investing; Future of Work(ers); and Civic Engagement thematic areas, with a focus on helping labor movements deepen and leverage economic and movement-based partnerships. She is a lifelong fellow of the Data & Society Research Institute and the Atlantic Institute for Racial Equity.
Rida is a Research Scientist with Google's Ethical AI team. She received her PhD in Computational Urban Science from the Massachusetts Institute of Technology. Her research examines how communities respond to, repair, and resist algorithmic systems in non-Western urban spaces. She is particularly interested in making visible the algorithmic failures and frictions caused by culturally inappropriate technological design.
This keynote video and panel discussion demonstrate how academics and practitioners can partner with community groups to audit real-world algorithmic systems. In 2020, just as the pandemic began, app-based workers for Shipt, Target’s delivery company, began reporting falling earnings. They challenged Shipt’s claims that a new, opaque, black-box pay algorithm fairly rewarded workers for “effort”. The video featured in this keynote details how workers partnered with an academic researcher to independently evaluate the new algorithm’s impact through community research and design. They co-designed a tool, an SMS chat bot, that collected and analyzed over 200 workers' pay histories to perform a “real world” audit of Shipt’s new algorithm. The audit showed that the new algorithm effectively cut the pay of over 40% of workers. The resulting findings fueled an organizing campaign that made national headlines. This keynote argues that to have real-world impact, researchers and practitioners of algorithmic fairness should partner with, and be guided by, workers and others who are most affected by automated systems.
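As a schematic of what such a community audit computes (a minimal sketch with synthetic numbers; this is not the chat bot's code or the actual Shipt data), one can compare each worker's reported per-order pay before and after the algorithm change and report the share of workers whose pay fell:

```python
from statistics import median

# Illustrative worker-submitted pay histories (synthetic): average
# payout per order before and after the new pay algorithm.
pay_histories = [
    {"worker": "w1", "before": 22.0, "after": 14.5},
    {"worker": "w2", "before": 18.0, "after": 19.0},
    {"worker": "w3", "before": 25.0, "after": 16.0},
    {"worker": "w4", "before": 20.0, "after": 12.0},
]

# Relative pay change per worker, and the share who took a cut.
changes = [(h["after"] - h["before"]) / h["before"] for h in pay_histories]
cut_share = sum(1 for c in changes if c < 0) / len(changes)

print(f"median pay change: {median(changes):+.1%}")
print(f"share of workers with a pay cut: {cut_share:.0%}")
```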
Drew Ambrogi serves as Coworker.org’s Digital Director where he leverages digital tools to support workers’ efforts to build power. Drew also leads Coworker’s projects with gig workers including worker-led research, campaign strategy and support, and policy work. Before joining Coworker.org in 2019, Drew worked as a digital strategist at a national racial justice organization, where he served as President of their staff union, and on the side as a communications consultant for grassroots community organizations. Drew currently lives and works in Washington, DC.
Vanessa Bain is a former educator from Silicon Valley who began working full-time as an Instacart Shopper in 2016. She has been organizing Instacart Shoppers at the grassroots level for over five years and has organized several national walkouts, boycotts, and direct actions over labor grievances, worker classification, and workplace safety in the grocery gig economy. In January 2020, Bain cofounded Gig Workers’ Collective, a worker collective that fosters worker-led organizing and advocacy in the gig economy. Bain has presented at UC Berkeley and Cornell, and was a featured presenter at last year's American Institute for Public Health's Annual Meeting on the topic of Organizing the Gig Economy for Social Justice. Bain is passionate about building worker power, policy, intersectional organizing, and social and economic justice. Bain has been profiled by the Washington Post.
Dan is a doctoral candidate at the MIT Media Lab studying how data stewardship and analysis can impact community governance. Their current work investigates how data rights and labor rights intersect and explores how to build real-world tools that help workers build power by leveraging the data they generate at work. Their research on community surveillance and on tools for gig workers has been discussed in Gizmodo, Wired, Reuters, the New York Times, and other major publications. They have exhibited digital art addressing themes such as urban inequality and digital surveillance in galleries around the globe. They also have experience as a startup co-founder, a machine learning researcher, and a data science bootcamp teacher. They received their B.S. from Northeastern University in 2015 and their M.Sc. from MIT in 2019.
Willy Solis is a Shipt Shopper from the Dallas, Texas, Metroplex. Willy has a background in construction and has run his own business since 2008. In 2019, Willy began working as a Shipt Shopper, and in January 2020, when Shipt implemented a devastating pay cut, he began organizing his fellow Shipt Shoppers at the grassroots level. In February 2020 he formalized a relationship with Gig Workers' Collective, where he has served as the lead organizer for Shipt Shoppers nationally. Organizing successes include securing PPE for hundreds of thousands of Shipt Shoppers and the repayment of hundreds of thousands of dollars in misappropriated tips. Willy has partnered with organizations such as Human Rights Watch, MIT, and Coworker on generative worker-centric research and data about the gig economy. Solis is passionate about building worker power, policy, and social and economic justice. In the past year, Solis has frequently been a featured panelist and presenter on gig economy issues; most recently he was a featured speaker at SXSW. Solis has been profiled by NPR and The Hill.
Danny is a UX researcher for the cooperative economy. He is currently Lead Researcher with Turning Basin Labs, a staffing and training co-op based in California, where he facilitates worker- and user-led studies. Previously, Danny worked with a variety of community-owned digital tools, including Up & Go, a platform for booking home cleanings in NYC. He also created the Ownership Model Canvas with the co-op accelerator Start.coop, and organized with the Exit to Community Collective, creating resources for startups to build community leadership and ownership. Danny believes everyone can do influential, standardized, and politically imaginative research.
Media companies have always curated the public sphere of the political community where they operate. They shape the information environment in which the community deliberates about collective action—deciding what is included and excluded, what is amplified and reduced. But in the age of mass participation in social media, curation is not just about selecting content, it is about governing the social relations from which that content emerges—shaping not only what we may learn or see, but how we relate to one another as members of a political community. Increasingly, this governance is algorithmic: recommender systems determine what speech to amplify, what to reduce, and (often with some human oversight) what to remove. The social impacts of algorithmic governance of the public sphere are highly contested; the paramount importance of using these tools more effectively to realise our social and political ideals surely less so. This keynote panel brings together scholars from communication studies, philosophy, law and computer science to better understand the nature of algorithmic governance of online speech, and to propose regulatory and technological paths forward.
Meredith D. Clark (@MeredithDClark; she/her/hers) is an associate professor in the School of Journalism and the Department of Communication Studies at Northeastern University. Her research focuses on the intersections of race, media, and power. Her first book, “We Tried to Tell Y’all: Black Twitter and Digital Counternarratives”, is forthcoming from Oxford University Press. Her research has also been published in Communication & the Public; Communication, Culture & Critique; Electronic News; Journalism & Mass Communication Educator; Journal of Social Media in Society; New Media & Society; and Social Movement Studies. She’s been quoted in The New York Times, The Washington Post, and The Associated Press, and has been a guest on “Full Frontal with Samantha Bee,” as well as NPR’s “All Things Considered” and “Code Switch,” among other media appearances. She was a 2020-2021 faculty fellow with the Data & Society Research Institute.
Clark is currently serving a four-year leadership term with the Association for Education in Journalism & Mass Communication’s Council of Divisions, where she was formerly chair of the Commission on the Status of Women. A longtime member of the National Association of Black Journalists (NABJ), she was the faculty advisor for the UVA chapter of NABJ from 2017 to 2021. Clark currently serves as a faculty affiliate at the Center on Digital Culture & Society at the University of Pennsylvania, and academic lead for “Documenting the Now,” a community-based digital archives project supported by the Andrew W. Mellon Foundation. Clark also sits on the advisory boards for Project Information Literacy (Harvard University); the Center for Critical Race and Digital Studies (New York University); and the news nonprofit Report for America. She oversaw the annual Newsroom Diversity Survey for the News Leaders Association (formerly the American Society of News Editors) from 2018 to 2021.
Tarleton Gillespie is a senior principal researcher at Microsoft Research, and an affiliated associate professor in the Department of Communication and the Department of Information Science at Cornell University. He is the author of Wired Shut: Copyright and the Shape of Digital Culture (MIT, 2007), co-editor of Media Technologies: Essays on Communication, Materiality, and Society (MIT, 2014), and author of Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (Yale, 2018).
Daphne Keller's work focuses on platform regulation and Internet users' rights. She has testified before legislatures, courts, and regulatory bodies around the world, and published both academically and in popular press on topics including platform content moderation practices, constitutional and human rights law, copyright, data protection, and national courts' global takedown orders. Her recent work focuses on legal protections for users’ free expression rights when state and private power intersect, particularly through platforms’ enforcement of Terms of Service or use of algorithmic ranking and recommendations. Until 2020, Daphne was the Director of Intermediary Liability at Stanford's Center for Internet and Society. She also served until 2015 as Associate General Counsel for Google, where she had primary responsibility for the company’s search products. Daphne has taught Internet law at Stanford, Berkeley, and Duke law schools. She is a graduate of Yale Law School, Brown University, and Head Start.
Jon Kleinberg is the Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University. His research focuses on the interaction of algorithms and networks, the roles they play in large-scale social and information systems, and their broader societal implications. He is a member of the National Academy of Sciences and the National Academy of Engineering, and serves on the US National AI Advisory Committee. He has received MacArthur, Packard, Simons, Sloan, and Vannevar Bush research fellowships, as well as awards including the Harvey Prize, the Nevanlinna Prize, the Newell Award, and the ACM Prize in Computing.
Karen Hao is a Hong Kong-based reporter at the Wall Street Journal, covering tech and society in China. She was previously a senior editor at MIT Technology Review, covering cutting-edge AI research and its impacts on society. Her work is regularly taught in universities, including Harvard, Stanford, and Yale, and cited in government reports and by Congress. She has won numerous awards, including an ASME Next Award, the highest honor for magazine journalists under 30. In a past life, she was an application engineer at the first startup to spin out of Alphabet's X. She received her B.S. in mechanical engineering, with a minor in energy studies, from MIT.
William Isaac is a Staff Research Scientist at DeepMind, Advisory Board Member of the Human Rights Data Analysis Group, and Research Affiliate at Oxford University Centre for the Governance of AI. His research focuses on the societal impact and governance of emerging technologies. Prior to DeepMind, William served as an Open Society Foundations Fellow. His research has been featured in publications such as Science, New York Times, and the MIT Technology Review.