The goal of the tutorials is to broaden the perspective of our interdisciplinary community by addressing practical, technical, policy, regulatory, ethical, or societal issues related to FAccT. We solicited three types of tutorials: Translation Tutorials, which foster dialogue between disciplines; Implications Tutorials, which describe the effects of algorithmic systems on society; and Practice Tutorials, which focus on a specific tool or framework.

Translation Tutorials

A Hands-On Introduction to Large Language Models for Fairness, Accountability, and Transparency Researchers

Maria Antoniak* (Allen Institute for AI), Melanie Walsh* (University of Washington), Luca Soldaini* (Allen Institute for AI), David Mimno* (Cornell University), and Matthew Wilkens (Cornell University)

This tutorial will offer a hands-on, technical introduction to large language models (LLMs) for fairness, accountability, and transparency researchers who might have less familiarity with the inner workings of these models but who are interested in exploring, auditing, or anticipating their capabilities. We will focus on building practical knowledge of (a) how these models work and how they are trained and (b) how practitioners can work with these models, via hands-on, accessible coding tutorials. Throughout the tutorial, we will focus on potential use cases that we believe are of particular interest, such as measuring biases (e.g., between vector representations, in generative outputs), analyzing training data and output coverage and attribution, and examining outputs for private information and toxicity. We will also discuss particular ways that FAccT researchers can contribute to improving the design and release of these models. Tutorial materials can be found at www.bertforhumanists.org.
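
As one illustration of the kind of bias probe mentioned above, the following is a minimal sketch that compares how closely a sentence-embedding model places occupation terms to gendered terms. It is not drawn from the tutorial's own materials (see www.bertforhumanists.org); the model name and word lists are illustrative assumptions.

```python
# Minimal sketch: a WEAT-style association probe on sentence embeddings.
# Assumes the `sentence-transformers` package; model and word lists are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model would do

occupations = ["nurse", "engineer", "teacher", "surgeon"]
female_terms = ["she", "woman", "her"]
male_terms = ["he", "man", "his"]

occ_vecs = model.encode(occupations)
f_vecs = model.encode(female_terms)
m_vecs = model.encode(male_terms)

for occ, vec in zip(occupations, occ_vecs):
    # Mean similarity to each gendered term set; the gap is a crude bias signal.
    f_sim = cosine_similarity([vec], f_vecs).mean()
    m_sim = cosine_similarity([vec], m_vecs).mean()
    print(f"{occ:10s} female={f_sim:.3f} male={m_sim:.3f} gap={f_sim - m_sim:+.3f}")
```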

Steering Language Models with Reinforcement Learning from Human Feedback and Constitutional AI

Amanda Askell (Anthropic), Deep Ganguli* (Anthropic), and Nathan Lambert* (Hugging Face)

Reinforcement learning from human feedback (RLHF) is a recent technique that has dramatically improved the real-world performance and user experience of large language models, both increasing their helpfulness and actively reducing the harms they can cause. Constitutional AI (CAI) is a technique built on RLHF that reduces the amount and variety of human feedback required. In this tutorial, we will provide a high-level overview of RLHF and CAI. We will describe the technical processes and procedures required to make RLHF and CAI work, and lead a discussion on their advantages as well as current challenges and limitations.
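
To give a flavor of one of those technical processes: a core step of RLHF is fitting a reward model to pairwise human preferences. The sketch below shows the standard Bradley-Terry-style loss on (chosen, rejected) response pairs; the scalar rewards are toy values, and the tutorial itself does not prescribe this or any particular implementation.

```python
# Minimal sketch of the pairwise preference loss used to train an RLHF reward model.
# In practice the rewards come from a language model with a scalar head; here they are toy values.
import torch

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: push the reward of the human-preferred response
    # above the reward of the rejected response.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy example: scalar rewards for a batch of (chosen, rejected) pairs.
r_chosen = torch.tensor([1.2, 0.3, 0.8])
r_rejected = torch.tensor([0.4, 0.5, -0.1])
print(preference_loss(r_chosen, r_rejected))  # loss shrinks as chosen rewards exceed rejected ones
```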

Practices and limitations of participatory methods: views from computer science, political science and design

Emily Black* (Stanford University), Sofia Bosch Gomez (Northeastern University), and Luisa Godinez-Puig (Urban Institute)

In response to harmful algorithmic systems, the machine learning (ML) community has called for greater community engagement in the development and research of public-facing AI systems. The hope is that stakeholder participation will lead to more equitable outcomes, whether in the form of less harmful AI systems or more transformative research products. Despite some successes, there remain unknowns regarding the challenges and limitations of using participatory research techniques in the AI lifecycle. This tutorial aims to inform the FAccT community about how participatory methods can be used in the AI lifecycle, while also highlighting the shortcomings of participatory approaches through case studies. Participatory methodologies do not inherently focus on increasing equity; equitable processes must be centered in project design to avoid unintended harmful practices. Participants will leave with a better understanding of the main tenets of participatory methodologies and their practical limitations.

Using Technical Skills to Fight Actual Public Benefits Cuts and Austerity Policies, with the Benefits Tech Advocacy Hub

Emma Weil* (Upturn) and Elizabeth Edwards* (National Health Law Program)

The U.S. social safety net is designed to determine whether someone is “truly deserving” of assistance: a distinction rooted in racist, ableist, xenophobic, and sexist scrutiny of the autonomy of marginalized people. These determinations are increasingly made by standardized assessments and operationalized within large software systems—which often fail, affecting people at a mass scale. When people cannot get the resources they need, they suffer compounding economic and health consequences, sometimes severely. This tutorial will give technologists an introduction to how they can support on-the-ground challenges to public benefits technology, as well as crucial context about the politics and history of the U.S. social safety net. This “technical assistance” to advocates is a concrete way that members of the FAccT community can use their skills to intervene in government use of technology that denies people access to essential support, while not conceding to austerity logics.

Contextualizing AI with Cross-Cultural Perspectives

Aida Davani* (Google Research) and Sunipa Dev* (Google Research)

Training and evaluation of AI models rely heavily on semi-structured data annotated by humans. Both the data and the human perspectives involved in the process thus play a key role in what models take as ground truth. Historically, these perspectives have been Western-oriented, which leads to a lack of representation of global contexts and identities in models and evaluation strategies, and to the risk of disregarding marginalized groups that are most significantly affected by implicit harms. Accounting for cross-cultural differences in how people interact with technology is an important step toward building and evaluating AI holistically. This tutorial walks through different strategies, including participatory approaches and survey experiments, for capturing a more diverse set of perspectives in data curation and benchmarking efforts. We zoom in on how cultural differences can explain human disagreements about language interpretation that inform model evaluations, and how socio-culturally aware AI research can fill gaps in fairness evaluations.

A Guiding Framework for Vetting Technology Vendors Operating in the Public Sector

Cynthia Conti-Cook (The Ford Foundation), David Liu* (Northeastern University), Roya Pakzad (Taraaz), and Sarah Ariyan Sakha* (independent)

This tutorial aims to improve the vetting process for technology vendors operating in the public sector, focusing on bridging communication gaps between governments, civil society organizations, philanthropies, and technology vendors. With the growing presence of technology vendors in the public sector, an effective and transparent vetting process is crucial. The tutorial is based on the framework developed by The Ford Foundation in collaboration with Taraaz. It contains a list of red flags across seven categories: theory of change and value proposition; business model and funding; organizational governance, policies, and practices; product design, development, and maintenance; third-party relationships, infrastructure, and supply chain; government relationships; and community engagement. The tutorial's goal is to better equip both funders and vendors to understand potential technological harms and limitations, while promoting dialogue on human rights, social and economic justice, and democratic values.

Implications Tutorials

Generative AI meets Responsible AI: Practical Challenges and Opportunities

Krishnaram Kenthapadi (Fiddler AI), Hima Lakkaraju* (Harvard University), and Nazneen Rajani* (Hugging Face)

Generative AI models and applications are being rapidly deployed across several industries, but their deployment raises ethical and social considerations that need to be addressed. These include lack of interpretability, bias and discrimination, privacy, lack of model robustness, fake and misleading content, copyright implications, plagiarism, and environmental impact. This tutorial focuses on the need to adopt responsible AI principles when developing and deploying large language models and other generative AI models. It provides a technical overview of text and image generation models, and highlights key responsible AI desiderata associated with these models. The tutorial also provides real-world generative AI use cases and practical solution approaches and guidelines for applying responsible AI techniques effectively. The tutorial concludes by discussing lessons learned and open research problems. We hope that our tutorial will inform researchers and practitioners, stimulate further research on responsible AI in the context of generative AI, and encourage building more reliable and trustworthy generative AI applications in the future. Tutorial materials can be found at https://sites.google.com/view/responsible-gen-ai-tutorial/.

AI Governance and Policy in the US: Spotlight on the Blueprint for an AI Bill of Rights

Sorelle Friedler* (Haverford College) and Marc Aidinoff* (Institute for Advanced Study)

US and international policy has increasingly focused on concerns about algorithmically driven harms, especially those relating to artificial intelligence (AI) and algorithmic discrimination. Last fall, the White House released the Blueprint for an AI Bill of Rights, which includes principles as well as a technical companion meant to help move policy from principles to practice. In this tutorial, we will discuss cross-cutting questions that policy around AI must address and the concrete steps that the FAccT research community can take to be helpful as policymakers struggle with these complex questions.

Practice Tutorials

Finding and Using Undocumented APIs for Algorithm Audits

Leon Yin* (The Markup), Piotr Sapiezynski* (Northeastern University), and Inioluwa Deborah Raji (Mozilla Foundation; University of California Berkeley)

Data-driven journalism and external algorithm audits rely on rich, purpose-built datasets. Creating such datasets is far from trivial. Despite the promising trend of online platforms creating academic APIs and ad libraries, many pertinent questions cannot be answered with such curated information. Instead, journalists and auditors rely on bespoke tools to gather public data that has not yet been synthesized. The skills necessary for this kind of work are seldom taught in traditional coursework. This tutorial is a first step toward addressing this gap. We will cover case studies of investigative journalism and algorithm audits that focus on the technical challenges of data collection. We will introduce practical skills that we use every day in our work, while providing participants with a toolkit that will allow them to identify undocumented APIs in the wild and use them to collect the data relevant to their work. Tutorial materials can be found at https://inspectelement.org/apis.html.
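
To give a flavor of the workflow, the sketch below shows a common pattern: replaying a JSON endpoint discovered in the browser's network inspector and paginating through it with `requests`. The endpoint, parameters, and response fields here are hypothetical placeholders, not any real platform's API, and they stand in for whatever a particular investigation uncovers.

```python
# Minimal sketch: replaying an undocumented JSON endpoint found via the browser's
# network inspector. The URL, parameters, and response fields are hypothetical.
import time
import requests

BASE_URL = "https://example.com/api/v2/search"  # placeholder endpoint
HEADERS = {"User-Agent": "research-audit-bot (contact: you@example.org)"}

def fetch_all(query: str, page_size: int = 50):
    results, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL,
            params={"q": query, "page": page, "per_page": page_size},
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        items = resp.json().get("items", [])
        if not items:
            break
        results.extend(items)
        page += 1
        time.sleep(1)  # be polite: rate-limit requests to the platform
    return results

# Example: fetch_all("housing ads") would page through results until the endpoint runs dry.
```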

Using the NIST AI Risk Management Framework

Elham Tabassi* (NIST), Reva Schwartz* (NIST), Kathy Baxter (Salesforce), Sina Fazelpour* (Northeastern University), Luca Belli* (NIST), and Patrick Hall* (BNH.AI, GWU)

AI system risks and resulting negative impacts can emerge for a variety of reasons and do not just stem from challenges with datasets, models, or algorithms. AI systems are built within organizational environments, based on individual and group decisions across enterprises that reflect a variety of incentives and purposes. The contextual realities of AI system deployment are another contributing source of risk. The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) provides organizations with a guiding structure to operate within, and outcomes to aspire toward, based on their specific contexts, use cases, and skillsets. The rights-affirming framework operationalizes AI system trustworthiness within a culture of responsible AI practice and use. Tutorial attendees will learn how to use the AI RMF to enhance their organizational AI risk management posture and governance mechanisms, operationalize AI system trustworthiness, and bring contextual awareness into their organizational practices.

Integrating notions of fairness and demographic variance into large personalized advertising systems

Miranda Bogen* (Meta), Sean Gahagan* (Meta), and Aditya Srinivas Timmaraju (Meta)

Fairness in personalized ads has emerged as an area of significant focus for policymakers, regulators, civil rights groups, industry, and other stakeholders. Early efforts to address related concerns focused on preventing potential discrimination by changing how advertisers can use tools to target their ads (especially those offering housing, employment, or credit). Over time, concerns have shifted to the potential for exclusion in the machine learning-driven process that platforms often use to decide who within the target audience ultimately sees an ad. This tutorial will begin with a presentation detailing how concerns about fairness in personalized advertising have evolved in a practical setting, the gaps and trade-offs between theoretical recommendations and implementation constraints, and a novel approach that was developed to address these concerns. The presentation will be followed by reflections from 1-2 discussants and structured breakout conversations in which attendees can reflect on open questions raised by presenters and discussants.

When the Rubber Meets the Road: Experience Implementing AI Governance in a Public Agency with the City of San José

Albert Gehami* (City of San José) and Leila Doty* (City of San José)

AI best practices are still developing as governments grapple with how to actually operationalize trustworthy AI principles. The City of San José is a leader among U.S. cities, having developed and applied an AI governance framework that promotes the procurement and deployment of trustworthy AI systems. In this session, practitioners from the City of San José discuss their new AI Review Framework based on existing guidance, their experience piloting the framework, and the practitioner skills that are necessary for its implementation. They identify the elements necessary for AI governance to be successful when “the rubber meets the road.”

Theories of Propaganda and New Technology: Applications and Interventions

Megan Hyska* (Northwestern University) and Michael Barnes* (Australian National University)

Innovations in communications technologies have always had consequences for the way that political actors try to influence one another. New technology, in other words, means new forms of propaganda. With the continued expansion of AI/ML technologies into more corners of our lives, it is crucial to understand how these technologies will alter the way that propaganda, and the strategies for mitigating its negative effects, might come to operate. This tutorial will be composed of two substantive mini-lectures followed by a structured discussion. We will begin with an overview of what propaganda is, focusing on theories that emphasize its role in the destruction, formation, and control of group agency. Next, we will demonstrate how this understanding of propaganda illuminates some potentially concerning uses of AI/ML technologies. Finally, our structured discussion will take up questions about how this notion of propaganda suggests different harm-reducing interventions on the part of developers and regulators.

Responsible AI Toolbox

Besmira Nushi* (Microsoft Research), Rahee Ghosh Peshawaria* (Microsoft Research), Mehrnoosh Sameki (Microsoft), Minsoo Thigpen* (Microsoft), and Wenxin Wei* (Microsoft)

Responsible AI Toolbox is an open-source framework for accelerating and operationalizing Responsible AI via a set of interoperable tools, libraries, and customizable dashboards. The toolbox supports the machine learning lifecycle through the stages of identifying, diagnosing, and mitigating Responsible AI concerns and then validating and comparing different mitigation actions. In this tutorial, we will summarize and demonstrate the different tools available to the community today and illustrate how they can be used together for debugging and improving machine learning models trained on different data types, including structured data, images, and text. Through several case studies and user stories, we will share how the tools are being used in practice and describe the main challenges faced during deployment or adoption. Finally, the tutorial will conclude by identifying future opportunities for open collaboration in this space that can enable participatory tool design and implementation. Tutorial materials can be found at https://www.microsoft.com/en-us/research/uploads/prod/2023/06/responsible_ai_toolbox_facct_tutorial_2023.pdf.
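
For orientation, the snippet below sketches how the toolbox's Python entry point (the `responsibleai` and `raiwidgets` packages) is typically wired up for a tabular classification model. The dataset and model are stand-ins chosen for a self-contained example, and exact constructor arguments may differ across toolbox versions.

```python
# Minimal sketch: wiring a trained tabular classifier into the Responsible AI Toolbox.
# Dataset and model are placeholders; argument details may vary across versions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Toy tabular task standing in for a real application.
data = load_breast_cancer(as_frame=True)
df = data.frame  # features plus a "target" label column
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

model = RandomForestClassifier().fit(train_df.drop(columns="target"), train_df["target"])

rai_insights = RAIInsights(
    model, train_df, test_df,
    target_column="target",
    task_type="classification",
)
rai_insights.explainer.add()       # global and local feature-importance explanations
rai_insights.error_analysis.add()  # error tree / heatmap over data cohorts
rai_insights.compute()

ResponsibleAIDashboard(rai_insights)  # launches the interactive debugging dashboard
```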