The goal of tutorials is to broaden the perspective of our interdisciplinary community, addressing practical, technical, policy, regulatory, ethical, or societal issues related to FAccT. We solicited three types of tutorials: Translation Tutorials to foster dialogue between disciplines, Implications Tutorials to describe the effects of algorithmic systems in society, and Practice Tutorials focused on a specific tool or framework.

Ethical Design Science Research – an Interdisciplinary AI Ethics Framework

Dialogue/Translation tutorial Eva Maria Hille, Rheinische Friedrich-Wilhelms-Universität Bonn; Hannah Bleher, Rheinische Friedrich-Wilhelms-Universität Bonn

In recent years, the subject of AI ethics has assumed a prominent role in the broader AI discourse, yet two major challenges remain: the gap between theoretical frameworks and their practical application, and the difficulty of addressing inequalities and vulnerabilities emerging from AI use. This tutorial responds to these challenges by introducing the Ethical Design Science Research (EDSR) framework – a methodological approach that integrates ethical reflection directly into the AI development process. Based on the Design Science Research model, EDSR incorporates ethical theory, stakeholder perspectives, and social context through three iterative cycles: relevance, rigor, and design. Drawing on feminist ethics and wide reflective equilibrium, the framework highlights the importance of attending to the lived experiences and structural vulnerabilities of those affected by AI systems. The tutorial consists of three parts: an introduction to the EDSR framework, a hands-on group activity applying EDSR to participants’ own cases, and a reflective discussion on challenges and adaptations. This tutorial provides the FAccT community with a concrete and participatory methodology to bridge the gap between ethical principles and real-world practice.

A Citizen’s Practical Guide to Creating Algorithmic Transparency Registers

Translation tutorial Kimberlee Weatherall, University of Sydney Law School; Alexandra Sinclair, University of Sydney Law School; Joseph Summers, Public Law Project

In this tutorial, two organisations, the Public Law Project in London, UK, and the Australian Research Council Centre for Automated Decision-making and Society (ADM+S) in Sydney, Australia, give a detailed account of how they developed and published algorithmic transparency registers recording the use of automated systems by government in their respective jurisdictions. The tutorial provides a step-by-step guide for civil society groups and/or researchers who wish to develop their own transparency registers of automated systems in government. Additionally, the tutorial explains how individuals and civil society groups can make use of existing algorithmic transparency registers in their litigation and advocacy. The tutorial is also useful for civil society groups wishing to make a case to policy-makers for compulsory registers, or working with government on the practical details of what should be included where registers are proposed.

“Shh! Don’t Say ***”: Ethical Community Engagement in the Era of Anti-Equity Backlash

Dialogue tutorial Tina M. Park, Partnership on AI; Wilneida Negrón, coworker.org; Lara Groves, Ada Lovelace Institute; Eric Corbett, Google Research; Jacob Metcalf, Data & Society

While tremendous progress has been made over the past several years to engage socially marginalized communities and other impacted people in the AI development process, shifts in government and industry priorities may threaten these advancements. This tutorial offers a practical and collaborative space to explore how to sustain ethical, community-centered engagement under growing legal, political, and organizational pressure to retreat from equity-focused work. Drawing on lessons from previous cycles of anti-equity backlash in other fields, as well as insights gained from workshop participants’ own experiences, we will collectively brainstorm ways to navigate this moment and continue facilitating participatory stakeholder engagements in AI. In this session, participants will not only deepen their understanding of these risks but also develop practical strategies to continue this work — even under politically hostile conditions. This tutorial will help AI researchers and practitioners build ethical resilience into their work, protecting participatory methods from becoming politically obsolete.

The Pentagon and Silicon Valley, or There and Back Again: Research Approaches to Investigating Sociotechnical Systems and Conflict

Dialogue/Translation tutorial Shazeda Ahmed, University of California; Joshua A. Kroll, Naval Postgraduate School; Andrew Smart, Google Research; Tina M. Park, Independent Researcher; David G. Widder, Cornell Tech

Questions about the adoption of AI technologies by militaries around the globe raise important socio-political issues and concerns about ethical and societal ramifications: does the automation of warfare create new risks? The recent conflict in Gaza has prompted significant discussion of the ways state militaries and tech companies leverage machine learning technology to intensify warfare. Through a series of lightning talks and audience discussion, this tutorial explores the nature and meaning of military AI use and the applications where advanced technology raises difficult ethical questions, covering the history of interconnection between the technology industry and defense projects around the globe, the ways advanced automation integrates into the work of war, the dual-use nature of research in computing (including FAccT research) and its enlistment for military purposes, and the context of great power competition. Warfighting is an inherently sociotechnical effort, with military personnel using tools and technologies within larger command-and-control structures subject to law, international norms, doctrinal guidance, and national strategy. Automating portions of this system restructures control and accountability relationships in ways that raise difficult ethical quandaries that FAccT research can speak to directly. This workshop-style tutorial provides attendees with a primer on the terms, concepts, and actors driving military AI uses; an understanding of the dual-use nature of AI and AI research in the military context; and guidance on how to engage with military issues while undertaking FAccT research. It also provides space for the community to share where such questions present struggles as militaries adopt these technologies for both battlefield use and supporting functions.

Understanding U.S. Anti-discrimination Requirements for Artificial Intelligence under the Trump Administration

Translation tutorial Emily Black, New York University; Charlotte Burrows, University of California, Berkeley; Pauline Kim, Washington University, St. Louis; Jenny Yang, University of California, Berkeley

The Trump administration signaled almost immediately that its approach to both AI and civil rights issues would differ sharply from that of its predecessor, causing concern within civil rights and Responsible AI communities. Specifically, this administration has embraced deregulation for AI as a means of furthering American dominance in AI innovation, and has broadly opposed—without defining—efforts to foster diversity, equity, and inclusion (DEI). This has led to a wave of organizations and individuals making broad changes, from companies abandoning initiatives to debias AI systems to academics altering their research agendas. And yet, many of these changes are likely not necessary, nor are they compelled by the law. Drawing on Yang’s and Burrows’ decades of experience enforcing federal civil rights laws, this translation tutorial will separate the smoke from the fire around changes in equal opportunity and DEI-related regulation, to help participants understand how the law has and has not changed under the new administration, with a focus on AI and other automated decision systems (ADS). We aim to empower participants to help their institutions continue important civil rights-related initiatives rather than preemptively curtail them out of fear. To accomplish this goal, we will (1) outline the scope and application of existing civil rights law, (2) explain what has and has not changed in the ADS and equal opportunity regulatory landscape, (3) describe how this might concretely impact bias mitigation efforts, and (4) offer practical takeaways for participants, including those in industry, academia, and government, to help prevent the harm that comes from dismantling efforts designed to protect civil rights.

AI Measurement as a Stakeholder-Engaged Design Practice

Translation tutorial Anna Kawakami, Carnegie Mellon University; Su Lin Blodgett, Microsoft Research; Solon Barocas, Microsoft Research; Alex Chouldechova, Microsoft Research; Abigail Jacobs, University of Michigan; Emily Sheng, Microsoft Research; Jenn Wortman Vaughan, Microsoft Research; Hanna Wallach, Microsoft Research; Amy Winecoff, Center for Democracy & Technology; Angelina Wang, Stanford University; Haiyi Zhu, Carnegie Mellon University; Ken Holstein, Carnegie Mellon University

AI systems often fail in deployment due to validity, bias, and value alignment issues. Anticipating these issues effectively requires appropriate approaches and tools for measuring them. However, existing design and evaluation practices often suffer from inappropriate measurement assumptions, made when translating from abstract, unobservable concepts to readily implementable approaches for measuring those concepts. At FAccT 2020, researchers organized a translation tutorial, "The Meaning and Measurement of Bias: Lessons from Natural Language Processing," to introduce the language of measurement modeling as a way to better examine fairness issues in NLP technologies. In the five years since, a small but growing body of work in the FAccT community has called for greater stakeholder participation in decisions about how to measure concepts like fairness, functionality, or stereotyping. Researchers have begun to explore approaches to support such participation in practice, but efforts remain nascent. Importantly, beyond the context of AI, there are rich, existing traditions of this kind of engagement in the design of quantitative measurement approaches. Research communities, from the social and life sciences to philosophy of science and human-computer interaction, have proposed approaches to support the design and evaluation of measurement instruments for concepts like worker "well-being," community "peace," or "quality" of services. This tutorial will introduce concepts, practices, and tools from these disciplines and demonstrate their applicability to collaborative measurement for the evaluation of AI systems. Through concrete case studies, we will re-formulate AI measurement as a collaborative design practice that combines AI expertise and non-AI expertise, from the scholarly expertise of social scientists to the lived expertise of impacted communities. We will conclude with a discussion of opportunities for cross-disciplinary collaboration to support future work in this space.

Opening the State House Door: The Landscape of State AI Policy and the Stakeholders Who Influence It

Translation tutorial Kara Williams, Electronic Privacy Information Center; Grace Gedye, Consumer Reports; Cody Venzke, ACLU

In the United States, states—not the federal government—have become the critical venue for artificial intelligence policy in 2025. In the face of gridlock at the federal level, state legislatures considered almost 700 AI bills in 2024, 113 of which were enacted. Many of those bills focused on specific sectors or public use of AI. However, Colorado passed a comprehensive bill regulating private uses of AI, aimed at building transparency and mitigating bias in the use of AI in high-stakes decisions. Similar bills are pending or were considered in at least 14 states. State legislatures often have minimal resources to build deep subject matter expertise and little support staff, if any. As a result, experts, advocates, and lobbyists play significant roles in shaping bills. Lobbyists for large tech companies have built a strong playbook, planting industry-favored bills while pushing back against strong consumer-protective proposals. In this environment, the voices of experts in computer science, law, and public policy are critical. The presenters for this tutorial work on the frontlines of state-level AI policy debates, testifying in state houses, meeting with legislators, and suggesting amendments. They will give attendees a deep understanding of the AI issues garnering the most legislative attention; a step-by-step guide to how a policy concept becomes a law in U.S. states; and an understanding of how experts—such as FAccT attendees—can influence U.S. state policy for the better, with concrete examples of how to get involved.

Building Transparent AI Systems for Justice: From Criminal Reform to Legal Access

Translation tutorial Aparna Komarla, Redo.io; Arman Aydemir, OpenProBono; Nicholas Colvin, OpenProBono

In this tutorial, we will demonstrate a cross-disciplinary approach to building legal technology for high-risk and high-stakes subject areas. Redo.io (redoio.info) and OpenProBono (openprobono.org) are startups building technology and AI solutions to improve access to justice in the United States. Low-income Americans receive little to no legal assistance for roughly 92% of their civil legal challenges, according to a 2022 survey by the Legal Services Corp. In the past year alone, 74% of low-income households experienced at least one civil legal problem, with 39% facing five or more and 20% grappling with ten or more. In 75% of civil cases, particularly in state courts, which include family and housing courts, at least one party lacks legal representation. In criminal cases, the injustice-in-justice problem takes a different shape. While incarcerated people have access to indigent defense, post-conviction relief is unequally distributed due to resource constraints and political factors. In California, hundreds of criminal cases eligible for resentencing under sentence correction laws remain unidentified among overwhelming caseloads. This tutorial demonstrates how our interpretable and transparent AI systems help laypeople and attorneys fill multifaceted justice gaps. We will present two complementary case studies that demonstrate how our AI tools democratize legal research and correct decades-old injustices in prosecutorial practices.

Pathways among human heuristics and AI harms

Dialogue/Translation tutorial Athina Vakali, Aristotle University of Thessaloniki; Ricardo Baeza-Yates, Universitat Pompeu Fabra, Barcelona

This tutorial aims to illuminate the ties between human heuristics and AI harms in today's mixed human-AI decision making. A bowtie model and an AI sociotechnical harms taxonomy will be used to explore how cognitive science principles act as causes of particular AI harms and consequences. Interdisciplinary attendees will be encouraged to recognize risk events that challenge responsible human and AI preferences, and will gain a deeper understanding of emerging practices through real-world examples at varying risk levels, such as job hiring and health decision-making.

Troubling Translation: Sociotechnical expertise and research in AI policy and governance

Dialogue/Translation tutorial Serena Oduro, Data & Society Research Institute; Melinda Sebastian, Data & Society Research Institute and Public Technology Leadership Collaborative (PTLC); William Agnew, Queer in AI and CMU HCII; Carter Buckner, Queer in AI and University of Arkansas

While many researchers across machine learning, the humanities, and the social sciences aim to ground AI governance and policy in evidence-based research, the reality of bridging research and policy is complex. Often, researchers are asked to “translate” their research findings for policymakers to use, yet ensuring that research is translatable and implemented in policy requires community-building, the right political circumstances, bridging of epistemically and disciplinarily distinct paradigms, and the resources to endure and overcome bureaucratic hurdles. Much of policy work is relationship-based, requires real trust, and is built on cultural competence and mutual understanding. This tutorial will use lessons from the panelists’ experiences as members of the National Institute of Standards and Technology’s US AI Safety Institute Consortium and Data & Society Research Institute’s Public Technology Leadership Collaborative, as well as federal congressional experience, as examples of the challenges and opportunities researchers must be aware of to successfully influence AI policy and governance. We particularly aim for this tutorial to be helpful for researchers and experts representing and/or advocating for historically marginalized communities, and we will share the lessons we have learned in working to bring research rooted in advancing the rights and wellbeing of those communities into potentially hostile policy environments.

Leveraging Survey Methodology to Improve Training Data Quality and Enrich Fairness Research

Translation tutorial Christoph Kern, LMU Munich; Clara Strasser Ceballos, LMU Munich; Frauke Kreuter, LMU Munich

Many harmful consequences and disparate impacts of AI and algorithmic decision-making (ADM) systems can be traced back to biases and errors in training data. We invite the FAccT community to engage with concepts and methods from survey research, an academic field with a strong tradition of assessing and improving data quality. We highlight how data quality frameworks such as the total survey error (TSE) framework offer new perspectives and solutions for fair ML research along two key dimensions of data quality: measurement and representation. We demonstrate which survey research concepts already contribute to building more equitable algorithms and which methods offer promising potential but have not yet been utilized by the fair ML community.
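
To make the representation dimension concrete, here is a minimal illustrative sketch (hypothetical numbers, pandas assumed available; not material from the tutorial itself) of a representation check in the spirit of the TSE framework: comparing a training sample's group shares against a population benchmark and deriving simple post-stratification weights.

```python
import pandas as pd

# Hypothetical training-data sample and population benchmark shares by group.
sample = pd.DataFrame({"group": ["A"] * 700 + ["B"] * 300})
population_share = pd.Series({"A": 0.55, "B": 0.45})  # e.g., from census benchmarks

# Representation error: compare the sample's group shares with the benchmark.
sample_share = sample["group"].value_counts(normalize=True)
print(pd.concat({"sample": sample_share, "population": population_share}, axis=1))

# Simple post-stratification weights that up-weight under-represented groups.
weights = (population_share / sample_share).rename("weight")
sample = sample.merge(weights, left_on="group", right_index=True)
print(sample.groupby("group")["weight"].first())
```

The same logic extends to finer cross-classifications (e.g., group by region by age band) whenever reliable benchmarks exist.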

Disentangling the Relationship between Explainable AI and Fairness

Dialogue/Translation tutorial Luca Deck, University of Bayreuth & Fraunhofer FIT; Jakob Schoeffer, University of Groningen; Maria De-Arteaga, University of Texas at Austin; Niklas Kühl, University of Bayreuth & Fraunhofer FIT

Proponents of explainable artificial intelligence (XAI) commonly assume an implicit link between explanations and fairness. A plethora of XAI approaches and methods have been claimed to "promote" or even "ensure" fairness. However, the exact relationship often remains unclear. In this tutorial, we present a critical view of common claims on the alleged fairness benefits of XAI, as well as its drawbacks, anchored in a systematic review of 175 recent articles on the topic. By organizing the scattered debate into meaningful sub-debates around seven archetypal claims on the alleged fairness benefits of XAI, we provide an entry point for future discussions on the suitability and limitations of XAI for fairness. To foster more productive research, design, and application of XAI methods for fairness purposes, we provide guidelines for researchers and practitioners to be specific about what kind of XAI method is used, which fairness desideratum it addresses, how exactly it promotes fairness, and who is the stakeholder that benefits from XAI.
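
As one concrete illustration of why such claims warrant scrutiny, the following sketch (synthetic data, scikit-learn assumed available; not an example taken from the tutorial) shows a commonly cited XAI-for-fairness pattern, inspecting feature importances for a protected attribute, and why it can mislead when a correlated proxy carries the protected signal.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
protected = rng.integers(0, 2, n)           # protected attribute (0/1)
proxy = protected + rng.normal(0, 0.3, n)   # feature correlated with the attribute
skill = rng.normal(0, 1, n)                 # legitimate feature
X = np.column_stack([protected, proxy, skill])
y = (skill + proxy + rng.normal(0, 0.5, n) > 0.5).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["protected", "proxy", "skill"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
# The protected attribute itself may look unimportant while the proxy carries
# its signal: the kind of gap between "explanation" and fairness at issue here.
```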

Evaluating the Frontier - AI Model Lifecycle and Governance Insights

Dialogue tutorial Veronica Chatrath, Vector Institute; John Willes, Vector Institute; Deval Pandya, Vector Institute; Marcelo Lotif, Vector Institute

AI evaluation is often treated as a one-time, pre-deployment task, but this practice falls short of ensuring fairness, robustness, and real-world accountability. In this interactive tutorial, we reframe evaluation as a full-lifecycle process, spanning early-stage risk identification (e.g., InspectEvals), real-world auditing (e.g., FairSense), and continuous post-deployment monitoring. Using real-world frameworks and case studies—such as NYC Local Law 144 and the “Null Compliance” phenomenon—we will reveal the critical gaps in today’s compliance-driven governance. We will also share new insights from our State of Evaluation Report and showcase our Evaluation Leaderboard, which includes evaluations of whether AI agents can take harmful actions or exhibit dangerous capabilities. These assessments examine agents’ ability to pursue or enable harmful outcomes through reasoning, planning, and real-world-relevant behavior, with a focus on emergent risks beyond static output generation. Through live demonstrations and moderated discussions, participants will explore how structured evaluation methods can generate actionable insights for policymakers and regulators, supporting governance approaches that evolve alongside rapidly advancing AI systems. Attendees will leave equipped with practical strategies to embed evaluation throughout the AI model lifecycle and to bridge the gap between technical assessments and real-world policy applications. This session is designed for AI practitioners, researchers, and policymakers committed to advancing evidence-based, accountable AI governance. Join us to learn how you can lead the next frontier of AI evaluation. To see a breakdown of what this tutorial covers, visit the following link.
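
To illustrate what continuous post-deployment monitoring can look like in the simplest case, here is a hypothetical sketch (plain Python, made-up figures and threshold; not part of the presenters' frameworks) that recomputes a selection-rate disparity on each new batch of decisions and flags drift past a configured tolerance.

```python
from dataclasses import dataclass

@dataclass
class BatchStats:
    group_a_selected: int
    group_a_total: int
    group_b_selected: int
    group_b_total: int

def disparity_ratio(stats: BatchStats) -> float:
    """Ratio of selection rates between two groups (smaller rate over larger)."""
    rate_a = stats.group_a_selected / stats.group_a_total
    rate_b = stats.group_b_selected / stats.group_b_total
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def monitor(batches, threshold: float = 0.8):
    """Flag batches whose disparity ratio falls below the threshold (cf. the 4/5 rule)."""
    for i, stats in enumerate(batches):
        ratio = disparity_ratio(stats)
        status = "ALERT" if ratio < threshold else "ok"
        print(f"batch {i}: ratio={ratio:.2f} [{status}]")

# Hypothetical stream of post-deployment decision batches.
monitor([
    BatchStats(45, 100, 50, 100),
    BatchStats(30, 100, 55, 100),  # drifts below the 0.8 tolerance
])
```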

The Many Faces of Multiplicity in ML

Dialogue/Translation tutorial Prakhar Ganesh, McGill University and Mila; Shomik Jain, MIT Institute of Data, Systems & Society; Carol Long, Harvard University; Afaf Taik, Université de Montréal and Mila; Hsiang Hsu, JPMorgan Chase Global Technology Applied Research; Flavio Calmon, Harvard John A. Paulson School of Engineering and Applied Sciences; Ashia Wilson, MIT; Kathleen Creel, Northeastern University; Golnoosh Farnadi, McGill University and Mila

This tutorial aims to familiarize the FAccT community with the phenomenon of multiplicity, in which a set of “good” models exhibits different properties or outputs. We will begin with a demo illustrating how multiple equally accurate yet distinct models can produce conflicting predictions for individuals. Building on this demo, we will provide an overview of the growing literature on multiplicity and related concepts. In particular, we will examine the implications of multiplicity for fairness and explainability in algorithmic decision-making, highlighting different perspectives from the literature. We will conclude with a discussion on when and how to address multiplicity, inviting audience perspectives and identifying directions for future research.
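
The following minimal sketch (synthetic data, scikit-learn assumed available; a stand-in for the tutorial's demo rather than the demo itself) shows how two models with comparable overall accuracy can still issue conflicting predictions for individual cases.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic tabular task standing in for a real decision-making dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model_a = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
model_b = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=1).fit(X_tr, y_tr)

pred_a, pred_b = model_a.predict(X_te), model_b.predict(X_te)
print(f"accuracy A: {model_a.score(X_te, y_te):.3f}")
print(f"accuracy B: {model_b.score(X_te, y_te):.3f}")
# Individuals who would receive conflicting decisions depending on which
# "equally good" model happens to be deployed.
print(f"share of conflicting predictions: {(pred_a != pred_b).mean():.3f}")
```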

Data Access under the EU Digital Services Act

Practice tutorial Emilia Gómez, European Commission Joint Research Centre; Maria Eriksson, European Commission Joint Research Centre; João Vinagre, European Commission Joint Research Centre

With the Digital Services Act (DSA), which has been progressively implemented since 2022, the European Union has established a new and unprecedented set of rules for online platforms and search engines with more than 45 million active recipients within the EU. Designated as Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), such platforms are obliged to provide access to data for researchers who study how online platforms may give rise to systemic risks within the EU. According to the DSA, systemic risks may include, for example, the dissemination of illegal content, risks to fundamental human rights, negative effects on electoral processes, or increased prevalence of gender-based violence as a result of the use, design, or functioning of VLOPs and VLOSEs. Importantly, the DSA invites researchers from anywhere in the world to investigate such risks and enables them to make formal requests to access both publicly available and private data. This tutorial is aimed at researchers who are interested in submitting such data access requests and will provide them with the knowledge to do so by introducing the relevant definitions and provisions of the DSA and addressing the most important procedural steps to obtain data access.

Participatory & Periodic Red-Teaming of LLMs

Practice tutorial Manish Nagireddy, IBM Research; Michael Feffer, Carnegie Mellon University; Rebekah Tweed, All Tech Is Human; Ioana Baldini, Bloomberg

A mixture of tangible results and hype has led to public eagerness to use GenAI models in many ways. At the same time, there are various concerns surrounding these models, leading to burgeoning efforts to document and classify their negative impacts. Red-teaming, which typically takes the form of interactive probing, is commonly used as part of these efforts. To uncover potential risks most effectively via red-teaming, we argue that participatory approaches are paramount. In this tutorial, we engage FAccT participants, with their multidisciplinary expertise, to discover GenAI failures and ascertain the limitations of current GenAI safeguarding approaches. The first part of our tutorial is formatted as a brief educational module to review key red-teaming considerations. The second part of the tutorial is structured as an interactive and iterative exercise, where the audience joins in multiple rounds of discussion on AI risks and live red-teaming for such risks. We plan to collaboratively discover the benefits and drawbacks of conventional red-teaming approaches while discussing how to address those pitfalls. We organize the red-teaming exercises in a group setting first, to encourage communal dialogue, and then move to individual evaluations in subsequent iterations.

Public Interest Technology (PIT) Clinics as Applied Sociotechnical Pedagogy

Practice tutorial Lauren M. Chambers, Berkeley School of Information; Diag Davenport, Berkeley School of Information

What does it mean to incorporate social impact work into data and technology education? Public Interest Technology (PIT) clinics are a growing model of experiential technical education that empower students to engage directly with real-world data and technology challenges in service of the public interest. This tutorial explores PIT clinics as innovative learning spaces where, like in law school clinics, students develop professional and ethical judgment and contribute to practical initiatives beyond the university. Indeed, by centering collaboration with civil society partners, student agency, and ethical reflexivity, PIT clinics equip students to navigate complex sociotechnical landscapes while prioritizing meaningful social impact. We will draw from case studies of PIT clinics at multiple institutions, including a deep dive into a new course offered within the UC Berkeley School of Information. The tutorial will include discussions and peer learning regarding the structure of such clinics, their pitfalls, their definitions of success, and their futures. Attendees will leave with concrete strategies for making key pedagogical design choices and troubleshooting common challenges in experiential courses in public interest data work.

Auditing Social Media Platforms using SOAP

Practice tutorial Luka Bekavac, University of St. Gallen; Simon Mayer, University of St. Gallen

Despite their growing influence, social media platforms still operate largely as black boxes. This hands-on tutorial introduces participants to SOAP — the System for Observing and Analyzing Posts — a flexible, open-source framework designed to audit platform behavior through sockpuppet modeling. SOAP has already been used in journalistic investigations, academic research, and public outreach. In the context of growing regulatory frameworks like the EU Digital Services Act (DSA), SOAP enables critical audits of personalization algorithms, filter bubbles, and systemic risks on platforms like Instagram and TikTok. Participants will explore real-world audit scenarios, learn how to run their own platform investigations, and reflect on ethical and legal implications. From crafting research questions to building simulated user journeys, this tutorial empowers researchers, practitioners, and policymakers to conduct meaningful audits—even when direct platform data is limited. Whether you're new to social media auditing or looking to scale your research, SOAP provides the tools and guidance to get started.
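
To give a flavour of what a sockpuppet-style audit involves, here is a deliberately simplified sketch in which all class and function names are hypothetical and do not reflect SOAP's actual API: scripted personas repeatedly request a feed (stubbed out here) and the results are logged for later comparison across personas.

```python
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical persona and logging structures; NOT SOAP's actual interface.
@dataclass
class Persona:
    name: str
    interests: list
    feed_log: list = field(default_factory=list)

def fetch_feed(persona: Persona) -> list:
    """Placeholder for a real platform request made by an automated sockpuppet."""
    # In a real audit this would drive a logged-in browser or app session.
    return [f"post about {topic}" for topic in persona.interests]

def run_audit(personas, rounds: int = 3):
    for _ in range(rounds):
        for p in personas:
            p.feed_log.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "items": fetch_feed(p),
            })
    return personas

personas = run_audit([
    Persona("teen_fitness", ["fitness", "dieting"]),
    Persona("neutral_news", ["local news", "weather"]),
])
for p in personas:
    print(p.name, json.dumps(p.feed_log[-1], indent=2))
```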

Generative AI and Civil Society: Insights from ACLU

Practice tutorial Marissa Gerchick, American Civil Liberties Union; Brooke Madubuonwu, American Civil Liberties Union; Ranya Ahmed, American Civil Liberties Union; Ijeoma Mbamalu, American Civil Liberties Union

In this tutorial, leaders of the ACLU’s Technology Team will explain a leading civil rights organization’s values-based approach to generative AI and how research and resources from the FAccT community critically inform this approach. The session will describe the ACLU’s development of an interdisciplinary working group focused on generative AI, creation of principles and guidelines related to internal generative AI uses, and deployment of a scalable resource to evaluate vendors’ generative AI offerings against ACLU values. The session will provide practical insights for civil society organizations, other non-profits, and academic researchers, demonstrating how to build generative AI evaluation capabilities within non-profits and how to make sociotechnical AI research resources usable for non-profits.

Computational Argumentation for Fair and Explainable AI Decision-making

Dialogue/Translation tutorial Elfia Bezou-Vrakatseli, King’s College London; Madeleine Waller, King’s College London; Andreas Xydis, University of Lincoln

As AI systems increasingly influence high-stakes domains, from autonomous systems to healthcare and law, the need for transparency, interpretability, and accountability is more urgent than ever. This tutorial introduces Computational Argumentation as a powerful, interdisciplinary approach to meet these demands. Participants will gain foundational knowledge of argumentation theory, explore real-world applications (including a case study with the Dutch police), and learn how argumentative frameworks support human-AI joint reasoning. We will demonstrate how computational argumentation can be used to detect bias, explain decisions, and align AI systems with human values. Combining theory with hands-on tools, this interactive tutorial equips attendees with methods to critically assess both human and AI reasoning, and contribute to the development of fair, explainable, and trustworthy technologies.
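
For readers unfamiliar with the formalism, the following sketch (a hypothetical loan-decision example, not taken from the tutorial's materials) computes the grounded extension of a Dung-style abstract argumentation framework, i.e. the most sceptical set of collectively acceptable arguments.

```python
# A minimal sketch of abstract argumentation: arguments attack one another,
# and the grounded extension collects the arguments that can be defended.
def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(s):
        # Arguments all of whose attackers are themselves attacked by s.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s) for b in attackers[a])}

    s = set()
    while True:
        nxt = defended(s)
        if nxt == s:
            return s
        s = nxt

# Hypothetical arguments about an automated loan decision.
arguments = {"deny", "low_score_unreliable", "score_validated"}
attacks = {("low_score_unreliable", "deny"),
           ("score_validated", "low_score_unreliable")}
print(grounded_extension(arguments, attacks))
# Contains "deny" and "score_validated": the attack on "deny" is itself defeated.
```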