The goal of tutorials is to broaden the perspective of our interdisciplinary community, addressing practical, technical, policy, regulatory, ethical, or societal issues related to FAccT. We solicited three types of tutorials: Translation Tutorials to foster dialogue between disciplines, Implications Tutorials to describe effects of algorithmic systems in society, and Practice Tutorials focused on a specific tool or frameworks.

Creating Ethical Charters in AI Development

Practice tutorial Margaret Mitchell, Chief Ethics Scientist, Hugging Face; and Giada Pistilli, Principal Ethicist, Hugging Face

This tutorial outlines the process of creating ethical charters for AI development projects, focusing on moral value pluralism and collaborative, inclusive practices. We discuss why it’s important to identify common core values to guide projects and navigate tensions, drawing inspiration from Confucian ethical traditions that promote harmony. The session includes steps for determining relevant values, building consensus, and implementing these values within specific AI initiatives. We aim to promote values-informed AI development by utilizing previous work as case studies, such as the BigScience project and other ethical frameworks from major tech companies. Participants will learn techniques for integrating ethics into their work, ultimately influencing priorities and impacts in AI technology development. Further details can be found at hf.co/spaces/society-ethics/ethical-charter-tutorial.

Should I disclose my dataset? Legal and ethical considerations for researchers dealing with court documents

Raysa Benatti, University of Tübingen

Natural language processing techniques have helped domain experts solve problems in different domains. In the legal realm, digital availability of court documents increases possibilities for researchers, who can access them as a source for building datasets --- whose disclosure is aligned with good reproducibility practices in computational research. Large and digitized court systems, such as the Brazilian one, are prone to be explored in that sense. However, personal data protection laws impose restrictions on data exposure and state principles about which researchers should be mindful. Special caution must be taken in cases with human rights violations, such as gender discrimination, over which we elaborate as an example of interest. In this tutorial, we present legal and ethical considerations on the issue, as well as guidelines for researchers dealing with this kind of data and deciding whether --- and to which extent --- to disclose it.

Documenting AI’s Environmental Impact

Practice Tutorial Bran Knowles, Lancaster University; David Piorkowski, IBM T.J. Watson Research; John T. Richards, IBM T.J. Watson Research

This tutorial aims to facilitate a much needed conversation on AI’s environmental impacts, and to inspire reflection on how the kinds of documentation that has been developed to support accountability for the myriad social harms of interest to the FAccT community could be expanded to account for environmental harms. In this tutorial, attendees will gain a deeper understanding of the environmental impacts of AI and limitations of existing approaches to mitigating these impacts. They will also gain insights into emerging environmental accountability practice through realworld examples including the use of IBM’s AI FactSheets to capture energy savings from hardware-aware AI models and the calculation of energy consumption and carbon emissions for IBM’s Granite model.

EDIA Demo: Bias assessment for experts in discrimination, not in computer science

Guido Ivetta, Universidad Nacional de Córdoba, Argentina; Luciana Benotti, CONICET, Argentina; Nair Mazzeo, Fundación Via Libre; Hernán Maina, CONICET, Argentina; Laura Alonso Alemany, Universidad Nacional de Córdoba, Argentina; Beatriz Busaniche, Fundación Via Libre; Alexia Halvorsen, Fundación Via Libre

Methodologies for bias assessment usually require such technical skills that, by design, discrimination experts are left out. In this demo we present EDIA, a graphical interactive tool that facilitates that experts in discrimination explore social biases in word embeddings and language models. Experts can then characterize those biases so that their presence can be assessed more systematically, and actions can be planned to address them. They can work interactively to assess the effects of different characterizations of bias in a given word embedding or language model, which helps to specify informal intuitions in concrete resources for systematic testing. This 1.5 hour session will first showcase a demo of the tool, conveying the lessons learned from diverse hands-on workshops we have carried out. Then participants will be given time to use the tool themselves and try out some examples we present or others they may bring. Then, there will be an open discussion to clarify nuanced aspects of the problems and methods, share insights, and answer questions. Through this demo session, we aim to provide experts, specially non-technical people, with skills to assess biases in these pervasive language technology constructs, language models and word embeddings. EDIA was designed for non extractive stereotype data collection which is situated in a particular cultural context. Our approach is to lower technical barriers, so that actual discrimination experts in any culture can have intuitive access to bias metrics on language models in their language, interacting with models in a graphical way. We want to share it with the community so that they can reuse it for their cultural contexts. We will offer EDIA, which is an open source tool, to those researchers interested in using it for their own research. EDIA is easily adaptable to any language, as the specific language model and word embeddings to be explored are one of the parameters of the tool. Moreover, we hope to gather feedback that will enhance the tool's effectiveness and user-friendliness. The insights of participants will play a crucial role in refining EDIA and ensuring its relevance in diverse research contexts. Find the tool here

Translating Lessons from 100 Years of Safety Engineering to Responsible ML Development

Shalaleh Rismani, McGill University; Roel Dobbe, TU Delft; AJung Moon, McGill University

Identifying, assessing, and mitigating emerging harms from ML systems is challenging. In this tutorial, we reflect on the mature discipline of safety engineering and examine the frameworks, practices, and organizational culture needed to build safe systems. Using examples and case studies, we highlight and focus on relevant lessons, tools, and frameworks from system safety for responsible ML development. We conclude the tutorial with an open discussion and invitation to reflect on the efficacy of system safety approaches for examining ML-based products/services.

How to Anticipate Generative A.I.’s Impacts on Children’s Rights

Hye Jung Han, Human Rights Watch

Children are amongst the earliest adopters of generative A.I., though there has not yet been an examination of how the technology may impact them. Threats specific to children have already emerged, raising the stakes for anticipating and mitigating current and likely future harms. This tutorial will offer a crash course on children’s rights; how international human rights law entitles all children to specific protections for their safety, privacy, education, and identity, among others; and how these rights interact with the digital world. Participants will be guided through real-world examples to identify and assess how the development and use of generative A.I. may place multiple rights at risk, and how these impacts may be amplified by contextual factors that have historically resulted in groups of children facing discrimination and exclusion. The tutorial will conclude with an open discussion on how these risks might be mitigated.

LLM Agents: Prospects and Impacts

Seth Lazar, Daniel Kilov, Australian National University; Aaron Snoswell, Queensland University of Technology; Dylan Hadfield-Menell, MIT

Large Language Models like OpenAI’s GPT-4 and Google’sGemini are likely to have greater social impacts as the executive centre for complex systems that integrate additional tools for both learning about the world and acting on it.All of the leading AI research labs, and many upstarts, are now investing vast amounts of resources in making LLM agents work, releasing new models optimised for tool-use (e.g. from https://www.youtube.com/watch?v=zjkBMFhNj_g. Adept, Cohere), and software agents like Cognition's 'Devin', designed to go beyond co-piloting to independently undertaking complex coding tasks. This tutorial will offer the FAccT community a technical and philosophical introduction to LLM agents, explaining how they work, their limitations and their potential societal impact, then exploring that impact through the lens of moral and political philosophy.

Navigating Equity and Reflexive Practices in Gigwork Design---A Journey Mapping Experience

Translation/Dialogue Tutorial Alicia Boyd, New York University; Danielle Cummings, Department of Defense; Angie Zhang, University of Texas-Austin

How do we create ethical and equitable experiences on global platforms? How might UX designers and developers incorporate reflexive practices--a continuous self-evaluation of one's assumptions and biases--to mitigate assumptions and workers' experience? This tutorial will explore ways to build equitable user experiences using gig work platforms as a target use case. With the rise of gig work platforms, the informal digital economy has altered how algorithmic systems manage occasional workers; its questionable assumptions have spread worldwide. Concerns over autonomy, gamification, and worker privacy and safety are amplified as these practices expand worldwide. We will practice reflexive techniques within this context by implementing an equity-focused journey-mapping experience. Journey mapping allows designers to map out the customer experience and identify potential pain points at each step that could hinder the user experience. Using a ride-sharing scenario, participants will be guided through a custom journey map highlighting equitable considerations that can facilitate responsible user experience innovation. More information can be found at the Navigating Equity and Reflexive Practices in Gigwork Design website.

Risks of General-Purpose LLMs for Settling Newcomers in Canada

Implications Tutorial Isar Nejadgholi, National Research Council Canada; Maryam Molamohammadi, Mila - Quebec Artificial Intelligence Institute; Samir Bakhtawar, Immigration, Refugees and Citizenship Canada

While AI has been frequently applied in the context of immigration, most of these applications primarily focus on selection and screening processes, which have raised concerns due to their understudied reliability and high impact on people's quality of life. In this tutorial, we focus on Canada’s immigration settlement phase, highlighting that this stage of immigration is information-heavy, and its service providers are overburdened. With concrete examples, we highlight how new immigrants and refugees might become overly dependent on and vulnerable to the extensive use of generic chatbots such as ChatGPT and raise awareness about the challenges and implications of over-reliance on such technologies. Based on the demonstrated evidence, we suggest that the settlement sector is a prime candidate for the adoption of human-centered AI applications, yet it remains under-explored in AI research. The tutorial provides recommendations and guidelines for further research on the development of AI literacy programs and the participatory design of AI tools for the newcomer community in Canada.

Responsible AI in the Generative Era: Science and Practice

Alicia Sagae, Amazon AWS AI/ML; Nil-Jana Akpinar, Amazon AWS AI/ML; Riccardo Fogliato, Amazon AWS AI/ML; Mia Mayer, Amazon AWS AI/ML; Michael Kearns, University of Pennsylvania &Amazon AWS AI/ML

Generative AI brings additional nuance to the challenges of Responsible AI (RAI). These challenges include some that were common before generative AI, such as bias and explainability, and some that are unique to generative models, including hallucination, toxicity, and intellectual property protection. This tutorial is structured around hands-on exercises that let participants engage with large language models and with each other, to explore specific strategies that they can apply in their own RAI work. We will compare the challenges of traditional vs. generative RAI, and align those challenges with best practices for (and by) industry practitioners to assess and minimize RAI risk. Together with participants, we will test RAI guardrails in a jailbreaking game and conduct a structured risk assessment for a realistic use case, automatic generation of product descriptions.

Not Just Metrics: Qualitative Evaluations for Geo-Cultural Representation in Generative AI Technologies

Rida Qadri, Google Research ; Sunipa Dev, Google Research ; Remi Denton, Google Research; Mark Diaz, Google Research; Aida Davani, Google Research

The Watchdog and the Government Exploring Transparency Models for Public Sector Algorithms

Translation Tutorial Gabriel Geiger (Lighthouse Reports), Justin-Casimir Braun (Lighthouse Reports), Soizic Penicaud (Independent Researcher and former member of Etalab, French Department for Data Policy), Romina Garrido (Deputy Director, GobLab UAI, Chile), dr. Anne Schuth (AI Validation Team, Dutch Ministry of the Interior and Kingdom Relations)

A growing club of nations is turning to predictive technology to streamline public services. Machine learning algorithms increasingly make life-changing decisions about people’s lives in criminal justice, health and welfare systems. Yet governments often keep these systems under lock and key. Public watchdogs have struggled to keep up with this growing trend and obtain access to technical materials such as code, model files and training data that would allow them to test claims of fairness and hold governments accountable for faulty systems. In the past few years, local and national governments — often in partnership with academia or civil society — have begun putting in place regulation and governance mechanisms to increase the transparency of decision-making algorithms. These changes have the potential to increase public trust, anticipate and correct the harms caused by these systems, and reformulate traditionally antagonistic relationships between watchdogs and governments. But only if they manage to overcome tangled and at times even contradictory priorities between these parties. This tutorial will bring together watchdogs and government agencies to discuss these hurdles and possible solutions. A roster of investigative journalists, government representatives and civil society from Chile, the Netherlands, and France will present real-world transparency models ranging from algorithm registers to regulatory changes to synthetic data access regimes.

Environmental Justice Beyond Carbon and Towards Consent

Translation/Dialogue Tutorial Tamara Kneese, Data & Society Research Institute, USA; Lori Regattieri, Pan-Amazonian Technopolitical Coalition, Brazil; Bogdana Rakova, Speculative Friction Initiative, USA; Ray Alves, Amazon Environmental Research Institute, Brazil; Martha Fellows, Amazon Environmental Research Institute, Brazil; Valderli Piontekowski, Amazon Environmental Research Institute, Brazil

The environmental and climate impacts of AI extend beyond decarbonization, necessitating a broader perspective on environmental justice. While companies and countries report GHG emissions and ESG metrics to meet Net Zero and UN Sustainable Development goals, this tutorial highlights unfair labor practices in mining and manufacturing, pollution affecting land, air, and water, the resource demands of data centers, and downstream health impacts like cancer and respiratory illnesses. These issues are rooted in long-standing histories of colonialism and extraction. Given FAccT's location in Rio de Janeiro this year, we emphasize the importance of Brazilian perspectives, particularly through Brazilian organizations employing community-led and Indigenous participatory methods. We spotlight the System for Observation and Monitoring of the Indigenous Amazon (SOMAI), an online platform developed by the Amazon Environmental Research Institute (IPAM). SOMAI aims to strengthen the role of Indigenous territories in maintaining climate balance. This tutorial explores data trust for AI environmental modeling through principles of Free, Prior, and Informed Consent (FPIC). IPAM’s initiative serves as a model for integrating advanced data stewardship with FPIC, enhancing Indigenous autonomy over land management in the Brazilian Amazon Basin. Our discussion will address the broader implications of AI, data stewardship, and consent in creating a collaborative, trustworthy framework for environmental justice beyond carbon footprinting. For more information, visit the SOMAI platform at https://somai.org.br/.

Developing Gen AI in the Global South; debating five practical cases and their challenges

Rachel Adams - African Observatory on Responsible AI; Clemence Kyara - Code for Africa; Fola Adeleke . - University of the Witwatersrand; Aarushi Gupta. Digital Futures Lab; Christian Perrone - ITS Rio

The wide expansion and popularity of Generative Artificial Intelligence technologies (GenAIs) has had a wide impact in many different sectors worldwide, from education to health. The promise of these technologies is often overshadowed by significant challenges specific from local realities in the Global South, where limitations in data accessibility, computational infrastructure, and clear governance frameworks frequently curtail the research, development and implementation of AI systems This tutorial is based on a wide project that has supported more thanfifty specific initiatives throughout the Global South aiming to debate and foster responsible development of these technologies. This tutorial will discuss in practice five specific cases highlighting challenges around development of the technologies, testing different Large Language Models (LLMs) and applying to particular specific contexts. We will explore five practical cases of Gen AI applications, selected based on their diversity of application areas, geographical representation, and the distinct challenges they illustrate. The case studies will encompass sectors such as healthcare, educational content generation methodologies, language processing for underrepresented languages, and ethical AI development frameworks.

Algorithm Auditing and Generative AI

Danaë Metaxa, UPenn; Leon Yin, Bloomberg News; Sarah Cen, MIT; Leonardo Nicoletti, Bloomberg News; Piotr Sapieżyński, Northeastern

With industry's rapid adoption of generative AI tools, journalists and researchers are developing accountability techniques to measure these tools’ outputs and understand their potential social impacts. This panel will bring together experts from academia and journalism to discuss two sides of algorithm audits and generative AI: first, audits of generative AI technologies themselves, and second, uses of generative AI to help conduct audits of other technologies.

How to Conduct Human Rights Assessments of AI: Methodology and Comparison to Other Assessment Frameworks

Lindsey Andersen, BSR; Hannah Darnton, BSR; Betsy Popken, UC Berkeley

Human rights assessments (HRAs) are a well-established approach to assessing risks to people and society. They are a core part of the responsibilities of companies under the UN Guiding Principles on Business and Human Rights, and thus companies have been conducting them for years—including for AI products and services. However, there are no examples of HRAs for AI that are published in full, meaning the knowledge of how to conduct them and what they look like in practice largely lives with the companies and consultants who conduct them. This tutorial will teach participants about what HRAs are and how they are useful for responsible AI practitioners. It will include a discussion of the benefits and limitations of HRAs, how they are similar or different to other types of assessments and audits, and how they can be helpfully integrated into common responsible AI practices. The tutorial will also include a step-by-step walkthrough of HRA methodology, which participants will practice applying using an example of a hypothetical AI product.

What is Sociotechnical AI Safety? A participatory workshop about defining and expanding responses to sociotechnical risk in AI Safety

Dialogue/Implications Tutorial Andrew Smart, Google Research; Shazeda Ahmed, UCLA; Jake Metcalf, Data & Society; Atoosa Kasirzadeh, CMU, Google Research; Luca Belli, UC Berkeley; Shalaleh Rismani, McGill; Roel Dobbe, TU Delft; Abbie Jacobs, U Michigan; Joshua A. Kroll, NPS; Donald Martin Jr,, Google Research; Renee Shelby, Google Research; Heidy Khlaaf, BSI; Genevieve Smith, UC Berkeley

Our goal is to invite discussion and critique of the currently dominant ideas around AI safety, and to shed light on alternative research. The purpose of this Tutorial session is to give space to well established research fields such as systems safety engineering, sociotechnical work in labor studies, that have received less attention than work on alignment or the control of existential risks. At the same time, the session aims to critique and expand the current understanding of AI Safety in order to offer a path forward for research and practice that centers equity, participatory approaches, expanding the kinds of expertise that are relevant, and community inclusion. This research program focuses on current, actual, societal harms from the development and deployment of AI systems, and adapts safety and systems science and engineering approaches to the problem of mitigating risk from these systems, relating existing and emerging technical tools to sociotechnical risks in structured and scientific ways. These approaches are in turn informed by critical social science research so that a synthesis between societal understanding and organizational/technical risk mitigation actually reduces harm to society. Finally, this research program sees the problem of AI Safety not as a technical or mathematical problem, but rather as a social, organizational, political and cultural problem in guiding the development and use of technology. This problem takes on particular urgency as policy responses such as the creation of the U.S. AI Safety Institute and the passage of the EU AI Act demand operationalizing AI Safety in ways that capture sociotechnical risks.