«Critiquing and Rethinking Fairness, Accountability, and Transparency» (CRAFT) is a dedicated track to build bridges from the conference to people who contend with computing systems from many different angles, from journalism and organizing, art and education, to advocacy, governance and beyond. Critique, reflection, and power are at its core, and it provides a unique opportunity within an academic conference to center the impact of technology on communities and the policy implications that arise from that impact.
Vryan Feliciano, Ilana Nguyen, Jaemarie Solyst, Evan Shieh
Coordinator Contact: evan.shieh@youngdatascientists.org
Recently, the FAccT community has called for responsible AI efforts to include a broader range of voices. In this CRAFT session, we center the perspectives of minoritized youth on AI justice, given their unique potential to contribute meaningful insights on the AI systems impacting them and their communities. Employing decolonial practices of reverse tutelage and bottom-up knowledge production, we feature peer-reviewed contributions of undergraduate and high school researchers hailing from minoritized, Title I urban communities in the United States that traditionally lack pathways to the technology industry. Our session invites the FAccT community to learn from and engage with diverse youth emerging as key contributors to critical AI scholarship, forming new connections and collectively participating in activities to advance sociotechnical AI justice.
Amos Toh, Emile Ayoub
Coordinator Contact: toha@brennan.law.nyu.edu
The business of war is increasingly driven by narratives and visions of AI’s promise. In the United States, the Department of Defense, the government’s largest agency and its biggest spender on AI, has over 800 AI projects – and that’s just those that are unclassified. Some of the most high-profile projects, from the system to track and identify targets under Project Maven, to the initiative to mass-produce autonomous drones and munitions, are being developed by venture-capital-backed firms that have positioned themselves as AI disruptors. These projects are also spreading to the border and throughout law enforcement, driving a broader militarization of society. Over the last year, the authors of this proposal have collaborated with the investigative nonprofit Tech Inquiry to trace how these firms are growing their share of the defense budget, and the strategies and networks of influence they exercise to shape warfare in their image.
During this session, we will share our preliminary findings on the key players distinguishing themselves in the crowded field of defense tech, the vulnerabilities and dependencies they are creating in military supply chains, and how the pressures of their business model escalate risks to human rights. We will also explain our fact-finding process: how we, as lawyers and policy analysts, worked with data scientists to make sense of the opaque and arcane world of defense contracting and navigated the challenges and limitations of open-source research in this space.
Christoph Becker, Eshta Bhardwaj
Coordinator Contact: eshta.bhardwaj@mail.utoronto.ca
This interactive CRAFT workshop session uses satire and creative expression to facilitate and strengthen collective organizing around algorithmic harms.
Some of the organizers are currently coordinating a distributed satirical campaign, Savethe.AI, which uses humour to connect individuals’ and communities’ needs for Earth’s resources with the evidence of just how much of these resources are now being claimed for the data centres running generative AI. By connecting our thirst with a data centre’s cooling system, the campaign addresses psychological distance: it is very hard to relate personally to the distant data centres where generative AI models do their work.
In this CRAFT interactive workshop session, we will build on the satirical angle and facilitate a creative, humorous engagement with questions of individual agency, community experiences, and collective resistance. We will collectively explore the ecological and social impacts of data centres and platforms that are already felt by diverse communities. These impacts vary greatly in place and time and are documented primarily through the tireless work of organizers, civil society, and academics, while being obfuscated by the companies creating the situation.
Priya Goswami
Coordinator Contact: priya@mozillafoundation.org
What does the digitalisation drive of the world's most populous country look like? ‘Digital Bharat’ is an exhibition that brings you face-to-face with the lived realities of rural India as people adapt to an unprecedented drive of digitalisation in the age of data mining for emergent technologies. What is it like to be a rural Indian shouldering Digital Public Infrastructures (DPIs) that affect health and livelihood – the two fundamental rights of people? In India, there is a saying: there is ‘India’, that of the urban and upwardly mobile, and then there is ‘Bharat’, the Hindi name for India – the ‘Bharat’ of tier 3, 4, and 5 cities, towns, and villages, and its denizens.
‘Digital Bharat’ explores this impact in rural India. To discover it, I followed the one-million-strong ASHA workers – female on-ground health activists, where ASHA stands for Accredited Social Health Activist – and simultaneously examined a surveillance app that monitors the ‘business and productivity’ of daily-wage labourers, who earn as little as $3.50 a day for fewer than 100 days a year. My questions have led me to rural Rajasthan, 500 km from the capital, and rural Haryana, 180 km away. With health data and the surveillance of daily-wage labourers in these two states as its fulcrum, and within a radius of hundreds of miles, my search has yielded diverse, heterogeneous, even bizarre stories directly from the ground and the community, especially from rural women. These stories of the personal struggles of rural people are seldom found on any online platform, while, dangerously and increasingly, their data is live online on public infrastructures, without their consent or comprehension of the ramifications.
Tania Duarte, Elizabeth Remfry
Coordinator Contact: lizzie.remfry@weandai.org
Typical models of AI literacy and participatory research often reinforce the exclusion of under-represented voices – particularly learning disabled people, older adults, children, and those with differing literacy abilities. If we truly want to build AI technologies that work for, and are accountable to, all of society, then we need better methods for including and enabling a diverse range of voices in the development of, and public discourse on, AI.
Inspired by Memory Boxes and Sensory Stories, We and AI, a non-profit volunteer organisation, are designing “Sensory AI Boxes”: tangible and tactile boxes containing sensory materials to spark conversation. They allow participants to engage in a non-intimidating way, prompting critical thinking and discussion. We draw upon existing research on the experiential quality of art to support people’s AI literacy by countering the opaque and perceived intangible quality of AI technologies.
We invite attendees to come and interact with the Sensory AI Boxes, bringing their own lived experiences, to help us shape further iterations and provide critical feedback on this approach.
Nina da Hora
Coordinator Contact: ninadhoraa@gmail.com
The session titled "Beyond the Face: Exploring the Impact of Facial Geometric Variation on Recognition Algorithms across Ethnicities" delves into the intersection of technology, ethnicity, and the social implications inherent in Facial Recognition Technology (FRT). The session will highlight cutting-edge research that identifies and analyzes the challenges posed by ethnic biases in facial recognition systems. Through a detailed examination of deep learning algorithms and their interaction with geometric features across different ethnic groups, it aims to unfold the layers of ethical and technical considerations that must be navigated in the development and application of FRT.
Participants will explore the nuances of how automated systems interpret facial features differently across ethnicities, potentially leading to biases. The central questions addressed will include: How do geometric variations in facial features influence the accuracy and bias of recognition systems? What are the broader societal implications of these biases? And how can technology be developed to be more equitable? The session structure will combine a presentation of empirical findings with interactive discussions, encouraging a participatory and reflective examination of the material.
Freyja van den Boom
Coordinator Contact: s5076626@bournemouth.ac.uk
This curated AI/Smart City walk, organized as part of the FAccT Conference, will critically engage participants with the past, present, and speculative futures of smart city infrastructures in Athens. Through a series of staged interventions and interactive provocations, the walk will challenge dominant narratives surrounding AI-driven urban governance, surveillance, and digital infrastructures. By juxtaposing real-world smart city applications with speculative futures, the event will encourage participants to question the trade-offs often presented in AI policy—such as efficiency versus privacy or security versus democratic freedoms. Drawing on speculative design, futures thinking, and causal layered analysis, this research-led intervention will examine how AI shapes urban life and how alternative futures can be envisioned and contested.
Malavika Jayaram, Nayantara Ranganathan
Coordinator Contact: mjayaram@digitalasiahub.org
Indices that chart AI trajectories and states of readiness are a powerful lever to enlist countries in the race to build and deploy “AI”. Endorsed by international development agencies and adopted widely, indices are compelling tools of influence that measure and rank countries on their technical and social AI infrastructure, implicitly getting majority world countries to co-sign an agenda where their role begins as, and often remains, marginal.
We want to question the idea and form of the AI index – what it includes and what it excludes. We want participants to visualize that The Map is NOT the Territory. We imagine the CRAFTing circle as an embodied and present moment centered on community and care, working towards creating individual or communal objects that upend dominant narratives.
Alejandra Judith Josiowicz, Anaelia Ovalle, Beatriz Busaniche, Diana Mosquera, Genoveva Vargas-Solar, Ivana Feldfeber, Ivanna Martínez Polo, Laura Alonso Alemany, Liliana Zaragoza Cano, Luciana Benotti, Patricia Peña, Susana Cadena-Vela, Virginia Brussa
Coordinator Contact: lauraalonsoalemany@unc.edu.ar
Funding remains a critical challenge for independent, feminist, and Latin American initiatives in technology, artificial intelligence, and data governance. Traditional funding structures are often opaque, exclusionary, and shaped by the priorities of institutions in the Global North – a dynamic that has deepened recently as agencies and governments drastically reduce funding for social justice initiatives. These dynamics limit access to resources for initiatives outside mainstream academic and industrial spaces, despite their potential impact on local communities. This session seeks to surface these barriers and to work on ethical, sustainable, and inclusive strategies for securing and maintaining critical resources for project development.
We will collectively reflect on past experiences with funding in Latin America – successes and failures. We will identify common obstacles, such as the difficulty of obtaining financial support without the backing of well-established institutions, the exclusionary nature of many grant evaluation processes, and the structural barriers posed by language, engagement speed, requirements that shift with political contexts, and bureaucratic constraints. We will also examine the ethical dilemmas of accepting funding from organizations that may not align with feminist values and discuss under what conditions such resources might be used without compromising core principles.

Beyond monetary funding, this session will explore alternative ways to sustain projects, such as grassroots networks, cooperative governance models, and knowledge-sharing practices. Many Latin American initiatives already operate through collaborative support structures rather than relying solely on institutional funding, yet these approaches are often underappreciated in mainstream funding discussions. We will highlight and build upon these practices, discussing how to make them more visible and respected.
Kimi Wenzel, Avijit Ghosh, Sachin Pendse, Stephanie Milani, Ajeet Singh, Laura Dabbish, Geoff Kaufman
Coordinator Contact: kwenzel@andrew.cmu.edu
Artificial intelligence (AI) has become an invisible curator of human identity, silently shaping how we are seen and represented. When we encounter ourselves within AI-mediated representations – whether in social media filters, automated photo organizing, or generated images – we confront a version of ourselves interpreted and reconstructed by machine learning systems. For many individuals, the AI-mediated interpretations fail to capture (or actively misrepresent) their self-image, identity, and lived experience.
Emerging generative AI systems thus resemble a distorted mirror, allowing individuals to directly see how they are understood by algorithmic systems. Through engaging with images produced by generative AI systems, individuals can reflect on how they perceive themselves and their communities against how they are perceived by data and society. In non-AI spaces, such as viewing one's representation in media, this process of reflection can support healthy self-concept and self-acceptance, but can also trigger dysmorphia if gaps between perception and self-image are too large. Understanding the experience and impact of misrepresentation through generative AI systems is a critical first step toward mitigating this source of harm.
By the end of the workshop, participants will have a greater scientific understanding of AI’s impacts, as well as an enhanced sense of community, and a clearer direction for future projects. This workshop is not just about technology—it is about being fully *seen* and the shifting nature of identity in an era where recognition is mediated by machines.
Roya Pakzad
Coordinator Contact: roya@mozillafoundation.org
Humanitarian and government agencies are turning to (Gen)AI-enabled chatbots to deliver information and services to refugees and migrants – yet these tools are often developed and evaluated through an English-centric lens, overlooking the contextual, linguistic, cultural, and legal nuances important for displaced communities. Our session, “Centering Refugee and Migrant Voices: Human Rights-Based AI Evaluation Across Languages,” brings together a diverse panel – including a UNHCR specialist with lived Afghan refugee experience, legal and humanitarian experts, and an AI evaluation expert. Currently, I am collaborating with them to develop a multilingual, human rights-centered AI evaluation framework as part of my ongoing Mozilla Foundation Fellowship. Although still in progress, this framework serves as an emerging tool to benchmark GenAI chatbot performance in multiple languages (e.g., English, Persian, Dari, Arabic, Spanish), using criteria that go beyond simple accuracy – addressing empathy, trustworthiness, data privacy, and anti-discrimination.
Rafael Grohmann
Coordinator Contact: rafael.grohmann@utoronto.ca
Around the world, workers are trying to build alternatives to large platform and technology companies in movements called worker-owned technologies, platform cooperativism, and digital solidarity economies. In Latin America, these movements have gained particular traction in recent years, mobilizing and organizing workers to come together and build, without a technocentric perspective, technologies rooted in community values.
This community-oriented session will share collective learnings from workers in Brazil and Argentina who are working in collectives, cooperatives, and social movements building technologies from below. We will share findings from the community-oriented research Worker-Owned Intersectional Platforms (WOIP) and will invite participants to reimagine alternative digital futures from and with Latin America, considering issues of race, class, and gender.
What does it mean to understand technologies from the perspectives of racialized and gendered Latin American workers? How can thinking together with Latin American workers who are struggling to build their own technologies help rethink the ways in which computing and AI are understood – a computing otherwise?
We will share how, together, we are theorizing and practicing concepts such as popular digital sovereignty, social technology, federation and intercooperation in the technology sector, and the queering of technologies, as well as the importance of recovering Latin America's rich history of tech and labor, including critical pedagogies for both.
Misha Salahshoor
Coordinator Contact: misha.salahshoor@iaaa-algorithmicauditors.org
As AI systems rapidly proliferate, algorithmic auditing has emerged as a crucial practice for identifying and mitigating potential harms. However, despite its growing importance, algorithmic auditing faces significant resistance from powerful technology companies and stakeholders who prioritise rapid innovation and market growth over robust safety measures and accountability. These actors often have disproportionate influence over AI governance frameworks, creating a landscape where third-party audits are portrayed as barriers to innovation rather than essential safeguards for public welfare.
Despite growing recognition of the importance of auditing, there remains a significant disconnect between auditing professionals and the communities they aim to serve. This disconnect perpetuates power asymmetries in technology governance and limits the effectiveness of auditing as a tool for algorithmic justice. As the field of algorithmic auditing continues to develop, we have a crucial opportunity to shape its future direction in ways that centre community needs and experiences from the outset.
The field currently faces several interrelated challenges: insufficient pathways for community voices to inform auditing priorities and methodologies; limited frameworks for translating community concerns into technical auditing approaches; structural barriers to accessibility and participation; and resistance from technology companies to meaningful external oversight. As AI systems increasingly shape access to resources, opportunities, and rights, we must urgently develop auditing approaches that centre the experiences and needs of marginalised communities and establish independent audits as non-negotiable components of responsible AI development.
This interactive session brings together algorithmic auditors, policy experts, researchers, and advocates to strengthen the role of auditing as a practice of accountability and empowerment. The International Association of Algorithmic Auditors (IAAA) will facilitate a dialogue examining how we can build stronger connections between auditing professionals and the communities most affected by algorithmic systems, ensuring that auditing practices serve as effective tools for challenging harmful technologies. Recognising the barriers to in-person participation for many affected communities at FAccT, we will incorporate their perspectives through several approaches, including pre-recorded video testimonials from community members sharing experiences with algorithmic harms, and presentation of case studies co-created with affected communities prior to the conference. These methods ensure that community voices remain central to our discussions even when physical presence isn't possible.
Georgia Panagiotidou, Ben Snaith, Tamara Kneese
Coordinator Contact: georgia.panagiotidou@kcl.ac.uk
The rapid adoption of machine learning (ML) and artificial intelligence (AI) has come with a significant environmental burden due to increased resource usage, yet the material and environmental impacts of AI are no longer a surprise, especially to the FAccT community. This workshop aims to shift from awareness and transparency to empowerment and action, with the understanding that such action will need to be both individual and collective. The workshop will interweave this individual-versus-collective tension through a round-table discussion and a hands-on exercise that gather researchers and activists to discuss and inform each other on methods and tactics for supporting environmental action. Specifically, the focus will be on tools and tactics for supporting agency and data citizenship, taking two perspectives: (1) ML developers and the measurement and eco-feedback tools being deployed and developed for them, and (2) the local activist and creative art communities around data centres and their actions to invoke change. Regarding AI/ML developers, the co-organisers’ research has shown that even with eco-feedback measurement tools, developers perceive that, as individuals, their impact is not only minimal but also inevitable – essentially lacking agency for action. At the same time, activists, like the co-facilitators from Spain and Greece, are actively engaging local communities to call attention to the impacts of data centres, in the process also developing local data infrastructure literacy through educational outreach. From eco-feedback measurement tools to art installations and hacks for mapping local data centres, there are many ways to increase awareness of and action against the environmental impacts of the various lifecycle stages of AI. Nevertheless, these tools and tactics are currently siloed across communities.
This workshop will start with a structured roundtable discussion on existing tools and practices to bring this knowledge together and will continue with a hands-on exercise on tracking and mapping local data centres based on the “Getting into Fights with Data Centres” zine by Anne Pasek.
Armanda Lewis
Coordinator Contact: al861@nyu.edu
Throughout history, artists have leveraged human technologies, including writing, traditional crafts, and algorithmic tools, to expose injustices and prompt change in what we make, how we produce, and how we think. One of the latest innovations – generative AI – comes with both promise and cautions, given its ability to model phenomena and generate multimedia content based on them. Systems running on generative AI can learn and produce audio, image, and video art in various styles. Artists’ reactions to generative AI are varied. Positively, generative AI has the potential to extend the human capacity for creativity, handle low-level tasks, and increase an artist’s productivity. Adversely, generative AI could contribute to exploitative artistic practices, the proliferation of existing inequities, and, some say, the end of human creativity itself. It is within this evolving landscape that we propose a CRAFT session on artistic engagement with generative AI. Specifically, we put forward an interactive panel with working artists who thematically and/or practically integrate generative AI into their art. Our group of artists – representing various continents, artistic styles, and demographic backgrounds – will discuss how they use generative AI, the tensions they find in their artistic views and processes when leveraging it, and the ways in which art can bring about a more equitable generative AI. They will showcase their art and prompt audience members to participate in a discussion on how artistic practices might inform how we develop AI tools and technologies. As a collective, we will discuss what to make of art and what to make of technology as tools gain more capacity to create.
This CRAFT session complements a parallel research study funded by FAccT that includes a short scoping review of the integration of artistic practices and emerging generative AI tools, a qualitative study, and a workshop. Though much scholarship exists on generative art, less explored is how artists view AI as a technical system and how artistic processes may alter AI technical processes. That project also summarizes findings from ongoing qualitative interviews with practicing artists who engage with generative AI, through traditional and/or emergent artistic practices, on the affordances of generative AI, concerns about these tools at scale, and techniques for countering inequities. Our proposed CRAFT session builds on this research. A subset of the artists interviewed will engage in a real-time discussion of how their artistic practice uses generative AI, and how they both leverage and resist the challenges of scale. We have a diverse set of artists from across the globe and will also touch on issues of equity and access, and on how art voices human-centered interests.
Alex Hanna, Clarissa Redwine, Linda Huber, Nathan Kim, Cella Sum, JS Tan, Justine Zhang, Gena Kim, Sophie Song, Shreya Chowdhary, Bria Massey, Francesca Lameiro
Coordinator Contact: clarissa.redwine@gmail.com
AI’s impact on the labor process raises new questions about control, autonomy, and resistance. The goal of our workshop is to identify concrete possibilities for labor resistance by examining, with our participants and panelists, how AI technologies restructure work (and management), intensify exploitation, and shift the dynamics of control in everyday workplaces. This session will be a joint collaboration between DAIR, ACM Strike School, and Collective Action in Tech.
This workshop will begin with a panel discussion exploring the various ways in which AI has changed the labor process – from algorithmic management and surveillance to automation and alienation. The four panelists, who will include rank-and-file workers, organizers, and labor experts (e.g., from the Writers Guild of America, Data Labors Association, and National Nurses United), will draw from their own experiences to highlight examples of how workers have organized to resist AI in the workplace and AI-empowered management practices. Through contracts and bargaining fights, worker-led mutual aid efforts, grievances and unfair labor practice (ULP) charges, and a variety of other methods, workers can contest the harms of AI alongside other workplace issues, including compensation, working schedules, and technologies of surveillance and management.
In the second part of our workshop, we will translate insights and strategies from the panel conversation to participants’ own workplace conditions through our AI Workers’ Inquiry exercise. We follow Marx’s 1880 “workers’ inquiry” – a set of questions about workers’ working conditions, experiences of exploitation, their role in the value-production process, and the opportunities for collective response through workers’ councils and strikes. Marx’s survey has subsequently been a point of inspiration for workers to facilitate collective sense-making about their conditions as well as to perform the spadework for resistance and ways to build power in their own workplaces. For example, the Distributed AI Research Institute has convened data workers from around the world to conduct their own Data Workers Inquiry.
Shiran Dudy, Jan Simson, Yanan Long
Coordinator Contact: ylong@uchicago.edu
As a relatively new venue, FAccT has served as a platform for activists and scholars to critically interrogate emerging AI/ML technologies, bringing together academics, civil society, and government representatives from various disciplines to address the broader societal implications of the deployed or proposed technologies. To assess how participants perceive FAccT’s effectiveness, we propose a participatory session where attendees reflect on their experiences and expectations. This session will follow a bottom-up approach, allowing participants to raise key questions for the community to answer, facilitated through methods of participatory design.
Prior reflections on FAccT have examined its limitations and opportunities through an analysis of published research. Building on this, our CRAFT session aims to foster direct participation and reflexive engagement, encouraging attendees to critically examine FAccT beyond its scholarly output. This may include reflecting on the experience of attending the conference, its role as a catalyst for activism, its societal impact, as well as identifying pressing issues and articulating long-term visions for FAccT.