“Critiquing and Rethinking Fairness, Accountability, and Transparency” (CRAFT) is a dedicated track that builds bridges from the conference to people who contend with computing systems from many different angles: from journalism and organizing, to art and education, to advocacy, governance, and beyond. Critique, reflection, and power are at its core, and it provides a unique opportunity within an academic conference to center the impact of technology on communities and the policy implications that arise from that impact.
Vryan Feliciano, Ilana Nguyen, Jaemarie Solyst, Evan Shieh
Coordinator Contact: evan.shieh@youngdatascientists.org
Recently, the FAccT community has called for responsible AI efforts to include a broader range of voices. In this CRAFT session, we center the perspectives of minoritized youth on AI justice due to their unique potential to contribute meaningful insights on AI systems impacting themselves and their communities. Employing decolonial practices of reverse tutelage and bottom-up knowledge production, we feature peer-reviewed contributions of undergraduate and high school researchers hailing from minoritized, Title I urban communities in the United States that traditionally lack pathways to the technology industry. Our session invites the FAccT community to learn from and engage with diverse youth emerging as key contributors to critical AI scholarship, forming new connections and collectively participating in activities to advance sociotechnical AI justice.
Amos Toh, Emile Ayoub, Jack Poulson
Coordinator Contact: toha@brennan.law.nyu.edu
The business of war is increasingly driven by narratives and visions of AI’s promise. In the United States, the Department of Defense, the government’s largest agency and its biggest spender on AI, has over 800 AI projects – and those are just the unclassified ones. Some of the most high-profile projects, from the system to track and identify targets under Project Maven to the initiative to mass-produce autonomous drones and munitions, are being developed by venture-capital-backed firms that have positioned themselves as AI disruptors. These projects are also spreading to the border and throughout law enforcement, driving a broader militarization of society. Over the last year, the authors of this proposal have collaborated with the investigative nonprofit Tech Inquiry to trace how these firms are growing their share of the defense budget, and the strategies and networks of influence they exercise to shape warfare in their image.
During this session, we will share our preliminary findings on the key players distinguishing themselves in the crowded field of defense tech, the vulnerabilities and dependencies they are creating in military supply chains, and how the pressures of their business model escalate risks to human rights. We will also explain our fact-finding process: how we, as lawyers and policy analysts, worked with data scientists to make sense of the opaque and arcane world of defense contracting, and navigated the challenges and limitations of open-source research in this space.
Christoph Becker, Eshta Bhardwaj, Rosi Leonard
Coordinator Contact: eshta.bhardwaj@mail.utoronto.ca
This interactive CRAFT workshop session uses satire and creative expression to facilitate and strengthen collective organizing around the environmental damages caused by AI systems.
Many find it hard to personally relate to the distant data centers where generative AI models do their work. These impacts vary greatly in place and time and are documented primarily by the tireless work of organizers, civil society, and academics, while being obfuscated by the companies creating the situation. In this session, we use humour to connect communities’ needs for Earth’s resources with the evidence of just how much of these resources are now being claimed for the data centers running generative AI.
We connect individual agency, community experiences, and collective resistance by exploring the impacts of data centers and platforms that are already felt by communities and turn our findings into satire. We invite and guide participants to collectively develop their own humorous ideas and materials that express the absurdity of those harms being taken for granted, offering an engaged period of collective creativity, perhaps some cathartic release, and possibly some fun materials for further shared use.
Priya Goswami
Coordinator Contact: priya@mozillafoundation.org
What does the digitalisation drive of the world’s most populous country look like? ‘Digital Bharat’ is an exhibition that brings you face-to-face with the lived realities of rural India as people adapt to an unprecedented drive of digitalisation in the age of data mining for emergent technologies. What is it like to be a rural Indian shouldering Digital Public Infrastructures (DPIs) that affect health and livelihood – the two fundamental rights of people? In India, there is a saying: there is ‘India’ of the urban and upwardly mobile, and then there is ‘Bharat’, the Hindi name for India – the ‘Bharat’ of tier 3, 4, and 5 cities, towns, and villages, and their denizens.
‘Digital Bharat’ explores this impact in rural India. To discover it, I followed the one-million-strong ASHA workers – female on-ground health activists, where ASHA stands for Accredited Social Health Activist – and simultaneously discovered a surveillance app used to monitor the ‘business and productivity’ of daily-wage labourers, who earn as little as $3.50 a day for fewer than 100 days a year. My questions have led me to rural Rajasthan, 500 km from the capital, and rural Haryana, 180 km away. Keeping the fulcrum on health data and the surveillance of daily-wage labourers in these two states, within a radius of hundreds of miles, my search has yielded diverse, heterogeneous, even bizarre stories directly from the ground and the community, especially rural women. These stories of the personal struggles of rural people are seldom found on any online platform, while, dangerously and increasingly, their data is available live online on public infrastructures, without them consenting to or comprehending the ramifications.
Tania Duarte, Elizabeth Remfry
Coordinator Contact: lizzie.remfry@weandai.org
Typical models of AI literacy and participatory research often reinforce the exclusion of under-represented voices, particularly learning-disabled people, older adults, children, and those with differing literacy abilities. If we truly want to build AI technologies that work for and are accountable to all of society, then we need better methods for including and enabling a diverse range of voices in the development of, and public discourse on, AI.
Inspired by Memory Boxes and Sensory Stories, We and AI, a non-profit volunteer organisation, are designing “Sensory AI Boxes”: tangible and tactile boxes containing sensory materials to spark conversation. They allow participants to engage in a non-intimidating way, prompting critical thinking and discussion. We draw upon existing research on the experiential quality of art to support people’s AI literacy by countering the opaque and perceived intangible quality of AI technologies.
We invite attendees to come and interact with the Sensory AI Boxes, bringing their own lived experiences, to help us shape further iterations and provide critical feedback on this approach.
Nina da Hora
Coordinator Contact: ninadhoraa@gmail.com
The session titled "Beyond the Face: Exploring the Impact of Facial Geometric Variation on Recognition Algorithms across Ethnicities" delves into the intersection of technology, ethnicity, and the social implications inherent in Facial Recognition Technology (FRT). The session will highlight cutting-edge research that identifies and analyzes the challenges posed by ethnic biases in facial recognition systems. Through a detailed examination of deep learning algorithms and their interaction with geometric features across different ethnic groups, it aims to unfold the layers of ethical and technical considerations that must be navigated in the development and application of FRT.
Participants will explore the nuances of how automated systems interpret facial features differently across ethnicities, potentially leading to biases. The central questions addressed will include: How do geometric variations in facial features influence the accuracy and bias of recognition systems? What are the broader societal implications of these biases? And how can technology be developed to be more equitable? The session structure will combine a presentation of empirical findings with interactive discussions, encouraging a participatory and reflective examination of the material.
Freyja van den Boom
Coordinator Contact: s5076626@bournemouth.ac.uk
This curated AI/Smart City walk, organized as part of the FAccT Conference, will critically engage participants with the past, present, and speculative futures of smart city infrastructures in Athens. Through a series of staged interventions and interactive provocations, the walk will challenge dominant narratives surrounding AI-driven urban governance, surveillance, and digital infrastructures. By juxtaposing real-world smart city applications with speculative futures, the event will encourage participants to question the trade-offs often presented in AI policy—such as efficiency versus privacy or security versus democratic freedoms. Drawing on speculative design, futures thinking, and causal layered analysis, this research-led intervention will examine how AI shapes urban life and how alternative futures can be envisioned and contested.
Malavika Jayaram, Nayantara Ranganathan
Coordinator Contact: mjayaram@digitalasiahub.org
Indices that chart AI trajectories and states of readiness are a powerful lever to enlist countries in the race to build and deploy “AI”. Endorsed by international development agencies and adopted widely, indices are compelling tools of influence that measure and rank countries on their technical and social AI infrastructure, implicitly getting majority world countries to co-sign an agenda where their role begins as, and often remains, marginal.
We want to question the idea and form of the AI index, including what it includes and excludes. We want participants to visualize that The Map is NOT the Territory. We imagine the CRAFTing circle as an embodied and present moment centered on community and care, towards creating individual or communal objects that upend dominant narratives.
Alejandra Judith Josiowicz, Diana Mosquera, Genoveva Vargas-Solar, Ivana Feldfeber, Ivanna Martínez Polo, Laura Alonso Alemany, Luciana Benotti, Patricia Peña, Susana Cadena-Vela, Virginia Brussa
Coordinator Contact: lauraalonsoalemany@unc.edu.ar
This session seeks to co-create, reflect on, expand, enrich and articulate ethical, inclusive, and sustainable approaches to securing and maintaining funding for non-standard, community-driven projects that often thrive outside traditional academic or institutional structures. Through collective exploration and discussion, we aim to uncover shared challenges—such as exclusionary grant processes, language and bureaucratic barriers, and rigid definitions of impact—and assemble tools that support funding practices aligned with the identity and values of these initiatives. We focus on strategies that allow funding to emerge organically from the logic and priorities of the communities involved, and where external expectations are thoughtfully considered but secondary.
Grounded in the experiences of feminist and Latin American initiatives in technology, AI, and data governance, the session will offer a concrete lens through which to explore broader questions of autonomy, ethics, and sustainability in resourcing projects. Participants will examine grassroots support models, alternative governance structures, and collective knowledge practices that challenge dominant funding paradigms. Outcomes will include a practical toolkit and the foundation for a recommendation that funders adapt to the needs arising from these initiatives’ driving principles (security, accountability, respect for the environment, among many others) by adopting more inclusive and context-aware evaluation criteria—ultimately fostering a funding ecosystem that values integrity, diversity, and community-defined relevance.
Kimi Wenzel, Avijit Ghosh, Sachin Pendse, Stephanie Milani, Ajeet Singh, Laura Dabbish, Geoff Kaufman
Coordinator Contact: kwenzel@andrew.cmu.edu
When we encounter ourselves within AI‑mediated representations—whether in social‑media filters, automated photo‑organizing apps, or generated images—we confront versions of ourselves that have been interpreted and reconstructed by machine‑learning systems. For many people, these AI‑mediated interpretations fail to capture (or actively misrepresent) their self‑image, identity, and lived experience.
Emerging generative‑AI systems thus resemble a distorted mirror: they let us see directly how algorithmic systems “understand” us. By engaging with images produced by these models, individuals can compare how they perceive themselves and their communities with how data‑driven systems and society perceive them. In non‑AI contexts—such as seeing one’s representation in traditional media—this reflective process can foster healthy self‑concept and self‑acceptance, but it can also trigger dysmorphia when the gap between perception and self‑image is too large. Understanding the experience and impact of misrepresentation through generative‑AI systems is therefore a critical first step toward mitigating this source of harm.
Toward this goal, our CRAFT workshop will bring experts and community members together to surface and discuss synthetic misrepresentations created by generative‑AI systems. The workshop will feature a panel, an interactive reflective exhibit, and facilitated breakout discussions.
Roya Pakzad
Coordinator Contact: roya@mozillafoundation.org
Humanitarian and government agencies are turning to (Gen)AI-enabled chatbots to deliver information and services to refugees and migrants -- yet these tools are often developed and evaluated through an English-centric lens, overlooking the context and the linguistic, cultural, and legal nuances important to displaced communities. Our session, “Centering Refugee and Migrant Voices: Human Rights-Based AI Evaluation Across Languages,” brings together a diverse panel -- including a UNHCR specialist with lived Afghan refugee experience, legal and humanitarian experts, and an AI evaluation expert. Currently, I am collaborating with them to develop a multilingual, human rights-centered AI evaluation framework as part of my ongoing Mozilla Foundation Fellowship. Although still in progress, this framework serves as an emerging tool to benchmark GenAI chatbot performance in multiple languages (e.g., English, Persian, Dari, Arabic, Spanish), using criteria that go beyond simple accuracy -- addressing empathy, trustworthiness, data privacy, and anti-discrimination.
Rafael Grohmann, Luana Cuello, Maraiza Adami
Coordinator Contact: rafael.grohmann@utoronto.ca
Around the world, workers are organizing to build alternatives to dominant platform and technology companies through initiatives like worker-owned technologies, platform cooperativism, and digital solidarity economies. In Latin America, these efforts are grounded in community values, rejecting technocentric approaches and emphasizing collective governance and intersectional struggles. This session shares collective learnings from cooperatives, collectives, and social movements in Brazil and Argentina engaged in building technologies from below. Drawing on findings from the Worker-Owned Intersectional Platforms (WOIP) research, we invite participants to reimagine digital futures with and from the workers' perspective. Together, we explore how these collectives theorize and practice concepts such as feminist autonomous infrastructures and queering computation. We also reflect on how Latin America's historical engagements with community technology and labor can inform new critical pedagogies and more just, community-rooted approaches to computing and AI.
Gemma Galdon-Clavell (speaker), Misha Salahshoor
Coordinator Contact: misha.salahshoor@iaaa-algorithmicauditors.org
As AI systems increasingly shape access to resources, opportunities, and rights, algorithmic auditing emerges as a crucial practice for identifying and mitigating potential harms. However, the field faces significant resistance from powerful stakeholders who prioritise innovation over accountability, creating power asymmetries that limit auditing's effectiveness as a tool for justice. This interactive workshop invites algorithmic auditors, policy experts, researchers, and advocates to reimagine auditing as a practice of community empowerment and collective resistance. Led by the International Association of Algorithmic Auditors and Eticas AI, participants will explore how to build meaningful connections between auditing professionals and affected communities. Through examination of real-world case studies from Eticas' repository of community-led audits, we will engage with authentic community concerns, methodological approaches, and tangible outcomes. These materials provide concrete examples of how auditing can centre marginalised voices while serving as anchors for our collaborative discussions. This session welcomes policymakers, developers, computer scientists, community advocates, and auditing professionals interested in strengthening algorithmic accountability through collaborative, community-centred practices. Join us in building counter-power against harmful AI systems and ensuring auditing serves those most affected by algorithmic decision-making.
Ben Snaith, Georgia Panagiotidou, Aurora Gomez, Tamara Kneese
Coordinator Contact: georgia.panagiotidou@kcl.ac.uk
The rapid adoption of machine learning (ML) has come with a significant environmental burden due to increased resource usage, yet the material and environmental impacts of AI are no surprise anymore, especially to the FAccT community. This workshop aims to shift from awareness and transparency to empowerment and action, with the understanding that such action will need to be both individual and collective. It will interweave this individual-versus-collective tension through a round-table discussion that gathers researchers and activists to discuss and inform each other on methods and tactics for supporting environmental action. Specifically, the focus will be on bringing together various actors interested in the environmental impacts of AI and introducing them to the tools and tactics used ‘on the ground’ to map, measure, understand, and mitigate these impacts.
James Clar, Yi Fan Jiang, Ariel Kotzer, Ruhee Maknojia, Mónica Rikic, Armanda Lewis
Coordinator Contact: al861@nyu.edu
“Generative AI, Art, and the Bounds of Creativity” explores the dynamic intersection where algorithms meet imagination. As generative models produce increasingly sophisticated outputs, they introduce new conversations about originality, ownership, and the human role in artistic production. Join some of the world's leading multimedia artists as they discuss generative AI in the context of their own artistic practice and the overall creative process. From AI-generated visual art to traditional painting techniques to interactive works, we will delve into the role of art, artistic agency, and the implications for society. Are we witnessing the evolution of artistic expression, or the automation of inspiration? Do algorithmic tools democratize creativity—or reinforce existing power structures under the appearance of innovation? This session is suitable for anyone interested and/or involved in the ethics and implications of computational creativity at scale and the boundless human capacity for creativity.
Alex Hanna, Clarissa Redwine, Linda Huber, Nathan Kim, Cella Sum, JS Tan, Justine Zhang, Gena Kim, Sophie Song, Shreya Chowdhary, Bria Massey, Francesca Lameiro
Coordinator Contact: clarissa.redwine@gmail.com
AI’s impact on the labor process raises new questions about control, autonomy, and resistance. The goal of our workshop is to identify concrete possibilities for labor resistance by examining, with our participants and panelists, how AI technologies restructure work (and management), intensify exploitation, and shift the dynamics of control in everyday workplaces. This session will be a joint collaboration between DAIR, ACM Strike School, and Collective Action in Tech.
This workshop will begin with a panel discussion exploring the various ways in which AI has changed the labor process—from algorithmic management and surveillance to automation and alienation. The four panelists, who will include rank-and-file workers, organizers, and labor experts (e.g., from the Writers Guild of America, Data Labors Association, and National Nurses United), will draw from their own experiences to highlight examples of how workers have organized to resist AI in the workplace and AI-empowered management practices. Through contracts and bargaining fights, worker-led mutual aid efforts, grievances and unfair labor practice (ULP) charges, and a variety of other methods, workers can contest the harms of AI alongside other workplace issues, including compensation, working schedules, and technologies of surveillance and management.
In the second part of our workshop, we will translate insights and strategies from the panel conversation to participants’ own workplace conditions through our AI Workers’ Inquiry exercise. We follow Marx’s 1880 “workers’ inquiry” – a set of questions about working conditions, workers’ experiences of exploitation, their role in the value-production process, and the opportunities for collective response through workers’ councils and strikes. Marx’s survey has subsequently been a point of inspiration for workers to facilitate collective sense-making about their conditions and to perform the spadework for resistance and power-building in their own workplaces. For example, the Distributed AI Research Institute has convened data workers from around the world to conduct their own Data Workers Inquiry.
Shiran Dudy, Jan Simson, Yanan Long
Coordinator Contact: ylong@uchicago.edu
Session Website: https://facct25pd.github.io/
As a relatively new venue, FAccT has brought together academics, activists, and policymakers to critically interrogate the societal implications of AI/ML technologies. To reflect on how well the conference meets these goals, we are organizing a participatory session that invites attendees to surface key questions and concerns through a bottom-up process. Using methods from participatory design, the session will create space for collective reflection on experiences, expectations, and visions for FAccT.
Building on prior work like Laufer et al. (2022), which analyzed FAccT’s scholarly output, this session centers around eliciting direct community input. Participants will generate statements about the conference’s role, impact, and future direction, which will be shared and voted on interactively through the Polis platform. While initial discussion and voting will take place during the session, the process will continue for another week via the conference Slack channel.
The session will begin with a brief overview of participatory design practices to frame the value and goals of this session. In small groups, participants will surface key questions and themes about FAccT, generating seed statements to kick off broader conversations. We will aim to close with early insights from the session.
Dr Laurens Naudts, Dr Charis Papaevangelou, Dr Plixavra Vogiatzoglou
Coordinator Contact: c.papaevangelou@uva.nl
Greece is experiencing a surge in data centre development, driven largely by US tech giants like Microsoft. The company’s investment in three Attica-based centres was celebrated as a milestone of modernisation and facilitated through fast-tracked regulations and tax breaks. Yet this influx of foreign capital sits uneasily with the EU’s ambitions for digital sovereignty, which call for domestic infrastructure to ensure technological independence. In this context, Greece has been chosen to host one of seven AI Factories across Europe. National policy frames this as a strategic opportunity, positioning Greece as a digital hub. However, these developments are often framed by policymakers and media through a techno-optimistic lens, overlooking local social, environmental, and labour implications.
This CRAFT session brings together interdisciplinary scholars and local practitioners to critically examine Greece’s digital transformation and its geopolitical positioning. Drawing from local experience, the CRAFT will raise broader questions of who carries the burden of building and sustaining these infrastructures vis-à-vis who profits, how decision-making processes for such projects could be made more participatory, and how communities can resist and imagine alternative futures of affective infrastructures in light of these power imbalances. Following this cross-disciplinary dialogue, participants will be equipped to critically reflect on the currently promoted digital future, which embraces large-scale adoption of AI as a means of furthering national growth and achieving supranational digital sovereignty, often to the detriment of local interests.