"Critiquing and Rethinking Fairness, Accountability, and Transparency" (CRAFT) is a dedicated track that builds bridges from the conference to people who contend with computing systems from many different angles: journalism and organizing, art and education, advocacy, governance, and beyond. Critique, reflection, and power are at its core, and it provides a unique opportunity within an academic conference to center the impact of technology on communities and the policy implications that arise from that impact.

Building Feminist AI in Latin America: experiences from the ground

Alejandra Josiowicz, Universidade do Estado do Rio de Janeiro, Brazil-Argentina; Kruskaya Hidalgo Cordero, Platform Observatory, International Equalities Institute, Ecuador-México; Diana Mosquera, Diversa Studio, Ecuador; Susana Cadena Vela, Universidad Central del Ecuador, Ecuador; Ivana Feldfeber, DataGenero, Argentina; Virginia Brussa, Universidad Nacional de Rosario, Argentina; Genoveva Vargas-Solar, CNRS, LIRIS, France-México; Paola Ricaurte, Tecnológico de Monterrey, México-Ecuador

This unconference brings together the experiences and reflections of Latin American activists, practitioners, and academics who are members of the Latin American and Caribbean hub of the Feminist AI Research Network. Participants will share projects and initiatives that reflect the real possibilities and results of developing feminist AI in the region, as well as the obstacles that keep women from being recognized as central actors in the development of AI. We will share lessons learned and achievements from three years of work at the Feminist AI Research Network. These include exploring algorithmic harms beyond bias, developing participatory methodologies, incorporating feminist approaches and principles into technological solutions, working with Indigenous communities, creating platforms for capacity building, building multidisciplinary teams, democratizing the development of AI in the region beyond academia and private companies, and building prosocial AI for the advancement of the rights of women and marginalized communities in the region. With this unconference, we want to challenge basic assumptions about technological development: Which technologies should be developed? By whom? How? For what? Where? Who will bear the costs?


Better Utopias: resisting Silicon Valley ideology and decolonizing our imaginaries of the future

Nina da Hora, Joana Varon, Annette Zimmermann

For decades, Silicon Valley and Hollywood have churned out imaginaries of possible futures. We will explore how these imaginaries are intertwined with the growing influence of ideologies that guide the CEOs and investors of big tech oligopolies, who disproportionately benefit from an unprecedented concentration of power, wealth, and information. These ideologies include branches of longtermism (effective altruism, but also effective accelerationism) and right-wing libertarianism. What are the actual and possible harms associated with such ideologies and imaginaries? Our aim is to critically discuss, from different theoretical and practical perspectives, how dominant, narrow visions of the future, underpinned by colonial worldviews about the kind of future technology is meant to create, are incompatible with the justice-oriented values articulated by decolonial, antiracist, ecological, and queer-/transfeminist visions of being on this planet. At the same time, we also want to provide a space for envisioning alternatives. What ideologies and imaginaries of the future would participants of this session like to center in the design and deployment of emerging technologies? Which paths are available for decolonizing our imaginaries and for articulating real-world alternatives for tech development guided by social, political, and environmental justice? Which initiatives that effectively critique and contest dominant tech industry ideologies already exist, and which new initiatives are needed?


Towards Responsible Non-extractive AI Research and Collaboration with Indigenous Communities: Centering Language Communities in NLP Ethics, Fairness and Accountability

Edem Wornyo, Google Research, New York, USA; Ben Hutchinson, Sydney, Australia; Claudio Santos Pinhanez, IBM Research, Brazil; Luciana Benotti, Universidad Nacional de Córdoba, Argentina; Vukosi Marivate, University of Pretoria, South Africa; Zara Wudiri, University of Maiduguri, Nigeria; Michael and Caroline Running Wolf, Buffalo Tongue, Canada; Aline Paes, Universidade Federal Fluminense, RJ, Brazil; Altaci Kokama, Universidade Federal do Amazonas, Brazil; Andrew Smart, Google Research, USA; Jamila Smith Loud, Google Research, USA; Donald Martin, Google Research, USA; Katherine Heller, Google Research, USA; Ding Wang, Google Research, USA; Beth Coleman, University of Toronto, Canada; Te Mihinga Komene, University of Otago, New Zealand; Kathy DeerInWater, American Indian Science and Engineering Society, USA

Indigenous communities around the globe have been exploited over the centuries. Members of these communities not only still face existential threats but also often face the theft of their cultural and linguistic knowledge. In the AI context, myriad new challenges are emerging for these communities, especially in natural language processing (NLP), often in the form of extractive data practices. The concentration of wealth in the Global North cannot be understood apart from the unequal historical legacy of past colonial empires, which extracted resources from their colonies; current data extraction is a continuation of this old story. At the ACM FAccT conference, we intend to organize a workshop bringing together diverse communities, including academic AI researchers and industrial partners, scholars from other areas who research Indigenous cultural issues, and Indigenous scholars and thinkers, to explore and deliberate on these issues. Our overarching goal is to move towards establishing a code of ethics, a safe and fair open-source collaboration platform, and legal and licensing frameworks for partnerships between Indigenous communities, researchers, and industry in the NLP space. Moreover, we intend to catalyze a working group of researchers to discuss and evaluate practices and programs related to Indigenous peoples around the world, helping to prevent extractive practices in the AI age.


Building networks for decolonial AI resistance: solidarity & strategies to fight neocolonial AI pipelines

Pratyusha Ria Kalluri, Stanford University; Marie-Therese Png, Oxford Internet Institute; Karen Hao, The Atlantic

"AI" research and technologies frequently constitute global pipelines driving extraction of lands and exploitation of peoples. These AI pipelines echo and perpetuate the historical and ongoing legacy of colonization, and now they are rapidly expanding the scope and accelerating the pace of neocolonial projects. The AI industry's narrative obfuscates its reliance on the extraction of labor, metals, water and land to drive compute power, and build hardware / wider physical infrastructure. These complex multi-country supply chains, leaves Global Majority populations and ecosystems systemically more vulnerable to risks from AI industry's growth. Simultaneously, at every stage of these extractivist pipelines, there are powerful communities - in Brazil and globally - building toward resistance and transformation, including water defenders, land defenders, labor organizers, community leaders, artists, activists, language defenders, gig worker and data worker collectives, anti-surveillance activists, and many more. They demand accountability from the AI industry amassing massive value (financial, resource, political, market power, etc.) at the cost of Global Majority socio-political-ecological systems. These communities are scholars and drivers of change, rich with insight into these pipelines and tactics of intervention, yet frequently denied the resources to connect and interlock our struggles and strategies. This workshop will bring together 25 people embedded in decolonial struggles across the globe, where these struggles may be occurring under different names yet are unified in challenging, fighting, and transforming the neocolonial AI pipelines.


In Código of Hope: Possibilities of Global Solidarity and Activism in Academic Spaces

Arjun Subramonian (University of California, Los Angeles; Queer in AI), Evani Radiya-Dixit (Stanford University), Christina Chance (University of California, Los Angeles)

Researchers connected to the Global South work to protect their communities from homogenization, academic exploitation, and global injustices, amid a Western-centric, neoliberal FAccT culture. This has led to serious burnout and frustration. This CRAFT aims to provide a healing space for such researchers to share their experiences, challenge oversimplified narratives of the Global South, and learn from Brazilian and anti-genocide activists about integrating activism into research and sustaining relationships with local communities. The session comprises two 20-minute panels. The first panel asks, "How do we engage with local cultures when hosting conferences in Brazil and the Global South more broadly?", while the second explores, "How do we push the possibilities of calling out and engaging in direct action against global injustices, including the genocides in Palestine, Congo, and Sudan?" Each panel is followed by a 25-minute small-group discussion to foster solidarity through sharing vulnerabilities and strategies for advancing justice.


In Código of Hope: Collective Methodologies to Reimagine Languages that Connect Us

Liliana Zaragoza Cano (ellae/they), Feminist AI Research Network (fr) América Latina & Caribe Hub; Anaelia Ovalle (they/them), Feminist AI Research Network (fr) América Latina & Caribe Hub, Queer in AI core organizer, University of California, Los Angeles

This CRAFT session centers how to reimagine and nurture connections between languages from our own situated contexts, connections that allow us to be and to take care of our communities, networks, and territories. After all, how can we ask this of machines if we can't even do this amongst ourselves? As a way to return to the body and its connection with the environment, we will center pedagogical practices based in educación popular (popular education) and hackfeminist perspectives to envision language that (re)connects us. In practice, we will facilitate a safer, playful space that nurtures a collective construction of knowledge centered not on the dynamics of knowing who we are or what we do, but on what we can co-create by imagining, designing, and doing together. To do this, we will play with creative narratives and affective technologies that invite us to pause; to reimagine languages that bring us closer to more habitable worlds, in a code of hope.


Visions of Afro-feminist Emancipatory, Liberatory AI (VELAI)

Bobina Zulfa, Bonnita Nyamwire

Governance of AI is one way to realise more inclusive AI ecosystems, especially for persons living at the margins of society. Our latest research on Afro-feminist AI puts forward a number of draft principles and interventions aimed at the practical realization of this aim. We propose a session based on our existing research on building "Afro-feminist AI". This session will advance and explore in depth the need for an Afro-feminist approach to AI governance, with a focus on decoloniality and giving voice to African women's perspectives. Our goal is to showcase how Afro-feminist principles can build more situated and contextual AI governance for Africa and the Global Majority. Additionally, we will hold a discussion workshop with researchers and other attendees about the frameworks advanced in our reports and how they can be integrated into use cases applicable to AI governance scenarios.


Black in Generative AI

TBD

This workshop is motivated by the following questions: Where is the conversation about Black people in AI? What should that conversation focus on in the future(s) of generative AI being envisioned? We aim to start this conversation by bringing together industry practitioners; academic experts in race, computing, and science and technology studies; activists; and nonprofit leaders. Our conversation will center on two topics: (1) uncovering and mitigating harms and (2) holding space for generative work. Our session will begin by eliciting reflections from participants on how work for or with Black people has been framed within AI research over the last decade. Next, our interdisciplinary expert panel will discuss the two topics, followed by open discussion between panelists and participants. In all, this workshop will help start a conversation and build a research community around shaping generative AI towards a future that is equitable, inclusive, and meaningful for Black communities.


Designing for the Long Tail: Case Studies in Contextual ML Design

Nyalleng Moorosi, Raesetje Sefala, Asmelash Teka

Big tech's "one model for all" approach aids their centralization of power, while in reality these models result in subpar performance, even by their own metrics, for non-dominant populations. In fact, hype from big tech companies promoting this approach, has harmed small community rooted organizations which focus on building task specific models suited for their communities. These small organizations have developed models that outperform big tech's because they encode values aligned with their communities' needs: ranging from dataset design choices, to decisions on model architecture and metrics for evaluation. This raises the question: How do the design choices of a researcher deeply familiar with a research problem's social context differ from those of a researcher with limited understanding of this context? Through the guidance of these builders, we will use case studies and participatory methods to discuss how knowledge of communities impacts decisions when building ML products.



Grounded Frameworks for auditing public sector uses of AI in Latin America

Derechos Digitales

The session aims to introduce frameworks for, and facilitate discussion on, assessing and overseeing the deployment of artificial intelligence (AI) in the public sector across Latin America. Participants will share methodologies that empower civil society to effectively examine the deployment of AI, and show how collaborations between researchers and non-governmental organizations can foster transparency, accountability, and societal impact in the realm of AI governance, ultimately benefiting the communities served. During the discussion, panelists and the audience will be asked how to advance partnerships and methodologies to better assess AI deployment, and what strategies can overcome existing challenges.


Centering communities' visions of success: a data feminist approach to impact evaluation of cross-sector partnerships

Hongjin Lin (Harvard University), Dasha Pruss (Harvard University), Pablo Nunes (O Panóptico project, Brazil), Thallita Lima (O Panóptico project, Brazil), Chinasa T. Okolo (Brookings Institution), Helena Suárez Val (Feminicidio Uruguay and Universidad Tecnológica del Uruguay, Uruguay), Alessandra Jungs de Almeida (Universidade Federal de Santa Catarina and MIT), Isadora Araujo Cruxen (Queen Mary University of London), Lauren Klein (Emory University), Catherine D'Ignazio (MIT)

Cross-sector partnerships aimed at developing AI technologies for social applications have delivered little proven, tangible benefit to the communities involved. In this bridge-building workshop, we bring together community organizations, activists, technologists, and academics to discuss and co-design impact evaluation guidelines that prioritize the objectives of local communities and align with their visions of success. Grounding our discussions in the intersectional data feminism framework, we highlight the principles of challenging power, elevating emotion and embodiment, rethinking binaries and hierarchies, embracing pluralism, considering context, and making labor visible. Beginning with a brief presentation to establish a common understanding, participants will then delve into small-group discussions and collaboratively contribute to a document outlining proposals for impact evaluation guidelines. After the workshop, the organizers will consolidate these proposals and outstanding open questions before sharing them with the broader FAccT community.


Do repositories of public algorithms in Latin America effectively promote transparency?

Juan David Gutiérrez, Associate Professor, School of Government, Universidad de los Andes. Verónica Rojas Montes, Professor of Law & Technology (Information Society) at Pontificia Universidad Católica del Perú. Jorge Luis Ordelin Font, Professor of Intellectual Property Rights, Law & New Technologies at CIDE. Romina Garrido, Subdirector of GobLab, Universidad Adolfo Ibáñez. Nicolo Zingales, Professor at Fundação Getulio Vargas (FGV) Law School.

This panel will discuss the creation and operation of repositories of public algorithms, which are gaining prominence among governments as a transparency instrument. These repositories provide information on, among other things, how automated decision-making systems used in the public sector work and how they make decisions. However, the actual contributions of these repositories to algorithmic transparency have not been fully explored. Academics from Chile, Colombia, Mexico, and Peru will discuss the challenges of building repositories of public algorithms and identify their strengths and weaknesses, their usage and users, their maintenance costs, and their potential contributions to transparency and other goals such as accountability and explainability.


Cristina Alves - Tech and Policy Researcher at ITS Rio / João Victor Archegas - Senior Tech and Policy Researcher / Celina Bottino - Project Director at ITS Rio

This workshop proposes a vital debate on the challenges and opportunities brought by generative artificial intelligence (GenAI) in the context of copyright. Our discussion will explore the impacts of GenAI technologies in light of Brazilian copyright law, focusing on how these innovations can be harmonized with current and future regulations. Regulatory models under discussion in the National Congress will be addressed, emphasizing the importance of balancing innovation and protection in a field where culture, economy, and legislation meet. The workshop is aimed at stakeholders from different areas, including the creative, legal, and innovation sectors, promoting a multisectoral dialogue on how GenAI can coexist with copyright, benefiting society as a whole.


AI in the Classroom: Participation Models for the School Community

Júlia Mendonça, Carla Rodrigues, and Eduardo Mendonça

In Brazil, public schools are undergoing a transformation in educational practice. This includes the increasing adoption of artificial intelligence (AI)-based teaching and learning technologies, which process massive amounts of data from children and adolescents for various problematic purposes, such as sending abusive targeted advertising. Although the country has norms for democratic and participatory school management that must be followed, these technologies are currently being introduced without any risk analysis or training for the school community, which is completely excluded from the selection process. This lack of participation and engagement from the community, especially students, allows the companies providing such products to dominate the conversation and construct a techno-solutionist narrative in which "progress and advancement" are only possible through agile, unreflective implementation. Faced with this issue, we developed the project "AI in the Classroom: Participation Models for the School Community," which resulted in a booklet containing reflections on the importance of recognizing students' voices in decision-making about the technologies used in teaching and learning and on the right time to consult them, as well as three concrete models that can be applied in practice: (i) the "Unified" Model; (ii) the "Student as Protagonist Agent" Model; and (iii) the "Student as Participant and Supporter of the 'Teacher Advisor'" Model. The primary target audience is public education managers in Brazil's Education Departments, with the aim of achieving concrete results through the practical application of these models. We propose a lightning talk about the booklet, aiming to collect impressions and comments from the conference's specialized audience.


Countering AI Harms through Public Interest Litigation

Bilel Benbouzid & Nathalie Smuha

This session will explore public interest litigation as a means to counter AI-related harms. Thus far, much research on AI litigation has focused on epistemological issues raised in courts, e.g., the use of black-box models and their lack of interpretability. In this session, we propose a hands-on approach by zeroing in on the tactics and practices of stakeholders, who will share their experiences of pursuing public interest litigation to counter AI harms. We will pay special attention to how existing legal frameworks can be leveraged to challenge problematic AI applications. By blending legal and sociological perspectives, we seek to better understand litigation's potential and limits as a tool against AI's harmful use, as well as its role in the development of regulations to counter AI's excesses. Furthermore, we wish to explore how (new) AI regulatory frameworks that aim to protect individuals (such as the EU's AI Act) can inadvertently limit the initiation of litigation. The session consists of two parts. In the first part, three panelists from Brazil, Europe, and the US will share their experience litigating AI harms through a case they were involved in. This will enable us to examine the legal, structural, and cultural barriers to democratizing the use of law in AI matters. The second part consists of a group discussion with the audience on strategies to enhance legal expertise among activists, identify relevant judicial arenas, and counter AI-related harms through strategic litigation.



Generative AI Labor Impacts: A Time Capsule of Workers' Stories

Alex Hanna [Distributed AI Research Institute (DAIR)]; Tamara Kneese [Data & Society Research Institute]; Nataliya Nedzhvetskaya [University of California, Berkeley; Collective Action in Tech]; Clarissa Redwine [Collective Action in Tech]; Kristen Sheets [Collective Action in Tech]; Xiaowei Wang [UCLA/Center on Race and Digital Justice]

Our session provides insight into the on-the-ground realities of generative AI's impacts on work in 2024, constituting a time capsule of GenAI from workers' perspectives. We bring together voices from workers and labor organizers across different industries, building upon a zine for which we interviewed individuals in a range of fields about how generative AI is (or is not) affecting their day-to-day work lives and managers' expectations around productivity. Many community and labor organizers have already spoken out about the harmful hype surrounding GenAI and how it contributes to the continued erasure and devaluing of labor that remains essential and irreplaceable for GenAI systems. Union campaigns, like the WGA and SAG-AFTRA strikes in Hollywood, also offer success stories of how labor can negotiate and win against emerging, extractive uses of technology. Our session looks at the existing harms and possibilities of GenAI, centering lived realities as well as strategic responses from organized labor and collective action, rather than reiterating discourses that consolidate top-down decision-making power around AI, labor, and markets.


Decoding Hate in the Horn of Africa's Digital Landscape

Nuredin Ali (DAIR Institute), Timnit Gebru (DAIR Institute), Zeerak Talat (Mohamed bin Zayed University of Artificial Intelligence (MBZUAI))

The dissemination of hate and violence through social media channels can fuel the flames of war, making these platforms an important battleground. Although content moderators play a pivotal role in curbing such incendiary rhetoric, social media platforms do not put enough resources into effective content moderation. While researchers have called for local expertise in topics subject to moderation, even moderators with deep contextual knowledge can disagree with one another on how to moderate specific content. This session will explore how such disagreement impacts content moderation through a panel with content moderators who have worked on moderating harmful content, activists who have been victims of social media-fueled attacks, journalists who have covered genocide and have been targets of social media campaigns, and machine learning researchers working on hate speech classification. We will discuss how social media platforms have handled moderation of this content and the challenges of current and future moderation practices.


Artisanal AI: An art exhibition for embodied understandings of data work

Yemariam Mamo (Stanford University), Dylan Baker (DAIR Institute), Milagros Miceli (DAIR Institute, Weizenbaum Institute), Krystal Kauffman (Turkopticon, DAIR Institute)

Building on the existing understanding of the exploitative potential of data work, this exhibit invites participants to engage with the challenging and overlooked elements of crowdworkers' experience. Through tangible and interactive experiences, Artisanal AI aims to disrupt conventional mental models, prompting participants to reconsider the assumptions, perspectives, and societal dynamics underlying the creation and curation of labeled data. The contribution of crowdworkers to making AI work is intentionally hidden, creating a disparity between the profits of tech and data labeling companies and the earnings of those performing the work. This mirrors historical patterns of colonial projects, with data labeling resembling a modern-day extraction of meaning from data, perpetuating cycles of dependence and inequality. The exhibit invites us to move beyond a merely intellectual understanding of the role data work plays in the development of AI systems and towards a more embodied understanding of what it means for data work to involve human touch.


Artists and the Impact of AI Art: A Discussion on AI Regulation Activism

Harry Jiang, Carnegie Mellon University

After more than a year of commercial "generative AI art" tools on the market, harms to creative industries have undeniably shaken workers in the sector. At the same time, a clear shift in power has occurred from artists to the corporations that developed such tools. This session, featuring artists and voice actors, presents the organizing that has emerged among workers in creative industries and highlights how activism efforts have formed in support of AI regulation and labour protection. In addition, this session hopes to encourage dialogue with practitioners in computing and promote collaboration for technical resistance against exploitation by AI.


Eureka Moment: Feminism, Culture and AI to Reimagine our Futures

Julie Ricard, Data-Pop Alliance; Ivette Yáñez Soria, Data-Pop Alliance; Anna Spinardi, Data-Pop Alliance; Leticia Hora, Data-Pop Alliance; Amanda Quiterio, Data-Pop Alliance.

Our workshop is a space for critical and transformative dialogue about artificial intelligence and technology through an intersectional feminist perspective. We leverage the power of literature, cinema, and creative expression, using AI-generated images to facilitate critical reflection on how technology and humanity intersect. The methodology draws from popular education principles and involves presenting participants with excerpts from relevant books and films, alongside a presentation on the importance of feminist critical perspectives on AI and technology. Participants will then have the opportunity to generate their own AI-based images to illustrate their "Eureka Moment": their vision of the future and the transformations triggered by AI and technology in our societies. The workshop aims both to provide participants with practical strategies for engaging with technology and to offer a critical perspective on AI and the tech industry. Ultimately, we hope to inspire participants to help build the future they want by leveraging the positive potential of technology.


Neurotechnology without technoableism? Collectively imagining just futures

Shazeda Ahmed and Xiaowei Wang, UCLA Center on Race and Digital Justice

Our workshop invites participants to imagine three futures for neurotechnologies, working through how to prioritize the needs and desires of disabled communities. What roles and responsibilities would we want to see tech firms, workplaces, insurance providers, caregivers, and other actors take on? We will open with definitions of terminology, a historicization of neurotech and its precursors, and lightning talks in which key participants share their work on disability justice and technology. We will then conduct a speculative exercise in which participants self-select into one of three scenarios: a future world in which neurotech is banned, another in which it is regulated, and a third in which it is unregulated. What structures of support do participants want in each scenario? How should we pursue them? Through a series of guided questions and prompts, participants will reflect on what justice, labor, and equality might look like in these worlds.


The Futures of FAccT Conference 2045 CANCELLED

Freyja van den Boom, Hybrid AI research fellow with CNRS@CREATE, Singapore

Dear ACM FAccT community,

We regret to inform you that, given recent AI developments, the 2045 and all future editions of FAccT will be cancelled.

Sincerely,

The FAccT-team

Imagine you received this letter: what would these 'developments' be that have made discussing fairness, accountability, and transparency in socio-technical systems no longer necessary, or perhaps no longer legal or ethical?

Would you like to live in a future safe city without crime but with constant tracking by biometric recognition drones? Would you sign up for services that provide an online personal assistant trained on your personal data and (social) media? How would you change a world where your rights and freedoms are algorithmically determined and optimised for..?

What Futures do we want?

We invite you to come speculate about why future FAccT conferences might be cancelled, based on what you hope and fear may happen between now and then.

We believe that by sharing our different cultural, social, and economic perspectives and experiences, we can contribute richer and more representative futures and better understand the challenges and opportunities that lie ahead.

Sincerely,

The Futures of FAccT-team