The research program of ACM FAccT solicits academic work from a wide variety of disciplines, including computer science, statistics, law, the social sciences, the humanities, and policy, as well as multidisciplinary scholarship on fairness, accountability, and transparency in computational systems (broadly construed). We welcome contributions that consider a wide range of technical, policy, societal, and normative issues. These include, but are not limited to, structural and individual (in)equity, justice in systems and policy, and the material, environmental, and economic effects of computational systems.

List of accepted papers

Archival Papers

Ingber, Alexis Shore and Andalibi, Nazanin. Emotion AI in Job Interviews: Injustice, Emotional Labor, Identity, and Privacy

Sabuncuoglu, Alpay and Burr, Christopher and Maple, Carsten. Justified Evidence Collection for Argument-based AI Fairness Assurance

Gerchick, Marissa Kumar and Encarnación, Ro and Tanigawa-Lau, Cole and Armstrong, Lena and Gutiérrez, Ana and Metaxa, Danaé. Auditing the Audits: Lessons for Algorithmic Accountability from Local Law 144's Bias Audits

Kapania, Shivani and Ballard, Stephanie and Kessler, Alex and Vaughan, Jennifer Wortman. Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline

Varoquaux, Gael and Luccioni, Sasha and Whittaker, Meredith. Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI

Luccioni, Alexandra Sasha and Strubell, Emma and Crawford, Kate. From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate

De Toni, Giovanni and Teso, Stefano and Lepri, Bruno and Passerini, Andrea. Time Can Invalidate Algorithmic Recourse

Hutchinson, Ben and Louro, Celeste Rodríguez and Collard, Glenys and Cooper, Ned. Designing Speech Technologies for Australian Aboriginal English: Opportunities, Risks and Participation

Jääskeläinen, Petra and Sanchez, Camilo and Holzapfel, André. Anticipatory Technology Ethics Reflection By Eliciting Creative AI Imaginaries Through Fictional Research Abstracts

Ma, Jiali and Takigawa, Ichigaku and Yamamoto, Akihiro. C2Explainer: Customizable Mask-based Counterfactual Explanation for Graph Neural Networks

Jourdan, Fanny and Chevalier, Yannick and Favre, Cécile. FairTranslate: an English-French Dataset for Gender Bias Evaluation in Machine Translation by Overcoming Gender Binarity

Ingber, Alexis Shore and Andalibi, Nazanin. Regulating Emotion AI in the United States: Insights from Empirical Inquiry

Sargeant, Holli and Magnusson, Måns. Formalising Anti-Discrimination Law in Automated Decision Systems

Sargeant, Holli and Waldetoft, Hannes and Magnusson, Måns. Classifying Hate: Legal and Ethical Evaluations of ML-Assisted Hate Crime Classification and Estimation in Sweden

Hopkins, Aspen and Struckman, Isabella and Klyman, Kevin and Silbey, Susan S. Recourse, Repair, Reparation, & Prevention: A Stakeholder Analysis of AI Supply Chains

Michel, Shira and Kaur, Sufi and Gillespie, Sarah Elizabeth and Gleason, Jeffrey and Wilson, Christo and Ghosh, Avijit. “It’s not a representation of me”: Examining Accent Bias and Digital Exclusion in Synthetic AI Voice Services

Magomere, Jabez and Ishida, Shu and Afonja, Tejumade and Salama, Aya and Kochin, Daniel and Foutse, Yuehgoh and Hamzaoui, Imane and Sefala, Raesetje and Alaagib, Aisha and Dalal, Samantha and Marchegiani, Beatrice and Semenova, Elizaveta and Crais, Lauren and Hall, Siobhan Mackenzie. The World Wide recipe: A community-centred framework for fine-grained data collection and regional bias operationalisation

Nolte, Henrik and Rateike, Miriam and Finck, Michèle. Robustness and Cybersecurity in the EU Artificial Intelligence Act

Qu, Jiaming and Arguello, Jaime and Wang, Yue. Understanding the Effects of Explaining Predictive but Unintuitive Features in Human-XAI Interaction

Konya, Andrew and Thorburn, Luke and Almasri, Wasim and Leshem, Oded Adomi and Procaccia, Ariel and Schirch, Lisa and Bakker, Michiel. Using collective dialogues and AI to find common ground between Israeli and Palestinian peacebuilders

Barrett, Teanna and Okolo, Chinasa T. and Biira, B. and Sherif, Eman and Zhang, Amy and Battle, Leilani. African Data Ethics: A Discursive Framework for Black Decolonial AI

Ruster, Lorenn P and Davis, Jenny L. The Gaps that Never Were: Reconsidering Responsible AI’s Principle-Practice Problem

Scharowski, Nicolas and Perrig, Sebastian A. C. and von Felten, Nick and Aeschbach, Lena Fanya and Opwis, Klaus and Wintersberger, Philipp and Brühlmann, Florian. To Trust or Distrust AI: A Questionnaire Validation Study

Urman, Aleksandra and Makhortykh, Mykola and Hannak, Aniko. WEIRD Audits? Research Trends, Linguistic and Geographical Disparities in the Algorithm Audits of Online Platforms - A Systematic Literature Review

Valdivia, Ana. Data Ecofeminism

Yew, Rui-Jie and Marino, Bill and Venkatasubramanian, Suresh. Red Teaming AI Policy: A Taxonomy of Avoision and the EU AI Act

Alpherts, Tim and Ghebreab, Sennay and van Noord, Nanne. Artifacts of Idiosyncracy in Global Street View Data

Shaw, Andrew and Ye, Andre and Krishna, Ranjay and Zhang, Amy. Agonistic Image Generation: Unsettling the Hegemony of Intention

Kinney, David. Aggregating Concepts of Fairness and Accuracy in Prediction Algorithms

Tridgell, Jennifer and Singh, Jatinder. ‘Stewardship’ as a Fair, Accountable and Transparent Model for Free and Open-Source Software Governance? Looking Beyond the EU’s Cyber Resilience Act

Wang, Angelina. Identities are not Interchangeable: The Problem of Overgeneralization in Fair Machine Learning

Ingber, Alexis Shore and Haimson, Oliver L. and Andalibi, Nazanin. Distinguishing Emotion AI: Factors Shaping Perceptions Including Input Data, Emotion Data Recipients, and Identity

Jiménez-Sánchez, Amelia and Avlona, Natalia-Rozalia and de Boer, Sarah and Campello, Víctor M. and Feragen, Aasa and Ferrante, Enzo and Ganz, Melanie and Gichoya, Judy Wawira and Gonzalez, Camila and Groefsema, Steff and Hering, Alessa and Hulman, Adam and Joskowicz, Leo and Juodelyte, Dovile and Kandemir, Melih and Kooi, Thijs and Lérida, Jorge del Pozo and Li, Livie Yumeng and Pacheco, Andre and Rädsch, Tim and Reyes, Mauricio and Sourget, Théo and van Ginneken, Bram and Wen, David and Weng, Nina and Xu, Jack Junchi and Zając, Hubert Dariusz and Zuluaga, Maria A. and Cheplygina, Veronika. In the Picture: Medical Imaging Datasets, Artifacts, and their Living Review

Sekwenz, Marie-Therese and Wagner, Ben and Parkin, Simon. “It is unfair, and it would be unwise to expect the user to know the law!” – Evaluating reporting mechanisms under the Digital Services Act

Lynn, Jonathan and Kim, Rachel Y. and Gao, Sicun and Schneider, Daniel and Pandya, Sachin S. and Lee, Min Kyung. Regulating Algorithmic Management: A Multi-Stakeholder Study of Challenges in Aligning Software and the Law for Workplace Scheduling

Neumann, Anna and Kirsten, Elisabeth and Zafar, Muhammad Bilal and Singh, Jatinder. Position is Power: System Prompts as a Mechanism of Bias in Large Language Models (LLMs)

Moore, Jared and Grabb, Declan and Agnew, William and Klyman, Kevin and Chancellor, Stevie and Ong, Desmond C. and Haber, Nick. Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers.

Smith, Jessie J. and Madaio, Michael and Burke, Robin and Fiesler, Casey. Pragmatic Fairness: Evaluating ML Fairness Within the Constraints of Industry

Hausladen, Carina I and Knott, Manuel and Camerer, Colin F and Perona, Pietro. Social Perception of Faces in a Vision-Language Model

Cachel, Kathleen and Rundensteiner, Elke. Group Fair Rated Preference Aggregation: Ties Are (Mostly) All You Need

Wojtowicz, Zachary and Jain, Shrey and Vincent, Nicholas. Push and Pull: A Framework for Measuring Attentional Agency on Digital Platforms

Sachdeva, Pratik and van Nuenen, Tom. Normative Evaluation of Large Language Models with Everyday Moral Dilemmas

Basoah, Jeffrey and Chechelnitsky, Daniel and Long, Tao and Reinecke, Katharina and Zerva, Chrysoula and Zhou, Kaitlyn and Díaz, Mark and Sap, Maarten. Not Like Us, Hunty: Measuring Perceptions and Behavioral Effects of Minoritized Anthropomorphic Cues in LLMs

Wang, Angelina and Bai, Xuechunzi and Barocas, Solon and Blodgett, Su Lin. Measuring Machine Learning Harms from Stereotypes Requires Understanding Who Is Harmed by Which Errors in What Ways

Peppin, Aidan and Reuel, Anka and Casper, Stephen and Jones, Elliot and Strait, Andrew and Anwar, Usman and Agrawal, Anurag and Kapoor, Sayash and Koyejo, Sanmi and Pellat, Marie and Bommasani, Rishi and Frosst, Nick and Hooker, Sara. The Reality of AI and Biorisk

Johnson, Nari and Silva, Elise and Leon, Harrison and Eslami, Motahhare and Schwanke, Beth and Dotan, Ravit and Heidari, Hoda. Legacy Procurement Practices Shape How U.S. Cities Govern AI: Understanding Government Employees' Practices, Challenges, and Needs

Chen, Rex and Wang, Ruiyi and Sadeh, Norman and Fang, Fei. Missing Pieces: How Do Designs that Expose Uncertainty Longitudinally Impact Trust in AI Decision Aids? An In Situ Study of Gig Drivers

Sahebdel, Mahsa and Zeynali, Ali and Bashir, Noman and Shenoy, Prashant and Hajiesmaili, Mohammad. LEAD: Towards Learning-Based Equity-Aware Decarbonization in Ridesharing Platforms

Panigutti, Cecilia and Yela, Delia Fano and Porcaro, Lorenzo and Bertrand, Astrid and Garrido, Josep Soler. How to investigate algorithmic-driven risks in online platforms and search engines? A narrative review through the lens of the EU Digital Services Act

Palla, Konstantina and García, José Luis Redondo and Hauff, Claudia and Fabbri, Francesco and Damianou, Andreas and Lindström, Henrik and Taber, Dan and Lalmas, Mounia. Policy-as-Prompt: Rethinking Content Moderation in the Age of Large Language Models

De, Ankolika and Kanthawala, Shaheen and Maddox, Jessica. Who Gets Heard? Calling Out the "Hard-to-Reach" Myth for Non-WEIRD Populations’ Recruitment and Involvement in Research

Ebrahimi, Raman and Vaccaro, Kristen and Naghizadeh, Parinaz. The Double-Edged Sword of Behavioral Responses in Strategic Classification: Theory and User Studies

Owens, Kentrell and Eiger, Yael and Radka, Basia and Kohno, Tadayoshi and Roesner, Franziska. Understanding experiences with compulsory immigration surveillance in the U.S.

Croskey, Payton and Offert, Fabian and Jacobs, Jennifer and Thaler, Kai M. Liberatory Collections and Ethical AI: Reimagining AI Development from Black Community Archives and Datasets

Alanoca, Sacha and Gur-Arieh, Shira and Zick, Tom and Klyman, Kevin. Comparing Apples to Oranges: A Taxonomy for Navigating the Global Landscape of AI Regulation

Verma, Rajeev and Fischer, Volker and Nalisnick, Eric. On Calibration in Multi-Distribution Learning

Taylor, Jordan and Mire, Joel and Spektor, Franchesca and DeVrio, Alicia and Sap, Maarten and Zhu, Haiyi and Fox, Sarah E. Un-Straightening Generative AI: How Queer Artists Surface and Challenge Model Normativity

Becerra Sandoval, Juana Catalina and Jing, Felicia S. Rethinking AI Safety: Provocations from the History of Community-based Practices of Road and Driver Safety

Chandra, Mohit and Naik, Suchismita and Ford, Denae and Okoli, Ebele and De Choudhury, Munmun and Ershadi, Mahsa and Ramos, Gonzalo and Hernandez, Javier and Bhattacharjee, Ananya and Warreth, Shahed and Suh, Jina. From Lived Experience to Insight: Unpacking the Psychological Risks of Using AI Conversational Agents

Lin, Zilin and Trogrlic, Gabriela and de Vreese, Claes and Helberger, Natali. Dangerous Criminals and Beautiful Prostitutes? Investigating Harmful Representations in Dutch Language Models

Liu, Yifan and Yao, Ruichen and Liu, Yaokun and Zong, Ruohan and Li, Zelin and Zhang, Yang and Wang, Dong. Component-Based Fairness in Face Attribute Classification with Bayesian Network-informed Meta Learning

Zollo, Thomas and Rajaneesh, Nikita and Zemel, Richard and Gillis, Talia and Black, Emily. Towards Effective Discrimination Testing for Generative AI

Christian, Brian and Kirk, Hannah Rose and Thompson, Jessica A.F. and Summerfield, Christopher and Dumbalska, Tsvetomira. Reward Model Interpretability via Optimal and Pessimal Tokens

Kallina, Emma and Bohné, Thomas and Singh, Jatinder. Stakeholder Participation for Responsible AI Development: Disconnects Between Guidance and Current Practice

Schoeffer, Jakob and De-Arteaga, Maria and Elmer, Jonathan. Perils of Label Indeterminacy: A Case Study on Prediction of Neurological Recovery After Cardiac Arrest

Ajmani, Leah Hope and Abdelkadir, Nuredin Ali and Chancellor, Stevie. Secondary Stakeholders in AI: Fighting for, Brokering, and Navigating Agency

Khosrowi, Donal and Ahlers, Markus and van Basshuysen, Philippe. When Predictions are More Than Predictions: Self-Fulfilling Performativity and the Road Towards Morally Responsible Predictive Systems

Chowdhary, Shreya and Ingber, Alexis Shore and Andalibi, Nazanin. Technical Solutions to Emotion AI's Privacy Harms: A Systematic Literature Review

Blake, Annabel and Carter, Marcus and Velloso, Eduardo. Are Measures of Children's Parasocial Relationships Ready for Conversational AI?

Karan, Aditya and Balepur, Naina and Sundaram, Hari. Oh the Prices You’ll See: Designing a Fair Exchange System to Mitigate Personalized Pricing

Jo, Jeongwon and Zhang, He and Cai, Jie and Goyal, Nitesh. AI Trust Reshaping Administrative Burdens: Understanding Trust-Burden Dynamics in LLM-Assisted Benefits Systems

Hollanek, Tomasz and Pi, Yulu and Fiorini, Cosimo and Vignali, Virginia and Peters, Dorian and Drage, Eleanor. A Toolkit for Compliance, a Toolkit for Justice: Drawing on Cross-sectoral Expertise to Develop a Pro-justice EU AI Act Toolkit

Schroeder, Hope and Pareek, Akshansh and Barocas, Solon. Disclosure without Engagement: An Empirical Review of Positionality Statements at FAccT

Siddiqui, Saad and Ibrahim, Lujain and Loke, Kristy and Clare, Stephen and Lu, Marianne and Richardson, Aris and McGlynn, Conor and Ding, Jeffrey. Promising Topics for US–China Dialogues on AI Risks and Governance

Haimson, Oliver L. and Mayworm, Samuel Reiji and Ingber, Alexis Shore and Andalibi, Nazanin. AI Attitudes Among Marginalized Populations in the U.S.: Nonbinary, Transgender, and Disabled Individuals Report More Negative AI Attitudes

Almeda, Shm and Netzorg, Robin and Li, Isabel and Tam, Ethan and Ma, Skyla and Wei, Bob Tianqi. Labor, Power, and Belonging: The Work of Voice in the Age of AI Reproduction

Rehak, Rainer. AI Narrative Breakdown. A Critical Assessment of Power and Promise

Zipperling, Domenique and Deck, Luca and Lanzl, Julia and Kühl, Niklas. It's only fair when I think it's fair: How Gender Bias Alignment Undermines Distributive Fairness in Human-AI Collaboration

Smith, Becca and Tolbert, Alexander Williams. The Problem of Generics in LLM Training

Züger, Theresa and Mahlow, Philipp and Pothmann, Daniel and Mosene, Katharina and Burmeister, Fabian and Kettemann, Matthias and Schulz, Wolfgang. Crediting Humans: A Systematic Assessment of Influencing Factors for Human-in-the-Loop Figurations in Consumer Credit Lending Decisions

Paris, Tamara and Moon, AJung and Guo, Jin L.C. Opening the Scope of Openness in AI

Görücü, Sinem and Ren, Yuheng and Samuel, Gabrielle and Panagiotidou, Georgia. "As an individual, I suppose you can't really do much": Environmental Sustainability Perceptions of Machine Learning Practitioners

Narayanan Venkit, Pranav and Laban, Philippe and Zhou, Yilun and Mao, Yixin and Wu, Chien-Sheng. Search Engines in the AI Era: A Qualitative Understanding to the False Promise of Factual and Verifiable Source-Cited Responses in LLM-based Search

Yuan, Xiangzhe and Wang, Jiajun and Wan, Qian and Hu, Siying. A Day in Their Shoes: Using LLM-Based Perspective-Taking Interactive Fiction to Reduce Stigma Toward Dirty Work

Ferrario, Andrea. A Trustworthiness-based Metaphysics of Artificial Intelligence Systems

Becerra Sandoval, Juana Catalina and Jing, Felicia S. Historical Methods for AI Evaluations, Assessments, and Audits

Kanepajs, Arturs and Basu, Aditi and Ghose, Sankalpa and Li, Constance and Mehta, Akshat and Mehta, Ronak and Tucker-Davis, Samuel David and Fischer, Bob and Anthis, Jacy Reese. What do Large Language Models Say About Animals? Investigating Risks of Animal Harm in Generated Text

Lameiro, Francesca and Dunagan, Lavinia and Card, Dallas and Gilbert, Eric and Haimson, Oliver. TIDEs: A Transgender and Nonbinary Community-Labeled Dataset and Model for Transphobia Identification in Digital Environments

Barnett, Julia and Kieslich, Kimon and Helberger, Natali and Diakopoulos, Nicholas. Envisioning Stakeholder-Action Pairs to Mitigate Negative Impacts of AI: A Participatory Approach to Inform Policy Making

Adam, Hammaad and Bermea, Rene S. and Yang, Ming Ying and Celi, Leo Anthony and Ghassemi, Marzyeh. Lost in Transplantation: Characterizing Racial Gaps in Physician Organ Offer Acceptance

Karan, Aditya and Vincent, Nicholas and Karahalios, Karrie and Sundaram, Hari. Algorithmic Collective Action with Two Collectives

Binns, Reuben and Stein, Jake and Datta, Siddhartha and Van Kleek, Max and Shadbolt, Nigel. Not Even Nice Work If You Can Get It: A Longitudinal Study of Uber's Algorithmic Pay and Pricing

Dejl, Adam and Zhang, Dekai and Ayoobi, Hamed and Williams, Matthew and Toni, Francesca. Hidden Conflicts in Neural Networks and their Implications for Explainability

Coelho, Thamiris and Ribeiro, Leo Sampaio Ferraz and Macedo, João and dos Santos, Jefersson A. and Avila, Sandra. Minimizing Risk Through Minimizing Model-Data Interaction: A Protocol For Relying on Proxy Tasks When Designing Child Sexual Abuse Imagery Detection Models

Abdenebaoui, Larbi and Aljuneidi, Saja and Horstmannshoff, Fynn and Meyer, Jochen and Boll, Susanne. Value-Driven Design for Public Administration: Insights from a Generative Chatbot in a Housing Application Case Study

Shailya, Krithi and Rajpal, Shreya and Krishnan, Gokul S and Ravindran, Balaraman. LExT: Towards Evaluating Trustworthiness of Natural Language Explanations

Hwang, Hyunseung and Bell, Andrew and Fonseca, Joao and Pliatsika, Venetia and Stoyanovich, Julia and Whang, Steven Euijong. SHAP-based Explanations are Sensitive to Feature Representation

Hawkins, Will and Mittelstadt, Brent and Russell, Chris. Deepfakes on Demand: The rise of accessible non-consensual deepfake image generators

Jones, Cameron Robert and Rathi, Ishika and Taylor, Sydney and Bergen, Benjamin K. People cannot distinguish GPT-4 from a human in a Turing test

Bateyko, Dan and Levy, Karen. One Bad NOFO? AI Governance in Federal Grantmaking

Perello, Nicholas and Cousins, Cyrus and Zick, Yair and Grabowicz, Przemyslaw. Discrimination Induced by Algorithmic Recourse Objectives

Bampis, Evripidis and Ehrhard, Thomas and Escoffier, Bruno and Mathieu, Claire and Pascual, Fanny and Saulpic, David. Estimating the Electoral Consequences of Legislative Redistricting in France

Hafner, Franziska Sofia and Valdivia, Ana and Rocher, Luc. Gender Trouble in Language Models: An Empirical Audit Guided by Gender Performativity Theory

Holm, Sune. Algorithmic Fairness, Decision Thresholds, and the Separateness of Persons

Poirier, Lindsay and Huang, Juniper and MacGibbon, Casey. What Remains Opaque in Transparency Initiatives: Visualizing Phantom Reductions through Devious Data Analysis

Paraschou, Eva and Michali, Maria and Yfantidou, Sofia and Karamanidis, Stelios and Kalogeros, Stefanos Rafail and Vakali, Athena. Ties of Trust: a bowtie model to uncover trustor-trustee relationships in LLMs

Khadar, Malik and Timma, Amoligha and Ludden, Luka and Kaur, Harmanpreet. Explain it like I'm me: User Characteristics in XAI Decision-Making

Bagriacik, Meryem and Otero, Fernando. Fairness-Guided Pruning of Decision Trees

Djeffal, Christian. Reflexive Prompt Engineering: A Framework for Responsible Prompt Engineering and AI Interaction Design

Mayeda, Cass and Singh, Arinjay and Mahale, Arnav and Sakr, Laila Shereen and ElSherief, Mai. Applying Data Feminism Principles to Assess Bias in English and Arabic NLP Research

Pendse, Sachin R. and Gergle, Darren and Kornfield, Rachel and Meyerhoff, Jonah and Mohr, David and Suh, Jina and Wescott, Annie and Williams, Casey and Schleider, Jessica. When Testing AI Tests Us: Safeguarding Mental Health on the Digital Frontlines

Gourabathina, Abinitha and Gerych, Walter and Pan, Eileen and Ghassemi, Marzyeh. The Medium is the Message: How Non-Clinical Information Shapes Clinical Decisions in LLMs

de Troya, Íñigo and Kernahan, Jacqueline and Doorn, Neelke and Dignum, Virginia and Dobbe, Roel. Misabstraction in Sociotechnical Systems

Gosciak, Jennah and Balagopalan, Aparna and Ouyang, Derek and Koenecke, Allison and Ghassemi, Marzyeh and Ho, Daniel E. Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments

Qiu, Tian and Nichani, Arjun and Tadayontahmasebi, Rasta and Jeong, Haewon. Gone With the Bits: Revealing Racial Bias in Low-Rate Neural Compression for Facial Images

Roy, Arjun and Rizou, Stavroula and Papadopoulos, Symeon and Ntoutsi, Eirini. Achieving Socio-Economic Parity through the Lens of EU AI Act

Doh, Miriam and Canali, Corinna and Oliver, Nuria. What TikTok Claims, What Bold Glamour Does: A Filter’s Paradox

Benson, Lauren and Okutan, Ahmet and Vasan, Roopa. Uncovering the Linguistic Roots of Bias: Insights and Mitigation in Large Language Models

Phutane, Mahika and Seelam, Ananya and Vashistha, Aditya. “Cold, Calculated, and Condescending”: How AI Identifies and Explains Ableism Compared to Disabled People

Kocyigit, Emre and Rossi, Arianna and Sergeeva, Anastasia and Negri Ribalta, Claudia and Farjami, Ali and Lenzini, Gabriele. DeceptiLens: an Approach supporting Transparency in Deceptive Pattern Detection based on a Multimodal Large Language Model

Bappy, Faisal Haque and Cheon, EunJeong and Islam, Tariqul. Centralized Trust in Decentralized Systems: Unveiling Hidden Contradictions in Blockchain and Cryptocurrency

Sim, Mattea and Radka, Basia and Kohno, Tadayoshi and Roesner, Franziska and Hugenberg, Kurt. Characterizing the Default Persona During Design: Mental Representations of Technology Users are Gendered

Rebrean, Maria-Lucia and Malgieri, Gianclaudio. Vulnerability in the EU AI Act: building an interpretation

Manzini, Thomas and Perali, Priyankari and Tripathi, Jayesh and Murphy, Robin R. Now you see it, Now you don’t: Damage Label Agreement in Drone & Satellite Post-Disaster Imagery

Safir, Abdullah Hasan and McInerney, Kerry and Blackwell, Alan F. and Debnath, Ramit. Distributive Epistemic Injustice in AI Ethics: A Co-productionist Account of Global North-South Politics in Knowledge Production

Harvey, Emma and Kizilcec, Rene F. and Koenecke, Allison. A Framework for Auditing Chatbots for Dialect-Based Quality-of-Service Harms

Jain, Shomik and Wang, Margaret and Creel, Kathleen and Wilson, Ashia. Allocation Multiplicity: Evaluating the Promises of the Rashomon Set

Wernick, Alina and Medlar, Alan and Söderholm, Sofia and Głowacka, Dorota. Evaluating the Contextual Integrity of False Positives in Algorithmic Travel Surveillance

Walsh, Melanie and Rey, Connor and Ge, Chang and Nowak, Tina and Tomkins, Sabina. Algorithms in the Stacks: Investigating automated, for-profit diversity audits in public libraries

Waller, Madeleine and Rodrigues, Odinaldo and Cocarascu, Oana. Beyond Consistency: Nuanced Metrics for Individual Fairness

Solyst, Jaemarie and Peng, Cindy and Deng, Wesley Hanwen and Pratapa, Praneetha and Ogan, Amy and Hammer, Jessica and Hong, Jason and Eslami, Motahhare. Investigating Youth AI Auditing

Cartier van Dissel, Mauritz N. and Martin-Gutierrez, Samuel and Espín-Noboa, Lisette and Jaramillo, Ana María and Karimi, Fariba. hyperFA*IR: A hypergeometric approach to fair rankings with finite candidate pool

Malgieri, Gianclaudio. Scalable Fairness: The legal tool against power

Fazelpour, Sina and Fleisher, Will. The Value of Disagreement in AI Design, Evaluation, and Alignment

Khan, Ariba and Casper, Stephen and Hadfield-Menell, Dylan. Randomness, Not Representation: The Unreliability of Evaluating Cultural Alignment in LLMs

Magaña, Maria Isabel and Shilton, Katie. Frameworks, Methods and Shared Tasks: Connecting Participatory AI to Trustworthy AI Through a Systematic Review of Global Projects

Strasser Ceballos, Clara and Kern, Christoph. Location matching on shaky grounds: Re-evaluating algorithms for refugee allocation

Carichon, Florian and Rampa, Romain and Farnadi, Golnoosh. Crossing Boundaries: Leveraging Semantic Divergences to Explore Cultural Novelty in Cooking Recipes

Monke, Helena and Sae-Chew, Benjamin and Fresz, Benjamin and Huber, Marco F. From Confusion to Clarity: ProtoScore - A Framework for Evaluating Prototype-Based XAI

Corbucci, Luca and Heilmann, Xenia and Cerrato, Mattia. Benefits of the Federation? Analyzing the Impact of Fair Federated Learning at the Client Level

Olojo, Seyi and Zakrzewski, Janina and Smart, Andrew and van Liemt, Erin and Miceli, Milagros and Ebinama, Amber and Amugongo, Lameck Mbangula. Lost in Machine Translation: The Sociocultural Implications of Language Technologies in Nigeria

Hudig, Anna Ida and Mandalari, Anna Maria and Norval, Chris and Haddadi, Hamed and Binns, Reuben and Singh, Jatinder. Rights Out of Sight: Data Practices and Transparency Gaps in Smart Consumer IoT Ecosystems

Lee, Min Hun and Tok, Martyn Zhe Yu. Towards Uncertainty Aware Task Delegation and Human-AI Collaborative Decision-Making

Fitzsimons, Aidan Z. and Gerber, Elizabeth M. and Long, Duri. AI constructs gendered struggle narratives: Implications for self-concept and systems design.

Verhagen, Ruben S. and Neerincx, Mark A. and Tielman, Myrthe L. Agent Allocation of Moral Decisions in Human-Agent Teams: Raise Human Involvement and Explain Potential Consequences

De Rosa Palmini, Maria-Teresa and Cetinic, Eva. Exploring Language Patterns of Prompts in Text-to-Image Generation and Their Impact on Visual Diversity

Ye, Jinyi and Luceri, Luca and Ferrara, Emilio. Auditing Political Exposure Bias: Algorithmic Amplification on Twitter/X During the 2024 U.S. Presidential Election

Asiedu, Mercy and Haykel, Iskandar and Dieng, Awa and Kauer, Kerrie and Ahmed, Tousif and Ofori, Florence and Chan, Charisma and Pfohl, Stephen and Rostamzadeh, Negar and Heller, Katherine. Nteasee: Understanding Needs in AI for Health in Africa - A Mixed-Methods Study of Expert and General Population Perspectives

Ma, Sibo and Salinas, Alejandro and Nyarko, Julian and Henderson, Peter. Breaking Down Bias: On The Limits of Generalizable Pruning Strategies

Alam, Sabriya Maryam and Abdulhai, Marwa and Salehi, Niloufar. Blind Faith? User Preference and Expert Assessment of AI-Generated Religious Content

Liang, Jia-Wei and Wang, Hao-Chuan. Is It Fair Enough? Supporting Equitable Group Work Assignment with Work Division Dashboard

Powar, Jovan and Janssen, Heleen and Cloete, Richard and Singh, Jatinder. From policy to practice in data governance and responsible data stewardship: system design for data intermediaries

Reuel, Anka and Connolly, Patrick and Meimandi, Kiana Jafari and Tewari, Shekhar and Wiatrak, Jakub and Venkatesh, Dikshita and Kochenderfer, Mykel. Responsible AI in the Global Context: Maturity Model and Survey

Caetano, Carlos and Santos, Gabriel O. dos and Petrucci, Caio and Barros, Artur and Laranjeira, Camila and Ribeiro, Leo Sampaio Ferraz and de Mendonça, Júlia Fernandes and dos Santos, Jefersson A. and Avila, Sandra. Neglected Risks: The Disturbing Reality of Children’s Images in Datasets and the Urgent Call for Accountability

Skirzynski, Julian and Danks, David and Ustun, Berk. Discrimination Exposed? On the Reliability of Explanations for Discrimination Detection

Gehrmann, Sebastian and Huang, Claire and Teng, Xian and Yurovski, Sergei and Bhorkar, Arjun and Thomas, Naveen and Doucette, John and Rosenberg, David and Dredze, Mark and Rabinowitz, David. Understanding and Mitigating Risks of Generative AI in Financial Services

Kelly, Markelle and Tahaei, Mohammad and Smyth, Padhraic and Wilcox, Lauren. Understanding Gender Bias in AI-Generated Product Descriptions

Imana, Basileal and Shen, Zeyu and Heidemann, John and Korolova, Aleksandra. External Evaluation of Discrimination Mitigation Efforts in Meta's Ad Delivery

Cook, Cynthia M. and Cuffney, Laurie and Howard, John J. and Sirotin, Yevgeniy B. and Tipton, Jerry L. and Vemury, Arun R. Performance Differentials in Deployed Biometric Systems Caused by Open-Source Face Detectors

Imana, Basileal and Korolova, Aleksandra and Heidemann, John. Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes

Eslami, Motahhare and Fox, Sarah and Shen, Hong and Fan, Bobbie and Lin, Yu-Ru and Farzan, Rosta and Schwanke, Beth. From Margins to the Table: Charting the Potential for Public Participatory Governance of Algorithmic Decision Making

Kneer, Markus and Viehoff, Juri. The Hard Problem of AI Alignment: Value Forks in Moral Judgment

Perdomo, Juan Carlos and Britton, Tolani and Hardt, Moritz and Abebe, Rediet. Difficult Lessons on Social Prediction from Wisconsin Public Schools

Yin, Yaxuan and Karumbaiah, Shamya and Acquaye, Shona. Responsible AI in Education: Understanding Teachers’ Priorities and Contextual Challenges

Treiman, Lauren S. and Ho, Chien-Ju and Kool, Wouter. Do People Think Fast or Slow When Training AI?

Castleman, Jane and Korolova, Aleksandra. Adultification Bias in LLMs and Text-to-Image Models

Li, Jingjin and Li, Qisheng and Gong, Rong and Wang, Lezhi and Wu, Shaomei. Our Collective Voices: The Social and Technical Values of a Grassroots Chinese Stuttered Speech Dataset

Aneja, Urvashi and Gupta, Aarushi and Vashistha, Aditya. Beyond Semantics: Examining Gender Bias in LLMs Deployed within Low-resource Contexts in India

Görge, Rebekka and Mock, Michael and Allende-Cid, Héctor. Detecting Linguistic Indicators for Stereotype Assessment with Large Language Models

Lyu, Hanjia and Luo, Jiebo and Kang, Jian and Koenecke, Allison. Characterizing Bias: Benchmarking Large Language Models in Simplified versus Traditional Chinese

Olson, Lauren and Anna-Lena Fischer, Ricarda and Kunneman, Florian and Guzmán, Emitzá. Who Speaks for Ethics? How Demographics Shape Ethical Advocacy in Software Development

Mehta, Manisha and Giunchiglia, Fausto. Understanding Gen Alpha's Digital Language: Evaluation of LLM Safety Systems for Content Moderation

Franchi, Matt and Sandhaus, Hauke and Choksi, Madiha Zahrah and Engelmann, Severin and Ju, Wendy and Nissenbaum, Helen. Privacy of Groups in Dense Street Imagery

Lee, Jun Rui and Emami, Sadegh and Hollins, Michael David and Wong, Timothy C. H. and Villalobos Sánchez, Carlos Ignacio and Toni, Francesca and Zhang, Dekai and Dejl, Adam. XAI-Units: Benchmarking Explainability Methods with Unit Tests

Freel, Alicia and Pias, Sabid Bin Habib and Šabanović, Selma and Kapadia, Apu. How Misclassification Severity and Timing Influence User Trust in AI Image Classification: User Perceptions of High- and Low-Stakes Contexts

Tylstedt, Beatrice and Sadowski, Helga and Eliasson, Caroline and Eklund, Lina. Entangled Pregnancies: Expectants' Experiences of Using Pregnancy Apps

Tsurumi, Takehiro and Beretta, Elena. Social Bias in Vision Transformers: A Comparative Study Across Architectures and Learning Paradigms

Liu, David and Baek, Jackie and Eliassi-Rad, Tina. When Collaborative Filtering is not Collaborative: Unfairness of PCA for Recommendations

Kuehnert, Blaine and Kim, Rachel and Forlizzi, Jodi and Heidari, Hoda. The "Who", "What", and "How" of Responsible AI Governance: A Systematic Review and Meta-Analysis of (Actor, Stage)-Specific Tools

Dhaini, Mahdi and Erdogan, Ege and Feldhus, Nils and Kasneci, Gjergji. Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods

Tian-Zheng Wei, Johnny and Wang, Maggie and Godbole, Ameya and Choi, Jonathan and Jia, Robin. Interrogating LLM design under copyright law

Buyl, Maarten and Khalaf, Hadi and Mayrink Verdun, Claudio and Monteiro Paes, Lucas and Vieira Machado, Caio Cesar and du Pin Calmon, Flavio. AI Alignment at Your Discretion

Ganesh, Prakhar and Tran, Cuong and Shokri, Reza and Fioretto, Ferdinando. The Data Minimization Principle in Machine Learning

Ovalle, Anaelia and Pavasovic, Krunoslav Lehman and Martin, Louis and Zettlemoyer, Luke and Smith, Eric Michael and Chang, Kai-Wei and Williams, Adina and Sagun, Levent. The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models

Pereira, Jansen Silva de Brito and Valdrighi, Giovani and Raimundo, Marcos Medeiros. M²FGB: A Min-Max Gradient Boosting Framework for Subgroup Fairness

Tseng, Emily and Young, Meg and Le Quéré, Marianne Aubin and Rinehart, Aimee and Suresh, Harini. "Ownership, Not Just Happy Talk": Co-Designing a Participatory Large Language Model for Journalism

de Jonge, Tim and Schraffenberger, Hanna and Geels, Jorrit and Hoepman, Jaap-Henk and Simon, Marie-Sophie and Borgesius, Frederik Zuiderveen. If Deceptive Patterns are the problem, are Fair Patterns the solution?

Claucich, Estanislao and Hooker, Sara and Milone, Diego H. and Ferrante, Enzo and Echeveste, Rodrigo. Fairness of Deep Ensembles: On the interplay between per-group task difficulty and under-representation

Bucknall, Ben and Siddiqui, Saad and Thurnherr, Lara and McGurk, Conor and Harack, Ben and Reuel, Anka and Paskov, Patricia and Mahoney, Casey and Mindermann, Sören and Singer, Scott and Hiremath, Vinay and Segerie, Charbel-Raphaël and Delaney, Oscar and Abate, Alessandro and Barez, Fazl and Cohen, Michael K. and Torr, Philip and Huszár, Ferenc and Calinescu, Anisoara and Jones, Gabriel Davis and Trager, Robert. In Which Areas of Technical AI Safety Could Geopolitical Rivals Cooperate?

Lepp, Haley and Smith, Daniel Scott. "You Cannot Sound Like GPT": Signs of language discrimination and resistance in computer science publishing

Bendoukha, Adda-Akram and Arcolezi, Héber Hwang and Kaaniche, Nesrine and Boudguiga, Aymen and Sirdey, Renaud and Clet, Pierre-Emmanuel. FADE: Federated Aggregation with Discrimination Elimination

Kumar, Ashwin and He, Yuzi and Markosyan, Aram H and Chern, Bobbie and Arrieta-Ibarra, Imanol. Detecting Prefix Bias in LLM-based Reward Models

Yang, Joshua C. and Bachmann, Fynn. Bridging Voting and Deliberation with Algorithms: Field Insights from vTaiwan and Kultur Komitee

Choksi, Madiha Zahrah and Mandel, Ilan and Benthall, Sebastian. The Brief and Wondrous Life of Open Models

Lim, Hyunseung and Choi, Dasom and Hong, Hwajung. How Do Users Identify and Perceive Stereotypes? Understanding User Perspectives on Stereotypical Biases in Large Language Models

Gupta, Ishita and Joshi, Ishika and Dey, Adrita and Parikh, Tapan. “Since Lawyers are Males..”: Examining Implicit Gender Bias in Hindi Language Generation by LLMs

Goree, Samuel and Domingo, Jackson and Crandall, David. Human-Centered Evaluation of Aesthetic Quality Assessment Models Using a Smartphone Camera Application

Vethman, Steven and Smit, Quirine T. S. and van Liebergen, Nina M. and Veenman, Cor J. Fairness Beyond the Algorithmic Frame: Actionable Recommendations for an Intersectional Approach

Yaghini, Mohammad and Liu, Patty and Magnuson, Andrew and Dullerud, Natalie and Papernot, Nicolas. Trustworthy ML Regulation as a Principal-Agent Problem

Li, Yuxuan and Shirado, Hirokazu and Das, Sauvik. Actions Speak Louder than Words: Agent Decisions Reveal Implicit Biases in Language Models

Wood, Alexandra and Edenberg, Elizabeth. Epistemic Injustice in Algorithmic Systems and the Limits of Civil Rights Law

Gur-Arieh, Shira and Lee, Christina. Consistently Arbitrary or Arbitrarily Consistent: Navigating the Tensions Between Homogenization and Multiplicity in Algorithmic Decision-Making

Behzad, Tina and Casacuberta, Sílvia and Diana, Emily Ruth and Tolbert, Alexander Williams. Reconciling Predictive Multiplicity in Practice

de Jong, Sander and Jacobsen, Rune Møberg and Wester, Joel and Wijenayake, Senuri and Goncalves, Jorge and van Berkel, Niels. Impact of Agent-Generated Rationales on Online Social Conformity

Rizvi, Naba and Smith, Taggert and Vidyala, Tanvi and Bolds, Mya and Strickland, Harper and Begel, Andrew and Williams, Rua and Munyaka, Imani. “I Hadn't Thought About That”: Creators of Human-like AI Weigh in on Ethics & Neurodivergence

Rawal, Kaivalya and Fu, Zihao and Delaney, Eoin and Russell, Chris. Evaluating Model Explanations without Ground Truth

Nonarchival Papers

A. Elmachtoub, H. Kim. Fair Fares for Vehicle Sharing Systems

T. Lin, L. Huang. AI, Normality, and Oppressive Things

S. Lazar, L. Manuali. Can LLMs Advance Democratic Values?

G. Smith. Mindsets & Management: AI & Gender (In)Equitable Access to Finance

C. Chen, H. Liu, J. Yang, B. Mervak, B. Kalaycioglu, G. Lee, E. Cakmakli, M. Bonatti, S. Pudu, O. Kahraman, G. Pamuk, A. Oto, A. Chatterjee, C. Tan. Can Domain Experts Rely on AI Appropriately? A Case Study on AI-Assisted Prostate Cancer MRI Diagnosis

Y. Kong. What Is the Point of Equality in Machine Learning Fairness? Beyond Equality of Opportunity

S. Bell, S. Wang. The Pragmatics of Spurious Correlations in Machine Learning: Interpreting How and Why They Matter

K. Sheng, F. Tuyttens, M. von Keyserlingk. The erasure of intensive livestock farming in text-to-image generative AI

Y. Wang, V. Chen. End-to-End Fairness Optimization with Fair Decision Focused Learning

R. Dotan, T. Gershoni, I. Hadar, G. Luria. Decoupling in AI Ethics: Learning how to Walk the Talk

M. Cheng, A. Lee, K. Rapuano, K. Niederhoffer, A. Liebscher, J. Hancock. From tools to thieves: Measuring and understanding public perceptions of AI through crowdsourced metaphors