List of accepted papers

Archival Papers

A. Ingber, N. Andalibi. Emotion AI in Job Interviews: Injustice, Emotional Labor, Identity, and Privacy

A. Sabuncuoglu, C. Burr, C. Maple. Justified Evidence Collection for Argument-based AI Fairness Assurance

M. Gerchick, R. Encarnación, C. Tanigawa-Lau, L. Armstrong, A. Gutiérrez, D. Metaxa. Auditing the Audits: Lessons for Algorithmic Accountability from Local Law 144's Bias Audits

S. Kapania, S. Ballard, A. Kessler, J. Vaughan. Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline

G. Varoquaux, S. Luccioni, M. Whittaker. Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI

A. Luccioni, E. Strubell, K. Crawford. From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate

G. De Toni, S. Teso, B. Lepri, A. Passerini. Time Can Invalidate Algorithmic Recourse

B. Hutchinson, C. Louro, G. Collard, N. Cooper. Designing Speech Technologies for Australian Aboriginal English: Opportunities, Risks and Participation

P. Jääskeläinen, C. Sanchez, A. Holzapfel. Anticipatory Technology Ethics Reflection By Eliciting Creative AI Imaginaries Through Fictional Research Abstracts

J. Ma, I. Takigawa, A. Yamamoto. C2Explainer: Customizable Mask-based Counterfactual Explanation for Graph Neural Networks

F. Jourdan, Y. Chevalier, C. Favre. FairTranslate: an English-French Dataset for Gender Bias Evaluation in Machine Translation by Overcoming Gender Binarity

A. Ingber, N. Andalibi. Regulating Emotion AI in the United States: Insights from Empirical Inquiry

H. Sargeant, M. Magnusson. Formalising Anti-Discrimination Law in Automated Decision Systems

H. Sargeant, H. Waldetoft, M. Magnusson. Classifying Hate: Legal and Ethical Evaluations of ML-Assisted Hate Crime Classification and Estimation in Sweden

A. Hopkins, I. Struckman, K. Klyman, S. Silbey. Recourse, Repair, Reparation, & Prevention: A Stakeholder Analysis of AI Supply Chains

S. Michel, S. Kaur, S. Gillespie, J. Gleason, C. Wilson, A. Ghosh. “It’s not a representation of me”: Examining Accent Bias and Digital Exclusion in Synthetic AI Voice Services

J. Magomere, S. Ishida, T. Afonja, A. Salama, D. Kochin, Y. Foutse, I. Hamzaoui, R. Sefala, A. Alaagib, S. Dalal, B. Marchegiani, E. Semenova, L. Crais, S. Hall. The World Wide recipe: A community-centred framework for fine-grained data collection and regional bias operationalisation

H. Nolte, M. Rateike, M. Finck. Robustness and Cybersecurity in the EU Artificial Intelligence Act

J. Qu, J. Arguello, Y. Wang. Understanding the Effects of Explaining Predictive but Unintuitive Features in Human-XAI Interaction

A. Konya, L. Thorburn, W. Almasri, O. Leshem, A. Procaccia, L. Schirch, M. Bakker. Using collective dialogues and AI to find common ground between Israeli and Palestinian peacebuilders

T. Barrett, C. Okolo, B. Biira, E. Sherif, A. Zhang, L. Battle. African Data Ethics: A Discursive Framework for Black Decolonial AI

L. Ruster, J. Davis. The Gaps that Never Were: Reconsidering Responsible AI’s Principle-Practice Problem

N. Scharowski, S. Perrig, N. von Felten, L. Aeschbach, K. Opwis, P. Wintersberger, F. Brühlmann. To Trust or Distrust AI: A Questionnaire Validation Study

A. Urman, M. Makhortykh, A. Hannak. WEIRD Audits? Research Trends, Linguistic and Geographical Disparities in the Algorithm Audits of Online Platforms - A Systematic Literature Review

A. Valdivia. Data Ecofeminism

R. Yew, B. Marino, S. Venkatasubramanian. Red Teaming AI Policy: A Taxonomy of Avoision and the EU AI Act

T. Alpherts, S. Ghebreab, N. van Noord. Artifacts of Idiosyncracy in Global Street View Data

A. Shaw, A. Ye, R. Krishna, A. Zhang. Agonistic Image Generation: Unsettling the Hegemony of Intention

D. Kinney. Aggregating Concepts of Fairness and Accuracy in Prediction Algorithms

J. Tridgell, J. Singh. ‘Stewardship’ as a Fair, Accountable and Transparent Model for Free and Open-Source Software Governance? Looking Beyond the EU’s Cyber Resilience Act

A. Wang. Identities are not Interchangeable: The Problem of Overgeneralization in Fair Machine Learning

A. Ingber, O. Haimson, N. Andalibi. Distinguishing Emotion AI: Factors Shaping Perceptions Including Input Data, Emotion Data Recipients, and Identity

A. Jiménez-Sánchez, N. Avlona, S. de Boer, V. Campello, A. Feragen, E. Ferrante, M. Ganz, J. Gichoya, C. Gonzalez, S. Groefsema, A. Hering, A. Hulman, L. Joskowicz, D. Juodelyte, M. Kandemir, T. Kooi, J. Lérida, L. Li, A. Pacheco, T. Rädsch, M. Reyes, T. Sourget, B. van Ginneken, D. Wen, N. Weng, J. Xu, H. Zajaç, M. Zuluaga, V. Cheplygina. In the Picture: Medical Imaging Datasets, Artifacts, and their Living Review

M. Sekwenz, B. Wagner, S. Parkin. “It is unfair, and it would be unwise to expect the user to know the law!” – Evaluating reporting mechanisms under the Digital Services Act

J. Lynn, R. Kim, S. Gao, D. Schneider, S. Pandya, M. Lee. Regulating Algorithmic Management: A Multi-Stakeholder Study of Challenges in Aligning Software and the Law for Workplace Scheduling

A. Neumann, E. Kirsten, M. Zafar, J. Singh. Position is Power: System Prompts as a Mechanism of Bias in Large Language Models (LLMs)

J. Moore, D. Grabb, W. Agnew, K. Klyman, S. Chancellor, D. Ong, N. Haber. Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers

J. Smith, M. Madaio, R. Burke, C. Fiesler. Pragmatic Fairness: Evaluating ML Fairness Within the Constraints of Industry

C. Hausladen, M. Knott, C. Camerer, P. Perona. Social Perception of Faces in a Vision-Language Model

K. Cachel, E. Rundensteiner. Group Fair Rated Preference Aggregation: Ties Are (Mostly) All You Need

Z. Wojtowicz, S. Jain, N. Vincent. Push and Pull: A Framework for Measuring Attentional Agency on Digital Platforms

P. Sachdeva, T. van Nuenen. Normative Evaluation of Large Language Models with Everyday Moral Dilemmas

J. Basoah, D. Chechelnitsky, T. Long, K. Reinecke, C. Zerva, K. Zhou, M. Díaz, M. Sap. Not Like Us, Hunty: Measuring Perceptions and Behavioral Effects of Minoritized Anthropomorphic Cues in LLMs

A. Wang, X. Bai, S. Barocas, S. Blodgett. Measuring Machine Learning Harms from Stereotypes Requires Understanding Who Is Harmed by Which Errors in What Ways

A. Peppin, A. Reuel, S. Casper, E. Jones, A. Strait, U. Anwar, A. Agrawal, S. Kapoor, S. Koyejo, M. Pellat, R. Bommasani, N. Frosst, S. Hooker. The Reality of AI and Biorisk

N. Johnson, E. Silva, H. Leon, M. Eslami, B. Schwanke, R. Dotan, H. Heidari. Legacy Procurement Practices Shape How U.S. Cities Govern AI: Understanding Government Employees' Practices, Challenges, and Needs

R. Chen, R. Wang, N. Sadeh, F. Fang. Missing Pieces: How Do Designs that Expose Uncertainty Longitudinally Impact Trust in AI Decision Aids? An In Situ Study of Gig Drivers

M. Sahebdel, A. Zeynali, N. Bashir, P. Shenoy, M. Hajiesmaili. LEAD: Towards Learning-Based Equity-Aware Decarbonization in Ridesharing Platforms

C. Panigutti, D. Yela, L. Porcaro, A. Bertrand, J. Garrido. How to investigate algorithmic-driven risks in online platforms and search engines? A narrative review through the lens of the EU Digital Services Act

K. Palla, J. García, C. Hauff, F. Fabbri, A. Damianou, H. Lindström, D. Taber, M. Lalmas. Policy-as-Prompt: Rethinking Content Moderation in the Age of Large Language Models

A. De, S. Kanthawala, J. Maddox. Who Gets Heard? Calling Out the "Hard-to-Reach" Myth for Non-WEIRD Populations' Recruitment and Involvement in Research

R. Ebrahimi, K. Vaccaro, P. Naghizadeh. The Double-Edged Sword of Behavioral Responses in Strategic Classification: Theory and User Studies

K. Owens, Y. Eiger, B. Radka, T. Kohno, F. Roesner. Understanding experiences with compulsory immigration surveillance in the U.S.

P. Croskey, F. Offert, J. Jacobs, K. Thaler. Liberatory Collections and Ethical AI: Reimagining AI Development from Black Community Archives and Datasets

S. Alanoca, S. Gur-Arieh, T. Zick, K. Klyman. Comparing Apples to Oranges: A Taxonomy for Navigating the Global Landscape of AI Regulation

R. Verma, V. Fischer, E. Nalisnick. On Calibration in Multi-Distribution Learning

J. Taylor, J. Mire, F. Spektor, A. DeVrio, M. Sap, H. Zhu, S. Fox. Un-Straightening Generative AI: How Queer Artists Surface and Challenge Model Normativity

J. Becerra, F. Jing. Rethinking AI Safety: Provocations from the History of Community-based Practices of Road and Driver Safety

M. Chandra, S. Naik, D. Ford, E. Okoli, M. De Choudhury, M. Ershadi, G. Ramos, J. Hernandez, A. Bhattacharjee, S. Warreth, J. Suh. From Lived Experience to Insight: Unpacking the Psychological Risks of Using AI Conversational Agents

Z. Lin, G. Trogrlic, C. de Vreese, N. Helberger. Dangerous Criminals and Beautiful Prostitutes? Investigating Harmful Representations in Dutch Language Models

Y. Liu, R. Yao, Y. Liu, R. Zong, Z. Li, Y. Zhang, D. Wang. Component-Based Fairness in Face Attribute Classification with Bayesian Network-informed Meta Learning

T. Zollo, N. Rajaneesh, R. Zemel, T. Gillis, E. Black. Towards Effective Discrimination Testing for Generative AI

B. Christian, H. Kirk, J. Thompson, C. Summerfield, T. Dumbalska. Reward Model Interpretability Via Optimal and Pessimal Tokens

E. Kallina, T. Bohné, J. Singh. Stakeholder Participation for Responsible AI Development: Disconnects Between Guidance and Current Practice

J. Schoeffer, M. De-Arteaga, J. Elmer. Perils of Label Indeterminacy: A Case Study on Prediction of Neurological Recovery After Cardiac Arrest

L. Ajmani, N. Abdelkadir, S. Chancellor. Secondary Stakeholders in AI: Fighting for, Brokering, and Navigating Agency

D. Khosrowi, M. Ahlers, P. van Basshuysen. When Predictions are More Than Predictions: Self-Fulfilling Performativity and the Road Towards Morally Responsible Predictive Systems

S. Chowdhary, A. Ingber, N. Andalibi. Technical Solutions to Emotion AI's Privacy Harms: A Systematic Literature Review

A. Blake, M. Carter, E. Velloso. Are Measures of Children's Parasocial Relationships Ready for Conversational AI?

A. Karan, N. Balepur, H. Sundaram. Oh the Prices You’ll See: Designing a Fair Exchange System to Mitigate Personalized Pricing

J. Jo, H. Zhang, J. Cai, N. Goyal. AI Trust Reshaping Administrative Burdens: Understanding Trust-Burden Dynamics in LLM-Assisted Benefits Systems

T. Hollanek, Y. Pi, C. Fiorini, V. Vignali, D. Peters, E. Drage. A Toolkit for Compliance, a Toolkit for Justice: Drawing on Cross-sectoral Expertise to Develop a Pro-justice EU AI Act Toolkit

H. Schroeder, A. Pareek, S. Barocas. Disclosure without Engagement: An Empirical Review of Positionality Statements at FAccT

S. Siddiqui, K. Loke, S. Clare, M. Lu, A. Richardson, L. Ibrahim, C. McGlynn, J. Ding. Promising Topics for US–China Dialogues on AI Risks and Governance

O. Haimson, S. Mayworm, A. Ingber, N. Andalibi. AI Attitudes Among Marginalized Populations in the U.S.: Nonbinary, Transgender, and Disabled Individuals Report More Negative AI Attitudes

S. Almeda, R. Netzorg, I. Li, E. Tam, S. Ma, B. Wei. Labor, Power, and Belonging: The Work of Voice in the Age of AI Reproduction

R. Rehak. AI Narrative Breakdown. A Critical Assessment of Power and Promise

D. Zipperling, L. Deck, J. Lanzl, N. Kühl. It's only fair when I think it's fair: How Gender Bias Alignment Undermines Distributive Fairness in Human-AI Collaboration

A. Tolbert, B. Smith. The Problem of Generics in LLM Training

T. Züger, P. Mahlow, D. Pothmann, K. Mosene, F. Burmeister, M. Kettemann, W. Schulz. Crediting Humans: A Systematic Assessment of Influencing Factors for Human-in-the-Loop Figurations in Consumer Credit Lending Decisions

T. Paris, A. Moon, J. Guo. Opening the Scope of Openness in AI

S. Görücü, Y. Ren, G. Samuel, G. Panagiotidou. "As an individual, I suppose you can't really do much": Environmental Sustainability Perceptions of Machine Learning Practitioners

P. Narayanan Venkit, P. Laban, Y. Zhou, Y. Mao, C. Wu. Search Engines in the AI Era: A Qualitative Understanding to the False Promise of Factual and Verifiable Source-Cited Responses in LLM-based Search

X. Yuan, J. Wang, Q. Wan, S. Hu. A Day in Their Shoes: Using LLM-Based Perspective-Taking Interactive Fiction to Reduce Stigma Toward Dirty Work

A. Ferrario. A Trustworthiness-based Metaphysics of Artificial Intelligence Systems

J. Sandoval, F. Jing. Historical Methods for AI Evaluations, Assessments, and Audits

A. Kanepajs, A. Basu, S. Ghose, C. Li, A. Mehta, R. Mehta, S. Tucker-Davis, B. Fischer, J. Anthis. What do Large Language Models Say About Animals? Investigating Risks of Animal Harm in Generated Text

F. Lameiro, L. Dunagan, D. Card, E. Gilbert, O. Haimson. TIDEs: A Transgender and Nonbinary Community-Labeled Dataset and Model for Transphobia Identification in Digital Environments

J. Barnett, K. Kieslich, N. Helberger, N. Diakopoulos. Envisioning Stakeholder-Action Pairs to Mitigate Negative Impacts of AI: A Participatory Approach to Inform Policy Making

H. Adam, R. Bermea, M. Yang, L. Celi, M. Ghassemi. Lost in Transplantation: Characterizing Racial Gaps in Physician Organ Offer Acceptance

A. Karan, N. Vincent, K. Karahalios, H. Sundaram. Algorithmic Collective Action with Two Collectives

R. Binns, J. Stein, S. Datta, M. Van Kleek, N. Shadbolt. Not Even Nice Work If You Can Get It: A Longitudinal Study of Uber's Algorithmic Pay and Pricing

A. Dejl, D. Zhang, H. Ayoobi, M. Williams, F. Toni. Hidden Conflicts in Neural Networks and their Implications for Explainability

T. Coelho, L. Ribeiro, J. Macedo, J. Santos, S. Avila. Minimizing Risk Through Minimizing Model-Data Interaction: A Protocol For Relying on Proxy Tasks When Designing Child Sexual Abuse Imagery Detection Models

L. Abdenebaoui, S. Aljuneidi, F. Horstmannshoff, J. Meyer, S. Boll. Value-Driven Design for Public Administration: Insights from a Generative Chatbot in a Housing Application Case Study

K. Shailya, S. Rajpal, G. Krishnan, B. Ravindran. LExT: Towards Evaluating Trustworthiness of Natural Language Explanations

H. Hwang, A. Bell, J. Fonseca, V. Pliatsika, J. Stoyanovich, S. Whang. SHAP-based Explanations are Sensitive to Feature Representation

W. Hawkins, B. Mittelstadt, C. Russell. Deepfakes on Demand

C. Jones, I. Rathi, S. Taylor, B. Bergen. People cannot distinguish GPT-4 from a human in a Turing test

D. Bateyko, K. Levy. One Bad NOFO? AI Governance in Federal Grantmaking

N. Perello, C. Cousins, Y. Zick, P. Grabowicz. Discrimination Induced by Algorithmic Recourse Objectives

E. Bampis, T. Ehrhard, B. Escoffier, C. Mathieu, F. Pascual, D. Saulpic. Estimating the Electoral Consequences of Legislative Redistricting in France

F. Hafner, A. Valdivia, L. Rocher. Gender Trouble in Language Models: An Empirical Audit Guided by Gender Performativity Theory

S. Holm. Algorithmic Fairness, Decision Thresholds, and the Separateness of Persons

L. Poirier, J. Huang, C. MacGibbon. What Remains Opaque in Transparency Initiatives: Visualizing Phantom Reductions through Devious Data Analysis

E. Paraschou, M. Michali, S. Yfantidou, S. Karamanidis, S. Kalogeros, A. Vakali. Ties of Trust: a bowtie model to uncover trustor-trustee relationships in LLMs

M. Khadar, A. Timma, L. Ludden, H. Kaur. Explain it like I'm me: User characteristics in XAI decision-making

M. Bagriacik, F. Otero. Fairness-Guided Pruning of Decision Trees

C. Djeffal. Reflexive Prompt Engineering

C. Mayeda, A. Singh, A. Mahale, L. Sakr, M. ElSherief. Applying Data Feminism Principles to Assess Bias in English and Arabic NLP Research

S. Pendse, D. Gergle, R. Kornfield, J. Meyerhoff, D. Mohr, J. Suh, A. Wescott, C. Williams, J. Schleider. When Testing AI Tests Us: Safeguarding Mental Health on the Digital Frontlines

A. Gourabathina, W. Gerych, E. Pan, M. Ghassemi. The Medium is the Message: How Non-Clinical Information Shapes Clinical Decisions in LLMs

Í. Martínez de Rituerto de Troya, J. Kernahan, N. Doorn, V. Dignum, R. Dobbe. Misabstraction in sociotechnical systems

J. Gosciak, A. Balagopalan, D. Ouyang, A. Koenecke, M. Ghassemi, D. Ho. Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments

T. Qiu, A. Nichani, R. Tadayontahmasebi, H. Jeong. Gone With the Bits: Revealing Racial Bias in Low-Rate Neural Compression for Facial Images

A. Roy, S. Rizou, S. Papadopoulos, E. Ntoutsi. Achieving Socio-Economic Parity through the Lens of EU AI Act

M. Doh, C. Canali, N. Oliver. What TikTok Claims, What Bold Glamour Does: A Filter’s Paradox

L. Benson, A. Okutan, R. Vasan. Uncovering the Linguistic Roots of Bias: Insights and Mitigation in Large Language Models

M. Phutane, A. Seelam, A. Vashistha. “Cold, Calculated, and Condescending”: How AI Identifies and Explains Ableism Compared to Disabled People

E. Kocyigit, A. Rossi, A. Sergeeva, C. Ribalta, A. Farjami, G. Lenzini. DeceptiLens: an Approach supporting Transparency in Deceptive Pattern Detection based on a Multimodal Large Language Model

F. Bappy, E. Cheon, T. Islam. Centralized Trust in Decentralized Systems: Unveiling Hidden Contradictions in Blockchain and Cryptocurrency

M. Sim, B. Radka, T. Kohno, F. Roesner, K. Hugenberg. Characterizing the Default Persona During Design: Mental Representations of Technology Users are Gendered

M. Rebrean, G. Malgieri. Vulnerability in the EU AI Act: building an interpretation

T. Manzini, P. Perali, J. Tripathi, R. Murphy. Now you see it, Now you don’t: Damage Label Agreement in Drone & Satellite Post-Disaster Imagery

A. Safir, K. McInerney, A. Blackwell, R. Debnath. Distributive Epistemic Injustice in AI Ethics: A Co-productionist Account of Global North-South Politics in Knowledge Production

E. Harvey, R. Kizilcec, A. Koenecke. A Framework for Auditing Chatbots for Dialect-Based Quality of Service Harms

S. Jain, M. Wang, K. Creel, A. Wilson. Allocation Multiplicity: Evaluating the Promises of the Rashomon Set

A. Wernick, A. Medlar, S. Söderholm, D. Głowacka. Evaluating the Contextual Integrity of False Positives in Algorithmic Travel Surveillance

M. Walsh, C. Rey, C. Ge, T. Nowak, S. Tomkins. Algorithms in the Stacks: Investigating automated, for-profit diversity audits in public libraries

M. Waller, O. Rodrigues, O. Cocarascu. Beyond Consistency: Nuanced Metrics for Individual Fairness

J. Solyst, C. Peng, W. Deng, P. Pratapa, A. Ogan, J. Hammer, J. Hong, M. Eslami. Investigating Youth AI Auditing

M. van Dissel, S. Martin-Gutierrez, L. Espín-Noboa, A. Jaramillo, F. Karimi. hyperFA*IR: A hypergeometric approach to fair rankings with finite candidate pool

G. Malgieri. Scalable Fairness

S. Fazelpour, W. Fleisher. The Value of Disagreement in AI Design, Evaluation, and Alignment

A. Khan, S. Casper, D. Hadfield-Menell. Randomness, Not Representation: The Unreliability of Evaluating Cultural Alignment in LLMs

M. Magaña, K. Shilton. Frameworks, Methods and Shared Tasks: Connecting Participatory AI to Trustworthy AI Through a Systematic Review of Global Projects

C. Strasser Ceballos, C. Kern. Location matching on shaky grounds: Re-evaluating algorithms for refugee allocation

F. Carichon, R. Rampa, G. Farnadi. Crossing Boundaries: Leveraging Semantic Divergences to Explore Cultural Novelty in Cooking Recipes

H. Monke, B. Sae-Chew, B. Fresz, M. Huber. From Confusion to Clarity: ProtoScore - A Framework for Evaluating Prototype-Based XAI

L. Corbucci, X. Heilmann, M. Cerrato. Benefits of the Federation? Analyzing the Impact of Fair Federated Learning at the Client Level

S. Olojo, J. Zakrzewski, A. Smart, E. van Liemt, M. Miceli, A. Ebinama, L. Amugongo. Lost in Machine Translation: The Sociocultural Implications of Language Technologies in Nigeria

A. Hudig, A. Mandalari, C. Norval, H. Haddadi, R. Binns, J. Singh. Rights Out of Sight: Data Practices and Transparency Gaps in Smart Consumer IoT Ecosystems

M. Lee, M. Tok. Towards Uncertainty Aware Task Delegation and Human-AI Collaborative Decision-Making

A. Fitzsimons, E. Gerber, D. Long. AI constructs gendered struggle narratives: Implications for self-concept and systems design

R. Verhagen, M. Neerincx, M. Tielman. Agent Allocation of Moral Decisions in Human-Agent Teams: Raise Human Involvement and Explain Potential Consequences

M. De Rosa Palmini, E. Cetinic. Exploring Language Patterns of Prompts in Text-to-Image Generation and Their Impact on Visual Diversity

J. Ye, L. Luceri, E. Ferrara. Auditing Political Exposure Bias: Algorithmic Amplification on Twitter/X During the 2024 U.S. Presidential Election

M. Asiedu, I. Haykel, A. Dieng, K. Kauer, T. Ahmed, F. Ofori, C. Chan, S. Pfohl, N. Rostamzadeh, K. Heller. Nteasee: Understanding Needs in AI for Health in Africa - A Mixed-Methods Study of Expert and General Population Perspectives

A. Salinas, S. Ma, J. Nyarko, P. Henderson. Breaking Down Bias: On The Limits of Generalizable Pruning Strategies

S. Alam, M. Abdulhai, N. Salehi. Blind Faith? User Preference and Expert Assessment of AI-Generated Religious Content

J. Liang, H. Wang. Is It Fair Enough? Supporting Equitable Group Work Assignment with Work Division Dashboard

J. Powar, H. Janssen, R. Cloete, J. Singh. From policy to practice in data governance and responsible data stewardship: system design for data intermediaries

A. Reuel, P. Connolly, K. Meimandi, S. Tewari, J. Wiatrak, D. Venkatesh, M. Kochenderfer. Responsible AI in the Global Context: Maturity Model and Survey

C. Caetano, G. Santos, C. Petrucci, A. Barros, C. Laranjeira, L. Ribeiro, J. de Mendonça, J. Santos, S. Avila. Neglected Risks: The Disturbing Reality of Children’s Images in Datasets and the Urgent Call for Accountability

J. Skirzynski, D. Danks, B. Ustun. Discrimination Exposed? On the Reliability of Explanations for Discrimination Detection

S. Gehrmann, C. Huang, X. Teng, S. Yurovski, A. Bhorkar, N. Thomas, J. Doucette, D. Rosenberg, M. Dredze, D. Rabinowitz. Understanding and Mitigating Risks of Generative AI in Financial Services

M. Kelly, M. Tahaei, P. Smyth, L. Wilcox. Understanding Gender Bias in AI-Generated Product Descriptions

B. Imana, Z. Shen, J. Heidemann, A. Korolova. External Evaluation of Discrimination Mitigation Efforts in Meta's Ad Delivery

C. Cook, L. Cuffney, J. Howard, Y. Sirotin, J. Tipton, A. Vemury. Performance Differentials in Deployed Biometric Systems Caused by Open-Source Face Detectors

B. Imana, A. Korolova, J. Heidemann. Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes

M. Eslami, S. Fox, H. Shen, B. Fan, Y. Lin, R. Farzan, B. Schwanke. From Margins to the Table: Charting the Potential for Public Participatory Governance of Algorithmic Decision Making

M. Kneer, J. Viehoff. The Hard Problem of AI Alignment: Value Forks in Moral Judgment

J. Perdomo, T. Britton, M. Hardt, R. Abebe. Difficult Lessons on Social Prediction from Wisconsin Public Schools

Y. Yin, S. Karumbaiah, S. Acquaye. Responsible AI in Education: Understanding Teachers’ Priorities and Contextual Challenges

L. Treiman, C. Ho, W. Kool. Do People Think Fast or Slow When Training AI?

J. Castleman, A. Korolova. Adultification Bias in LLMs and Text-to-Image Models

J. Li, Q. Li, R. Gong, L. Wang, S. Wu. Our Collective Voices: The Social and Technical Values of a Grassroots Chinese Stuttered Speech Dataset

U. Aneja, A. Gupta, A. Vashistha. Beyond Semantics: Examining Gender Bias in LLMs Deployed within Low-resource Contexts in India

R. Görge, H. Allende-Cid, M. Mock. Detecting Linguistic Indicators for Stereotype Assessment with Large Language Models

H. Lyu, J. Luo, J. Kang, A. Koenecke. Characterizing Bias: Benchmarking Large Language Models in Simplified versus Traditional Chinese

L. Olson, R. Fischer, F. Kunneman, E. Guzman. Who Speaks for Ethics? How Demographics Shape Ethical Advocacy in Software Development

M. Mehta, F. Giunchiglia. Understanding Gen Alpha's Digital Language: Evaluation of LLM Safety Systems for Content Moderation

M. Franchi, H. Sandhaus, M. Choksi, S. Engelmann, W. Ju, H. Nissenbaum. Privacy of Groups in Dense Street Imagery

J. Lee, S. Emami, M. Hollins, T. Wong, C. Sánchez, F. Toni, D. Zhang, A. Dejl. XAI-Units: Benchmarking Explainability Methods with Unit Tests

A. Freel, S. Pias, S. Šabanović, A. Kapadia. How Misclassification Severity and Timing Influence User Trust in AI Image Classification: User Perceptions of High- and Low-Stakes Contexts

B. Tylstedt, H. Sadowski, C. Eliasson, L. Eklund. Entangled Pregnancies: Expectants' Experiences of Using Pregnancy Apps

T. Tsurumi, E. Beretta. Social Bias in Vision Transformers: A Comparative Study Across Architectures and Learning Paradigms

D. Liu, J. Baek, T. Eliassi-Rad. When Collaborative Filtering is not Collaborative: Unfairness of PCA for Recommendations

B. Kuehnert, R. Kim, J. Forlizzi, H. Heidari. The “Who”, “What”, and “How” of Responsible AI Governance: A Systematic Review and Meta-Analysis of (Actor, Stage)-Specific Tools

M. Dhaini, E. Erdogan, N. Feldhus, G. Kasneci. Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods

J. Wei, M. Wang, A. Godbole, J. Choi, R. Jia. Interrogating LLM design under copyright law

M. Buyl, H. Khalaf, C. Verdun, L. Paes, C. Machado, F. Calmon. AI Alignment at Your Discretion

P. Ganesh, C. Tran, R. Shokri, F. Fioretto. The Data Minimization Principle in Machine Learning

A. Ovalle, K. Pavasovic, L. Martin, L. Zettlemoyer, E. Smith, K. Chang, A. Williams, L. Sagun. The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models

J. Pereira, G. Valdrighi, M. Raimundo. M²FGB: A Min-Max Gradient Boosting Framework for Subgroup Fairness

E. Tseng, M. Young, M. Le Quéré, A. Rinehart, H. Suresh. "Ownership, Not Just Happy Talk": Co-Designing a Participatory Large Language Model for Journalism

T. de Jonge, H. Schraffenberger, J. Geels, J. Hoepman, M. Simon, F. Borgesius. If Deceptive Patterns are the problem, are Fair Patterns the solution?

E. Claucich, S. Hooker, D. Milone, E. Ferrante, R. Echeveste. Fairness of Deep Ensembles: On the interplay between per-group task difficulty and under-representation

B. Bucknall, S. Siddiqui, L. Thurnherr, C. McGurk, B. Harack, A. Reuel, P. Paskov, C. Mahoney, S. Mindermann, S. Singer, V. Hiremath, C. Segerie, O. Delaney, A. Abate, F. Barez, M. Cohen, P. Torr, F. Huszár, A. Calinescu, G. Jones, R. Trager. In Which Areas of Technical AI Safety Could Geopolitical Rivals Cooperate?

H. Lepp, D. Smith. “You Cannot Sound Like GPT”: Signs of language discrimination and resistance in computer science publishing

A. Bendoukha, H. Arcolezi, N. Kaaniche, A. Boudguiga, R. Sirdey, P. Clet. FADE: Federated Aggregation with Discrimination Elimination

A. Kumar, Y. He, A. Markosyan, B. Chern, I. Arrieta-Ibarra. Detecting Prefix Bias in LLM-based Reward Models

J. Yang, F. Bachmann. Bridging Voting and Deliberation with Algorithms: Field Insights from vTaiwan and Kultur Komitee

M. Choksi, I. Mandel, S. Benthall. The Brief and Wondrous Life of Open Models

H. Lim, D. Choi, H. Hong. How Do Users Identify and Perceive Stereotypes? Understanding User Perspectives on Stereotypical Biases in Large Language Models

I. Gupta, I. Joshi, A. Dey, T. Parikh. “Since Lawyers are Males..”: Examining Implicit Gender Bias in Hindi Language Generation by LLMs

S. Goree, J. Domingo, D. Crandall. Human-Centered Evaluation of Aesthetic Quality Assessment Models Using a Smartphone Camera Application

S. Vethman, Q. Smit, N. van Liebergen, C. Veenman. Fairness beyond the Algorithmic Frame: Actionable Recommendations for an Intersectional Approach

M. Yaghini, P. Liu, A. Magnuson, N. Dullerud, N. Papernot. Trustworthy ML Regulation as a Principal-Agent Problem

Y. Li, H. Shirado, S. Das. Actions Speak Louder than Words: Agent Decisions Reveal Implicit Biases in Language Models

A. Wood, E. Edenberg. Epistemic Injustice in Algorithmic Systems and the Limits of Civil Rights Law

S. Gur-Arieh, C. Lee. Consistently Arbitrary or Arbitrarily Consistent: Navigating the Tensions Between Homogenization and Multiplicity in Algorithmic Decision-Making

T. Behzad, S. Casacuberta, E. Diana, A. Tolbert. Reconciling Predictive Multiplicity in Practice

S. de Jong, R. Jacobsen, J. Wester, S. Wijenayake, J. Goncalves, N. van Berkel. Impact of Agent-Generated Rationales on Online Social Conformity

N. Rizvi, T. Smith, T. Vidyala, M. Bolds, A. Begel, I. Munyaka, H. Strickland, R. Williams. “I Hadn't Thought About That”: Creators of Human-like AI Weigh in on Ethics & Neurodivergence

K. Rawal, Z. Fu, E. Delaney, C. Russell. Evaluating Model Explanations without Ground Truth

Non-Archival Papers

A. Elmachtoub, H. Kim. Fair Fares for Vehicle Sharing Systems

T. Lin, L. Huang. AI, Normality, and Oppressive Things

S. Lazar, L. Manuali. Can LLMs Advance Democratic Values?

G. Smith. Mindsets & Management: AI & Gender (In)Equitable Access to Finance

C. Chen, H. Liu, J. Yang, B. Mervak, B. Kalaycioglu, G. Lee, E. Cakmakli, M. Bonatti, S. Pudu, O. Kahraman, G. Pamuk, A. Oto, A. Chatterjee, C. Tan. Can Domain Experts Rely on AI Appropriately? A Case Study on AI-Assisted Prostate Cancer MRI Diagnosis

Y. Kong. What Is the Point of Equality in Machine Learning Fairness? Beyond Equality of Opportunity

S. Bell, S. Wang. The Pragmatics of Spurious Correlations in Machine Learning: Interpreting How and Why They Matter

K. Sheng, F. Tuyttens, M. von Keyserlingk. The erasure of intensive livestock farming in text-to-image generative AI

Y. Wang, V. Chen. End-to-End Fairness Optimization with Fair Decision Focused Learning

R. Dotan, T. Gershoni, I. Hadar, G. Luria. Decoupling in AI Ethics: Learning how to Walk the Talk

M. Cheng, A. Lee, K. Rapuano, K. Niederhoffer, A. Liebscher, J. Hancock. From tools to thieves: Measuring and understanding public perceptions of AI through crowdsourced metaphors