Gerchick, Marissa Kumar and Encarnaci\'{o}n, Ro and Tanigawa-Lau, Cole and Armstrong, Lena and Guti\'{e}rrez, Ana and Metaxa, Dana\'{e}. Auditing the Audits: Lessons for Algorithmic Accountability from Local Law 144's Bias Audits
Kapania, Shivani and Ballard, Stephanie and Kessler, Alex and Vaughan, Jennifer Wortman. Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline
De Toni, Giovanni and Teso, Stefano and Lepri, Bruno and Passerini, Andrea. Time Can Invalidate Algorithmic Recourse
Michel, Shira and Kaur, Sufi and Gillespie, Sarah Elizabeth and Gleason, Jeffrey and Wilson, Christo and Ghosh, Avijit. "It's not a representation of me": Examining Accent Bias and Digital Exclusion in Synthetic AI Voice Services
Magomere, Jabez and Ishida, Shu and Afonja, Tejumade and Salama, Aya and Kochin, Daniel and Foutse, Yuehgoh and Hamzaoui, Imane and Sefala, Raesetje and Alaagib, Aisha and Dalal, Samantha and Marchegiani, Beatrice and Semenova, Elizaveta and Crais, Lauren and Hall, Siobhan Mackenzie. The World Wide recipe: A community-centred framework for fine-grained data collection and regional bias operationalisation
Konya, Andrew and Thorburn, Luke and Almasri, Wasim and Leshem, Oded Adomi and Procaccia, Ariel and Schirch, Lisa and Bakker, Michiel. Using collective dialogues and AI to find common ground between Israeli and Palestinian peacebuilders
Barrett, Teanna and Okolo, Chinasa T. and Biira, B. and Sherif, Eman and Zhang, Amy and Battle, Leilani. African Data Ethics: A Discursive Framework for Black Decolonial AI
Scharowski, Nicolas and Perrig, Sebastian A. C. and von Felten, Nick and Aeschbach, Lena Fanya and Opwis, Klaus and Wintersberger, Philipp and Br\"{u}hlmann, Florian. To Trust or Distrust AI: A Questionnaire Validation Study
Alpherts, Tim and Ghebreab, Sennay and van Noord, Nanne. Artifacts of Idiosyncracy in Global Street View Data
Jim\'{e}nez-S\'{a}nchez, Amelia and Avlona, Natalia-Rozalia and de Boer, Sarah and Campello, V\'{\i}ctor M. and Feragen, Aasa and Ferrante, Enzo and Ganz, Melanie and Gichoya, Judy Wawira and Gonzalez, Camila and Groefsema, Steff and Hering, Alessa and Hulman, Adam and Joskowicz, Leo and Juodelyte, Dovile and Kandemir, Melih and Kooi, Thijs and L\'{e}rida, Jorge del Pozo and Li, Livie Yumeng and Pacheco, Andre and R\"{a}dsch, Tim and Reyes, Mauricio and Sourget, Th\'{e}o and van Ginneken, Bram and Wen, David and Weng, Nina and Xu, Jack Junchi and Zaja\c{c}, Hubert Dariusz and Zuluaga, Maria A. and Cheplygina, Veronika. In the Picture: Medical Imaging Datasets, Artifacts, and their Living Review
Moore, Jared and Grabb, Declan and Agnew, William and Klyman, Kevin and Chancellor, Stevie and Ong, Desmond C. and Haber, Nick. Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers
Hausladen, Carina I and Knott, Manuel and Camerer, Colin F and Perona, Pietro. Social Perception of Faces in a Vision-Language Model
Basoah, Jeffrey and Chechelnitsky, Daniel and Long, Tao and Reinecke, Katharina and Zerva, Chrysoula and Zhou, Kaitlyn and D\'{\i}az, Mark and Sap, Maarten. Not Like Us, Hunty: Measuring Perceptions and Behavioral Effects of Minoritized Anthropomorphic Cues in LLMs
Peppin, Aidan and Reuel, Anka and Casper, Stephen and Jones, Elliot and Strait, Andrew and Anwar, Usman and Agrawal, Anurag and Kapoor, Sayash and Koyejo, Sanmi and Pellat, Marie and Bommasani, Rishi and Frosst, Nick and Hooker, Sara. The Reality of AI and Biorisk
Sahebdel, Mahsa and Zeynali, Ali and Bashir, Noman and Shenoy, Prashant and Hajiesmaili, Mohammad. LEAD: Towards Learning-Based Equity-Aware Decarbonization in Ridesharing Platforms
Palla, Konstantina and Garc\'{\i}a, Jos\'{e} Luis Redondo and Hauff, Claudia and Fabbri, Francesco and Damianou, Andreas and Lindstr\"{o}m, Henrik and Taber, Dan and Lalmas, Mounia. Policy-as-Prompt: Rethinking Content Moderation in the Age of Large Language Models
Owens, Kentrell and Eiger, Yael and Radka, Basia and Kohno, Tadayoshi and Roesner, Franziska. Understanding experiences with compulsory immigration surveillance in the U.S.
Verma, Rajeev and Fischer, Volker and Nalisnick, Eric. On Calibration in Multi-Distribution Learning
Taylor, Jordan and Mire, Joel and Spektor, Franchesca and DeVrio, Alicia and Sap, Maarten and Zhu, Haiyi and Fox, Sarah E. Un-Straightening Generative AI: How Queer Artists Surface and Challenge Model Normativity
Chandra, Mohit and Naik, Suchismita and Ford, Denae and Okoli, Ebele and De Choudhury, Munmun and Ershadi, Mahsa and Ramos, Gonzalo and Hernandez, Javier and Bhattacharjee, Ananya and Warreth, Shahed and Suh, Jina. From Lived Experience to Insight: Unpacking the Psychological Risks of Using AI Conversational Agents
Liu, Yifan and Yao, Ruichen and Liu, Yaokun and Zong, Ruohan and Li, Zelin and Zhang, Yang and Wang, Dong. Component-Based Fairness in Face Attribute Classification with Bayesian Network-informed Meta Learning
Zollo, Thomas and Rajaneesh, Nikita and Zemel, Richard and Gillis, Talia and Black, Emily. Towards Effective Discrimination Testing for Generative AI
Christian, Brian and Kirk, Hannah Rose and Thompson, Jessica A.F. and Summerfield, Christopher and Dumbalska, Tsvetomira. Reward Model Interpretability via Optimal and Pessimal Tokens
Siddiqui, Saad and Ibrahim, Lujain and Loke, Kristy and Clare, Stephen and Lu, Marianne and Richardson, Aris and McGlynn, Conor and Ding, Jeffrey. Promising Topics for US–China Dialogues on AI Risks and Governance
Almeda, Shm and Netzorg, Robin and Li, Isabel and Tam, Ethan and Ma, Skyla and Wei, Bob Tianqi. Labor, Power, and Belonging: The Work of Voice in the Age of AI Reproduction
Smith, Becca and Tolbert, Alexander Williams. The Problem of Generics in LLM Training
Z\"{u}ger, Theresa and Mahlow, Philipp and Pothmann, Daniel and Mosene, Katharina and Burmeister, Fabian and Kettemann, Matthias and Schulz, Wolfgang. Crediting Humans: A Systematic Assessment of Influencing Factors for Human-in-the-Loop Figurations in Consumer Credit Lending Decisions
Paris, Tamara and Moon, AJung and Guo, Jin L.C. Opening the Scope of Openness in AI
Kanepajs, Arturs and Basu, Aditi and Ghose, Sankalpa and Li, Constance and Mehta, Akshat and Mehta, Ronak and Tucker-Davis, Samuel David and Fischer, Bob and Anthis, Jacy Reese. What do Large Language Models Say About Animals? Investigating Risks of Animal Harm in Generated Text
Adam, Hammaad and Bermea, Rene S. and Yang, Ming Ying and Celi, Leo Anthony and Ghassemi, Marzyeh. Lost in Transplantation: Characterizing Racial Gaps in Physician Organ Offer Acceptance
Karan, Aditya and Vincent, Nicholas and Karahalios, Karrie and Sundaram, Hari. Algorithmic Collective Action with Two Collectives
Dejl, Adam and Zhang, Dekai and Ayoobi, Hamed and Williams, Matthew and Toni, Francesca. Hidden Conflicts in Neural Networks and their Implications for Explainability
Shailya, Krithi and Rajpal, Shreya and Krishnan, Gokul S and Ravindran, Balaraman. LExT: Towards Evaluating Trustworthiness of Natural Language Explanations
Hwang, Hyunseung and Bell, Andrew and Fonseca, Joao and Pliatsika, Venetia and Stoyanovich, Julia and Whang, Steven Euijong. SHAP-based Explanations are Sensitive to Feature Representation
Jones, Cameron Robert and Rathi, Ishika and Taylor, Sydney and Bergen, Benjamin K. People cannot distinguish GPT-4 from a human in a Turing test
Perello, Nicholas and Cousins, Cyrus and Zick, Yair and Grabowicz, Przemyslaw. Discrimination Induced by Algorithmic Recourse Objectives
Bampis, Evripidis and Ehrhard, Thomas and Escoffier, Bruno and Mathieu, Claire and Pascual, Fanny and Saulpic, David. Estimating the Electoral Consequences of Legislative Redistricting in France
Paraschou, Eva and Michali, Maria and Yfantidou, Sofia and Karamanidis, Stelios and Kalogeros, Stefanos Rafail and Vakali, Athena. Ties of Trust: a bowtie model to uncover trustor-trustee relationships in LLMs
Khadar, Malik and Timma, Amoligha and Ludden, Luka and Kaur, Harmanpreet. Explain it like I'm me: User Characteristics in XAI Decision-Making
Mayeda, Cass and Singh, Arinjay and Mahale, Arnav and Sakr, Laila Shereen and ElSherief, Mai. Applying Data Feminism Principles to Assess Bias in English and Arabic NLP Research
Pendse, Sachin R. and Gergle, Darren and Kornfield, Rachel and Meyerhoff, Jonah and Mohr, David and Suh, Jina and Wescott, Annie and Williams, Casey and Schleider, Jessica. When Testing AI Tests Us: Safeguarding Mental Health on the Digital Frontlines
de Troya, \'{I}\~{n}igo and Kernahan, Jacqueline and Doorn, Neelke and Dignum, Virginia and Dobbe, Roel. Misabstraction in Sociotechnical Systems
Gosciak, Jennah and Balagopalan, Aparna and Ouyang, Derek and Koenecke, Allison and Ghassemi, Marzyeh and Ho, Daniel E. Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments
Roy, Arjun and Rizou, Stavroula and Papadopoulos, Symeon and Ntoutsi, Eirini. Achieving Socio-Economic Parity through the Lens of EU AI Act
Kocyigit, Emre and Rossi, Arianna and Sergeeva, Anastasia and Negri Ribalta, Claudia and Farjami, Ali and Lenzini, Gabriele. DeceptiLens: an Approach supporting Transparency in Deceptive Pattern Detection based on a Multimodal Large Language Model
Jain, Shomik and Wang, Margaret and Creel, Kathleen and Wilson, Ashia. Allocation Multiplicity: Evaluating the Promises of the Rashomon Set
Waller, Madeleine and Rodrigues, Odinaldo and Cocarascu, Oana. Beyond Consistency: Nuanced Metrics for Individual Fairness
Solyst, Jaemarie and Peng, Cindy and Deng, Wesley Hanwen and Pratapa, Praneetha and Ogan, Amy and Hammer, Jessica and Hong, Jason and Eslami, Motahhare. Investigating Youth AI Auditing
Cartier van Dissel, Mauritz N. and Martin-Gutierrez, Samuel and Esp\'{\i}n-Noboa, Lisette and Jaramillo, Ana Mar\'{\i}a and Karimi, Fariba. hyperFA*IR: A hypergeometric approach to fair rankings with finite candidate pool
Olojo, Seyi and Zakrzewski, Janina and Smart, Andrew and van Liemt, Erin and Miceli, Milagros and Ebinama, Amber and Amugongo, Lameck Mbangula. Lost in Machine Translation: The Sociocultural Implications of Language Technologies in Nigeria
Hudig, Anna Ida and Mandalari, Anna Maria and Norval, Chris and Haddadi, Hamed and Binns, Reuben and Singh, Jatinder. Rights Out of Sight: Data Practices and Transparency Gaps in Smart Consumer IoT Ecosystems
Asiedu, Mercy and Haykel, Iskandar and Dieng, Awa and Kauer, Kerrie and Ahmed, Tousif and Ofori, Florence and Chan, Charisma and Pfohl, Stephen and Rostamzadeh, Negar and Heller, Katherine. Nteasee: Understanding Needs in AI for Health in Africa - A Mixed-Methods Study of Expert and General Population Perspectives
Ma, Sibo and Salinas, Alejandro and Nyarko, Julian and Henderson, Peter. Breaking Down Bias: On The Limits of Generalizable Pruning Strategies
Reuel, Anka and Connolly, Patrick and Meimandi, Kiana Jafari and Tewari, Shekhar and Wiatrak, Jakub and Venkatesh, Dikshita and Kochenderfer, Mykel. Responsible AI in the Global Context: Maturity Model and Survey
Caetano, Carlos and Santos, Gabriel O. dos and Petrucci, Caio and Barros, Artur and Laranjeira, Camila and Ribeiro, Leo Sampaio Ferraz and de Mendon\c{c}a, J\'{u}lia Fernandes and dos Santos, Jefersson A. and Avila, Sandra. Neglected Risks: The Disturbing Reality of Children's Images in Datasets and the Urgent Call for Accountability
Gehrmann, Sebastian and Huang, Claire and Teng, Xian and Yurovski, Sergei and Bhorkar, Arjun and Thomas, Naveen and Doucette, John and Rosenberg, David and Dredze, Mark and Rabinowitz, David. Understanding and Mitigating Risks of Generative AI in Financial Services
Kelly, Markelle and Tahaei, Mohammad and Smyth, Padhraic and Wilcox, Lauren. Understanding Gender Bias in AI-Generated Product Descriptions
Cook, Cynthia M. and Cuffney, Laurie and Howard, John J. and Sirotin, Yevgeniy B. and Tipton, Jerry L. and Vemury, Arun R. Performance Differentials in Deployed Biometric Systems Caused by Open-Source Face Detectors
Eslami, Motahhare and Fox, Sarah and Shen, Hong and Fan, Bobbie and Lin, Yu-Ru and Farzan, Rosta and Schwanke, Beth. From Margins to the Table: Charting the Potential for Public Participatory Governance of Algorithmic Decision Making
Perdomo, Juan Carlos and Britton, Tolani and Hardt, Moritz and Abebe, Rediet. Difficult Lessons on Social Prediction from Wisconsin Public Schools
Treiman, Lauren S. and Ho, Chien-Ju and Kool, Wouter. Do People Think Fast or Slow When Training AI?
Olson, Lauren and Fischer, Ricarda Anna-Lena and Kunneman, Florian and Guzm\'{a}n, Emitz\'{a}. Who Speaks for Ethics? How Demographics Shape Ethical Advocacy in Software Development
Franchi, Matt and Sandhaus, Hauke and Choksi, Madiha Zahrah and Engelmann, Severin and Ju, Wendy and Nissenbaum, Helen. Privacy of Groups in Dense Street Imagery
Lee, Jun Rui and Emami, Sadegh and Hollins, Michael David and Wong, Timothy C. H. and Villalobos S\'{a}nchez, Carlos Ignacio and Toni, Francesca and Zhang, Dekai and Dejl, Adam. XAI-Units: Benchmarking Explainability Methods with Unit Tests
Tylstedt, Beatrice and Sadowski, Helga and Eliasson, Caroline and Eklund, Lina. Entangled Pregnancies: Expectants' Experiences of Using Pregnancy Apps
Wei, Johnny Tian-Zheng and Wang, Maggie and Godbole, Ameya and Choi, Jonathan and Jia, Robin. Interrogating LLM design under copyright law
Buyl, Maarten and Khalaf, Hadi and Mayrink Verdun, Claudio and Monteiro Paes, Lucas and Vieira Machado, Caio Cesar and du Pin Calmon, Flavio. AI Alignment at Your Discretion
Ganesh, Prakhar and Tran, Cuong and Shokri, Reza and Fioretto, Ferdinando. The Data Minimization Principle in Machine Learning
Ovalle, Anaelia and Pavasovic, Krunoslav Lehman and Martin, Louis and Zettlemoyer, Luke and Smith, Eric Michael and Chang, Kai-Wei and Williams, Adina and Sagun, Levent. The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models
Pereira, Jansen Silva de Brito and Valdrighi, Giovani and Raimundo, Marcos Medeiros. M²FGB: A Min-Max Gradient Boosting Framework for Subgroup Fairness
Tseng, Emily and Young, Meg and Le Qu\'{e}r\'{e}, Marianne Aubin and Rinehart, Aimee and Suresh, Harini. "Ownership, Not Just Happy Talk": Co-Designing a Participatory Large Language Model for Journalism
de Jonge, Tim and Schraffenberger, Hanna and Geels, Jorrit and Hoepman, Jaap-Henk and Simon, Marie-Sophie and Borgesius, Frederik Zuiderveen. If Deceptive Patterns are the problem, are Fair Patterns the solution?
Bucknall, Ben and Siddiqui, Saad and Thurnherr, Lara and McGurk, Conor and Harack, Ben and Reuel, Anka and Paskov, Patricia and Mahoney, Casey and Mindermann, S\"{o}ren and Singer, Scott and Hiremath, Vinay and Segerie, Charbel-Rapha\"{e}l and Delaney, Oscar and Abate, Alessandro and Barez, Fazl and Cohen, Michael K. and Torr, Philip and Husz\'{a}r, Ferenc and Calinescu, Anisoara and Jones, Gabriel Davis and Trager, Robert. In Which Areas of Technical AI Safety Could Geopolitical Rivals Cooperate?
Bendoukha, Adda-Akram and Arcolezi, H\'{e}ber Hwang and Kaaniche, Nesrine and Boudguiga, Aymen and Sirdey, Renaud and Clet, Pierre-Emmanuel. FADE: Federated Aggregation with Discrimination Elimination
Kumar, Ashwin and He, Yuzi and Markosyan, Aram H and Chern, Bobbie and Arrieta-Ibarra, Imanol. Detecting Prefix Bias in LLM-based Reward Models
Choksi, Madiha Zahrah and Mandel, Ilan and Benthall, Sebastian. The Brief and Wondrous Life of Open Models
Yaghini, Mohammad and Liu, Patty and Magnuson, Andrew and Dullerud, Natalie and Papernot, Nicolas. Trustworthy ML Regulation as a Principal-Agent Problem
Behzad, Tina and Casacuberta, S\'{\i}lvia and Diana, Emily Ruth and Tolbert, Alexander Williams. Reconciling Predictive Multiplicity in Practice
de Jong, Sander and Jacobsen, Rune M\o{}berg and Wester, Joel and Wijenayake, Senuri and Goncalves, Jorge and van Berkel, Niels. Impact of Agent-Generated Rationales on Online Social Conformity
Rizvi, Naba and Smith, Taggert and Vidyala, Tanvi and Bolds, Mya and Strickland, Harper and Begel, Andrew and Williams, Rua and Munyaka, Imani. "I Hadn't Thought About That": Creators of Human-like AI Weigh in on Ethics \& Neurodivergence
Rawal, Kaivalya and Fu, Zihao and Delaney, Eoin and Russell, Chris. Evaluating Model Explanations without Ground Truth