9:00am - 10:30am Doctoral Colloquium I and CRAFT I
Doctoral Colloquium (Jackson A): only for PhD students selected through the call for DC
DEI Fellows Session (ONLINE ONLY): FUBU: Community and Care in Conferences and Research
CRAFT 1 (Jackson C): #DragVsAI: Exploring Facial Recognition Technologies through Embodied Algorithmic Tinkering
CRAFT 2 (Jackson B): Responsibly Working with Crowdsourced Data
CRAFT 3 (ONLINE): Towards an India-first Responsible AI research agenda
10:30am - 11:00am Break
11:00am - 12:00pm Doctoral Colloquium II and CRAFT II
Doctoral Colloquium (cont.) (Jackson A)
CRAFT 4 (Jackson B, ends at 12:30pm): Language Models and Society: Bridging Research and Policy
CRAFT 5 (Jackson C, ends at 12:15pm): The Road to the Table: Laying the Foundation for a Black Feminist Impact Assessment
CRAFT 6: cancelled
CRAFT 7 (Online, ends at 12:30pm): Mapping the Risk Surface of Text-to-Image AI: A Participatory, Cross-Disciplinary Workshop
12:00pm - 1:30pm Lunch
1:30pm - 1:45pm Welcome by General Chairs
W196BC
1:45pm - 2:45pm Keynote I: Dr. Alondra Nelson, Institute for Advanced Study
W196BC
2:45pm - 3:15pm Break
3:15pm - 4:15pm Proceedings I
Paper session 1: Decisions, Trust, & Reliance (W196BC)
· Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study
· On the Impact of Explanations on Understanding of Algorithmic Decision-Making
· Towards a Science of Human-AI Decision Making: An Overview of Design Space in Empirical Human-Subject Studies
· Machine Explanations and Human Understanding
· Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application
Paper session 2: Revisiting Trade-Offs and Impossibility Theorems (W196A)
· The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice
· Maximal Fairness
· Enhancing AI fairness through impact assessment in the European Union: a legal and computer science perspective
· Help or Hinder? Evaluating the Impact of Fairness Metrics and Algorithms in Visualizations for Consensus Ranking
· Simplicity Bias Leads to Amplified Performance Disparities
Paper session 3: Auditing (W195)
· A Sociotechnical Audit: Assessing Police Use of Facial Recognition
· Navigating the Audit Landscape: A framework for developing transparent and auditable XR
· Which Stereotypes Are Moderated and Under-Moderated in Search Engine Autocompletion?
· AI Regulation Is (not) All You Need
· Examining risks of racial biases in NLP tools for child protective services
Paper session 4: Self-reflection (W194B)
· Interrogating the T in FAccT
· Representation in AI Evaluations
· A Systematic Review of Ethics Disclosures in Predictive Mental Health Research
· The ethical ambiguity of AI data enrichment: Measuring gaps in research ethics norms and practices.
· WEIRD FAccTs: How Western, Educated, Industrialized, Rich, and Democratic is FAccT?
4:15pm - 4:30pm Break
4:30pm - 6:00pm Tutorials I and CRAFT III
Tutorial 1 (W196BC): Practice Tutorial: Finding and Using Undocumented APIs for Algorithm Audits
Tutorial 2 (W196A): Practice Tutorial: Using the NIST AI Risk Management Framework
Tutorial 3 (W195): Practice Tutorial: Responsible AI Toolbox
CRAFT 8 (W194B): From Research Insight to Policy Impact - How You Can Engage in Current AI Policy Debates
CRAFT 9 (Jackson B): User Engagement in Algorithm Testing and Auditing: Exploring Opportunities and Tensions between Practitioners and End Users
CRAFT 10 (Jackson C): Automation of Elder Care

8:30am - 8:45am Welcome Notes
W196BC
8:45am - 9:45am Keynote II: Ziad Obermeyer - Dissecting health algorithms
W196BC
9:45am - 10:15am Break
10:15am - 11:45am Tutorials II and CRAFT IV
Tutorial 4 (W196BC): Implications Tutorial: Generative AI meets Responsible AI: Practical Challenges and Opportunities
Tutorial 5a (W196A, 10:15am): Translation Tutorial: Contextualizing AI with Cross-Cultural Perspectives
Tutorial 5b (W196A, 11:00am): Implications Tutorial: AI Governance and Policy in the US - Spotlight on the Blueprint for an AI Bill of Rights
CRAFT 11 (W195): The “Digital Trade” Sneak Attack on AI Oversight & Regulation Underway Today
CRAFT 12 (W194B): Digital Apartheid and the Horn of Africa
CRAFT 13 (Jackson B): AI ethics landscape in Chinese tech industry: regulatory policy, research, and practice
11:45am - 1:15pm Lunch
1:15pm - 2:15pm Keynote III: Alex Hanna (DAIR Institute), Brook Hansen, Krista Pawloski, Krystal Kauffman (Turkopticon)
W196BC
2:25pm - 3:25pm Proceedings II
Paper session 5: Explainability (limitations) (W196BC)
· Diagnosing AI Explanation Methods with Folk Concepts of Behavior
· How to Explain and Justify Almost Any Decision: Potential Pitfalls for Accountability in AI Decision-Making
· Questioning the ability of feature-based explanations to empower non-experts in robo-advised financial decision-making
· Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven Decision Support using Evaluative AI
· Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US and UK
Paper session 6: Racial bias (W196A)
· An Empirical Analysis of Racial Categories in the Algorithmic Fairness Literature
· Datafication Genealogies beyond Algorithmic Fairness: Making Up of Racialised Subjects
· How Redundant are Redundant Encodings? Blindness in the Wild and Racial Disparity when Race is Unobserved
· Envisioning Equitable Speech Technologies for Black Older Adults
· Skin Deep: Investigating Subjectivity in Skin Tone Annotations for Computer Vision Benchmark Datasets
Paper session 7: Values and representation (W195)
· Broadening AI Ethics Narratives: An Indic Art View
· AI’s Regimes of Representation: A Community-centered Study of Text-to-Image Models in South Asia
· Invigorating Ubuntu Ethics in AI for healthcare: Enabling equitable care
· Honor Ethics: The Challenge of Globalizing Value Alignment in AI
· In her Shoes: Gendered Labelling in Crowdsourced Safety Perceptions Data from India
Paper session 8: Healthcare (W194B)
· Care and Coordination in Algorithmic Systems: An Economies of Worth Approach
· Organizational governance of emerging technologies: AI adoption in healthcare
· Two Reasons for Subjecting Medical AI Systems to Lower Standards than Humans
· What’s fair is... fair? Presenting JustEFAB, an ethical framework for operationalizing medical ethics and social justice in the integration of clinical machine learning
· Improving Fairness in AI Models on Electronic Health Records: the case for Federated Learning Methods
3:25pm - 3:45pm Break
3:45pm - 4:45pm Proceedings III
Paper session 9: Market-based approaches (W196BC)
· FairAssign: Stochastically Fair Driver Assignment in Gig Delivery Platforms
· Implementing Fairness Constraints in Markets Using Taxes and Subsidies
· Using Supervised Learning to Estimate Inequality in the Size and Persistence of Income Shocks
· Your Browsing History May Cost You: A Framework for Discovering Differential Pricing in Non-Transparent Markets
· A Theory of Auditability for Allocation and Social Choice Mechanisms
Paper session 10: Normative philosophy & AI (W196A)
· Optimization's Neglected Normative Commitments
· Making Intelligence: Ethical Values in IQ and ML Benchmarks
· Harms from Increasingly Agentic Algorithmic Systems
· On the Site of Predictive Justice
Paper session 11: Networks & Platforms (W195)
· Reducing Access Disparities in Networks using Edge Augmentation
· Group fairness without demographics using social networks
· Delayed and Indirect Impacts of Link Recommendations
· Does AI-Assisted Fact-Checking Disproportionately Benefit Majority Groups Online?
· Addressing contingency in algorithmic (mis)information classification: Toward a responsible machine learning agenda
Paper session 12: Participatory AI (W194B)
· Towards Labor Transparency in Situated Computational Impact Research
· The Dimensions of Data Labor: A Road Map for Researchers, Activists, and Policymakers to Empower Data Producers
· Going public: the role of public participation approaches in commercial AI labs
· Queer In AI: A Case Study in Community-Led Participatory AI
· AI in the Public Eye: Building Public AI Literacy through Critical AI Art
4:45pm - 5:00pm Break
5:00pm - 6:30pm Tutorials III and CRAFT V
Tutorial 6 (W196BC, ends at 6:00pm): Practice Tutorial: Integrating Notions of Fairness and Demographic Variance Into Large Personalized Advertising Systems
Tutorial 7 (W196A, ends at 6:30pm): Translation Tutorial: Using Technical Skills to Fight Actual Public Benefits Cuts and Austerity Policies, with the Benefits Tech Advocacy Hub
CRAFT 14 (W195): "AI Art" and Its Impact on Artists
CRAFT 15 (W194B): Assessing the Impacts of Generative AI Systems Across Modalities and Society
CRAFT 16 (Jackson B): Bringing People In to High-Stakes System Design: Concrete Challenges and Emerging Lessons
7:00pm - 8:30pm Recursion: a play by Andrew Gelman and Jessica Hullman
W196BC

8:30am - 8:45am Welcome Notes
W196BC
8:45am - 9:45am Keynote IV: Moritz Hardt, Director at Max Planck Institute for Intelligent Systems - The power of predictions
W196BC
9:45am - 10:15am Break
10:15am - 11:45am Tutorials IV and CRAFT VI
Tutorial 8a (W196BC, 10:15am): Translation Tutorial: Steering Language Models with Reinforcement Learning from Human Feedback and Constitutional AI
Tutorial 8b (W196BC, 11:00am): Translation Tutorial: Theories of Propaganda and New Technology: Applications and Interventions
Tutorial 9 (W196A): Translation Tutorial: Practices and Limitations of Participatory Methods: Views From Computer Science, Political Science and Design
CRAFT 17 (W195): Lived Experiences of «Reform Tech» in Criminal Legal and Immigration Systems
CRAFT 18 (W194B): Theories of Change in Responsible AI
CRAFT 19 (ONLINE ONLY): Investigating and Challenging Algorithmic Welfare Systems
11:45am - 1:15pm Lunch
1:15pm - 2:15pm Keynote V: Payal Arora, Professor at Erasmus University Rotterdam - Feminist design principles in the future of global work
W196BC
2:25pm - 3:25pm Proceedings IV
Paper session 13: Targeted advertising (W196BC)
· Reconciling Governmental Use of Online Targeting With Democracy
· Discrimination through Image Selection by Job Advertisers on Facebook
· Representation, Self-Determination, and Refusal: Queer People's Experiences with Targeted Advertising
· The slow violence of surveillance capitalism: How online behavioral advertising harms people
· Diverse Perspectives Can Mitigate Political Bias in Crowdsourced Content Moderation
Paper session 14: Practical methods (W196A)
· Preventing Discriminatory Decision-making in Evolving Data Stream
· Group-Fair Classification with Strategic Agents
· Add-Remove-or-Relabel: Practitioner-Friendly Bias Mitigation via Influential Fairness
· Fairer Together: Mitigating Disparate Exposure in Kemeny Rank Aggregation
· Domain Adaptive Decision Trees: Implications for Accuracy and Fairness
Paper session 15: Speech & emotion recognition (W195)
· You Sound Depressed: A Case Study on Sonde Health's Diagnostic Use of Voice Analysis AI
· (Anti)-Intentional Harms: The Conceptual Pitfalls of Emotion AI in Education
· On the Praxes and Politics of AI Speech Emotion Recognition
· Ethical considerations in the early detection of Alzheimer's disease using speech and AI
· Augmented Datasheets for Speech Datasets and Ethical Decision-Making
Paper session 16: Measurement in the wild (W194B)
· Algorithms as Social-Ecological-Technological Systems: an Environmental Justice lens on Algorithmic Audits
· Detecting disparities in police deployments using dashcam data
· Measuring and mitigating voting access disparities: a study of race and polling locations in Florida and North Carolina
· Cross-Institutional Transfer Learning for Educational Models: Implications for Model Performance, Fairness, and Equity
· Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks
3:25pm - 3:45pm Break
3:45pm - 4:45pm Proceedings V
Paper session 17: Risks and policy of generative models (W196BC)
· Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias
· "I wouldn't say offensive but...": Disability-Centered Perspectives on Large Language Models
· The Gradient of Generative AI Release: Methods and Considerations
· Disparities in Text-to-Image Model Concept Possession Across Languages
· On the Independence of Association Bias and Empirical Fairness in Language Models
Paper session 18: Privacy (W196A)
· More Data Types More Problems: A Temporal Analysis of Complexity, Stability, and Sensitivity in Privacy Policies
· Can Querying for Bias Leak Protected Attributes? Achieving Privacy With Smooth Sensitivity
· Data Collaboratives with the Use of Decentralised Learning - an Alternative Approach to Data Governance
· In the Name of Fairness: Assessing the Bias in Clinical Record De-identification
· The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in U.S. Government
Paper session 19: Sociology & machine learning (W195)
· 'Affordances' for Machine Learning
· Fairness in machine learning from the perspective of sociology of statistics: How machine learning is becoming scientific by turning its back on metrological realism
· Capturing Humans' Mental Models of AI: An Item Response Theory Approach
· Algorithmic Decisions, Desire for Control, and the Preference for Human Review over Algorithmic Review
· Emotions and Dynamic Assemblages: A Study of Automated Social Security Using Qualitative Longitudinal Research
Paper session 20: Recommendation systems (W194B)
· UNFair: Search Engine Manipulation, Undetectable by Amortized Inequity
· Auditing Cross-Cultural Consistency of Human-Annotated Labels for Recommendation Systems
· The Many Faces of Fairness: Exploring the Institutional Logics of Multistakeholder Microlending Recommendation
· Representation Online Matters: Practical End-to-end Diversification in Search and Recommender Systems
· "I Think You Might Like This": Exploring Effects of Confidence Signal Patterns on Trust in and Reliance on Conversational Recommender Systems
4:45pm - 5:00pm Break
5:00pm - 6:00pm Proceedings VI
Paper session 21: Algorithms in the wild (W196BC)
· Disentangling and Operationalizing AI Fairness at LinkedIn
· Algorithmic Transparency and Accountability through Crowdsourcing: A Study of the NYC School Admission Lottery
· The Devil is in the Details: Interrogating Values Embedded in the Allegheny Family Screening Tool
· Bias as Boundary Object: Unpacking the Politics of an Austerity Algorithm using Bias Frameworks
· The Progression of Disparities within the Criminal Justice System: Differential Enforcement and Risk Assessment Instruments
Paper session 22: AI Regulation & the EU AI Act (W196A)
· Stronger Together: on the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML
· To Be High-Risk, or Not To Be—Semantic Specifications and Implications of the AI Act's High-Risk AI Applications and Harmonised Standards
· The role of explainable AI in the context of the AI Act
· Regulating ChatGPT and other Large Generative AI Models
· Understanding accountability in algorithmic supply chains
Paper session 23: Fairness and diversity I (W195)
· "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation
· Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale
· On The Impact of Machine Learning Randomness on Group Fairness
· Detection and Mitigation of Algorithmic Bias via Predictive Parity
Paper session 24: Model evaluation (W194B)
· Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making
· ACROCPoLis: A Descriptive Framework for Making Sense of Fairness
· "How Biased are Your Features?": Computing Fairness Influence Functions with Global Sensitivity Analysis
· Fairness Auditing of Urban Decisions using LP-based Data Combination
· Counterfactual Prediction Under Outcome Measurement Error

8:30am - 8:45am Welcome Notes
W196BC
8:45am - 9:45am Keynote VI: Conversation with US Equal Employment Opportunity Commission (EEOC) Chair, Charlotte Burrows: Civil Rights and AI in Employment
W196BC
9:45am - 10:15am Break
10:15am - 11:45am Tutorials V and CRAFT VII
Tutorial 10a (W196BC, 10:15am): Translation Tutorial: A Guiding Framework for Vetting Technology Vendors Operating in the Public Sector
Tutorial 10b (W196BC, 11:00am): Practice Tutorial: When the Rubber Meets the Road: Experience Implementing AI Governance in a Public Agency with the City of San José
Tutorial 11 (W196A): Translation Tutorial: A Hands-On Introduction to Large Language Models for Fairness, Accountability, and Transparency Researchers
CRAFT 20: cancelled
CRAFT 21 (W194B): Resisting the New Jim Code in the Old South: Lessons from the Field
CRAFT 22 (W195): Community-collaborative visions for computing research
CRAFT 23 (Online): Humanitarian AI for the Global South
11:45am - 1:15pm Lunch
1:15pm - 2:15pm Proceedings VII
Paper session 25: Algorithmic Discrimination - Legal/Technical perspectives (W196BC)
· Gender Animus Can Still Exist Under Favorable Disparate Impact: a Cautionary Tale from Online P2P Lending
· Multi-dimensional discrimination in Law and Machine Learning - A comparative overview
· Algorithmic Unfairness through the Lens of EU Non-Discrimination Law: Or Why the Law is Not a Decision Tree
· Legal Taxonomies of Machine Bias: Revisiting Direct Discrimination
· Striving for Affirmative Algorithmic Futures: How the Social Sciences can Promote more Equitable and Just Algorithmic System Design
Paper session 26: Transparency (W196A)
· Rethinking Transparency as a Communicative Constellation
· Algorithmic Transparency from the South: Examining the state of algorithmic transparency in Chile's public administration algorithms
· Co-Design Perspectives on Algorithm Transparency Reporting: Guidelines and Prototypes
· Welfarist Moral Grounding for Transparent Artificial Intelligence
· Co-Designing for Transparency: Lessons from Building a Document Organization Tool in the Criminal Justice Domain
Paper session 27: Metrics (W195)
· On (assessing) the fairness of risk score models
· Runtime Monitoring of Dynamic Fairness Properties
· The Misuse of AUC: What High Impact Risk Assessment Gets Wrong
· On the Richness of Calibration
· Bias on Demand: A Modelling Framework That Generates Synthetic Data With Bias
Paper session 28: Multiplicity (W194B)
· Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy
· Reconciling Individual Probability Forecasts
· The Dataset Multiplicity Problem: How Unreliable Data Impacts Predictions
· Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints
· Arbitrary Decisions are a Hidden Cost of Differentially Private Training
2:15pm - 2:30pm Break
2:30pm - 3:30pm Proceedings VIII and CRAFT VIII
Paper session 29: Workplace (W196BC)
· It's about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?
· Can Workers Meaningfully Consent to Workplace Wellbeing Technologies?
· Investigating Practices and Opportunities for Cross-functional Collaboration around AI Fairness in Industry Practice
· "We try to empower them" - Exploring Future Technologies to Support Migrant Jobseekers
· Walking the Walk of AI Ethics: Organizational Challenges and the Individualization of Risk among Ethics Entrepreneurs
Paper session 30: Power & resistance (W196A)
· Who Should Pay When Machines Cause Harm? Laypeople's Expectations of Legal Damages for Machine-Caused Harm
· Taking Algorithms to Courts: A Relational Approach to Algorithmic Accountability
· Ghosting the Machine: Judicial Resistance to a Recidivism Risk Assessment Instrument
· Trustworthy AI and the Logics of Intersectional Resistance
· Power and Resistance in the Twitter Bias Discourse
Paper session 31: Fairness and diversity II (W195)
· Achieving diversity in counterfactual explanations: a review and a discussion
· Saliency Cards: A Framework to Characterize and Compare Saliency Methods
· Robustness implies Fairness in Causal Algorithmic Recourse
· Personalized Pricing with Group Fairness Constraint
· 'We are adults and deserve control of our phones': Examining the risks and opportunities of a right to repair for mobile apps
CRAFT 24 (W194B, ends at 3:45pm): Legal Accountability in Europe — what litigation and regulatory action against big tech digital media platforms could mean for global tech
3:30pm - 4:00pm Break
4:00pm - 5:00pm Keynote VII: Trina Reynolds-Tyler and Tarak Shah - The Community Built a Model: Using Participatory AI to Analyze Chicago Police Data
W196BC
5:00pm - 6:00pm Town Hall
W196BC