Description

This track seeks theoretical, methodological, and experimental contributions for understanding and accounting for fairness, accountability, and transparency of algorithmic systems and for mitigating discrimination, inequality, and other harms resulting from the deployment of such systems in real-world contexts. 

Understanding includes detecting and measuring how and which forms of bias manifest in datasets and models; determining how algorithmic systems may introduce or exacerbate discrimination and unjust outcomes; measuring the efficacy of existing techniques for explaining and interpreting automated decisions; and evaluating perceptions of fairness and algorithmic bias.

Accounting includes governing the design, development, and deployment of algorithmic systems in a way that takes into consideration all stakeholders and their interactions with socio-technical systems.

Mitigating includes introducing techniques for data collection, analysis, and processing that acknowledge and account for the systemic bias and discrimination that may be present in datasets and models; formalizing fairness objectives based on notions from the social sciences, law, and the humanities; building socio-technical systems that incorporate these insights to minimize harm to historically disadvantaged communities and to empower them; and introducing methods for decision validation, correction, and participation in the co-design of algorithmic systems.

We welcome papers from all sub-disciplines of CS. Upon abstract registration, submissions must indicate at least one area of interest and at least one sub-discipline (see the lists below). If your paper concerns the study of a deployed system, or describes software or other developed materials, please consider submitting to Track 4.

Evaluation

Each paper will be reviewed by three CS program committee members (peer reviews) and, possibly, by one non-CS program committee member (cross-disciplinary review). Peer reviewers will be selected from the paper's sub-discipline(s) to ensure expert reviews. The evaluation criteria will include:

  • Relevance to the themes of the conference;
  • Quality of submission as measured by accuracy, clarity, comprehensiveness, and depth of exposition, including contextualizing the work in the relevant field(s);
  • Novelty of the contributions and problem domain; and
  • Potential for broader impact, including across other disciplines and real-world systems.

Papers are required to present novel, rigorous, and significant scientific contributions and to engage with work from relevant disciplines. When applicable, reviewers will also take the reproducibility of the results into consideration.

Areas of interest

1.1 Fairness, equity, and justice by design: methodologies and techniques to build computing systems that incorporate fairness, equity and justice desiderata informed by legal, social, and philosophical models. Examples include fairness-aware machine learning algorithms, human language generation that mitigates issues of bias, and model-agnostic methods for data sanitization or post-processing. 
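
As a purely illustrative sketch of the post-processing family mentioned above (the toy data, target rate, and function name are hypothetical, not a method prescribed by this call), an arbitrary classifier's scores can be thresholded per group so that both groups receive positive predictions at the same rate:

    import numpy as np

    def equalizing_thresholds(scores, group, target_rate):
        """Pick one score threshold per group so that each group's
        positive-prediction rate is approximately `target_rate`.
        Uses only the model's scores, so it is model-agnostic."""
        thresholds = {}
        for g in np.unique(group):
            s = scores[group == g]
            # The (1 - target_rate) quantile leaves ~target_rate of scores above it.
            thresholds[g] = np.quantile(s, 1 - target_rate)
        return thresholds

    scores = np.array([0.2, 0.4, 0.9, 0.7, 0.1, 0.3, 0.6, 0.8])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    thr = equalizing_thresholds(scores, group, target_rate=0.5)
    y_pred = (scores > np.array([thr[g] for g in group])).astype(int)
    print(y_pred)  # [0 0 1 1 0 0 1 1]: each group has a 0.5 positive rate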

1.2 Methods to audit, measure, and evaluate fairness: methods and techniques to check and measure the fairness (or unfairness) of existing computing systems and to assess associated risks. Examples include metrics and formal testing procedures to evaluate fairness, quantify the risk of fairness violations, or explicitly show tradeoffs. 
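
To give a concrete feel for such a metric (the function and toy data below are an illustrative sketch, not a required formalism), the demographic parity difference of a binary classifier can be computed as the gap in positive-prediction rates between two groups:

    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute gap in positive-prediction rates between groups 0 and 1.
        0 means both groups are predicted positive at the same rate;
        larger values indicate a larger disparity."""
        y_pred = np.asarray(y_pred)
        group = np.asarray(group)
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    # Toy example: group 1 is predicted positive more often than group 0.
    y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
    group = [0, 0, 0, 0, 1, 1, 1, 1]
    print(demographic_parity_difference(y_pred, group))  # 0.25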

1.3 Methods involving human factors and humans-in-the-loop: methods and techniques that center on the human-machine relationship. Examples include visual analytics for fairness exploration, cognitive evaluation of explanations, and systems that combine human and algorithmic elements.

1.4 Accountability, transparency, and interpretability by design: methodologies for governing the accountability and transparency of new computing systems, and for working these goals into existing systems using a by-design approach. Examples include machine learning algorithms to create interpretable white-box models, software engineering process models and software metrics, and the documentation of accountable systems.
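
As one hedged illustration of the by-design route (the toy data and depth cap are arbitrary choices, not recommendations), an inherently interpretable white-box model can be trained directly and its full decision logic printed for audit:

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy applicant data: [age, employed]; binary outcome.
    X = [[25, 0], [35, 1], [45, 0], [52, 1], [23, 1], [60, 0]]
    y = [0, 1, 0, 1, 0, 1]

    # A depth-2 decision tree is a white-box model: every decision
    # path can be read, documented, and challenged.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["age", "employed"]))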

1.5 Methods to assess explainability, transparency and interpretability: methods and techniques for assessing the accountability and transparency of existing computing systems. Examples include the explanation of black-box models, and counterfactual and what-if reasoning.
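
To make the what-if flavor concrete, here is a minimal sketch that probes an opaque model by changing a single input feature and observing whether the decision flips; the model, features, and threshold are purely hypothetical:

    def counterfactual_flip(predict, x, feature, new_value):
        """Return the original and counterfactual decisions for input x
        when `feature` is set to `new_value`, all else held fixed."""
        x_cf = dict(x)
        x_cf[feature] = new_value
        return predict(x), predict(x_cf)

    def predict(x):
        # Stand-in for an opaque model: approves when income >= 50.
        return int(x["income"] >= 50)

    before, after = counterfactual_flip(predict, {"income": 42, "age": 30}, "income", 55)
    print(before, after)  # 0 1: "had income been 55, the application is approved"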

Sub-disciplines

Authors should select one or more CS discipline(s) from the following list when submitting their paper: computer systems, computer vision, databases, data/web mining, data science, human/natural language technologies, human-computer interaction (quantitative), human-computer interaction (qualitative), information retrieval and recommender systems, machine learning, programming languages, robotics, software engineering, statistical analysis and learning, theoretical computer science, visual analytics, and others.

Peer reviewers for a paper will be experts in the sub-discipline(s) selected upon its submission, so please select your relevant disciplines judiciously. 

Program Committee (to be updated)

  • Shipra Agrawal, Columbia University
  • Virgilio Almeida, Universidade Federal de Minas Gerais
  • Siddharth Barman, Indian Institute of Science
  • Solon Barocas, Microsoft Research and Cornell University
  • Emily Bender, University of Washington
  • Jeffrey Bigham, Carnegie Mellon University
  • Stefan Bucur, Google
  • Toon Calders, University of Antwerp
  • Ran Canetti, Boston University and Tel Aviv University
  • Florian Cech, Vienna University of Technology / TU Wien
  • Augustin Chaintreau, Columbia University
  • Guanliang Chen, Monash University
  • Irene Chen, MIT
  • Alexandra Chouldechova, Carnegie Mellon University
  • Moustapha Cisse, Facebook
  • Amanda Coston, Carnegie Mellon University
  • Rachel Cummings, Georgia Tech
  • Anupam Datta, Carnegie Mellon University
  • Francien Dechesne, Leiden University
  • Nicholas Diakopoulos, Northwestern University
  • Josep Domingo-Ferrer, Universitat Rovira i Virgili
  • Michel Dumontier, Maastricht University
  • Edgar Duéñez-Guzmán, DeepMind
  • Michael Ekstrand, Boise State University
  • Lisette Espin Noboa, GESIS
  • Denae Ford, Microsoft Research
  • Sorelle Friedler, Haverford College
  • Kazuto Fukuchi, RIKEN Center for Advanced Intelligence Project
  • Timnit Gebru, Google AI
  • Elena Glassman, Harvard University
  • Sharad Goel, Stanford University
  • Riccardo Guidotti, ISTI-CNR
  • Sara Hajian, NTENT
  • Tristan Henderson, University of St Andrews
  • Michael Hind, IBM Research
  • Lily Hu, Harvard University
  • Soheil Human, Vienna University of Economics and Business
  • William Isaac, DeepMind
  • Kalapriya Kannan, IBM Research
  • Niki Kilbertus, Max Planck Institute for Intelligent Systems and University of Cambridge
  • Isabel Kloumann, Facebook
  • Junpei Komiyama, New York University
  • Joshua Kroll, UC Berkeley
  • Eren Kursun, Columbia University
  • Himabindu Lakkaraju, Harvard University
  • Nikolaos Laoutaris, IMDEA Networks Institute
  • Tomas Laurenzo, School of Creative Media, City University of Hong Kong
  • Katrina Ligett, Hebrew University
  • Zachary Lipton, Carnegie Mellon University
  • Yang Liu, UC Santa Cruz
  • Kristian Lum, Human Rights Data Analysis Group
  • Nishtha Madaan, IBM Research AI, India
  • Lassana Magassa, University of Washington
  • Smitha Milli, UC Berkeley
  • Yeshimabeit Milner, Data for Black Lives
  • Alan Mislove, Northeastern University
  • Shira Mitchell, Civis Analytics
  • Anna Monreale, University of Pisa
  • Laura Montoya, Accel.AI
  • Nyalleng Moorosi, Google AI
  • Sendhil Mullainathan, University of Chicago
  • Arvind Narayanan, Princeton University
  • Rosin Ngueveu, Université du Québec à Montréal
  • Eirini Ntoutsi, Leibniz University of Hanover
  • Alexandra Olteanu, Microsoft Research Montreal
  • Mykola Pechenizkiy, TU Eindhoven
  • Evaggelia Pitoura, University of Ioannina
  • Manish Raghavan, Cornell University
  • Inioluwa Deborah Raji, University of Toronto
  • Sekou Remy, IBM Research
  • Pablo Rivas, Marist College
  • Maria Riveiro, Jönköping University
  • Michael Rovatsos, University of Edinburgh
  • Jun Sakuma, University of Tsukuba
  • Nithya Sambasivan, Google
  • Christian Sandvig, University of Michigan
  • Prasanna Sattigeri, IBM Research
  • Natalie Schluter, IT University of Copenhagen
  • Ravi Shroff, New York University
  • Ricardo Silva, University College London
  • Jatinder Singh, University of Cambridge
  • Adam Smith, Boston University
  • Steffen Staab, Institute for Web Science and Technologies, University of Koblenz-Landau and WAIS Research Group, University of Southampton
  • Julia Stoyanovich, New York University
  • Cecilia Sönströd, University of Borås
  • Kentaro Toyama, University of Michigan
  • Michelle Trim, UMass Amherst
  • Franco Turini, University of Pisa
  • Isabel Valera, Max Planck Institute for Intelligent Systems
  • Nisheeth Vishnoi, Yale University
  • Christo Wilson, Northeastern University
  • Marcelo Worsley, Northwestern University
  • Gal Yona, Weizmann Institute of Science
  • Muhammad Bilal Zafar, Bosch Center for Artificial Intelligence
  • Angela Zhou, Cornell University and Cornell Tech
  • Yair Zick, National University of Singapore
  • Indrė Žliobaitė, University of Helsinki

Track Chairs