Presented by James Wexler and Andrew Zaldivar; work with Tolga Bolukbasi, Mahima Pushkarna, and Sara Robinson (Google)
As more and more industries use machine learning, it's important to understand how these models make predictions, and where bias can be introduced in the process. In this tutorial we'll walk through two open source frameworks for analyzing your models from a fairness perspective. We'll start with the What-If Tool, a visualization tool that you can run inside a Python notebook to analyze an ML model. With the What-If Tool, you can identify dataset imbalances, see how individual features impact your model's prediction through partial dependence plots, and analyze human-centered ML models from a fairness perspective using various optimization strategies. Then we'll look at SHAP, a tool for interpreting the output of any machine learning model, and seeing how a model arrived at predictions for individual datapoints. We will then show how to use SHAP and the What-If Tool together. After the tutorial you'll have the skills to get started with both of these tools on your own datasets, and be better equipped to analyze your models from a fairness perspective.
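To make the hands-on portion concrete, here is a minimal sketch of the kind of notebook cell this workflow leads to: train a model, compute SHAP attributions, and load the same examples into the What-If Tool. This is not the presenters' notebook; the dataset, model choice, and the WitConfigBuilder wiring (plain feature rows plus column names and a custom predict function) are assumptions that may need adjusting to your witwidget and shap versions.

```python
# Hedged sketch: scikit-learn model + SHAP attributions + What-If Tool widget.
# Assumes `pip install shap witwidget scikit-learn` in a Jupyter/Colab environment.
import pandas as pd
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# A public dataset stands in for the tutorial's own data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier().fit(X, y)

# SHAP: per-datapoint feature attributions for a tree-based model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])
shap.summary_plot(shap_values, X.iloc[:100])  # requires matplotlib

# What-If Tool: wrap the model in a predict function and visualize in the notebook.
def predict_fn(examples):
    # WIT passes a list of feature-value rows; return class scores per example.
    return model.predict_proba(pd.DataFrame(examples, columns=X.columns))

config = (WitConfigBuilder(X.iloc[:100].values.tolist(), X.columns.tolist())
          .set_custom_predict_fn(predict_fn))
WitWidget(config, height=600)
```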
Presented by Vijay Arya, Amit Dhurandhar, and Dennis Wei; work with R. K. E. Bellamy, P. Chen, M. Hind, S. Hoffman, S. Houde, V. Liao, R. Luss, A. Mojsilovic, S. Mourad, P. Pedemonte, R. Raghavendra, J. T. Richards, P. Sattigeri, K. Shanmugam, M. Singh, K. R. Varshney, and Y. Zhang (IBM)
This tutorial will teach participants to use and contribute to a new open-source Python package named AI Explainability 360 (AIX360), a comprehensive and extensible toolkit that supports interpretability and explainability of data and machine learning models. The tutorial will serve participants with different backgrounds and programming skill levels. We will present an interactive web experience as a gentle introduction to the concepts and capabilities, and three use cases featuring different industry domains and explanation methods, where data scientists can gain hands-on experience. All participants will benefit from being taught which classes of explainability methods are appropriate for a given use case with the aid of a taxonomy that we have developed.
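For a flavour of the toolkit's data-explanation methods, the sketch below uses AIX360's ProtoDash algorithm to summarize a dataset with a handful of weighted prototypes. It is illustrative only: the import path and explain() call follow the AIX360 examples as we recall them, so treat the exact signature and return values as assumptions to verify against the toolkit's documentation.

```python
# Illustrative sketch, not the tutorial's own notebook.
# Assumes `pip install aix360 scikit-learn`.
import numpy as np
from sklearn.datasets import load_iris
from aix360.algorithms.protodash import ProtodashExplainer

X, _ = load_iris(return_X_y=True)
X = X.astype(float)

explainer = ProtodashExplainer()
# Ask for 5 weighted prototypes summarizing X; passing X as both the data to be
# summarized and the candidate pool sidesteps the argument ordering, which we
# have not verified against the current AIX360 API.
weights, proto_idx, _ = explainer.explain(X, X, m=5)

print("prototype rows:", proto_idx)
print("normalized weights:", np.round(weights / np.sum(weights), 3))
```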
Presented by Robin Burke (U. of Colorado) and Masoud Mansoury (Eindhoven U. of Technology); work with Nasim Sonboli (U. of Colorado)
The field of machine learning fairness has developed some well-understood metrics, methodologies, and data sets for experimenting with and developing classification algorithms with an eye to their fairness properties. However, equivalent research is lacking in the area of personalized recommender systems, even though such systems are crucial components of the applications through which millions of individuals daily access news, shopping, social connections, and employment opportunities. This 180-minute hands-on tutorial will introduce participants to concepts in fairness-aware recommendation (as distinct from classification-oriented systems) and to metrics and methodologies for evaluating recommendation fairness. The tutorial will introduce LibRec, a well-developed platform for recommender systems evaluation, and fairness-aware extensions to it. Participants will also gain hands-on experience conducting experiments with LibRec using the librec-auto scripting platform, and learn the steps required to configure their own experiments, incorporate their own data sets, and design their own algorithms and metrics.
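To give a concrete sense of what a recommendation fairness metric can look like (plain Python for illustration, not the LibRec or librec-auto API), the sketch below computes a simple provider-side exposure measure: how often items from a protected provider group appear in users' top-k lists relative to that group's share of the catalog.

```python
# Illustrative metric only; the group, items, and recommendation lists are made up.
import numpy as np

def protected_exposure(topk_lists, protected_items, catalog_share):
    """Average share of protected-group items in users' top-k lists, divided by
    the group's share of the whole catalog (1.0 = proportional exposure)."""
    shares = [np.mean([item in protected_items for item in items])
              for items in topk_lists]
    return float(np.mean(shares)) / catalog_share

# Hypothetical top-3 recommendations for three users, with protected items {3, 7}
# making up 2 of the 10 catalog items.
topk = [[1, 3, 5], [2, 4, 6], [3, 7, 9]]
print(protected_exposure(topk, protected_items={3, 7}, catalog_share=2 / 10))
```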
Corinne Cath-Speth (Oxford Internet Institute), Mark Latonero (Data & Society), Vidushi Marda (ARTICLE 19’s Team Digital), Roya Pakzad (Taraaz)
The premise of this translation tutorial is that human rights serves as a complementary framework – in addition to Fairness, Accountability, Transparency, and Ethics – for guiding and governing artificial intelligence (AI) and machine learning research and development. Attendees will participate in an activity using a case study, which will demonstrate how a human rights framework, grounded in international law, fundamental values, and global systems of accountability, can offer the technical community a practical approach to addressing global AI risks and harms.
Marion Oswald (U. of Northumbria), David Powell (Hampshire Constabulary and PhD candidate at U. of Northumbria)
This tutorial aims to increase understanding of the importance of discretion in police decision-making. It will guide computer scientists in considering the practical, legal and technical issues crucial to avoiding prejudicial outcomes and instead developing algorithms that are supportive of – a ‘friend’ to – legitimate discretionary decision-making. It combines explanation of the relevant law and related literature with discussion based upon deep operational experience in the area of preventative and protective policing work.
Maya Indira Ganesh (Leuphana U.), Francien Dechesne (Leiden U.), Zeerak Waseem (U. of Sheffield)
In a workshop organized in December 2017 in Leiden, the Netherlands, a group of lawyers, computer scientists, artists, activists and social and cultural scientists collectively read a computer science paper about 'improving fairness'. Many participants found this session eye-opening in showing how different epistemologies shape approaches to the problem, method and solutions, which enabled further cross-disciplinary discussion during the rest of the workshop. For many participants it was both refreshing and challenging, in equal measure, to understand how another discipline approached the problem of fairness. Now, as a follow-up, we propose a translation tutorial that will engage participants at the ACM FAT* conference in a similar exercise. We will invite participants to work in small groups reading excerpts of academic papers from different disciplinary perspectives on the same theme. We argue that most of us do not read outside our disciplines and thus are not familiar with how the same issues might be framed and addressed by our peers. The purpose will therefore be to have participants reflect on the different genealogies of knowledge in research, and on how they erect walls or generate opportunities for more productive inter-disciplinary work. We argue that addressing, through technical measures or otherwise, matters of ethics, bias and discrimination in AI/ML technologies in society is complicated by the different constructions of knowledge about what ethics (or bias or discrimination) means to different groups of practitioners. In the current academic structure, there are scarce resources to test, build on, or even discard, methods for talking across disciplinary lines. This tutorial is thus proposed to see whether this particular method might work.
Christine Kaeser-Chen (Google), Elizabeth Dubois (U. of Ottawa), Friederike Schüür (Cityblock Health), Emanuel Moss (CUNY Graduate Center / Data & Society)
Positionality is the social and political context that influences, and potentially biases, a person's unique but partial understanding of the world. Machine learning (ML) systems have positionality too, which is embedded through choices we make when developing classification systems and datasets. In this tutorial, we uncover positionality in ML systems with a focus on the design of classification systems, study the impact of embedded positionality, and discuss potential intervention mechanisms.
Natasha Duarte (Upturn), Stan Adams (Center for Democracy & Technology)
This tutorial will introduce the US (federal and state) and EU policymaking processes and provide guidance to experts in other domains (such as data and computer science) who want to get involved in policymaking.
Presented by Indira Sen and Fabian Flöck; work with Katrin Weller, Bernd Weiß, Claudia Wagner (GESIS)
The activities and interactions of hundreds of millions of people worldwide are recorded as digital traces. These data offer increasingly comprehensive pictures of both individuals and groups on different platforms, but also allow inferences about broader target populations beyond those platforms. Notwithstanding these advantages, it is essential to study the errors that can occur when digital traces are used to learn about humans and social phenomena. Notably, many similar errors also affect survey estimates, which survey designers have been addressing for decades using error conceptualization frameworks, most notably the Total Survey Error Framework (TSE). In this tutorial, we will first introduce the audience to the concepts and guidelines of the TSE and how they are applied by survey practitioners in the social sciences, guided by our interdisciplinary background and experience. Second, we will introduce our own conceptual framework to diagnose, understand, and avoid errors that may occur in studies that are based on digital traces of humans. Our framework leverages the systematic approach of the TSE and maps its concepts to digital trace settings where applicable. Participants will hence gain insights into (i) how to critically reflect on survey research based on the TSE and (ii) how to translate the recommendations of the TSE to a digital trace setting.
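As one toy illustration of carrying a survey-style correction over to digital traces (the numbers below are made up and the example is not taken from the tutorial materials), post-stratification weights can re-align an unrepresentative platform sample with known population shares before estimating an outcome, addressing one source of representation error in the TSE sense.

```python
# Toy post-stratification sketch with invented shares and outcomes.
population_share = {"18-34": 0.30, "35+": 0.70}   # e.g., census benchmarks
sample_share     = {"18-34": 0.65, "35+": 0.35}   # who we actually observe on-platform

# Weight each group by how under- or over-represented it is in the sample.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Some outcome measured in the traces, per group (toy numbers).
outcome_by_group = {"18-34": 0.40, "35+": 0.20}
naive    = sum(sample_share[g] * outcome_by_group[g] for g in sample_share)
adjusted = sum(sample_share[g] * weights[g] * outcome_by_group[g] for g in sample_share)
print(f"naive estimate = {naive:.3f}, post-stratified estimate = {adjusted:.3f}")
```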
Abigail Jacobs (U. of Michigan), Su Lin Blodgett (U. of Massachusetts Amherst), Solon Barocas (Microsoft and Cornell University), Hal Daumé III (U. of Maryland), Hanna Wallach (Microsoft)
The recent interest in identifying and mitigating bias in computational systems has introduced a wide range of different, and occasionally incomparable, proposals for what constitutes bias in such systems. This tutorial aims to introduce the language of measurement modeling from the quantitative social sciences as a framework for understanding fairness in computational systems by examining how social, organizational, and political values enter these systems. We show that this framework helps to clarify the way unobservable theoretical constructs, such as "creditworthiness," "risk to society," or "tweet toxicity," are implicitly operationalized by measurement models in computational systems. We also show how systematically assessing the construct validity and reliability of these measurements can be used to detect and characterize fairness-related harms, which often arise from mismatches between constructs and their operationalizations. Through a series of case studies of previous approaches to examining "bias" in NLP models, ranging from work on embedding spaces to machine translation and hate speech detection, we demonstrate how we apply this framework to identify these approaches' implicit constructs and to critique the measurement models operationalizing them. This process illustrates the limits of current so-called "debiasing" techniques, which have obscured the specific harms whose measurements they implicitly aim to reduce. By introducing the language of measurement modeling, we provide the ACM FAT* community with a process for making explicit and testing assumptions about unobservable theoretical constructs, thereby making it easier to identify, characterize, and even mitigate fairness-related harms.
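For a minimal sense of what a reliability check on an operationalized construct looks like in code (a toy example with invented labels, not drawn from the tutorial's case studies), one can quantify inter-annotator agreement on 'toxicity' labels; low agreement signals an unreliable measurement of the construct before any model is trained on those labels.

```python
# Toy reliability check: agreement between two annotators labeling the same tweets.
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 1, 1, 0, 0, 1, 0]  # invented binary "toxic" labels
annotator_b = [1, 0, 0, 1, 0, 1, 1, 0]
print("Cohen's kappa:", round(cohen_kappa_score(annotator_a, annotator_b), 3))
```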
Bogdana Rakova (Partnership on AI and Accenture), Jingying Yang (Partnership on AI), Rumman Chowdhury (Accenture)
Work within the Fairness, Accountability, and Transparency of ML (fair-ML) community will benefit from appreciating the role of organizational culture and structure in the effectiveness of fair-ML efforts by individuals, teams, and initiatives within industry. In this tutorial session we will explore various organizational structures and possible leverage points for intervening effectively in the development and deployment of AI systems so as to contribute to positive fair-ML outcomes. We will begin by presenting the results of interviews conducted during the first stage of an ethnographic study among practitioners working in industry, including themes related to origination and evolution, common challenges, ethical tensions, and effective enablers. The study was designed through the lens of industrial-organizational psychology and aims to create a mapping of the current state of fair-ML organizational structures inside major AI companies. We also look at the most desired future state for enabling effective work to increase algorithmic accountability, as well as the key elements of the transition from the current state to that future state. We investigate drivers for change as well as the tensions between creating an 'ethical' system and one that is merely 'ethical' enough. After presenting our preliminary findings, the rest of the tutorial will be highly interactive. Starting with a facilitated activity in breakout groups, we will discuss the already identified challenges and share best practices and mitigation strategies. Finally, we hope to create space for productive discussion among AI practitioners in industry, academic researchers across fields working directly on algorithmic accountability and transparency research, and advocates for the communities most impacted by technology. Based on the interactive component of the tutorial, facilitators and interested participants will collaborate on further developing the identified challenges into scenarios and guidelines that will be published as a follow-up report.
Krishna Gade (Fiddler Labs), Sahin Cem Geyik (LinkedIn), Krishnaram Kenthapadi (Amazon AWS AI), Varun Mithal (LinkedIn), Ankur Taly (Fiddler Labs)
Artificial intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to growing concern regarding potential bias in these models, and to a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust in and adoption of AI systems in high-stakes domains requiring reliability and safety, such as healthcare and automated transportation, as well as in critical industrial applications with significant economic implications, such as predictive maintenance, exploration of natural resources, and climate change modeling. As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and (iii) designing measures for evaluating the performance of models in explainability tasks. In this tutorial, we will first motivate the need for model interpretability and explainability in AI from societal, legal, customer/end-user, and model developer perspectives. [Note: Due to time constraints, we will not focus on techniques/tools for providing explainability as part of AI/ML systems.] Then, we will focus on the real-world application of explainability techniques in industry, presenting practical challenges and implications of using explainability techniques effectively, along with lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning application domains such as search and recommendation systems, sales, lending, and fraud detection. Finally, based on our experiences in industry, we will identify open problems and research directions for the research community.
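As a small, hedged illustration of challenge (iii), evaluating explanations rather than producing them (the dataset, model, and simple linear attribution below are our assumptions, not the presenters' material), one common measure is a deletion-style faithfulness test: occlude the features an explanation ranks highest and check how much the model's score for its predicted class drops.

```python
# Hedged sketch of a deletion-style faithfulness check for a feature attribution.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=10000).fit(X, y)

x = X[0:1]                                     # datapoint to explain
baseline = X.mean(axis=0)                      # value used to "remove" a feature
attribution = np.abs(model.coef_[0] * (x[0] - baseline))  # simple linear attribution
top = np.argsort(attribution)[::-1][:5]        # five most-attributed features

x_occluded = x.copy()
x_occluded[0, top] = baseline[top]

pred = model.predict(x)[0]
drop = model.predict_proba(x)[0, pred] - model.predict_proba(x_occluded)[0, pred]
print(f"score drop after removing the top-5 attributed features: {drop:.3f}")
```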
Gloria González Fuster (Vrije U. Brussel)
Gender is nowadays in a paradoxical state. On the one hand, some national laws, apps and online platforms appear to increasingly welcome the right of individuals to self-determine their own gender, which – it is more and more accepted – might even go beyond the traditional gender binary, potentially encompassing a multitude of diverse, non-conforming gender identities across a presumably fluid spectrum of possibilities. On the other hand, however, a variety of ‘AI solutions’ and related technologies are openly based on their presumed ability to ‘detect’ whether people are male or female: from facial recognition relying on ‘automated gender recognition’ to automated gender classification by social media based on the widespread monitoring of online ‘interests and behaviour’, the ways in which we are all being continuously gendered (and thus potentially re- and mis-gendered) are multiplying, almost always in an opaque manner. This interactive tutorial aims to explore the potential of the General Data Protection Regulation (GDPR) in helping us interrogate ongoing gendering data practices. For this purpose, it will introduce the key relevant notions and mechanisms of the GDPR, and open up the question of what rights data subjects have to know, and possibly ‘rectify’ (also creatively), the genders attributed to them. Hopefully, the audience will then suggest ways to go forward in this transparency endeavour.
Damian Clifford (Australian National U. and U. of London), Jef Ausloos (U. of Amsterdam), Laurens Naudts (KU Leuven)
Fair machine learning aims to embed fairness into the very design of machine learning techniques to reduce the occurrence of unfair outcomes. In other words, the ‘learning processes’ are designed so that the outcome matches what has been pre-defined as ‘fair’. Broadly speaking, most research in this field seeks to protect ‘sensitive attributes’, which normally correspond to the protected grounds in non-discrimination legislation. However, tempting as it may be to pursue such a well-constrained interpretation of ‘fairness’, the concept has a much wider ambit in law. Fairness is a core principle of EU data protection law, grounded in both the right to data protection in Article 8 of the Charter of Fundamental Rights of the European Union and Article 5(1)(a) of the General Data Protection Regulation (GDPR). Nevertheless, despite its key positioning, the precise contours of this principle remain uncertain, largely due to its role in the context-dependent balancing of competing rights and interests.
The inherent ambiguity of the fairness principle has led, at least implicitly, to two connected strands of analysis: literature (1) calling for the alignment of different policy agendas (e.g., data protection and consumer protection law), and (2) exploring the relationship between the fairness principle in the GDPR and the role of ‘(AI/data) ethics’ in determining the appropriate uses of personal data. A common thread here is the need to determine more precisely which practices should be deemed de facto unfair. However, as will be explored in this tutorial, the overlaps between these respective areas are far from clear. Against this more doctrinal background, the tutorial aims to plot the role of the fairness principle in concrete cases to demonstrate its significant importance, ongoing uncertainties and (more pointedly) broad substantive scope. The tutorial will demonstrate the application of fairness by focusing on several “manipulative” practices designed to tease out the role of the principle and its overlaps and interactions with other policy agendas and developments. The aim, therefore, is to connect the ACM FAT* community with the legal analysis of fairness in EU law and then to position the limitations of the principle, in light of its jurisdictional constraints, in order to foster interdisciplinary awareness.