This track invites papers from across the social sciences and humanities (SSH) disciplines that engage with issues of justice and fairness in computational systems. With this track we seek both to complement and to critically interrogate the research that has taken place in this field so far. We particularly encourage submissions that explore existing and potential interfaces between computer science/engineering and SSH, and that either build on debates already present in the ACM FAT* research community or pose new questions and areas of research. The areas of interest purposely span a broad set of domains that have so far debated computation and social justice separately. We welcome both ‘deep dive’ contributions that use disciplinary insights to illuminate a particular issue in new ways, and interdisciplinary contributions that identify oppositional or complementary approaches, challenge or interrogate established ideas, and translate key concepts across perspectives. One key aim for this new track is to build an agenda for SSH to contribute to advancing social justice in computing systems, and to bring work that has been ongoing in these fields into the view of the ACM community.
If your paper concerns the auditing or evaluation, from an SSH perspective, of one or more systems in current or past use, or studies of emerging social movements, social justice, and activism in relation to computational and algorithmic systems, please consider Track 4.
Each paper will be reviewed by three relevant SSH program committee members (peer review) and, possibly, by one CS program committee member (cross-disciplinary review). The evaluation criteria for the review will include:
In particular, satisfying these criteria includes demonstrating proper grounding in the relevant literature of the paper’s disciplinary background (e.g. STS, criminology, political philosophy, ethics), depth of theoretical grounding, and methodological rigour and integrity. Papers should aim to be accessible to both CS and SSH audiences and/or to translate between them.
3.1. Sociology, Science and Technology Studies (STS)
Including (but not limited to) comparative, ethnographic, cultural, political, structural or political economy approaches, grounded in the disciplines of sociology, anthropology, geography, political science, STS, public administration, or other related approaches to established and emerging ACM FAT* topics; history and philosophy of science and technology; critical big data and algorithm studies, and social justice approaches to FAT-related topics.
What role are CS approaches to fairness, accountability and transparency currently playing in relation to algorithmic practices and cultures in society? How does the way these concerns are voiced and addressed differ across geographies and cultures? What can interdisciplinary approaches tell us about the advantages and drawbacks of formalizing complex values such as fairness, accountability and transparency (e.g., historicising processes of conceptual formalization in different SSH disciplines; placing political philosophy approaches to ACM FAT* in dialogue with CS and public administration perspectives; or asking what comparative historical approaches can tell us about how to mediate between different and incompatible disciplinary approaches to, and definitions of, concepts such as fairness)?
What are the social justice dimensions of ACM FAT*, and how have they been surfaced and addressed by CS and law research so far? What is missing and what gaps can SSH approaches help to fill? What is the relationship of ACM FAT* research to the concerns of marginalized groups about algorithmic discrimination, fairness and justice, and what is the potential for SSH ACM FAT* research to centre problems that are currently insufficiently visible? If that potential exists, under what conditions (e.g. structural, funding, organizational, institutional) could it be realized?
How should the field address the problems and opportunities of intersectional perspectives on fairness? What kinds of marginalization are relevant to the concerns of ACM FAT*? Can perspectives and tools from gender/LGBTQ studies, postcolonial studies, indigenous studies, racial and ethnic studies, migration studies, work on disability and other relevant domains inform a broad perspective on how to frame and advance social justice in relation to design of computational systems?
What are the implications of algorithmic optimization processes for ACM FAT* concerns, and what research strategies can SSH provide to surface and interrogate these processes?
3.2. Philosophy, Political Philosophy, and Digital Ethics
Including (but not limited to) philosophy, political philosophy, digital ethics, and related studies of ACM FAT* topics. We invite consideration of what other philosophical perspectives not commonly seen in ACM FAT* research might add. What, for example, do classic and historical debates on fairness have to offer contemporary efforts to define it? What philosophical and analytical perspectives are already embedded in ACM FAT* and how might we distinguish them? What might be the role of philosophy and digital ethics in informing work on ACM FAT*?
How can philosophy of information and philosophy of design help with the design of algorithms? What can existing research in this field tell us about the epistemology of ACM FAT* research, and about its engagement with the social impact of digital technologies?
What assumptions does ACM FAT* research make about the grounding of notions of accountability and transparency in political philosophy? What idea of society is ACM FAT* research grounded in, and what are the implications of this choice? Can theories of justice shed light on our understanding of fair algorithms/AI?
Within digital ethics, we also invite consideration of which perspectives have been foregrounded so far in CS work on ACM FAT*, and which others might merit consideration. For example, what is the current role of bioethics with respect to CS work on ACM FAT*, for instance in setting the goals and boundaries for technical, legal and policy experimentation with direct social impacts? We also invite papers focusing on possible models for the governance of digital technologies in general, and AI in particular. What kinds of guidance does digital ethics provide for organisations or communities to address ethical issues? What theories or frameworks should we use to ascribe moral responsibility for the actions of algorithmic systems? How should ethical auditing mechanisms for algorithmic decisions be defined? Should there be redress mechanisms for unintended consequences or failures of algorithms? Are there fundamental values or factors that should underpin the design of algorithms to ensure fairness and socially good outcomes?
Authors should select one or more SSH sub-disciplines or domains of study from the following list when submitting their paper. Peer reviewers for a paper will be experts in the domain(s) or sub-discipline(s) selected at submission, so please select your relevant domains judiciously.
SSH domains: justice and democracy; implications for the common good; group privacy; autonomy; responsibility; ethics of data; ethics of algorithms; ethics of practices; transparency; auditing of algorithms; ethical design; governance of the digital; philosophy of information; philosophy of design; historical perspectives; automation, optimization and (computational) statistics in relation to social justice; intersectional concerns; ethnographies of communities/practices/labour in socio-technical systems; qualitative/quantitative studies of engagement with/attitudes towards algorithmic systems.
SSH subdisciplines: gender and sexuality studies, postcolonial studies, migration studies, disability studies, racial and ethnic studies, indigenous studies, critical HCI and the design of algorithmic systems, ethics, history/philosophy of science and technology, critical data/algorithm studies, surveillance studies, criminology.