This track aims to uncover and explore available legal solutions to tackle bias and unfairness in algorithmic decision-making. While recent debates have focused on transparency, the landscape of existing legal tools to mitigate risks to the rights and freedoms of individuals related to bias and unfairness, or to sanction bias and unfairness when they occur, is far more complex. The two areas of focus for this year’s Law Track are (1) data protection law and fundamental rights and freedoms and (2) non-discrimination law, justice and fairness.
Data protection law, and in particular the GDPR, is a fertile source of legal solutions that complement transparency to ensure the fairness, lawfulness and accountability of every system that processes personal data, while at the same time empowering individuals with private rights of action and other subjective rights, such as access and the right to object. Some countries have put in place comprehensive and robust systems similar to the GDPR, while others focus on algorithmic accountability, risk assessments or sectoral approaches.
Authors are invited to explore topics such as, but not limited to: the nature and consequences of the prohibition of solely automated decision-making that may produce a legal or similarly significant effect for the person subjected to it; the safeguards that should be adopted for lawful automated decision-making (explanation, contestation, a human in the loop); the role of data protection by design and by default in general, as well as the impact of codifying them as legal obligations; the role of a Data Protection Impact Assessment or other impact assessments in building fair algorithms; and whether the paradox of applying the data minimization and purpose limitation principles to personal data that feeds machine learning/AI applications is real or a myth. Other topics, such as deconstructing and operationalizing the principle of fair processing, or the effectiveness of the rights of the data subject in granting (some) control to the individual over automated decision-making, are also of interest, whether in private law or public law (including law enforcement and criminal justice).
In addition, private rights of action and liability rules play an important role in repairing material or even moral damage caused to individuals. Similarly, administrative and consumer law rules sometimes give procedural protections to individuals. But are these rules equipped to deal with automated decisions? If so, how can they be applied to ensure effective judicial redress? Finally, authors are also invited to bring new perspectives on the fundamental rights dimension of automated decision-making, both in horizontal relationships (between private parties) and in vertical relationships (between individuals and the state, whether represented by law enforcement or by public administration). Comparative legal approaches to bias and unfairness in algorithms are also welcome.
Submissions on the following topics, among others, are encouraged and will be considered within the framework of any relevant legal domain: predictive policing, behavioral advertising, price discrimination, consumer harm, surveillance, facial recognition, data-driven public services, large-scale data analytics, automated decision-making, algorithmic decision-making systems, impact assessments, law enforcement, non-discrimination, inequality, fairness, liability, constitutional implications, rule of law, human rights implications, and legal governance of algorithmic systems.
If your paper concerns analysis based on experiences with real-world systems, please consider submitting under Track 4.
Each paper will be reviewed by three Law program committee members (peer review) and, possibly, one CS program committee member (cross-disciplinary review).
The peer review will assess proper grounding in positive law and/or relevant theoretical exploration, taking into account that the paper must be sufficiently accessible to a CS audience and may focus on explaining core legal issues to CS scholarship.
Carefully read the information below. When submitting your proposal, please indicate the applicable focus area, legal domains, and jurisdictions.
2.1. Data protection law and fundamental rights and freedoms: What redress possibilities does the law offer to people who are harmed by unfair outcomes of machine learning or AI, or by other unfair uses of personal data? What role, if any, could data protection principles or fair information principles (FIPs) play in defending people against unfair or discriminatory data processing or machine learning? How should the rules on automated decision-making in the GDPR, the modernised Convention 108, and emerging US state law be interpreted? What types of safeguards should be in place when automated decision-making is allowed? How should EU data protection law’s fairness principle be operationalised? Should the law aim for fair machine learning, and if so, how should fairness be operationalised? To what extent can impact assessments, such as human rights impact assessments or data protection impact assessments, help to protect people against unfair machine learning? Which human rights are threatened by machine learning? How could the law be adapted to better protect against unfair or otherwise problematic types of automated decision-making systems?
2.2. Non-discrimination law, justice and fairness: This focus area covers both the public and private sectors. For the private sector, topics may include price discrimination or any other discrimination or consumer harm that may result from automated decision-making; analysis of safeguards to prevent bias; and claims for material and non-material damages in civil actions resulting from direct or indirect discrimination. For the public sector, it covers law enforcement and other areas of public policy where unfairness may result from automated or semi-automated decision-making. Law enforcement topics may include predictive policing; facial recognition systems; the safeguards available in data protection, privacy law, equality law, human rights law and administrative law/public law in the law enforcement sector; and the interoperability of large-scale IT systems within the EU, viewed through a fairness lens. Issues relating to other public bodies include, for example, discrimination that may result from algorithms used to determine social benefits or liabilities to the state (such as tax and social security fraud detection), and the safeguards available in data protection, privacy law, equality law, human rights law and administrative law/public law, as well as their effectiveness in protecting against unfairness.
Authors should select one or more legal domains from the following list when submitting their paper: Constitutional Law, Administrative Law, Criminal Law, Human Rights Law, Private Law, Law of Obligations, Torts, Civil Liability, Contract Law, Criminal Procedure, Civil Procedure, Comparative Law, Private International Law, Public International Law, Labor Law, Data Protection Law. Peer reviewers for a paper will be experts in the domains and jurisdictions selected upon submission, so please select your relevant domains judiciously.
Please also indicate the regions/jurisdictions that are relevant for your submission: Asia, Australia and New Zealand, Africa, Europe, Latin America, North America, Other.