This document is addressed to program chairs, track chairs, and program committee members. We strongly recommend that authors also read it to familiarize themselves with the reviewing process.
Many parts of this guide have been inspired by, or lifted directly from, the excellent NIPS “Reviewer, AC & SAC Guidelines”. We have marked those sections with ‡.
ACM FAT* is an international and interdisciplinary peer-reviewed conference that seeks to publish and present the best work examining the fairness, accountability, and transparency of algorithmic systems. The research community is young but growing fast.
We consider your role as program chairs, track chairs, and program committee members essential in seeing to it that the community and the ACM FAT* conference grow to be mature, rigorous, and exciting.
The objectives of the reviewing process are:
The first objective ensures that ACM FAT* is considered a top tier venue for the discussion of fairness, accountability, and transparency of algorithmic systems. “Best” can be defined in a variety of ways, including quality, clarity, originality, interdisciplinarity and significance. We will expand on this below.
The second ensures that we are rigorous in our assessment, transparent in our decision-making, and invested in the quality of future submissions to ACM FAT* or associated conferences.
To this end, we adopt a double-blind peer review system consisting of:
The cross-disciplinary nature of this conference requires specific attention to the review process. This year we have a dedicated LAW track and a dedicated SSH (social sciences & humanities) track, alongside the CS (computer science) track (plus two more tracks, dedicated to cross-disciplinary education and to practice and experience).
Each paper will be peer reviewed by three reviewers from the paper’s own subdiscipline, and papers that meet a quality threshold will subsequently be reviewed by one reviewer from a different discipline (a “cross-disciplinary review”):
Peer reviews have different objectives from cross-disciplinary reviews: the latter do not check the methodological integrity of the paper, but rather the extent to which it takes into account the assumptions and/or implications that concern the other discipline. Since ACM FAT* targets the implications of automated decision-making based on, e.g., machine learning, the cross-disciplinary reviews are pivotal. They should be respected and taken into account, but their comments should not be confused with the peer reviews.
Peer-reviewing and cross-disciplinary reviewing should be centered around the following objectives:
Conflicts of interest:
Reviewers should never try to identify authors, but in certain cases a reviewer may be able to unintentionally yet correctly deduce the authors’ identities despite the authors’ best efforts at anonymity.
If this happens, please do not divulge the identities to anyone, but do tell the Track Chair that this has happened and make a note of this in the “Confidential Comments to Track Chairs and PC Chairs” text field when you submit your review. Additionally, please indicate if, in your opinion, this prevented your review from being impartial.
ACM FAT* 2020 is using the FAT Conf 2020 HotCRP. Reviewers will use HotCRP to set conflicts, areas of interest, and sub-disciplines, bid for papers, submit their reviews, and participate in discussion.
Bidding. As is usual at CS conferences, reviewers are assigned to papers by way of a bidding procedure. This means that you provide your preferences (concerning, among other things, areas of interest and possibly sub-disciplines) on the dedicated website. Submissions will then be allocated by HotCRP’s assignment algorithm, which uses both your topic preferences and the order of your preferences to determine assignments. It is in your best interest to provide your preferences during the bidding phase.
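For intuition only, preference-based assignment can be thought of along the following lines. This is a toy sketch, not HotCRP’s actual algorithm (which also handles conflicts, topic scores, and fairness constraints): each paper is greedily matched to its highest-bidding reviewers, subject to a cap on how many papers any one reviewer takes on.

```python
# Toy sketch of preference-based paper assignment (NOT HotCRP's real
# algorithm). bids maps each paper to {reviewer: preference score},
# where a higher score means a stronger bid.

def assign_papers(bids, reviewers_per_paper=3, max_load=4):
    load = {}        # papers assigned so far, per reviewer
    assignment = {}  # paper -> list of chosen reviewers
    for paper, prefs in bids.items():
        # Rank this paper's bidders from strongest to weakest bid.
        ranked = sorted(prefs, key=prefs.get, reverse=True)
        chosen = []
        for reviewer in ranked:
            if load.get(reviewer, 0) < max_load:
                chosen.append(reviewer)
                load[reviewer] = load.get(reviewer, 0) + 1
            if len(chosen) == reviewers_per_paper:
                break
        assignment[paper] = chosen
    return assignment

bids = {
    "paper1": {"alice": 3, "bob": 1, "carol": 2},
    "paper2": {"alice": 2, "bob": 3, "carol": 1},
}
print(assign_papers(bids, reviewers_per_paper=2))
```

The sketch illustrates why expressing your preferences matters: reviewers who do not bid are effectively invisible to the matching, and strong bids pull papers toward you.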
Review content serves two purposes. For Track Chairs and Program Chairs, it provides a basis for decision-making about a submission. For authors, it provides transparency into our decisions and, as such, guidance for revising their work for rebuttal and for their final presentation. Please make your review as detailed and informative as possible; short, superficial reviews that venture uninformed opinions or guesses are worse than no review since they may result in the rejection of a high-quality submission.
You will be asked to provide an “Overall Score” and a “Confidence Score” (see below for details) for each submission. You should explain these values in the “Detailed Comments” text field. Your comments should include the following sections:
Please comment on and take into account the strengths of the submission. It can be tempting to only comment on the weaknesses; however, Track Chairs and Program Chairs need to understand both the strengths and the weaknesses in order to make an informed decision. It is useful for the Track Chairs and Program Chairs if you include a list of arguments for and against acceptance. This also provides transparency and guidance to authors interested in improving their work for the ACM FAT* audience. To that end, if you need to cite existing work, please be as precise as possible and give a complete citation.
Your comments should be detailed, specific, and polite. Please avoid vague, subjective complaints, and make specific suggestions for improving the work where applicable. Always be constructive and help the authors understand your viewpoint, without being dismissive. Using inappropriate or derogatory language is not acceptable.
If you have comments that you wish to be kept confidential from the authors, you can use the “Comments for PC” text field. Such comments might include explicit comparisons of the submission to other submissions and criticisms that are more bluntly stated. If you accidentally find out the identities of the authors, please do not divulge the identities to anyone, but do tell your Track Chair that this has happened and make a note of this in the “Confidential Comments to Track Chairs and PC Chairs” text field.
You will be asked to provide an “Overall Score” between 1 and 7 for each submission. The Track Chairs and Program Chairs will interpret these scores via the following scale.
“Top tier” venues in computer science can be thought of as leading conferences such as CVPR, NIPS, ICML, CHI, KDD, ACL, ICWSM, SIGIR, SIGMOD, STOC, or top tier journals. For LAW and SSH, this would correspond to acceptance in top tier international peer-reviewed journals. Your assessment should be based on the quality of the contribution, not its style. ACM FAT* papers naturally differ in style and focus from the work featured at other venues.
You should NOT assume that you were assigned a representative sample of submissions, nor should you adjust your scores to match the overall conference acceptance rates. The “Overall Score” for each submission should reflect your assessment of the submission’s contributions.
Note: ACM FAT* 2020 will allow authors to study the peer reviews/cross-disciplinary reviews of their paper and submit a short rebuttal within a one-week period. Although you may have the opportunity to review this rebuttal and revise your recommendation, generally speaking, your overall assessment should reflect whether you believe the paper merits acceptance with at most minor revisions. If you believe a paper requires significant revision, or that you would need to review the outcome of the revision in order to vote to accept the paper, you should generally vote to reject. There is one key exception. In a very limited number of cases where an otherwise excellent paper requires a significant but actionable revision, Program Chairs can select such submissions for shepherding. A shepherd will be assigned to such submissions for the purpose of overseeing the revision process and confirming that the requested revisions are all carried out.
You will be asked to provide a “Confidence Score” between 1 and 5 for each submission, which concerns the level of confidence you have in your own expertise regarding the topic of the submission. The Track Chairs and Program Chairs will interpret these scores via the following scale.
Note: If you feel that your confidence rating is likely to be a 1 at the end of a review due to your lack of expertise in the given subject area, you should notify the Track Chair as early as possible in the process. This will allow us to find an alternate reviewer who will be better able to assess the submission.
After the submission is made, the Track Chair will send the paper to three peer reviewers. Once the peer reviews are in, the Track Chair decides whether or not the paper will receive a cross-disciplinary review based on its initial peer review score. If so, one cross-disciplinary reviewer will be asked to review within the 14-working-day time-frame set out above. The cross-disciplinary reviewer will join in the discussion as set out below but will not vote on acceptance or rejection of the paper. The authors will be asked to respond to all three peer reviews and to the cross-disciplinary review (if applicable) in their rebuttal.
The cross-disciplinary review is intended to provide insight from a different field about the potential broader impact of the work, context for the stated claims and related work (in areas/fields beyond the submission’s own), and constructive feedback and potential concerns from a different perspective. The PC Chairs consider this to be an integral part of the reviewing process and a foundational step in creating impactful interdisciplinary work.
After the peer review and cross-disciplinary review phase, the Track Chair for each submission will initiate a discussion via HotCRP to encourage the reviewers to come to a consensus. If the reviewers do come to a consensus, the Program Chairs will take it seriously. The discussion phase is especially important for borderline submissions and submissions where the reviewers’ assessments differ; most submissions fall into one or the other of these categories, so please take this phase seriously.
When discussing a submission, try to remember that different people have different backgrounds and different points of view. Ask yourself, “Do the other reviewers’ comments make sense?” and do consider changing your mind in light of their comments, if appropriate. That said, if you think the other reviewers are not correct, you are not required to change your mind. Reviewer consensus is valuable, but it is not mandatory.
If you have a question about a specific submission or the evaluation criteria for a track, your primary point of contact should be the relevant Track Chairs, whom you can contact through the “Confidential Comments to Track Chairs and PC Chairs” text field. If necessary, questions can be escalated to the PC Chairs.
If you have a question about the reviewing system, your primary point of contact should be the PC Chairs.