Last Updated: December 2025
Reviewers are key drivers of the conference peer review process, and reviewing is a major commitment from members of the communities represented by ACM FAccT. As such, the conference commits to ensuring that Reviewers are provided clearly and in advance with the information they need, are supported by an effective decision-making structure, and are recognized for their efforts. In turn, Reviewers are expected to submit their reviews on time, remain active throughout the reviewing and discussion process, use their best professional judgment, and support and mentor the work under review. This guidance provides the information Reviewers need to do their work.
Papers will be evaluated based on scholarly criteria in their designated area, including
Your review is important for two reasons:
Please aim to make your review as detailed and informative as possible; thorough, thoughtful feedback is essential to ensuring that high-quality submissions are fairly assessed. As you prepare your reviews for papers submitted to FAccT 2026, please make sure to carefully review the guidance below. Remember that you are not only assessing whether a paper is worth publishing, but also providing feedback to the authors. Try to be helpful, constructive, and kind in your reviews. If you are uncertain about what such a review looks like, please don’t hesitate to reach out to your AC or to members of the Program Committee.
The reviewing process proceeds linearly from submission to final decision over several months, as follows:
FAccT does not allow Reviewers to delegate their reviews to external reviewers or to use LLMs to write reviews. If you feel you are not personally equipped to review a paper, please contact the Program Chairs as soon as possible so that the paper can be assigned to a different Reviewer if necessary. For concerns and questions, please contact the PCs at program-chairs@facctconference.org.
The overall scholarly criteria listed above are intended to guide your assessment of the contributions, potential impact, and how appropriately the work was conducted. See the Reviewer Form for more details on assessment criteria.
Good research does not require complicated methods, and comments that methods are too simple should be avoided. A paper should also not be penalized for not using the latest ML/AI models, especially when the research contributions, or the problem the authors aim to illustrate, do not depend on the models being used for experimental purposes.
Reviewers are expected to hold papers to a high standard of ethical practice. The ACM Code of Ethics and Professional Conduct provides the basis for these ethical assessments. The code of ethics comprises seven principles:
Reviewers should apply this general set of principles to the specific papers they review. For FAccT, we ask that Reviewers use their best judgment to assess whether a paper might need further consideration in the areas listed above, and specifically: improper treatment of human subjects or data annotators, including issues of fair pay; use of unethically collected datasets, datasets that have been retracted, or datasets not permitted for use in the research context; potential negative downstream harms that require additional scrutiny or mitigation; environmental concerns; rights concerns; privacy concerns; safety or security concerns; legal concerns, including around copyright; dangerous materials and practices; warfare; discrimination, bias, and fairness issues; deception or harassment; and other topics of concern.
Authors may have addressed ethical concerns directly in their paper (for instance, in an ethical impacts statement at the end of the paper); Reviewers who have concerns should check these statements to see whether those concerns are adequately addressed. Reviewers are not expected to investigate these concerns or provide evidence on them, but rather to assess whether there are ethical concerns that warrant further expert review and outside decision-making on the paper's suitability for inclusion in the proceedings. The Reviewer Form provides a checkbox to flag any concerning submissions to the ACs and PCs, as well as a dedicated field for comments with further details on ethical issues.
Flagged papers are triaged as follows. The Program Chairs study the flagged paper, using all available information, to confirm the concern. Additional Reviewers with expertise in the areas of concern may then be assigned to provide further assessment. The Program Chairs use these assessments to guide the final decision-making and provide the additional reviews to the authors to support their work. Discussions with authors in such cases are handled sensitively, in line with the best practices recommended by the Committee on Publication Ethics (COPE), which FAccT follows. Also see the ACM guide on the use of the code of ethics.
If you have evidence of plagiarism or other severe research integrity issues, please email the Program Chairs directly as soon as possible.
Note that this timeline is subject to change; please refer to communications from the Program Committee for the most up-to-date deadlines.
Can I use LLMs to write reviews?
No, you should not use LLMs to write reviews. The spirit of peer review is a dialogue between the reviewers and authors. Using LLMs to write your review for you is a breach of that implicit contract. Reviewers should refer to the April 2023 ACM Policy on Authorship and use of large language models (LLMs), and the associated SIGCHI blog post, which also apply to reviews and meta-reviews, though please note that FAccT may have more stringent standards than the baseline ACM and SIGCHI policy on LLM usage in reviews.
However, the use of third-party generative AI tools or software (local or cloud) is permitted, though not encouraged, to check fluency, formatting, grammar, spelling, punctuation, and language translation of the review that you wrote. You should carefully review the text that is returned in these use cases, to ensure that the revised text does not contain spurious additions or hallucinations.
You should not upload the submissions or text fragments to any third-party website (i.e., outside of the review platform), such as an LLM interface or similar, in accordance with the ACM Peer Review Policy, as this may breach the confidentiality of the paper submissions.
What if a submission does not cite recent work?
It is not uncommon for different groups to work on the same idea (referred to as contemporaneous work) and for such work to appear on preprint repositories. For FAccT, contemporaneous work is defined as papers released within the four months prior to the submission deadline. Hence, lack of awareness of, citation of, or comparison to such work is not a basis for rejecting a paper. Similarly, it is not reasonable to expect papers to engage with work published close to or after the FAccT submission deadline.
Should I comment on an author’s command of English?
You should avoid commenting on the author's command of English and instead focus on whether the paper is clear and readable. If a review contains problematic statements that the Reviewer is unable or unwilling to address, the Program Chairs (PCs) and Area Chairs (ACs) reserve the right to make necessary edits to ensure the review meets the conference’s standards.
What should I do if I have a concern about potential plagiarism or other severe research integrity issues?
In such cases, please contact the Program Chairs immediately at program-chairs@facctconference.org.