The ACM FAT* 2019 Conference will take place at the Sheraton Atlanta, 165 Courtland St NE, Atlanta, GA 30303. For more information on how to get to the venue, see the conference website.
The Registration and Information Desk will be open on Tuesday from 8:00AM to 6:00PM, and on Wednesday and Thursday from 7:00AM to 6:00PM.
Tuesday
9:00AM - 12:00PM: | Doctoral Consortium (Room: Georgia 6) | ||
12:00PM - 1:00PM: | Lunch (Doctoral Consortium participants only) | ||
1:00PM - 2:30PM: | Tutorials 1: 90 mins | ||
Room: Georgia Hall 2, 3 | Session Chair: Emily Denton (Google) | Translation Tutorial: A History of Quantitative Fairness in Testing. Ben Hutchinson, Margaret Mitchell |
Room: Georgia Hall 4, 5 | Session Chair: Joshua Kroll (UC Berkeley) | Implications Tutorial: Building Community Governance of Risk Assessment. Hannah Sassaman, Reuben Jones, David Robinson. Slides |
Room: Georgia Hall 7, 8 | Session Chair: Ben Fish (Microsoft Research) | Hands-on Tutorial: pip install fairness: a fairness-aware classification toolkit. Sorelle Friedler, Carlos Scheidegger, Suresh Venkatasubramanian |
Room: Georgia Hall 9 | Session Chair: Michael Ekstrand (Boise State University) | Translation Tutorial: Values, Reflection and Engagement in Automated Decision-Making. Roel Dobbe, Morgan Ames. Slides; Blog post |
2:30PM - 3:00PM: | Coffee Break: 30 mins | ||
3:00PM - 4:30PM: | Tutorials 2: 90 mins | ||
Room: Georgia Hall 2, 3 | Session Chair: Bo Cowgill (Columbia University) | (3:00PM-3:45PM) Invited Tutorial: Keegan Hankes, Swathi Shanmugasundaram. (3:45PM-4:30PM) Implications Tutorial: Parole denied: One Man's Fight Against a COMPAS Risk Assessment. Cynthia Conti-Cook, Glenn Rodriguez |
Room: Georgia Hall 4, 5 | Session Chair: Luke Stark (Microsoft Research) | (3:00PM-3:45PM) Translation Tutorial: Toward a Theory of Race for Fairness in Machine Learning. Emanuel Moss. Slides. (3:45PM-4:30PM) Jacob Metcalf |
Room: Georgia Hall 7, 8 | Session Chair: Aylin Caliskan (George Washington University) | Hands-on Tutorial: AI Fairness 360 (part 1). Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilović, Seema Nagar, Karthikeyan Natesan Ramamurthy, John T. Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush Varshney, Dakuo Wang, Yunfeng Zhang. Slides |
Room: Georgia Hall 9 | Session Chair: Tolga Bolukbasi (Google) | Hands-on Tutorial: Measuring Unintended Bias in Text Classification Models with Real Data. Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, Lucy Vasserman. Slides; IPython Notebook |
4:30PM - 5:00PM: | Coffee Break: 30 mins | ||
5:00PM - 6:30PM: | Tutorials 3: 90 mins | ||
Room: Georgia Hall 2, 3 | Session Chair: Swati Gupta (Georgia Tech) | Translation Tutorial: Challenges of incorporating algorithmic fairness into industry practice. Henriette Cramer, Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miroslav Dudík, Hanna Wallach, Sravana Reddy, Jean Garcia-Gathright. Slides |
Room: Georgia Hall 4, 5 | Session Chair: Joshua Kroll (UC Berkeley) | Suchi Saria, Adarsh Subbaswamy |
Room: Georgia Hall 7, 8 | Session Chair: Aylin Caliskan (George Washington University) | Hands-on Tutorial: AI Fairness 360 (part 2). Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilović, Seema Nagar, Karthikeyan Natesan Ramamurthy, John T. Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush Varshney, Dakuo Wang, Yunfeng Zhang |
Room: Georgia Hall 9 | Session Chair: Bo Cowgill (Columbia University) | (5:00PM-5:45PM) Translation Tutorial: What technologists need to know about AI for international development. Craig Jolley, Shachee Doshi. Slides available upon request; contact cjolley@usaid.gov |
Wednesday
8:00AM - 8:45AM: | Continental Breakfast (Capitol Prefunction) | ||
8:45AM - 9:00AM: | Opening Remarks | ||
9:00AM - 10:00AM: | Keynote 1 | ||
Speaker: Jon Kleinberg
Fairness, Rankings, and Behavioral Biases
Many of the settings in which we seek to quantify notions of fairness and equity are based on screening decisions, where we must select from among a pool of candidates, often by ranking them. We consider how human behavioral biases can interact with the process of ranking, how we might build formal models of these biases and their effects, and what these models suggest about possible interventions. The resulting analysis provides some constructive examples of the principle that when dealing with biased agents, constraining their behavior in specific ways can sometimes both ameliorate the bias and improve the agents' performance in terms of the objectives they have set for themselves. This talk is based on joint work with Sendhil Mullainathan and Manish Raghavan.
Discussant: Jennifer Wortman Vaughan (Microsoft Research) |
10:00AM - 10:50AM: | Session 1: Framing and Abstraction Video | ||
Session Chair: Hanna Wallach
10:50AM - 11:20AM | Break | ||
11:20AM - 12:10PM: | Session 2: Systems and Measurement Video | ||
Session Chair: Suresh Venkatasubramanian
12:10PM - 1:10PM: | Lunch (Garden Courtyard) | ||
1:10PM - 2:00PM: | Session 3: Profiling and Representation Video | ||
Session Chair: Carlos Castillo
2:00PM - 2:50PM: | Session 4: Fairness Methods Video | ||
Session Chair: Zack Lipton
2:50PM - 3:20PM: | Break | ||
3:20PM - 4:10PM: | Session 5: Content Distribution Video | ||
Session Chair: Robin Burke
4:10PM - 5:00PM: | Session 6: Law and Policy Video | ||
Session Chair: Andrew Selbst
5:30PM - 7:00PM: | Reception (Garden Courtyard) |
Thursday
8:00AM - 8:45AM: | Continental Breakfast (Capitol Prefunction) | ||
8:45AM - 9:00AM: | Opening Remarks | ||
9:00AM - 10:00AM: | Keynote 2 | ||
Speaker: Deirdre Mulligan
Beyond algorithmic scapegoating: fostering cultures of algorithmic responsibility through administrative law and design
Algorithmic systems and those that design and sell them are being routinely called out for the biases they embed. Yet surely the organizations that purchase them and the professionals who use them share some responsibility for the algorithmic tools they choose. Through case studies, this talk will explore factors that lead government agencies to acquire and use algorithmic systems that are misaligned with their goals and values, and propose two interventions to foster cultures of algorithmic responsibility in the public sector: novel uses of administrative law to police government adoption of algorithmic systems with insufficient attention to the politics they embed; and ‘contestable’ design, which publicizes values-significant parameters and settings and assists end-users in understanding and selecting them. Together these interventions cultivate skepticism, reflection, and critical engagement with algorithms in the wild, and ensure government agencies attend to values during the acquisition and deployment of algorithmic systems.
Discussant: Lilian Edwards (Newcastle University) |
10:00AM - 10:50AM: | Session 7: Explainability Video | ||
Session Chair: Giles Hooker
10:50AM - 11:20AM: | Break | ||
11:20AM - 12:10PM: | Session 8: Economic Models I Video | ||
Session Chair: Nathan Srebro
12:10PM - 1:10PM: | Lunch (Garden Courtyard) | ||
1:10PM - 2:00PM: | Town Hall | ||
2:00PM - 2:50PM: | Session 9: Learning Algorithms Video | ||
Session Chair: Nicole Immorlica
2:50PM - 3:20PM: | Break | ||
3:20PM - 4:10PM: | Session 10: Empirical Studies Video | ||
Session Chair: Karen Levy
4:10PM - 5:00PM: | Session 11: Economic Models II Video | ||
Session Chair: Inbal Talgam-Cohen