Hacking the Human Bias in AI
Tuesday 28th January 2020 — Discussant: Meredith Morris
People tend to overtrust sophisticated computing devices, including robotic systems. As these systems become more fully interactive with humans in day-to-day activities, the role of bias in these human-robot interaction scenarios must be investigated more carefully. Bias is a feature of human life that goes by many intertwined, often interchangeable names and labels: stereotypes, prejudice, implicit or subconsciously held beliefs. In the digital age, this bias is often encoded in AI algorithms and can manifest itself through them; humans then take guidance from these algorithms, resulting in the phenomenon of excessive trust. Trust here reflects the observation that, when interacting with intelligent systems, humans tend to exhibit behaviors similar to those they show toward other humans; the concern is that people may underappreciate or misunderstand the risk of handing decisions over to an intelligent agent. Bias compounds this risk of trust, or overtrust, because these systems learn by mimicking our own thinking processes and thereby inherit our implicit biases. Consequently, the propensity to trust and the potential for bias may directly affect the overall quality of the interaction between humans and machines, whether in healthcare, job placement, or other high-impact life scenarios. In this talk, we will discuss this phenomenon of integrated trust and bias through the lens of intelligent systems that interact with people in scenarios realizable in the near term.
Ayanna Howard is the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing at the Georgia Institute of Technology. She also holds a faculty appointment in the School of Electrical and Computer Engineering and serves on the Boards of Directors of the Partnership on AI and Autodesk. Dr. Howard's career focus is on intelligent technologies that must adapt to and function within a human-centered world. Her work, which encompasses advancements in artificial intelligence (AI), assistive technologies, and robotics, has resulted in over 250 peer-reviewed publications across a range of projects, from healthcare robots in the home to AI-powered STEM apps for children with diverse learning needs. Her accomplishments have been highlighted in a number of awards and articles, including features in USA Today, Upscale, and TIME Magazine, as well as recognition as one of the 23 most powerful women engineers in the world by Business Insider and one of the Top 50 U.S. Women in Tech by Forbes. In 2013, she founded Zyrobotics, which develops STEM educational products to engage children of all abilities. Before joining Georgia Tech, Dr. Howard was a Senior Robotics Researcher and Deputy Manager in the Office of the Chief Scientist at NASA's Jet Propulsion Laboratory. At Georgia Tech, she has also served as Associate Director of Research for the Institute for Robotics and Intelligent Machines, Chair of the Robotics Ph.D. program, and Associate Chair for Faculty Development in ECE.
Productivity and Power: The Role of Technology in Political Economy
Wednesday 29th January 2020 — Discussant: Lina Dencik
Market democracies struggle with economic insecurity and growing inequality, which present new threats to democracy. The revival of "political economy" offers a frame for understanding the relationship between productivity and justice in market societies. It reintegrates power and the social and material context (institutions, ideology, and technology) into our analysis of the social relations of production: how we make and distribute what we need and want to have. Organizations and individuals, alone and in networks, struggle over how much of a society's production happens in the market sphere, how much happens in nonmarket relations, and how deeply those aspects that do occur in markets are embedded in social relations of mutual obligation and solidarism. These struggles involve efforts to shape institutions, ideology, and technology in ways that trade off productivity and power, in both the short and long term. The outcome of this struggle shapes the highly divergent paths that different market societies take, from oligarchic to egalitarian, and their stability as pluralistic democracies.
Yochai Benkler is the Berkman Professor of Entrepreneurial Legal Studies at Harvard Law School and the Berkman Klein Center for Internet and Society at Harvard University. Since the 1990s he has played a part in characterizing the role of information commons and decentralized collaboration in innovation, information production, and freedom in the networked economy and society. His books include Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (Oxford University Press, 2018) and The Wealth of Networks: How Social Production Transforms Markets and Freedom (Yale University Press, 2006), which won academic awards from the American Political Science Association and the American Sociological Association, as well as the McGannon Award for social and ethical relevance in communications. In 2012 he received a lifetime achievement award from Oxford University "in recognition of his extraordinary contribution to the study and public understanding of the Internet and information goods." His work is socially engaged, earning him the Ford Foundation Visionaries Award in 2011, the Electronic Frontier Foundation's Pioneer Award in 2007, and the Public Knowledge IP3 Award in 2006. It is also anchored in the realities of markets: the Financial Times cited it as "perhaps the best work yet about the fast moving, enthusiast-driven Internet," and Strategy and Business named it the best business book about the future in 2006. Benkler has advised governments and international organizations on innovation policy and telecommunications, and serves on the boards or advisory boards of several nonprofits working towards an open society. His work can be freely accessed at benkler.org.
Making Accountability Real: Strategic Litigation
Thursday 30th January 2020 — Discussant: Janet Haven
How can we make fairness, accountability, and transparency a reality? Litigation is an effective tool for advancing these principles in the design and deployment of automated decision-making technologies. The courts can be strong guarantors of our rights in a variety of contexts, and they have already shown a willingness to play this role in the digital rights setting. As automated decisions increasingly affect every aspect of our lives, we need to engage the courts on these complex issues and enable them to protect our human rights in the digital sphere. We are already seeing cases that challenge facial recognition technology, predictive policing systems, and systems that conduct needs assessments in the provision of public services. However, much work remains to be done in this space. What opportunities do the different frameworks in this area, especially European regulations such as the GDPR, offer, and how can we maximise their potential?
Nani Jansen Reventlow is the founding Director of the Digital Freedom Fund, which supports partners in Europe in advancing digital rights through strategic litigation. She is a recognised international lawyer and expert in human rights litigation, responsible for groundbreaking freedom of expression cases across several national and international jurisdictions. Nani is a Lecturer in Law at Columbia Law School and an Associate Tenant at Doughty Street Chambers. She is also a Senior Fellow at Columbia Law School's Human Rights Institute and an Affiliate at the Berkman Klein Center for Internet & Society at Harvard University, where she was a 2016-2017 Fellow. Nani has been an advisor to Harvard's Cyberlaw Clinic since 2016.