When Not to Design, Build, or Deploy

An interactive discussion at the ACM FAT* 2020 conference

As part of the ACM Conference on Fairness, Accountability, and Transparency (FAT*), taking place from January 27th to January 30th in Barcelona, Spain, we cordially invite participation in a plenary discussion on when we should not design, build, or deploy models and systems. This session is part of the Critiquing and Rethinking Accountability, Fairness and Transparency (CRAFT) track at FAT*.

Session date and time: The 90-minute session will take place on Wednesday, January 29, 5:00-6:30 pm, in the conference’s plenary room. (See https://fatconference.org/2020/programschedule.html.)

Background

While much FAT* work to date has focused on various model and system interventions, the goal of this session is to foster discussion of when we should not design, build, or deploy models and systems in the first place. Given the recent push for moratoria on facial recognition, protests around the sale of digital technologies, and the ongoing harms to marginalized groups from automated systems such as risk prediction, a broader discussion of how, when, and why to say no, as academics, practitioners, activists, and society, seems both relevant and urgent.

This interactive gathering will feature diverse invited and contributed perspectives presented in a fishbowl conversation format, accompanied by questions and comments from the audience. While the central focus of the discussion will be on individual instances of past refusal efforts, the high-level goal of the session is to lay a foundation for answering longer-term questions about: (i) relevant historical and disciplinary contexts, (ii) frameworks and guidelines practitioners can use when reasoning about ‘not designing, building, or deploying’, and (iii) the broader politics and aftermath of refusal.

Session slides

Session slides, including those accompanying the opening presentations of featured participants, can be found here.

Session plan

I. Introduction (10 mins)

II. When Not to Design, Build, or Deploy (frameworks, contexts) (40 mins)

III. How to Refuse to Design, Build, or Deploy (practice, politics) (40 mins)

Speaker bios are included below.

1. When Not to Design, Build, or Deploy (frameworks, contexts)

De’Aira Bryant (Georgia Institute of Technology) De’Aira Bryant is a computer science doctoral student in the School of Interactive Computing at the Georgia Institute of Technology. Her research spans human-robot interaction and artificial intelligence, exploring the possibilities for interactive communication between children and intelligent embodied systems. In particular, she is interested in developing emotionally intelligent robotic systems specialized for children, using AI for social good, and ensuring the protection of vulnerable populations. De’Aira is an NSF GRFP, GEM Consortium, and Georgia Tech Sloan fellow. She was also recently honored as the 2019 guest scholar at the Aspen Institute Roundtable on Artificial Intelligence.
Melissa Hall (Facebook) I am a software engineer on the Facebook AI team responsible for ensuring that our products and AI systems are fair and unbiased. I graduated from the University of Texas at Austin in May 2019 with degrees in Electrical & Computer Engineering and Plan II, an interdisciplinary Liberal Arts honors major. I was an undergraduate Fellow with the Clements Center for National Security and an Archer Fellow in Washington, D.C. where I worked at the Atlantic Council’s Digital Forensic Research Lab. I also developed the curriculum, and served as the teaching assistant, for a freshman class entitled Pathways to Civic Engagement.
Mathana (Tech Ethicist & Fellow, Centre for Internet and Human Rights, European University Viadrina) Mathana Stender is a Berlin-based tech ethicist, rights advocate, and storyteller who investigates the impact of emerging technologies on individuals, communities, and culture. Their multidisciplinary research brings together art, anthropology, philosophy, and socio-economic analysis. Mathana undertakes research for transparency initiatives that track global biometric surveillance, drafts ethical frameworks around the emerging VR technological ecosystem, contributes technical expertise to international disarmament initiatives, and seeks to create long-term solutions to data agency with the preservation-focused digital archivist organization OpenArchive. They are also a member of working groups at the IEEE's Global Initiative for Ethical AI and Autonomous Systems, where they co-author technical standards around algorithmic bias and personal AI assistants. Mathana is a fellow at the Centre for Internet and Human Rights at the European University Viadrina, and holds an MA in Global Communication from the Chinese University of Hong Kong and a BA in international relations and law from the University of Texas at Dallas.
Jeffrey Sorensen (Jigsaw/Google) Jeffrey Sorensen is part of the original team at Jigsaw that launched the Perspective API. He joined Google in 2010 to work with the speech team, developing compact language models for on-device recognizers on mobile devices, and led a team responsible for data collection and annotation. He has worked on machine learning models for speech recognition and translation, both at Google and previously at IBM.
Catherine Stinson (University of Bonn & University of Cambridge) Catherine has an MSc in computer science from the University of Toronto and a PhD in History & Philosophy of Science from the University of Pittsburgh. She used to work on AI projects, and now writes critically about them. Catherine’s articles on the epistemology and ethics of ML have appeared in The Routledge Handbook of the Computational Mind, Philosophy of Science, The Globe and Mail, THIS Magazine, think tank reports, and education policy volumes.
Lucy Vasserman (Jigsaw/Google) Lucy Vasserman is a manager and technical lead on the Conversation AI team, which studies how computers can learn to understand the nuances and context of abusive language at scale. Lucy works on machine learning research to improve the team’s core models, with a focus on combating algorithmic bias. She also collaborates with internal and external users to ensure the Conversation AI models capture their needs. Prior to joining Jigsaw, Lucy worked on machine learning research and engineering for several other Google teams, including Speech Recognition and Google Shopping.

2. How to Refuse to Design, Build, or Deploy (practice, politics)

Laurence Diver (Vrije Universiteit Brussel) Laurence is a postdoctoral researcher in COHUBICOL, a multidisciplinary ERC Advanced Grant-funded project investigating how the foundational principles that underpin modern law can be retained, and if necessary adapted, as legal practice and the legal system are increasingly mediated by code- and data-driven systems. He holds first-class LLB and LLM degrees from the University of Edinburgh, where he recently defended a multidisciplinary PhD synthesising legal-theoretical notions of legitimacy with the theory of design (title: “Digisprudence: the affordance of legitimacy in code-as-law”). You can read more about his research at laurenced.org, or follow him on Twitter @laurence_diver.
Varoon Mathur (AI Now Institute) As a Technology Fellow, I conduct interdisciplinary research on the social implications of machine learning and related technologies in public domains. My specific interests include the limitations of Electronic Health Records (EHRs) for building predictive algorithms in health care, the limitations of fairness metrics and bias mitigation techniques for emotion/facial recognition, and how social systems analysis can be part of auditing sociotechnical systems. I joined AI Now after serving as a Microsoft Data Science for Social Good Fellow and earning my software engineering degree from UBC with a focus in machine learning. I am also a global health activist and a Co-ordinating Committee Fellow with Universities Allied for Essential Medicines, a student-driven NGO focused on access-to-medicines issues for vulnerable and marginalized populations. I have advised on global health policy and research at the World Health Organization, and am a former TEDx speaker on AI ethics in health care.
Bogdana Rakova (Accenture) Bogdana works on the ethics and governance of AI through standardization and other regulatory innovation, currently as part of the IEEE P7010 working group focused on well-being metrics for the implications of AI. She is also part of a newly formed Responsible AI team at Accenture, where her work focuses on technical and ethnographic research into the interactions between people and AI systems. She serves on the editorial board of Intersections of AI and Community Well-Being, a forthcoming special issue of Springer’s International Journal of Community Well-Being. She also recently joined the board of directors of the Happiness Alliance, where her work contributes to establishing frameworks that allow communities, organizations, and governments to design, develop, and deploy AI safely and responsibly.
Jat Singh (University of Cambridge) Dr Jat Singh is based at the Department of Computer Science & Technology, University of Cambridge. He leads the multidisciplinary Compliant and Accountable Systems research group, which works at the intersection of computer science and law, exploring means for better aligning technology with legal concerns, and vice versa. He also co-chairs the Cambridge Trust & Technology Initiative, which drives research exploring the dynamics of trust and distrust in relation to internet technologies, society, and power. Jat is a Fellow of the Alan Turing Institute and is active in the tech-policy space, having served on advisory councils for the Department for Business, Innovation & Skills, the Centre for Data Ethics & Innovation, and the Financial Conduct Authority.
Irene Solaiman (OpenAI) Irene Solaiman is a policy researcher at OpenAI. She conducts social impact and fairness analysis and policymaker engagement as part of the Policy Team. She was a fellow at Harvard’s Berkman Klein Center as part of the Assembly Student Fellowship (formerly known as Techtopia), researching the ethics and governance of AI. Irene holds a Master in Public Policy from the Harvard Kennedy School and a self-designed B.A. in International Relations from the University of Maryland.

Organizers

Solon Barocas (Cornell University & Microsoft Research New York City)
Asia J. Biega (Microsoft Research Montréal)
Benjamin Fish (Microsoft Research Montréal)
Jedrzej Niklas (London School of Economics and Political Science)
Luke Stark (Microsoft Research Montréal)

Contact

Should you have any questions, please e-mail us at when.not.to.build@gmail.com.