This workshop was part of the 2020 Neural Information Processing Systems conference (NeurIPS) and was held virtually, alongside the other workshops, on Saturday, December 12, 2020.

Watch the recordings

Following growing concerns about both harmful research impact and research conduct in computer science [1], including concerns about research published at NeurIPS [2] [3] [4], this year’s conference introduced two new mechanisms for ethical oversight: a requirement that authors include a “broader impact statement” in their paper submissions, and additional evaluation criteria asking paper reviewers to identify any potential ethical issues with the submissions [5] [6].

These efforts reflect a recognition that existing research norms have failed to address the impacts of AI research [7] [8] [9] [10], and take place against the backdrop of a larger reckoning with the role of AI in perpetuating injustice [11] [12] [13] [14] [15]. The changes have been met with both praise and criticism [16]. Some within and outside the community see them as a crucial first step towards integrating ethical reflection and review into the research process, fostering the changes necessary to protect populations at risk of harm. Others worry that AI researchers are not well placed to recognize and reason about the potential impacts of their work, as effective ethical deliberation may require different expertise and the involvement of other stakeholders.

This debate reveals that even as the AI research community begins to grapple with the legitimacy of certain research questions and to reflect critically on its research practices, many open questions remain about how to ensure effective ethical oversight. This workshop therefore aims to examine how concerns about harmful impacts should affect the way the research community develops its research agendas, conducts its research, evaluates its research contributions, and handles the publication and dissemination of its findings. The event complements other NeurIPS workshops this year (e.g., [17] [18]) devoted to normative issues in AI and builds on others from years past (e.g., [19] [20] [21]), but adopts a distinct focus on the ethics of research practice and the ethical obligations of researchers.

Call for Participation

The submission deadline (October 12, 2020) has passed.

The workshop will include contributed papers. All accepted papers will be allocated either a virtual poster presentation or a virtual talk slot. Authors will have the option to have final versions of workshop papers and talk recordings linked on the workshop website.

Submissions may be up to 4 pages, excluding references and supplementary materials, and must be formatted using the provided NeurIPS general submission template. Papers should not include any identifying information about the authors, to allow for anonymous review. Previously published work (or work under review) is acceptable, with the exception of previously published machine learning research.

We invite submissions relating to the role of the research community in navigating the broader impacts of AI research. Workshop paper submissions can include case studies, surveys, analyses, and position papers on topics including, but not limited to, the following:

  • Mechanisms of ethical oversight in AI research: What are some of the practical mechanisms for anticipating future risks and mitigating harms caused by AI research? Are such practices actually effective in improving societal outcomes and protecting vulnerable populations? To what extent do they help in bridging the gap between AI researchers and those with other perspectives and expertise, including the populations at risk of harm?
    • Analysis of the strengths and limitations of the NeurIPS broader impact statement as a mechanism for ethical oversight
    • Reflections on experiences with this year’s NeurIPS ethical oversight process
    • Ideas for alternative ethical review procedures, including how such determinations should be made and who should be involved in these determinations [22]
    • Assessments of the strengths and limitations of research ethics and institutional review boards [23], particularly with respect to the formulation of research questions and the broader impact of research findings [24]
    • Examples of how other fields engaged in high-risk research (e.g., nuclear energy, nanotechnology, synthetic biology, geoengineering) have handled the issue of ethical oversight
    • Lessons from research traditions that work directly with affected communities to develop research questions and research designs
  • Challenges of AI research practice and responsible publication: What practices are appropriate for the responsible development, conduct, and dissemination of AI research? How can we ensure widespread adoption?
    • Surveys of responsible research practice in AI research, including common practices around data collection, crowdsourced labeling, documentation and reporting requirements, declarations of conflicts of interest, etc. [25] [26] [27] [28]
    • Limitations and benefits of the conference-based publication format, peer review, and other characteristics of AI publication norms, including alternative proposals (e.g., gated or staged release) [29] [30] [31]
  • Collective and individual responsibility in AI research: Who is best placed to anticipate and address potential research impacts? What should be the role of AI researchers and the AI research community? And how do we get there?
    • Discussions of the role and obligations of different stakeholders (e.g., conference organizers, institutions, funders, researchers, users/customers) in ensuring ethical reflection and anticipating impacts of AI research [32]
    • How does the lack of diversity in the AI research community contribute to the problem of overlooking or underestimating potential harms?
    • Proposals for how to empower the impacted populations to shape research agendas, practices, and publication norms [33]
    • What constitutes high-quality ethical reflection, and how can researchers prepare themselves for it?
    • How do the obligations of researchers and practitioners differ when considering the potential impacts of their work? Are there meaningful differences across research and applied contexts?
    • Reflections on how ethical review could be integrated into different parts of the research pipeline, such as the funding process, IRB requirements, etc.
  • Anticipated risks and known harms of AI research: How should researchers identify the risks posed by their work, and along which dimensions of concern? How can we ensure that researchers are aware of known harms caused by related research, and that the field is responsive to the needs and concerns of affected communities?
    • Examples of effective and ineffective mechanisms for identifying relevant risks, including ethical review of research proposals and pre-publication review [34]
    • Proposals for creative approaches to understanding the impacts of research and prioritizing the protection of affected communities [35] [36]
    • Case studies of AI research that had harmful impacts
    • Examples of AI research that had unanticipated consequences

Authors will be notified of acceptance by October 30, 2020.

Agenda (Eastern Time)

08:30 - 08:45 Welcome
08:45 - 09:15 Keynote
  • Hanna Wallach (Microsoft)
09:15 - 10:15 Panel: Ethical oversight in the peer review process
  • Sarah Brown (University of Rhode Island)
  • Heather Douglas (Michigan State University)
  • Iason Gabriel (DeepMind, NeurIPS Ethics Advisor)
  • Brent Hecht (Northwestern University, Microsoft)
Chaired by Rosie Campbell
10:15 - 10:30 Break
10:30 - 11:30 Panel: Harms from AI research
  • Anna Lauren Hoffmann (University of Washington)
  • Nyalleng Moorosi (Google AI)
  • Vinay Prabhu (UnifyID)
  • Jake Metcalf (Data & Society)
  • Sherry Stanley (Amazon Mechanical Turk)
Chaired by Deborah Raji
11:30 - 12:30 Panel: How should researchers engage with controversial applications of AI?
  • Logan Koepke (Upturn)
  • Cathy O'Neil (O'Neil Risk Consulting & Algorithmic Auditing)
  • Tawana Petty (Stanford University)
  • Cynthia Rudin (Duke University)
  • Shawn Bushway (University at Albany)
Chaired by Deborah Raji
12:30 - 13:30 Break: lunch, with lightning talks of accepted papers playing in parallel (5-7 minutes each)
13:30 - 14:30 Parallel discussions with authors of submitted papers
14:30 - 15:30 Panel: Responsible publication: NLP case study
  • Miles Brundage (OpenAI)
  • Bryan McCann (Formerly Salesforce)
  • Colin Raffel (University of North Carolina at Chapel Hill, Google Brain)
  • Natalie Schluter (Google Brain, IT University of Copenhagen)
  • Zeerak Waseem (University of Sheffield)
Chaired by Rosie Campbell
15:30 - 15:45 Break
15:45 - 16:45 Panel: Strategies for anticipating and mitigating risks
  • Ashley Casovan (AI Global)
  • Timnit Gebru (Google)
  • Shakir Mohamed (DeepMind)
  • Aviv Ovadya (Thoughtful Technology Project)
Chaired by Solon Barocas
16:45 - 17:45 Panel: The roles of different parts of the research ecosystem in navigating broader impacts
  • Josh Greenberg (Alfred P. Sloan Foundation)
  • Liesbeth Venema (Nature)
  • Ben Zevenbergen (Google)
  • Lilly Irani (UC San Diego)
Chaired by Solon Barocas
17:45 - 18:00 Closing remarks

Program Committee

  • Alex Hanna, Google
  • Angus Galloway, University of Guelph
  • Asia Biega, Microsoft Research
  • Aviv Ovadya, Thoughtful Technology Project
  • Bran Knowles, Lancaster University
  • Carina Prunkl, University of Oxford
  • Carolina Aguerre, Centre for Global Cooperation Research
  • Casey Fiesler, University of Colorado
  • David Robinson, Cornell University
  • Gillian Hadfield, University of Toronto
  • Grace Abuhamad, Element AI
  • Jake Metcalf, Data & Society
  • Jasmine Wang, Partnership on AI
  • Karrie Karahalios, University of Illinois at Urbana-Champaign
  • Kate Vredenburgh, London School of Economics
  • Katie Shilton, University of Maryland
  • Luke Stark, University of Western Ontario
  • Malavika Jayaram, Harvard University
  • Maria De-Arteaga, University of Texas at Austin
  • Markus Anderljung, University of Oxford
  • Matthew Bui, New York University
  • McKane Andrus, Partnership on AI
  • Michael Zimmer, Marquette University
  • Nyalleng Moorosi, Google
  • Rumman Chowdhury, Accenture AI
  • Seda Gürses, Delft University of Technology
  • Seth Lazar, Australian National University
  • Stevie Chancellor, University of Minnesota
  • Toby Shevlane, University of Oxford
  • Zachary Lipton, Carnegie Mellon University

Recordings of live panel sessions and talks


Welcome

Carolyn Ashurst (GovAI) and Rosie Campbell (PAI)

Keynote

Hanna Wallach (Microsoft)

Panel: Ethical oversight in the peer review process

Sarah Brown (University of Rhode Island), Heather Douglas (Michigan State University), Iason Gabriel (DeepMind, NeurIPS Ethics Advisor), Brent Hecht (Northwestern University, Microsoft). Chaired by Rosie Campbell (PAI).

Panel: Harms from AI research

Anna Lauren Hoffmann (University of Washington), Nyalleng Moorosi (Google AI), Vinay Prabhu (UnifyID), Jake Metcalf (Data & Society), Sherry Stanley (Amazon Mechanical Turk). Chaired by Deborah Raji (Mozilla).

Panel: How should researchers engage with controversial applications of AI?

Cathy O'Neil (O'Neil Risk Consulting & Algorithmic Auditing), Tawana Petty (Stanford University), Cynthia Rudin (Duke University). Chaired by Deborah Raji (Mozilla).

Panel: Responsible publication: NLP case study

Miles Brundage (OpenAI), Bryan McCann (Formerly Salesforce), Colin Raffel (University of North Carolina at Chapel Hill, Google Brain), Natalie Schluter (Google Brain, IT University of Copenhagen), Zeerak Waseem (University of Sheffield). Chaired by Rosie Campbell (PAI).

Panel: Strategies for anticipating and mitigating risks

Ashley Casovan (AI Global), Timnit Gebru (Google), Shakir Mohamed (DeepMind), Aviv Ovadya (Thoughtful Technology Project). Chaired by Solon Barocas (Microsoft).

Panel: The roles of different parts of the research ecosystem in navigating broader impacts

Josh Greenberg (Alfred P. Sloan Foundation), Liesbeth Venema (Nature), Ben Zevenbergen (Google), Lilly Irani (UC San Diego). Chaired by Solon Barocas (Microsoft).

Closing remarks

Stuart Russell (CHAI)

Pre-recorded lightning talks of accepted papers


Ideal theory in AI ethics

Daniel Estrada (paper)

AI in the “Real World”: Examining the Impact of AI Deployment in Low-Resource Contexts

Chinasa T Okolo (paper)

Nose to Glass: Looking In to Get Beyond

Josephine Seah (paper)

Anticipatory Ethics and the Role of Uncertainty

Priyanka Nanayakkara, Nicholas Diakopoulos, Jessica Hullman (paper)

Auditing Government AI: Assessing ethical vulnerability of machine learning

Alayna A Kennedy (paper)

Like a Researcher Stating Broader Impact For the Very First Time

Grace Abuhamad, Claudel Rheault (paper)

Overcoming Failures of Imagination in AI Infused System Development and Deployment

Margarita Boyarskaya, Alexandra Olteanu, Kate Crawford (paper)

Training Ethically Responsible AI Researchers: a Case Study

Hang Yuan, Claudia Vanea, Federica Lucivero, Nina Hallowell (paper)

Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics

Bo Cowgill, Fabrizio Dell'Acqua, Augustin Chaintreau, Nakul Verma, Samuel Deng, Daniel Hsu (paper)

The Managerial Effects of Algorithmic Fairness Activism

Bo Cowgill, Fabrizio Dell'Acqua, Sandra Matz (paper)

Ethical Testing in the Real World: Recommendations for Physical Testing of Adversarial Machine Learning Attacks

Ram Shankar Siva Kumar, Maggie Delano, Kendra Albert, Afsaneh Rigot, Jonathon Penney (paper)

Analyzing the Machine Learning Conference Review Process

David Tran, Alex Valtchanov, Keshav R Ganapathy, Raymond Feng, Eric Slud, Micah Goldblum, Tom Goldstein (paper)

Non-Portability of Algorithmic Fairness in India

Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Vinodkumar Prabhakaran (paper)

An Ethical Highlighter for People-Centric Dataset Creation

Margot Hanley, Apoorv Khandelwal, Hadar Averbuch-Elor, Noah Snavely, Helen Nissenbaum (paper)