This workshop was part of the 2020 Conference on Neural Information Processing Systems (NeurIPS) and was held virtually, alongside the other workshops, on Saturday, December 12, 2020.
Following growing concerns about both harmful research impacts and research conduct in computer science [1], including concerns about research published at NeurIPS [2] [3] [4], this year’s conference introduced two new mechanisms for ethical oversight: a requirement that authors include a “broader impact statement” in their paper submissions, and additional evaluation criteria asking paper reviewers to identify any potential ethical issues with the submissions [5] [6].
These efforts reflect a recognition that existing research norms have failed to address the impacts of AI research [7] [8] [9] [10], and they take place against the backdrop of a larger reckoning with the role of AI in perpetuating injustice [11] [12] [13] [14] [15]. The changes have been met with both praise and criticism [16]: some within and outside the community see them as a crucial first step toward integrating ethical reflection and review into the research process, fostering the changes necessary to protect populations at risk of harm. Others worry that AI researchers are not well placed to recognize and reason about the potential impacts of their work, since effective ethical deliberation may require different expertise and the involvement of other stakeholders.
This debate reveals that even as the AI research community begins to grapple with the legitimacy of certain research questions and to reflect critically on its research practices, many open questions remain about how to ensure effective ethical oversight. This workshop therefore aims to examine how concerns about harmful impacts should affect the way the research community develops its research agendas, conducts its research, evaluates its research contributions, and handles the publication and dissemination of its findings. The event complements other NeurIPS workshops this year devoted to normative issues in AI (e.g., [17] [18]) and builds on others from years past (e.g., [19] [20] [21]), but adopts a distinct focus on the ethics of research practice and the ethical obligations of researchers.