Minnesota Natural Language Processing Seminar Series: Maarten Sap
The Minnesota Natural Language Processing (NLP) Seminar is a venue for faculty, postdocs, students, and anyone else interested in theoretical, computational, and human-centric aspects of natural language processing to exchange ideas and foster collaboration. Talks are held every other Friday from 2-3 p.m. during the fall 2022 semester.
This week's speaker, Maarten Sap (CMU), will give a talk titled "Towards Prosocial NLP: Reasoning about and Responding to Toxicity in Language".
Data-driven AI systems, such as conversational AI agents, are increasingly capable and powerful, yet they still produce severely toxic outputs. This harmful behavior hinders their safe deployment in the real world. In this talk, I will first examine how data-driven conversational AI systems acquire toxic behavior by studying the conversation dynamics of contextually toxic language. In a dataset called ToxiChat, we collect annotations of the toxicity and stance of human and model responses to toxic inputs, finding that both humans and models are more likely to agree with toxic content than with neutral content. Then, I will present ProsocialDialog, a new large-scale multi-turn dialogue dataset for teaching conversational AI systems to respond to problematic content. By grounding responses in social norms, or rules-of-thumb, predicted by our safety model Canary, dialogue models can push back in the face of toxic or problematic inputs and generate socially acceptable responses. Finally, I will discuss the subjectivity challenges of conceptualizing toxicity detection as an NLP task by examining how perceptions of the offensiveness of text depend on reader attitudes and identities. Through an online study, we find that readers' political leaning and their attitudes toward racism and free speech correlate with over- or under-detection of text as toxic. I will conclude with future directions for designing NLP systems with positive societal impact.
Maarten Sap is an assistant professor in Carnegie Mellon University's Language Technologies Institute (CMU LTI). His research focuses on making NLP systems socially intelligent and on understanding social inequality and bias in language. He has presented his work at top-tier NLP and AI conferences, receiving a best short paper nomination at ACL 2019 and a best paper award at the WeCNLP 2020 summit. His research has been covered by the New York Times, Forbes, Fortune, and Vox. Additionally, he and his team won the inaugural 2017 Amazon Alexa Prize, a social chatbot competition. Before joining CMU, he was a postdoc/young investigator at the Allen Institute for AI (AI2) on project MOSAIC. He received his PhD from the University of Washington's Paul G. Allen School of Computer Science & Engineering, where he was advised by Yejin Choi and Noah Smith. He has previously interned at the Allen Institute for AI, working on social commonsense reasoning, and at Microsoft Research, working on deep learning models for understanding human cognition.