CS&E Colloquium: The Ins and Outs of Explanations in NLP

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. to 12:15 p.m.

This week's speaker, Sam Carton (University of Chicago), will be giving a talk titled "The Ins and Outs of Explanations in NLP".

Abstract

In natural language processing (NLP), as in other areas of machine learning, the rise of large neural networks has led to increased interest in model explainability as a means to address safety and ethics concerns when applying such models to human-impactful decision tasks. In this talk I consider two perspectives on explanations in NLP: 1) as additional context by which humans can verify model predictions for improved human-model collaboration; and 2) as a mechanism for exerting finer-grained control over model behavior, making model predictions more robust, more aligned with human reasoning, and even more accurate. I argue that ultimately these two perspectives form a virtuous circle of information flow from model to human and back, and that it is important to consider both in designing new explanation techniques and evaluations. I discuss my work on both perspectives before concluding with an agenda for future work in this area.

Biography

Sam Carton is a postdoctoral fellow working on explainable natural language processing with Chenhao Tan, initially at the University of Colorado Boulder and presently at the University of Chicago Department of Computer Science. He completed his PhD at the University of Michigan School of Information, working with Paul Resnick and Qiaozhu Mei. Sam publishes across a range of conferences, from human-computer interaction to natural language processing. His work has been supported by grants from various sources, including the NSF, Amazon, and Salesforce.

Start date
Monday, March 21, 2022, 11:15 a.m.
End date
Monday, March 21, 2022, 12:15 p.m.
Location
Mechanical Engineering 108 or online via Zoom
