CSE DSI Machine Learning Seminar with Qianwen Wang
Interpreting and Steering AI Explanations with Interactive Visualizations
Artificial Intelligence (AI) has advanced at a rapid pace and is expected to revolutionize many biomedical applications. However, current AI methods are usually developed via a data-centric approach, with little regard for the usage context or the end users. This poses challenges for domain users in interpreting AI, obtaining actionable insights, and collaborating with AI in decision-making and knowledge discovery.
In this talk, I will discuss how this challenge can be addressed by combining interactive visualizations with interpretable AI. Specifically, I will present two methodologies: 1) visualizations that explain AI models and predictions, and 2) interaction mechanisms that integrate user feedback into AI models. Despite some remaining challenges, I will conclude on an optimistic note: interactive visual explanations should be indispensable for human-AI collaboration. The methodologies discussed can be applied broadly to other applications involving human-AI collaboration, assisting domain experts in data exploration and insight generation with the help of AI.
Qianwen Wang is a tenure-track assistant professor in the Department of Computer Science and Engineering at the University of Minnesota. Before joining UMN, she was a postdoctoral fellow at Harvard University. Her research aims to enhance communication and collaboration between domain users and AI through interactive visualizations, with a particular focus on applications that address biomedical challenges.
Her research in visualization, human-computer interaction, and bioinformatics has been featured in prestigious outlets such as MIT News and Nature Technology Features. She has earned multiple accolades, including two best abstract awards from BioVis ISMB, a best paper award from IMLH@ICML, a best paper honorable mention from IEEE VIS, and the HDSI Postdoctoral Research Fund.