Upcoming events

CS&E Colloquium: Devansh Saxena

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. to 12:15 p.m. More details about the spring 2023 series will be provided at the beginning of the semester. This week's speaker, Devansh Saxena (Marquette University), will be giving a talk titled "Designing Human-Centered Algorithms for the Public Sector: A Case Study of the Child-Welfare System".

Abstract

Public sector agencies in the United States are increasingly seeking to emulate business models of the private sector centered on efficiency, cost reduction, and innovation through the adoption of algorithmic systems. These data-driven systems purportedly improve decision-making; however, the public sector poses its own unique challenges, where all decisions are mediated by policies, practices, and organizational constraints. Drawing upon a case study of the child-welfare system, I highlight how algorithms that do not account for these pertinent aspects of professional practice frustrate caseworkers and diminish the quality of human discretionary work. Why haven’t these algorithms lived up to expectations? And how might we be able to improve them? A human-centered research agenda can help us develop algorithms centered on social-ecological theories that support the decision-making processes of caseworkers, incorporate novel sources of data, and offer a means to evaluate algorithms in their real-world contexts.

Biography

Devansh Saxena is a doctoral candidate in the Department of Computer Science at Marquette University and a member of the Social and Ethical Computing Research Lab, where he is co-advised by Dr. Shion Guha and Dr. Michael Zimmer. His research interests include investigating and developing algorithmic systems employed in the public sector, especially the child-welfare system. His current research examines collaborative child-welfare practices where decisions are mediated by policies, practices, and algorithms. His work is driven by Human-Centered Data Science and sits at the intersection of Human-Computer Interaction, Machine Learning, and FAccT (Fairness, Accountability, and Transparency in Sociotechnical Systems).

CS&E Colloquium: Tianshi Li

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. to 12:15 p.m. More details about the spring 2023 series will be provided at the beginning of the semester. This week's speaker, Tianshi Li (Carnegie Mellon University), will be giving a talk titled "Protecting User Privacy by Helping Developers".

Abstract

Data has driven many technological advancements, but the ubiquitous collection and sharing of data has caused a privacy trust crisis in our society. Developers play a critical role in making apps that respect user privacy, yet many lack the necessary awareness, knowledge, and time to ensure their apps meet privacy requirements. How can we support average developers (who are oftentimes not privacy experts) in building privacy-friendly apps? In this talk, I will discuss how my research at the intersection of Privacy, HCI, and Software Engineering is engaging developers to better protect user privacy. I will talk about two main threads of my work: (1) empirical HCI studies to identify the challenges developers face in handling privacy requirements, and (2) system-building work to tackle the identified challenges by building IDE plugins and breaking down privacy responsibilities into lightweight code annotation tasks. In the final remarks, I will discuss my future research agenda of creating a safe and trustworthy world by helping developers.

Biography

Tianshi Li is a Ph.D. Candidate at the Human-Computer Interaction Institute at Carnegie Mellon University, advised by Prof. Jason Hong. Her main research interest lies at the intersection of Human-Computer Interaction, Security and Privacy, and Software Engineering. Before coming to CMU, she received a bachelor's degree in Computer Science from Peking University. She interned at Google during her Ph.D. study, working on research about novel mobile text entry techniques and intelligent notification management systems. Her work has been published at top-tier venues (CHI, CSCW, IMWUT, TOCHI) and has won a best paper honorable mention award at ACM CHI 2022. She was awarded a CMU CyLab Presidential Fellowship in 2021 and named an EECS Rising Star in 2022.

 

Graduate Programs Online Information Session

RSVP today!

During each session, the graduate staff will review:

  • General requirements
  • How to apply
  • Prerequisite requirements
  • What makes a strong applicant
  • Funding
  • Resources
  • Common questions
  • Questions from attendees

Students considering the following programs should attend:

BICB Colloquium: Nansu Zong

BICB Colloquium Faculty Nomination Talks: Join us at 5 p.m. in person on the UMR campus in room 414, on the Twin Cities campus in MCB 2-122, or virtually.
 
Nansu Zong is an Assistant Professor of Biomedical Informatics at Mayo Clinic.

CS&E Colloquium: Vedant Das Swain

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. to 12:15 p.m. More details about the spring 2023 series will be provided at the beginning of the semester. This week's speaker, Vedant Das Swain (Georgia Institute of Technology), will be giving a talk.

CS&E Colloquium: Harmanpreet Kaur

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. to 12:15 p.m. More details about the spring 2023 series will be provided at the beginning of the semester. This week's speaker, Harmanpreet Kaur (University of Michigan), will be giving a talk titled "Leveraging Social Theories to Enhance Human-AI Interaction".


Abstract

Human-AI partnerships are increasingly commonplace. Yet, systems that rely on these partnerships are unable to effectively capture the dynamic needs of people, or to explain complex AI reasoning and outputs. The resulting socio-technical gap has led to harmful outcomes, such as the propagation of biases against marginalized populations and missed edge cases in sensitive domains. My work follows the belief that for human-AI interaction to be effective and safe, technical development in AI must come in concert with an understanding of human-centric cognitive, social, and organizational phenomena. Using human-AI interaction in the context of ML-based decision-support systems as a case study, in this talk I will discuss my work that explains why interpretability tools do not work in practice. Interpretability tools exacerbate the bounded nature of human rationality, encouraging people to apply cognitive and social heuristics. These heuristics serve as mental shortcuts that speed up people's decision-making by sparing them from carefully reasoning about the information being presented. Looking ahead, I will share my research agenda, which incorporates social theories to design human-AI systems that not only take advantage of the complementarity between people and AI, but also account for the incompatibilities in how (much) they understand each other.

  
Biography

Harman Kaur is a PhD candidate in both the Department of Computer Science and the School of Information at the University of Michigan, where she is advised by Eric Gilbert and Cliff Lampe. Her research interests lie in human-AI collaboration and interpretable ML. Specifically, she designs and evaluates human-AI systems such that they effectively incorporate what people and AI are each good at, but also mitigate harms by accounting for the incompatibilities between the two. She has published several papers at top-tier human-computer interaction venues, such as CHI, CSCW, IUI, UIST, and FAccT. She has also completed several internships at Microsoft Research and the Allen Institute for AI, and is a recipient of the Google PhD Fellowship. Prior to Michigan, Harman received a BS in Computer Science from the University of Minnesota.

CS&E Colloquium: Mitchell Gordon

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. to 12:15 p.m. More details about the spring 2023 series will be provided at the beginning of the semester. This week's speaker, Mitchell Gordon (Stanford University), will be giving a talk titled "Human-AI Interaction Under Societal Disagreement".

Abstract

Whose voices — whose labels — should a machine learning algorithm learn to emulate? For AI tasks ranging from online comment toxicity detection to poster design to medical treatment, different groups in society may have irreconcilable disagreements about what constitutes ground truth. Today’s supervised machine learning (ML) pipeline typically resolves these disagreements implicitly by aggregating over annotators’ opinions. This approach abstracts individual people out of the pipeline and collapses their labels into an aggregate pseudo-human, ignoring minority groups’ labels. In this talk, I will present Jury Learning: an interactive ML architecture that enables developers to explicitly reason over whose voice a model ought to emulate through the metaphor of a jury. Through our exploratory interface, practitioners can declaratively define which people or groups, in what proportion, determine the classifier's prediction. To evaluate models under societal disagreement, I will also present The Disagreement Deconvolution: a metric transformation showing how, in abstracting away the individual people that models impact, current metrics dramatically overstate the performance of many user-facing ML tasks. These components become building blocks of a new pipeline for encoding our goals and values in human-AI systems, which strives to bridge principles of human-computer interaction with the realities of machine learning.

Biography

Mitchell L. Gordon is a computer science PhD student at Stanford University in the Human-Computer Interaction group, advised by Michael Bernstein and James Landay. He designs interactive systems and evaluation approaches that bridge principles of human-computer interaction with the realities of machine learning. His work has won awards at top conferences in human-computer interaction and artificial intelligence, including a Best Paper award at CHI and an Oral at NeurIPS. He is supported by an Apple PhD Fellowship in AI/ML, and has interned at Apple, Google, and CMU HCII.

CS&E Colloquium: Tianfan Fu

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. to 12:15 p.m. More details about the spring 2023 series will be provided at the beginning of the semester. This week's speaker, Tianfan Fu (Georgia Institute of Technology), will be giving a talk.

BICB Colloquium: Amy Kinsley

BICB Colloquium Faculty Nomination Talks: Join us at 5 p.m. in person on the UMR campus in room 414, on the Twin Cities campus in MCB 2-122, or virtually.
 
Amy Kinsley is an Assistant Professor in the Department of Veterinary Population Medicine (VPM) at the University of Minnesota.

BICB Colloquium: Arslan Zaidi

BICB Colloquium Faculty Nomination Talks: Join us at 5 p.m. in person on the UMR campus in room 414, on the Twin Cities campus in MCB 2-122, or virtually.
 
Arslan Zaidi is an Assistant Professor in the Institute for Health Informatics at the University of Minnesota.