Past events

CS&E Colloquium: Huaxiu Yao

This week's speaker, Huaxiu Yao (Stanford University), will be giving a talk titled, "Embracing Change: Tackling In-the-Wild Shifts in Machine Learning".

Abstract

The real-world deployment of machine learning algorithms often poses challenges due to shifts in data distributions and tasks. These shifts can degrade model performance, as the model may not have seen such changes during training, and can make it difficult for the model to generalize to new scenarios, leading to poor performance in real-world applications. In this talk, I will present our research on building machine learning models that are unbiased, widely generalizable, and easily adaptable to different shifts. Specifically, I will first discuss our approach to learning unbiased models through selective augmentation for scenarios with subpopulation shifts. Second, I will delve into the use of domain relational information to enhance model generalizability under arbitrary domain shifts. Then, I will present our techniques for quickly adapting models to new tasks with limited labeled data. Additionally, I will share our successful practices for addressing shifts in real-world applications, such as in the healthcare, e-commerce, and transportation industries. The talk will also cover the remaining challenges and future research directions in this area.


Biography

Huaxiu Yao is a Postdoctoral Scholar in Computer Science at Stanford University, working with Prof. Chelsea Finn. His current research focuses on building machine learning models that are unbiased, widely generalizable, and easily adaptable to changing environments and tasks. He is also dedicated to applying these methods to real-world data science applications, such as healthcare, transportation, and online education. Huaxiu earned his Ph.D. from Pennsylvania State University. He has over 30 publications in leading machine learning and data science venues such as ICML, ICLR, and NeurIPS. He has also organized and co-organized workshops at ICML and NeurIPS, and has served as a tutorial speaker at conferences such as KDD, AAAI, and IJCAI. Additionally, Huaxiu has extensive industry experience, having interned at companies such as Amazon Science and Salesforce Research. For more information, please visit https://huaxiuyao.mystrikingly.com/.

BICB Colloquium: Arslan Zaidi

BICB Colloquium Faculty Nomination Talks: Join us in person on the UMR campus in room 414, on the Twin Cities campus in MCB 2-122, or virtually at 5 p.m.
 
Arslan Zaidi is an Assistant Professor in the Institute for Health Informatics at UMN. They will be giving a talk titled, "Interpreting heritability in admixed populations".
 

Abstract

Heritability is a fundamental concept in human genetics. Defined (in the narrow sense) as the proportion of phenotypic variance that is due to additive genetic effects, it is central to our ability to describe and make inferences about genetic variation underlying complex traits. For example, both the power to discover variants in genome-wide association studies and the upper limit of polygenic prediction accuracy are functions of heritability, making it important for the design of genetic studies. However, heritability is often estimated assuming the population is randomly mating, which is almost never true, especially in admixed and multi-ethnic cohorts. In this seminar, I will use theory and simulations to describe the behavior of heritability in admixed populations and evaluate the effect of the 'random mating' assumption on heritability estimation. I will discuss the implications of these results for genome-wide association studies, polygenic prediction, and our understanding of the genetic architecture of complex traits.
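
For reference, the narrow-sense definition above corresponds to the standard variance decomposition (a textbook formulation, not a result specific to this talk):

```latex
% Narrow-sense heritability: the fraction of phenotypic variance
% attributable to additive genetic effects.
h^2 = \frac{\sigma_A^2}{\sigma_P^2},
\qquad \sigma_P^2 = \sigma_A^2 + \sigma_D^2 + \sigma_E^2
```

Here \sigma_A^2, \sigma_D^2, and \sigma_E^2 are the additive genetic, dominance, and environmental variance components; non-random mating of the kind discussed in the abstract violates the independence assumptions behind this simple decomposition.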

CS&E Colloquium: Vishwanath Saragadam

This week's speaker, Vishwanath Saragadam (Rice University), will be giving a talk titled, "Co-designing Optics and Algorithms for High-dimensional Visual Computing".

Abstract

The past two decades have seen tremendous advances in computer vision, all powered by the ubiquitous RGB sensor. However, the richness of visual information goes beyond just the spatial dimensions. A complete description requires representing the data with multiple independent variables, including space, angle, spectrum, polarization, and time. Building cameras that leverage these dimensions of light will not only dramatically improve the vision applications of today, but will also be key to unlocking new capabilities across multiple domains such as robotics, augmented and virtual reality, biomedical imaging, security and biometrics, and environmental monitoring.

Cameras for sampling such high dimensions need to solve three big challenges: (1) how do we sense multiple dimensions efficiently, (2) how do we model the data concisely, and (3) how do we build algorithms that scale to such high dimensions? In this talk, I will introduce my work on high-dimensional visual computing, which focuses on sampling, modeling, and inference of visual data beyond RGB. My talk will focus on the synergy across three key thrusts: modeling high-dimensional interactions with neural representations, building new optics and cameras with meta-optics, and inferring while sensing with optical computing. The ideas presented in the talk will pave the way for a future of computer vision where cameras do not merely sense but are part of the computing pipeline.
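
As a rough, hedged illustration of the first thrust, the sketch below fits a minimal implicit neural representation: a small network mapping continuous coordinates (space, spectrum, time, and so on) to signal values, so the signal is stored in the network's weights. The architecture, sizes, and training signal here are generic placeholders, not the speaker's actual models.

```python
# Minimal implicit neural representation sketch (illustrative only):
# a coordinate MLP fit to samples of a high-dimensional signal.
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    def __init__(self, in_dim=3, hidden=256, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords):
        # coords: (N, in_dim), e.g. (x, y, wavelength), scaled to [-1, 1]
        return self.net(coords)

model = CoordinateMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
coords = torch.rand(1024, 3) * 2 - 1                 # sample locations
values = torch.sin(coords.sum(dim=1, keepdim=True))  # stand-in signal
for _ in range(100):                                 # fit the weights
    opt.zero_grad()
    loss = ((model(coords) - values) ** 2).mean()
    loss.backward()
    opt.step()
```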


Biography

Vishwanath Saragadam is a postdoctoral researcher at Rice University with Prof. Richard G. Baraniuk and Prof. Ashok Veeraraghavan. Vishwanath’s research is at the intersection of computational imaging, meta-optics, and neural representations, and focuses on co-designing optics, sensors, and algorithms for solving challenging problems in vision. He received his PhD from Carnegie Mellon University, advised by Prof. Aswin Sankaranarayanan, where his thesis won the A. G. Jordan outstanding thesis award in 2020. Vishwanath is a recipient of the best paper award at ICCP 2022, the Prabhu and Poonam Goel graduate fellowship in 2019, and an outstanding teaching assistant award in 2018. In his free time, Vishwanath likes following James Webb Space Telescope updates, capturing thermal images while baking, and biking long distances.

CS&E Colloquium: Zhutian Chen

This week's speaker, Zhutian Chen (Harvard), will be giving a talk titled, "When Data Meets Reality: Augmenting Dynamic Scenes with Visualizations".

Abstract

We live in a dynamic world that produces a growing volume of accessible data. Visualizing this data within its physical context can aid situational awareness, improve decision-making, enhance daily activities like driving and watching sports, and even save lives in tasks such as performing surgery or navigating hazardous environments. Augmented Reality (AR) offers a unique opportunity to achieve this contextualization of data by overlaying digital content onto the physical world. However, visualizing data in its physical context using AR devices (e.g., headsets or smartphones) is challenging for users due to the complexities involved in creating and accurately placing the visualizations within the physical world. These challenges become even more pronounced in dynamic scenarios with temporal constraints.

In this talk, I will introduce a novel approach, which uses sports video streams as a testbed and proxy for dynamic scenes, to explore the design, implementation, and evaluation of AR visualization systems that enable users to efficiently visualize data in dynamic scenes. I will first present three systems allowing users to visualize data in sports videos through touch, natural language, and gaze interactions, and then discuss how these interaction techniques can be generalized to other AR scenarios. The designs of these systems collectively form a unified framework that serves as a preliminary solution for helping users visualize data in dynamic scenes using AR. I will next share my latest progress in using Virtual Reality (VR) simulations as a more advanced testbed, compared to videos, for AR visualization research. Finally, building on my framework and testbeds, I will describe my long-term vision and roadmap for using AR visualizations to help our world become more connected, accessible, and efficient.

Biography

Zhutian Chen is a postdoctoral fellow in the Visual Computing Group at Harvard University. His research is at the intersection of Data Visualization, Human-Computer Interaction, and Augmented Reality, with a focus on advancing human-data interaction in everyday activities. His research has been published as full papers in top venues such as IEEE VIS, ACM CHI, and TVCG, and has received three best paper nominations at IEEE VIS, the premier venue in data visualization. Before joining Harvard, he was a postdoctoral researcher in the Design Lab at UC San Diego. Zhutian received his Ph.D. in Computer Science and Engineering from the Hong Kong University of Science and Technology.

CS&E Colloquium: Minjia Zhang

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. More details about the spring 2023 series will be provided at the beginning of the semester. This week's speaker, Minjia Zhang (Microsoft Research), will be giving a talk titled "Efficient System and Algorithm Design for Large-Scale Deep Learning Training and Inference".

Abstract

Deep learning models have achieved significant breakthroughs in the past few years. However, it is challenging to provide efficient computation and memory capabilities for both DNN training and inference, given that model sizes and complexity keep increasing rapidly. On the training side, it is too slow to train high-quality models on massive data, and large-scale model training often requires complex refactoring of models and access to prohibitively expensive GPU clusters, which are not always available to practitioners. On the serving side, many DL models suffer from long inference latency and high cost, preventing them from meeting their latency and cost goals. In this talk, I will introduce my work on tackling these efficiency problems through system, algorithm, and modeling optimizations.

Biography

Minjia Zhang is a Principal Researcher at Microsoft. His primary research focus is efficient AI/ML, with a special emphasis on the intersection of large-scale deep learning training and inference system optimization and novel machine learning algorithms. His research has led to publications at major computer science conferences, including top-tier systems venues such as ASPLOS, NSDI, and USENIX ATC, and top-tier machine learning venues such as ICML, NeurIPS, and ICLR. Several of his research results have been transferred to industry systems and products, such as Microsoft Bing, Ads, Azure SQL, and Windows, leading to significant latency and capacity improvements.

BICB Colloquium: Amy Kinsley

BICB Colloquium Faculty Nomination Talks: Join us in person on the UMR campus in room 419, on the Twin Cities campus in MCB 2-122, or virtually at 5 p.m.
 

Amy Kinsley (Assistant Professor in Veterinary Population Medicine, UMN) will be giving a talk titled, "Decision Support Tools to Prevent the Spread of Aquatic Invasive Species in Minnesota".

Abstract

Aquatic invasive species (AIS) are a significant threat to the health of ecosystems, leading to losses in biodiversity that ultimately impact human, animal, and environmental health. Effective management of AIS requires managers to identify which actions are likely to achieve specific objectives most efficiently and how best to use limited resources to achieve a desired outcome. In this presentation, I will discuss the development of an online interactive dashboard, AIS Explorer, informed by a dynamic network-based simulation model and prioritization algorithm. The dashboard supports data-driven decision-making on AIS surveillance and prevention and serves as an important resource for many county and local government AIS managers in Minnesota.
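
As a toy illustration of what a network-based prioritization might look like, the sketch below ranks lakes by risk-weighted boater traffic from infested waters. The lake names, trip counts, and infestation status are all hypothetical, and the actual AIS Explorer model is a dynamic simulation rather than this one-step proxy.

```python
# Hypothetical sketch: lakes as nodes, boater movements as weighted
# edges, and a simple inspection-priority score per lake.
import networkx as nx

G = nx.DiGraph()
# (origin lake, destination lake, estimated boat trips per season)
G.add_weighted_edges_from([
    ("Lake A", "Lake B", 120),
    ("Lake B", "Lake C", 80),
    ("Lake A", "Lake C", 40),
    ("Lake C", "Lake B", 30),
])

infested = {"Lake A"}  # assumed infested waters (illustrative)
risk = {
    lake: sum(G[u][lake]["weight"] for u in G.predecessors(lake) if u in infested)
    for lake in G.nodes
}
for lake, score in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{lake}: risk-weighted incoming trips = {score}")
```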


Biography

Amy Kinsley is an Assistant Professor in Veterinary Population Medicine at the University of Minnesota. Her research aims to protect natural aquatic resources from infectious diseases, invasive species, and pollutants by understanding the critical mechanisms behind their dispersal and developing resource-efficient mitigation plans. Her work combines applied and theoretical epidemiology, breaking down barriers between disciplines such as veterinary medicine, engineering, and computational biology, and has vital components of stakeholder and community engagement. Dr. Kinsley completed her B.S. (2005) in Civil Engineering at the University of Florida. She attended the University of Minnesota, where she received her DVM (2014) and Ph.D. (2018) in Veterinary Medicine.

CS&E Colloquium: Tianfan Fu

This week's speaker, Tianfan Fu (Georgia Institute of Technology), will be giving a talk titled, "Deep Learning for Drug Discovery and Development".

Abstract

Since the emergence of deep learning, artificial intelligence (AI) has become woven into therapeutic discovery, accelerating drug discovery and development. For drug discovery, the goal is to identify drug molecules with desirable pharmaceutical properties. I will discuss our deep generative models that relax the discrete molecule space into a differentiable one, reformulating the combinatorial optimization problem as a differentiable optimization problem that can be solved efficiently. Drug development, on the other hand, focuses on conducting clinical trials to evaluate the safety and effectiveness of drugs in humans. To predict clinical trial outcomes, I design deep representation learning methods that capture the interactions between multi-modal clinical trial features (e.g., drug molecules, patient information, disease information), achieving an F1 score of 0.847 in predicting phase III approval. Finally, I will present my future work on geometric deep learning for drug discovery and predictive modeling for drug development.
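
As a hedged, generic illustration of the relaxation idea (not the speaker's specific generative models), the sketch below uses a Gumbel-softmax relaxation to turn a discrete choice over hypothetical molecular fragments into a differentiable optimization:

```python
# Differentiable relaxation of a discrete choice (illustrative only):
# optimize a soft selection over fragments by gradient descent.
import torch
import torch.nn.functional as F

scores = torch.tensor([0.1, 0.9, 0.3, 0.2, 0.5])  # stand-in property scores
logits = torch.zeros(len(scores), requires_grad=True)

opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    # Soft, differentiable one-hot selection over the fragments.
    choice = F.gumbel_softmax(logits, tau=0.5, hard=False)
    loss = -(choice * scores).sum()  # maximize the expected score
    loss.backward()
    opt.step()

print(logits.argmax().item())  # typically converges to index 1 (score 0.9)
```

In a real molecule-design setting, the stand-in scores would be replaced by a differentiable property predictor.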

Biography

Tianfan Fu is a Ph.D. candidate in the School of Computational Science and Engineering at the Georgia Institute of Technology, advised by Prof. Jimeng Sun. His research interest lies in machine learning for drug discovery and development. In particular, he is interested in generative models for both small-molecule and macromolecule drug design, and in deep representation learning for drug development. The results of his research have been published at leading AI conferences, including AAAI, AISTATS, ICLR, IJCAI, KDD, NeurIPS, and UAI, and in top domain journals such as Nature, Cell Patterns, Nature Chemical Biology, and Bioinformatics. His work on clinical trial outcome prediction was selected as the cover paper of Cell Patterns. In addition, Tianfan is an active community builder. He co-organized the first three AI4Science workshops at leading AI conferences, and he co-founded the Therapeutic Data Commons (TDC) initiative, an ecosystem with AI-solvable tasks, AI-ready datasets, and benchmarks in therapeutic science. Additional information is available at futianfan.github.io/

CS&E Colloquium: Mitchell Gordon

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. More details about the spring 2023 series will be provided at the beginning of the semester. This week's speaker, Mitchell Gordon (Stanford University), will be giving a talk titled "Human-AI Interaction Under Societal Disagreement".

Abstract

Whose voices — whose labels — should a machine learning algorithm learn to emulate? For AI tasks ranging from online comment toxicity detection to poster design to medical treatment, different groups in society may have irreconcilable disagreements about what constitutes ground truth. Today’s supervised machine learning (ML) pipeline typically resolves these disagreements implicitly by aggregating over annotators’ opinions. This approach abstracts individual people out of the pipeline and collapses their labels into an aggregate pseudo-human, ignoring minority groups’ labels. In this talk, I will present Jury Learning: an interactive ML architecture that enables developers to explicitly reason over whose voice a model ought to emulate through the metaphor of a jury. Through our exploratory interface, practitioners can declaratively define which people or groups, in what proportion, determine the classifier's prediction. To evaluate models under societal disagreement, I will also present The Disagreement Deconvolution: a metric transformation showing how, in abstracting away the individual people that models impact, current metrics dramatically overstate the performance of many user-facing ML tasks. These components become building blocks of a new pipeline for encoding our goals and values in human-AI systems, which strives to bridge principles of human-computer interaction with the realities of machine learning.
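
To make the jury metaphor concrete, here is a minimal sketch of the weighted aggregation it implies. The group names, per-group scores, and seat shares are hypothetical; the actual Jury Learning architecture models individual annotators and is composed interactively rather than averaging fixed group scores.

```python
# Hypothetical jury aggregation: the developer declares which groups,
# in what proportion, determine the final prediction.
group_scores = {      # per-group predicted probability of "toxic"
    "group_a": 0.82,
    "group_b": 0.35,
    "group_c": 0.60,
}
jury_composition = {  # declared share of jury seats per group
    "group_a": 0.50,
    "group_b": 0.25,
    "group_c": 0.25,
}

verdict = sum(jury_composition[g] * group_scores[g] for g in jury_composition)
print(f"jury-weighted toxicity score: {verdict:.2f}")
```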

Biography

Mitchell L. Gordon is a computer science PhD student at Stanford University in the Human-Computer Interaction group, advised by Michael Bernstein and James Landay. He designs interactive systems and evaluation approaches that bridge principles of human-computer interaction with the realities of machine learning. His work has won awards at top conferences in human-computer interaction and artificial intelligence, including a Best Paper award at CHI and an oral presentation at NeurIPS. He is supported by an Apple PhD Fellowship in AI/ML, and has interned at Apple, Google, and CMU HCII.

CS&E Colloquium: Harmanpreet Kaur

The computer science colloquium mainly takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Harmanpreet Kaur (University of Michigan), will be giving a talk titled "Leveraging Social Theories to Enhance Human-AI Interaction".


Abstract

Human-AI partnerships are increasingly commonplace. Yet, systems that rely on these partnerships are unable to effectively capture the dynamic needs of people or explain complex AI reasoning and outputs. The resulting socio-technical gap has led to harmful outcomes such as propagation of biases against marginalized populations and missed edge cases in sensitive domains. My work follows the belief that for human-AI interaction to be effective and safe, technical development in AI must come in concert with an understanding of human-centric cognitive, social, and organizational phenomena. Using human-AI interaction in the context of ML-based decision-support systems as a case study, in this talk, I will discuss my work that explains why interpretability tools do not work in practice. Interpretability tools exacerbate the bounded nature of human rationality, encouraging people to apply cognitive and social heuristics. These heuristics serve as mental shortcuts that speed up people's decision-making by sparing them from carefully reasoning about the information being presented. Looking ahead, I will share my research agenda that incorporates social theories to design human-AI systems that not only take advantage of the complementarity between people and AI, but also account for the incompatibilities in how (much) they understand each other.

Biography

Harman Kaur is a PhD candidate in both the Department of Computer Science and the School of Information at the University of Michigan, where she is advised by Eric Gilbert and Cliff Lampe. Her research interests lie in human-AI collaboration and interpretable ML. Specifically, she designs and evaluates human-AI systems such that they effectively incorporate what people and AI are each good at, but also mitigate harms by accounting for the incompatibilities between the two. She has published several papers at top-tier human-computer interaction venues, such as CHI, CSCW, IUI, UIST, and FAccT. She has also completed several internships at Microsoft Research and the Allen Institute for AI, and is a recipient of the Google PhD fellowship. Prior to Michigan, Harman received a BS in Computer Science from the University of Minnesota.

CS&E Colloquium: Vedant Das Swain

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. More details about the spring 2023 series will be provided at the beginning of the semester. This week's speaker, Vedant Das Swain (Georgia Institute of Technology), will be giving a talk titled "Passive Sensing Frameworks for the Future of Information Work".

Abstract

We live in a time when our conception of a thriving worker is in flux. These changing definitions are especially affecting information workers, who are increasingly unsatisfied with the care they get at work. Organizations are failing to identify these trends and promote positive behaviors. We need applications that provide precise and actionable insight to help information workers strive for better wellbeing. I believe that sensing day-level behaviors in a passive (automatic, unobtrusive, and continuous) way can offer unique insights into worker success. My research investigates approaches to leverage everyday digital technologies as sensors that enable algorithmic insights for information worker behaviors. In this talk, I will show how passive sensing can reveal personal and social behaviors linked to better performance and mental wellbeing. I will also demonstrate the methodological and societal challenges in predictive applications for work wellbeing with passive sensing. Finally, I will describe my vision to design passive sensing applications as tools to empower information workers towards holistic success.

Biography

Vedant Das Swain is a Ph.D. Candidate in the School of Interactive Computing at the Georgia Institute of Technology, advised by Munmun De Choudhury and Gregory Abowd. His research contributes to the future of work and behavioral wellbeing in general. He identifies, develops, and critiques opportunities to leverage ubiquitous technologies for algorithmic inference of performance and mental wellbeing. He consistently works with organizational psychologists to inform his investigations and also collaborates with Microsoft Research to develop better tools for worker wellbeing. His research has been published at top-tier computing venues like CHI, CSCW, UbiComp/IMWUT, ACII, and IEEE CogMI. His paper at CHI 2022 won a Best Paper Honorable Mention award. He is the winner of the Gaetano Borriello Outstanding Student Award at UbiComp 2022 and the GVU Foley Scholar Award 2022. His research has been supported by IARPA, NSF, CDC, ORNL, and Semiconductor Research Corporation.