Past events

CS&E Colloquium: Jing Ma

This week's speaker, Jing Ma (University of Virginia), will be giving a talk titled, "When Causal Inference Meets Graph Machine Learning: Unleashing the Potential of Mutual Benefit".

Abstract

Recent years have witnessed rapid development in graph-based machine learning (ML) in various high-impact domains (e.g., healthcare, recommendation, and security), especially those powered by effective graph neural networks (GNNs). Currently, mainstream graph ML methods are based on statistical learning, e.g., utilizing the statistical correlations between node features, graph structure, and labels for node classification. However, statistical learning has been widely criticized for capturing only the superficial relations between variables in the data system, which undermines trustworthiness in real-world applications. For example, ML models often make biased predictions toward underrepresented groups. In addition, these models often lack explanations that humans can understand. It is therefore crucial to understand the causality in the data system and in the learning process. Causal inference is the discipline that investigates the causality inside a system, for example, to identify and estimate the causal effect of a certain treatment (e.g., wearing a face mask) on an important outcome (e.g., COVID-19 infection). Incorporating the concepts and philosophy of causal inference into ML methods is often considered a significant component of human-level intelligence and can serve as a foundation of artificial intelligence (AI). However, most traditional causal inference studies rely on strong assumptions and focus on independent and identically distributed (i.i.d.) data, while causal inference on graphs faces many barriers to effectiveness. Fortunately, the interplay between causal inference and graph ML has the potential to bring mutual benefit to both.
In this talk, we will present the challenges and our contributions toward bridging the gap between causal inference and graph ML, in two main directions: 1) leveraging graph ML methods to improve the effectiveness of causal inference; and 2) leveraging causality to improve the trustworthiness of graph ML models (e.g., fairness and explainability).
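To make the treatment/outcome setting in the abstract concrete, here is a minimal, generic sketch of estimating an average treatment effect (ATE) by difference in means on synthetic randomized data. This is purely illustrative and is not the speaker's method; the effect size and simulation setup are invented for the example.

```python
import random

random.seed(0)

def simulate(n=10000, true_effect=-0.2):
    """Synthetic population: a binary treatment (e.g., mask wearing)
    shifts the probability of a bad outcome (e.g., infection) by
    true_effect relative to a 0.5 baseline."""
    treated, control = [], []
    for _ in range(n):
        t = random.random() < 0.5  # randomized treatment assignment
        p = 0.5 + (true_effect if t else 0.0)
        y = 1.0 if random.random() < p else 0.0
        (treated if t else control).append(y)
    return treated, control

treated, control = simulate()

# With randomized assignment, the ATE can be estimated as a simple
# difference in mean outcomes between the two groups; the estimate
# is close to the true effect of -0.2.
ate = sum(treated) / len(treated) - sum(control) / len(control)
```

The simplicity here depends entirely on randomization; with observational (and especially graph-structured) data, confounding makes this naive estimator biased, which is the kind of barrier the talk addresses.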


Biography

Jing Ma is a Ph.D. candidate in the Department of Computer Science at the University of Virginia, under the supervision of Dr. Jundong Li and Dr. Aidong Zhang. She received her B.Eng. and M.Eng. degrees from Shanghai Jiao Tong University with the Outstanding Graduate Award. Her research interests broadly cover machine learning and data mining, especially causal inference, graph mining, fairness, trustworthiness, and AI for social good. Her recent work focuses on bridging the gap between causality and machine learning. Her research papers have been published in top conferences and journals such as KDD, NeurIPS, IJCAI, WWW, AAAI, TKDE, WSDM, SIGIR, ECML-PKDD, AI Magazine, and IPSN. She has extensive internship experience at companies and academic organizations such as Microsoft Research. Her honors include the SIGKDD 2022 Best Paper Award and the CAPWIC 2022 Best Poster Award.

BICB Colloquium: Nuri Ince

BICB Colloquium Faculty Nomination Talks: Join us in person on the UMR campus in room 414, on the Twin Cities campus in MCB 2-122 or virtually at 5 p.m.

 
Nuri Ince is an Associate Professor of Biomedical Engineering at the University of Houston (soon to join the Mayo Clinic).


Title: Investigation of Functional Utility of High Frequency Oscillations: Applications in Neuromodulation and Functional Neurosurgery

Abstract: Despite recent advances in neural engineering for processing oscillatory brain activity in scenarios such as brain-machine interfaces, limited progress has been made toward interpreting oscillatory neural activity (such as LFPs, iEEG, or ECoG) with computational intelligence for clinical decision making. In this talk, I will summarize our efforts toward mapping subcortical regions during awake brain surgeries using machine learning and neural signal processing for the optimization of DBS in PD. Moreover, I will provide additional perspectives on the use of machine intelligence for detecting localized high-frequency oscillations (HFOs) in large-scale iEEG datasets to identify the seizure onset zone in epilepsy.

ML Seminar: Zhi Ding

The UMN Machine Learning Seminar Series brings together faculty, students, and local industrial partners who are interested in the theoretical, computational, and applied aspects of machine learning, to pose problems, exchange ideas, and foster collaborations. The talks are every Tuesday from 11 a.m. - 12 p.m. during the Spring 2023 semester.

This week's speaker, Zhi Ding (ECE, UC Davis), will be giving a talk titled "Non-Blackbox Deep Learning for Massive MIMO Wireless Communication Systems".

Abstract

The proliferation of advanced wireless services, such as virtual reality, autonomous driving, and the internet of things, has generated increasingly intense pressure to develop intelligent wireless communication systems that meet the networking needs posed by extremely high data rates, massive numbers of connected devices, and ultra-low latency. Deep learning (DL) has recently emerged as an exciting design tool for advancing the development of wireless communication systems, with some demonstrated successes. In this tutorial, we review the principles of applying DL to improve wireless network performance by integrating the underlying characteristics of channels in practical massive MIMO deployment. We introduce important insights derived from physical RF channel properties and present a comprehensive overview of the application of DL to accurately estimating channel state information (CSI) of forward channels with low feedback overhead. We provide examples of successful DL application in CSI estimation for massive MIMO wireless systems and highlight several promising directions for future research.

Biography

Dr. Zhi Ding (S'88-M'90-SM'95-F'03, IEEE) holds the position of Distinguished Professor of Electrical and Computer Engineering at the University of California, Davis. He received his Ph.D. degree in Electrical Engineering from Cornell University in 1990. From 1990 to 2000, he was a faculty member at Auburn University and later the University of Iowa. He has coauthored over 400 technical papers and two books. Dr. Ding is a coauthor of the text Modern Digital and Analog Communication Systems, 4th and 5th editions, Oxford University Press.

Dr. Ding has been an active member of IEEE, serving on the technical programs of several workshops and conferences. He served both as a member and as the chair of the IEEE Transactions on Wireless Communications Steering Committee. Dr. Ding was the Technical Program Chair of the 2006 IEEE Globecom and the General Chair of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). He served as an IEEE Distinguished Lecturer (Circuits and Systems Society, 2004-06; Communications Society, 2008-09). He received the 2012 Wireless Communications Recognition Award and the 2020 Education Award from the IEEE Communications Society.

CS&E Colloquium: Yogatheesan Varatharajah

This week's speaker, Yogatheesan Varatharajah (University of Illinois), will be giving a talk titled, "Trustworthy Machine Learning for Health via Domain-guided Modeling: The Case for Neurological Diseases".

Abstract

Recent advances in wearables, brain implants, and sensing technology have enabled us to design systems that continuously monitor patients' brain health and ascertain individualized treatments for neurological diseases. However, there is a lack of efficient methods that translate continuous physiological data streams into meaningful biological models of underlying diseases, relate them to existing clinical knowledge and biomarkers, and provide actionable treatment parameters. Machine learning (ML) holds great promise in tackling these challenges; however, the mainstream black-box-ML approaches have proven to be untrustworthy because of label inconsistencies, spurious correlations, and the lack of deployment robustness. My goal is to ensure trustworthiness in ML for healthcare, particularly neurology, via a novel framework known as “Domain-guided Machine Learning” or “DGML” that merges machine learning with clinical domain expertise. In this talk, I will discuss the need for trustworthy ML in healthcare, how to leverage clinical domain knowledge to engineer trustworthy ML models, and several real-world applications of DGML in neurological care and decision making.


Biography

Dr. Yoga Varatharajah is currently a Research Assistant Professor in the Department of Bioengineering at the University of Illinois at Urbana-Champaign and a Visiting Scientist at the Mayo Clinic, Rochester. He obtained his Ph.D. in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign under the supervision of Prof. Ravishankar Iyer. Over the past seven years, he has worked closely with domain experts at Mayo Clinic and Cleveland Clinic to develop, evaluate, and deploy domain-guided ML models that inform clinical decisions related to neurological diseases. His research has been published at reputed engineering conferences (e.g., NeurIPS, ML4H, BIBM, ISBI, EMBC, NER) and in medical journals (e.g., Scientific Reports, Journal of Neural Engineering, Brain Communications, Epilepsia, NeuroImage), has contributed to an ongoing clinical trial in neuromodulation for epilepsy, and has resulted in a joint patent between Mayo and Illinois. He has also received several honors, including a CSL Ph.D. Thesis Award, a Mayo-Clinic-Illinois Alliance Fellowship, an American Epilepsy Society Young Investigator Award, an NSF CRII Research Initiation Award, and several best paper awards and nominations.

ML Seminar: Tan Bui-Thanh

The UMN Machine Learning Seminar Series brings together faculty, students, and local industrial partners who are interested in the theoretical, computational, and applied aspects of machine learning, to pose problems, exchange ideas, and foster collaborations. The talks are every Tuesday from 11 a.m. - 12 p.m. during the Spring 2023 semester.

This week's speaker, Tan Bui-Thanh (Oden Institute for Computational Engineering & Sciences, University of Texas at Austin), will be giving a talk titled "Enabling approaches for real-time deployment, calibration, and UQ for digital twins".

Abstract

Digital twins (DTs) are digital replicas of systems and processes. At the core of a DT is a physical/mathematical model which captures the behavior of the real system across temporal and spatial scales. One of the key roles of DTs is enabling "what if" scenario testing of hypothetical simulations to understand the implications at any point throughout the life cycle of the process, to monitor the process, and to calibrate parameters to match the actual process. In this talk we will present two real-time approaches: 1) mcTangent (a model-constrained tangent slope learning) approach for learning dynamical systems; and 2) TNet (a model-constrained Tikhonov network) approach for learning inverse solutions. Both theoretical and numerical results for various problems, including the transport, heat, Burgers, and Navier-Stokes equations, will be presented.

Biography

Tan Bui-Thanh is an associate professor, and the endowed William J Murray Jr. Fellow in Engineering No. 4, of the Oden Institute for Computational Engineering & Sciences, and the Department of Aerospace Engineering & Engineering Mechanics at the University of Texas at Austin. Bui-Thanh obtained his PhD from the Massachusetts Institute of Technology in 2007, Master of Sciences from the Singapore MIT-Alliance in 2003, and Bachelor of Engineering from the Ho Chi Minh City University of Technology (DHBK) in 2001. He has decades of experience and expertise in multidisciplinary research across the boundaries of different branches of computational science, engineering, and mathematics. Bui-Thanh is currently a Co-Director of the Center for Scientific Machine Learning at the Oden Institute. He is a former elected vice president of the SIAM Texas-Louisiana Section, and currently the elected secretary of the SIAM SIAG/CSE. Bui-Thanh is an NSF (OAC/DMS) early CAREER recipient, a recipient of the Oden Institute Distinguished Research Award, and a two-time winner of the Moncrief Faculty Challenging Award.

CS&E Colloquium: Muhao Chen

This week's speaker, Muhao Chen (University of Southern California), will be giving a talk titled, "Robust and Indirectly Supervised Knowledge Acquisition from Natural Language".

Abstract

Information extraction (IE) refers to the process of automatically determining the concepts and relations present in natural language text. IE is not only a fundamental task for evaluating a machine's ability to understand natural language; more importantly, it is the essential step for acquiring the structured knowledge representations required by knowledge-driven AI systems. Despite its importance, obtaining direct supervision for IE tasks is challenging because of the difficulty expert annotators face in locating complex structures in long documents. Therefore, a robust and accountable IE model has to be achievable with minimal and imperfect supervision. Toward this mission, this talk presents recent advances in machine learning and inference technologies that (i) grant robustness against noise and perturbation, (ii) prevent systematic errors caused by spurious correlations, and (iii) provide indirect supervision for label-efficient and logically consistent IE.

Biography

Muhao Chen is an Assistant Research Professor of Computer Science at USC, and the director of the USC Language Understanding and Knowledge Acquisition (LUKA) Lab (https://luka-group.github.io/). His research focuses on robust and minimally supervised machine learning for natural language understanding, structured data processing, and knowledge acquisition from unstructured data. His work has been recognized with an NSF CRII Award, faculty research awards from Amazon, Cisco and the Keston Foundation, an ACM SIGBio Best Student Paper Award and a best paper nomination at CoNLL. Dr. Chen obtained his Ph.D. degree from UCLA Department of Computer Science in 2019, and was a postdoctoral researcher at UPenn prior to joining USC.

CS&E Colloquium: Huaxiu Yao

This week's speaker, Huaxiu Yao (Stanford University), will be giving a talk titled, "Embracing Change: Tackling In-the-Wild Shifts in Machine Learning".

Abstract

The real-world deployment of machine learning algorithms often poses challenges due to shifts in data distributions and tasks. These shifts can lead to a degradation in model performance, as the model may not have seen such changes during training. It can also make it difficult for the model to generalize to new scenarios and can lead to poor performance in real-world applications. In this talk, I will present our research on building machine learning models that are unbiased, widely generalizable, and easily adaptable to different shifts. Specifically, I will first discuss our approach to learning unbiased models through selective augmentation for scenarios with subpopulation shifts. Second, I will also delve into the utilization of domain relational information to enhance model generalizability for arbitrary domain shifts. Then, I will present our techniques for quickly adapting models to new tasks with limited labeled data. Additionally, I will show our success practices for addressing shifts in real-world applications, such as in the healthcare, e-commerce, and transportation industries. The talk will also cover the remaining challenges and future research directions in this area.


Biography

Huaxiu Yao is a Postdoctoral Scholar in Computer Science at Stanford University, working with Prof. Chelsea Finn. Currently, his research interests focus on building machine learning models that are unbiased, widely generalizable, and easily adaptable to changing environments and tasks. He is also dedicated to applying these methods to solve real-world data science applications, such as healthcare, transportation, and online education. Huaxiu earned his Ph.D. degree from Pennsylvania State University. He has over 30 publications in leading machine learning and data science venues such as ICML, ICLR, and NeurIPS. He has organized and co-organized workshops at ICML and NeurIPS, and has served as a tutorial speaker at conferences such as KDD, AAAI, and IJCAI. Additionally, Huaxiu has extensive industry experience, having interned at companies such as Amazon Science and Salesforce Research. For more information, please visit https://huaxiuyao.mystrikingly.com/.

BICB Colloquium: Arslan Zaidi

BICB Colloquium Faculty Nomination Talks: Join us in person on the UMR campus in room 414, on the Twin Cities campus in MCB 2-122 or virtually at 5 p.m.
 
Arslan Zaidi is an Assistant Professor in the Institute for Health Informatics at the University of Minnesota. They will be giving a talk titled, "Interpreting heritability in admixed populations".
 

Abstract

Heritability is a fundamental concept in human genetics. Defined (in the narrow sense) as the proportion of phenotypic variance that is due to additive genetic effects, it is central to our ability to describe and make inferences about genetic variation underlying complex traits. For example, both the power to discover variants in genome-wide association studies and the upper limit of polygenic prediction accuracy are functions of heritability, making it important for the design of genetic studies. However, heritability is often estimated assuming the population is randomly mating, which is almost never true, especially in admixed and multi-ethnic cohorts. In this seminar, I will use theory and simulations to describe the behavior of heritability in admixed populations and evaluate the effect of the 'random mating' assumption on heritability estimation. I will discuss the implications of these results for genome-wide association studies, polygenic prediction, and our understanding of the genetic architecture of complex traits.
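For reference, the narrow-sense definition given in the abstract is conventionally written (under the standard quantitative-genetics variance decomposition, which itself assumes conditions such as random mating that the talk revisits) as:

```latex
% Narrow-sense heritability: the share of phenotypic variance
% attributable to additive genetic effects.
h^2 = \frac{V_A}{V_P}, \qquad V_P = V_A + V_D + V_I + V_E
```

where \(V_A\) is the additive genetic variance, \(V_D\) the dominance variance, \(V_I\) the epistatic (gene-gene interaction) variance, and \(V_E\) the environmental variance.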

CS&E Colloquium: Vishwanath Saragadam

This week's speaker, Vishwanath Saragadam (Rice University), will be giving a talk titled, "Co-designing Optics and Algorithms for High-dimensional Visual Computing".

Abstract

The past two decades have seen tremendous advances in computer vision, all powered by the ubiquitous RGB sensor. However, the richness of visual information goes beyond just the spatial dimensions. A complete description requires representing the data with multiple independent variables including space, angle, spectrum, polarization, and time. Building cameras that leverage these dimensions of light will not only dramatically improve vision applications of today, but will also be key to unlocking new capabilities across multiple domains such as robotics, augmented and virtual reality, biomedical imaging, security and biometrics, and environmental monitoring.

Cameras for sampling such high dimensions need to solve three big challenges: (1) how do we sense multiple dimensions efficiently, (2) how do we model the data concisely, and (3) how do we build algorithms that scale to such high dimensions? In this talk, I will introduce my work on high-dimensional visual computing, which focuses on sampling, modeling, and inferring visual data beyond RGB. My talk will focus on the synergy across three key thrusts: modeling high-dimensional interactions with neural representations, building new optics and cameras with meta-optics, and inferring while sensing with optical computing. The ideas presented in the talk will pave the way for a future of computer vision in which cameras do not merely sense but are part of the computing pipeline.


Biography

Vishwanath Saragadam is a postdoctoral researcher at Rice University with Prof. Richard G. Baraniuk and Prof. Ashok Veeraraghavan. Vishwanath's research is at the intersection of computational imaging, meta-optics, and neural representations, and focuses on co-designing optics, sensors, and algorithms for solving challenging problems in vision. He received his PhD from Carnegie Mellon University, advised by Prof. Aswin Sankaranarayanan, where his thesis won the A. G. Jordan outstanding thesis award in 2020. Vishwanath is a recipient of the best paper award at ICCP 2022, the Prabhu and Poonam Goel graduate fellowship in 2019, and an outstanding teaching assistant award in 2018. In his free time, Vishwanath likes following James Webb Telescope updates, capturing thermal images while baking, and biking long distances.

CS&E Colloquium: Zhutian Chen

This week's speaker, Zhutian Chen (Harvard), will be giving a talk titled, "When Data Meets Reality: Augmenting Dynamic Scenes with Visualizations".

Abstract

We live in a dynamic world that produces a growing volume of accessible data. Visualizing this data within its physical context can aid situational awareness, improve decision-making, enhance daily activities like driving and watching sports, and even save lives in tasks such as performing surgery or navigating hazardous environments. Augmented Reality (AR) offers a unique opportunity to achieve this contextualization of data by overlaying digital content onto the physical world. However, visualizing data in its physical context using AR devices (e.g., headsets or smartphones) is challenging for users due to the complexities involved in creating and accurately placing the visualizations within the physical world. This process can be even more pronounced in dynamic scenarios with temporal constraints.

In this talk, I will introduce a novel approach, which uses sports video streams as a testbed and proxy for dynamic scenes, to explore the design, implementation, and evaluation of AR visualization systems that enable users to efficiently visualize data in dynamic scenes. I will first present three systems that allow users to visualize data in sports videos through touch, natural language, and gaze interactions, and then discuss how these interaction techniques can be generalized to other AR scenarios. The designs of these systems collectively form a unified framework that serves as a preliminary solution for helping users visualize data in dynamic scenes using AR. I will next share my latest progress in using Virtual Reality (VR) simulations as a more advanced testbed than videos for AR visualization research. Finally, building on my framework and testbeds, I will describe my long-term vision and roadmap for using AR visualizations to make our world more connected, accessible, and efficient.

Biography

Zhutian Chen is a Postdoctoral Fellow in the Visual Computing Group at Harvard University. His research is at the intersection of Data Visualization, Human-Computer Interaction, and Augmented Reality, with a focus on advancing human-data interaction in everyday activities. His research has been published as full papers in top venues such as IEEE VIS, ACM CHI, and TVCG, and has received three best paper nominations at IEEE VIS, the premier venue in data visualization. Before joining Harvard, he was a Postdoctoral Fellow in the Design Lab at UC San Diego. Zhutian received his Ph.D. in Computer Science and Engineering from the Hong Kong University of Science and Technology.