Past events

CS&E Colloquium: Designing Algorithms for Massive Graphs

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Yu Chen (EPFL), will be giving a talk titled, "Designing Algorithms for Massive Graphs".

Abstract

As the scale of the problems we want to solve in real life grows, it may become infeasible to store the whole input, or even to spend the time needed to read it in full. In these cases, classical algorithms, even those that run in linear time and linear space, may no longer be feasible options. To deal with this situation, we need to design algorithms that use much less space or time than the input size. We call this kind of algorithm a sublinear algorithm.
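The idea can be illustrated with a toy sampling estimator (my own example, not from the talk): to estimate what fraction of a huge input satisfies some property, it suffices to probe a few random positions, so the work done is independent of the input size.

```python
import random

def estimate_fraction(query, n, samples=1000):
    """Estimate the fraction of positions in [0, n) satisfying `query`.

    Reads only `samples` positions out of n, so the running time is
    sublinear: it does not depend on the input size n at all.
    """
    hits = sum(query(random.randrange(n)) for _ in range(samples))
    return hits / samples

# Toy input: a huge virtual bit-array where every third bit is 1.
# We never materialize it; we only answer point queries.
n = 10**12
bit = lambda i: 1 if i % 3 == 0 else 0

random.seed(0)
print(estimate_fraction(bit, n))  # close to 1/3
```

Standard concentration bounds make the estimate accurate with high probability, even though only a vanishing fraction of the input is ever read.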

My primary research interest is in designing sublinear algorithms for combinatorial problems and proving lower bounds to understand the limits of sublinear computation. I also study graph sparsification, an important technique for designing sublinear algorithms on graphs; it is typically used as a pre-processing step to speed up other algorithms.

In this talk, I’ll cover some of my work in sublinear algorithms and graph sparsification. I’ll give more details on my recent work on vertex sparsifiers.

Biography

I'm a postdoc in the theory group at EPFL. I obtained my PhD from the University of Pennsylvania, where I was advised by Sampath Kannan and Sanjeev Khanna. Before that, I did my undergraduate studies at Shanghai Jiao Tong University. I’m a recipient of the Morris and Dorothy Rubinoff Award at the University of Pennsylvania and the Best Paper Award at SODA ’19.

CS&E Colloquium: Co-Designing Algorithms and Hardware for Efficient Machine Learning (ML): Advancing the Democratization of ML

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Caiwen Ding (University of Connecticut), will be giving a talk titled, "Co-Designing Algorithms and Hardware for Efficient Machine Learning (ML): Advancing the Democratization of ML". 

Abstract

The rapid deployment of ML faces challenges such as prolonged computation and high memory footprints. In this talk, we will present several ML acceleration frameworks built through algorithm-hardware co-design on various computing platforms. The first part presents a fine-grained crossbar-based ML accelerator. Instead of attempting to map the trained positive/negative weights afterwards, our key principle is to proactively ensure that all weights in the same column of a crossbar have the same sign, reducing area. We divide the crossbar into sub-arrays, providing a unique opportunity for input zero-bit skipping. Next, we focus on co-designing the Transformer architecture, introducing on-the-fly attention and attention-aware pruning to significantly reduce runtime latency. Then, we turn to co-designing graph neural network training: to exploit training sparsity and assist explainable ML, we propose a hardware-friendly MaxK nonlinearity and tailor a GPU kernel for it. Our methods outperform the state of the art on a range of tasks. Finally, we will discuss today's challenges related to secure edge AI and large language model (LLM)-aided agile hardware design, and outline our research plans aimed at addressing these issues.
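As a rough illustration of the MaxK idea (a sketch of the nonlinearity only; the GPU kernel and training machinery are the paper's contribution), keeping the k largest activations per row yields a fixed, predictable sparsity pattern:

```python
import numpy as np

def maxk(x, k):
    """MaxK nonlinearity: keep the k largest entries of each row, zero the rest.

    Unlike ReLU, the output has exactly k nonzeros per row, so the sparsity
    pattern is fixed and predictable -- the property that makes it
    hardware-friendly for a tailored kernel.
    """
    out = np.zeros_like(x)
    idx = np.argpartition(x, -k, axis=-1)[..., -k:]   # indices of top-k per row
    np.put_along_axis(out, idx, np.take_along_axis(x, idx, axis=-1), axis=-1)
    return out

x = np.array([[3.0, -1.0, 5.0, 0.5],
              [2.0,  2.5, -4.0, 1.0]])
print(maxk(x, 2))
# row 0 keeps 5.0 and 3.0; row 1 keeps 2.5 and 2.0; everything else is 0
```

Because only k entries per row survive, the backward pass touches a small, known set of positions, which is what a sparsity-aware kernel can exploit.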

Biography

Caiwen Ding is an assistant professor in the School of Computing at the University of Connecticut (UConn). He received his Ph.D. from Northeastern University, Boston, in 2019, supervised by Prof. Yanzhi Wang. His research interests include efficient embedded and high-performance systems for machine learning, machine learning for hardware design, and efficient privacy-preserving machine learning. His work has been published in high-impact venues (e.g., DAC, ICCAD, ASPLOS, ISCA, MICRO, HPCA, SC, FPGA, Oakland, NeurIPS, ICCV, IJCAI, AAAI, ACL, EMNLP). He is a recipient of the 2024 NSF CAREER Award, an Amazon Research Award, and a Cisco Research Award. He received best paper nominations at DATE 2018 and DATE 2021, the best paper award at the DL-Hardware Co-Design for AI Acceleration (DCAA) workshop at AAAI 2023, an outstanding student paper award at HPEC 2023, a publicity paper at DAC 2022, and the 2021 Excellence in Teaching Award from the UConn Provost. His team won first place in accuracy and fourth place overall at the 2022 TinyML Design Contest at ICCAD. He was ranked among Stanford’s World’s Top 2% Scientists in 2023. His research has been mainly funded by NSF, DOE, DOT, USDA, SRC, and multiple industrial sponsors.

Thirst for Knowledge: AI in Health and Medicine

Join the Department of Computer Science & Engineering (CS&E) for this all-alumni event to discuss AI in health and medicine, featuring Chad Myers, Ju Sun, Yogatheesan Varatharajah, and Qianwen Wang. Enjoy hosted beverages and appetizers, and the chance to reconnect with former classmates, colleagues, instructors, and friends. All alumni of the University of Minnesota CS&E programs (Computer Science, Data Science, MSSE) are invited to attend, and guests are welcome. 

There is no charge to attend our event, but pre-registration is required. 

About the Program

While tools like ChatGPT allow the public to use AI for various tasks, computer scientists around the world are hard at work applying AI to some of the most critical problems in society. CS&E researchers are applying AI techniques to combat problems in the healthcare space, such as clinician burnout, disease prediction, and data imbalance in biomedical data science.

Learn more about our AI efforts at z.umn.edu/AIforchange 
Check out our medical AI initiatives at z.umn.edu/MedicalAIPrograms 

ML Seminar: Policy Learning Methods for Confounded POMDPs

The UMN Machine Learning Seminar Series brings together faculty, students, and local industrial partners who are interested in the theoretical, computational, and applied aspects of machine learning, to pose problems, exchange ideas, and foster collaborations. The talks are every Tuesday from 11 a.m. - 12 p.m. during the Spring 2024 semester.

This week's speaker, Zhengling Qi (George Washington University), will be giving a talk, titled "Policy Learning Methods for Confounded POMDPs".

Abstract

In this talk, I will present a policy gradient method for confounded partially observable Markov decision processes (POMDPs) with continuous state and observation spaces in the offline setting. We first establish a novel identification result to non-parametrically estimate any history-dependent policy gradient under POMDPs using the offline data. The identification enables us to solve a sequence of conditional moment restrictions and adopt the min-max learning procedure with general function approximation for estimating the policy gradient. We then provide a finite-sample non-asymptotic bound for estimating the gradient uniformly over a pre-specified policy class in terms of the sample size, length of the horizon, the concentrability coefficient, and the measure of ill-posedness in solving the conditional moment restrictions. Lastly, by deploying the proposed gradient estimation in the gradient ascent algorithm, we show the global convergence of the proposed algorithm in finding the history-dependent optimal policy under some technical conditions. To the best of our knowledge, this is the first work studying the policy gradient method for POMDPs under the offline setting. If time permits, I will describe a model-based method for confounded POMDPs.
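The last step can be pictured with a generic gradient-ascent template (an illustrative sketch only; the paper's actual min-max gradient estimator from offline data is abstracted here as a black box):

```python
import numpy as np

def policy_gradient_ascent(grad_estimator, theta0, steps=100, lr=0.1):
    """Generic gradient ascent over policy parameters theta.

    `grad_estimator(theta)` stands in for an estimator of the policy
    gradient (in the talk's setting, the min-max estimator built from
    offline data); here it is treated as a black box.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta = theta + lr * grad_estimator(theta)  # ascend the objective
    return theta

# Toy sanity check with an exact gradient: maximize -(theta - 3)^2,
# whose gradient is -2 * (theta - 3); the maximizer is theta = 3.
theta_star = policy_gradient_ascent(lambda th: -2 * (th - 3), np.array([0.0]))
print(theta_star)  # converges to 3
```

The convergence analysis in the talk concerns exactly this loop when the exact gradient is replaced by the estimated one, uniformly over the policy class.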

Biography

Zhengling Qi is an assistant professor at the School of Business, the George Washington University. He received his PhD from the Department of Statistics and Operations Research at the University of North Carolina at Chapel Hill. His research focuses on statistical machine learning and related non-convex optimization, mainly on reinforcement learning and causal inference problems.

CS&E Colloquium: Modern Algorithms for Massive Graphs: Structure and Compression

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Zihan Tan (Rutgers University), will be giving a talk titled "Modern Algorithms for Massive Graphs: Structure and Compression."

Abstract

In the era of big data, the significant growth in graph size renders numerous traditional algorithms, including those with polynomial or even linear time complexity, inefficient. Therefore, we need novel approaches for efficiently processing massive graphs. In this talk, I will discuss two modern approaches towards this goal: structure exploitation and graph compression. I will first show how to utilize graph structure to design better approximation algorithms, showcasing my work on the Graph Crossing Number problem. I will then show how to compress massive graphs into smaller ones while preserving their flow/cut/distance structures and thereby obtaining faster algorithms.

Biography

Zihan Tan is a postdoctoral associate at DIMACS, Rutgers University. Before joining DIMACS, he obtained his Ph.D. from the University of Chicago, where he was advised by Julia Chuzhoy. He is broadly interested in theoretical computer science, with a focus on graph algorithms and graph theory.

CS&E Colloquium: The marriage of (provable) algorithm design and machine learning

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Sandeep Silwal (MIT), will be giving a talk titled "The marriage of (provable) algorithm design and machine learning".

Abstract

The talk is motivated by two questions at the interface of algorithm design and machine learning: (1) How can we leverage the predictive power of machine learning in algorithm design? and (2) How can algorithms alleviate the computational demands of modern machine learning?
 
Towards the first question, I will demonstrate the power of data-driven and learning-augmented algorithm design. I will argue that data should be a central component in the algorithm design process itself. Indeed, in many instances, inputs are similar across different algorithm executions, so we can hope to extract information from past inputs, or other learned information, to improve future performance. Towards this end, I will zoom in on a fruitful template for incorporating learning into algorithm design and highlight a success story in designing space-efficient data structures for processing large data streams. I hope to convey that learning-augmented algorithm design should be a tool in every algorithmist's toolkit.
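One way to picture the learning-augmented streaming template (an illustrative sketch, not the specific construction from the talk): route the items a learned oracle predicts to be heavy to exact counters, and let everything else share a small count-min sketch.

```python
import random

class LearnedCountMin:
    """Count-min sketch augmented with a (learned) heavy-item oracle.

    Items the oracle predicts to be heavy get exact counters; everything
    else shares the small sketch, whose estimates only overestimate.
    The oracle here is a plain set, standing in for a trained predictor.
    """
    def __init__(self, width, depth, predicted_heavy=()):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]
        self.salts = [random.Random(i).randrange(1 << 30) for i in range(depth)]
        self.exact = {x: 0 for x in predicted_heavy}

    def _cell(self, row, x):
        return hash((self.salts[row], x)) % self.width

    def add(self, x, c=1):
        if x in self.exact:
            self.exact[x] += c            # predicted-heavy: exact counter
        else:
            for r in range(self.depth):   # light items share the sketch
                self.table[r][self._cell(r, x)] += c

    def estimate(self, x):
        if x in self.exact:
            return self.exact[x]          # no collision error at all
        return min(self.table[r][self._cell(r, x)] for r in range(self.depth))
```

Since the few heavy items no longer collide with anything, the shared sketch only has to absorb the light tail, which shrinks the error for a given space budget when the predictions are good.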
 
Then I will discuss algorithms for scalable ML computations to address the second question. I will focus on my work on understanding global similarity relationships in large high-dimensional datasets, encoded in a similarity matrix. By exploiting the geometric structure of specific similarity functions, such as distance or kernel functions, we can understand the capabilities -- and fundamental limitations -- of computing on similarity matrices. Overall, my main message is that sublinear algorithm design principles are instrumental in designing scalable algorithms for big data.
 
I will conclude with some exciting directions in pushing the boundaries of learning-augmented algorithms, as well as new algorithmic challenges in scalable computations for faster ML.

Biography

Sandeep is a final-year PhD student at MIT, advised by Piotr Indyk. His interests are broadly in fast algorithm design. Recently, he has been working at the intersection of machine learning and classical algorithms, designing provable algorithms in various ML settings, such as efficient algorithms for processing large datasets, as well as using ML to inspire algorithm design.

ML Seminar: Scientific Innovations in the Age of Generative AI

CS&E Colloquium: Digital Safety and Security for Survivors of Technology-Mediated Harms

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Emily Tseng (Cornell Tech), will be giving a talk titled, "Digital Safety and Security for Survivors of Technology-Mediated Harms".

Abstract

Platforms, devices, and algorithms are increasingly weaponized to control and harass the most vulnerable among us. Some of these harms occur at the individual and interpersonal level: for example, abusers in intimate partner violence (IPV) use smartphones and social media to surveil and stalk their victims. Others are more subtle, at the level of social structure: for example, in organizations, workplace technologies can inadvertently scaffold exploitative labor practices. This talk will discuss my research (1) investigating these harms via online measurement studies, (2) building interventions to directly assist survivors with their security and privacy; and (3) instrumenting these interventions as observatories, to enable scientific research into new types of harms as attackers and technologies evolve. I will close by sharing my vision for centering inclusion and equity in digital safety, security and privacy, towards brighter technological futures for us all.

Biography

Emily Tseng is a PhD candidate in Information Science at Cornell University. Her research develops the systems, interventions, and design principles we need to make digital technology safe and affirming for everyone. Emily’s work has been published at top-tier venues in human-computer interaction (ACM CHI, CSCW) and computer security and privacy (USENIX Security, IEEE Oakland). For 5 years, she has worked as a researcher-practitioner with the Clinic to End Tech Abuse, where her work has enabled specialized security services for over 500 survivors of intimate partner violence (IPV). Emily is the recipient of a Microsoft Research PhD Fellowship, Rising Stars in EECS, Best Paper Awards at CHI, CSCW, and USENIX Security, and third place in the Internet Defense Prize. She has interned at Google and with the Social Media Collective at Microsoft Research. She holds a Bachelor’s from Princeton University.

CS&E Colloquium: Toward Practical Quantum Computing Systems with Intelligent Cross-Stack Co-Design

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Hanrui Wang (MIT), will be giving a talk titled "Toward Practical Quantum Computing Systems with Intelligent Cross-Stack Co-Design".

Abstract

Quantum Computing (QC) has the potential to solve classically hard problems with greater speed and efficiency, and we have witnessed exciting advancements in QC in recent years. However, there remain substantial gaps between the application requirements and the available devices in terms of reliability, software framework support, and efficiency. To close the gaps and fully unleash quantum power, it is critical to perform AI-enhanced co-design across the technology stack, from algorithm and program design to compilation and hardware architecture.

In this talk, I will provide an overview of my contributions to architecture and systems support for quantum computing. At the algorithm and program level, I will introduce QuantumNAS, a framework for quantum program structure (ansatz) design for variational quantum algorithms. QuantumNAS adopts an intelligent search engine and utilizes the noisy feedback from quantum devices to search for a program structure and qubit mapping tailored to specific hardware, leading to notable resource reductions and reliability enhancements. Then, at the compilation and control level, I will discuss Q-Pilot, a compilation framework for the Field-Programmable Qubit Array (FPQA) implemented by emerging reconfigurable atom arrays. This framework leverages movable atoms for routing two-qubit gates and generates atom movements and gate schedules with high scalability and parallelism. On the hardware architecture and design automation front, I will present SpAtten, an algorithm-architecture-circuit co-design aimed at Transformer-based quantum error correction decoding. SpAtten supports on-the-fly error pattern pruning to eliminate less critical inputs and boost efficiency. Finally, I will conclude with an overview of my ongoing work and my research vision toward building software and hardware support for practical quantum advantage.
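The on-the-fly pruning idea can be sketched in a few lines (an illustrative toy, not the SpAtten accelerator itself): score each input by its cumulative attention and keep only the top-k before later stages.

```python
import numpy as np

def prune_by_importance(tokens, attn, keep):
    """On-the-fly pruning: drop inputs that receive little attention.

    `attn` is a (heads x queries x tokens) attention-probability tensor;
    a token's importance is its attention summed over heads and queries.
    Only the `keep` most attended tokens survive to later stages.
    """
    importance = attn.sum(axis=(0, 1))              # one score per token
    kept = np.sort(np.argsort(importance)[-keep:])  # top-keep, in original order
    return tokens[kept], kept

# Toy example: one head, one query, four inputs; tokens 2 and 3 dominate.
tokens = np.arange(4)
attn = np.array([[[0.1, 0.05, 0.6, 0.25]]])
pruned, kept = prune_by_importance(tokens, attn, 2)
print(kept)  # [2 3]
```

Pruning this way shrinks every downstream computation, which is where the efficiency gain comes from; the hardware contribution is doing the scoring and selection cheaply on the fly.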

Biography

Hanrui Wang is a Ph.D. candidate at MIT EECS, advised by Prof. Song Han. His research focuses on architecture- and system-level support for quantum computing, and AI for quantum. His work appears in conferences such as MICRO, HPCA, QCE, DAC, ICCAD, and NeurIPS and has been recognized by the QCE 2023 Best Paper Award, the ICML RL4RL 2019 Best Paper Award, the ACM Student Research Competition 1st Place Award, a Best Poster Award at the NSF AI Institute, a Best Demo Award at the DAC University Demo, MLCommons Rising Star in ML and Systems, and ISSCC 2024 Rising Star. His work is supported by the Qualcomm Innovation Fellowship, the Baidu Fellowship, and the Unitary Fund. He is the creator of the TorchQuantum library, which has been adopted by the IBM Qiskit Ecosystem and the PyTorch Ecosystem, with 1.2K+ stars on GitHub. He is passionate about teaching and has served as a course developer and co-instructor for a new course on efficient ML and quantum computing at MIT. He is also the co-founder of the QuCS "Quantum Computer Systems" forum for quantum education.

ML Seminar: Vertical Reasoning Enhanced Learning, Generation and Scientific Discovery