Past events

ML Seminar: Numerical understanding of neural networks: from representation to learning dynamics

CS-IDEA Self Defense Seminar

The CS-IDEA Committee is hosting a self-defense seminar for students, researchers, and staff in the Department of Computer Science & Engineering. There will be some physical contact with others during the seminar, mainly wrist grabs. 

The event will be held on Monday, April 15 from 9-11 a.m. at the University Recreation and Wellness Center - Multipurpose Room 2. 

Pizza and beverages will be provided for participants at the conclusion of the event. The event is free, but registration is required; please RSVP by Friday, April 12.

The Computer Science & Engineering (CS&E) department is committed to recruiting and supporting a diverse community of students, staff, and faculty, and to helping everyone in that community thrive. This requires deliberate work to build an inclusive and supportive environment for people from historically underrepresented and non-traditional backgrounds. The Computer Science Inclusivity, Diversity, Equity, and Advocacy (CS-IDEA) committee works to attract and retain a diverse community of students, staff, and faculty in the Department of Computer Science & Engineering at the University of Minnesota, and to help all of its members thrive.

MSSE Information Session (Virtual)

Interested in learning more about the University of Minnesota's Master of Science in Software Engineering program?

Reserve a spot at an upcoming virtual information session to get all your questions answered.

Info sessions are recommended for those who have at least 1-2 years of software engineering experience.

During each session, MSSE staff will review:

  • Requirements (general)
  • Applying
  • Prerequisite requirements
  • What makes a strong applicant
  • Funding
  • Resources
  • Common questions
  • Questions from attendees

RSVP for the next information session now

ML Seminar: Renbo Zhao

The UMN Machine Learning Seminar Series brings together faculty, students, and local industrial partners who are interested in the theoretical, computational, and applied aspects of machine learning, to pose problems, exchange ideas, and foster collaborations. The talks are every Tuesday from 11 a.m. - 12 p.m. during the Spring 2024 semester.

This week's speaker, Renbo Zhao (University of Iowa), will be giving a talk.

CS&E Colloquium: Designing Algorithms for Massive Graphs

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Yu Chen (EPFL), will be giving a talk titled "Designing Algorithms for Massive Graphs".

Abstract

As the scale of the problems we want to solve in real life grows, it may become infeasible to store the entire input, or even to read all of it within a reasonable time. In these cases, classical algorithms, even those that run in linear time and linear space, may no longer be practical. To deal with this situation, we need to design algorithms that use much less space or time than the input size. We call this kind of algorithm a sublinear algorithm.
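As a toy illustration of the idea (my own sketch, not an example from the talk): the mean of a huge input can be estimated by reading only a small random sample of its entries, so the running time depends on the desired accuracy rather than on the input size. The `read` callback and sample count below are illustrative choices.

```python
import random

def approx_mean(read, n, samples=2000, seed=0):
    """Estimate the mean of an n-element input using only `samples`
    random reads -- sublinear in n whenever samples << n."""
    rng = random.Random(seed)
    total = sum(read(rng.randrange(n)) for _ in range(samples))
    return total / samples

# A "huge" 0/1 input that we only ever touch through read();
# its true mean is 0.25, but we read just 2,000 of its 1,000,000 entries.
n = 10**6
data = [1 if i % 4 == 0 else 0 for i in range(n)]
est = approx_mean(data.__getitem__, n)
```

By standard concentration bounds, the estimate is within an additive error of roughly 1/sqrt(samples) with high probability, independent of n.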

My primary research interest is in designing sublinear algorithms for combinatorial problems and proving lower bounds to understand the limits of sublinear computation. I also study graph sparsification, an important technique for designing sublinear algorithms on graphs; it is usually used as a pre-processing step to speed up algorithms.

In this talk, I'll cover some of my work on sublinear algorithms and graph sparsification, and go into more detail on my recent work on vertex sparsifiers.

Biography

I'm a postdoc in the theory group at EPFL. I obtained my PhD from the University of Pennsylvania, where I was advised by Sampath Kannan and Sanjeev Khanna. Before that, I did my undergraduate studies at Shanghai Jiao Tong University. I'm a recipient of the Morris and Dorothy Rubinoff Award at the University of Pennsylvania and the Best Paper Award at SODA 2019.

CS&E Colloquium: Co-Designing Algorithms and Hardware for Efficient Machine Learning (ML): Advancing the Democratization of ML

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Caiwen Ding (University of Connecticut), will be giving a talk titled, "Co-Designing Algorithms and Hardware for Efficient Machine Learning (ML): Advancing the Democratization of ML". 

Abstract

The rapid deployment of ML faces challenges such as prolonged computation and a high memory footprint on many systems. In this talk, we will present several ML acceleration frameworks built through algorithm-hardware co-design on various computing platforms. The first part presents a fine-grained crossbar-based ML accelerator. Instead of attempting to map the trained positive/negative weights afterwards, our key principle is to proactively ensure that all weights in the same column of a crossbar have the same sign, reducing area. We divide the crossbar into sub-arrays, which provides a unique opportunity for input zero-bit skipping. Next, we focus on co-designing the Transformer architecture, introducing on-the-fly attention and attention-aware pruning to significantly reduce runtime latency. We then turn to co-designing graph neural network training: to exploit training sparsity and assist explainable ML, we propose a hardware-friendly MaxK nonlinearity and tailor a GPU kernel for it. Our methods outperform the state of the art on a range of tasks. Finally, we will discuss today's challenges around secure edge AI and large language model (LLM)-aided agile hardware design, and outline our research plans for addressing these issues.
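To make the MaxK idea concrete, here is a minimal NumPy sketch of a top-k-style nonlinearity: keep the k largest activations in each row and zero out the rest, producing the structured sparsity that a tailored GPU kernel can exploit. This is my illustrative reading of the idea, not the speaker's actual implementation.

```python
import numpy as np

def maxk(x, k):
    """MaxK-style nonlinearity (illustrative): keep the k largest
    entries of each row of x and zero out the rest."""
    out = np.zeros_like(x)
    # argpartition puts the indices of the k largest entries of each
    # row into the last k positions, without a full sort.
    idx = np.argpartition(x, -k, axis=1)[:, -k:]
    rows = np.arange(x.shape[0])[:, None]
    out[rows, idx] = x[rows, idx]
    return out

x = np.array([[3.0, -1.0, 5.0, 0.5],
              [0.2,  7.0, -4.0, 1.0]])
y = maxk(x, k=2)  # each row keeps exactly its 2 largest activations
```

Unlike ReLU, whose output sparsity varies with the data, MaxK-style selection fixes the number of nonzeros per row, which is what makes the memory access pattern hardware-friendly.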

Biography

Caiwen Ding is an assistant professor in the School of Computing at the University of Connecticut (UConn). He received his Ph.D. from Northeastern University, Boston, in 2019, supervised by Prof. Yanzhi Wang. His research interests include efficient embedded and high-performance systems for machine learning, machine learning for hardware design, and efficient privacy-preserving machine learning. His work has been published in high-impact venues (e.g., DAC, ICCAD, ASPLOS, ISCA, MICRO, HPCA, SC, FPGA, Oakland, NeurIPS, ICCV, IJCAI, AAAI, ACL, EMNLP). He is a recipient of the 2024 NSF CAREER Award, an Amazon Research Award, and a Cisco Research Award. He received best paper nominations at DATE 2018 and DATE 2021, the best paper award at the DL-Hardware Co-Design for AI Acceleration (DCAA) workshop at AAAI 2023, an outstanding student paper award at HPEC 2023, a publicity paper at DAC 2022, and the 2021 Excellence in Teaching Award from the UConn Provost. His team won first place in accuracy and fourth place overall at the 2022 TinyML Design Contest at ICCAD. He was ranked among Stanford's World's Top 2% Scientists in 2023. His research has been funded mainly by NSF, DOE, DOT, USDA, SRC, and multiple industrial sponsors.

Thirst for Knowledge: AI in Health and Medicine

Join the Department of Computer Science & Engineering (CS&E) for this all-alumni event to discuss AI in health and medicine, featuring Chad Myers, Ju Sun, Yogatheesan Varatharajah, and Qianwen Wang. Enjoy hosted beverages and appetizers, and the chance to reconnect with former classmates, colleagues, instructors, and friends. All alumni of the University of Minnesota CS&E programs (Computer Science, Data Science, MSSE) are invited to attend, and guests are welcome. 

There is no charge to attend our event, but pre-registration is required. 

About the Program

While tools like ChatGPT allow the public to use AI for various tasks, computer scientists around the world are hard at work applying AI to some of the most critical problems in society. CS&E researchers are applying AI techniques to combat problems in the healthcare space, such as clinician burnout, disease prediction, and data imbalance issues in biomedical data science.

Learn more about our AI efforts at z.umn.edu/AIforchange 
Check out our medical AI initiatives at z.umn.edu/MedicalAIPrograms 

ML Seminar: Policy Learning Methods for Confounded POMDPs

The UMN Machine Learning Seminar Series brings together faculty, students, and local industrial partners who are interested in the theoretical, computational, and applied aspects of machine learning, to pose problems, exchange ideas, and foster collaborations. The talks are every Tuesday from 11 a.m. - 12 p.m. during the Spring 2024 semester.

This week's speaker, Zhengling Qi (George Washington University), will be giving a talk, titled "Policy Learning Methods for Confounded POMDPs".

Abstract

In this talk, I will present a policy gradient method for confounded partially observable Markov decision processes (POMDPs) with continuous state and observation spaces in the offline setting. We first establish a novel identification result for non-parametrically estimating any history-dependent policy gradient under POMDPs using offline data. The identification enables us to solve a sequence of conditional moment restrictions and adopt a min-max learning procedure with general function approximation for estimating the policy gradient. We then provide a finite-sample, non-asymptotic bound for estimating the gradient uniformly over a pre-specified policy class, in terms of the sample size, the length of the horizon, the concentrability coefficient, and the measure of ill-posedness in solving the conditional moment restrictions. Lastly, by deploying the proposed gradient estimate in a gradient ascent algorithm, we show the global convergence of the proposed algorithm in finding the history-dependent optimal policy under some technical conditions. To the best of our knowledge, this is the first work studying policy gradient methods for POMDPs in the offline setting. If time permits, I will also describe a model-based method for confounded POMDPs.

Biography

Zhengling Qi is an assistant professor in the School of Business at the George Washington University. He received his PhD from the Department of Statistics and Operations Research at the University of North Carolina at Chapel Hill. His research focuses on statistical machine learning and related non-convex optimization, mainly on reinforcement learning and causal inference problems.

CS&E Colloquium: Modern Algorithms for Massive Graphs: Structure and Compression

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Zihan Tan (Rutgers University), will be giving a talk titled "Modern Algorithms for Massive Graphs: Structure and Compression."

Abstract

In the era of big data, the significant growth in graph size renders numerous traditional algorithms, including those with polynomial or even linear time complexity, inefficient. Therefore, we need novel approaches for efficiently processing massive graphs. In this talk, I will discuss two modern approaches towards this goal: structure exploitation and graph compression. I will first show how to utilize graph structure to design better approximation algorithms, showcasing my work on the Graph Crossing Number problem. I will then show how to compress massive graphs into smaller ones while preserving their flow/cut/distance structures and thereby obtaining faster algorithms.

Biography

Zihan Tan is a postdoctoral associate at DIMACS, Rutgers University. Before joining DIMACS, he obtained his Ph.D. from the University of Chicago, where he was advised by Julia Chuzhoy. He is broadly interested in theoretical computer science, with a focus on graph algorithms and graph theory.

CS&E Colloquium: The marriage of (provable) algorithm design and machine learning

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Sandeep Silwal (MIT), will be giving a talk titled "The marriage of (provable) algorithm design and machine learning".

Abstract

The talk is motivated by two questions at the interface of algorithm design and machine learning: (1) How can we leverage the predictive power of machine learning in algorithm design? and (2) How can algorithms alleviate the computational demands of modern machine learning?
 
Towards the first question, I will demonstrate the power of data-driven and learning-augmented algorithm design. I will argue that data should be a central component in the algorithm design process itself. Indeed in many instances, inputs are similar across different algorithm executions. Thus, we can hope to extract information from past inputs or other learned information to improve future performance. Towards this end, I will zoom in on a fruitful template for incorporating learning into algorithm design and highlight a success story in designing space efficient data structures for processing large data streams. I hope to convey that learning-augmented algorithm design should be a tool in every algorithmist's toolkit.
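As one illustrative instance of this template (a sketch of the general idea, not necessarily the speaker's construction): a streaming frequency estimator such as a count-min sketch can be paired with a learned predictor of heavy hitters. Predicted-heavy keys get exact counters; the long tail shares the sketch, so a few huge counts no longer pollute the estimates of everything else. The `predict_heavy` function below is a hard-coded stand-in for a trained model.

```python
import random

class CountMin:
    """Standard count-min sketch: depth hash rows of size width.
    Estimates may overcount due to collisions but never undercount."""
    def __init__(self, width, depth, seed=0):
        rng = random.Random(seed)
        self.width = width
        self.salts = [rng.getrandbits(32) for _ in range(depth)]
        self.tables = [[0] * width for _ in range(depth)]

    def _slot(self, salt, key):
        return hash((salt, key)) % self.width

    def add(self, key, count=1):
        for salt, table in zip(self.salts, self.tables):
            table[self._slot(salt, key)] += count

    def query(self, key):
        return min(table[self._slot(salt, key)]
                   for salt, table in zip(self.salts, self.tables))

class LearnedCountMin:
    """Learning-augmented variant (illustrative): route keys the
    predictor flags as heavy to exact counters; sketch the rest."""
    def __init__(self, predict_heavy, width, depth):
        self.predict_heavy = predict_heavy
        self.exact = {}
        self.sketch = CountMin(width, depth)

    def add(self, key, count=1):
        if self.predict_heavy(key):
            self.exact[key] = self.exact.get(key, 0) + count
        else:
            self.sketch.add(key, count)

    def query(self, key):
        if self.predict_heavy(key):
            return self.exact.get(key, 0)
        return self.sketch.query(key)

# Toy stream: one heavy key plus a long tail of light keys.
predict = lambda k: k.startswith("hot")   # stand-in for a learned model
lcm = LearnedCountMin(predict, width=64, depth=4)
for _ in range(100):
    lcm.add("hot1")
for i in range(50):
    lcm.add(f"tail{i}")
```

With a good predictor, the heavy key is counted exactly, and the sketch's collision error is bounded by the (small) tail mass rather than the whole stream.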
 
Then I will discuss algorithms for scalable ML computations to address the second question. I will focus on my work on understanding global similarity relationships in large high-dimensional datasets, encoded in a similarity matrix. By exploiting the geometric structure of specific similarity functions, such as distance or kernel functions, we can understand the capabilities, and the fundamental limitations, of computing on similarity matrices. Overall, my main message is that sublinear algorithm design principles are instrumental in designing scalable algorithms for big data.
 
I will conclude with some exciting directions in pushing the boundaries of learning-augmented algorithms, as well as new algorithmic challenges in scalable computations for faster ML.

Biography

Sandeep is a final-year PhD student at MIT, advised by Piotr Indyk. His interests are broadly in fast algorithm design. Recently, he has been working at the intersection of machine learning and classical algorithms, designing provable algorithms in various ML settings, such as efficient algorithms for processing large datasets, as well as using ML to inspire algorithm design.