Upcoming events

CS&E Colloquium: The marriage of (provable) algorithm design and machine learning

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Sandeep Silwal (MIT), will be giving a talk titled "The marriage of (provable) algorithm design and machine learning".

Abstract

The talk is motivated by two questions at the interface of algorithm design and machine learning: (1) How can we leverage the predictive power of machine learning in algorithm design? and (2) How can algorithms alleviate the computational demands of modern machine learning?
 
Towards the first question, I will demonstrate the power of data-driven and learning-augmented algorithm design. I will argue that data should be a central component in the algorithm design process itself. Indeed, in many instances, inputs are similar across different algorithm executions. Thus, we can hope to extract information from past inputs or other learned information to improve future performance. Towards this end, I will zoom in on a fruitful template for incorporating learning into algorithm design and highlight a success story in designing space-efficient data structures for processing large data streams. I hope to convey that learning-augmented algorithm design should be a tool in every algorithmist's toolkit.
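To make this template concrete, here is a minimal sketch (a generic illustration in the spirit of learned count-min sketches, not necessarily the construction from the talk) of a learning-augmented frequency estimator for data streams: a predicted-heavy-hitter oracle (assumed given, e.g. a small trained classifier) routes frequent items to exact counters, while the long tail shares a standard count-min sketch.

```python
import hashlib

class LearnedCountMin:
    """Frequency estimator sketch: a (hypothetical) learned oracle flags
    predicted heavy hitters, which get exact counters; all other items
    share the rows of a standard count-min sketch."""

    def __init__(self, predict_heavy, width=256, depth=4):
        self.predict_heavy = predict_heavy  # oracle: item -> bool (assumed given)
        self.exact = {}                     # exact counts for predicted heavy items
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _hash(self, item, row):
        # One deterministic hash function per row, derived from SHA-256.
        digest = hashlib.sha256(f"{row}:{item}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.width

    def add(self, item, count=1):
        if self.predict_heavy(item):
            self.exact[item] = self.exact.get(item, 0) + count
        else:
            for r in range(self.depth):
                self.rows[r][self._hash(item, r)] += count

    def estimate(self, item):
        if item in self.exact:
            return self.exact[item]
        # Count-min estimate: minimum over rows (only ever overestimates).
        return min(self.rows[r][self._hash(item, r)] for r in range(self.depth))
```

The intuition for the space savings: predicted heavy hitters never touch the shared counters, so they no longer inflate the collision error of the light items sharing the sketch.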
 
Then I will discuss algorithms for scalable ML computations to address the second question. I will focus on my work in understanding global similarity relationships in large high-dimensional datasets, encoded in a similarity matrix. By exploiting the geometric structure of specific similarity functions, such as distance or kernel functions, we can understand the capabilities -- and fundamental limitations -- of computing on similarity matrices. Overall, my main message is that sublinear algorithm design principles are instrumental in designing scalable algorithms for big data.
 
I will conclude with some exciting directions in pushing the boundaries of learning-augmented algorithms, as well as new algorithmic challenges in scalable computations for faster ML.

Biography

Sandeep is a final-year PhD student at MIT, advised by Piotr Indyk. His interests are broadly in fast algorithm design. Recently, he has been working at the intersection of machine learning and classical algorithms, designing provable algorithms in various ML settings, such as efficient algorithms for processing large datasets, as well as using ML to inspire algorithm design.

CS&E Colloquium: Modern Algorithms for Massive Graphs: Structure and Compression

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Zihan Tan (Rutgers University), will be giving a talk titled "Modern Algorithms for Massive Graphs: Structure and Compression."

Abstract

In the era of big data, the significant growth in graph size renders numerous traditional algorithms, including those with polynomial or even linear time complexity, inefficient. We therefore need novel approaches for efficiently processing massive graphs. In this talk, I will discuss two modern approaches towards this goal: structure exploitation and graph compression. I will first show how to utilize graph structure to design better approximation algorithms, showcasing my work on the Graph Crossing Number problem. I will then show how to compress massive graphs into smaller ones while preserving their flow/cut/distance structures, thereby obtaining faster algorithms.

Biography

Zihan Tan is a postdoctoral associate at DIMACS, Rutgers University. Before joining DIMACS, he obtained his Ph.D. from the University of Chicago, where he was advised by Julia Chuzhoy. He is broadly interested in theoretical computer science, with a focus on graph algorithms and graph theory.

ML Seminar: Policy Learning Methods for Confounded POMDPs

The UMN Machine Learning Seminar Series brings together faculty, students, and local industrial partners who are interested in the theoretical, computational, and applied aspects of machine learning, to pose problems, exchange ideas, and foster collaborations. The talks are every Tuesday from 11 a.m. - 12 p.m. during the Spring 2024 semester.

This week's speaker, Zhengling Qi (George Washington University), will be giving a talk, titled "Policy Learning Methods for Confounded POMDPs".

Abstract

In this talk, I will present a policy gradient method for confounded partially observable Markov decision processes (POMDPs) with continuous state and observation spaces in the offline setting. We first establish a novel identification result for non-parametrically estimating any history-dependent policy gradient under POMDPs using offline data. The identification enables us to solve a sequence of conditional moment restrictions and adopt a min-max learning procedure with general function approximation for estimating the policy gradient. We then provide a finite-sample, non-asymptotic bound for estimating the gradient uniformly over a pre-specified policy class, in terms of the sample size, length of horizon, concentrability coefficient, and the measure of ill-posedness in solving the conditional moment restrictions. Lastly, by deploying the proposed gradient estimation in a gradient ascent algorithm, we show the global convergence of the proposed algorithm in finding the history-dependent optimal policy under some technical conditions. To the best of our knowledge, this is the first work studying policy gradient methods for POMDPs in the offline setting. If time permits, I will also describe a model-based method for confounded POMDPs.

Biography

Zhengling Qi is an assistant professor in the School of Business at the George Washington University. He received his Ph.D. from the Department of Statistics and Operations Research at the University of North Carolina at Chapel Hill. His research focuses on statistical machine learning and related non-convex optimization, mainly on reinforcement learning and causal inference problems.

Thirst for Knowledge: AI in Health and Medicine

Join the Department of Computer Science & Engineering (CS&E) for this all-alumni event to discuss AI in health and medicine, featuring Chad Myers, Ju Sun, Yogatheesan Varatharajah, and Qianwen Wang. Enjoy hosted beverages and appetizers, and the chance to reconnect with former classmates, colleagues, instructors, and friends. All alumni of the University of Minnesota CS&E programs (Computer Science, Data Science, MSSE) are invited to attend, and guests are welcome. 

There is no charge to attend our event, but pre-registration is required. 

About the Program

While tools like ChatGPT allow the public to use AI for various tasks, computer scientists around the world are hard at work applying AI to some of the most critical problems in society. CS&E researchers are applying AI techniques to combat problems in the healthcare space, such as clinician burnout, disease prediction, and data imbalance issues in biomedical data science.

Learn more about our AI efforts at z.umn.edu/AIforchange 
Check out our medical AI initiatives at z.umn.edu/MedicalAIPrograms 

CS&E Colloquium: Co-Designing Algorithms and Hardware for Efficient Machine Learning (ML): Advancing the Democratization of ML

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Caiwen Ding (University of Connecticut), will be giving a talk titled "Co-Designing Algorithms and Hardware for Efficient Machine Learning (ML): Advancing the Democratization of ML".

Abstract

The rapid deployment of ML faces various challenges, such as prolonged computation and high memory footprint on systems. In this talk, we will present several ML acceleration frameworks built through algorithm-hardware co-design on various computing platforms. The first part presents a fine-grained crossbar-based ML accelerator. Instead of attempting to map the trained positive/negative weights afterwards, our key principle is to proactively ensure that all weights in the same column of a crossbar have the same sign, reducing area. We divide the crossbar into sub-arrays, providing a unique opportunity for input zero-bit skipping. Next, we focus on co-designing the Transformer architecture, introducing on-the-fly attention and attention-aware pruning to significantly reduce runtime latency. Then, we focus on co-designing graph neural network training: to exploit training sparsity and assist explainable ML, we propose a hardware-friendly MaxK nonlinearity and tailor a GPU kernel for it. Our methods outperform the state of the art on different tasks. Finally, we will discuss today's challenges related to secure edge AI and large language model (LLM)-aided agile hardware design, and outline our research plans for addressing these issues.
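For intuition about the MaxK nonlinearity mentioned in the abstract, here is a minimal sketch (our reading of the idea, not the authors' GPU kernel): keep only the k largest entries of each activation vector and zero the rest, so every row has exactly k nonzeros — a fixed sparsity pattern that a tailored kernel can exploit.

```python
def maxk(row, k):
    """Keep the k largest entries of `row`, zero out the rest.
    The fixed per-row sparsity (exactly k nonzeros) is what makes the
    operation amenable to a specialized sparse GPU kernel."""
    if k >= len(row):
        return list(row)
    # Threshold at the k-th largest value.
    kth = sorted(row, reverse=True)[k - 1]
    out, kept = [], 0
    for v in row:
        if v >= kth and kept < k:   # cap at k to break ties by position
            out.append(v)
            kept += 1
        else:
            out.append(0.0)
    return out
```

Unlike ReLU, whose sparsity varies per input, every output row here has the same number of nonzeros, so memory layout and work per row are known in advance.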

Biography

Caiwen Ding is an assistant professor in the School of Computing at the University of Connecticut (UConn). He received his Ph.D. degree from Northeastern University, Boston, in 2019, supervised by Prof. Yanzhi Wang. His research interests mainly include efficient embedded and high-performance systems for machine learning, machine learning for hardware design, and efficient privacy-preserving machine learning. His work has been published in high-impact venues (e.g., DAC, ICCAD, ASPLOS, ISCA, MICRO, HPCA, SC, FPGA, Oakland, NeurIPS, ICCV, IJCAI, AAAI, ACL, EMNLP). He is a recipient of the 2024 NSF CAREER Award, the Amazon Research Award, and the Cisco Research Award. He received best paper nominations at DATE 2018 and DATE 2021, the best paper award at the DL-Hardware Co-Design for AI Acceleration (DCAA) workshop at AAAI 2023, the outstanding student paper award at HPEC 2023, a publicity paper at DAC 2022, and the 2021 Excellence in Teaching Award from the UConn Provost. His team won first place in accuracy and fourth place overall at the 2022 TinyML Design Contest at ICCAD. He was ranked among Stanford's World's Top 2% Scientists in 2023. His research has been mainly funded by NSF, DOE, DOT, USDA, SRC, and multiple industrial sponsors.

CS&E Colloquium: Yu Chen

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Yu Chen (EPFL), will be giving a talk.

ML Seminar: Renbo Zhao

The UMN Machine Learning Seminar Series brings together faculty, students, and local industrial partners who are interested in the theoretical, computational, and applied aspects of machine learning, to pose problems, exchange ideas, and foster collaborations. The talks are every Tuesday from 11 a.m. - 12 p.m. during the Spring 2024 semester.

This week's speaker, Renbo Zhao (University of Iowa), will be giving a talk.

MSSE Information Session (Virtual)

Interested in learning more about the University of Minnesota's Master of Science in Software Engineering program?

Reserve a spot at an upcoming virtual information session to get all your questions answered.

Info sessions are recommended for those who have at least 1-2 years of software engineering experience.

During each session, MSSE staff will review:

  • Requirements (general)
  • Applying
  • Prerequisite requirements
  • What makes a strong applicant
  • Funding
  • Resources
  • Common questions
  • Questions from attendees
     

RSVP for the next information session now

ML Seminar: Numerical understanding of neural networks: from representation to learning dynamics


Spring 2024 Data Science Poster Fair

There will be two one-hour sessions, and student presenters only need to present during one of the two sessions.

 

Spring 2024 Posters

Full poster details

Session 1: 10 - 11 a.m.
Jashwin Acharya
Advisor:  Wei Pan, School of Public Health

"Use of a large language model for few-shot learning to predict dementia"
Aviral Bhatnagar
Advisor: Jaideep Srivastava, Department of Computer Science and Engineering
 
"Genome Sequencing"
Jiahao He
Advisor: Erich Kummerfeld, Institute for Health Informatics
 
"Data processing, and predictive and causal modeling, to describe and understand MN K-12 health and education outcome disparities in a local school district"
Jooyong Lee
Advisor: Erich Kummerfeld, Institute for Health Informatics

"Causal inference to identify factors contributing to a decrease in student's GPA"
Hahnemann Ortiz
Advisor: Daniel Boley, Department of Computer Science and Engineering
 
"Convergence of AI and DLT"
Jong Inn Park
Advisor: Dongyeop Kang, Department of Computer Science and Engineering

"Graphical Text Summarization Using Generative AI"
Hari Veeramallu
Advisor: Junaed Sattar, Department of Computer Science and Engineering
 
"Study the feasibility of generating a top-down view of an Underwater Robot given an input stream from n RGB camera sensors."
Tianhong Zhang
Advisor: Tianxi Li, School of Statistics

"TBD"
 
Session 2: 11 a.m. - 12 p.m.
Venkata Sai Krishna Abbaraju
Advisor: Jaideep Srivastava, Department of Computer Science and Engineering

"Reviving lost data: Applying ML to impute missing data in factory datasets"
Dinesh Reddy Challa
Advisor: William Northrop, Department of Mechanical Engineering
 
"Influence of Snowfall on the Fuel Consumption of Winter Maintenance Vehicles"
Amrutha Shetty Jayaram Shetty
Advisor: Dongyeop Kang, Department of Computer Science and Engineering
 
"Bridging AI Dimensions: Small Model Precision Meets Large Model Depth in Therapy"
Rahul Mehta
Advisor: Erich Kummerfeld, Institute for Health Informatics

"Causal Discovery Analysis on Bipolar Disorder Patients"
Sam Penders
Advisor: Vuk Mandic, School of Physics and Astronomy
 
"LIGO All-Sky Long-Duration Transient Search Using Deep Learning"
Eric Trempe
Advisor: Tianxi Li, School of Statistics

"Predicting Patient Cancer Types Through Medical Measures"
Keith Willard
Advisor: Xiaotong Shen, School of Statistics
 
"Using BART generative synthetic data to improve BERT parsing of patient prescription instructions."
Linjun Xia
Advisor: Erich Kummerfeld, Institute for Health Informatics
 
"A Correlation and Causality Study of Student Behavioral Conditions with Health and Achievement in Hopkins public schools"