Past events

Graduate Programs Online Information Session

RSVP today!

During each session, the graduate staff will review:

  • Requirements (general)
  • Applying
  • Prerequisite requirements
  • What makes a strong applicant
  • Funding
  • Resources
  • Common questions
  • Questions from attendees

Students considering the following programs should attend:

CS&E Undergraduate Student Graduation Event

RSVP Link
Thursday, May 11th, 2:00 pm - 4:00 pm
Coffman Memorial Union - Great Hall

All graduating undergraduate students and their families and friends are invited to join the Department of Computer Science & Engineering in celebrating their accomplishments. This is a casual event to mingle with other graduates, take photos, and listen to speakers. Light snacks and beverages will be served. Note that this event does not include a ceremony where names are read and graduates cross a stage.

College/University Commencement - For questions about the University events, contact commencement@umn.edu.

Undergraduate Student Conferral Ceremony
Saturday, May 13, 2023 - 1 p.m.
Huntington Bank Stadium

Stage Crossings
Thursday, May 11–Saturday, May 13, 2023
University of Minnesota Field House

Registration for the Conferral Ceremonies and Stage Crossings is open until April 10, 11:59 p.m. Central Time.
Graduates should receive emails from Marching Order, our University vendor. If you have any technical issues with the registration site for stage crossings, please reach out to Marching Order tech help.

Graduates will have the opportunity to sign up to cross a stage while their guests have a front-row viewing experience to cheer and take photos and video. 

Graduates will choose a specific day and time at which they will have their name announced, cross the stage, and be congratulated by a University leader. Graduates may choose to coordinate with friends and colleagues to cross the stage sequentially. Professional photographers will also be available to take photos.

UMN Commencement Page

Information on the CSE Commencement

Information on the CLA Commencement

GradFest
Wednesday, March 22 and Thursday, March 23, 2023
10 a.m. – 5 p.m. each day
Coffman Memorial Union, Great Hall

Everything graduates need—all in one place.

Gradfest website

Diploma Covers will be distributed at the respective Huntington Bank Commencement event for all students in all colleges.

Distinction Cords will be available to undergraduate students with qualifying GPAs. Contact CSE or CLA Student Services for more details. 

CS&E Graduate Student Graduation Event

RSVP Link
Thursday, May 11th, 10:00 am - 12:00 pm
Coffman Memorial Union - Great Hall

All graduating graduate students and their families and friends are invited to join the Department of Computer Science & Engineering in celebrating their accomplishments. This is a casual event to mingle with other graduates, take photos, and listen to speakers. Light snacks and beverages will be served. Note that this event does not include a ceremony where names are read and graduates cross a stage.

College/University Commencement

UMN Commencement Website
Arts, Sciences, and Engineering Graduate Student Commencement

Graduate Student Conferral Ceremony
Friday, May 12, 2023 - 5 p.m.
Huntington Bank Stadium

The graduate ceremony will include master’s and doctoral degree students.

Stage Crossings
Thursday, May 11–Saturday, May 13, 2023

University of Minnesota Field House

Graduates will have the opportunity to sign up to cross a stage while their guests have a front-row viewing experience to cheer and take photos and video. CSE associate deans and other CSE faculty will join at multiple times during the stage crossings.

Graduates will choose a specific day and time at which they will have their name announced, cross the stage, and be congratulated by a University leader. Graduates may choose to coordinate with friends and colleagues to cross the stage sequentially. In addition, Ph.D. students may choose to invite their advisors and arrange to be hooded during their scheduled stage crossing time. Professional photographers will also be available to take photos.

Registration for the Conferral Ceremonies and Stage Crossings is open until April 10, 11:59 p.m. Central Time.
Graduates should receive emails from Marching Order, our University vendor. If you have any technical issues with the registration site for stage crossings, please reach out to Marching Order tech help.

GradFest
Wednesday, March 22 and Thursday, March 23, 2023
10 a.m. – 5 p.m. each day
Coffman Memorial Union, Great Hall

Everything graduates need—all in one place.

Gradfest website

Diploma Covers will be distributed at the respective Huntington Bank Commencement event for all students in all colleges. 

Distinction Cords will be available to undergraduate students with qualifying GPAs. Contact CSE or CLA Student Services for more details. 

Spring 2023 Data Science Poster Fair

Every year, data science M.S. students present their capstone projects during this event as a part of their degree requirements. 

The poster fair is open to the public, and all interested undergraduate and graduate students, alumni, staff, faculty, and industry professionals are encouraged to attend.

For more information about each presenter, check out the detailed breakdown of each session.

10 a.m. - 11 a.m. - Session 1

11 a.m. - 12 p.m. - Session 2

Please contact Allison Small at csgradmn@umn.edu with any questions.

BICB Colloquium: PingHsun Hsieh

BICB Colloquium Faculty Nomination Talks: Join us in person on the UMR campus in Usq 419, on the Twin Cities campus in MCB 2-122, or virtually at 5 p.m.

PingHsun Hsieh is an Assistant Professor of Genetics, Cell Biology, and Development.

Title: Structural Variation in Humans: Insights from Evolutionary Theory and Long-Read Sequencing

Abstract: Evolutionary theory is essential for studying human biology and health, from identifying variants that could result in genetic novelties via evolutionary processes like hybridization and selection, to comprehending the genetic basis of adaptive traits and disease risk in populations. Recent advancements in long-read sequencing now allow us to examine previously inaccessible variation in some of the most challenging regions of the human genome, particularly structural variation, which is a significant yet understudied class of genomic variation affecting far more bases than single-nucleotide variants. In this presentation, I will showcase our efforts to understand the evolution and biological implications of genomic variation and illustrate how evolutionary-based inferences can enhance our knowledge of human biology and health.

ML Seminar: Yingbin Liang

The UMN Machine Learning Seminar Series brings together faculty, students, and local industrial partners who are interested in the theoretical, computational, and applied aspects of machine learning, to pose problems, exchange ideas, and foster collaborations. The talks are held every Tuesday from 11 a.m. to 12 p.m. during the Spring 2023 semester.

This week's speaker, Yingbin Liang (Ohio State University), will be giving a talk titled "Reward-free Reinforcement Learning via Sample-Efficient Representation Learning".

Abstract

As reward-free reinforcement learning (RL) becomes a powerful framework for a variety of multi-objective applications, representation learning arises as an effective technique to deal with the curse of dimensionality in reward-free RL. However, existing representation learning algorithms for reward-free RL still suffer from high sample complexity, although they are polynomially efficient. In this talk, I will first present a novel representation learning algorithm that we propose for reward-free RL. We show that such an algorithm provably finds a near-optimal policy and attains near-accurate system identification via reward-free exploration, with significantly improved sample complexity compared to the best-known previous result. I will then present our characterization of the benefit of representation learning in reward-free multitask (a.k.a. meta) RL, as well as the benefit of transferring the learned representation from upstream to downstream tasks. I will conclude my talk with remarks on future directions.

The work to be presented was done jointly with Yuan Cheng (USTC), Ruiquan Huang (PSU), Dr. Songtao Feng (OSU), Prof. Jing Yang (PSU), and Prof. Hong Zhang (USTC).

Biography

Dr. Yingbin Liang is currently a Professor in the Department of Electrical and Computer Engineering at the Ohio State University (OSU), and a core faculty member of the Ohio State Translational Data Analytics Institute (TDAI). She also serves as the Deputy Director of the AI-Edge Institute at OSU. Dr. Liang received her Ph.D. in Electrical Engineering from the University of Illinois at Urbana-Champaign in 2005, and served on the faculty of the University of Hawaii and Syracuse University before joining OSU. Dr. Liang's research interests include machine learning, optimization, information theory, and statistical signal processing. She received the National Science Foundation CAREER Award and the State of Hawaii Governor Innovation Award in 2009, and the EURASIP Best Paper Award in 2014.

CS&E Colloquium: Mohamed Elgharib

This week's speaker, Mohamed Elgharib (Max Planck Institute for Informatics), will be giving a talk titled, "Neural Reconstruction and Rendering: An Implicit Perspective".

Abstract

Digitising the world around us is of increasing importance, with applications in extended reality, movie and media production, telecommunications, video games, medicine, robotics, and many more. The vast majority of existing works use explicit means of representing scenes, such as meshes and point clouds. While these representations produce very good results, processing them with deep learning still has limitations in 3D reconstruction and rendering. Only recently has a new class of scene representations emerged, known as implicits. Unlike explicits, implicits are represented via continuous fields; they are 3D by design and formulated using neural networks. This makes them very suitable for neural reconstruction and rendering. In this talk, I am going to discuss implicit scene representations for the important problem of relighting. This is a challenging problem, as it requires careful extraction and manipulation of scene intrinsics. We will show how implicit scene representations bring important benefits to the state of the art, such as producing 3D-consistent relightings. We will also show how the continuous nature of implicits allows editing the full image, such as relighting the full human head, including the scalp's hair. We believe that implicit scene representations can positively impact neural reconstruction and rendering, moving us a few steps closer to the ultimate goal of fully digitising our world.
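To make the contrast with explicit meshes concrete, here is a minimal, hypothetical sketch (not from the talk): an implicit scene is a continuous field that can be queried at any 3D point, and a ray is rendered by sphere tracing that field. In a neural implicit the field would be an MLP; for simplicity it is a closed-form signed distance function for a unit sphere here.

```python
import math

def sdf(p):
    """Implicit scene: signed distance from point p to a unit sphere at the origin."""
    x, y, z = p
    return math.sqrt(x * x + y * y + z * z) - 1.0

def sphere_trace(origin, direction, max_steps=64, eps=1e-4):
    """March a ray through the field until it reaches the zero level set."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t      # hit: distance along the ray to the surface
        t += d            # safe step: the field value bounds the distance to the surface
    return None           # miss

# A ray starting at z = -3 aimed straight at the sphere hits its surface.
hit = sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))
print(round(hit, 3))
```

Because the field is continuous, the same query interface works at any resolution, which is part of why implicits suit neural rendering.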


Biography

Mohamed Elgharib is a Research Group Leader at the Max Planck Institute for Informatics. His areas of expertise are computer vision, computer graphics, machine learning, and artificial intelligence. His work is on building digital models of our world to enable novel applications in extended reality, VR/AR, and others. Topics of interest include 3D scene modeling and reconstruction, deep generative modeling, neural rendering, 3D pose estimation, and relighting. His work usually includes a heavy machine and deep learning component through supervised, self-supervised, or unsupervised learning. He has worked with different types of data, including monocular RGB, multiview RGB, audio, depth, and even biologically inspired and neuromorphic sensors such as event cameras. Mohamed Elgharib has co-authored more than 40 peer-reviewed publications, holds three granted US patents, and has collaborated with a wide spectrum of academic and industrial institutes. Some of his publications have been featured in media outlets such as BBC News and MIT News, others have won awards such as the Best Paper Award Honourable Mention at BMVC 2022, and a start-up was largely inspired by one of his publications.

ML Seminar: Yongxin Chen

The UMN Machine Learning Seminar Series brings together faculty, students, and local industrial partners who are interested in the theoretical, computational, and applied aspects of machine learning, to pose problems, exchange ideas, and foster collaborations. The talks are held every Tuesday from 11 a.m. to 12 p.m. during the Spring 2023 semester.

This week's speaker, Yongxin Chen (Georgia Institute of Technology), will be giving a talk titled "Fast Sampling of Diffusion Models".

Abstract

Diffusion models are a class of generative models that have led to the recent revolution in AI content generation, chief among which is the text-to-image application (e.g., DALLE2, Imagen, Stable Diffusion). Compared with other generative modeling techniques such as GANs, diffusion models achieve the best performance in terms of sample/image quality. However, the time required to generate a sample is considerably higher (typically several orders of magnitude more expensive than GANs). Diffusion models are built on the key idea of bridging a simple (Gaussian) distribution and a target distribution with a proper diffusion process modeled by a stochastic differential equation, and one needs to solve this stochastic differential equation through discretization to generate a new sample, incurring a high computational cost. In this talk, I will present three methods we proposed to accelerate the sampling of a diffusion model. The first method unifies the diffusion model and the normalizing flow, termed diffusion normalizing flow (DiffFlow), for generative modeling, by making the predefined forward process in diffusion models trainable. It is closely related to the Schrödinger bridge problem. In the second method, we develop an efficient algorithm to solve the learned backward process by leveraging certain structures of it. The resulting algorithm, termed diffusion exponential integrator sampler (DEIS), is currently the most efficient sampling algorithm for diffusion models (DEIS can generate high-quality samples within 10 NFEs). In the last method, we consider the task of large content generation where the training data is limited. Our method, termed DiffCollage, makes it possible to efficiently generate large content using diffusion models trained on generating pieces of the large content.
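To illustrate the sampling cost the abstract describes, here is a hedged toy sketch (not the speaker's method, and all numbers are made up): a simple forward SDE drives data toward a Gaussian prior, and a sample is generated by Euler-Maruyama discretization of the reverse-time SDE. The data distribution is itself Gaussian so the score is known in closed form, standing in for a trained network; the cost grows linearly with the number of discretization steps, which is the expense fast samplers such as DEIS aim to cut.

```python
import math
import random

# Toy VP-style diffusion: forward SDE dx = -x/2 dt + dW drives any data
# distribution toward N(0, 1). For Gaussian data N(M, S^2) the marginal
# score is analytic, so it plays the role of the learned score network.
M, S = 3.0, 0.5          # hypothetical data distribution N(M, S^2)
T, N_STEPS = 5.0, 500    # integration horizon and discretization steps

def score(x, t):
    """Analytic score grad_x log p_t(x) of the forward marginal at time t."""
    a = math.exp(-t / 2.0)                 # mean decay factor
    var = S * S * a * a + 1.0 - a * a      # marginal variance
    return -(x - M * a) / var

def sample(n_steps=N_STEPS, seed=0):
    """One sample via Euler-Maruyama on the reverse-time SDE (cost ~ n_steps)."""
    rng = random.Random(seed)
    dt = T / n_steps
    x = rng.gauss(0.0, 1.0)                # start from the prior N(0, 1)
    t = T
    for _ in range(n_steps):
        drift = -x / 2.0 - score(x, t)     # reverse drift: f - g^2 * score
        x = x - drift * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t -= dt
    return x

xs = [sample(seed=i) for i in range(1000)]
mean = sum(xs) / len(xs)
print(round(mean, 1))  # should land near the data mean M = 3.0
```

Every sample pays for `N_STEPS` score evaluations; with a neural score each evaluation is a full network forward pass, which is why reducing the number of function evaluations (NFEs) matters so much in practice.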

Biography

Yongxin Chen received his BSc from Shanghai Jiao Tong University in 2011 and Ph.D. from University of Minnesota in 2016, both in Mechanical Engineering. He is currently an Assistant Professor in the School of Aerospace Engineering at Georgia Institute of Technology. He received the George S. Axelby Best Paper Award in 2017, the NSF Faculty Early Career Development Program (CAREER) Award in 2020, the A.V. `Bal' Balakrishnan Award in 2021, and the Donald P. Eckman Award in 2022. His current research interests are in the areas of control theory, machine learning, optimization, and robotics. He enjoys developing new algorithms and theoretical frameworks for real world applications.

CRAY Colloquium: Anil Jain

The computer science colloquium takes place on Mondays from 11:15 a.m. to 12:15 p.m.

This week's talk is a part of the Cray Distinguished Speaker Series. This series was established in 1981 by an endowment from Cray Research and brings distinguished visitors to the Department of Computer Science & Engineering every year.

This week's speaker, Anil Jain (Michigan State University), will be giving a talk titled "Fingerprint Recognition".

Abstract:

If you look closely at your fingertips, your palms, or the soles of your feet, you will notice that while the skin is smooth and devoid of any hair, it is etched with regularly spaced ridges and intervening valleys. These ridge-valley patterns are collectively referred to as friction ridge patterns, and more specifically as fingerprints, palmprints, and footprints. It has been almost 150 years since the pioneering giant of modern-day fingerprint recognition, Sir Francis Galton, first described minutiae, the small details woven throughout the papillary ridges on each of our fingers. Galton believed that minutiae imparted the individuality and permanence properties of fingerprints necessary for accurately identifying individuals over time. Since Galton's groundbreaking observations, automated fingerprint identification systems (AFIS) have become ubiquitous in forensics and law enforcement, access control, mobile unlock and payments, immigration, and civil registration. To date, virtually all AFIS continue to rely upon the location and orientation of minutiae within fingerprint images for recognition. Although AFIS based on minutiae (i.e., handcrafted features) have enjoyed significant success, not much effort has been devoted to augmenting them with learned features from deep networks to improve recognition accuracy and to reduce the complexity of large-scale search. I will present our ongoing work on building accurate and real-time fingerprint recognition systems and highlight a number of challenging issues related to fingerprint image quality, fingerprint spoofs, and fingerprint database security.
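To show in its simplest form what relying on minutiae location and orientation means, here is a hypothetical toy sketch (not the AFIS algorithms discussed in the talk, and the tolerances and coordinates are invented): each minutia is an (x, y, angle) triple, and two prints, assumed already aligned, are scored by greedily pairing minutiae that agree within position and angle tolerances.

```python
import math

def match_score(probe, gallery, dist_tol=10.0, ang_tol=math.radians(15)):
    """Fraction of minutiae greedily paired within the given tolerances.

    probe, gallery: lists of (x, y, angle) minutiae from two aligned prints.
    """
    matched, used = 0, set()
    for (px, py, pa) in probe:
        for j, (gx, gy, ga) in enumerate(gallery):
            if j in used:
                continue
            d = math.hypot(px - gx, py - gy)
            # wrap the angle difference into [-pi, pi] before comparing
            da = abs((pa - ga + math.pi) % (2 * math.pi) - math.pi)
            if d <= dist_tol and da <= ang_tol:
                matched += 1
                used.add(j)
                break
    return matched / max(len(probe), len(gallery))

# Invented example: two of the three probe minutiae have a close,
# similarly oriented counterpart in the gallery print.
probe = [(10, 12, 0.10), (40, 55, 1.20), (70, 30, 2.00)]
gallery = [(11, 13, 0.15), (41, 54, 1.25), (200, 200, 0.00)]
print(match_score(probe, gallery))  # 2 of 3 minutiae agree
```

Real AFIS must additionally recover the unknown alignment (rotation and translation) between prints and cope with missing or spurious minutiae, which is where most of the engineering effort goes.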

Bio:

Anil Jain is a University Distinguished Professor at Michigan State University. He has received the Guggenheim Fellowship, the Humboldt Award, the Fulbright Fellowship, the IEEE W. Wallace McDowell Award, and the IAPR King-Sun Fu Prize. He served as Editor-in-Chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence and was appointed to the United States Defense Science Board and the Forensic Science Standards Board. Jain is a member of the National Academy of Engineering and a foreign member of the Indian National Academy of Engineering and the Chinese Academy of Sciences.
