Past events

CS&E Colloquium: Junaed Sattar

This week's speaker, Junaed Sattar (University of Minnesota), will be giving a talk titled, "Addressing Perception and Interaction Challenges in Underwater Robotics for Preserving Life Underwater".


The United Nations lists conservation and sustainable use of “Life Below Water” as the 14th global goal that will change our lives in the foreseeable future. Most aquatic and marine preservation tasks (e.g., long-term oceanographic surveys, search-and-rescue, infrastructure inspection) are performed by humans, sometimes using remotely operated vehicles (ROVs) to assist in these missions. However, in recent decades, the advent of smaller autonomous underwater vehicles (AUVs) suitable for working closely with humans (termed co-AUVs) has enabled robots and humans to collaborate on many subsea tasks. The underwater domain nonetheless presents challenges -- in sensing, control, and human-robot interaction -- that can justifiably be considered extreme. Our research at the Interactive Robotics and Vision Lab at the University of Minnesota investigates robust underwater human-robot collaboration. We primarily develop computational solutions to these problems, drawing on methods from robotics, machine vision, stochastic reasoning, and (deep) machine learning. This talk will give a brief overview of our research and an in-depth discussion of some recent work on underwater human-robot interaction and visual object detection.


Junaed is an Associate Professor in the Department of Computer Science and Engineering at the University of Minnesota, a MnDRIVE (Minnesota Discovery, Research, and InnoVation Economy) faculty member, and a member of the Minnesota Robotics Institute. He is the founding director of the Interactive Robotics and Vision Lab, where he and his students investigate problems in field robotics, robot vision, human-robot communication, assisted driving, and applied (deep) machine learning, and develop rugged robotic systems. His graduate degrees are from McGill University in Canada, and he holds a BS in Engineering from the Bangladesh University of Engineering and Technology. Before coming to the University of Minnesota, he was a post-doctoral fellow at the University of British Columbia, where his research focused on human-robot dialog and assistive wheelchair robots, and an Assistant Professor at Clarkson University in New York. Find him and the IRV Lab on Twitter at @irvlab and on their YouTube page.

Advancing Molecules and Materials via Data Science

Register now! (free)

Event Website

About the workshop

The goal of the workshop is to bring together experts working at the intersection of data science and materials science and to explore promising data science approaches and techniques that could support major advances in materials science in the coming years. Registration is free but required. Lunch will be provided for registered attendees.


The forum will include sessions and panel discussions led by experts from UMN, MIT, UIUC, UT Dallas, Argonne, NIST, Google, and NSF. The full schedule of the workshop can be found on the workshop website.

Poster session

Students and postdocs are encouraged to attend and to make contributions in the form of poster presentations (please submit an abstract on the registration form).

Organizing committee

Vuk Mandic, Chris Bartel, Sapna Sarupria, Ellad Tadmor, Ke Wang

For more information, please reach out to Prof. Vuk Mandic.

Cray Colloquium: Geometry and Latent Representations in Machine Learning

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m.

This week's talk is a part of the Cray Distinguished Speaker Series. This series was established in 1981 by an endowment from Cray Research and brings distinguished visitors to the Department of Computer Science & Engineering every year.

This week's speaker, Daniel D. Lee (Cornell Tech), will be giving a talk titled "Geometry and Latent Representations in Machine Learning".


The advent of deep neural networks has brought significant advances in the development and deployment of novel AI technologies. Recent large-scale neural network architectures have shown significantly better performance on object classification, segmentation, scene understanding, and multimodal representations. How are the representations of sensor input signals transformed by deep neural networks? I will show how statistical insights can be gained by analyzing the high-dimensional geometrical structure of these representations as they are reformatted in neural network hierarchies.
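As one concrete example of what analyzing the "high-dimensional geometrical structure" of a representation can mean, the participation ratio is a standard effective-dimensionality measure computed from the spectrum of a layer's activation covariance. The NumPy sketch below is an illustrative probe on synthetic data, not an analysis from the talk itself.

```python
# Sketch: a simple geometric probe of a representation -- the
# "participation ratio", an effective dimensionality computed from the
# PCA spectrum of a layer's activations. Illustrative only.
import numpy as np

def participation_ratio(acts):
    """acts: (n_samples, n_units) activation matrix for one layer.
    Returns (sum(eig))^2 / sum(eig^2) of the covariance spectrum: close
    to n_units for isotropic responses, close to 1 when one direction
    dominates."""
    centered = acts - acts.mean(axis=0)
    eig = np.linalg.eigvalsh(np.cov(centered.T))
    eig = np.clip(eig, 0, None)  # guard against tiny negative numerical noise
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(0)
iso = rng.normal(size=(1000, 50))                             # isotropic: PR near 50
low = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 50))   # rank-2: PR near 2
```

Comparing such spectra layer by layer is one way to quantify how a network hierarchy reformats its inputs.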


Dr. Daniel D. Lee is the Tisch University Professor in Electrical and Computer Engineering at Cornell Tech and recently served as Global Head of AI for Samsung Research. He received his B.A. summa cum laude in Physics from Harvard University and his Ph.D. in Condensed Matter Physics from the Massachusetts Institute of Technology. He was also a researcher at Bell Labs in the Theoretical Physics and Biological Computation departments. He is a Fellow of the IEEE and AAAI and has received the NSF CAREER award and the Lindback award for distinguished teaching. He was also a fellow of the Hebrew University Institute of Advanced Studies in Jerusalem, an affiliate of the Korea Advanced Institute of Science and Technology, and organized the US-Japan National Academy of Engineering Frontiers of Engineering symposium and Neural Information Processing Systems (NeurIPS) conference. His group focuses on understanding general computational principles in biological systems and on applying that knowledge to build autonomous systems.

ML Seminar: Ziyue Xu (Nvidia)

CSE DSI Machine Learning seminars will be held Tuesdays 11 a.m. - 12 p.m. Central Time in hybrid mode. We hope to facilitate face-to-face interactions among faculty, students, and partners from industry, government, and NGOs by hosting some of the seminars in person. See individual dates for more information.

This week's speaker, Ziyue Xu (Nvidia), will be giving a talk titled, "Federated Learning: Image, Language, and Beyond".


Federated learning (FL) is a method for training artificial intelligence models with data from multiple sources while keeping each source's data private, thus removing many barriers to data sharing. In this talk, we will discuss two major aspects of FL: research towards FL model development, and the tooling needed to perform a real-life multi-institute FL study. Specifically, we will cover recent works on personalized FL, vertical FL, and client contribution, and will illustrate the implementation of FL under various model settings using NVFlare - the NVIDIA Federated Learning Application Runtime Environment.
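For readers new to FL, the core loop the abstract describes can be sketched in a few lines. The toy NumPy version below shows federated averaging (FedAvg) with hypothetical function names; it is illustrative only, not the NVFlare API covered in the talk.

```python
# Toy sketch of federated averaging (FedAvg); illustrative only,
# not the NVFlare API discussed in the talk.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fed_avg_round(global_w, client_data):
    """One server round: clients train locally, then the server takes a
    data-size-weighted average of the returned weights. Raw data never
    leaves the clients -- only model weights are shared."""
    updates = [local_update(global_w, X, y) for X, y in client_data]
    sizes = [len(y) for _, y in client_data]
    return np.average(updates, axis=0, weights=sizes)

# Three "institutes", each holding its own private slice of data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = fed_avg_round(w, clients)
```

Real multi-institute studies add the pieces the talk focuses on, such as secure communication, personalization, and contribution estimation.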


Dr. Ziyue Xu, IEEE Senior Member, is a Senior Scientist at Nvidia. His research interests lie in image analysis and machine learning with applications in biomedical and clinical imaging, and he is among the earliest researchers to adopt deep learning in this field. Before joining Nvidia, he was a Staff Scientist at the National Institutes of Health.

Dr. Xu obtained his B.S. from Tsinghua University and his M.S./Ph.D. from the University of Iowa. He is an Associate Editor for the International Journal of Computer Vision (IJCV), IEEE Transactions on Medical Imaging (TMI), IEEE Journal of Biomedical and Health Informatics (JBHI), Computerized Medical Imaging and Graphics (CMIG), and Computers in Biology and Medicine (CBM).

ML Seminar: Zhiqi Bu

CSE DSI Machine Learning seminars will be held Tuesdays 11 a.m. - 12 p.m. Central Time in hybrid mode. We hope to facilitate face-to-face interactions among faculty, students, and partners from industry, government, and NGOs by hosting some of the seminars in person. See individual dates for more information.

This week's speaker, Zhiqi (Woody) Bu (Amazon AWS AI), will be giving a talk titled, "On the Computational Efficiency of Differentially Private Deep Learning".


Differentially private (DP) optimization is the standard paradigm for learning large neural networks that are accurate and privacy-preserving. The computational cost of DP deep learning, however, is notoriously heavy due to per-sample gradient clipping. Existing DP implementations are 2-1000× more costly in time and space complexity than standard (non-private) training. In this work, we develop a novel Book-Keeping (BK) technique that implements existing DP optimizers (thus achieving the same accuracy) with a substantial improvement in computational cost. Specifically, BK enables DP training on large models and high-dimensional data to be roughly as efficient as standard training, whereas previous DP algorithms can be inefficient or incapable of training due to memory errors. The computational advantage of BK is supported by complexity analysis as well as extensive experiments on vision and language tasks. Our implementation achieves state-of-the-art (SOTA) accuracy with very small extra cost: on GPT2 and at the same memory cost, BK has 1.0× the time complexity of standard training (0.75× training speed in practice), and 0.6× the time complexity of the most efficient DP implementation (1.24× training speed in practice). We will open-source the codebase for the BK algorithm.
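To make the bottleneck concrete for readers outside the DP literature: per-sample clipping requires a separate gradient for every example in a batch. The naive NumPy sketch below (a hypothetical linear-model toy, not the Book-Keeping algorithm) materializes those per-sample gradients explicitly, which is exactly the overhead BK is designed to avoid.

```python
# Naive DP-SGD step for a linear model; illustrative sketch only.
# Materializing the (n, d) per-sample gradient matrix is the memory/time
# overhead that the Book-Keeping (BK) technique avoids.
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # One gradient per sample: row i is the gradient of (x_i . w - y_i)^2.
    per_sample = 2 * (X @ w - y)[:, None] * X  # shape (n, d): the bottleneck
    # Clip each row to L2 norm at most `clip`.
    norms = np.linalg.norm(per_sample, axis=1)
    clipped = per_sample * np.minimum(1.0, clip / np.maximum(norms, 1e-12))[:, None]
    # Sum the clipped gradients, add calibrated Gaussian noise, average.
    noisy_sum = clipped.sum(axis=0) + rng.normal(scale=noise_mult * clip, size=w.shape)
    return w - lr * noisy_sum / len(y)

# With the noise turned off, the clipped estimator still converges on a
# toy problem (clipping only rescales each sample's gradient).
rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0])
X = rng.normal(size=(100, 2))
y = X @ true_w
w = np.zeros(2)
for _ in range(500):
    w = dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=0.0, rng=rng)
```

For a model with millions of parameters and large batches, that (n, d) matrix dominates memory, which is why a technique that avoids it matters.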


Dr. Zhiqi Bu is an Applied Research Scientist at Amazon AWS AI, focusing on the optimization of large-scale deep learning, especially with differential privacy. Dr. Bu obtained his Ph.D. in the Applied Mathematics and Computational Science (AMCS) program at the University of Pennsylvania in 2021, under a Benjamin Franklin Fellowship, where he also obtained his M.A. in Statistics from the Wharton School. Dr. Bu completed his B.A. (Honors) in Mathematics at the University of Cambridge in 2015.

CS&E Colloquium: Yao-Yi Chiang

This week's speaker, Yao-Yi Chiang (University of Minnesota), will be giving a talk titled, "Spatial AI and Its Applications in an Interdisciplinary World".


Knowing what has happened, where and when, and how it has changed over space and time is the key to modeling complex spatiotemporal phenomena and understanding how humans depend on, adapt, and modify them. Today, many disciplines produce and use an increasing volume of data containing location and time information, either explicitly, e.g., mobility data, air quality data, satellite imagery, or implicitly, e.g., scanned historical maps and text documents. However, the substantial heterogeneity in these data coupled with inconsistencies in their spatiotemporal scales often result in existing methods focusing on a few data sources and treating the space and time dimensions as an afterthought, limiting their capability to solve critical problems. This talk will present recent highlights of our research results in Spatial Artificial Intelligence. The talk will present machine learning methods leveraging spatial science theories for predicting spatiotemporal phenomena and building a spatial language model to facilitate geospatial entity typing and linking. This talk will also outline our ongoing research directions in Spatial AI and interdisciplinary impact in public health, transportation, national security, geography, history, library, and digital humanities.


Dr. Yao-Yi Chiang is an Associate Professor in the Computer Science & Engineering Department at the University of Minnesota. Previously, he was an Associate Professor (Research) in Spatial Sciences at the University of Southern California. Dr. Chiang is an Action Editor of GeoInformatica (Springer) and an editorial board member for Transactions in GIS (Wiley). He earned his Ph.D. in Computer Science from the University of Southern California and his bachelor's degree in Information Management from the National Taiwan University. Dr. Chiang's research interests are in spatial artificial intelligence. He develops machine learning methods to understand complex spatiotemporal phenomena and how humans interact with these phenomena using multimodal, multiscale data that can be sparse and unevenly distributed in space and time. Dr. Chiang has received funding from various organizations, including NSF, NIH, DARPA, IARPA, NGA, NEH, and industry partners such as NTT Global Networks, BAE Systems, Conveyancing Liability Solutions, TerraGo, and Rumsey Map Collection. He has also worked as a visiting researcher at Google AI in New York City and a machine learning consultant at the Spatial Computing Group at Meta. Before pursuing his Ph.D., Dr. Chiang was a research scientist at Geosemble Technologies and Fetch Technologies in California, where he co-invented a patent on geospatial data fusion techniques. Dr. Chiang is also the founder of Kartta Foundation, a non-profit organization that provides software and services to distill and assemble geographic knowledge for the public good. Kartta Foundation manages Kartta Labs, a former Google project.


Transforming Children’s Technologies through Developmentally Responsive Designs

Talk title: “Transforming Children’s Technologies through Developmentally Responsive Designs” by Dr. Saba Kawas

Today's computing technologies—from smartphones to wearables to embedded computing—impact how children learn, play, communicate, and interact with others. Popular media and common wisdom often portray technology use by children as detrimental to their growth and well-being. However, in recent years, much research evidence in child-computer interaction, health, and education suggests that well-designed interactive technologies can have developmental and positive benefits for children and young adults' well-being. The overarching goals of my research are to derive theoretically driven, research-based design considerations that account for adolescents' and children's developmental needs and overall well-being when designing technologies for them, and to support designers in creating developmentally responsive technologies for children and adolescents. In this talk, I will describe a series of research studies that identify children's developmental needs and co-design, with children and adolescents, different systems and tools to support their health and well-being. Based on the findings from this work, I will discuss pathways to bridge the Child-Computer Interaction research-practice gap.

Saba Kawas is a Computing Innovation Fellow and a Postdoctoral Researcher in the Computer Science & Engineering Department at the University of Minnesota. Her research interests focus on studying the relationship between technology design and children and adolescents’ development, learning, and well-being. Her past research was funded by the Society for Research in Child Development–Jacobs Foundation Award and the University of Washington Innovation Award. She holds a master’s degree in Art and Design from North Carolina State University’s College of Design, a master’s in Human-Centered Design and Engineering, and a Ph.D. in Human-Computer Interaction from the University of Washington.

Summer Undergraduate Research Expo (SURE)

The Summer Undergraduate Research Expo is an exposition of the research accomplishments of undergraduates from all over the country who come to the University of Minnesota each summer to perform research in the labs of UMN faculty members.

The goals of the Expo are to increase the cohesion of the summer undergraduate research programs in science and engineering and to enhance participants' exposure to the University and local industrial communities through interaction with UMN faculty and industrial partners.

Event website
Computer science REU site


Date: Thursday, August 10, 2023
9 A.M. – 9:30 A.M. - Poster setup for morning session
9:30 A.M. – 11:30 A.M. - Morning Poster Session (Life Sciences, Humanities and Social Sciences presenting)
2 P.M. – 2:30 P.M. - Poster setup for afternoon session
2:30 P.M. – 4:30 P.M. - Afternoon Session (Physical Sciences and Engineering presenting)

Event Resources

Seminar by Dr. Venu Govindaraju

Dr. Venu Govindaraju is an AI pioneer in the areas of machine learning and handwriting recognition. He serves as the principal investigator of the National AI Institute for Exceptional Education and is also the Vice President for Research and Economic Development at SUNY Buffalo.

Presentation title: “Research and Economic Development at the University at Buffalo”


As Vice President for Research and Economic Development, Dr. Govindaraju oversees the University at Buffalo’s diverse and broad-reaching research enterprise as well as the institution’s industry engagement. 

The university’s research program has launched a number of new opportunity areas, which Dr. Govindaraju will speak to. Among them is a multi-million-dollar, university-funded investment in high-end, cutting-edge equipment for use by researchers across campus, as well as a renewed emphasis on the university's strengthening international partnerships, which focus on critical and emerging technology and on collaboration within higher education. UB is home to “Communities of Excellence,” which align interdisciplinary teams of faculty, students, and practitioners in diverse fields using research, education, and engagement activities to create integrated solutions to issues such as sustainable manufacturing and global health inequities. Dr. Govindaraju also launched the Buffalo Blue Sky program to fund high-impact research structured to encourage cross-disciplinary partnerships on “grand challenge” problems that are too complex for a single disciplinary approach.

A computer scientist, researcher, and AI and machine learning expert, Dr. Govindaraju will also highlight UB’s recently created National AI Institute for Exceptional Education, funded by a $20M grant from the National Science Foundation and the Institute of Education Sciences, for which he is PI and Director. Leaning into the positive impact pioneering AI can have on social good, the institute brings together top-tier research scientists from nine institutions to address, with AI as a key tool, the national shortage of speech-language pathologists (SLPs). Dr. Govindaraju will share how the University at Buffalo, the lead institution on the initiative, is elevating and prioritizing the work, future ambitions, and opportunities of AI, and how the institute can serve as a model for cross-disciplinary and multi-institutional partnerships.

The NSF and IES AI Institute for Transforming Education for Children with Speech and Language Processing Challenges (the National AI Institute for Exceptional Education for short) aims to close this gap by developing advanced AI technologies that scale SLPs’ availability and services so that no child in need of speech and language services is left behind. Towards this end, the Institute proposes to develop two novel AI solutions: (1) the AI Screener, to enable universal early screening for all children, and (2) the AI Orchestrator, to work with SLPs and teachers to provide individualized interventions for children under their formal Individualized Education Programs (IEPs). In developing these solutions, the Institute will advance foundational AI technologies, enhance our understanding of children’s speech and language development, serve as a nexus point for all special education stakeholders, and represent a fundamental paradigm shift in how SLPs serve children in need of ability-based speech and language services.

Graduate Programs Online Information Session

RSVP today!

During each session, the graduate staff will review:

  • Requirements (general)
  • Applying
  • Prerequisite requirements
  • What makes a strong applicant
  • Funding
  • Resources
  • Common questions
  • Questions from attendees

Students considering the following programs should attend: