Past events

2023 CS&E Research Showcase

The CS&E Research Showcase is a biannual event that features the collective work of students and faculty in the Department of Computer Science & Engineering. The event will feature over 60 posters, as well as keynote addresses from Eugene Spafford, the founder and executive director of the Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University, and Ed Chi, CS&E alumni award winner and Distinguished Scientist at Google. See below for more information about the speakers.

Additionally, the event will feature the Fall 2023 Data Science Poster Fair. This event is held each semester and features the capstone project and poster presentation of each graduating data science master's student.

This event is open to the public and all interested undergraduate and graduate students, alumni, staff, faculty, and industry professionals are encouraged to attend.  To let us know you'll be joining us, please fill out our RSVP form below. We ask those who plan to attend to RSVP by Friday, November 10. 
 

 

Keynote Speakers

Eugene Spafford headshot

Eugene Spafford - Professor, Executive Director Emeritus of CERIAS

A Perspective on Cybersecurity History and Futures

Abstract  
Cybersecurity is about 60 years old.  As such, it is a relatively new field, with much of its early history being centered in computing.  As technology and computing uses have advanced, new challenges, threats, and solutions have appeared.  Today’s cybersecurity landscape includes issues related to people, laws, privacy, safety, and fundamental questions of ethics, in addition to issues of technology.

In this talk, I will recap some of the history and developments of computing that have had implications for cybersecurity and related areas.  I will discuss some of the current challenges and some of what I see as developments and challenges over the next few decades.  Many of these are more general issues in computing, developing as we adapt to new technologies and constraints.

About
Eugene H. Spafford is a professor of Computer Sciences at Purdue University. He is also the founder and Executive Director Emeritus of the Center for Education and Research in Information Assurance and Security (CERIAS). He has worked in computing as a student, researcher, consultant, and professor for more than 45 years. Some of his work is at the foundation of current security practice, including intrusion detection, incident response, firewalls, integrity management, and forensic investigation. His most recent work has been in cybersecurity policy, security of real-time systems, and future threats. He has also been a pioneer in education, including starting and heading the oldest degree-granting cybersecurity program.

Dr. Spafford has been recognized with significant honors from various organizations. These include being elected as a Fellow of the American Academy of Arts and Sciences (AAA&S) and the American Association for the Advancement of Science (AAAS); a Life Fellow of the ACM, the IEEE, and the (ISC)2; a Life Distinguished Fellow of the ISSA; and a member of the Cyber Security Hall of Fame — the only person to ever hold all these distinctions. In 2012 he was named one of Purdue’s inaugural Morrill Professors — the university’s highest award for the combination of scholarship, teaching, and service. In 2016, he received the State of Indiana’s highest civilian honor by being named as a Sagamore of the Wabash.

Among many other activities, he is editor-in-chief of the journal Computers & Security, serves on the Board of Directors of the Computing Research Association, and is a member of the National Security Advisory Board for Sandia Laboratories.

 

Ed Chi headshot

Ed Chi - Distinguished Scientist at Google and Alumni Award Winner (Ph.D., 1999; M.S., 1998; B.S., 1994)

The LLM (Large Language Model) Revolution: Implications from Chatbots and Tool-use to Reasoning

Abstract
Deep learning has been a shock to our field in many ways, yet many of us were still surprised at the incredible performance of Large Language Models (LLMs). LLMs use new deep learning techniques with massively large data sets to understand, predict, summarize, and generate new content. LLMs like ChatGPT and Bard have seen a dramatic increase in their capabilities---generating text that is nearly indistinguishable from human-written text, translating languages with amazing accuracy, and answering your questions in an informative way. This has led to a number of exciting research directions for chatbots, tool-use, and reasoning:

- Chatbots: LLM-based chatbots are more engaging and informative than traditional chatbots. First, LLMs can understand the context of a conversation better than ever before, allowing them to provide more relevant and helpful responses. Second, LLMs enable more engaging conversations than traditional chatbots, because they can understand the nuances of human language and respond in a more natural way. For example, LLMs can make jokes, ask questions, and provide feedback. Finally, because LLM chatbots can hold conversations on a wide range of topics, they can eventually learn and adapt to the user's individual preferences.

- Tool-use, Retrieval Augmentation and Multi-modality: LLMs are also being used to create tools that help us with everyday tasks. For example, LLMs can be used to generate code, write emails, and even create presentations. Beyond human-like responses in chatbots, LLM innovators later realized LLMs' ability to incorporate tool-use, including calling search and recommendation engines, which means that they can effectively become human assistants that synthesize summaries from web search and recommendation results. Tool-use integration has also enabled multimodal capabilities, which means that the chatbot can produce text, speech, images, and video.

- Reasoning: LLMs are also being used to develop new AI systems that can reason and solve problems. Using chain-of-thought approaches, we have shown LLMs' ability to break a problem down into smaller problems, use logical reasoning to solve each of them, and then combine the solutions to reach the final answer (a minimal prompting sketch follows below). LLMs can answer common-sense questions by using their knowledge of the world to reason about the problem, and then use their language skills to generate text that is both creative and informative.
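To make the chain-of-thought idea above concrete, here is a minimal, illustrative Python sketch. The generate() function is a hypothetical stand-in for any LLM API (stubbed out so the snippet runs on its own); the only point is the difference between a direct prompt and one that asks the model to break the problem into steps before answering.

```python
# Illustrative sketch only: `generate` is a hypothetical placeholder for an LLM call,
# stubbed out so the example runs without any external API or model.
def generate(prompt: str) -> str:
    return "<model output would appear here>"

question = ("A cafeteria had 23 apples. It used 20 of them for lunch "
            "and then bought 6 more. How many apples does it have now?")

# Direct prompting: ask for the answer immediately.
direct_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompting: ask the model to decompose the problem,
# reason through the intermediate steps, and only then give a final answer.
cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step. Break the problem into smaller parts, "
    "solve each part, and then combine the results into a final answer."
)

print(generate(direct_prompt))
print(generate(cot_prompt))
```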

In this talk, I will cover recent advances in these 3 major areas, attempting to draw connections between them, and paint a picture of where major advances might still come from.  While the LLM revolution is still in its early stages, it has the potential to revolutionize the way we interact with AI, and make a significant impact on our lives.

About
Ed H. Chi is a Distinguished Scientist at Google DeepMind, leading machine learning research teams working on large language models (LaMDA/Bard), neural recommendations, and reliable machine learning. With 39 patents and ~200 research articles, he is also known for research on user behavior in web and social media. As the Research Platform Lead, he helped launch Bard, a conversational AI experiment, and delivered significant improvements for YouTube, News, Ads, and the Google Play Store at Google, with more than 660 product improvements since 2013.

Prior to Google, he was Area Manager and Principal Scientist at Xerox Palo Alto Research Center's Augmented Social Cognition Group, researching how social computing systems help groups of people remember, think, and reason. Ed earned his three degrees (B.S., M.S., and Ph.D.) in 6.5 years from the University of Minnesota. Inducted as an ACM Fellow and into the CHI Academy, he also received a 20-year Test of Time award for research in information visualization. He has been featured and quoted in the press, including the Economist, Time Magazine, the LA Times, and the Associated Press. An avid golfer, swimmer, photographer, and snowboarder in his spare time, he also has a black belt in Taekwondo.

ML Seminar: Aldo Scutari (IE, Purdue)

CSE DSI Machine Learning seminars will be held Tuesdays 11 a.m. - 12 p.m. Central Time in hybrid mode. We hope to facilitate face-to-face interactions among faculty, students, and partners from industry, government, and NGOs by hosting some of the seminars in person. See individual dates for more information.

This week's speaker, Aldo Scutari (IE, Purdue), will be giving a talk titled, "Statistical Inference over Networks: Decentralized Optimization Meets High-Dimensional Statistics".

Abstract

There is growing interest in solving large-scale statistical machine learning problems over decentralized networks, where data are distributed across the nodes of the network and no centralized coordination is present (we term these systems “mesh” networks). Inference from massive datasets poses a fundamental challenge at the nexus of the computational and statistical sciences: ensuring the quality of statistical inference when computational resources, like time and communication, are constrained. While statistical-computational tradeoffs have been largely explored in the centralized setting, our understanding of mesh networks is limited: (i) distributed schemes, designed and performing well in the classical low-dimensional regime, can break down in the high-dimensional case; and (ii) existing convergence studies may fail to predict algorithmic behaviors, with some findings directly contradicted by empirical tests. This is mainly because the majority of distributed algorithms have been designed and studied only from the optimization perspective, lacking the statistical dimension. This talk will discuss some vignettes from high-dimensional statistical inference suggesting new analyses (and designs) aimed at bringing statistical thinking into distributed optimization.
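As a toy illustration of the decentralized setting described above (not of the speaker's own algorithms), the sketch below runs plain decentralized gradient descent (DGD) over a ring-shaped "mesh" network: each node holds a local least-squares objective, communicates only with its two neighbors through a doubly stochastic mixing matrix, and still approaches the centralized solution up to a small constant-stepsize bias. All sizes and stepsizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, n_local = 8, 5, 30            # nodes, dimension, samples per node

# Each node i holds a local least-squares objective f_i(x) = 0.5 * ||A_i x - b_i||^2
x_true = rng.standard_normal(d)
A = [rng.standard_normal((n_local, d)) for _ in range(m)]
b = [Ai @ x_true + 0.1 * rng.standard_normal(n_local) for Ai in A]

# Ring "mesh" network: doubly stochastic mixing matrix (average with the two neighbors)
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = W[i, (i - 1) % m] = W[i, (i + 1) % m] = 1 / 3

# Decentralized gradient descent: mix with neighbors, then take a local gradient step
X = np.zeros((m, d))                # row i is node i's current estimate
alpha = 1e-3
for _ in range(3000):
    grads = np.stack([Ai.T @ (Ai @ xi - bi) for Ai, bi, xi in zip(A, b, X)])
    X = W @ X - alpha * grads

# Compare against the centralized solution that sees all the data at once
x_central = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("max node-to-centralized error:", np.abs(X - x_central).max())
```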

Biography

Gesualdo Scutari is a Professor with the School of Industrial Engineering and Electrical and Computer Engineering (by courtesy) at Purdue University, West Lafayette, IN, USA, and he is a Purdue Faculty Scholar. His research interests include continuous optimization, equilibrium programming, and their applications to signal processing and statistical learning. Among others, he was a recipient of the 2013 NSF CAREER Award, the 2015 IEEE Signal Processing Society Young Author Best Paper Award, and the 2020 IEEE Signal Processing Society Best Paper Award. He serves as an IEEE Signal Processing Society Distinguished Lecturer (2023-2024). He has served on the editorial boards of several IEEE journals and is currently an Associate Editor of the SIAM Journal on Optimization. He is an IEEE Fellow.

CRAY Colloquium: Digital Transformations of Cleanrooms in Academic Scientific Environments

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m. This week's speaker, Klara Nahrstedt (University of Illinois Urbana-Champaign), will be giving a talk titled "Digital Transformations of Cleanrooms in Academic Scientific Environments".

Abstract

Computer science and engineering have made tremendous advances that enable digital transformations in many domains via computing technologies such as data processing and management, Internet of Things (IoT) systems, wired and wireless networks, machine learning and multi-modal analytics, and others.

These computing advances are now coming to academic scientific environments, used in the physical and life sciences, enabling digital transformations not seen before. These digital transformations are enabling, and will continue to enable, faster materials discovery, shortening the span between the discovery of materials and their use in the development of devices, circuits, and computer architectures, along with other scientific discoveries on our campuses. However, achieving these digital transformations in academic environments is a non-trivial task compared to industrial scientific environments, due to the highly diverse groups who work in academic cleanrooms, heterogeneous scientific equipment with very different lifespans, and major cost and other resource constraints. In this talk, I will discuss the challenges of academic cleanrooms and the diversity of computing technologies that can and do contribute to the digital transformations in cleanrooms and other academic scientific environments. I will discuss dealing with data acquisition, processing, and management from diverse microscopes, and handling older scientific instruments and their security concerns. Furthermore, I will present IoT systems that give scientists and lab managers access to much finer-grained state information in cleanrooms, including micro-climate information, maintenance information for scientific instruments, and visualization of anomaly and alert information in case of failures.

Biography

Klara Nahrstedt is the Grainger Distinguished Chair in Engineering Professor in the Computer Science Department and Director of the Coordinated Science Laboratory in the Grainger College of Engineering at the University of Illinois Urbana-Champaign. Her research interests are directed toward multimedia systems and networks, immersive computing, Quality of Service (QoS), Quality of Experience (QoE), resource management, and the Internet of Things in critical cyber-physical systems. She is the co-author of the widely used multimedia books ‘Multimedia: Computing, Communications, and Applications’, published by Prentice Hall, and ‘Multimedia Systems’, published by Springer Verlag. She is the recipient of the IEEE Communication Society Leonard Abraham Award for Research Achievements, a University Scholar award, the Humboldt Research Award, and the IEEE Computer Society and ACM SIGMM Technical Achievement Awards, and she is a former chair of the ACM Special Interest Group in Multimedia. She served as the general co-chair of ACM Multimedia, IEEE PerCom, IEEE SmartGridComm, ACM/IEEE IoTDI, IEEE SECON, and other venues. Klara Nahrstedt received her Diploma in Mathematics from Humboldt University, Berlin, Germany in 1985. In 1995 she received her PhD from the University of Pennsylvania in the Department of Computer and Information Science. She is an ACM, IEEE, and AAAS Fellow, a Member of the Leopoldina German National Academy of Sciences, and a Member of the National Academy of Engineering.

ML Seminar: Qing Qu (University of Michigan)

CSE DSI Machine Learning seminars will be held Tuesdays 11 a.m. - 12 p.m. Central Time in hybrid mode. We hope to facilitate face-to-face interactions among faculty, students, and partners from industry, government, and NGOs by hosting some of the seminars in person. See individual dates for more information.

This week's speaker, Qing Qu (University of Michigan), will be giving a talk titled, "On the Emergence of Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Networks".

Abstract

Over the past few years, an extensively studied phenomenon in training deep networks is the implicit bias of gradient descent towards parsimonious solutions. In this work, we first investigate this phenomenon by narrowing our focus to deep linear networks. Through our analysis, we reveal a surprising "law of parsimony" in the learning dynamics when the data possesses low-dimensional structures. Specifically, we show that the evolution of gradient descent starting from orthogonal initialization only affects a minimal portion of singular vector spaces across all weight matrices. In other words, the learning process happens only within a small invariant subspace of each weight matrix, even though all weight parameters are updated throughout training. 
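The following is a small numerical sketch meant only to illustrate the kind of experiment the abstract describes, not to reproduce the paper's exact setting or constants: a three-layer deep linear network (deep matrix factorization) is trained by gradient descent from a scaled orthogonal initialization to fit a rank-2 target, and the singular values of each weight update W_l(final) - W_l(init) are printed. The "law of parsimony" prediction is that only a handful of these are non-negligible, even though every entry of every weight matrix changes.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 20, 2                                  # network width, rank of the target

# Rank-r target, normalized so its spectral norm is 1
Phi = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
Phi /= np.linalg.norm(Phi, 2)

def scaled_orthogonal(k, eps=0.1):
    q, _ = np.linalg.qr(rng.standard_normal((k, k)))
    return eps * q

# Three-layer deep linear network fit by gradient descent:
#   minimize  L(W1, W2, W3) = || W3 W2 W1 - Phi ||_F^2
W = [scaled_orthogonal(d) for _ in range(3)]
W_init = [w.copy() for w in W]
lr = 5e-3

for _ in range(10000):
    R = W[2] @ W[1] @ W[0] - Phi              # residual
    g1 = (W[2] @ W[1]).T @ R                  # dL/dW1 (up to the common factor 2)
    g2 = W[2].T @ R @ W[0].T                  # dL/dW2
    g3 = R @ (W[1] @ W[0]).T                  # dL/dW3
    for w, g in zip(W, (g1, g2, g3)):
        w -= 2 * lr * g

# How many singular directions of each weight matrix actually moved?
for l, (w, w0) in enumerate(zip(W, W_init), 1):
    s = np.linalg.svd(w - w0, compute_uv=False)
    print(f"layer {l}: leading singular values of the update:", np.round(s[:6], 4))
```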

This simplicity in learning dynamics could have significant implications for both efficient training and a better understanding of deep networks. First, the analysis enables us to considerably improve training efficiency by taking advantage of the low-dimensional structure in learning dynamics. We can construct smaller, equivalent deep linear networks without sacrificing the benefits associated with their wider counterparts. Moreover, we demonstrate the potential implications for efficiently training deep nonlinear networks.

Second, it allows us to better understand deep representation learning by elucidating the progressive feature compression and discrimination from shallow to deep layers. The study lays the foundation for understanding hierarchical representations in deep nonlinear networks.

Biography

Qing Qu is an assistant professor in the EECS department at the University of Michigan. Prior to that, he was a Moore-Sloan Data Science Fellow at the Center for Data Science, New York University, from 2018 to 2020. He received his Ph.D. in Electrical Engineering from Columbia University in October 2018. He received his B.Eng. from Tsinghua University in July 2011 and an M.Sc. from Johns Hopkins University in December 2012, both in Electrical and Computer Engineering. He interned at the U.S. Army Research Laboratory in 2012 and at Microsoft Research in 2016. His research interest lies at the intersection of the foundations of data science, machine learning, numerical optimization, and signal/image processing, with a focus on developing efficient nonconvex methods and global optimality guarantees for solving representation learning and nonlinear inverse problems in engineering and imaging sciences. He is the recipient of the Best Student Paper Award at SPARS'15 (with Ju Sun and John Wright) and a Microsoft PhD Fellowship in machine learning. He received the NSF CAREER Award in 2022 and an Amazon Research Award (AWS AI) in 2023.

CS&E Colloquium: Interpreting and Steering AI Explanations with Interactive Visualizations

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m. This week's speaker, Qianwen Wang (University of Minnesota), will be giving a talk titled "Interpreting and Steering AI Explanations with Interactive Visualizations."

Abstract

Artificial Intelligence (AI) has advanced at a rapid pace and is expected to revolutionize many biomedical applications. However, current AI methods are usually developed via a data-centric approach regardless of the usage context and the end users, posing challenges for domain users in interpreting AI, obtaining actionable insights, and collaborating with AI in decision-making and knowledge discovery.

In this talk, I discuss how this challenge can be addressed by combining interactive visualizations with interpretable AI. Specifically, I present two methodologies: 1) visualizations that explain AI models and predictions, and 2) interaction mechanisms that integrate user feedback into AI models. Despite some challenges, I will conclude on an optimistic note: interactive visual explanations should be indispensable for human-AI collaboration. The methodology discussed can be applied generally to other applications where human-AI collaboration is involved, assisting domain experts in data exploration and insight generation with the help of AI.

Biography

Qianwen Wang is a tenure-track assistant professor at the Department of Computer Science and Engineering at the University of Minnesota. Before joining UMN, she was a postdoctoral fellow at Harvard University. Her research aims to enhance communication and collaboration between domain users and AI through interactive visualizations, particularly focusing on their applications in addressing biomedical challenges.

Her research in visualization, human-computer interaction, and bioinformatics has been recognized with awards and featured in prestigious outlets such as MIT News and Nature Technology Features. She has earned multiple recognitions, including two best abstract awards from BioVis ISMB, one best paper award from IMLH@ICML, one best paper honorable mention from IEEE VIS, and the HDSI Postdoctoral Research Fund. 

ML Seminar: Volkan Cevher (Swiss Federal Institute of Technology Lausanne)

CSE DSI Machine Learning seminars will be held Tuesdays 11 a.m. - 12 p.m. Central Time in hybrid mode. We hope to facilitate face-to-face interactions among faculty, students, and partners from industry, government, and NGOs by hosting some of the seminars in person. See individual dates for more information.

This week's speaker, Volkan Cevher (Swiss Federal Institute of Technology Lausanne), will be giving a talk titled, "Key Challenges in Foundation Models (... and some solutions!)".

Abstract

Thanks to neural networks (NNs), faster computation, and massive datasets, machine learning is under increasing pressure to provide automated solutions to ever harder real-world tasks, beyond human performance and with ever faster response times, because of the potentially huge technological and societal benefits. Unsurprisingly, NN learning formulations present fundamental challenges to the back-end learning algorithms despite their scalability. In this talk, we will work backwards from the "customer's" perspective and highlight these challenges specifically for Foundation Models based on NNs. We will then explain our solutions to some of these challenges, focusing mostly on robustness aspects. In particular, we will show how the existing theory and methodology for robust training misses the mark and how we can bridge theory and practice.

Biography

Volkan Cevher received the B.Sc. (valedictorian) in electrical engineering from Bilkent University in Ankara, Turkey, in 1999 and the Ph.D. in electrical and computer engineering from the Georgia Institute of Technology in Atlanta, GA in 2005. He was a Research Scientist with the University of Maryland, College Park, from 2006-2007 and with Rice University in Houston, TX, from 2008-2009. Currently, he is an Associate Professor at the Swiss Federal Institute of Technology Lausanne and a Faculty Fellow in the Electrical and Computer Engineering Department at Rice University. His research interests include machine learning, signal processing theory, optimization theory and methods, and information theory. Dr. Cevher is an ELLIS fellow and was the recipient of the ICML AdvML Best Paper Award in 2023, a Google Faculty Research Award in 2018, the IEEE Signal Processing Society Best Paper Award in 2016, a Best Paper Award at CAMSAP in 2015, a Best Paper Award at SPARS in 2009, and an ERC CG in 2016 as well as an ERC StG in 2011.

Thirst for Knowledge: A Human-Centered Approach to AI

Join the Department of Computer Science & Engineering (CS&E) for this all-alumni event to discuss a human-centered approach to AI, featuring faculty from the GroupLens Research Lab. Enjoy hosted beverages and appetizers, and the chance to reconnect with former classmates, colleagues, instructors, and friends. All alumni of the University of Minnesota CS&E programs (Computer Science, Data Science, MSSE) are invited to attend, and guests are welcome. 

There is no charge to attend our event, but pre-registration is required. 

About the Program

What happens when you put people at the center of computing? The GroupLens Lab has some answers. Learn how systems that understand us as social beings can bring people together to solve large problems -- or to find help for one person at a time. We also will talk about how Human-Centered AI can harness the power of optimization and machine learning to solve big social challenges while protecting human values of fairness, transparency, and helpfulness. 
 

CRAY Colloquium: Well-being, AI, and You: Developing AI-based Technology to Enhance our Well-being

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m. This week's speaker, Alon Halevy (Meta), will be giving a talk titled "Well-being, AI, and You: Developing AI-based Technology to Enhance our Well-being."

Abstract

Many applications claim to enhance our well-being, whether directly by aiding meditation and exercise, or indirectly, by guiding us to our destinations, assessing our sleep quality, or helping us manage our daily tasks. However, the truth is that the potential of technology to improve our well-being often eludes us, and this is happening at the dawn of an era where AI is supposed to usher in a new generation of personalized assistants.  Presently, we find ourselves more distracted than ever, devoting excessive time to pondering life’s minutiae, and struggling to fully embrace the present moment.  

Part of the reason that our well-being is not benefiting fully from technology is the fact that each of these apps focuses on a specific aspect of well-being, lacking coordination with other apps. This situation is reminiscent of the early days of computer programming when each program interacted directly with the computer's hardware. Drawing from this analogy, this talk will begin by describing a set of mechanisms that can facilitate better cooperation between well-being applications, effectively proposing an operating system for well-being. This operating system comprises a data repository, referred to as a personal timeline, which captures your past experiences and future aspirations. It also includes mechanisms for utilizing your personal data to provide improved recommendations and life plans, and, lastly, a module to assist in nurturing and navigating crucial relationships in your life.

The second half of the talk will delve into the technical challenges involved in building the components of the operating system. In particular, we will focus on the creation of your life-experiences timeline from the digital data you create on a daily basis. In this context, we will identify opportunities for language models to be a core component on which we build systems for querying personal timelines and for supporting other components of the operating system. In particular, the challenge of answering questions about your timeline raises important challenges at the intersection of large language models and structured data.
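To make the "personal timeline" idea a bit more tangible, here is a toy, hypothetical sketch of such a data store and a naive structured query over it; every name and field below is invented for illustration, and in the architecture described above a language model would translate natural-language questions into queries of roughly this shape.

```python
# Toy, hypothetical sketch of a "personal timeline" store and a naive query over it.
from dataclasses import dataclass
from datetime import date

@dataclass
class Episode:
    when: date
    kind: str           # e.g. "photo", "trip", "workout", "plan"
    summary: str

timeline = [
    Episode(date(2023, 6, 10), "trip", "Weekend hiking trip on the North Shore"),
    Episode(date(2023, 9, 2), "workout", "First 10k run of the fall"),
    Episode(date(2024, 1, 15), "plan", "Train for a half marathon in May"),
]

def query(timeline, kind=None, since=None):
    """Return episodes matching an optional kind and an optional start date."""
    return [e for e in timeline
            if (kind is None or e.kind == kind)
            and (since is None or e.when >= since)]

# "What workouts have I logged since the start of 2023?"
for e in query(timeline, kind="workout", since=date(2023, 1, 1)):
    print(e.when, e.summary)
```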

Biography

Alon Halevy is a director at Meta’s Reality Labs Research, where he works on Personal Digital Data, the combination of neural and symbolic techniques for data management, and Human Value Alignment. Prior to Meta, Alon was the CEO of Megagon Labs (2015-2018) and led the Structured Data Group at Google Research (2005-2015), where the team developed WebTables and Google Fusion Tables. From 1998 to 2005 he was a professor at the University of Washington, where he founded the database group. Alon is a founder of two startups, Nimble Technology and Transformic (acquired by Google in 2005). Alon co-authored two books: The Infinite Emotions of Coffee and Principles of Data Integration. In 2021 he received the Edgar F. Codd SIGMOD Innovations Award. Alon is a Fellow of the ACM and a recipient of the PECASE award and a Sloan Fellowship. Together with his co-authors, he received VLDB 10-year best paper awards for the 2008 paper on WebTables and for the 1996 paper on the Information Manifold data integration system.

ML Seminar: Benjamin Grimmer (Johns Hopkins)

CSE DSI Machine Learning seminars will be held Tuesdays 11 a.m. - 12 p.m. Central Time in hybrid mode. We hope to facilitate face-to-face interactions among faculty, students, and partners from industry, government, and NGOs by hosting some of the seminars in person. See individual dates for more information.

This week's speaker, Benjamin Grimmer (Johns Hopkins), will be giving a talk titled, "Accelerated Gradient Descent via Long Steps".

Abstract

This talk will discuss recent work establishing provably faster convergence rates for gradient descent in smooth convex optimization via semidefinite programming and computer-assisted analysis techniques. We do this by allowing nonconstant stepsize policies with frequent long steps that potentially violate descent. This is managed by analyzing the overall effect of many iterations at once, rather than the typical one-iteration inductions used in most first-order method analyses. We prove an O(1/T^{1.02449}) convergence rate, beating the classic O(1/T) rate simply by periodically including longer steps (no momentum needed!).
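As a toy illustration of the mechanism (using hypothetical placeholder stepsizes, not the certified patterns from the paper): on an ill-conditioned convex quadratic, gradient descent with a periodic schedule that occasionally takes a step longer than 2/L can still converge, and can make faster overall progress than the textbook constant 1/L schedule, even though the long steps are not individually descent steps.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Ill-conditioned convex quadratic f(x) = 0.5 * x^T A x, eigenvalues in [1e-3, 1]
eigs = np.logspace(-3, 0, n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigs) @ Q.T
L = eigs.max()                       # smoothness constant

def run(pattern, iters=300):
    """Gradient descent with a periodically repeating stepsize pattern (in units of 1/L)."""
    x = np.ones(n)
    for k in range(iters):
        x = x - (pattern[k % len(pattern)] / L) * (A @ x)   # grad f(x) = A x
    return 0.5 * x @ A @ x

constant = run([1.0])                # textbook 1/L gradient descent
long_steps = run([1.4, 1.4, 3.9])    # hypothetical periodic schedule with a long step > 2/L

print(f"f after 300 steps, constant 1/L        : {constant:.3e}")
print(f"f after 300 steps, periodic long steps : {long_steps:.3e}")
# Individual long steps can increase f, but each full cycle still makes progress overall.
```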

Biography

Ben Grimmer is an assistant professor at Johns Hopkins in Applied Math and Statistics. He completed his PhD at Cornell, mentored by Jim Renegar and Damek Davis, funded by an NSF fellowship, with brief stints at the Simons Institute and Google Research. Ben's work revolves around building meaningful foundational theory for first-order optimization methods. His research interests span from tackling challenges in nonsmooth/nonconvex/nonLipschitz optimization to developing novel (accelerated) projection-free "radial" methods based on dual gauge reformulations. This talk will just focus on developments on the foundations of classic gradient descent for smooth convex minimization.

CRAY Colloquium: Robot Navigation in Complex Indoor and Outdoor Environments

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m. This week's speaker, Dinesh Manocha (University of Maryland), will be giving a talk titled "Robot Navigation in Complex Indoor and Outdoor Environments".

Abstract

In the last few decades, most robotics success stories have been limited to structured or controlled environments. A major challenge is to develop robot systems that can operate in complex or unstructured environments corresponding to homes, dense traffic, outdoor terrains, public places, etc. In this talk, we give an overview of our ongoing work on developing robust planning and navigation technologies that use recent advances in computer vision, sensor technologies, machine learning, and motion planning algorithms. We present new methods that utilize multi-modal observations from an RGB camera, 3D LiDAR, and robot odometry for scene perception, along with deep reinforcement learning for reliable planning. The latter is also used to compute dynamically feasible and spatially aware velocities for a robot navigating among mobile obstacles and over uneven terrain. We have integrated these methods with wheeled robots, home robots, and legged platforms, and we highlight their performance in crowded indoor scenes, home environments, and dense outdoor terrains.

Biography

Dinesh Manocha is the Paul Chrisman-Iribe Chair in Computer Science & ECE and a Distinguished University Professor at the University of Maryland, College Park. His research interests include virtual environments, physically-based modeling, and robotics. His group has developed a number of software packages that are standard and licensed to 60+ commercial vendors. He has published more than 725 papers and supervised 46 PhD dissertations. He is a Fellow of AAAI, AAAS, ACM, and IEEE, a member of the ACM SIGGRAPH Academy, and a recipient of the Bézier Award from the Solid Modeling Association. He received the Distinguished Alumni Award from IIT Delhi and the Distinguished Career in Computer Science Award from the Washington Academy of Sciences. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc in November 2016.