Past events

CRAY Colloquium: Learning Coordinated, Performant, and Safe Flight with 20 Neurons

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m. This week's speaker, Gaurav Sukhatme (University of Southern California), will be giving a talk titled "Learning Coordinated, Performant, and Safe Flight with 20 Neurons."

Abstract

We have recently demonstrated the possibility of learning controllers that are zero-shot transferable to groups of real quadrotors via large-scale, multi-agent, end-to-end reinforcement learning. We train policies parameterized by neural networks that can control individual drones in a group in a fully decentralized manner. Our policies, trained in simulated environments with realistic quadrotor physics, demonstrate advanced flocking behaviors, perform aggressive maneuvers in tight formations while avoiding collisions with each other, break and re-establish formations to avoid collisions with moving obstacles, and efficiently coordinate in pursuit-evasion tasks. The model learned in simulation transfers to highly resource-constrained physical quadrotors. Motivated by these results and the observation that neural control of memory-constrained, agile robots requires small yet highly performant models, the talk will conclude with some thoughts on coaxing learned models onto devices with modest computational capabilities.

Biography

Gaurav S. Sukhatme is Professor of Computer Science and Electrical and Computer Engineering at the University of Southern California (USC). He is the inaugural Director of the USC School of Advanced Computing and the Executive Vice Dean of the USC Viterbi School of Engineering. He holds the Donald M. Aldstadt Chair in Advanced Computing and was the Chairman of the USC Computer Science Department from 2012-17. He earned a B.Tech. in Computer Science and Engineering at IIT Bombay and M.S. and Ph.D. degrees in Computer Science from USC. He is the co-director of the USC Robotics Research Laboratory and directs the USC Robotic Embedded Systems Laboratory. His research is in networked robots, learning robots, and field robotics. He is a Fellow of the AAAI, AAAS, and the IEEE, and a recipient of the NSF CAREER award, the Okawa Foundation research award, and an Amazon research award. He is one of the founders of the Robotics: Science and Systems conference and was the program chair of RSS 2005, ICRA 2008, and IROS 2011. He is the Editor-in-Chief of Autonomous Robots (Springer Nature).
 

NLP Seminar: Enabling Human-centric and Culturally Aware Safety of AI Agents

This week's NLP Seminar speaker, Maarten Sap (Carnegie Mellon University), will be giving a talk titled "Enabling Human-centric and Culturally Aware Safety of AI Agents."

Abstract

AI safety has made substantial strides, yet it still struggles to keep up with increasingly agentic AI use cases and often focuses on technical solutions rather than human-centered ones. In this talk, I'll outline some recent work toward making AI safety more human-centric and culturally aware.
First, I'll introduce HAICosystem and OpenAgentSafety, two new interactive benchmarks that evaluate LLM agents in multi-turn, tool-using interactions via simulations; both show that agents still have previously unknown safety issues stemming from tool use.
Then, focusing on users, I'll outline a recent study on how LLM agents should or should not refuse queries, showing that user perceptions, trust, and willingness to use LLMs are strongly affected by their refusal strategies, and that many current LLMs use least-preferred refusal strategies.
Finally, I'll cover an oft-overlooked aspect of safety, namely cultural safety. I'll introduce MC-Signs, a new benchmark that measures the cultural safety of LLMs, VLMs, and T2I systems with respect to culturally offensive non-verbal communication (e.g., hand gestures), showing strong Western-centric biases across all AI systems.
I'll conclude with some future directions towards better cultural and human-centric safety.  

Biography

Maarten Sap is an assistant professor in Carnegie Mellon University's Language Technologies Institute (CMU LTI), with a courtesy appointment in the Human-Computer Interaction Institute (HCII). He is also a part-time research scientist and AI safety lead at the Allen Institute for AI. His research focuses on (1) measuring and improving AI systems' social and interactional intelligence, (2) assessing and combatting social inequality, safety risks, and socio-cultural biases in human- or AI-generated language, and (3) building narrative language technologies for prosocial outcomes. He has presented his work at top-tier NLP and AI conferences, receiving paper awards or nominations at NeurIPS 2025, NAACL 2025, EMNLP 2023, ACL 2023, FAccT 2023, WeCNLP 2020, and ACL 2019. He was named a 2025 Packard Fellow and a recipient of the 2025 Okawa Research Award. His research has been covered in the press, including the New York Times, Forbes, Fortune, Vox, and more.

CRAY Colloquium: The Cognitive Costs of Technology Use: Attention, Multitasking, and Stress

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m. This week's speaker, Gloria Mark (University of California, Irvine), will be giving a talk titled "The Cognitive Costs of Technology Use: Attention, Multitasking, and Stress."

Abstract

We are undergoing a fundamental shift in how we think, work, and focus in the digital age. While personal technologies are designed to extend our capabilities, my research shows that they often lead to increased multitasking and stress—factors that can hinder performance. To understand technology use, I study people in their real-world environments using sensors and other mixed methods. In this talk, I’ll begin by showing how our attention spans on screens have significantly decreased over the past two decades. I’ll discuss how this change is connected to broader sociotechnical changes in our lives. I’ll present different types of attention people experience and their relation to mood and wellbeing. AI is also influencing how we pay attention and reason. Finally, I will discuss solutions at both the individual and collective levels for gaining agency with attention, sharing insights on how people can recognize and work with their natural attentional rhythms.

Biography

Gloria Mark is Professor Emerita at UC Irvine and a leading researcher on how digital technology shapes the modern mind. For more than two decades, she has studied how our tools alter the way we think, focus, and feel. A Fulbright Scholar and member of the ACM SIGCHI Academy, she has authored over 200 papers focusing on the human side of technology. Her work has been featured on The Ezra Klein Show, CBS Sunday Morning, NPR’s Hidden Brain, Freakonomics, and Armchair Expert with Dax Shepard, among many others. Her award-winning book, Attention Span, named the #1 Best Business and Management Book of 2023 by The Globe and Mail and a Next Big Idea Book Club selection, explores how our attention has become the defining struggle of the digital age. Through her writing and her Substack, The Future of Attention, she envisions a future where technology empowers, rather than overwhelms, and where we can flourish together.
 

CRAY Colloquium: Lean: Machine-Checked Mathematics and Verified Programming

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m. This week's speaker, Leonardo de Moura (Amazon Web Services), will be giving a talk titled "Lean: Machine-Checked Mathematics and Verified Programming."

Abstract

Imagine a world where mathematicians, programmers, and AI systems can collaborate with complete trust in each other's work. This is the promise of Lean, an open-source project that's transforming how we approach mathematics, software development, and artificial intelligence. Lean provides machine-checkable proofs, eliminating the need for manual verification and allowing humans and AI to build upon each other's work with confidence. By addressing the "Trust Bottleneck," Lean opens doors to cross-disciplinary collaboration. In this talk, we'll explore how Lean is impacting these fields. We’ll see how it's providing mathematicians with a new way to construct and verify complex proofs, enabling software developers to rigorously verify critical systems, and creating a foundation for more reliable AI for science and mathematics. We'll also discuss the role of the Lean Focused Research Organization (FRO), a non-profit dedicated to advancing Lean and growing its community. The FRO is driving Lean's development as both a proof assistant and an extensible programming language, empowering users to customize its capabilities for diverse applications. Through real-world examples from academia and industry, we'll discover how Lean is paving the way for a more efficient, reliable, and collaborative future in mathematics, software development, and AI.
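To give a flavor of what "machine-checkable" means in practice, here is a minimal Lean 4 sketch (a standard textbook exercise, not an example from the talk): a proof that zero is a left identity for addition on the natural numbers, which Lean's kernel verifies step by step.

```lean
-- A machine-checked proof that 0 + n = n for every natural number n.
-- The proof proceeds by induction on n; Lean's kernel certifies each step.
theorem zero_add_eq (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                          -- base case: 0 + 0 = 0 holds by computation
  | succ n ih => rw [Nat.add_succ, ih]   -- 0 + (n+1) = (0 + n) + 1, then apply the hypothesis
```

If any step were wrong, Lean would reject the proof outright; this is the mechanism that lets mathematicians, programmers, and AI systems build on each other's results without re-checking them by hand.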

Biography

Leo is a Senior Principal Applied Scientist in the Automated Reasoning Group at AWS. In his spare time, he dedicates himself to serving as the Chief Architect of the Lean FRO, a non-profit organization that he proudly co-founded alongside Sebastian Ullrich. He is also honored to hold a position on the Board of Directors at the Lean FRO, where he actively contributes to its growth and development. Before joining AWS in 2023, he was a Senior Principal Researcher in the RiSE group at Microsoft Research, where he worked for 17 years starting in 2006. Prior to that, he worked as a Computer Scientist at SRI International. His research areas are automated reasoning, theorem proving, decision procedures, SAT and SMT. He is the main architect of several automated reasoning tools: Lean, Z3, Yices 1.0 and SAL. Leo’s work in automated reasoning has been acknowledged with a series of prestigious awards, including the CAV, Haifa, and Herbrand awards, as well as the ACM SIGPLAN Programming Languages Software Award twice for Z3 and Lean. Leo’s work has also been reported in the New York Times and many popular science magazines such as Wired, Quanta, Nature News, and Scientific American.
 

CARLIS Colloquium: Love, Learning, and Computing Education

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m. This week's speaker, Amy J. Ko (University of Washington, Seattle), will be giving a talk titled "Love, Learning, and Computing Education."

Abstract

We live in a world that is increasingly full of hate, cruelty, and violence. These cultural forces are destabilizing schools, colleges, universities, libraries, and other places of informal learning, and every learner, teacher, and leader in them, threatening education and democracy in the U.S. and worldwide. What is our role, as computing educators and scholars, in resisting this hate? In this talk, I argue for love. A kind of love that shows up not as an abstraction in our values, but in the concrete ways that we teach computing, in the questions we ask about learning computing, in the technologies we create to support computing education, and in what we choose to teach about computing. To make this case, I examine my own experiences with love in computing education and then offer a conception of love in computing education, drawing upon a rich history of scholarship on love and learning. I then deconstruct some of the fundamental tensions between love, computing, and computing education culture. I end with several examples of loving computing education from scholars in our community, each showing us how we might reimagine our teaching, research, and institutions around love. Through this transformation, I hope we might inspire a generation of youth to help create loving uses of computing, a loving society more broadly, and perhaps a more loving scholarly community for ourselves.

Biography

Amy J. Ko studies equitable, liberatory learning and teaching about computing and information, in schools and beyond. She draws upon computing, education, learning sciences, behavioral sciences, sociology, and more, examining and reimagining learning through a transdisciplinary lens. Her work spans more than 140 peer-reviewed publications, with 23 receiving distinguished paper awards and 6 receiving most influential paper awards. She is an ACM Distinguished Member and a member of the SIGCHI Academy, for her substantial contributions to the field of human-computer interaction, computing education, and software engineering. She is Professor and Associate Dean for Academics at the University of Washington Information School, with a courtesy appointment in Computer Science & Engineering. She is also a proud biracial trans woman of color, mother, and community organizer for equity in K-12 education in the Pacific Northwest, which includes civil rights and sanctuary for transgender, non-binary, and gender nonconforming youth.
 

CANCELLED CRAY Colloquium: Prototyping the Coming XR/AI Singularity

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m. This week's speaker, Ken Perlin (New York University), will be giving a talk titled "Prototyping the Coming XR/AI Singularity."

Abstract

The combination of ubiquitous extended reality (XR) eyewear and artificial intelligence (AI) will soon enable people to add an interactive visual component to every conversation. Widespread adoption of such capabilities will create a singularity, as profound in its impact as were the Web and the smartphone.

This will transform many fields, including science, engineering, architecture, physical therapy, medicine, and education. There are exciting research challenges in learning how to integrate these rich digitally mediated shared experiences into our lives and our work. In this talk, I will describe some of the challenges and opportunities that lie ahead, and how we can use the tools of today to effectively prototype the reality of tomorrow.

Biography

Ken Perlin, a professor in the Department of Computer Science at New York University, directs the Future Reality Lab. His research interests include multi-participant extended reality, computer graphics and animation, user interfaces, and education. He is chief scientist at Tactonic Technologies. He is an advisor for High Fidelity and a Fellow of the National Academy of Inventors. He received an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise and turbulence procedural texturing techniques, which are widely used in feature films and television. His other honors include membership in the ACM/SIGGRAPH Academy, the 2025 NYU Innovator of the Year Award, the 2020 New York Visual Effects Society Empire Award, the 2008 ACM/SIGGRAPH Computer Graphics Achievement Award, the TrapCode award for achievement in computer graphics research, the NYC Mayor's award for excellence in Science and Technology, the Sokol award for outstanding Science faculty at NYU, and a Presidential Young Investigator Award from the National Science Foundation. He serves on the Advisory Board for the Centre for Digital Media at GNWC and on the program committee of the AAAS. He was external examiner for the Interactive Digital Media program at Trinity College, general chair of the UIST 2010 conference, director of the NYU Center for Advanced Technology in Digital Multimedia, and co-director of the NYU Games for Learning Institute, and has been a featured artist at the Whitney Museum of American Art. He received his Ph.D. in Computer Science from NYU, and a B.A. in theoretical mathematics from Harvard. Before working at NYU he was Head of Software Development at R/GREENBERG Associates in New York, NY. Prior to that he was the System Architect for computer generated animation at MAGI, where he worked on TRON.

CS&E Colloquium: Engineering Translational Human Neuroscience Through Brain Stimulation and Recording

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m. This week's speaker, David Darrow (UMN Department of Neurosurgery and Psychiatry), will be giving a talk titled "Engineering Translational Human Neuroscience Through Brain Stimulation and Recording."

Abstract

Human brain disorders such as chronic pain, Parkinson's disease, epilepsy, depression, and traumatic brain injury arise from dysfunction in distributed neural circuits rather than isolated anatomical lesions. However, most neuromodulation therapies remain anatomically targeted, open-loop, and only partially optimized.

This talk presents an engineering framework for translational human neuroscience built around three pillars: large-scale neural recording in humans, circuit-level system identification linked to behavior, and rational optimization of brain stimulation. Our group leverages intracranial electrophysiology (128–256 channels, multi-kHz sampling), noninvasive stimulation techniques, and implantable neuromodulation devices to directly measure and perturb human brain networks. By combining these recordings with computational modeling approaches, including reinforcement learning, drift diffusion modeling, and Bayesian optimization, we identify individualized circuit biomarkers and guide stimulation targeting across disorders. I will present examples spanning epilepsy monitoring, modulation of cognitive effort with deep brain stimulation in Parkinson’s disease, precision functional mapping for chronic pain, and emerging closed-loop approaches for seizure and autoregulatory failure detection.
 

The long-term goal is to engineer adaptive, circuit-informed neuromodulation systems that move seamlessly from laboratory discovery to clinical implementation. This work sits at the intersection of computer science, engineering, and medicine and highlights how scalable algorithms and real-time control methods can be deployed in the most complex dynamical system we know: the human brain.

Biography

Dr. David Darrow is a board-certified pain and functional neurosurgeon, an Assistant Professor in the Department of Neurosurgery and Psychiatry at the University of Minnesota, and the Rockswold-Kaplan Endowed Chair for Traumatic Brain Injury at Hennepin County Medical Center, specializing in functional and pain neurosurgery. Dr. Darrow treats diseases of the central nervous system with neuromodulation, including epilepsy, movement disorders, trigeminal neuralgia/facial pain, chronic pain, and psychiatric diseases.
 
Dr. Darrow is co-PI of the Herman-Darrow Human Neuroscience Lab with a mission of understanding and treating disorders of the nervous system with neuromodulation. The Herman-Darrow Lab links together circuit-level electrophysiology with behavior. By pairing neuromodulation with a quantitative understanding of the pathological circuits of the brain, the lab hopes to help patients improve symptoms and quality of life. 
 
Dr. Darrow is also the PI of the Restorative Neurotrauma Lab at HCMC where electrophysiology and neuromodulation are used to better understand and treat traumatic injuries of the central nervous system. He is the PI for the E-STAND trial, where neuromodulation is used to restore function after Spinal Cord Injury. In collaboration with many other investigators, the team is testing neuromodulation to restore volitional movement and autonomic function using algorithmic, personalized approaches through remote data collection.
 

CS&E Colloquium: Why do machines now show intelligence? And how can we use this to augment humans.

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m. This week's speaker, Mar Gonzalez-Franco (Google), will be giving a talk titled "Why do machines now show intelligence? And how can we use this to augment humans."

Abstract

The arrival of AI puts additional pressure on redefining how humans will interact with machines. Until now, humans have been the brains that orchestrated what machines have done; however, with AI, we can see a future where this will need to happen in a cooperative way, and computers might even become fully independent in their actions and decisions. This talk presents a future where AI becomes more interactive as humans use it via XR devices. For this, we will need better XR–AI interaction metaphors and better inputs to convey intent to the AI. This interactive approach has the potential to enable humans to evolve as AI evolves and unlock tremendous benefits for the general population. The talk also does a retrospective into how we have reached the current moment.

Biography

Dr. Mar Gonzalez-Franco is a Computer Scientist and Neuroscientist working at the intersection of human and artificial intelligence. She currently leads a 30+ person organization at Google, including the Blended Intelligence Research and Devices (BIRD) lab, where she focuses on augmenting human capabilities by blending innate intelligence with AI through immersive experiences. Her team has led the research and product development for input and interactions on the new XR OS, Android XR, which recently launched with the Samsung Galaxy XR as the first XR AI OS. The team's work on setting guidelines for inputs on XR operating systems was recognized with the SIGCHI Special Recognition in 2025. Previously, as a Principal Researcher at Microsoft Research, Dr. Gonzalez-Franco led the release of multiple avatar libraries for research and the development of Avatars and Together Mode in Microsoft Teams, a feature recognized as one of Time Magazine's "Best Inventions of 2022." A pioneer in immersive AI and AR/VR, she holds over 40 patents and has authored more than 100 academic publications.
 

CS&E Colloquium: Automated Reasoning at Cloud Scale

The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m. This week's speaker, Mike Whalen (Amazon Web Services), will be giving a talk titled "Automated Reasoning at Cloud Scale."

Abstract

Amazon Web Services (AWS) is a cloud computing services provider that has made significant investments in automated reasoning to check the correctness of its internal systems and to provide assurances to customers. We are using proof engines called SAT and SMT solvers more than one billion times a day, both for real-time queries in customer security (checking security policies and verifying network protections), and also for large queries involving code and hardware reasoning that are at the limits of what can be feasibly solved by current solvers. Reaching this scale requires that we consistently focus on our customers and provide measurable improvements for existing customer problems.  It also requires careful examination of the problems to be solved and a clear focus on operations to ensure our analyses are consistently trustworthy and performant. I will discuss these aspects and, more generally, steps to make automated reasoning successful in a commercial organization.

Biography

Dr. Michael Whalen is a Principal Applied Scientist at Amazon Web Services and the former Director of the University of Minnesota Software Engineering Center. Dr. Whalen is interested in formal analysis, language translation, testing, and requirements engineering. He has led development of simulation, translation, testing, and formal analysis tools for programming languages (Java, Rust, and C) and model-based development languages (Simulink, Stateflow, and SCADE). Dr. Whalen has published 99 peer-reviewed articles on these topics, including 3 ICSE distinguished papers. He has led successful formal verification projects on real-time operating systems, foundational Amazon C libraries, and several industrial avionics projects. He is currently working on formal verification at “cloud scale”, looking at how to scale testing and proof tools to larger and more complex problems than current tools can handle. He is also involved with outreach, helping developers and business customers apply verification tools to improve their teams' quality, velocity, and innovation.
 

Graduate Programs Online Information Session

RSVP today!

During each session, the graduate staff will review:

  • Requirements (general)
  • Applying
  • Prerequisite requirements
  • What makes a strong applicant
  • Funding
  • Resources
  • Common questions
  • Questions from attendees

Students considering the following programs should attend: