Past events

Computer Science major applications open

On March 1, applications open for the computer science and data science majors. The application deadline is May 25.

Students typically apply to a major while enrolled in fall semester courses during their sophomore year (third semester).


All applicants will be notified of their admission decision via email within three weeks of the application deadline.
 

CS&E Colloquium: Hyperscale Data Processing with Network-centric Designs

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m.

This week's speaker, Qizhen Zhang (University of Pennsylvania), will be giving a talk titled "Hyperscale Data Processing with Network-centric Designs".

Abstract

Today's largest data processing workloads are hosted in cloud data centers. Due to exponential data growth and the end of Moore's Law, these workloads have ballooned to the hyperscale level, where a single query encompasses billions to trillions of data items spread across hundreds to thousands of servers connected by the data center network. These massive scales fundamentally challenge the designs of both data processing systems and data center networks. My research rethinks the interactions between these two layers and seeks the optimal solutions for supporting data processing in data centers and evolving the cloud infrastructure.


In this talk, I will present a principled and cross-layer approach to building network-centric systems for hyperscale workloads. My approach covers data processing in both current networks and future networks, as well as how networks evolve. To demonstrate its efficiency, I will first discuss GraphRex, a system that combines classic database and systems techniques to push the performance of massive graph queries in current data centers. I will then introduce data processing in disaggregated data centers (DDCs), a promising new cloud proposal. I will detail TELEPORT, a system that allows data processing systems to unlock all DDC benefits. Finally, I will also show MimicNet, a system that facilitates network innovation at scale.

Biography

Qizhen Zhang is a Ph.D. candidate in the Department of Computer and Information Science at the University of Pennsylvania, advised by Vincent Liu and Boon Thau Loo. His dissertation research bridges cloud data processing systems and data center networks to address emerging challenges in hyperscale data processing. He is broadly interested in data management, computer systems, and networking, and his research spans the data processing stack. His work appears at database and systems conferences such as SIGMOD, VLDB, and SIGCOMM.

 

Minnesota Natural Language Processing Seminar Series: Dynabench: Rethinking Benchmarking in AI

The Minnesota Natural Language Processing (NLP) Seminar is a venue for faculty, postdocs, students, and anyone else interested in theoretical, computational, and human-centric aspects of natural language processing to exchange ideas and foster collaboration. The talks are every other Friday from 12 p.m. - 1 p.m. during the Spring 2022 semester.

This week's speaker, Douwe Kiela (Hugging Face), will be giving a talk titled "Dynabench: Rethinking Benchmarking in AI."

Abstract

The current benchmarking paradigm in AI has many issues: benchmarks saturate quickly, are susceptible to overfitting, contain exploitable annotator artifacts, have unclear or imperfect evaluation metrics, and do not necessarily measure what we really care about. I will talk about our work in trying to rethink the way we do benchmarking in AI, specifically in natural language processing, focusing mostly on the Dynabench platform (dynabench.org).

Biography

Douwe Kiela (@douwekiela, https://douwekiela.github.io/) is the Head of Research at Hugging Face. Previously, he was a Research Scientist at Facebook AI Research. His current research interests lie in developing better models for (grounded, multi-agent) language understanding and better tools for evaluation and benchmarking.

CS&E Colloquium: Towards Interactive Autonomy with Relational Reasoning

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m.

This week's speaker, Jiachen Li (Stanford University), will be giving a talk titled "Towards Interactive Autonomy with Relational Reasoning".

Abstract

Modern intelligent systems (e.g., autonomous vehicles, social robots) interact intensively with surrounding static/dynamic objects and human beings. In a multi-agent system, the interactions between entities/components can give rise to very complex dynamics and behavior patterns at the scales of both individuals and the entire system. Therefore, effective relational reasoning and interaction modeling among interacting entities play an essential role in scene understanding, decision making, and motion planning for autonomous systems. The ultimate goal of my research is to build intelligent and autonomous agents that can perceive, understand, and reason about the physical world; safely interact and collaborate with human beings; and efficiently coordinate with other intelligent agents. I aim to develop a unified, generalizable, and explainable framework with relational inductive biases to systematically model the relations/interactions between multiple entities/components.

In this talk, I will first discuss the formulation of relational reasoning based on a flexible and scalable graph representation, where nodes represent interacting entities and edges represent the relations between pairs of entities. Relational reasoning is investigated from two perspectives: a) explicitly inferring the underlying relation types/patterns between entities; and b) estimating the relative importance of one entity with respect to another. I will then discuss the effectiveness of relational reasoning through downstream tasks (e.g., behavior prediction, decision making). The proposed methods can be applied to multi-agent systems in various domains (e.g., physical systems, human crowds/teams, intelligent transportation systems). Finally, I will talk about my future research vision and agenda.
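The graph formulation described in the abstract can be sketched in a few lines. This is an illustrative toy only, not the speaker's implementation: the class name, the example scene, and the relation labels and weights are all invented. Nodes are interacting entities, and each directed edge carries a relation label (perspective a) and an importance weight (perspective b).

```python
from dataclasses import dataclass, field

@dataclass
class InteractionGraph:
    """Nodes are interacting entities; directed edges carry a relation
    label (perspective a) and an importance weight (perspective b)."""
    nodes: list
    edges: dict = field(default_factory=dict)  # (src, dst) -> (relation, weight)

    def add_relation(self, src, dst, relation, weight):
        self.edges[(src, dst)] = (relation, weight)

    def neighbors_by_importance(self, node):
        # Rank the entities influencing `node` by their edge weight.
        incoming = [(s, w) for (s, d), (_, w) in self.edges.items() if d == node]
        return sorted(incoming, key=lambda sw: sw[1], reverse=True)

# Hypothetical driving scene: two cars and a pedestrian interacting.
g = InteractionGraph(nodes=["car_A", "car_B", "pedestrian"])
g.add_relation("car_B", "car_A", relation="yielding", weight=0.8)
g.add_relation("pedestrian", "car_A", relation="crossing", weight=0.3)
ranked = g.neighbors_by_importance("car_A")  # most influential entity first
```

A downstream predictor would condition on exactly this kind of structure: the inferred relation types shape the interaction model, while the weights decide which neighbors matter for forecasting a given agent.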

Biography

Dr. Jiachen Li is currently a postdoctoral scholar in the Stanford Intelligent Systems Laboratory at Stanford University working with Prof. Mykel J. Kochenderfer. Before joining Stanford, he received his Ph.D. degree in Robotics from the University of California, Berkeley working with Prof. Masayoshi Tomizuka, and was affiliated with Berkeley DeepDrive. His research interest lies at the intersection of machine learning, computer vision, reinforcement learning, control and optimization approaches, and their applications to scene understanding and decision making for intelligent autonomous systems. In particular, his research focuses on enabling effective and efficient relational learning and reasoning to model interactive behaviors for multi-agent systems in uncertain, dynamically evolving environments. He has served as an organizer of multiple workshops on machine learning, computer vision, autonomous driving, and robotics at NeurIPS, ICCV, IV, ITSC, ICRA, and IROS. More details can be found at https://jiachenli94.github.io/.

CS&E Colloquium: Private Data Exploring, Sampling, and Profiling

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m.

This week's speaker, Chang Ge (University of Waterloo, Canada), will be giving a talk titled "Private Data Exploring, Sampling, and Profiling".

Abstract

Data analytics is widely used in business. In many cases, enterprise data analytics faces two practical challenges: 1) the datasets usually contain sensitive and private information and do not allow unfettered access; and 2) the data are often owned by multiple parties and stored in silos with different access controls. Analytics must therefore often be performed on private, siloed data.

In this talk, I discuss the challenges and introduce three systems that enable private data exploring, sampling, and profiling. On private data exploration, I describe our work in APEx for accuracy-aware differentially private data exploration; on private data sampling, I talk about the Kamino system for constraint-aware differentially private data synthesis; and on private data profiling, I introduce our work in SMFD for secure multi-party functional dependency discovery. 
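As background for the differential-privacy systems mentioned above, here is a minimal sketch of the standard Laplace mechanism for a counting query. This is the generic ε-DP primitive, not APEx's accuracy-aware mechanism; the function names and the toy table are invented for illustration.

```python
import random

def laplace_noise(scale, rng=random):
    # The difference of two i.i.d. exponential draws with mean `scale`
    # is distributed as Laplace(0, scale).
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1
    (sensitivity 1), so Laplace noise of scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# A private count over a hypothetical table of 100 rows.
rows = [{"salary": s} for s in range(100)]
noisy = private_count(rows, lambda r: r["salary"] >= 40, epsilon=1.0)
```

Smaller ε means a stronger privacy guarantee but noisier answers; systems like APEx invert this trade-off by letting the analyst specify an accuracy target and spending the minimum privacy budget needed to meet it.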

Biography

Chang Ge is a PhD candidate in the Data Systems Group at the University of Waterloo, advised by Ihab Ilyas. He is broadly interested in data management, with a recent focus on new algorithms and systems for data analytics in the presence of private and dirty data. He has also worked on data management problems at companies including Apple, Microsoft, IBM, and SAP. He was awarded the Queen Elizabeth II Graduate Scholarship in Science & Technology and the Cybersecurity and Privacy Excellence Graduate Scholarship from the University of Waterloo.

CS&E Colloquium: Action-Perception Synergy: Bio-Inspired AI For Small Autonomous Robots

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m.

This week's speaker, Nitin Sanket (University of Maryland), will be giving a talk titled "Action-Perception Synergy: Bio-Inspired AI For Small Autonomous Robots".

Abstract

The human fascination with mimicking ultra-efficient living beings like insects and birds has led to the rise of small autonomous robots. Smaller robots are safer, more agile, and can be distributed across tasks as swarms. One might wonder: why do we not have small robots deployed in the wild today? Smaller robots are constrained by severe limits on computation and sensor quality. To further exacerbate the situation, today's mainstream approach to autonomy on small robots relies on building a 3D map of the scene, which is then used to plan paths for a control algorithm to execute. Such a methodology has severely bounded the potential of small autonomous robots due to the strict separation between perception, planning, and control. Instead, we re-imagine each agent by drawing inspiration from insects at the bottom of the size and computation spectrum. Specifically, each of our agents is built from a series of hierarchical competences resting on bio-inspired sensorimotor AI loops that exploit the action-perception synergy. Here, the agent controls its own movement and physical interaction to make up for what it lacks in computation and sensing. Such an approach imposes additional constraints on the data gathered to solve the problem using Active and Interactive Perception. I will present how the world's first prototype of a RoboBeeHive was built using this philosophy. Finally, I will conclude with a recent theory called Novel Perception that utilizes the statistics of motion fields to tackle a broad class of problems in navigation and interaction. This method has the potential to be the go-to mathematical formulation for motion-field-based problems in robotics.

Biography

Dr. Nitin J. Sanket received his M.S. in Robotics from the University of Pennsylvania's GRASP lab, where he worked with Prof. Kostas Daniilidis on developing a benchmark for indoor-to-outdoor visual-inertial odometry systems. He is currently an Assistant Clinical Professor in the First-Year Innovation and Research Experience and a Postdoctoral fellow in the Perception and Robotics Group at the University of Maryland, College Park, where he works with Prof. Yiannis Aloimonos and Dr. Cornelia Fermüller. Nitin works on developing bio-inspired AI frameworks using the action-perception synergy for resource-constrained tiny mobile robots. His doctoral thesis won the Larry S. Davis award and the MDPI Drones Ph.D. Thesis award. Nitin is a recipient of the Dean's fellowship, Future Faculty fellowship, and Ann G. Wylie fellowship, and was the Maryland Robotics Center student ambassador. He has also taught courses, including hands-on aerial robotics and vision, planning, and control in aerial robotics. Nitin is currently an Associate Editor for IEEE Robotics and Automation Letters. He is also a reviewer for RA-L, T-ASE, IMAVIS, CVPR, ICRA, RSS, IROS, SIGGRAPH, and many other top journals and conferences.

CS&E Colloquium: No such thing as a model-free lunch? Model-free search and reliable decision making

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m.

This week's speaker, Jack Umenberger (MIT), will be giving a talk titled "No such thing as a model-free lunch? Model-free search and reliable decision making".

Abstract

Inspired by breakthrough results in the processing of complex data, machine learning is being increasingly applied to problems in decision-making and control. However, to be suitable for deployment in applications, we require learning-based algorithms that come with guarantees of reliability, robustness, and safety. 

This talk will focus on the reliability of model-free policy search, addressing the question: when do such methods find optimal solutions, and when do they get trapped in poor local minima? Existing work has considered static policies; however, for dynamic policies that remember past observations - necessary for optimal decision making in many applications - these questions have hitherto remained unanswered. Focusing on the classic control-theoretic problem of output estimation, I will present the first model-free policy search algorithm for dynamic policies guaranteed to converge to the optimal solution. 
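To make "model-free policy search" concrete, here is a toy random-search loop on a scalar regulation problem. This is illustrative only and not the speaker's algorithm: it uses a static (memoryless) policy, and the system, cost, and step sizes are invented. The key point it shows is that the search only ever compares rollout costs of perturbed gains; it never touches a model of the dynamics.

```python
import random

def rollout(k, steps=20):
    """Cost of the linear feedback policy u = -k*x on the scalar system
    x_{t+1} = x_t + u_t, starting from x0 = 1, with quadratic cost x^2 + u^2."""
    x, cost = 1.0, 0.0
    for _ in range(steps):
        u = -k * x
        cost += x * x + u * u
        x = x + u  # the search never inspects this model, only the cost
    return cost

def random_search(iters=200, step=0.1, seed=0):
    """Model-free search: perturb the gain both ways, keep the cheaper side."""
    rng = random.Random(seed)
    k = 0.0
    for _ in range(iters):
        delta = rng.gauss(0.0, 1.0)
        if rollout(k + step * delta) < rollout(k - step * delta):
            k = k + step * delta
        else:
            k = k - step * delta
    return k

k_found = random_search()
```

For this particular problem the cost is well-behaved and the search lands near the LQR-optimal gain; the talk's question is precisely when such benign behavior persists for dynamic, observation-remembering policies.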

Along the way, I’ll also describe my path toward working on this problem, highlighting some of my contributions to model-based approaches for safe and reliable control, including data-driven robust control, system identification, and trajectory optimization. I will offer my perspective on the strengths and weaknesses of model-free and model-based methods, as well as the ways in which they complement each other. 

The talk will conclude by discussing the potential of harnessing the best of model-free and model-based approaches for tackling challenging optimization problems more broadly, including those involving a mixture of continuous and discrete decisions. 

Biography

Jack Umenberger is a postdoctoral associate in Russ Tedrake's Robot Locomotion Group at the Massachusetts Institute of Technology. He received his PhD in Engineering and B.E. in Mechatronics from The University of Sydney, Australia, in 2018 and 2013, respectively, and was a postdoctoral fellow in the Division of Systems and Control at Uppsala University, Sweden, from 2017-2019. He is interested in understanding how learning can improve decision making in uncertain and complex environments, with a focus on modeling and control of dynamical systems from data.

Last day to receive a 25% tuition refund for canceling full semester classes

The last day to receive a 25% tuition refund for canceling full semester classes is Monday, February 14.

View the full academic schedule on One Stop.
 

CS&E Colloquium: Less Is More: Learning with Minimum Supervision for Embodied Agents

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m.

This week's speaker, Yanchao Yang (Stanford University), will be giving a talk titled "Less Is More: Learning with Minimum Supervision for Embodied Agents".

Abstract

We have recently seen many exciting robotic applications powered by neural networks trained on annotated datasets. However, when tested in the wild, these neural networks often output invalid predictions, resulting in unexpected failures. To physically survive, autonomous agents have to manage the complexity and dynamics of the real world for tasks ranging from perception to decision-making, for which human supervision will never be enough, not to mention the scarcity of data in some situations.

My research aims to develop learning algorithms that minimally rely on human supervision for robotic sensing and visual representations, so that neural networks can exploit out-of-domain data streams and generalize to them. In this talk, I will first present an information-theoretic principle that detects and segments objects in real scenes. By exploiting the inductive bias from data, the method operates under no human supervision and can seamlessly incorporate multi-modal signals. It also enables continuous learning of object representations from interaction for compositional scene understanding. I will then present techniques that maximally utilize existing datasets by transferring annotations to unlabeled domains, each of which tackles a unique piece of the generalization problem.

Biography

Yanchao Yang is a Postdoctoral Research Fellow in the Geometric Computation Group at Stanford University, working with Professor Leonidas J. Guibas. He received his Ph.D. in Computer Science from the University of California, Los Angeles (UCLA), working with Professor Stefano Soatto. He researches at the intersection of computer vision, machine learning, and robotics, with a long-term interest in developmental robotics for embodied agents. He currently focuses on self-supervised and semi-supervised techniques that allow autonomous agents to learn perception and representation for physical interactions in open environments. He is a recipient of the Dean's Award for Academic Excellence, and his work has won the AWS Nominated Paper Award. More information can be found at: https://yanchaoyang.github.io/.
 

CS&E Colloquium: Robust and Generalized Perception Towards Mainstreaming Domestic Robots

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m.

This week's speaker, Karthik Desingh (University of Washington), will be giving a talk titled "Robust and Generalized Perception Towards Mainstreaming Domestic Robots".

Abstract

My long-term goal is to build general-purpose robots that can care for and assist the aging and disabled population by autonomously performing various real-world tasks. To robustly execute a variety of tasks, a general-purpose robot should be capable of seamlessly perceiving and manipulating a wide variety of objects present in our environment. To achieve a given task, a robot should continually perceive the state of its environment, reason about the task at hand, and plan and execute appropriate actions. In this pipeline, perception is largely unsolved and one of the more challenging problems. Common indoor environments typically pose two main problems: 1) inherent occlusions leading to unreliable observations of objects, and 2) the presence and involvement of a wide range of objects with varying physical and visual attributes (i.e., rigid, articulated, deformable, granular, transparent, etc.). Thus, we need algorithms that can accommodate perceptual uncertainty in state estimation and generalize to a wide range of objects.

In my research, I develop 1) probabilistic inference methods to estimate the world state with a notion of uncertainty, and 2) data-driven methods to learn object representations that generalize state estimation to a wide range of objects. This talk will highlight some of my research efforts in these two thrusts. In the first part of the talk, I will describe an efficient belief propagation algorithm - Pull Message Passing for Nonparametric Belief Propagation (PMPNBP) - for estimating the state of articulated objects using a factored approach. In the second part of the talk, I will describe my most recent work - Spatial Object-centric Representation Network (SORNet) - for learning object-centric representations grounded for sequential manipulation tasks. I will also discuss open research problems in both of these thrusts toward realizing general-purpose domestic robots.
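For intuition about the message-passing machinery underlying belief propagation, here is a toy discrete sum-product example on a three-node chain A - B - C. PMPNBP itself passes nonparametric, particle-based messages over continuous articulated-object states, which this sketch does not attempt; the potentials and state space here are invented for illustration.

```python
# Toy discrete sum-product belief propagation on the chain A - B - C.
# Each node has two states; the pairwise potential favors neighbors agreeing.

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

pairwise = [[0.9, 0.1], [0.1, 0.9]]           # potential between adjacent nodes
unary = {"A": [0.7, 0.3], "B": [0.5, 0.5], "C": [0.2, 0.8]}  # local evidence

def message(from_unary, incoming):
    """Sum-product message along one edge: marginalize the sender's states."""
    pre = [from_unary[s] * incoming[s] for s in range(2)]
    return normalize([sum(pairwise[s][t] * pre[s] for s in range(2))
                      for t in range(2)])

# Leaves send messages inward; B fuses them with its own evidence.
m_a_to_b = message(unary["A"], [1.0, 1.0])
m_c_to_b = message(unary["C"], [1.0, 1.0])
belief_b = normalize([unary["B"][s] * m_a_to_b[s] * m_c_to_b[s]
                      for s in range(2)])
```

In the nonparametric setting each message is a set of weighted samples rather than a small vector, and the "pull" strategy in PMPNBP is a way of constructing those sample-based messages efficiently.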

Biography

Karthik Desingh works as a Postdoctoral Scholar at the University of Washington (UW) with Professor Dieter Fox. Before joining UW, he received his Ph.D. in Computer Science and Engineering from the University of Michigan, working with Professor Chad Jenkins. During his Ph.D., he was closely associated with the Robotics Institute and Michigan AI. He earned his B.E. in Electronics and Communication Engineering at Osmania University, India, and M.S. in Computer Science at IIIT-Hyderabad and Brown University. He researches at the intersection of robotics, computer vision, and machine learning, primarily focusing on providing perceptual capabilities to robots using deep learning and probabilistic techniques to perform tasks in unstructured environments. His work has been recognized with the best workshop paper award at RSS 2019 and nominated as a finalist for the best systems paper award at CoRL 2021.