Past events

ML Seminar: Scientific Innovations in the Age of Generative AI


CS&E Colloquium: Digital Safety and Security for Survivors of Technology-Mediated Harms

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Emily Tseng (Cornell Tech), will be giving a talk titled "Digital Safety and Security for Survivors of Technology-Mediated Harms".

Abstract

Platforms, devices, and algorithms are increasingly weaponized to control and harass the most vulnerable among us. Some of these harms occur at the individual and interpersonal level: for example, abusers in intimate partner violence (IPV) use smartphones and social media to surveil and stalk their victims. Others are more subtle, at the level of social structure: for example, in organizations, workplace technologies can inadvertently scaffold exploitative labor practices. This talk will discuss my research (1) investigating these harms via online measurement studies, (2) building interventions to directly assist survivors with their security and privacy, and (3) instrumenting these interventions as observatories to enable scientific research into new types of harms as attackers and technologies evolve. I will close by sharing my vision for centering inclusion and equity in digital safety, security, and privacy, towards brighter technological futures for us all.

Biography

Emily Tseng is a PhD candidate in Information Science at Cornell University. Her research develops the systems, interventions, and design principles we need to make digital technology safe and affirming for everyone. Emily’s work has been published at top-tier venues in human-computer interaction (ACM CHI, CSCW) and computer security and privacy (USENIX Security, IEEE Oakland). For 5 years, she has worked as a researcher-practitioner with the Clinic to End Tech Abuse, where her work has enabled specialized security services for over 500 survivors of intimate partner violence (IPV). Emily is the recipient of a Microsoft Research PhD Fellowship, Rising Stars in EECS, Best Paper Awards at CHI, CSCW, and USENIX Security, and third place in the Internet Defense Prize. She has interned at Google and with the Social Media Collective at Microsoft Research. She holds a Bachelor’s from Princeton University.

CS&E Colloquium: Toward Practical Quantum Computing Systems with Intelligent Cross-Stack Co-Design

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Hanrui Wang (MIT), will be giving a talk titled "Toward Practical Quantum Computing Systems with Intelligent Cross-Stack Co-Design".

Abstract

Quantum Computing (QC) has the potential to solve classically hard problems with greater speed and efficiency, and we have witnessed exciting advancements in QC in recent years. However, there remain substantial gaps between application requirements and the available devices in terms of reliability, software framework support, and efficiency. To close these gaps and fully unleash quantum power, it is critical to perform AI-enhanced co-design across the technology stack, from algorithm and program design to compilation and hardware architecture.

In this talk, I will provide an overview of my contributions to architecture and system support for quantum computing. At the algorithm and program level, I will introduce QuantumNAS, a framework for quantum program structure (ansatz) design for variational quantum algorithms. QuantumNAS adopts an intelligent search engine and utilizes the noisy feedback from quantum devices to search for program structures and qubit mappings tailored to specific hardware, leading to notable resource reductions and reliability enhancements. Then, at the compilation and control level, I will discuss Q-Pilot, a compilation framework for the Field-Programmable Qubit Array (FPQA) implemented by emerging reconfigurable atom arrays. This framework leverages movable atoms to route two-qubit (2Q) gates and generates atom movements and gate schedules with high scalability and parallelism. On the hardware architecture and design automation front, I will present SpAtten, an algorithm-architecture-circuit co-design aimed at Transformer-based quantum error correction decoding. SpAtten supports on-the-fly error pattern pruning to eliminate less critical inputs and boost efficiency. Finally, I will conclude with an overview of my ongoing work and my research vision toward building software and hardware support for practical quantum advantage.

Biography

Hanrui Wang is a Ph.D. candidate at MIT EECS, advised by Prof. Song Han. His research focuses on architecture- and system-level support for quantum computing, and AI for quantum. His work appears in conferences such as MICRO, HPCA, QCE, DAC, ICCAD, and NeurIPS and has been recognized by the QCE 2023 Best Paper Award, the ICML RL4RL 2019 Best Paper Award, the ACM Student Research Competition 1st Place Award, a Best Poster Award at the NSF AI Institute, a Best Demo Award at the DAC University Demo, MLCommons Rising Star in ML and Systems, and ISSCC 2024 Rising Star. His work is supported by the Qualcomm Innovation Fellowship, the Baidu Fellowship, and the Unitary Fund. He is the creator of the TorchQuantum library, which has been adopted by the IBM Qiskit Ecosystem and the PyTorch Ecosystem and has 1.2K+ stars on GitHub. He is passionate about teaching and has served as a course developer and co-instructor for a new course on efficient ML and quantum computing at MIT. He is also the co-founder of the QuCS ("Quantum Computer Systems") forum for quantum education.

ML Seminar: Vertical Reasoning Enhanced Learning, Generation and Scientific Discovery


CS&E Colloquium: Taming the Beast: Practical Theories for Responsible Learning

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Zhun Deng (Columbia University), will be giving a talk titled "Taming the Beast: Practical Theories for Responsible Learning".

Abstract

Modern digital systems powered by machine learning have permeated various aspects of society, playing an instrumental role in many high-stakes areas such as medical care and finance. Therefore, it is crucial to ensure that machine learning algorithms are deployed in a “responsible” way so that digital systems are more reliable, explainable, and aligned with societal values. In this talk, I will introduce my research on building practical theories to guide the real-world responsible deployment of machine learning. First, I will introduce our recent work on distribution-free uncertainty quantification for a rich class of statistical functionals of quantile functions, which helps avoid catastrophic outcomes and unfair discrimination when deploying black-box models. The power of our framework is demonstrated through applications to large language models and medical care. Second, I will describe an extension of this framework to group-based fairness notions, so as to protect every group that can be meaningfully identified from the data. I will conclude with future directions.
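For readers unfamiliar with distribution-free guarantees, the toy sketch below shows one classical building block in this space: an order-statistic upper confidence bound on a quantile that holds for any data distribution. It is included purely as illustration and is not the framework presented in the talk; the error samples and the quantile_upper_bound helper are hypothetical.

    # Illustrative only: a distribution-free upper confidence bound on a
    # quantile, built from order statistics. For i.i.d. samples, the k-th
    # order statistic exceeds the true q-quantile with probability at least
    # P(Binomial(n, q) <= k - 1), so we pick the smallest such k.
    import numpy as np
    from scipy.stats import binom

    def quantile_upper_bound(samples, q, delta):
        """Upper bound on the q-quantile holding with probability >= 1 - delta."""
        n = len(samples)
        sorted_samples = np.sort(samples)
        for k in range(1, n + 1):
            if binom.cdf(k - 1, n, q) >= 1 - delta:
                return sorted_samples[k - 1]
        raise ValueError("too few samples for the requested (q, delta)")

    # Example: bound the 90th percentile of a black-box model's error scores.
    rng = np.random.default_rng(0)
    errors = rng.exponential(scale=1.0, size=500)   # placeholder "loss" samples
    print(quantile_upper_bound(errors, q=0.9, delta=0.05))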

Biography

Zhun Deng is a postdoctoral researcher in Computer Science at Columbia University, working with Toniann Pitassi and Richard Zemel. Previously, he completed his Ph.D. in the Theory of Computation Group at Harvard University, where he was advised by Cynthia Dwork. His research investigates both the theoretical foundations and applications of reliable and responsible machine learning. His papers have won multiple honors at flagship machine learning conferences. His research has also received funding from Microsoft's Accelerating Foundation Models Research Program.

CS&E Colloquium: Intelligent Software in the Era of Deep Learning

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Yuke Wang (University of California, Santa Barbara), will be giving a talk titled "Intelligent Software in the Era of Deep Learning".

Abstract

With the end of Moore's Law and the rise of compute- and data-intensive deep-learning (DL) applications, the focus has shifted from arduous new processor design toward a more effective and agile approach: intelligent software that maximizes the performance gains of DL hardware such as GPUs.

In this talk, I will first highlight the importance of software innovation to bridge the gap between increasingly diverse DL applications and existing powerful DL hardware platforms. The second part of my talk will recap my research on DL system software innovation, focusing on bridging 1) the Precision Mismatch between DL applications and high-performance GPU units like Tensor Cores (PPoPP '21 and SC '21), and 2) the Computing Pattern Mismatch between sparse and irregular DL applications, such as Graph Neural Networks, and the dense, regular computing paradigm that GPUs are tailored for (OSDI '21 and OSDI '23). Finally, I will conclude this talk with my vision and future work for building efficient, scalable, and secure DL systems.
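As a purely illustrative example of why the precision mismatch is bridgeable at all, the short sketch below compares a matrix multiply run in bfloat16 (the kind of reduced-precision format Tensor Cores accelerate) against a float32 reference; the sizes are arbitrary and this is not code from the papers cited above.

    # Illustrative only: measure how much accuracy a reduced-precision
    # matrix multiply loses relative to a full float32 reference.
    import torch

    torch.manual_seed(0)
    a = torch.randn(512, 512)
    b = torch.randn(512, 512)

    ref = a @ b                                              # float32 reference
    low = (a.to(torch.bfloat16) @ b.to(torch.bfloat16)).float()

    rel_err = (ref - low).norm() / ref.norm()
    print(f"relative error of bfloat16 matmul vs float32: {rel_err:.4f}")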

Biography

Yuke Wang is a final-year Ph.D. candidate in the Department of Computer Science at the University of California, Santa Barbara (UCSB). He received his Bachelor of Engineering (B.E.) in software engineering from the University of Electronic Science and Technology of China (UESTC) in 2018. At UCSB, Yuke works with Prof. Yufei Ding (now at UC San Diego, CSE). Yuke's research interests include systems and compilers for deep learning and GPU-based high-performance computing. His projects cover graph neural network (GNN) optimization and its acceleration on GPUs. Yuke's research has resulted in 20+ publications (with 10 first-authored papers) in top-tier conferences, including OSDI, ASPLOS, ISCA, USENIX ATC, PPoPP, and SC. Yuke's research outcomes have been adopted for further research in industry (e.g., NVIDIA, OctoML, and Alibaba) and academia (e.g., the University of Washington and Pacific Northwest National Laboratory). Yuke is also the recipient of the NVIDIA Graduate Fellowship 2022 (top 10 among global applicants) and has industry experience at Microsoft Research, NVIDIA Research, and Alibaba. The ultimate goal of Yuke's research is to facilitate efficient, scalable, and secure deep learning in the future. https://www.wang-yuke.com/

MSSE Information Session (Virtual)

Interested in learning more about the University of Minnesota's Master of Science in Software Engineering program?

Reserve a spot at an upcoming virtual information session to get all your questions answered.

Info sessions are recommended for those who have at least 1-2 years of software engineering experience.

During each session, MSSE staff will review:

  • Requirements (general)
  • Applying
  • Prerequisite requirements
  • What makes a strong applicant
  • Funding
  • Resources
  • Common questions
  • Questions from attendees

RSVP for the next information session now

ML Seminar: Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models


CS&E Colloquium: Lossy computation done right: Scalable and Accessible LLM Fine-Tuning and Serving

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Zirui Liu (Rice University), will be giving a talk titled "Lossy computation done right: Scalable and Accessible LLM Fine-Tuning and Serving".

Abstract

As model sizes have grown, large language models (LLMs) have exhibited human-like conversational ability. This advancement opens the door to a wave of new applications, such as custom AI agents. Building such applications involves two essential steps: fine-tuning and serving. Fine-tuning is the process of adapting the LLM to a specific task, such as understanding and responding to domain-specific inquiries. The second step, serving, is about generating useful outputs to users' questions in real time. However, both steps are hard and expensive due to the massive model scale, limiting their accessibility for most users.

Our key idea for overcoming this challenge is that LLMs are extremely robust to the noise introduced by lossy computation, such as low numerical precision and randomized computation like Dropout. Following this insight, we will discuss some recent results in fine-tuning and serving LLMs on much more accessible hardware. First, I will share my research on using randomized matrix multiplication to make fine-tuning both faster and more memory-efficient. Following that, I will show that extremely low-bit model and KV cache quantization can reduce the cost of LLM serving while maintaining performance. Finally, I will discuss my broader research vision around the LLM data-quality problem, on-device LLM deployment, and biomedical applications.
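To make the "lossy computation" intuition concrete, here is a toy sketch of per-channel low-bit quantization applied to a stand-in KV cache tensor: values are rounded to a small integer grid and dequantized on use, trading a little precision for a large memory saving. The tensor shape, bit width, and helper names are hypothetical, and this is not the speaker's actual method.

    # Toy per-channel symmetric 4-bit quantization of a cached tensor.
    import torch

    def quantize_4bit(x, dim=-1):
        """Map x to integer codes in [-7, 7] with one scale per channel."""
        max_abs = x.abs().amax(dim=dim, keepdim=True).clamp(min=1e-8)
        scale = max_abs / 7.0
        codes = torch.clamp(torch.round(x / scale), -7, 7).to(torch.int8)
        return codes, scale

    def dequantize(codes, scale):
        return codes.to(torch.float32) * scale

    torch.manual_seed(0)
    kv = torch.randn(32, 128)          # stand-in for one layer's KV cache slice
    codes, scale = quantize_4bit(kv)
    err = (kv - dequantize(codes, scale)).abs().mean()
    print(f"mean absolute reconstruction error: {err:.4f}")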

Biography

Zirui Liu is a final-year Ph.D. candidate in the Department of Computer Science at Rice University. His interests lie in the broad area of large-scale machine learning, particularly in algorithm-system co-design, randomized algorithms, and large-scale graph learning. He has published more than 20 papers in top venues such as ICLR, NeurIPS, ICML, and MLSys. His research has also been widely deployed in industrial applications such as the Meta recommendation system and the Samsung advertisement platform. Website: https://zirui-ray-liu.github.io/

CS&E Colloquium: Adaptive Experimental Design to Accelerate Scientific Discovery and Engineering Design

The computer science colloquium takes place on Mondays and Fridays from 11:15 a.m. - 12:15 p.m. This week's speaker, Aryan Deshwal (Washington State University), will be giving a talk titled "Adaptive Experimental Design to Accelerate Scientific Discovery and Engineering Design".

Abstract

A wide range of scientific discovery and engineering design problems, from materials discovery and drug design to 3D printing and chip design, can be formulated as the following general problem: adaptive optimization of complex design spaces guided by expensive experiments, where expense is measured in terms of the resources the experiments consume. For example, searching the space of materials for a desired property while minimizing the total resource cost of the physical lab experiments needed for their evaluation. The key challenge is how to select a sequence of experiments that uncovers high-quality solutions within a given resource budget.

In this talk, I will introduce novel adaptive experiment design algorithms to optimize combinatorial spaces (e.g., sequences and graphs). First, I will present a dictionary-based surrogate model for high-dimensional fixed-size structures. Second, I will discuss a surrogate modeling approach for varying-size structures that synergistically combines the strengths of deep generative models and domain knowledge in the form of expert-designed kernels. Third, I will describe a general output space entropy search framework to select experiments for the challenging real-world scenario of optimizing multiple conflicting objectives using multi-fidelity experiments that trade off resource cost and accuracy of evaluation. I will also present results from applying these algorithms to high-impact science and engineering applications in domains including nanoporous materials discovery, electronic design automation, additive manufacturing, and optimizing commercial Intel systems.
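For readers new to the area, the sketch below shows the generic adaptive experiment-design loop the abstract refers to: fit a surrogate model to the experiments run so far, score untried candidates with an acquisition function (expected improvement here), and spend the budget on the most promising one. The objective, candidate pool, and Gaussian-process surrogate are placeholders for illustration, not the speaker's algorithms.

    # A generic surrogate-guided experiment-design loop (illustrative only).
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive_experiment(x):
        # Hypothetical stand-in for a costly lab measurement or simulation.
        return -(x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

    rng = np.random.default_rng(0)
    candidates = np.linspace(0, 1, 200).reshape(-1, 1)   # design space
    X = rng.uniform(0, 1, size=(3, 1))                   # small initial design
    y = np.array([expensive_experiment(x[0]) for x in X])

    for _ in range(10):                                  # fixed experiment budget
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        best = y.max()
        z = (mu - best) / np.maximum(sigma, 1e-9)
        ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
        x_next = candidates[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, expensive_experiment(x_next[0]))

    print(f"best design found: x = {X[np.argmax(y)][0]:.3f}, value = {y.max():.4f}")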

Biography

Aryan Deshwal is a final-year PhD candidate in CS at Washington State University. His research agenda is AI to Accelerate Scientific Discovery and Engineering Design, where he focuses on advancing the foundations of AI/ML to solve challenging real-world problems with high societal impact in collaboration with domain experts. He was selected for Rising Stars in AI by the KAUST AI Initiative (2023) and for the Heidelberg Laureate Forum (2022). He won the College of Engineering Outstanding Dissertation Award (2023), the Outstanding Research Assistant Award (2022), and the Outstanding Teaching Assistant in CS Award (2020) from WSU. He received outstanding reviewer awards from the ICML (2020), ICLR (2021), and ICML (2021) conferences.