Events Listing

List of Upcoming Events

Machine learning in healthcare

Professor Yogatheesan Varatharajah at ECE Spring 2026 Colloquium

(details coming soon)

Quantum materials

Professor Ying Wang at ECE Spring 2026 Colloquium

(details coming soon)

Health informatics

Professor Mahdi Bayat at ECE Spring 2026 Colloquium

(details coming soon)

The Chemical Reaction Between AI and Data System Research

Professor Zhichao Cao at ECE Spring 2026 Colloquium

In this talk, Professor Zhichao Cao will explore the symbiotic relationship between AI and data systems, showing how Large Language Models (LLMs) both automate data systems and drive new data system designs. He will present two representative projects: (1) StorageXTuner, the first LLM agent–driven framework that automatically tunes performance for diverse data systems such as RocksDB, LevelDB, MySQL, and CacheLib, outperforming traditional tuning methods; (2) M2Cache, a system that makes LLM inference sustainable and accessible on outdated or low-end hardware through a co-design of dynamic mixed-precision inference and a predictive multi-level cache across HBM, DRAM, and SSDs, greatly improving efficiency and reducing carbon footprint. Finally, Cao will conclude with his vision for future research at the intersection of AI and data systems.

Magnetism

Professor Andrew Kent at ECE Spring 2026 Colloquium

(details coming soon)

Electronic/photonic I/O

Professor Samuel Palermo at ECE Spring 2026 Colloquium

(details coming soon)

Automatic control

Professor Maurizio Porfiri at ECE Spring 2026 Colloquium

(details coming soon)

Transistor scaling challenges and opportunities

Senior Process Integration Engineer Kriti Agarwal of Intel at ECE Spring 2026 Colloquium

(details coming soon)

Machine learning and generative AI

Professor Sanjay Shakkottai at ECE Spring 2026 Colloquium

(details coming soon)

List of Past Events

Robust Online Convex Optimization for Disturbance Rejection

Professor Peter Seiler at ECE Fall 2025 Colloquium

This talk will consider robust disturbance rejection in high-precision applications. We will start by motivating the work with one relevant problem: the control required for optical communication between satellites. We will then discuss the fundamental performance limits associated with linear time-invariant (LTI) control. Linear time-varying controllers, e.g., those that rely on online convex optimization, can potentially provide significant performance improvements. However, the ability to accurately adapt to the disturbance while maintaining closed-loop stability relies on having an accurate model of the plant. In fact, model uncertainty can cause the closed loop to become unstable. We provide a sufficient condition for robust stability based on the small gain theorem using the ℓ∞ norm. This condition is easily incorporated as an online constraint in controllers that rely on online convex optimization.
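
The abstract describes incorporating a stability constraint directly into an online convex optimization controller. As a rough illustration only (not Professor Seiler's method; the losses, step size, and ℓ∞ radius below are invented), projected online gradient descent keeps every iterate inside an ℓ∞ ball while tracking a drifting disturbance:

```python
import numpy as np

def project_linf_ball(x, radius):
    """Project x onto the l-infinity ball of the given radius (clip each coordinate)."""
    return np.clip(x, -radius, radius)

def online_gradient_descent(grad_fns, radius, eta, dim):
    """Projected online gradient descent: one gradient step per round,
    then projection back onto the constraint set."""
    theta = np.zeros(dim)
    history = []
    for grad in grad_fns:
        theta = project_linf_ball(theta - eta * grad(theta), radius)
        history.append(theta.copy())
    return history

# Toy disturbance-rejection losses: track a slowly drifting target d_t.
targets = [np.array([0.5, -0.3]) + 0.01 * t for t in range(50)]
grads = [lambda th, d=d: 2.0 * (th - d) for d in targets]  # gradient of ||th - d||^2
iterates = online_gradient_descent(grads, radius=1.0, eta=0.1, dim=2)

# The projection enforces the constraint at every round.
assert all(np.max(np.abs(th)) <= 1.0 for th in iterates)
```

The point of the sketch is only that a norm-ball constraint is cheap to enforce online; the talk's contribution is the small-gain condition that says which ball preserves closed-loop stability.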

Variational quantum computing for solving constrained optimization problems

Professor Vassilis Kekatos at ECE Fall 2025 Colloquium

Current quantum computers are limited in both qubit count and reliability. As quantum technology matures, variational quantum algorithms (VQAs) offer a promising transitional step. We propose a suite of VQAs to solve large-scale optimization problems. VQAs encode high-dimensional variables as states of parameterized quantum circuits (PQCs), whose parameters can be adjusted to control the related quantum observables. The PQC is used to measure functions and their gradients, while a classical computer updates the PQC parameters via standard optimization schemes. We target problems with many variables and constraints, such as optimal power flow for scheduling power grids. We will discuss how to: i) encode such problems into PQC states; ii) adjust their parameters using gradient-based schemes; iii) measure gradients as quantum observables; and iv) extend our methods to a quantum ML framework in which a PQC is trained to solve multiple instances of an optimization problem.
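
The hybrid loop described above can be sketched in miniature. In this toy (which stands in for, and greatly simplifies, the talk's methods), the "circuit" is a single qubit whose measured observable ⟨Z⟩ after an RY(θ) rotation is cos θ; the gradient is obtained from two extra circuit evaluations via the parameter-shift rule, and a classical gradient-descent loop updates θ:

```python
import numpy as np

def expectation(theta):
    """Stand-in for a quantum measurement: <Z> of the state RY(theta)|0>,
    which equals cos(theta) for this one-parameter circuit."""
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Estimate df/dtheta from two shifted circuit evaluations
    (the parameter-shift rule for rotation gates)."""
    return 0.5 * (f(theta + shift) - f(theta - shift))

# Classical optimizer loop: gradient descent on the measured cost.
theta = 0.3          # initial circuit parameter
eta = 0.4            # learning rate
for _ in range(100):
    theta -= eta * parameter_shift_grad(expectation, theta)

# The minimum of cos(theta) is -1, attained at theta = pi.
assert abs(expectation(theta) - (-1.0)) < 1e-3
```

On real hardware the expectation would come from repeated circuit measurements rather than a closed-form cosine, but the division of labor (quantum device measures values and gradients, classical computer updates parameters) is the same.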

Early Architecture Evaluation through Hardware Feasibility Studies

Subhash Sethumurugan at ECE Fall 2025 Colloquium

ARM architecture powers a wide range of computing platforms — from embedded systems to high-performance servers. With such broad adoption, any addition must be carefully considered. A new architectural feature isn’t just a design change; it becomes a long-term commitment affecting silicon, software stacks, and the ecosystem. This talk will focus on the challenges of evolving a mature architecture like ARM and the importance of early evaluation. Before ratifying a feature, we assess its impact on performance, complexity, power, area, and software. I’ll describe several evaluation strategies used at ARM — from qualitative reasoning and empirical estimation to performance modeling, RTL prototyping, and software prototyping. Using the PACMAN mitigation as a case study, I’ll show how we apply these methods to estimate performance impact and validate feasibility, even without full hardware or toolchain support.
 

The dual frontier between AI and the power grid

Professor Hao Zhu at ECE Fall 2025 Colloquium

AI/ML technologies are rapidly reshaping the paradigm of operating electric power grids. Meanwhile, the hyperscale and dynamic energy demands of AI datacenters pose significant challenges to grid reliability. In this talk, I will explore the dual frontier between AI and the power system, with a focus on grid dynamic modeling and analysis. First, to effectively apply AI tools to grid dynamics, it is crucial to consider not only computational/memory efficiency but also the unique characteristics of dynamic systems. To address this, we introduce TRASE-NODEs—Trajectory Sensitivity-aware Neural ODEs—which leverage the classical dynamic sensitivity concept to significantly improve data efficiency and control performance in neural dynamic models. Second, we examine the impact of large-scale AI datacenters on wide-area power system oscillations. By developing a stochastic model to represent sustained, periodic power fluctuations, our numerical studies reveal that factors such as datacenter sizing and geographic distribution can influence oscillation levels. This quantitative analysis highlights the need for developing mitigation strategies at both the grid and hardware levels to support the continued growth of AI-driven energy demand.

Advancing semiconductor and spintronic materials toward novel functional microelectronics and optoelectronics

Dr. Yuan Lu at ECE Fall 2025 Colloquium

The continuous development of advanced semiconductor and spintronic materials is driving a new era of multifunctional micro- and optoelectronic devices. In particular, III–V semiconductors provide a versatile platform for integrating high-quality epitaxial growth, efficient light emission, and strong spin–photon interactions, enabling revolutionary applications in information and communication technologies. Spin light-emitting diodes (spin-LEDs), which exploit the conversion of carrier spin polarization into photon circular polarization, are a prominent example of this progress. Recent breakthroughs in efficient spin injection [1], perpendicular magnetic anisotropy [2], and electrical magnetization control [3] now allow for high-speed modulation of light polarization without external magnetic fields, opening pathways for optical communication, three-dimensional displays, and quantum-inspired photonic technologies.

In parallel, we are pioneering organic and hybrid spintronic architectures that expand the functional space of semiconductor devices. A notable achievement is the demonstration of a giant tunneling magnetoresistance (TMR) of –266% in La₀.₆Sr₀.₄MnO₃/poly(vinylidene fluoride)/Co organic memristors [4]. More interestingly, voltage-driven fluorine motion in the junction produces a large, reversible resistivity change of up to 10% on a nanosecond timescale. Such multifunctional devices, combining high TMR with memristive behavior, pave the way toward neuromorphic computing and multi-state memory applications.

Optics, sensors and AI: Synergic computational imaging to go beyond the limits imposed by conventional imaging

Professor Ashok Veeraraghavan at ECE Fall 2025 Colloquium

In this talk, I will discuss several projects in my lab at the confluence of optics, sensors, and artificial intelligence. In particular, I will provide examples of how co-designing sensors, optics, and AI algorithms results in superior performance capabilities for imaging systems. I will highlight a few example projects: (1) how co-designing imaging optics with AI algorithms can enable high-throughput 3D imaging and microscopy; (2) how novel diffractive and meta-optical elements allow us to realize imaging systems with novel functionalities and form factors; and finally, time permitting, (3) how emerging neural representations combined with high-resolution spatial light modulators can allow us to image through thick scattering media without the need for guidestars.

I will use these projects to argue that we should consider the three computational blocks within an imaging system (optics, sensors, and algorithms) together, and that co-designing them can yield significant performance improvements over the state of the art.

Distributed computing using feedback control in the data center

Professor Sanjay Lall at ECE Fall 2025 Colloquium

The bittide system is a recent approach to distributed computing, designed to achieve synchronous execution at a large scale without the need for a global clock or traditional wall-clock synchronization. It aims to overcome the complexity and expense of maintaining precise wall-clock time in distributed systems, especially at datacenter scale. The underlying mechanism of bittide uses feedback control to regulate the frequency of the hardware oscillators driving both computation and communication, in such a way as to ensure that all nodes operate in syntony. This allows applications to treat time as purely logical, and to make use of deterministic scheduling and programming methodologies across the entire datacenter.  In this talk we present an overview of the bittide system, discussing how the system works, mathematical formulations of the system behavior, and the consequences of logical synchrony for applications.
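
The feedback mechanism described above can be illustrated with a deliberately simplified toy model (this is not the actual bittide control law; the ring topology, gains, and buffer model below are invented for illustration). Each node's effective frequency is its free-running oscillator frequency plus a correction proportional to how far its receive buffers have drifted from nominal occupancy; buffers fill at the sender's rate and drain at the receiver's, so occupancy errors measure frequency mismatch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]  # ring topology
base = 1.0 + 0.01 * rng.standard_normal(n)   # slightly detuned oscillators
beta = np.zeros((n, n))                      # occupancy offset of buffer j -> i
k, dt = 0.5, 0.1                             # feedback gain, time step

for _ in range(2000):
    # Each node corrects its frequency using its receive-buffer offsets.
    corr = np.array([k * sum(beta[i, j] for j in neighbors[i]) for i in range(n)])
    freq = base + corr
    # Buffer j -> i fills at node j's rate and drains at node i's rate.
    for i in range(n):
        for j in neighbors[i]:
            beta[i, j] += dt * (freq[j] - freq[i])

# Feedback drives all nodes to a common effective frequency (syntony).
assert np.ptp(freq) < 1e-6
```

The dynamics reduce to linear consensus on the effective frequencies, so the nodes converge to a shared rate without any node ever consulting a wall clock, which is the property that lets applications treat time as purely logical.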
 

Power Systems

Professor Vijay Vittal at ECE Fall 2025 Colloquium

More details will be posted soon.

Solid-State Nanopores for DNA Data Storage

Chunhui Dai (Staff Engineer at Samsung) at ECE Fall 2025 Colloquium

Human society is generating digital data at an unprecedented pace. While this data holds immense value for mining and analysis—especially with advances in AI and machine learning—the rising cost of long-term storage using existing technologies has created a pressing “save-or-discard” dilemma. DNA offers an ultra-dense, durable, and energy-efficient medium for long-term information storage. This talk will focus on solid-state nanopore technologies for DNA reading and sequencing. The first part will highlight atomically precise nanopore fabrication in two-dimensional (2D) materials, including material synthesis (Bernal-stacked hBN), single-atom defect engineering using electron beams, and DNA translocation detection. The second part will cover scalable, CMOS-compatible silicon nitride (SiNx) membranes and high-throughput dielectric breakdown processes for large-scale device integration. Together, these advances move us closer to practical DNA-based storage solutions, bridging nanoscale material engineering with the rapidly growing demands of next-generation data storage.

The Rise and Fall of Machine Learning for Computing and Optimization

Professor Cunxi Yu at ECE's Fall 2025 Colloquium

In recent years, Machine Learning (ML) has gained considerable momentum in electronic design automation (EDA). ML-driven methods and infrastructures have demonstrated a unique capability to capture the multitude of factors affecting estimation accuracy, effectively explore large algorithmic and design spaces in synthesis, and accelerate classical combinatorial optimization problems. In particular, synthesis and verification, two critical stages in EDA, have significantly benefited from ML over the past five years. However, the development of ML-driven synthesis and verification approaches has also revealed several recurring challenges, including practicality, system engineering, data availability, and determinism. In this talk, I will present the journey of ML in synthesis and verification, highlighting its evolution from static ML-based approaches to algorithmic learning and general combinatorial optimizations enabled by differentiable programming, ML infrastructures, and specialized hardware. I will also discuss the emerging role of large language models (LLMs) in combinatorial optimization.