Caiwen Ding Explores Energy-Efficient Computing with $700K NSF Grant
Department of Computer Science & Engineering Associate Professor Caiwen Ding is a principal investigator on a $700K grant from the National Science Foundation that aims to establish the foundation for compact, energy-efficient, and adaptive optical AI processors. The project, titled “Multi-Modal Sensing with Robust, Unified, and Scalable Diffractive Optical Neural Networks,” is a joint effort between the University of Minnesota and North Carolina State University (NC State).
“When we talk about how computers work and what it takes to run modern devices, we really have to think about AI workloads,” Ding said. “These tasks can take a long time to process and use a lot of energy. And as AI technology keeps advancing and getting more sophisticated, the amount of energy it needs only goes up. Our goal is to make this processing more efficient so we can reduce both the energy use and the time it takes.”
Conventional electronic computing systems face a critical bottleneck: the energy and time required to move data between processors and memory now exceed the costs of performing the calculations themselves. This imbalance has slowed progress in AI while contributing to rising energy demands. Ding’s project investigates a photonic computing paradigm—diffractive optical neural networks (DONNs)—which use light itself to perform information processing. Through a hardware–software co-design approach, the project is developing both the computing systems and the accompanying algorithms to enable tasks to be carried out at the speed of light while significantly reducing energy consumption.
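To give a flavor of how a DONN computes, the sketch below simulates one "layer" as light passing through a learned phase mask and then diffracting through free space (modeled with the standard angular spectrum method). This is a minimal illustration of the general principle, not the project's actual system; the mask values, dimensions, and physical parameters are all assumptions chosen for the toy example.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_layer(field, phase_mask, wavelength, dx, z):
    """One DONN layer: a phase mask (trainable in a real system) plus diffraction."""
    return angular_spectrum_propagate(field * np.exp(1j * phase_mask), wavelength, dx, z)

# Toy forward pass: uniform illumination through two random phase-mask layers.
rng = np.random.default_rng(0)
x = np.ones((64, 64), dtype=complex)            # input wavefront
for _ in range(2):
    mask = rng.uniform(0, 2 * np.pi, (64, 64))  # would be learned, not random
    x = diffractive_layer(x, mask, wavelength=532e-9, dx=1e-6, z=5e-3)
intensity = np.abs(x) ** 2                      # a detector reads out intensity
```

The key point the sketch conveys is that the "multiply and accumulate" work of a neural-network layer is done by physical diffraction as light travels, which is what makes the computation effectively free in time and energy once the masks are fabricated.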
“We have complementary expertise with the team at NC State,” Ding said. “We focus on shaping the AI workloads and preparing the algorithms for the DONN system. Once the algorithms are ready, we move into an interactive co-design process with the NC State team to map them onto the optical devices.”
This project aims to design, fabricate, and experimentally validate a new generation of metasurface-based DONNs that overcome key limitations of existing optical accelerators, focusing on three major advancements: multi-modality, scalability, and robustness to non-idealities. By tightly integrating algorithmic innovation, photonic device engineering, and experimental validation, this work will establish the foundations for compact, energy-efficient, and adaptive optical AI processors, offering a pathway toward practical deployment in embedded and edge-computing applications.
“First, we will focus on algorithm development, co-design, and feedback loop optimization,” Ding said. “Our first phase is developing a deep neural network framework for optical devices, so that we can process images, text, and graphs. From there we can fine-tune those neural networks. In the second stage, we can identify bottlenecks and address them, for example, by reducing nonlinear activations. Then we will work to make it scalable to real AI workloads using model compression. Finally, we will make sure the model can work on the specific device.”
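Two of the later steps Ding describes, model compression and mapping a trained model onto a physical device, can be sketched in a few lines. The snippet below illustrates magnitude pruning (a common compression technique) and quantizing continuous phase parameters to the discrete levels a fabricated surface can realize. The functions, the 16-level constraint, and the random "trained" parameters are all hypothetical illustrations, not the project's actual methods.

```python
import numpy as np

def prune_small_weights(w, keep_ratio):
    """Magnitude pruning: zero out all but the largest-magnitude parameters."""
    k = int(w.size * keep_ratio)
    threshold = np.sort(np.abs(w), axis=None)[-k]
    return np.where(np.abs(w) >= threshold, w, 0.0)

def quantize_phase(phase, levels):
    """Snap continuous phases to the discrete levels a device can realize
    (an assumed hardware constraint for illustration)."""
    step = 2 * np.pi / levels
    return (np.round(phase / step) * step) % (2 * np.pi)

# Sketch of the pipeline's tail end: compress, then map to device levels.
rng = np.random.default_rng(1)
phases = rng.uniform(0, 2 * np.pi, (8, 8))        # stand-in for trained parameters
compressed = prune_small_weights(phases, keep_ratio=0.5)
device_ready = quantize_phase(compressed, levels=16)
```

In practice a fine-tuning pass would follow each of these steps, retraining the remaining parameters so accuracy recovers from the pruning and quantization error, which mirrors the feedback loop Ding describes with the NC State team.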
By establishing this blueprint for photonic-based AI processors, the goal is to dramatically reduce energy consumption while sustaining high-performance AI computations. Applying this approach to mobile and edge devices could have a significant impact, enabling phones, sensors, and other portable systems to run AI workloads much longer. This vision reflects the broader potential of DONNs to deliver scalable, energy-efficient AI, from compact edge devices to more complex computing systems.