Professor Di Wu at ECE Spring 2024 Colloquium

Salvage hardware efficiency via unary computing in the deep learning era

Over the last decade, deep learning has become indispensable across many aspects of modern life. The resource-intensive nature of deep neural networks, especially their core operation, general matrix multiplication (GEMM), has prompted extensive optimization efforts on conventional hardware, aiming to democratize the transformative capabilities of deep learning technology. However, conventional hardware based on binary computing does not offer optimal hardware efficiency. To reach unprecedented levels of hardware efficiency and enable new applications, my research leverages unconventional computing paradigms, including unary, neuromorphic, and approximate computing, to design next-generation computer architecture. In this talk, I will focus on how unary computing uses extremely simple hardware to manipulate unary bitstreams, and how it improves hardware efficiency both for brain-computer interfaces on edge devices and for general deep learning in data centers.
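
For readers unfamiliar with the paradigm, the sketch below illustrates the basic idea behind unary (rate-coded) bitstream arithmetic that the abstract alludes to: a value in [0, 1] is represented by the fraction of 1s in a bitstream, and multiplying two independent streams reduces to a bitwise AND, i.e., a single gate per bit. This is a generic, assumed illustration of the paradigm, not a description of Prof. Wu's specific designs; the function names are hypothetical.

```python
# Minimal sketch of rate-coded unary (stochastic) multiplication.
# Assumption: unipolar encoding, where a value x in [0, 1] is the
# probability that any given bit in the stream is 1.
import random

def encode_unary(value: float, length: int) -> list[int]:
    """Encode a value in [0, 1] as a random unary bitstream of given length."""
    return [1 if random.random() < value else 0 for _ in range(length)]

def decode_unary(bitstream: list[int]) -> float:
    """Decode a unary bitstream back to a value: the fraction of 1s."""
    return sum(bitstream) / len(bitstream)

def unary_multiply(a: list[int], b: list[int]) -> list[int]:
    """Multiply two independent unary bitstreams with a bitwise AND per bit."""
    return [x & y for x, y in zip(a, b)]

if __name__ == "__main__":
    length = 4096  # longer streams reduce the estimation error
    a, b = 0.75, 0.50
    product = decode_unary(
        unary_multiply(encode_unary(a, length), encode_unary(b, length))
    )
    print(f"exact = {a * b:.4f}, unary estimate = {product:.4f}")
```

The hardware appeal is that the per-bit multiplier is a single AND gate rather than a full binary multiplier, at the cost of longer bitstreams and approximate results.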

Start date
Thursday, March 21, 2024, 4 p.m.
End date
Thursday, March 21, 2024, 5 p.m.
Location