Trading off accuracy for reduced computation in scientific computing

Data Science Seminar

Alex Gittens (Rensselaer Polytechnic Institute)

Abstract

Classical linear algebraic algorithms guarantee high accuracy in exchange for high computational cost. These costs can be infeasible in modern applications, so over the last two decades randomized algorithms have been developed that allow a user-specified trade-off between accuracy and computational efficiency when dealing with massive data sets. The intuition is that when working with an excess of structured data (e.g., a large matrix of low numerical rank), one can discard a large portion of that data, thereby reducing the computational load, without introducing much additional error into the computation. In this talk we look at the design and performance analysis of several numerical linear algebra and machine learning algorithms built on this principle, including linear solvers, approximate kernel machines, and low-rank tensor decompositions.
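
For readers unfamiliar with this line of work, the sketch below illustrates the principle described in the abstract using a standard randomized range-finder (randomized SVD) construction. It is not taken from the talk itself; the function name `randomized_low_rank`, the synthetic test matrix, and the sketch-size parameters `k` and `p` are illustrative choices. A Gaussian sketch compresses the matrix before it is factored, and enlarging the sketch trades extra computation for lower approximation error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a 1000 x 1000 test matrix with rapidly decaying singular values,
# i.e. a matrix whose numerical rank is far smaller than its dimensions.
n = 1000
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n, dtype=float)   # singular values 1, 1e-1, 1e-2, ...
A = (U * s) @ V.T

def randomized_low_rank(A, k, p=10):
    """Rank-k approximation of A from a Gaussian sketch with p extra samples.

    A larger sketch size k + p costs more arithmetic but captures the range
    of A more accurately; a smaller one is cheaper but less accurate.
    """
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Y = A @ Omega                             # sketch: m x (k+p) instead of m x n
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis for the sketched range
    B = Q.T @ A                               # small (k+p) x n matrix
    Ub, sb, Vbt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub[:, :k]) * sb[:k] @ Vbt[:k]

# The accuracy/cost trade-off: growing the target rank k (and with it the
# sketch size) reduces the approximation error at the price of more work.
for k in (2, 5, 10, 20):
    A_k = randomized_low_rank(A, k)
    err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
    print(f"rank {k:2d}: relative Frobenius error = {err:.2e}")
```

Because only the small sketched matrices are factored, the dominant cost scales with the sketch size rather than with the full dimensions of the data, which is the trade-off the abstract refers to.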

Start date
Tuesday, Oct. 24, 2023, 1:25 p.m.
End date
Tuesday, Oct. 24, 2023, 2:25 p.m.
Location
Lind Hall 325 or via Zoom

Zoom registration