Trading off accuracy for reduced computation in scientific computing
Data Science Seminar
Alex Gittens (Rensselaer Polytechnic Institute)
Abstract
Classical linear algebraic algorithms guarantee high accuracy in exchange for high computational cost. These costs can be infeasible in modern applications, so over the last two decades randomized algorithms have been developed that allow a user-specified trade-off between accuracy and computational efficiency on massive data sets. The intuition is that, given an excess of structured data (e.g., a large matrix with low numerical rank), one can discard a large portion of that data, thereby reducing the computational load, without introducing much additional error into the computation. In this talk we look at the design and performance analysis of several numerical linear algebra and machine learning algorithms based on this principle, including linear solvers, approximate kernel machines, and low-rank tensor decomposition.
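
The "discard most of the data" intuition can be made concrete with a short sketch. The following is a minimal illustration of one such randomized method (a sketch-based low-rank SVD in the spirit of Halko, Martinsson, and Tropp), written in Python/NumPy; it is not the speaker's implementation, and the function name and parameters are purely illustrative:

    import numpy as np

    def randomized_svd(A, rank, oversample=10, seed=0):
        """Approximate rank-`rank` SVD of A via Gaussian sketching.

        Multiplying A by a thin random matrix compresses its columns;
        if A has low numerical rank, the small sketch captures its range,
        so the expensive SVD is performed on a much smaller matrix.
        """
        rng = np.random.default_rng(seed)
        m, n = A.shape
        k = rank + oversample               # a little slack improves accuracy

        # Sketch: project the columns of A onto a random k-dimensional subspace.
        Omega = rng.standard_normal((n, k))
        Y = A @ Omega                       # m x k, cheap relative to a full SVD

        # Orthonormal basis for the (approximate) range of A.
        Q, _ = np.linalg.qr(Y)

        # Solve the small problem: SVD of the k x n matrix Q^T A.
        B = Q.T @ A
        U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
        U = Q @ U_small
        return U[:, :rank], s[:rank], Vt[:rank, :]

    # Example: a 2000 x 2000 matrix of numerical rank 50. The user dials the
    # accuracy/efficiency trade-off through `rank` and `oversample`.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 2000))
    U, s, Vt = randomized_svd(A, rank=50)
    print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # small relative error

Here larger sketches (bigger `rank` or `oversample`) cost more but reduce the approximation error, which is exactly the user-specified trade-off the abstract describes.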