The Star Geometry of Regularizer Learning
Data Science Seminar
Oscar Leong
UCLA
Abstract
Across many tasks in data science, it is necessary to estimate a signal from corrupted measurements. Perhaps the most pervasive and commonly used technique for such problems is variational regularization: one solves an optimization problem that minimizes the sum of a data fidelity term and a regularizer, a penalty term chosen to encourage certain structure in solutions. While there is a suite of regularizers one could choose from, we currently lack a systematic understanding, from a modeling perspective, of what types of geometries should be preferred in a regularizer for a given data source. In particular, given a data distribution, what is the "optimal" regularizer for such data? Moreover, which aspects of the data govern whether the regularizer enjoys certain properties, such as convexity? Using ideas from star geometry, Brunn–Minkowski theory, and variational analysis, I will show that we can characterize the optimal regularizer for a given distribution and establish conditions under which this optimal regularizer is convex. Moreover, I will discuss how our theory can be applied to recent deep learning-based regularization learning frameworks that incorporate additional measurement information into regularizers, which are especially useful in the context of inverse problems.
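The variational regularization setup described above can be sketched concretely. The following is a minimal illustration, not part of the talk: it assumes a squared-error data fidelity term and the classical Tikhonov regularizer R(x) = ||x||², one specific choice from the suite of regularizers the abstract mentions, which happens to admit a closed-form minimizer.

```python
import numpy as np

# Variational regularization sketch: estimate a signal x from corrupted
# measurements y = A x + noise by minimizing
#     (1/2) * ||A x - y||^2  +  lam * R(x),
# i.e., a data fidelity term plus a regularizer (penalty) term.
# Assumed example choice: Tikhonov regularizer R(x) = ||x||^2, giving the
# closed-form minimizer x* = (A^T A + 2*lam*I)^{-1} A^T y.

rng = np.random.default_rng(0)
m, n = 50, 20
A = rng.standard_normal((m, n))       # measurement operator
x_true = rng.standard_normal(n)       # ground-truth signal
y = A @ x_true + 0.01 * rng.standard_normal(m)  # corrupted measurements

lam = 0.1  # weight balancing data fidelity against the penalty
x_hat = np.linalg.solve(A.T @ A + 2 * lam * np.eye(n), A.T @ y)

def objective(x):
    """Data fidelity plus regularizer, as in the variational formulation."""
    return 0.5 * np.linalg.norm(A @ x - y) ** 2 + lam * np.linalg.norm(x) ** 2

print(objective(x_hat), objective(x_true))
```

Because this particular regularizer is convex (a property the talk characterizes for optimal regularizers in general), the problem has a unique global minimizer reachable in closed form; learned regularizers typically require iterative solvers instead.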