ISyE Graduate Seminar: Calibration of Robust Empirical Optimization Problems

Classes are back in session at the University of Minnesota, and the Department of Industrial and Systems Engineering seminars are back, too! Again this year we will have two seminar tracks—one more research-focused, the other more analytics-focused. All are welcome at any seminar.

Our first seminar of the fall semester will be research-focused, featuring Professor Andrew Lim from the National University of Singapore, who will discuss the calibration of robust empirical optimization problems.

Livestreaming: This year we will again coordinate with the Institute for Mathematics and its Applications to livestream our seminars on the IMA YouTube Channel. Attend in person or watch the livestream.

“Calibration of Robust Empirical Optimization Problems”

3:15 p.m. - Refreshments

3:30 p.m. - Graduate Seminar

Professor Andrew Lim, National University of Singapore

About the seminar

Lim will discuss recent results on the out-of-sample properties of robust empirical optimization and develop a theory for data-driven calibration of the “robustness parameter” for worst-case maximization problems with concave reward functions. Building on the intuition that robust optimization reduces the sensitivity to model misspecification by controlling the spread of the reward distribution, Lim will show that the first-order benefit of a “little bit of robustness” is a significant reduction in the variance of the out-of-sample reward while the corresponding impact on the mean is almost an order of magnitude smaller. One implication is that a substantial reduction in the variance of the out-of-sample reward (i.e., sensitivity of the expected reward to model misspecification) is possible at little cost if the robustness parameter is properly calibrated.
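
For readers new to this setup, one common way to write such a worst-case problem is sketched below in LaTeX. The notation is an assumption on our part (the empirical distribution, a KL-divergence penalty, and the robustness parameter delta); the exact formulation used in the talk may differ.

```latex
% Illustrative sketch only: \hat{P}_n (empirical distribution), the KL
% penalty, and \delta (robustness parameter) are assumed notation, not
% necessarily the formulation used in the talk.
\max_{x} \; \min_{Q \ll \hat{P}_n}
  \left\{ \mathbb{E}_{Q}\!\left[ r(x, Y) \right]
        + \frac{1}{\delta}\, D_{\mathrm{KL}}\!\left( Q \,\Vert\, \hat{P}_n \right) \right\}

% For small \delta, the inner minimization behaves like a mean minus a
% variance penalty, which matches the "spread of the reward distribution"
% intuition above:
\mathbb{E}_{\hat{P}_n}\!\left[ r(x, Y) \right]
  - \frac{\delta}{2}\, \mathrm{Var}_{\hat{P}_n}\!\left[ r(x, Y) \right]
  + O(\delta^{2})
```

In this penalized form, as delta tends to zero the penalty forces the worst-case distribution back to the empirical one and the problem reduces to ordinary sample-average optimization, so delta directly indexes how much robustness is being purchased.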

To this end, Lim will introduce the notion of a robust mean-variance frontier to select the robustness parameter and show that it can be approximated using resampling methods like the bootstrap. Examples show that robust solutions resulting from “open loop” calibration methods (e.g., selecting a 90 percent confidence level regardless of the data and objective function) can be very conservative out-of-sample, while selecting an ambiguity parameter that optimizes an estimate of the out-of-sample expected reward (e.g., via the bootstrap) with no regard for the variance is often insufficiently robust. Lim will also explain why the out-of-sample expected reward generated by the solution of a worst-case problem can sometimes exceed that of a sample-average optimizer.
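
As a rough illustration of the bootstrap idea, the Python sketch below calibrates a robustness parameter by refitting a toy KL-penalized robust problem on resampled data and recording the mean and variance of the achieved reward across resamples. Everything here—the reward function, the data model, the grid of delta values, and all names—is an assumed toy setup for illustration, not Lim's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(x, y):
    """Toy concave reward in the decision x (illustrative choice only)."""
    return -(x - y) ** 2

def robust_objective(x, sample, delta):
    """KL-penalized worst-case objective:
    -(1/delta) * log E_hat[exp(-delta * r)].
    As delta -> 0 this recovers the sample-average reward."""
    r = reward(x, sample)
    if delta == 0.0:
        return r.mean()
    return -np.log(np.mean(np.exp(-delta * r))) / delta

def robust_solution(sample, delta, grid):
    """Maximize the robust objective by brute-force grid search over x."""
    vals = [robust_objective(x, sample, delta) for x in grid]
    return grid[int(np.argmax(vals))]

# Toy data and settings (all assumed for illustration).
data = rng.lognormal(mean=0.0, sigma=0.5, size=200)
x_grid = np.linspace(0.0, 4.0, 121)
deltas = [0.0, 0.1, 0.5, 1.0, 2.0]
B = 100  # number of bootstrap resamples

# Bootstrap approximation of a robust mean-variance frontier: for each
# delta, refit on resamples and record the spread of the reward achieved
# on the full sample (a crude stand-in for out-of-sample reward).
for delta in deltas:
    achieved = []
    for _ in range(B):
        boot = rng.choice(data, size=data.size, replace=True)
        x_star = robust_solution(boot, delta, x_grid)
        achieved.append(reward(x_star, data).mean())
    achieved = np.asarray(achieved)
    print(f"delta={delta:4.1f}  mean={achieved.mean():8.4f}  "
          f"var={achieved.var():10.6f}")
```

The printed mean-variance pairs trace out a crude version of the frontier described above: one would choose the delta that buys an acceptable variance reduction at an acceptable cost in mean, rather than fixing a confidence level in advance or maximizing the estimated mean alone.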

About the speaker

Andrew Lim is a professor in the Department of Analytics and Operations and the Department of Finance at the National University of Singapore. Prior to that, he was a faculty member in the Department of Industrial Engineering and Operations Research at the University of California, Berkeley. He is a past recipient of a National Science Foundation CAREER Award and has served on the editorial boards of a number of journals, including Operations Research, Management Science, and the IEEE Transactions on Automatic Control. He holds a Ph.D. from the Australian National University. His research interests are in the areas of stochastic control and optimization, decision making under uncertainty, robust optimization, and financial engineering.

Start date
Wednesday, Sept. 4, 2019, 3:30 p.m.
Location
Lind Hall, Room 305
