ISyE Graduate Seminar Series: The Power of Two Samples in Generative Adversarial Networks

Please join us for our next seminar of the fall semester, featuring Professor Sewoong Oh from the University of Washington, who will discuss the power of two samples in generative adversarial networks.

Livestreaming: Again this year we are coordinating with the Institute for Mathematics and its Applications to livestream our seminars on the IMA YouTube Channel. Attend in person or watch the livestream.

3:15 p.m. - Refreshments

3:30 p.m. - Graduate Seminar

About the seminar

Professor Sewoong Oh’s team brings the tools from Blackwell’s seminal 1953 result on comparing two stochastic experiments to shine new light on a modern application of great interest: Generative Adversarial Networks (GANs). Binary hypothesis testing is at the center of training GANs: a trained neural network called a critic determines whether a given sample comes from the real data or the generated (fake) data. By jointly training the generator and the critic, the hope is that the trained generator will eventually create realistic samples.
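To make the hypothesis-testing view concrete, here is a minimal sketch of one standard GAN training step in PyTorch. Everything in it (the toy dimensions, the small MLP networks, the Gaussian batch standing in for real data) is illustrative rather than the setup used in the talk: the critic is trained as a binary classifier between real (label 1) and fake (label 0), and the generator is trained to make that test fail.

    import torch
    import torch.nn as nn

    # Illustrative toy setup: 2-D data, small MLP generator and critic.
    latent_dim, data_dim, batch = 8, 2, 64
    generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    critic = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))  # outputs a logit

    bce = nn.BCEWithLogitsLoss()
    real = torch.randn(batch, data_dim)               # stand-in for a batch of real data
    fake = generator(torch.randn(batch, latent_dim))  # a batch of generated samples

    # Critic step: a binary hypothesis test between "real" and "fake".
    critic_loss = (bce(critic(real), torch.ones(batch, 1))
                   + bce(critic(fake.detach()), torch.zeros(batch, 1)))

    # Generator step: push the critic to label fakes as real.
    generator_loss = bce(critic(fake), torch.ones(batch, 1))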

One of the major challenges in GANs is "mode collapse," a lack of diversity in the samples produced by the trained generator. Oh proposes a new training framework in which the critic is fed multiple samples jointly (which he calls packing), as opposed to one sample at a time as in standard GAN training. With this simple but fundamental departure from standard GANs, experimental results show that the diversity of the generated samples improves significantly.
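The mechanical change that packing makes to the sketch above is small. In the hypothetical snippet below, the only difference from a standard critic is that each input to the critic is a concatenation of several samples drawn jointly from the same source (all real or all generated); the sizes and architectures are again placeholders.

    import torch
    import torch.nn as nn

    # Illustrative toy dimensions; `pack` samples are shown to the critic at once.
    latent_dim, data_dim, pack, batch = 8, 2, 4, 64
    generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    # Packed critic: its input is `pack` concatenated samples, so pack * data_dim wide.
    packed_critic = nn.Sequential(nn.Linear(pack * data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    bce = nn.BCEWithLogitsLoss()
    # One critic input = a pack of samples from a single source, flattened together.
    real_packed = torch.randn(batch, pack * data_dim)   # stand-in for packs of real samples
    fake = generator(torch.randn(batch * pack, latent_dim))
    fake_packed = fake.reshape(batch, pack * data_dim)  # packs of generated samples

    critic_loss = (bce(packed_critic(real_packed), torch.ones(batch, 1))
                   + bce(packed_critic(fake_packed.detach()), torch.zeros(batch, 1)))

Intuitively, a generator that keeps producing near-duplicates is far easier to catch when the critic sees several of its samples side by side, which is the effect the analysis described next quantifies.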

Oh and his colleagues analyze this practical gain by first giving a formal mathematical definition of mode collapse and then making a fundamental connection between the idea of packing and the intensity of mode collapse. Precisely, they show that the packed critic naturally penalizes mode collapse, thus encouraging generators with less mode collapse. The analyses critically rely on an operational interpretation of hypothesis testing and the corresponding data processing inequalities, which lead to sharp analyses with simple proofs. For this talk, Oh will assume no prior background on GANs.
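For readers who want a concrete anchor, one formalization in this spirit (paraphrasing the mode-collapse definition from the PacGAN paper by Lin, Khetan, Fanti, and Oh; see the paper for the precise statement) can be written as:

    A generator distribution $Q$ exhibits $(\varepsilon, \delta)$-mode collapse with
    respect to the target distribution $P$, for $0 \le \varepsilon < \delta \le 1$,
    if there exists a set $S$ of samples such that
    \[
      P(S) \ge \delta \quad \text{and} \quad Q(S) \le \varepsilon,
    \]
    i.e., a region that carries noticeable real-data mass but almost no generated mass.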

About the speaker

Sewoong Oh is an associate professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. He received his Ph.D. from the Department of Electrical Engineering at Stanford University. Following his Ph.D., he worked as a postdoctoral researcher at the Laboratory for Information and Decision Systems (LIDS) at the Massachusetts Institute of Technology.

His research interests are in theoretical machine learning, including generative adversarial networks, saddle-point problems, privacy, and blockchains. He co-received the Best Paper Award at SIGMETRICS in 2015, a National Science Foundation CAREER Award in 2016, and a Google Faculty Research Award.

Start date
Wednesday, Sept. 18, 2019, 3:30 p.m.
Location
Lind Hall, Room 305
