Past Events
Learning in Stochastic Games
Tuesday, April 11, 2023, 11:15 a.m. through Tuesday, April 11, 2023, 12:15 p.m.
Zoom
Data Science Seminar
Muhammed Omer Sayin (Bilkent University)
Abstract
Reinforcement learning (RL) has been the backbone of many frontier artificial intelligence (AI) applications, such as game playing and autonomous driving, by addressing how intelligent and autonomous systems should engage with an unknown dynamic environment. The progress and interest in AI are now transforming social systems with human decision-makers, such as (consumer/financial) markets and road traffic, into socio-technical systems with AI-powered decision-makers. However, self-interested AI can undermine the social systems designed and regulated for humans. We are delving into the uncharted territory of AI-AI and AI-human interactions. The new grand challenge is to predict and control the implications of AI selfishness in AI-X interactions with systematic guarantees. Hence, there is now a critical need to study self-interested AI dynamics in complex and dynamic environments through the lens of game theory.
In this talk, I will present the recent steps we have taken toward a foundation for how self-interested AI would and should interact with others, bridging the gap between game theory and practice in AI-X interactions. I will focus on stochastic games to model interactions in complex and dynamic environments, since they are commonly used in multi-agent reinforcement learning. I will present new learning dynamics that converge almost surely to equilibrium in important classes of stochastic games. The results also generalize to cases where agents (i) do not know the model of the environment, (ii) do not observe opponent actions, (iii) adopt different learning rates, and (iv) are selective, for efficiency, about which equilibrium they reach. The key idea is to exploit the robustness of the learning dynamics to perturbations through the power of approximation. I will conclude with several remarks on possible future research directions for the framework presented.
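To make the setting concrete, here is a minimal sketch of two independent learners interacting in a small, randomly generated zero-sum stochastic game. The game, the epsilon-greedy Q-learning-style update, and all parameter values are illustrative placeholders; this is not the learning dynamics analyzed in the talk, and no equilibrium guarantee is claimed for it.

```python
# Minimal sketch: independent learners in a small zero-sum stochastic game.
# Everything here (game, payoffs, learning rule) is hypothetical and illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 2, 2
# Stage payoffs r[s, a1, a2] for player 1 (player 2 receives the negative).
r = rng.uniform(-1, 1, size=(n_states, n_actions, n_actions))
# Transition kernel P[s, a1, a2, s'].
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions, n_actions))

gamma = 0.9
Q1 = np.zeros((n_states, n_actions))   # player 1's local Q-estimates
Q2 = np.zeros((n_states, n_actions))   # player 2's local Q-estimates

s = 0
for t in range(1, 50_000):
    # Epsilon-greedy play based on each player's own estimates only:
    # opponent actions are treated as part of the environment (not observed).
    eps = 0.1
    a1 = rng.integers(n_actions) if rng.random() < eps else int(Q1[s].argmax())
    a2 = rng.integers(n_actions) if rng.random() < eps else int(Q2[s].argmax())

    reward = r[s, a1, a2]
    s_next = rng.choice(n_states, p=P[s, a1, a2])

    alpha = 1.0 / t**0.6  # diminishing step size
    Q1[s, a1] += alpha * (reward + gamma * Q1[s_next].max() - Q1[s, a1])
    Q2[s, a2] += alpha * (-reward + gamma * Q2[s_next].max() - Q2[s, a2])
    s = s_next

print("Player 1 Q-values:\n", Q1)
print("Player 2 Q-values:\n", Q2)
```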
Continuous-time probabilistic generative models for dynamic networks
Tuesday, April 4, 2023, 1:25 p.m. through Tuesday, April 4, 2023, 2:25 p.m.
Walter Library 402 or Zoom
Data Science Seminar
Kevin Xu (Case Western Reserve University)
Abstract
Networks are ubiquitous in science, serving as a natural representation for many complex physical, biological, and social systems. Probabilistic generative models for networks provide plausible mechanisms by which network data are generated to reveal insights about the underlying complex system. Such complex systems are often time-varying, which has led to the development of dynamic network representations to enable modeling, analysis, and prediction of temporal dynamics.
In this talk, I introduce a class of continuous-time probabilistic generative models for dynamic networks that augment statistical models for network structure with multivariate Hawkes processes to model temporal dynamics. The class of models allows an analyst to trade off flexibility and scalability of a model depending on the application setting. I focus on two specific models on opposite ends of the tradeoff: the community Hawkes independent pairs (CHIP) model that scales up to millions of nodes, and the multivariate Community Hawkes (MULCH) model that is flexible enough to replicate a variety of observed structures in real network data, including temporal motifs. I demonstrate how these models can be used for analysis, prediction, and simulation on several real network data sets, including a network of militarized disputes between countries over time.
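As a concrete illustration of the pairwise building block behind such models, the sketch below simulates event times on a single node pair under a self-exciting (Hawkes) intensity with an exponential kernel, using Ogata's thinning algorithm. The parameter values are illustrative, and the code is not an implementation of CHIP or MULCH.

```python
# Minimal sketch: simulate event times on one node pair (u, v) whose interactions
# follow a self-exciting Hawkes intensity
#     lambda(t) = mu + alpha * sum_{t_k < t} beta * exp(-beta * (t - t_k)).
# Parameter values below are illustrative, not fitted to any data set.
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata's thinning algorithm for a univariate exponential-kernel Hawkes process."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < T:
        # Intensity decays between events, so its current value is a valid upper bound.
        lam_bar = mu + alpha * beta * np.sum(np.exp(-beta * (t - np.array(events)))) if events else mu
        t += rng.exponential(1.0 / lam_bar)          # candidate inter-event time
        if t >= T:
            break
        lam_t = mu + (alpha * beta * np.sum(np.exp(-beta * (t - np.array(events)))) if events else 0.0)
        if rng.random() <= lam_t / lam_bar:          # accept with probability lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

# Events between a pair of nodes in the same (hypothetical) community:
# a higher baseline rate mu and moderate self-excitation alpha < 1.
times = simulate_hawkes(mu=0.5, alpha=0.6, beta=2.0, T=50.0)
print(f"{len(times)} events on pair (u, v); first few: {times[:5]}")
```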
Working as an Artificial Intelligence Advisor to the US Government
Friday, March 31, 2023, 1:25 p.m. through Friday, March 31, 2023, 2:25 p.m.
Walter Library 402 or Zoom
Industrial Problems Seminar
Mitchell Kinney (The MITRE Corporation)
Abstract
Though Artificial Intelligence (AI) has progressed rapidly, many areas of government remain wary of upending legacy systems to capitalize on the technology. MITRE serves as a trusted advisor to government agencies and is a conduit between private industry and government through the management of multiple Federally Funded Research and Development Centers (FFRDCs). As a member of the AI and Autonomy Innovation Center, my role is to help government understand the potential positives and pitfalls of implementing AI technology.
I will discuss my background, my company, and my responsibilities, and give an overview of a project I worked on that highlights how machine learning can be used to translate paper-based systems engineering models into modern software. The prototype we developed uses computer vision techniques to build an internal graph representation of a diagram that can then be translated to commercial tools.
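As a rough illustration of such an intermediate representation, the sketch below turns hypothetical detected boxes and connectors into a directed graph that downstream tools could consume. The detection output and all element names are placeholders, not MITRE's actual pipeline.

```python
# Minimal sketch: detected diagram elements (boxes) and connectors become the
# nodes and edges of a graph that can be exported to other tools.
# The detection step and element names are hypothetical placeholders.
import networkx as nx

# Hypothetical output of a computer-vision detection step on a scanned diagram.
detected_boxes = ["Sensor", "Controller", "Actuator"]
detected_connectors = [("Sensor", "Controller"), ("Controller", "Actuator")]

diagram_graph = nx.DiGraph()
diagram_graph.add_nodes_from(detected_boxes)
diagram_graph.add_edges_from(detected_connectors)

# The graph can then be serialized for import into a commercial modeling tool.
print(nx.node_link_data(diagram_graph))
```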
Lecture: Adil Ali
Tuesday, March 28, 2023, 1:25 p.m. through Tuesday, March 28, 2023, 2:25 p.m.
Walter Library 402
Industrial Problems Seminar
Adil Ali (CH Robinson)
Viewing graph solvability and its relevance in 3D Computer Vision
Tuesday, March 28, 2023, 1:25 p.m. through Tuesday, March 28, 2023, 2:25 p.m.
Zoom only
Data Science Seminar
Federica Arrigoni (Politecnico di Milano)
Abstract
“Structure from motion” is an important problem in computer vision that aims to reconstruct both the cameras and the 3D scene from multiple images. This talk will explore the theoretical aspects of structure from motion, with particular focus on the “viewing graph”: such a graph has a node for each camera and an edge for each available fundamental matrix. A particularly relevant problem is studying the “solvability” of a viewing graph, namely establishing whether it determines a unique configuration of cameras. The talk will be based on the following paper:
Federica Arrigoni, Andrea Fusiello, Elisa Ricci, and Tomas Pajdla. Viewing graph solvability via cycle consistency. ICCV 2021 (Best paper honorable mention)
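For intuition, the sketch below builds a small viewing graph (one node per camera, one edge per available fundamental matrix) and checks two standard necessary, but not sufficient, conditions for solvability. The full cycle-consistency test from the paper above is not reproduced here, and the example graph is hypothetical.

```python
# Minimal sketch: a viewing graph with one node per camera and one edge per
# available fundamental matrix, plus two necessary (not sufficient) checks:
# the graph should be connected and every camera should appear in at least
# two fundamental matrices.
import networkx as nx

def passes_basic_solvability_checks(viewing_graph: nx.Graph) -> bool:
    if not nx.is_connected(viewing_graph):
        return False
    return min(deg for _, deg in viewing_graph.degree()) >= 2

# Hypothetical example: 5 cameras, fundamental matrices available on these pairs.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
print(passes_basic_solvability_checks(G))  # True: connected, minimum degree 2
```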
Applied Math at Boeing
Friday, March 24, 2023, 1:25 p.m. through Friday, March 24, 2023, 2:25 p.m.
Walter Library 402 or Zoom
Industrial Problems Seminar
Brittan Farmer (The Boeing Company)
Registration is required to access the Zoom webinar.
Abstract
Adversarial training and the generalized Wasserstein barycenter problem
Tuesday, March 21, 2023, 1:25 p.m. through Tuesday, March 21, 2023, 2:25 p.m.
Walter Library 402 or Zoom
Data Science Seminar
Matt Jacobs (Purdue University)
Abstract
Adversarial training is a framework widely used by practitioners to enforce robustness of machine learning models. During the training process, the learner is pitted against an adversary who has the power to alter the input data. As a result, the learner is forced to build a model that is robust to data perturbations. Despite the importance and relative conceptual simplicity of adversarial training, many aspects are still not well understood (e.g., regularization effects, geometric/analytic interpretations, the tradeoff between accuracy and robustness), particularly in the case of multiclass classification.
In this talk, I will show that in the non-parametric setting, the adversarial training problem is equivalent to a generalized version of the Wasserstein barycenter problem. The connection between these problems allows us to completely characterize the optimal adversarial strategy and to bring in tools from optimal transport to analyze and compute optimal classifiers. This also has implications for the parametric setting, as the value of the generalized barycenter problem gives a universal upper bound on the robustness/accuracy tradeoff inherent to adversarial training.
Joint work with Nicolas Garcia Trillos and Jakwang Kim
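As a concrete illustration of the learner-versus-adversary setup, the sketch below runs adversarial training with a simple FGSM-style inner adversary on a toy linear classifier. The model, data, and perturbation budget eps are illustrative, and this is not the Wasserstein-barycenter formulation discussed in the talk.

```python
# Minimal sketch of adversarial training: at each step the adversary perturbs
# the inputs to increase the loss, then the learner updates on the perturbed data.
# Model, data, and perturbation budget are illustrative placeholders.
import torch

torch.manual_seed(0)
X = torch.randn(512, 2)                       # toy 2-D features
y = (X[:, 0] + X[:, 1] > 0).long()            # toy binary labels
model = torch.nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()
eps = 0.1                                     # adversary's perturbation budget

for epoch in range(200):
    # Inner step: adversary perturbs inputs to maximize the loss (FGSM).
    X_adv = X.clone().requires_grad_(True)
    loss = loss_fn(model(X_adv), y)
    grad, = torch.autograd.grad(loss, X_adv)
    X_adv = (X + eps * grad.sign()).detach()

    # Outer step: learner minimizes the loss on the perturbed inputs.
    opt.zero_grad()
    loss_fn(model(X_adv), y).backward()
    opt.step()

with torch.no_grad():
    clean_acc = (model(X).argmax(1) == y).float().mean().item()
print(f"accuracy on clean data after adversarial training: {clean_acc:.2f}")
```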
Overparametrization in machine learning: insights from linear models
Thursday, March 16, 2023, 1:25 p.m. through Thursday, March 16, 2023, 2:25 p.m.
Walter Library 402 and Zoom (Zoom registration required)
Data Science Seminar
Andrea Montanari (Stanford University)
Abstract
Deep learning models are often trained in a regime that is forbidden by classical statistical learning theory. The model complexity can be larger than the sample size, and the training error does not concentrate around the test error. In fact, the model complexity can be so large that the network interpolates noisy training data. Despite this, it behaves well on fresh test data, a phenomenon that has been dubbed “benign overfitting.”
I will review recent progress towards a precise quantitative understanding of this phenomenon in linear models and kernel regression. In particular, I will present a recent characterization of ridge regression in Hilbert spaces which provides a unified understanding of several earlier results.
[Based on joint work with Chen Cheng]
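The basic phenomenon is easy to reproduce in a toy linear model: the sketch below fits the minimum-norm (ridgeless) least-squares interpolator with many more random features than samples, drives the training error to zero on noisy data, and still obtains a test error near the noise floor. Dimensions, signal, and noise level are illustrative only and do not come from the talk.

```python
# Minimal sketch of interpolation in an overparametrized linear model:
# the minimum-norm least-squares fit (ridgeless limit) interpolates noisy
# training data yet keeps the test error close to the noise floor (0.25 here).
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 2000                       # many more features than samples
beta_true = np.zeros(d)
beta_true[:10] = 1.0                   # sparse illustrative "signal"

X_train = rng.normal(size=(n, d)) / np.sqrt(d)
y_train = X_train @ beta_true + 0.5 * rng.normal(size=n)

# Minimum-norm interpolator: beta_hat = pseudo-inverse of X times y.
beta_hat = np.linalg.pinv(X_train) @ y_train

X_test = rng.normal(size=(1000, d)) / np.sqrt(d)
y_test = X_test @ beta_true + 0.5 * rng.normal(size=1000)

train_err = np.mean((X_train @ beta_hat - y_train) ** 2)
test_err = np.mean((X_test @ beta_hat - y_test) ** 2)
print(f"train MSE: {train_err:.2e}  (interpolation of noisy data)")
print(f"test MSE:  {test_err:.3f}   (noise floor is 0.25)")
```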
Meta-Analysis of Randomized Experiments: Applications to Heavy-Tailed Response Data
Friday, March 3, 2023, 1:25 p.m. through Friday, March 3, 2023, 2:25 p.m.
Industrial Problems Seminar
Dominique Perrault-Joncas (Amazon)
Abstract
A central obstacle in the objective assessment of treatment effect (TE) estimators in randomized controlled trials (RCTs) is the lack of ground truth (or validation set) to test their performance. In this paper, we propose a novel cross-validation-like methodology to address this challenge. The key insight of our procedure is that the noisy (but unbiased) difference-of-means estimate can be used as a ground truth “label” on a portion of the RCT, to test the performance of an estimator trained on the other portion. We combine this insight with an aggregation scheme, which borrows statistical strength across a large collection of RCTs, to present an end-to-end methodology for judging an estimator’s ability to recover the underlying treatment effect as well as produce an optimal treatment “roll-out” policy. We evaluate our methodology across 699 RCTs implemented in the Amazon supply chain. In this heavy-tailed setting, our methodology suggests that procedures that aggressively downweight or truncate large values, while introducing bias, lower the variance enough to ensure that the treatment effect is more accurately estimated.
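To illustrate the split-evaluation idea on a single synthetic RCT, the sketch below fits two candidate estimators (a naive difference of means and a version that truncates large outcomes) on one fold and scores both against the noisy-but-unbiased difference-of-means label computed on the held-out fold. The heavy-tailed data-generating process and the truncation rule are illustrative placeholders, not the paper's; the paper additionally aggregates such comparisons across many RCTs.

```python
# Minimal sketch of the split-evaluation idea: within one synthetic RCT,
# estimators are fit on a "training" fold and scored against the noisy but
# unbiased difference-of-means label from the held-out fold.
# The data-generating process and truncation rule are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
treat = rng.integers(0, 2, size=n)                      # random assignment
# Heavy-tailed outcomes with a small true treatment effect of 0.5.
y = rng.standard_t(df=2, size=n) * 5 + 0.5 * treat

# Random split of the RCT into a training fold and an evaluation fold.
fold = rng.random(n) < 0.5
y_tr, t_tr = y[fold], treat[fold]
y_ev, t_ev = y[~fold], treat[~fold]

def diff_of_means(y, t):
    return y[t == 1].mean() - y[t == 0].mean()

def truncated_diff_of_means(y, t, q=0.99):
    """Biased but lower-variance estimator: truncate large |y| before averaging."""
    cap = np.quantile(np.abs(y), q)
    y_trunc = np.clip(y, -cap, cap)
    return y_trunc[t == 1].mean() - y_trunc[t == 0].mean()

# "Label" from the held-out fold, and two candidate estimates from the training fold.
label = diff_of_means(y_ev, t_ev)
naive = diff_of_means(y_tr, t_tr)
truncated = truncated_diff_of_means(y_tr, t_tr)

print(f"held-out difference-of-means label: {label:+.3f}")
print(f"naive estimate error:     {abs(naive - label):.3f}")
print(f"truncated estimate error: {abs(truncated - label):.3f}")
```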