IMA Data Science Seminar - Learning in Stochastic Games
Muhammed Omer Sayin (Bilkent University)
Reinforcement learning (RL) has been the backbone of many frontier artificial intelligence (AI) applications, such as game playing and autonomous driving, by addressing how intelligent and autonomous systems should engage with an unknown dynamic environment. The progress of, and interest in, AI are now transforming social systems with human decision-makers, such as (consumer and financial) markets and road traffic, into socio-technical systems with AI-powered decision-makers. However, self-interested AI can undermine social systems that were designed and regulated for humans. We are delving into the uncharted territory of AI-AI and AI-human interactions. The new grand challenge is to predict and control the implications of AI selfishness in these AI-X interactions with systematic guarantees. Hence, there is now a critical need to study self-interested AI dynamics in complex and dynamic environments through the lens of game theory.
In this talk, I will present the recent steps we have taken toward a foundation for how self-interested AI would and should interact with others, bridging the gap between game theory and practice in AI-X interactions. I will specifically focus on stochastic games to model interactions in complex and dynamic environments, since they are commonly used in multi-agent reinforcement learning. I will present new learning dynamics that converge almost surely to equilibrium in important classes of stochastic games. The results also generalize to cases where agents (i) do not know the model of the environment, (ii) do not observe opponent actions, (iii) adopt different learning rates, and (iv) are selective about which equilibrium they reach, for efficiency. The key idea is to exploit the robustness of the learning dynamics to perturbations, which makes approximation a powerful tool. I will conclude my talk with several remarks on possible future research directions for the framework presented.
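To give a flavor of the kind of result discussed above, the following is a minimal, self-contained sketch (not the speaker's algorithm) of a classical learning dynamic with an almost-sure equilibrium guarantee: fictitious play in a two-player zero-sum matrix game, where each player best-responds to the opponent's empirical action frequencies. In zero-sum games these empirical frequencies are known to converge to a Nash equilibrium; the matching-pennies payoff matrix used here is chosen purely for illustration.

```python
import numpy as np

# Payoff matrix for the row player in a zero-sum matrix game
# (matching pennies); the column player receives the negative.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def fictitious_play(A, steps=20000):
    """Run discrete-time fictitious play; return each player's
    empirical action frequencies after `steps` iterations."""
    m, n = A.shape
    row_counts = np.ones(m)  # smoothed counts of the row player's actions
    col_counts = np.ones(n)  # smoothed counts of the column player's actions
    for _ in range(steps):
        # Each player best-responds to the opponent's empirical mixture.
        col_freq = col_counts / col_counts.sum()
        row_freq = row_counts / row_counts.sum()
        row_action = np.argmax(A @ col_freq)   # row player maximizes
        col_action = np.argmin(row_freq @ A)   # column player minimizes
        row_counts[row_action] += 1
        col_counts[col_action] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

x, y = fictitious_play(A)
# Empirical frequencies approach the mixed equilibrium (0.5, 0.5),
# and the empirical value x @ A @ y approaches the game value 0.
print(x, y, x @ A @ y)
```

Stochastic games add state dynamics on top of this picture, so the learning dynamics in the talk must additionally handle value estimation across states; the sketch above only captures the stateless, known-payoff special case.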