A mixture of generative models strategy helps humans generalize across tasks [preprint]

Preprint date

February 16, 2021

Authors

Santiago Herce Castañón, Pedro Cardoso-Leite, C Shawn Green, Irene Altarelli, Paul Schrater, Daphne Bavelier

Abstract

Humans extract statistical regularities over time by forming internal representations of the transition probabilities between states. Studies on sequence prediction learning typically focus on how experiencing a particular sequence allows the underlying generative model to be learned. Here we ask what role generative models play when participants experience and learn to predict different sequences drawn from a common generative family. Our novel multi-task prediction paradigm requires participants to complete four sequence learning tasks, each a different instantiation of a common generative family. This multi-task environment allows participants to demonstrate two types of learning: (i) within-task learning (i.e., finding the solution to each task), and (ii) across-task learning (i.e., learning from one task in a way that changes how they approach subsequent tasks). Specifically, to examine across-task learning, we focus on the first four trials of each sequence learning task, as choices during these trials must be driven by participants’ background knowledge and not by their in-task learning experience. We show not only that participants arrive with a structured prior, but also that, as they proceed from one task to the next, this prior increasingly resembles the true generative family. To analyse within-task learning, we then model behaviour within each task as arising from a mixture-of-generative-models learning strategy, whereby participants maintain a pool of candidate explanatory models (mostly within the true generative family) that they continuously update and arbitrate between to best explain the stimulus sequence. This framework predicts specific structured errors from participants, as well as a gating mechanism for learning, both of which are observed in the data.
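
To make the mixture-of-generative-models idea concrete, here is a minimal sketch, not taken from the paper, of Bayesian arbitration among a small pool of candidate models. The candidate pool, its parameter values, and the use of first-order Markov models over a binary sequence are illustrative assumptions: predictions are made by posterior-weighted model averaging, and the posterior over models is updated after each observation.

```python
import numpy as np

# Hypothetical candidate pool (illustrative values, not from the paper):
# each candidate is a first-order Markov model over a binary sequence,
# defined by p(next=1 | prev=0) and p(next=1 | prev=1).
candidate_models = [
    {"p01": 0.9, "p11": 0.1},   # alternation-favouring model
    {"p01": 0.1, "p11": 0.9},   # repetition-favouring model
    {"p01": 0.5, "p11": 0.5},   # unstructured (random) model
]

def likelihood(model, prev, obs):
    """Probability the model assigns to observing `obs` after `prev`."""
    p_one = model["p11"] if prev == 1 else model["p01"]
    return p_one if obs == 1 else 1.0 - p_one

def predict_and_update(sequence, prior=None):
    """Sequentially predict each symbol by model averaging, then reweight
    the candidate models by how well they explained the observed symbol."""
    n = len(candidate_models)
    posterior = np.full(n, 1.0 / n) if prior is None else np.array(prior, dtype=float)
    predictions = []
    for prev, obs in zip(sequence[:-1], sequence[1:]):
        # Mixture prediction: posterior-weighted probability that next == 1.
        p_next_one = sum(w * likelihood(m, prev, 1)
                         for w, m in zip(posterior, candidate_models))
        predictions.append(p_next_one)
        # Arbitration step: Bayesian update of the posterior over models.
        posterior *= np.array([likelihood(m, prev, obs) for m in candidate_models])
        posterior /= posterior.sum()
    return predictions, posterior

# Example: a strongly alternating sequence shifts weight toward the
# alternation-favouring candidate.
preds, post = predict_and_update([0, 1, 0, 1, 0, 1, 0, 1])
print(np.round(post, 3))
```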

Link to full paper

A mixture of generative models strategy helps humans generalize across tasks

Keywords

neuroscience
