Biological and artificial neural networks for mapping observations to latent causes
Assistant Professor,
Department of Neuroscience, UMN
Our brains are locked inside dark and silent skulls. They do not have direct access to environmental objects or events. Instead, from noisy, incomplete, and often ambiguous sensory evidence, they must infer (i.e., make an educated guess about) the state of the environment. A common example of this challenge within neuroscience is the fact that our retinas are 2D, while the environment is 3D; in principle, infinitely many environmental objects could produce the same retinal image. Recently, my group and I (Noel et al., 2022, eLife; Noel & Angelaki, 2023, TICS) have demonstrated that individuals on the Autism Spectrum show a very specific anomaly in this process of causal inference (attributing observations to causes), which could be a root cause of well-established sensory differences in Autism.
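To make the computation concrete, here is a minimal sketch of a standard Bayesian causal-inference model for two cues, in the spirit of the classic formulation by Körding and colleagues (2007) that is widely used in this literature. All parameter values are illustrative assumptions, not fitted values from the lab.

```python
# Minimal sketch: Bayesian causal inference over a visual and an auditory cue.
# Gaussian noise and all parameter values below are illustrative assumptions.
import numpy as np

def posterior_common_cause(x_v, x_a,
                           sigma_v=2.0,    # visual noise (deg); assumed
                           sigma_a=8.0,    # auditory noise (deg); assumed
                           sigma_p=15.0,   # width of prior over source location; assumed
                           mu_p=0.0,       # mean of prior over source location; assumed
                           p_common=0.5):  # prior probability of one shared cause; assumed
    """Return p(C = 1 | x_v, x_a): the probability that both cues share one cause."""
    var_v, var_a, var_p = sigma_v**2, sigma_a**2, sigma_p**2

    # Likelihood of both cues under ONE latent source, integrated over its location.
    d1 = var_v * var_a + var_v * var_p + var_a * var_p
    like_common = np.exp(-0.5 * ((x_v - x_a)**2 * var_p
                                 + (x_v - mu_p)**2 * var_a
                                 + (x_a - mu_p)**2 * var_v) / d1) / (2 * np.pi * np.sqrt(d1))

    # Likelihood under TWO independent latent sources.
    like_indep = np.exp(-0.5 * ((x_v - mu_p)**2 / (var_v + var_p)
                                + (x_a - mu_p)**2 / (var_a + var_p))) \
                 / (2 * np.pi * np.sqrt((var_v + var_p) * (var_a + var_p)))

    # Bayes' rule over the binary latent variable C (one cause vs. two).
    return (like_common * p_common
            / (like_common * p_common + like_indep * (1 - p_common)))

print(posterior_common_cause(x_v=1.0, x_a=3.0))   # nearby cues: likely one cause
print(posterior_common_cause(x_v=1.0, x_a=25.0))  # distant cues: likely two causes
```

The behaviorally testable signature of such a model is how the posterior over a shared cause falls off as the disparity between the two cues grows.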
This credit assignment problem is also central in robotics. Pixels do not come labeled as belonging to one particular object or another. Similarly, the visual and auditory features of a given object have to be stitched together as belonging to the same underlying object or cause. While machine learning methods can be successfully applied to this credit assignment problem, they are slow, data-greedy, and computationally inefficient (particularly when their energy costs are compared to the brain's), and they usually generalize poorly. A central goal of the Noel Lab is to understand how biological neural networks solve the credit assignment/causal inference problem, and to leverage this insight to develop more efficient algorithms for robot-human and robot-environment interaction.

An early interest of Dr. Noel, and a domain in which this computational challenge is expressed at the interface of neuroscience and robotics, is neuroprosthetics. While we can build ever-better prosthetic limbs, the reality is that most patients, once they go home, do not use these devices. Why? Because the devices feel like tools, like a fork or a knife. They do not feel like part of the body, part of the self. We must not only build ever more sophisticated prosthetic devices, but also understand how to integrate them with neural networks so as to dissolve the boundary between device and body. This, too, is a problem of causal inference and credit assignment.
The Noel Lab performs psychophysics and virtual reality experiments in humans, while also recording body and eye movements, as well as scalp electroencephalography (EEG). Via collaborators, we also record invasively from human patients. Further, we train mice to perform sophisticated behaviors, such as navigation in virtual reality (Fig. 1), object discrimination, and audio-visual correlation detection. Much of our work is centered on understanding how neural circuits map observations to latent causes in order to infer the state of the environment, and how this process goes awry in disease states (e.g., Autism and Schizophrenia). We have a strong preference for neuroethological behaviors, and espouse the framework of “embodied cognition.” We record from tens of thousands of single neurons. In a typical mouse neurophysiology pipeline, we record ~2 GB/s, and our datasets for a single project are on the order of 50 TB. We apply, and sometimes develop, tools from machine learning such as marker-less pose estimation (Fig. 2) and targeted dimensionality reduction, as well as normative Bayesian and reinforcement learning models.
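To give a flavor of the analysis side, here is a minimal sketch of dimensionality reduction on a population-activity matrix; plain PCA stands in for the targeted methods mentioned above, and the array shapes, counts, and noise levels are illustrative assumptions.

```python
# Minimal sketch: recovering a few latent factors from many recorded neurons.
# Plain PCA is a stand-in for targeted dimensionality-reduction methods; all
# sizes and noise levels below are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_trials, n_neurons, n_latents = 500, 1000, 3  # assumed toy dimensions

# Simulate a trials-by-neurons activity matrix driven by a few shared latent
# factors plus independent noise: many neurons, few underlying causes.
latents = rng.normal(size=(n_trials, n_latents))
loadings = rng.normal(size=(n_latents, n_neurons))
activity = latents @ loadings + rng.normal(scale=0.5, size=(n_trials, n_neurons))

pca = PCA(n_components=10).fit(activity)
print(pca.explained_variance_ratio_)  # variance concentrates in ~3 components
```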
If any of the above sounds interesting to you, please reach out to Dr. Noel at [email protected].