Please see the home page for a general introduction to our research interests. Below are a few more specific, ongoing research projects; published research articles on these projects can be found on the publications page.

Structure learning

A large part of learning is figuring out what to learn about: which features of the world matter and should influence the choices we make? Which features should be considered irrelevant to decision-making? Learning structure allows us to abstract general rules that can be applied across contexts, transfer knowledge to new situations, and generally speed up learning.

The CCN Lab investigates how we perform such structure learning, what constrains it, and how we may benefit from it (or perhaps incur costs because of it).

In a project in collaboration with Michael Arcaro, we are exploring human-unique features within thalamocortical cognitive control networks using fMRI. In pursuit of this, we are developing a novel hierarchical, rule-based dynamic decision-making paradigm that leverages context switching at multiple hierarchical levels and supports generalization. The project is part of the Thalamus Conte Center.

A list of Professor Collins’ talks on our recent work in this area:
Generalization and hierarchical reinforcement learning (at CCN 202 GAC Workshops)

Working memory and learning

Our reinforcement learning system integrates value information over time in a robust and exhaustive way, but as a slow and inflexible cumulative process: it essentially trades learning speed for precise value estimation. In some conditions, however, we are able to learn very quickly, simply by remembering what to do. Working memory provides this “quick and approximate” learning route at the other end of the trade-off: fast and flexible, but limited in capacity and in how long it can maintain information.
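The two ends of this trade-off can be sketched in a few lines of Python. The function names, parameter values, and capacity limit below are illustrative assumptions for the sketch, not the lab’s actual models:

```python
# Illustrative contrast between a slow, incremental RL learner and a
# fast but capacity-limited working-memory (WM) learner on a
# stimulus-action association task. All names and parameters here are
# hypothetical choices for this sketch.

def rl_update(q, stimulus, action, reward, alpha=0.1):
    """Incremental delta-rule update: robust, but moves the value
    estimate only a small step on each trial."""
    old = q.get((stimulus, action), 0.0)
    q[(stimulus, action)] = old + alpha * (reward - old)

def wm_update(memory, stimulus, action, reward, capacity=3):
    """One-shot storage: remembers a rewarded action immediately,
    but holds at most `capacity` items (oldest entries are evicted)."""
    if reward > 0:
        memory[stimulus] = action
        while len(memory) > capacity:
            memory.pop(next(iter(memory)))  # dicts keep insertion order

q, memory = {}, {}
rl_update(q, "A", 2, reward=1.0)
wm_update(memory, "A", 2, reward=1.0)
# After one rewarded trial, WM already knows the answer, while the RL
# estimate has taken only a small step toward the true value of 1.0.
assert memory["A"] == 2 and abs(q[("A", 2)] - 0.1) < 1e-9
```

The point of the sketch is only the asymmetry: working memory is one-shot but evicts old items once its capacity is exceeded, while reinforcement learning accumulates evidence over many trials, slowly but without a capacity limit.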

We investigate how working memory contributes to learning, and how it interacts with the reinforcement learning system.

In a project in collaboration with Linda Wilbrecht and Ronald Dahl, we are investigating how the use of RL and WM for learning changes during development, focusing on the roles of neural changes during puberty.

A list of Professor Collins’ talks on our recent work in this area:
Interactions of the reinforcement learning and working memory systems (at the Simons Institute)
Working memory influences reinforcement learning computations in brain and behavior (at Stanford University)

Reward learning, dopamine and cortico-basal ganglia loops

Reinforcement learning (RL) is a class of algorithms for estimating the expected value of choices in different states. These algorithms have been very successful at explaining many aspects of learning from rewards and punishments in humans and animals. In particular, the phasic firing of dopamine neurons encodes signals related to a “reward prediction error”, an essential construct of model-free RL algorithms such as temporal difference learning. We think that the cortico-basal ganglia loop implements a form of this algorithm using dopamine-modulated plasticity.
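To make the reward prediction error concrete, here is a minimal TD(0) update; the function name, learning rate, and discount factor are illustrative assumptions:

```python
# Minimal sketch of a temporal-difference (TD(0)) value update. The
# reward prediction error `delta` is the quantity that phasic dopamine
# firing is thought to signal; parameter values are arbitrary.

def td_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
    """Move the value of `state` toward reward + discounted next value."""
    v, v_next = values.get(state, 0.0), values.get(next_state, 0.0)
    delta = reward + gamma * v_next - v  # reward prediction error
    values[state] = v + alpha * delta
    return delta  # positive: better than expected; negative: worse

v = {}
first_error = td_update(v, "cue", "end", reward=1.0)  # surprising reward
for _ in range(100):                                  # repeated pairings
    td_update(v, "cue", "end", reward=1.0)
late_error = td_update(v, "cue", "end", reward=1.0)   # now well predicted
# The error is large at first and shrinks as the cue comes to predict
# the reward, mirroring the phasic dopamine signal described above.
assert first_error == 1.0 and late_error < 0.01
```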

The CCN Lab uses computational modeling and experiments to investigate more precisely how this is implemented, and how it shapes the way we learn and make motivated choices. Why do we have two separate pathways (the direct and indirect pathways) that apparently encode the same information in opposite ways (how much you want vs. don’t want an option)? Is the redundancy perfect? Does it provide benefits in certain environments? How does it affect learning and decision-making?
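One common way to formalize the two-pathway question is with opponent “go”/“no-go” actor weights that learn from the same prediction error with opposite signs. The sketch below is a loose, hypothetical illustration of that idea, not the lab’s specific model; the names and learning rate are assumptions:

```python
# Hedged sketch of opponent learning for the direct ("go", G) and
# indirect ("no-go", N) pathways. Both weights see the same prediction
# error `delta`, but with opposite signs, so they encode roughly
# redundant information in mirrored form. Illustrative only.

def opponent_update(G, N, action, delta, alpha=0.2):
    g, n = G.get(action, 1.0), N.get(action, 1.0)
    G[action] = g + alpha * g * delta   # positive errors grow "want"
    N[action] = n - alpha * n * delta   # negative errors grow "don't want"

G, N = {}, {}
opponent_update(G, N, "left", delta=+1.0)   # outcome better than expected
opponent_update(G, N, "right", delta=-1.0)  # outcome worse than expected
# The rewarded action is preferred by the go weights, the punished one
# by the no-go weights.
assert G["left"] > N["left"] and N["right"] > G["right"]
```

Because each update is multiplicative in the current weight, G and N need not remain perfect mirror images after asymmetric reward histories, which is one way the apparent redundancy could turn out to be useful rather than perfect.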

A list of Professor Collins’ talks on our recent work in this area:
Stage for Reinforcement Learning (hosted by Stanford University)
Decision-Making and the Brain round-table discussion (hosted by Concordia University)
Executive contributions to reinforcement learning computations in the human brain (at UC Berkeley Neuroscience Seminar)
One-shot intrinsic reward valuation in humans (at VIDA 2021)