About
Hi, I'm Gaspard Lambrechts, a PhD student doing research in reinforcement learning under the supervision of Prof. Damien Ernst at the University of Liège, currently on a research internship with Prof. Aditya Mahajan at McGill and Mila. My research focuses on reinforcement learning in partially observable environments (RL in POMDPs). I am mainly interested in sequence modeling, representation learning, world models, and asymmetric learning.
Publications
- Informed POMDP: Leveraging Additional Information in Model-Based RL.
Gaspard Lambrechts, Adrien Bolland, Damien Ernst.
In Reinforcement Learning Conference, August 2024. Article. Poster. Slides. Code.
Abstract: In this work, we generalize the problem of learning through interaction in a POMDP by accounting for additional information that may be available at training time. First, we introduce the informed POMDP, a new learning paradigm offering a clear distinction between the training information and the execution observation. Next, we propose an objective for learning a sufficient statistic of the history for optimal control that leverages this information. We then show that this informed objective consists of learning an environment model from which we can sample latent trajectories. Finally, we show for the Dreamer algorithm that the convergence speed of the policies is sometimes greatly improved on several environments by using this informed environment model. These results and the simplicity of the proposed adaptation advocate for systematically considering such additional information when learning in a POMDP with model-based RL. (A rough code sketch of the informed objective is given after this publication list.)
- Warming Up Recurrent Neural Networks to Maximize Reachable Multistability Greatly Improves Learning.
Gaspard Lambrechts*, Florent De Geeter*, Nicolas Vecoven*, Damien Ernst, Guillaume Drion.
In Neural Networks, August 2023. Article. Blog Post. Code.
Abstract: Training recurrent neural networks is known to be difficult when time dependencies become long. In this work, we show that most standard cells only have one stable equilibrium at initialisation, and that learning on tasks with long time dependencies generally occurs once the number of stable equilibria of the network increases, a property known as multistability. Multistability is often not easily attained by initially monostable networks, making learning of long time dependencies between inputs and outputs difficult. This insight leads to the design of a novel way to initialise any recurrent cell connectivity through a procedure called “warmup” to improve its capability to learn arbitrarily long time dependencies. This initialisation procedure is designed to maximise the network's reachable multistability, i.e., the number of equilibria within the network that can be reached through relevant input trajectories, in a few gradient steps. We show on several information restitution, sequence classification, and reinforcement learning benchmarks that warming up greatly improves learning speed and performance for multiple recurrent cells, but sometimes impedes precision. We therefore introduce a double-layer architecture initialised with a partial warmup that is shown to greatly improve learning of long time dependencies while maintaining high levels of precision. This approach provides a general framework for improving the learning abilities of any recurrent cell when long time dependencies are present. We also show empirically that other initialisation and pretraining procedures from the literature implicitly foster reachable multistability of recurrent cells. (A rough sketch of the warmup idea is given after this publication list.)
- Recurrent Networks, Hidden States and Beliefs in Partially Observable Environments.
Gaspard Lambrechts, Adrien Bolland, Damien Ernst.
In Transactions on Machine Learning Research, August 2022. Article. Blog Post. Code.
Abstract: Reinforcement learning aims to learn optimal policies from interaction with environments whose dynamics are unknown. Many methods rely on the approximation of a value function to derive near-optimal policies. In partially observable environments, these functions depend on the complete sequence of observations and past actions, called the history. In this work, we show empirically that recurrent neural networks trained to approximate such value functions internally filter the posterior probability distribution of the current state given the history, called the belief. More precisely, we show that, as a recurrent neural network learns the Q-function, its hidden states become more and more correlated with the beliefs of the state variables that are relevant to optimal control. This correlation is measured through their mutual information. In addition, we show that the expected return of an agent increases with the ability of its recurrent architecture to reach a high mutual information between its hidden states and the beliefs. Finally, we show that the mutual information between the hidden states and the beliefs of variables that are irrelevant for optimal control decreases through the learning process. In summary, this work shows that, in its hidden states, a recurrent neural network approximating the Q-function of a partially observable environment reproduces a sufficient statistic of the history that is correlated with the part of the belief that is relevant for taking optimal actions. (A sketch of the kind of mutual-information estimation used in this analysis is given after this publication list.)
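The following is a minimal sketch of the informed objective described in the first entry above: a recurrent statistic of the history is trained to predict the additional information i_t available at training time, rather than only the observation. It is a deliberately simplified, hypothetical illustration in PyTorch; the actual Informed Dreamer uses a stochastic recurrent state-space model, an ELBO with KL terms, and reward and continuation heads, none of which appear here. All module names, shapes, and the Gaussian-likelihood loss are assumptions.

```python
import torch
import torch.nn as nn

class InformedWorldModel(nn.Module):
    """Hypothetical, simplified model: a GRU statistic of the history and a head
    predicting the additional training-time information i_t."""

    def __init__(self, obs_dim, act_dim, info_dim, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hidden_dim, batch_first=True)
        self.info_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ELU(),
            nn.Linear(hidden_dim, info_dim),
        )

    def forward(self, obs, act):
        # obs: (B, T, obs_dim), act: (B, T, act_dim)
        h, _ = self.rnn(torch.cat([obs, act], dim=-1))  # h_t summarises the history
        return self.info_head(h)                        # mean of p(i_t | h_t)

def informed_loss(model, obs, act, info):
    """Train the history statistic to be predictive of the information i_t
    (Gaussian log-likelihood up to constants)."""
    return ((model(obs, act) - info) ** 2).mean()

# Shapes only: a batch of 4 trajectories of length 50 with made-up dimensions.
model = InformedWorldModel(obs_dim=8, act_dim=2, info_dim=16)
obs, act = torch.randn(4, 50, 8), torch.randn(4, 50, 2)
info = torch.randn(4, 50, 16)  # additional information, available at training time only
informed_loss(model, obs, act, info).backward()
```

At execution time, only the observations and actions are needed to compute the statistic h_t, which is the point of the training/execution distinction made in the paper.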
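For the warmup paper above, here is a rough sketch of the overall recipe, under strong assumptions: before task training, the recurrent cell is driven with random input sequences, left to settle with zero input, and its weights are adjusted so that different inputs settle into distinct hidden states. The pairwise-distance objective below is only a stand-in proxy for reachable multistability; the paper defines its own variability measure and procedure.

```python
import torch
import torch.nn as nn

def warmup(cell: nn.GRUCell, steps=200, batch=32, seq_len=20, settle=100, lr=1e-2):
    """Adjust the cell's weights before task training so that different random
    input sequences settle into distinct hidden states (a crude multistability proxy)."""
    opt = torch.optim.Adam(cell.parameters(), lr=lr)
    for _ in range(steps):
        x = torch.randn(batch, seq_len, cell.input_size)
        h = torch.zeros(batch, cell.hidden_size)
        for t in range(seq_len):                 # drive the network with random inputs
            h = cell(x[:, t], h)
        zeros = torch.zeros(batch, cell.input_size)
        for _ in range(settle):                  # let the dynamics settle without input
            h = cell(zeros, h)
        spread = torch.pdist(h).mean()           # spread of the settled states
        opt.zero_grad()
        (-spread).backward()                     # maximise the spread
        opt.step()
    return cell

# Warm up a GRU cell, then plug it into the downstream recurrent model as usual.
cell = warmup(nn.GRUCell(input_size=4, hidden_size=32))
```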
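The mutual-information analysis in the TMLR paper above compares RNN hidden states with beliefs. One common way to estimate such a mutual information between two continuous vectors is a MINE-style neural estimator based on the Donsker-Varadhan bound, sketched below on random placeholder data; the paper specifies the exact estimator and evaluation protocol, so treat this only as an illustration.

```python
import math
import torch
import torch.nn as nn

class MINE(nn.Module):
    """Neural estimator of a lower bound on I(H; B) (Donsker-Varadhan)."""

    def __init__(self, h_dim, b_dim, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(h_dim + b_dim, width), nn.ReLU(),
            nn.Linear(width, 1),
        )

    def lower_bound(self, h, b):
        joint = self.net(torch.cat([h, b], dim=-1)).mean()
        b_shuffled = b[torch.randperm(b.shape[0])]              # breaks the pairing
        marginal = self.net(torch.cat([h, b_shuffled], dim=-1))
        return joint - (torch.logsumexp(marginal, dim=0) - math.log(b.shape[0])).squeeze()

def estimate_mi(h, b, steps=500, lr=1e-3):
    mine = MINE(h.shape[1], b.shape[1])
    opt = torch.optim.Adam(mine.parameters(), lr=lr)
    for _ in range(steps):
        loss = -mine.lower_bound(h, b)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mine.lower_bound(h, b).item()   # in practice, evaluate on held-out pairs

# Placeholder data; hidden states and beliefs collected from an agent would go here.
hidden_states, beliefs = torch.randn(1024, 64), torch.randn(1024, 8)
print(estimate_mi(hidden_states, beliefs))
```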
Workshops
- Parallelizing Autoregressive Generation with Variational State Space Models.
Gaspard Lambrechts*, Yann Claes*, Pierre Geurts, Damien Ernst.
In ICML Workshop on Next Generation of Sequence Modeling Architectures, July 2024. Article. Poster.
Abstract: Attention-based models such as Transformers and recurrent models like state space models (SSMs) have emerged as successful methods for autoregressive sequence modeling. Although both enable parallel training, neither enables parallel generation due to their autoregressiveness. We propose the variational SSM (VSSM), a variational autoencoder (VAE) in which both the encoder and the decoder are SSMs. Since sampling the latent variables and decoding them with the SSM can be parallelized, both training and generation can be conducted in parallel. Moreover, the decoder recurrence allows generation to be resumed without reprocessing the whole sequence. Finally, we propose the autoregressive VSSM, which can be conditioned on a partial realization of the sequence, as is common in language generation tasks. Interestingly, the autoregressive VSSM still enables parallel generation. On toy problems (MNIST, CIFAR), we highlight the empirical gains in generation speed and show that the VSSM competes with traditional models (Transformer, Mamba SSM) in terms of generation quality. (A toy sketch of parallel SSM decoding is given after this list.)
- Belief States of POMDPs and Internal States of Recurrent RL Agents: an Empirical Analysis of their Mutual Information.
Gaspard Lambrechts, Adrien Bolland, Damien Ernst.
In European Workshop on Reinforcement Learning, September 2022. Article. Poster.
Abstract: Reinforcement learning aims to learn optimal policies from interaction with environments whose dynamics are unknown. Many methods rely on the approximation of a value function to derive near-optimal policies. In partially observable environments, these functions depend on the complete sequence of observations and past actions, called the history. In this work, we show empirically that recurrent neural networks trained to approximate such value functions internally filter the posterior probability distribution of the current state given the history, called the belief. More precisely, we show that, as a recurrent neural network learns the Q-function, its hidden states become more and more correlated with the beliefs of the state variables that are relevant to optimal control. This correlation is measured through their mutual information. In addition, we show that the expected return of an agent increases with the ability of its recurrent architecture to reach a high mutual information between its hidden states and the beliefs. Finally, we show that the mutual information between the hidden states and the beliefs of variables that are irrelevant for optimal control decreases through the learning process. In summary, this work shows that, in its hidden states, a recurrent neural network approximating the Q-function of a partially observable environment reproduces a sufficient statistic of the history that is correlated with the part of the belief that is relevant for taking optimal actions.
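As a toy illustration of the parallel-generation idea in the VSSM entry above: if the latent variables z_{1:T} are sampled independently from the prior and the decoder is a diagonal linear SSM, the whole output sequence can be computed in one parallel pass by unrolling the recurrence h_t = a * h_{t-1} + B z_t into a sum over past inputs. Everything below (shapes, the dense O(T²) unrolling, the linear read-out) is a simplified stand-in for the actual VSSM architecture, not its implementation.

```python
import torch

def parallel_ssm_decode(z, a, B, C):
    """Decode latents z with a diagonal linear SSM h_t = a * h_{t-1} + B z_t,
    computing all time steps at once by unrolling the recurrence."""
    # z: (T, z_dim), a: (d,), B: (d, z_dim), C: (x_dim, d)
    T = z.shape[0]
    u = z @ B.T                                         # (T, d) inputs to the SSM
    t = torch.arange(T)
    lag = (t[:, None] - t[None, :]).clamp(min=0)        # (T, T) time differences t - s
    kernel = a ** lag.unsqueeze(-1).float()             # (T, T, d) factors a^(t-s)
    causal = (t[:, None] >= t[None, :]).unsqueeze(-1)   # (T, T, 1) mask for s <= t
    h = (kernel * causal * u.unsqueeze(0)).sum(dim=1)   # h_t = sum_{s<=t} a^(t-s) u_s
    return h @ C.T                                      # (T, x_dim) decoded sequence

# Sample all latents at once from a standard normal prior, then decode in parallel.
T, z_dim, d, x_dim = 16, 4, 8, 3
z = torch.randn(T, z_dim)
a = 0.9 * torch.rand(d)                                 # stable diagonal dynamics
B, C = torch.randn(d, z_dim), torch.randn(x_dim, d)
x = parallel_ssm_decode(z, a, B, C)
print(x.shape)  # torch.Size([16, 3])
```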
Preprints
- Reinforcement Learning to Improve Delta Robot Throws for Sorting Scrap Metal.
Arthur Louette, Gaspard Lambrechts, Damien Ernst, Eric Pirard, Godefroid Dislaire.
Under review, June 2024. Preprint.
Abstract: This study proposes a novel approach based on reinforcement learning (RL) to enhance the sorting efficiency of scrap metal using delta robots and a Pick-and-Place (PaP) process, widely used in the industry. We use three classical model-free RL algorithms (TD3, SAC, and PPO) to reduce the time needed to sort metal scraps. We learn the release position and speed needed to throw an object into a bin instead of moving to the exact bin location, as with the classical PaP technique. Our contribution is threefold. First, we provide a new simulation environment for learning RL-based Pick-and-Throw (PaT) strategies for parallel grippers. Second, we use RL algorithms to learn this task in this environment, reaching 89.32% accuracy while increasing throughput by 51% in simulation. Third, we evaluate the performance of the RL algorithms and compare them to a PaP and a state-of-the-art PaT method, both in simulation and in reality, transferring our policies by learning only in simulation with domain randomisation and without fine-tuning in reality. This work shows the benefits of RL-based PaT compared to PaP or classical optimization-based PaT techniques used in the industry. The code is available at https://github.com/louettearthur/pick-and-throw.
- Behind the Myth of Exploration in Policy Gradients.
Adrien Bolland, Gaspard Lambrechts, Damien Ernst.
Under review, May 2024. Preprint.
Abstract: Policy-gradient algorithms are effective reinforcement learning methods for solving control problems with continuous state and action spaces. To compute near-optimal policies, it is essential in practice to include exploration terms in the learning objective. Although the effectiveness of these terms is usually justified by an intrinsic need to explore environments, we propose a novel analysis and distinguish two different implications of these techniques. First, they make it possible to smooth the learning objective and to eliminate local optima while preserving the global maximum. Second, they modify the gradient estimates, increasing the probability that the stochastic parameter updates eventually provide an optimal policy. In light of these effects, we discuss and empirically illustrate exploration strategies based on entropy bonuses, highlighting their limitations and opening avenues for future work on the design and analysis of such strategies. (A minimal sketch of an entropy-regularized policy-gradient update is given after this list.)
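The exploration terms discussed in the last preprint above are typically entropy bonuses added to the policy-gradient objective. Below is a minimal REINFORCE-style sketch of such an update in PyTorch, with placeholder network, data, and coefficient; the paper analyses the effect of this kind of term rather than prescribing this particular implementation.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

# Placeholder policy for a 4-dimensional state and 2 discrete actions.
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
entropy_coef = 0.01  # weight of the exploration term

def policy_gradient_step(states, actions, returns):
    # states: (N, 4), actions: (N,), returns: (N,) Monte-Carlo returns
    dist = Categorical(logits=policy(states))
    log_probs = dist.log_prob(actions)
    # Learning objective: expected return plus an entropy bonus (exploration term).
    loss = -(log_probs * returns).mean() - entropy_coef * dist.entropy().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# One update on random placeholder data, just to show the interface.
policy_gradient_step(torch.randn(32, 4), torch.randint(0, 2, (32,)), torch.randn(32))
```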
Talks
- Learning to Memorize the Past by Learning to Predict the Future.
VUB Reinforcement Learning Talks, November 17th, 2023. Slides.
- Informed POMDP: Leveraging Additional Information in Model-Based RL.
Reinforcement Learning Conference, August 12th, 2024. Slides.
Teaching
Code
Contact
Gaspard Lambrechts, I.106
Montefiore Institute, B28
Allée de la découverte, 10
4000 Liège
Belgium