Meta-Learning for Reinforcement Learning: Generalizing Agents to Unseen Scenarios
This tutorial was hosted at EASSS 2025 on September 1st, 2025. It consisted of a 1.5-hour theoretical session covering Reinforcement Learning (RL), Meta-Learning, and Meta-RL, followed by a 1.5-hour hands-on lab dedicated to exploring the problem of agent generalization.
Reinforcement Learning (RL) is a popular framework where agents learn optimal behaviors through trial-and-error interactions with an environment, but it often struggles to generalize to new or changing conditions. Meta-learning addresses this limitation by enabling models to adapt quickly using prior experience, even with minimal new data. Meta-Reinforcement Learning (Meta-RL) extends this concept to sequential decision-making: rather than relying on a fixed policy trained for a single environment, it focuses on learning adaptable components such as policy parameters, exploration strategies, or representation functions that allow an agent to adjust efficiently to new tasks.

This tutorial (i) covers the foundations of meta-learning and deep RL, (ii) introduces Model-Agnostic Meta-Learning (MAML) as a key Meta-RL technique, and (iii) concludes with a practical session using customizable code templates.
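The tutorial's own code templates are not reproduced here; as a rough illustration of the MAML inner/outer loop mentioned above, the following is a minimal PyTorch sketch on a toy sine-regression task (names such as `net_forward`, `adapt`, and `sample_task` are hypothetical). In the Meta-RL setting, the MSE losses would be replaced by policy-gradient surrogate losses computed from trajectories collected before and after adaptation, but the two-level optimization structure is the same.

```python
# Minimal MAML sketch (illustrative only): meta-learn an initialization that
# adapts to a new sine-regression task after a single inner gradient step.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def net_forward(x, params):
    """Tiny 2-layer MLP applied functionally so adapted params stay in the graph."""
    h = torch.tanh(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

def init_params(hidden=32):
    return {
        "w1": (torch.randn(1, hidden) * 0.1).requires_grad_(),
        "b1": torch.zeros(hidden, requires_grad=True),
        "w2": (torch.randn(hidden, 1) * 0.1).requires_grad_(),
        "b2": torch.zeros(1, requires_grad=True),
    }

def sample_task():
    """Each task is a sine wave with random amplitude and phase."""
    amp = torch.rand(1) * 4.0 + 0.1
    phase = torch.rand(1) * 3.14
    def sample_batch(n=10):
        x = torch.rand(n, 1) * 10.0 - 5.0
        return x, amp * torch.sin(x + phase)
    return sample_batch

def adapt(params, x, y, inner_lr=0.01):
    """Inner loop: one gradient step on the task; create_graph keeps it differentiable."""
    loss = F.mse_loss(net_forward(x, params), y)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {name: p - inner_lr * g for (name, p), g in zip(params.items(), grads)}

# Outer loop: update the shared initialization on post-adaptation performance.
params = init_params()
meta_opt = torch.optim.Adam(params.values(), lr=1e-3)

for step in range(1000):
    meta_loss = 0.0
    for _ in range(4):  # meta-batch of tasks
        sample_batch = sample_task()
        x_tr, y_tr = sample_batch()     # support set for adaptation
        x_val, y_val = sample_batch()   # query set for the meta-objective
        adapted = adapt(params, x_tr, y_tr)
        meta_loss = meta_loss + F.mse_loss(net_forward(x_val, adapted), y_val)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

Because `create_graph=True` is used in the inner step, the outer update differentiates through the adaptation itself (second-order MAML); dropping it would give the cheaper first-order approximation.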