The most up-to-date list of publications can be found on my Google Scholar page.
My extended CV is here.
Thesis
Selected Publications
Intrinsically Motivated Open-Ended Learning
Building agents that set their own goals and continuously discover new skills through curiosity, language and experience.
- Gaven, L., Carta, T., Romac, C., Colas, C., Lamprier, S., Sigaud, O., & Oudeyer, P-Y. (2025).
— MAGELLAN: Metacognitive Predictions of Learning Progress Guide Autotelic LLM Agents in Large Goal Spaces. ICML.
- Pourcel, J., Colas, C., Oudeyer, P-Y. & Teodorescu, L. (2023).
— ACES: Generating Diverse Programming Puzzles with Autotelic Language Models and Semantic Descriptors. NeurIPS.
- Du, Y., Watkins, O., Wang, Z., Colas, C., Darrell, T., Abbeel, P., Gupta, A. & Andreas, J. (2023).
— Guiding Pretraining in Reinforcement Learning with Large Language Models. ICML.
- Colas, C., Teodorescu, L., Oudeyer, P-Y., Yuan, X. & Côté, M-A. (2023).
— Augmenting Autotelic Agents with Large Language Models. CoLLAs.
- Colas, C., Karch, T., Lair, N., Dussoux, J-M., Moulin-Frier, C., Dominey, P. F. & Oudeyer, P-Y. (2020).
— Language as a Cognitive Tool to Imagine Goals in Curiosity-Driven Exploration. NeurIPS.
[Talk] [Code].
- Colas, C., Sigaud, O. & Oudeyer, P-Y. (2018).
— CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning. ICML.
[Video] [Talk] [Code].
Social and Cultural Learning
Modeling how humans learn to solve new tasks from direct experience and from the advice of other humans or AIs.
- Colas, C., Mills, T., Prystawski, B., Tessler, M. H., Goodman, N., Andreas, J. & Tenenbaum, J. B. (2025).
— Language and Experience: A Computational Model of Social Learning in Complex Tasks. CogSci, ICLR.
[Demo].
Language Model Reasoning
Investigating how language models can self-improve and reason through code to solve novel problems.
- Pourcel, J., Colas, C. & Oudeyer, P-Y. (2025).
— Self-Improving Language Models for Evolutionary Program Synthesis: A Case Study on ARC-AGI. ICML.
[ARC-AGI - 2nd paper prize] [Video].
- Zhang, C. E., Colas, C., Poesia, G., Tenenbaum, J. B. & Andreas, J. (2025).
— Code-Enabled Language Models Can Outperform Reasoning Models on Diverse Tasks. preprint.
Perspectives and Reviews
Surveys and position papers on autotelic agents, open-ended learning, and the role of language and culture in AI.
- Sigaud, O., Baldassarre, G., Colas, C., Doncieux, S., Duro, R., Oudeyer, P-Y., Perrin-Gilbert, N. & Santucci, V.G. (2023).
— A Definition of Open-Ended Learning Problems for Goal-Conditioned Agents. preprint.
- Sigaud, O., Caselles-Dupré, H., Colas, C., Akakzia A., Oudeyer, P-Y. & Chetouani, M. (2021).
— Towards Teachable Autotelic Agents. IEEE Transactions on Cognitive and Developmental Systems.
- Colas, C., Karch, T., Moulin-Frier, C. & Oudeyer, P-Y. (2022).
— Language and Culture Internalization for Human-Like AI. Nature Machine Intelligence.
[Slides].
- Colas, C., Karch, T., Sigaud, O. & Oudeyer, P-Y. (2021).
— Autotelic Agents with Intrinsically Motivated Goal-Conditioned Reinforcement Learning: a Short Survey. Journal of AI Research.
- Portelas, R., Colas, C., Weng, L., Hofmann, K. & Oudeyer, P-Y. (2020).
— Automatic Curriculum Learning for Deep RL: A Short Survey. IJCAI.
[Talk].
Statistics for RL
Establishing rigorous statistical practices for evaluating and comparing reinforcement learning algorithms.
- Colas, C., Sigaud, O. & Oudeyer, P-Y. (2019).
— A Hitchhiker’s Guide to Statistical Comparisons of Reinforcement Learning Algorithms. preprint.
[Code].
- Colas, C., Sigaud, O. & Oudeyer, P-Y. (2018).
— How Many Random Seeds? Statistical Power Analysis in Deep Reinforcement Learning Experiments. preprint.
Digital Art
Exploring creative applications of algorithms at the intersection of art and computation.