The most up-to-date list of publications can be found on my Google Scholar page.
My extended CV is here.
Thesis
Articles
Intrinsically Motivated Open-Ended Learning
- Molinaro, G., Colas, C., Oudeyer, P-Y., & Collins, A. (2024).
— Latent Learning Progress Drives Autonomous Goal Selection in Human Reinforcement Learning. NeurIPS 2024.
- Pourcel, J., Colas, C., Oudeyer, P-Y. & Teodorescu, L. (2023).
— ACES: Generating Diverse Programming Puzzles with Autotelic Language Models and Semantic Descriptors. NeurIPS 2024.
- Du, Y., Watkins, O., Wang, Z., Colas, C., Darrell, T., Abbeel, P., Gupta, A. & Andreas, J. (2023).
— Guiding Pretraining in Reinforcement Learning with Large Language Models. ICML 2023.
- Teodorescu, L., Colas, C., Bowers, M., Carta, T., & Oudeyer, P-Y. (2023).
— Codeplay: Autotelic Learning through Collaborative Self-Play in Programming Environments. In IMOL 2023 – Intrinsically Motivated Open-ended Learning Workshop at NeurIPS 2023.
- Colas, C., Teodorescu, L., Oudeyer, P.-Y., Yuan, X. & Côté, M-A. (2023).
— Augmenting Autotelic Agents with Large Language Models. CoLLAs 2023.
- Akakzia, A., Serris, O., Sigaud, O. & Colas, C. (2022).
— Help Me Explore: Minimal Social Interventions for Graph-Based Autotelic Agents. preprint.
[Code].
- Akakzia A., Colas, C., Oudeyer, P-Y., Chetouani, M. & Sigaud, O. (2020).
— Grounding Language to Autonomously-Acquired Skills via Goal Generation. ICLR 2021.
[Code].
- Colas, C., Karch, T., Lair, N., Dussoux, J. M., Moulin-Frier, C., Dominey, P. F., & Oudeyer, P. Y. (2020).
— Language as a Cognitive Tool to Imagine Goals in Curiosity-Driven Exploration. NeurIPS 2020.
[Talk] [Code].
- Lair, N., Colas, C., Portelas, R., Dussoux, J. M., Dominey, P. F., & Oudeyer, P. Y. (2019).
— Language Grounding through Social Interactions and Curiosity-Driven Multi-Goal Learning. Visually Grounded Interaction and Language Workshop at NeurIPS 2019.
- Portelas, R., Colas, C., Hofmann, K., & Oudeyer, P. Y. (2019).
— Teacher Algorithms for Curriculum Learning of Deep RL in Continuously Parameterized Environments. CoRL 2019.
[Code].
- Colas, C., Sigaud, O., Oudeyer, P. Y. (2018).
— CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning. ICML 2019.
[Video] [Talk] [Code].
- Fournier, P., Colas, C., Chetouani, M., & Sigaud, O. (2019).
— CLIC: Curriculum Learning and Imitation for feature Control in non-rewarding environments. IEEE Transactions on Cognitive and Developmental Systems.
- Colas, C., Sigaud, O., Oudeyer, P-Y. (2018).
— GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms. ICML 2018.
[Talk] [Code].
Perspectives and Reviews
- Sigaud, O., Baldassarre, G., Colas, C., Doncieux, S., Duro, R., Oudeyer, P-Y., Perrin-Gilbert, N. & Santucci, V.G. (2023).
— A Definition of Open-Ended Learning Problems for Goal-Conditioned Agents. preprint.
- Sigaud, O., Caselles-Dupré, H., Colas, C., Akakzia A., Oudeyer, P-Y. & Chetouani, M. (2021).
— Towards Teachable Autotelic Agents. IEEE Transactions on Cognitive and Developmental Systems.
- Colas, C., Karch, T., Moulin-Frier, C. & Oudeyer, P. Y. (2022).
— Language and Culture Internalization for Human-Like AI. Nature Machine Intelligence.
[Slides].
- Colas, C., Karch, T., Sigaud, O. & Oudeyer, P-Y. (2021).
— Autotelic Agents with Intrinsically Motivated Goal-Conditioned Reinforcement Learning: a Short Survey. Journal of AI Research.
- Portelas, R., Colas, C., Weng, L., Hofmann, K., Oudeyer, P. Y. (2020).
— Automatic Curriculum Learning For Deep RL: A Short Survey. IJCAI 2020.
[Talk].
Optimization and Epidemiology
- Colas, C., Hejblum, B., Rouillon, S., Thiébaut, R., Oudeyer, P-Y., Moulin-Frier, C. & Prague, M. (2020).
— EpidemiOptim: A Toolbox for the Optimization of Control Policies in Epidemiological Models. Journal of AI Research.
[Slides] [Demo] [Code].
Evolutionary Computation
- Pourcel, J., Colas, C., Oudeyer, P-Y. & Teodorescu, L. (2023).
— ACES: Generating Diverse Programming Puzzles with Autotelic Language Models and Semantic Descriptors. NeurIPS 2024.
- Colas, C., Huizinga, J., Madhavan, V., & Clune, J. (2020).
— Scaling MAP-Elites to Deep Neuroevolution. GECCO 2020.
[Slides] [Talk] [Code].
Other AI
- Perez, J., Kovač, G., Léger, C., Colas, C., Molinaro, G., Derex, M., Oudeyer, P-Y., & Moulin-Frier, C. (2024).
— When LLMs Play the Telephone Game: Cumulative Changes and Attractors in Iterated Cultural Transmissions. ICLR 2025.
- Srivastava, M., Colas, C., Sadigh, D., & Andreas, J. (2024).
— Policy Learning with a Language Bottleneck. preprint.
- Kovač, G., Sawayama, M., Portelas, R., Colas, C., Dominey, P.F. & Oudeyer, P-Y. (2023).
— Large Language Models as Superpositions of Cultural Perspectives. preprint.
Statistics for RL
- Colas, C., Sigaud, O., Oudeyer, P-Y. (2019).
— A Hitchhiker’s Guide to Statistical Comparisons of Reinforcement Learning Algorithms. preprint.
[Code].
- Colas, C., Sigaud, O., Oudeyer, P-Y. (2018).
— How Many Random Seeds? Statistical Power Analysis in Deep Reinforcement Learning Experiments. preprint.
Brain-Computer Interfaces
Digital Art