Filippos Christianos

University of Edinburgh, School of Informatics.

Previously at NVIDIA Research.

I am in the final stages of my PhD in the CDT for Robotics and Autonomous Agents, advised by Stefano Albrecht at the University of Edinburgh, and I am a member of the Autonomous Agents Research Group.

My PhD research is in the area of Multi-Agent Deep Reinforcement Learning. In particular, I study how multiple agents can efficiently explore and learn in environments with sparse rewards.

I am the author and maintainer of the Multi-Robot Warehouse environment for multi-agent RL research, and I also developed and maintain the Python version of Level-based Foraging. Our group has been using both environments to develop new and exciting algorithms for MARL. I am the first author of two such algorithms, Shared Experience Actor-Critic (SEAC) and Selective Parameter Sharing (SePS), published at NeurIPS 2020 and ICML 2021 respectively.
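For anyone who wants to try these environments, here is a minimal usage sketch. The package names, environment IDs, and Gym API version shown are assumptions and may differ between releases; please check the repositories for the up-to-date interface.

```python
# Hypothetical quick-start sketch for the two environments mentioned above.
# Environment IDs and the Gym API version are assumptions and may differ
# between releases of the packages.
import gym
import rware        # Multi-Robot Warehouse; registers "rware-..." IDs on import
import lbforaging   # Level-based Foraging; registers "Foraging-..." IDs on import

# Multi-Robot Warehouse: two agents on the "tiny" layout.
env = gym.make("rware-tiny-2ag-v1")
obs = env.reset()                      # one observation per agent
actions = env.action_space.sample()    # tuple of per-agent discrete actions
obs, rewards, dones, info = env.step(actions)

# Level-based Foraging: 8x8 grid, 2 players, 1 food item.
env = gym.make("Foraging-8x8-2p-1f-v2")
obs = env.reset()
obs, rewards, dones, info = env.step(env.action_space.sample())
```

Both environments follow the familiar multi-agent Gym convention, returning one observation, reward, and done flag per agent at each step.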

Finally, I am a co-author of EPyMARL, a library for MARL that has been widely used by the community.

Keywords: Machine Learning, Deep Reinforcement Learning (RL), Multi-agent Systems, Exploration in RL.

news

Jan 18, 2023 :newspaper_roll: My NVIDIA internship resulted in “Planning with Occluded Traffic Agents using Bi-Level Variational Occlusion Models”, which was just accepted at ICRA 2023!
Oct 29, 2022 Preprints of my two new papers are online: my NVIDIA internship paper on AV occlusions, and Pareto Actor-Critic, a new algorithm for MARL.
Jun 23, 2022 I joined NVIDIA Research for a three-month internship on autonomous vehicles!
Dec 20, 2021 :newspaper_roll: Our paper titled “Decoupling Exploitation and Intrinsically-Motivated Exploration in Reinforcement Learning” has been accepted at AAMAS 2022!
Sep 27, 2021 :newspaper_roll: Another paper accepted at NeurIPS 2021: Agent Modelling under Partial Observability for Deep Reinforcement Learning.
Jul 29, 2021 :newspaper_roll: Our benchmarking paper for MARL has been accepted at NeurIPS 2021: Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks.
May 10, 2021 :newspaper_roll::newspaper_roll: Two new papers accepted at ICML 2021: Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing and Towards Open Ad Hoc Teamwork Using Graph-based Policy Learning.
Dec 20, 2020 :robot: My blog post on two new environments for MARL has just been posted on our group's webpage.
Dec 8, 2020 :newspaper_roll: Our paper, Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning, has been accepted and published in Neural Information Processing Systems (NeurIPS 2020).

selected publications

  1. NeurIPS
    Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning
    Filippos Christianos, Lukas Schäfer, and Stefano Albrecht
    In Advances in Neural Information Processing Systems, 2020
  2. ICML
    Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing
    Filippos Christianos, Georgios Papoudakis, Arrasy Rahman, and Stefano Albrecht
    In Proceedings of the 38th International Conference on Machine Learning, 2021
  3. NeurIPS
    Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks
    Georgios Papoudakis*, Filippos Christianos*, Lukas Schäfer, and Stefano Albrecht
    In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, 2021
  4. ICRA
    Planning with Occluded Traffic Agents using Bi-Level Variational Occlusion Models
    Filippos Christianos, Peter Karkus, Boris Ivanovic, Stefano Albrecht, and Marco Pavone
    In IEEE International Conference on Robotics and Automation, 2023