MOMAland: A Set of Benchmarks for Multi-Objective Multi-Agent Reinforcement Learning
Abstract
Many challenging tasks such as managing traffic systems, electricity grids, or supply chains involve complex decision-making processes that must balance multiple conflicting objectives and coordinate the actions of various independent decision-makers (DMs). One perspective for formalising and addressing such tasks is multi-objective multi-agent reinforcement learning (MOMARL). MOMARL broadens reinforcement learning (RL) to problems with multiple agents each needing to consider multiple objectives in their learning process. In reinforcement learning research, benchmarks are crucial in facilitating progress, evaluation, and reproducibility. The significance of benchmarks is underscored by the existence of numerous benchmark frameworks developed for various RL paradigms, including single-agent RL (e.g., Gymnasium), multi-agent RL (e.g., PettingZoo), and single-agent multi-objective RL (e.g., MO-Gymnasium). To support the advancement of the MOMARL field, we introduce MOMAland, the first collection of standardised environments for multi-objective multi-agent reinforcement learning. MOMAland addresses the need for comprehensive benchmarking in this emerging field, offering over 10 diverse environments that vary in the number of agents, state representations, reward structures, and utility considerations. To provide strong baselines for future research, MOMAland also includes algorithms capable of learning policies in such settings.
Community
MOMAland is the first multi-objective multi-agent RL library! In this setting, each agent learns policies while balancing multiple (conflicting) objectives.
Essentially, MOMAland extends PettingZoo to multi-objective rewards, or equivalently MO-Gymnasium to multi-agent settings. The library is designed to stay as close as possible to PettingZoo's API, so existing utilities (e.g., some of its wrappers) can be reused.
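For instance, interacting with an environment looks just like PettingZoo's Parallel API, except that rewards are per-objective vectors rather than scalars. A minimal sketch (the beach-domain module path is an assumption here; check the documentation for the exact environment names):

```python
# Minimal sketch of the PettingZoo-style Parallel API with vector rewards.
# The module path `momaland.envs.beach` / `mobeach_v0` is an assumption;
# see https://momaland.farama.org/ for the exact environment names.
import numpy as np
from momaland.envs.beach import mobeach_v0

env = mobeach_v0.parallel_env()
observations, infos = env.reset(seed=42)

while env.agents:
    # Sample a random action for every live agent, exactly as in PettingZoo.
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
    # The key difference from PettingZoo: each reward is a vector with
    # one entry per objective (here assumed to be a NumPy array).
    for agent, reward in rewards.items():
        assert isinstance(reward, np.ndarray) and reward.ndim == 1
env.close()
```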
The library currently contains a dozen environments, wrappers for handling vectorial rewards, and a few learning algorithms. These provide great baselines for this emerging field of research!
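The library's own reward wrappers are not reproduced here; instead, as an illustration of what such a wrapper does, below is a hypothetical linear-scalarisation wrapper written against the Parallel API (a sketch, not MOMAland's implementation):

```python
# Hypothetical example (not MOMAland's own wrapper): collapsing vector
# rewards into scalars via a fixed linear weighting, so that standard
# single-objective MARL algorithms can be applied.
import numpy as np


class LinearScalarisation:
    """Wraps a parallel multi-objective env and returns scalar rewards."""

    def __init__(self, env, weights):
        self.env = env
        self.weights = np.asarray(weights)  # one weight per objective

    def reset(self, **kwargs):
        return self.env.reset(**kwargs)

    def step(self, actions):
        obs, rewards, terms, truncs, infos = self.env.step(actions)
        # Dot each agent's reward vector with the weights -> scalar reward.
        scalar_rewards = {a: float(np.dot(self.weights, r)) for a, r in rewards.items()}
        return obs, scalar_rewards, terms, truncs, infos

    def __getattr__(self, name):
        # Delegate everything else (agents, action_space, ...) to the wrapped env.
        return getattr(self.env, name)
```

Scalarising like this reduces the problem to standard single-objective MARL, which is why such wrappers make existing PettingZoo-based algorithms reusable.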
It also brings open challenges. In cooperative settings with unknown preferences among objectives, solution concepts resemble those from single-agent MORL. In general settings with known preferences, they align with single-objective MARL. But for general settings with unknown preferences, the appropriate solution concepts are still an open question!
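To make the MORL-style solution concept concrete: when preferences are unknown, one typically seeks a set of Pareto-optimal policies rather than a single optimum. A small, self-contained sketch (not taken from the library) of filtering candidate policies' per-objective returns down to the Pareto front:

```python
# Self-contained sketch: filtering candidate policies down to the
# Pareto-optimal set of their per-objective expected returns.
import numpy as np


def pareto_dominates(a, b):
    """True if return vector `a` is at least as good as `b` on every
    objective and strictly better on at least one (maximisation)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))


def pareto_front(returns):
    """Keep only the non-dominated return vectors."""
    return [r for r in returns
            if not any(pareto_dominates(other, r) for other in returns)]


# Example: three candidate joint policies, two objectives.
candidates = [np.array([1.0, 3.0]), np.array([2.0, 2.0]), np.array([1.0, 2.0])]
print(pareto_front(candidates))  # [1., 3.] and [2., 2.] survive; [1., 2.] is dominated
```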
Excited to try it out? You can install MOMAland with a simple pip install momaland.
Documentation: https://momaland.farama.org/
Code: https://github.com/Farama-Foundation/momaland
Paper: https://arxiv.org/abs/2407.16312
Automated message from the Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- Constrained Reinforcement Learning with Average Reward Objective: Model-Based and Model-Free Algorithms (2024)
- RRLS: Robust Reinforcement Learning Suite (2024)
- RobocupGym: A challenging continuous control benchmark in Robocup (2024)
- Diffusion Models for Offline Multi-agent Reinforcement Learning with Safety Constraints (2024)