Handling multiple objectives and multiple agents is a salient characteristic of many, if not most, real-world decision problems. Mathematically, this translates to agents receiving a reward vector rather than a scalar reward. This seemingly minor change fundamentally transforms the problem, shaping both the optimisation criteria and the solution concepts. For example, the well-known game-theoretic result that every finite (single-objective) normal form game has a mixed-strategy Nash equilibrium no longer holds when agents care about more than one objective.
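To make this concrete, the shift can be written down as a multi-objective normal form game; the notation below is a standard formalisation from the multi-objective game theory literature rather than text from the tutorial itself:

\[
G = \bigl(N, \{\mathcal{A}_i\}_{i \in N}, \{\mathbf{r}_i\}_{i \in N}\bigr),
\qquad
\mathbf{r}_i : \mathcal{A}_1 \times \cdots \times \mathcal{A}_n \to \mathbb{R}^d,
\qquad
u_i : \mathbb{R}^d \to \mathbb{R},
\]

where each agent $i$ receives a $d$-dimensional payoff vector $\mathbf{r}_i(a)$ for the joint action $a$ and derives scalar utility through a (possibly nonlinear) utility function $u_i$. When $u_i$ is nonlinear and applied to the expected payoff vector, an agent's utility is no longer linear in its own mixing probabilities, so the best-response correspondence can fail the convexity conditions underlying Nash's fixed-point argument, and an equilibrium need not exist.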

In this tutorial, we will start with what it means to care about more than one aspect of a solution and why this is pertinent for modelling multi-agent settings. We will examine what agents should optimise for in multi-objective settings and discuss the different underlying assumptions, culminating in a taxonomy of multi-objective multi-agent settings and the accompanying solution concepts. We will then survey existing results and algorithmic approaches from evolutionary and multi-objective multi-agent reinforcement learning.
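As one concrete instance of the optimisation criteria and assumptions at stake, the multi-objective decision making literature distinguishes two ways of combining an agent's utility function $u$ with its vector-valued return; the notation below (policy $\pi$, discount factor $\gamma$, reward vectors $\mathbf{r}_t$) is our shorthand for the standard formulation:

\[
\underbrace{\max_{\pi}\; u\!\left(\mathbb{E}_{\pi}\Bigl[\sum_{t} \gamma^{t}\, \mathbf{r}_t\Bigr]\right)}_{\text{scalarised expected returns (SER)}}
\qquad \text{vs.} \qquad
\underbrace{\max_{\pi}\; \mathbb{E}_{\pi}\Bigl[u\!\left(\sum_{t} \gamma^{t}\, \mathbf{r}_t\right)\Bigr]}_{\text{expected scalarised returns (ESR)}}
\]

The two criteria coincide for linear $u$ but diverge for nonlinear $u$: whether a user cares about expected outcomes across many runs (SER) or about the utility of each individual outcome (ESR) is precisely the kind of assumption that drives the taxonomy of settings and solution concepts.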