Published July 2021 | Published + Supplemental Material + Accepted Version
Journal Article | Open Access

Coach-Player Multi-agent Reinforcement Learning for Dynamic Team Composition

Abstract

In real-world multi-agent systems, agents with different capabilities may join or leave without altering the team's overarching goals. Coordinating teams with such dynamic composition is challenging: the optimal team strategy varies with the composition. We propose COPA, a coach-player framework to tackle this problem. We assume the coach has a global view of the environment and coordinates the players, who only have partial views, by distributing individual strategies. Specifically, we 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players. We validate our methods on a resource collection task, a rescue game, and the StarCraft micromanagement tasks. We demonstrate zero-shot generalization to new team compositions. Our method achieves comparable or better performance than the setting where all players have a full view of the environment. Moreover, we see that the performance remains high even when the coach communicates as little as 13% of the time using the adaptive communication strategy.
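
The abstract describes the approach only at a high level. As a rough illustration of the adaptive-communication idea, the toy sketch below has a coach recompute per-player strategy vectors from a global view and broadcast a player's new vector only when it has drifted far enough from the one that player last received. The random-projection "coach", the distance-based gate, and all names (coach_strategy, adaptive_broadcast, threshold) are illustrative assumptions for this record, not the paper's actual networks or communication rule.

import numpy as np

rng = np.random.default_rng(0)
n_players, dim, state_dim = 3, 8, 16

# Stand-in for the coach network: a fixed random projection from the
# coach's global view to one strategy vector per player (illustrative only).
W = rng.standard_normal((state_dim, n_players * dim)) * 0.1

def coach_strategy(global_state):
    return (global_state @ W).reshape(n_players, dim)

def adaptive_broadcast(strategies, last_sent, threshold):
    """Broadcast a player's new strategy only when it has drifted far enough
    from the one that player last received (a simple distance-gated rule)."""
    mask = np.linalg.norm(strategies - last_sent, axis=1) > threshold
    sent = np.where(mask[:, None], strategies, last_sent)
    return sent, mask

last_sent = np.zeros((n_players, dim))
talk_steps = 0
for t in range(100):
    # A slowly drifting global state stands in for the changing team/environment.
    global_state = np.sin(0.05 * t + np.arange(state_dim))
    last_sent, mask = adaptive_broadcast(coach_strategy(global_state),
                                         last_sent, threshold=0.3)
    talk_steps += int(mask.any())

print(f"coach communicated on {talk_steps}/100 steps")

Under a gate like this, players keep acting on the last strategy they received between broadcasts, which is how communication can drop to a small fraction of timesteps while performance stays high.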

Additional Information

© 2021 The authors.

Attached Files

Published - liu21m.pdf

Accepted Version - 2105.08692.pdf

Supplemental Material - liu21m-supp.pdf

Files (5.0 MB total)

md5:bb623bcb6adab2644834689486ce1363 (216.9 kB)
md5:cffeefaab6903769b552525479a70196 (2.4 MB)
md5:ad2b453b5aa52ce4f7c128aa99ce78f1 (2.4 MB)

Additional details

Created: August 20, 2023
Modified: October 23, 2023