Published July 15, 2022 | Accepted Version
Report | Open

Reinforcement Learning in Factored Action Spaces using Tensor Decompositions

Abstract

We present an extended abstract for the previously published work TESSERACT [Mahajan et al., 2021], which proposes a novel solution for Reinforcement Learning (RL) in large, factored action spaces using tensor decompositions. The goal of this abstract is twofold: (1) to garner greater interest among the tensor research community in creating methods and analyses for approximate RL, and (2) to elucidate the generalised setting of factored action spaces where tensor decompositions can be used. We use the cooperative multi-agent reinforcement learning scenario as the exemplary setting, where the action space is naturally factored across agents and learning becomes intractable without resorting to approximation of the underlying hypothesis space of candidate solutions.
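
The following is a minimal sketch (not taken from the paper) of the core idea: for a fixed state, the joint action-value function over a factored action space can be viewed as a tensor whose modes are the individual agents' action sets, and a low-rank CP (CANDECOMP/PARAFAC) representation keeps the parameter count linear in the number of agents rather than exponential. The sizes, rank, and random factors below are illustrative assumptions, not TESSERACT's learned quantities.

import numpy as np

# Illustrative sizes (assumptions, not from the paper): 4 agents, 5 actions each,
# and a rank-3 CP approximation of the joint Q-tensor for one fixed state.
n_agents, n_actions, rank = 4, 5, 3
rng = np.random.default_rng(0)

# One factor matrix per agent: rows index that agent's actions, columns index rank.
factors = [rng.normal(size=(n_actions, rank)) for _ in range(n_agents)]

def q_joint(joint_action):
    # Q(a_1, ..., a_n) = sum_r prod_i U_i[a_i, r]
    prod = np.ones(rank)
    for agent, action in enumerate(joint_action):
        prod *= factors[agent][action]
    return prod.sum()

# The full joint Q-tensor would hold n_actions ** n_agents = 625 entries here,
# while the CP factors use only n_agents * n_actions * rank = 60 parameters.
print(q_joint((0, 2, 1, 4)))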

Additional Information

Attribution 4.0 International (CC BY 4.0)

Attached Files

Accepted Version - 2110.14538.pdf (734.9 kB)
md5:e1230ead418eb01f6e83e0e71178d1b2

Additional details

Created: August 20, 2023
Modified: October 24, 2023