Our paper on distributional deep RL with GMMs has been accepted to ICRA 2019

[2019.01.30]

The following paper has been accepted to the IEEE International Conference on Robotics and Automation (ICRA 2019):

  • Distributional Deep Reinforcement Learning with a Mixture of Gaussians by Yunho Choi, Kyungjae Lee, and Songhwai Oh
    • Abstract: In this paper, we propose a novel distributional reinforcement learning (RL) method that models the distribution of the sum of rewards using a mixture density network. Recently, it has been shown that modeling the randomness of the return distribution leads to better performance in Atari games and control tasks. Despite its success, the prior work has limitations that stem from the use of a discrete distribution. First, it requires a projection step and a softmax parameterization of the distribution, since it minimizes a KL divergence loss. Second, its performance depends on discretization hyperparameters, such as the number of atoms and the bounds of the support, which require domain knowledge. We mitigate these problems with the proposed parameterization, a mixture of Gaussians. Furthermore, we propose a new distance metric, the Jensen-Tsallis distance, which allows the distance between two mixtures of Gaussians to be computed in closed form. We have conducted various experiments to validate the proposed method, including Atari games and autonomous vehicle driving.
    • Video
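To illustrate why a mixture-of-Gaussians parameterization admits a closed-form distance, here is a minimal sketch of the order-2 case, where a Jensen-Tsallis-style divergence reduces to one quarter of the squared L2 distance between the two densities, computable via the standard Gaussian product identity ∫N(x; m₁, s₁²)N(x; m₂, s₂²)dx = N(m₁; m₂, s₁² + s₂²). This is our own illustration, not the paper's code: the function names are ours, and the paper's exact definition (entropic index, weighting) may differ.

```python
import math

def gaussian_overlap(m1, s1, m2, s2):
    """Closed-form integral of the product of two 1-D Gaussian pdfs:
    int N(x; m1, s1^2) N(x; m2, s2^2) dx = N(m1; m2, s1^2 + s2^2)."""
    var = s1 * s1 + s2 * s2
    return math.exp(-((m1 - m2) ** 2) / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def mixture_cross_term(p, q):
    """sum_{i,j} w_i v_j * overlap of component pairs, for mixtures
    given as lists of (weight, mean, stddev) tuples."""
    return sum(w * v * gaussian_overlap(mp, sp, mq, sq)
               for (w, mp, sp) in p
               for (v, mq, sq) in q)

def jensen_tsallis_2(p, q):
    """Order-2 Jensen-Tsallis distance between two 1-D Gaussian mixtures,
    which reduces to (1/4) * squared L2 distance between the densities."""
    return 0.25 * (mixture_cross_term(p, p)
                   - 2.0 * mixture_cross_term(p, q)
                   + mixture_cross_term(q, q))
```

Because every term is a pairwise Gaussian overlap, no projection step or discretized support is needed, which is the practical advantage the abstract points to.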