Two papers from RLLAB have been accepted to ICRA 2020

[2020.01.23]

The following papers have been accepted to the IEEE International Conference on Robotics and Automation (ICRA 2020):

  • Learning to Walk a Tripod Mobile Robot Using Nonlinear Soft Vibration Actuators with Entropy Adaptive Reinforcement Learning by Jae In Kim, Mineui Hong, Kyungjae Lee, DongWook Kim, Yong-Lae Park, and Songhwai Oh (RA-L option)
    • Abstract: Soft mobile robots have shown great potential in unstructured and confined environments by taking advantage of their excellent adaptability and high dexterity. However, there are several issues to be addressed, such as actuating speeds and controllability, in soft robots. In this paper, a new vibration actuator is proposed using the nonlinear stiffness characteristic of a hyperelastic material, which creates continuous vibration of the actuator. By integrating three proposed actuators, we also present an advanced soft mobile robot with high degrees of freedom of movement. However, since the dynamic model of the soft mobile robot is generally hard to obtain (intractable), it is difficult to design a controller for the robot. In this regard, we present a method to train a controller, using a novel reinforcement learning (RL) algorithm called adaptive soft actor-critic (ASAC). ASAC gradually reduces a parameter called an entropy temperature, which regulates the entropy of the control policy. In this way, the proposed method can narrow down the search space during training, and reduce the duration of demanding data collection processes in real-world experiments. For the verification of the robustness and the controllability of our robot and the RL algorithm, experiments for zig-zagging path tracking and obstacle avoidance were conducted, and the robot successfully finished the missions with only an hour of training time.
    • Video
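    • A minimal code sketch of the entropy-temperature idea described in the abstract above, assuming a SAC-style actor update with a simple linear temperature decay standing in for ASAC's actual adaptation rule; the toy policy, stand-in critic values, and schedule constants are illustrative assumptions, not details from the paper.

```python
# Minimal sketch, assuming a SAC-style actor update with a linearly decaying
# entropy temperature (the "gradually reduced" alpha described in the abstract).
# ToyPolicy, the random states, the stand-in Q-values, and the schedule constants
# are all illustrative assumptions, not details from the paper.
import torch
import torch.nn as nn

class ToyPolicy(nn.Module):
    """Tiny Gaussian policy: state -> Normal(mean, std) over a 1-D action."""
    def __init__(self, state_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(), nn.Linear(32, 2))

    def forward(self, state):
        mean, log_std = self.net(state).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.clamp(-5, 2).exp())

def entropy_temperature(step, alpha_start=0.2, alpha_end=0.01, decay_steps=1000):
    """Assumed schedule: linearly anneal alpha from alpha_start to alpha_end."""
    frac = min(step / decay_steps, 1.0)
    return alpha_start + frac * (alpha_end - alpha_start)

policy = ToyPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

for step in range(1000):
    alpha = entropy_temperature(step)          # shrinking exploration bonus
    states = torch.randn(64, 4)                # placeholder batch of states
    dist = policy(states)
    actions = dist.rsample()                   # reparameterized action samples
    log_prob = dist.log_prob(actions).sum(-1)
    q_values = -(actions.squeeze(-1) ** 2)     # stand-in for a learned critic
    # SAC actor objective: maximize E[Q + alpha * entropy], i.e. minimize:
    actor_loss = (alpha * log_prob - q_values).mean()
    optimizer.zero_grad()
    actor_loss.backward()
    optimizer.step()
```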

  • Hierarchical 6-DoF Grasping with Approaching Direction Selection by Yunho Choi, Hogun Kee, Kyungjae Lee, Jaegu Choy, and Songhwai Oh
    • Abstract: In this paper, we tackle the problem of 6-DoF grasp detection which is crucial for robot grasping in cluttered real-world scenes. Unlike existing approaches which synthesize 6-DoF grasp data sets and train grasp quality networks with input grasp representations based on point clouds, we rather take a novel hierarchical approach which does not use any 6-DoF grasp data. We cast the 6-DoF grasp detection problem as a robot arm approaching direction selection problem using the existing 4-DoF grasp detection algorithm, by exploiting a fully convolutional grasp quality network for evaluating the quality of an approaching direction. To select the best approaching direction with the highest grasp quality, we propose an approaching direction selection method which leverages a geometry-based prior and a derivative-free optimization method. Specifically, we optimize the direction iteratively using the cross entropy method with initial samples of surface normal directions. Our algorithm efficiently finds diverse 6-DoF grasps by the novel way of evaluating and optimizing approaching directions. We validate that the proposed method outperforms other selection methods in scenarios with cluttered objects in a physics-based simulator. Finally, we show that our method outperforms the state-of-the-art grasp detection method in real-world experiments with robots.
    • Video
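    • A minimal code sketch of the approaching-direction search described in the abstract above, assuming a cross entropy method (CEM) loop over unit directions seeded by surface normals and scored by a placeholder grasp-quality function standing in for the paper's learned grasp quality network; all names and constants are illustrative assumptions.

```python
# Minimal sketch, assuming a cross entropy method (CEM) search over unit
# approaching directions, seeded by surface normals and scored by a placeholder
# grasp_quality() that stands in for the learned grasp quality network.
# All names and constants are illustrative assumptions, not the paper's code.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def grasp_quality(directions):
    """Placeholder black-box score (higher is better); prefers directions near +z."""
    return directions @ np.array([0.0, 0.0, 1.0])

def cem_direction_search(surface_normals, iters=10, samples=64, elites=8, init_std=0.3):
    """CEM over unit approach directions, initialized from the surface-normal prior."""
    rng = np.random.default_rng(0)
    mean = normalize(surface_normals.mean(axis=0))     # start near the average normal
    std = np.full(3, init_std)
    for _ in range(iters):
        candidates = normalize(rng.normal(mean, std, size=(samples, 3)))
        scores = grasp_quality(candidates)
        elite = candidates[np.argsort(scores)[-elites:]]   # keep the top-k directions
        mean = normalize(elite.mean(axis=0))               # refit the sampling Gaussian
        std = elite.std(axis=0) + 1e-3
    return mean

# Usage with fake surface normals standing in for point-cloud geometry:
normals = normalize(np.random.default_rng(1).normal(size=(20, 3)))
print("selected approaching direction:", cem_direction_search(normals))
```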