Dibya Ghosh, Abhishek Gupta, Sergey Levine
NeurIPS Deep RL Workshop 2018
In this paper, we present ARCs, a representation learning algorithm that captures functionally relevant elements of state. ARCs relate the distance between states in latent space to the actions required to move between them, which implicitly captures system dynamics and ignores uncontrollable factors of variation. ARCs are useful for exploration, as features for policies, and for building hierarchies.
Justin Fu*, Avi Singh*, Dibya Ghosh, Larry Yang, Sergey Levine
In this paper, we present VICE, a method that generalizes inverse optimal control to learning reward functions from goal examples. Because it requires only examples of goal states, rather than full demonstration trajectories, VICE is well suited to real-world robotics tasks, where reward specification is difficult.
Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, Sergey Levine
In this paper, we present a method for scaling model-free deep reinforcement learning to tasks with high stochasticity in the initial state and task distributions. We demonstrate our method on a suite of challenging sparse-reward manipulation tasks that prior work could not solve.