Here's my Google Scholar. My Erdős number is 3. My Bacon number is lamentably still undefined.
Learning Actionable Representations with Goal-Conditioned Policies

Dibya Ghosh, Abhishek Gupta, Sergey Levine
NeurIPS Deep RL Workshop 2018

ArXiv Website

In this paper, we present ARC, a representation learning algorithm that captures the functionally relevant elements of state. ARC relates the distance between states in latent space to the actions required to travel between them, which implicitly captures system dynamics and ignores uncontrollable factors. ARC representations are useful for exploration, as features for policies, and for building hierarchies.
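A minimal toy sketch of the core idea, not the paper's actual objective: embed states so that latent distance mirrors the discrepancy between the actions used to reach each state (all names, the linear embedding, and the squared-error loss are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(states, W):
    """Illustrative linear embedding of raw states into latent space."""
    return states @ W

def action_discrepancy(actions_a, actions_b):
    """Mean squared difference between the actions that reach two states."""
    return np.mean((actions_a - actions_b) ** 2, axis=-1)

def arc_style_loss(states_a, states_b, actions_a, actions_b, W):
    """Penalize mismatch between latent distance and action discrepancy."""
    d_latent = np.linalg.norm(embed(states_a, W) - embed(states_b, W), axis=-1)
    d_action = action_discrepancy(actions_a, actions_b)
    return float(np.mean((d_latent - d_action) ** 2))

# Toy data: 32 pairs of 4-dim states and the 2-dim actions used to reach them.
sa, sb = rng.normal(size=(32, 4)), rng.normal(size=(32, 4))
aa, ab = rng.normal(size=(32, 2)), rng.normal(size=(32, 2))
W = rng.normal(size=(4, 3)) * 0.1
loss = arc_style_loss(sa, sb, aa, ab, W)
```

Minimizing such a loss over `W` would push states that require similar actions close together in latent space, while factors of the state that no action affects contribute nothing to the objective.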

Variational Inverse Control with Events

Justin Fu*, Avi Singh*, Dibya Ghosh, Larry Yang, Sergey Levine
NeurIPS 2018

ArXiv Website

In this paper, we present VICE, a method that generalizes inverse optimal control to learning reward functions from goal examples. By requiring only examples of goal states, rather than full demonstration trajectories, VICE is well suited for real-world robotics tasks, where reward specification is difficult.
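A toy sketch of the underlying recipe, under my own simplifying assumptions (a plain logistic-regression classifier rather than the paper's event-based formulation): train a classifier to separate goal examples from states the policy visits, and use its log-odds as a shaped reward.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_goal_classifier(goal_states, policy_states, steps=500, lr=0.1):
    """Logistic regression: goal examples labeled 1, policy states labeled 0."""
    X = np.vstack([goal_states, policy_states])
    y = np.concatenate([np.ones(len(goal_states)), np.zeros(len(policy_states))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        grad = sigmoid(X @ w + b) - y        # gradient of the log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def reward(state, w, b):
    """Classifier log-odds serve as the learned reward signal."""
    return float(state @ w + b)

# Toy data: goal examples clustered at +2, off-goal states at -2.
goal = rng.normal(loc=2.0, size=(64, 3))
other = rng.normal(loc=-2.0, size=(64, 3))
w, b = train_goal_classifier(goal, other)
```

States resembling the goal examples then receive higher reward than states far from them, with no trajectory-level supervision required.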

Divide-and-Conquer Reinforcement Learning

Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, Sergey Levine
ICLR 2018

ArXiv Website Code

In this paper, we present a method for scaling model-free deep reinforcement learning to tasks with highly stochastic initial state and task distributions. We demonstrate our method on a suite of challenging sparse-reward manipulation tasks that prior work had not solved.
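The divide-and-conquer idea can be sketched in miniature as follows; the partitioning rule, the linear local policies, and the least-squares distillation step are all illustrative stand-ins of my own, not the paper's actual components: split the initial states into slices, solve each slice with a local policy, then distill the local policies into one global policy.

```python
import numpy as np

rng = np.random.default_rng(2)

def partition(states):
    """Crude two-way slice of initial states by the sign of dimension 0."""
    return (states[:, 0] > 0).astype(int)

def fit_local_policy(states, actions):
    """Least-squares linear policy (state -> action) for one slice."""
    return np.linalg.lstsq(states, actions, rcond=None)[0]

def distill(states, local_policies, labels):
    """Fit a single global policy to mimic each slice's local policy."""
    targets = np.vstack([states[i] @ local_policies[labels[i]]
                         for i in range(len(states))])
    return np.linalg.lstsq(states, targets, rcond=None)[0]

# Toy data: 50 initial states (4-dim) with expert-like actions (2-dim).
states = rng.normal(size=(50, 4))
actions = rng.normal(size=(50, 2))
labels = partition(states)
policies = [fit_local_policy(states[labels == j], actions[labels == j])
            for j in (0, 1)]
W_global = distill(states, policies, labels)
```

Each local problem sees far less variability than the full task, which is what makes the slices tractable; the distillation step then recovers a single policy that works across the whole initial state distribution.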