Here's my Google Scholar. My Erdős number is 3. My Bacon number is lamentably still undefined.
Variational Inverse Control with Events

Justin Fu*, Avi Singh*, Dibya Ghosh, Larry Yang, Sergey Levine
NIPS 2018

ArXiv Website

In this paper, we present VICE, a method that generalizes inverse optimal control to learn reward functions from goal examples alone. Because VICE requires only examples of successful outcomes, rather than full demonstration trajectories, it is well suited to real-world robotics tasks, where specifying a reward function by hand is difficult.

Divide-and-Conquer Reinforcement Learning

Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, Sergey Levine
ICLR 2018

ArXiv Website Code

In this paper, we present a method for scaling model-free deep reinforcement learning to tasks with highly stochastic initial states and task distributions. We demonstrate our method on a suite of challenging sparse-reward manipulation tasks that prior work had been unable to solve.