Our paper studies the theoretical and practical utility of learning memory-based policies (policies whose behavior can change over the course of an episode) in offline reinforcement learning. Using Bayesian tools, we prove that this dependence on memory is necessary for optimality and discuss how such memory-based "adaptive" policies must be trained. As a first step towards learning optimal adaptive strategies, we propose an ensemble-based offline RL algorithm for learning adaptive policies and demonstrate their favorable properties on challenging image-based offline RL problems.
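To make the ensemble idea concrete, the following is a minimal sketch of one way an ensemble can yield an adaptive policy: maintain a belief over ensemble members and sharpen it from transitions observed during the episode. This is an illustration of the general recipe, not the paper's algorithm; the discrete-action assumption, the TD-consistency score, and all names (`AdaptiveEnsemblePolicy`, `td_consistency`) are hypothetical.

```python
# Minimal sketch of an ensemble-based adaptive policy (illustration only;
# not the paper's method). Assumptions: a discrete action set, an ensemble
# of K scalar Q-functions q(s, a), and a softmax belief over members
# updated from each observed transition via a TD-consistency score.
import numpy as np


def td_consistency(q, s, a, r, s_next, actions, gamma=0.99):
    """Negative squared TD error: a crude log-likelihood proxy for how well
    ensemble member q explains the observed transition."""
    target = r + gamma * max(q(s_next, a2) for a2 in actions)
    return -(q(s, a) - target) ** 2


class AdaptiveEnsemblePolicy:
    def __init__(self, q_ensemble, actions):
        self.q_ensemble = q_ensemble            # list of K callables q(s, a) -> float
        self.actions = actions
        self.log_w = np.zeros(len(q_ensemble))  # uniform prior over members

    def act(self, s):
        # Belief-weighted greedy action: each member votes on action values,
        # weighted by its posterior probability given the history so far.
        w = np.exp(self.log_w - self.log_w.max())
        w /= w.sum()
        scores = [float(sum(wk * q(s, a) for wk, q in zip(w, self.q_ensemble)))
                  for a in self.actions]
        return self.actions[int(np.argmax(scores))]

    def update(self, s, a, r, s_next):
        # Bayes-style belief update: members consistent with the observed
        # transition gain weight, so the policy adapts within the episode.
        for k, q in enumerate(self.q_ensemble):
            self.log_w[k] += td_consistency(q, s, a, r, s_next, self.actions)
```

Calling `update` after every environment step and `act` at each state makes the induced behavior history-dependent even though each ensemble member is a fixed Markovian value function.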
Offline RL algorithms must account for the fact that the dataset they are provided may leave many facets of the environment unknown. The most common way to approach this challenge is to employ pessimistic or conservative methods, which avoid behaviors that are too dissimilar from those in the training dataset. However, relying exclusively on conservatism has drawbacks: performance is sensitive to the exact degree of conservatism, and conservative objectives can recover highly suboptimal policies. In this work, we propose that offline RL methods should instead be adaptive in the presence of uncertainty. We show that acting optimally in offline RL in a Bayesian sense involves solving an implicit POMDP. As a result, optimal policies for offline RL must be adaptive, depending not just on the current state but on all the transitions seen so far during evaluation. We present a model-free algorithm for approximating this optimal adaptive policy, and demonstrate the efficacy of learning such adaptive policies on offline RL benchmarks.
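The Bayesian argument can be summarized as follows; the notation below is chosen here for illustration and is not taken verbatim from the paper.

```latex
% The dataset \mathcal{D} induces a posterior P(M \mid \mathcal{D}) over
% MDPs, and the Bayes-optimal policy maximizes return averaged over it:
\[
  J(\pi) \;=\; \mathbb{E}_{M \sim P(M \mid \mathcal{D})}\;
  \mathbb{E}_{\tau \sim (\pi,\, M)}\!\left[\,\sum_{t=0}^{\infty} \gamma^{t} r_t\right].
\]
% Because the identity of M is never observed, this objective defines a
% POMDP whose hidden state includes M itself, so the optimal policy must
% condition on the full evaluation history rather than on s_t alone:
\[
  a_t \sim \pi^{\star}(\,\cdot \mid h_t\,), \qquad
  h_t = (s_0, a_0, r_0, \ldots, s_t).
\]
```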