Differentiable Spatial Planning using Transformers

Devendra Singh Chaplot (FAIR)
Deepak Pathak (CMU)
Jitendra Malik (UC Berkeley/FAIR)
Published at ICML, 2021

[Paper]
[Talk]
[Slides]



We consider the problem of spatial path planning. In contrast to the classical solutions which optimize a new plan from scratch and assume access to the full map with ground truth obstacle locations, we learn a planner from the data in a differentiable manner that allows us to leverage statistical regularities from past data. We propose Spatial Planning Transformers (SPT), which given an obstacle map learns to generate actions by planning over long-range spatial dependencies, unlike prior data-driven planners that propagate information locally via convolutional structure in an iterative manner. In the setting where the ground truth map is not known to the agent, we leverage pre-trained SPTs in an end-to-end framework that has the structure of mapper and planner built into it which allows seamless generalization to out-of-distribution maps and goals. SPTs outperform prior state-of-the-art differentiable planners across all the setups for both manipulation and navigation tasks, leading to an absolute improvement of 7-19%.


Spatial Planning Transformers

The Spatial Planning Transformer (SPT) consists of three components: an Encoder E that encodes the input, a Transformer network T responsible for planning, and a Decoder D that decodes the Transformer's output into action distances.
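The Encoder-Transformer-Decoder pipeline can be sketched in PyTorch as below. This is a minimal illustration, not the paper's implementation: the per-cell embedding via a 1x1 convolution, the learned positional embeddings, the layer sizes, and the single-linear-layer decoder are all simplifying assumptions; the key idea it shows is that self-attention runs over all map cells at once, so information propagates globally in one step rather than locally as in convolutional planners.

```python
import torch
import torch.nn as nn

class SPTSketch(nn.Module):
    """Minimal sketch of the SPT pipeline: Encoder E -> Transformer T -> Decoder D.
    All layer sizes are illustrative, not the paper's."""
    def __init__(self, map_size=15, emb_dim=64, n_heads=4, n_layers=2):
        super().__init__()
        self.map_size = map_size
        # E: embed each cell of the 2-channel input (obstacle map + goal map)
        self.encoder = nn.Conv2d(2, emb_dim, kernel_size=1)
        # learned positional embedding, one vector per spatial cell (assumption)
        self.pos = nn.Parameter(torch.zeros(1, map_size * map_size, emb_dim))
        # T: self-attention over all cells captures long-range spatial dependencies
        layer = nn.TransformerEncoderLayer(d_model=emb_dim, nhead=n_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        # D: per-cell head regressing the predicted action distance to the goal
        self.decoder = nn.Linear(emb_dim, 1)

    def forward(self, obstacle_map, goal_map):
        # obstacle_map, goal_map: (B, M, M) float tensors
        x = torch.stack([obstacle_map, goal_map], dim=1)     # (B, 2, M, M)
        tokens = self.encoder(x).flatten(2).transpose(1, 2)  # (B, M*M, emb)
        tokens = self.transformer(tokens + self.pos)         # global attention
        dist = self.decoder(tokens)                          # (B, M*M, 1)
        return dist.view(-1, self.map_size, self.map_size)   # (B, M, M)

model = SPTSketch()
obstacles = torch.zeros(1, 15, 15)
goal = torch.zeros(1, 15, 15)
goal[0, 7, 7] = 1.0
distances = model(obstacles, goal)  # (1, 15, 15) predicted action distances
```

With the full map known, the model is trained end-to-end by supervising these per-cell distance predictions; in the unknown-map setting the paper plugs a pre-trained SPT in after a learned mapper module.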




Short Presentation



Paper and Bibtex

[Paper]

Citation
 
Chaplot, D.S., Pathak, D., and Malik, J. 2021. Differentiable Spatial Planning using Transformers. In ICML.

[Bibtex]
@inproceedings{chaplot2020differentiable,
  title={Differentiable Spatial Planning using Transformers},
  author={Chaplot, Devendra Singh and Pathak, Deepak and 
          Malik, Jitendra},
  booktitle={ICML},
  year={2021}}
                

