This repository contains the source code for the paper "Look No Deeper: Recognizing Places from Opposing Viewpoints under Varying Scene Appearance using Single-View Depth Estimation" [arXiv][IEEE Xplore].
If you find this work useful, please cite it as: Garg, S., Babu V, M., Dharmasiri, T., Hausler, S., Suenderhauf, N., Kumar, S., Drummond, T., & Milford, M. (2019). Look no deeper: Recognizing places from opposing viewpoints under varying scene appearance using single-view depth estimation. In IEEE International Conference on Robotics and Automation (ICRA), 2019. IEEE.
BibTeX:
@inproceedings{garg2019look,
  title={Look No Deeper: Recognizing Places from Opposing Viewpoints under Varying Scene Appearance using Single-View Depth Estimation},
  author={Garg, Sourav and Babu V, Madhu and Dharmasiri, Thanuja and Hausler, Stephen and Suenderhauf, Niko and Kumar, Swagat and Drummond, Tom and Milford, Michael},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2019}
}
Optionally, for vis_results.ipynb:
- In seq2single/precomputed/, download the pre-computed representations (~10 GB). Please refer to seq2single/precomputed/readme.md for instructions on how to compute these representations yourself.
- [Optional] In seq2single/images/, download the images (~1 GB). These images are a subset of two different traverses from the Oxford RobotCar dataset.
(Note: The download links are hosted on Mega.nz and require you to first create a free account.)
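Before running vis_results.ipynb, it can help to confirm that the data ended up in the expected locations. The snippet below is a minimal, hypothetical sanity check (it is not part of the repository); it only assumes the two directory names listed above and reports their approximate on-disk size:

```python
# check_downloads.py -- hypothetical helper, not part of this repository.
# Verifies that the data directories expected by vis_results.ipynb exist
# and reports their approximate on-disk size.
from pathlib import Path

# Directory names taken from the download instructions above; adjust the
# paths if your checkout lives elsewhere.
EXPECTED = {
    "pre-computed representations (~10 GB)": Path("seq2single/precomputed"),
    "images (~1 GB, optional)": Path("seq2single/images"),
}

def dir_size_gb(path: Path) -> float:
    """Total size of all files under `path`, in gigabytes."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file()) / 1e9

for label, path in EXPECTED.items():
    if path.is_dir():
        print(f"[ok]      {label}: {path} (~{dir_size_gb(path):.1f} GB)")
    else:
        print(f"[missing] {label}: {path}")
```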
The code is released under the MIT License.