This code is licensed under CC BY-NC-SA 4.0. Commercial usage is not permitted. If you use this dataset or the code in a scientific publication, please cite the following paper (preprint and additional material):
@article{fischer2020event,
title={Event-Based Visual Place Recognition With Ensembles of Temporal Windows},
author={Fischer, Tobias and Milford, Michael},
journal={IEEE Robotics and Automation Letters},
volume={5},
number={4},
pages={6924--6931},
year={2020}
}
The Brisbane-Event-VPR dataset accompanies this code repository: https://zenodo.org/record/4302805
The following code is available:
video_beginning indicates the ROS timestamp within the bag file that corresponds to the first frame of the consumer camera video file.

Please note that in our paper we used manually annotated and then interpolated correspondences; here we instead provide matches based on the GPS data. The results obtained with the methods provided here will therefore differ slightly from those reported in the paper.
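To illustrate how GPS-based matches can be derived, here is a minimal sketch. The function names (haversine, match_by_gps) and the assumption that each traverse is reduced to an array of (latitude, longitude) pairs are illustrative, not taken from the repository:

```python
import numpy as np

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between points given in degrees.
    The reference arguments may be NumPy arrays (broadcasting applies)."""
    r = 6371000.0  # mean Earth radius in metres
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * np.arcsin(np.sqrt(a))

def match_by_gps(query_coords, ref_coords):
    """For each query (lat, lon), return the index of the closest
    reference position. ref_coords is an (N, 2) array of (lat, lon)."""
    matches = []
    for qlat, qlon in query_coords:
        dists = haversine(qlat, qlon, ref_coords[:, 0], ref_coords[:, 1])
        matches.append(int(np.argmin(dists)))
    return matches
```

Matching by nearest GPS position is noisier than manually annotated correspondences, which is one reason the numbers differ from the paper.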
Clone this repository: git clone https://github.com/Tobias-Fischer/ensemble-event-vpr.git
Clone https://github.com/cedric-scheerlinck/rpg_e2vid and follow the instructions to create a conda environment and download the pretrained models.
Download the Brisbane-Event-VPR dataset.
Now convert the bag files to txt/zip files that can be used by the event2video code: python convert_rosbags.py. Make sure to adjust the path to the extract_events_from_rosbag.py file from the rpg_e2vid repository.
Now do the event-to-video conversion: python reconstruct_videos.py. Make sure to adjust the path to the run_reconstruction.py file from the rpg_e2vid repository.
To export frames from the rosbags, create a conda environment that includes the ROS packages:
conda create --name brisbaneeventvpr tensorflow-gpu pynmea2 scipy matplotlib numpy tqdm jupyterlab opencv pip ros-noetic-rosbag ros-noetic-cv-bridge python=3.8 -c conda-forge -c robostack
conda activate brisbaneeventvpr
python export_frames_from_rosbag.py
Create a new conda environment with the dependencies: conda create --name brisbaneeventvpr tensorflow-gpu pynmea2 scipy matplotlib numpy tqdm jupyterlab opencv pip
conda activate brisbaneeventvpr
git clone https://github.com/QVPR/netvlad_tf_open.git
cd netvlad_tf_open && pip install -e .
Download the NetVLAD checkpoint here (1.1 GB). Extract the zip and move its contents to the checkpoints folder of the netvlad_tf_open repository.
Open the Brisbane Event VPR.ipynb and adjust the path to the dataset_folder.
You can now run the code in Brisbane Event VPR.ipynb.
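Once NetVLAD descriptors have been extracted for a query and a reference traverse, place recognition reduces to nearest-neighbour search between the descriptor sets. A minimal sketch of that matching step (the function name and the assumption of one L2-normalisable descriptor row per place are illustrative, not the notebook's exact code):

```python
import numpy as np

def best_matches(query_desc, ref_desc):
    """Cosine-similarity matching between descriptor sets.
    query_desc: (Q, D) array, ref_desc: (R, D) array.
    Returns the best reference index and its similarity for each query."""
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    r = ref_desc / np.linalg.norm(ref_desc, axis=1, keepdims=True)
    sim = q @ r.T  # (Q, R) similarity matrix
    idx = sim.argmax(axis=1)
    return idx, sim[np.arange(len(idx)), idx]
```

The full (Q, R) similarity matrix is also what precision-recall style evaluations are typically computed from, with the GPS-based matches serving as ground truth.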
Please check out this collection of related works on place recognition.
CRICOS No. 00213J