From a095f4575334d4117045609710a3afae3137a4ba Mon Sep 17 00:00:00 2001
From: Thomas Roddick
Date: Thu, 29 Aug 2019 13:01:36 +0100
Subject: [PATCH] Update readme.md with inference instructions

---
 readme.md | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/readme.md b/readme.md
index 077e33c..0a99b23 100644
--- a/readme.md
+++ b/readme.md
@@ -3,13 +3,19 @@
 ![OFTNet-Architecture](https://github.com/tom-roddick/oft/raw/master/architecture.png "OFTNet-Architecture")
 
 This is a PyTorch implementation of the OFTNet network from the paper [Orthographic Feature Transform for Monocular 3D Object Detection](https://arxiv.org/abs/1811.08188). The code currently supports training the network from scratch on the KITTI dataset; intermediate results can be visualised using TensorBoard. The current version of the code is intended primarily as a reference, and for now does not support decoding the network outputs into bounding boxes via non-maximum suppression. This will be added in a future update. Note also that there are some slight implementation differences from the original code used in the paper.
-## Usage
+## Training
 The training script can be run by calling `train.py` with the name of the experiment as a required positional argument.
 ```
 python train.py name-of-experiment --gpu 0
 ```
 By default, data will be read from `data/kitti/objects` and model checkpoints will be saved to `experiments`. The model is trained using the KITTI 3D object detection benchmark, which can be downloaded from [here](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d). See `train.py` for a full list of training options.
 
+## Inference
+To decode the network predictions and visualise the resulting bounding boxes, run the `infer.py` script with the path to the model checkpoint you wish to visualise:
+```
+python infer.py /path/to/checkpoint.pth.gz --gpu 0
+```
+
 ## Citation
 If you find this work useful please cite the paper using the citation below.
 ```
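
A patch in this mailbox format can be applied to a local clone with `git am`, which preserves the original author and commit message. A minimal sketch, assuming the patch above has been saved to a file (the filename shown follows the `git format-patch` naming convention and is an assumption, not part of the patch itself):

```shell
# Clone the repository and enter it
git clone https://github.com/tom-roddick/oft.git
cd oft

# Apply the mailbox-format patch; the filename here is hypothetical --
# use whatever name you saved the patch under
git am 0001-Update-readme.md-with-inference-instructions.patch
```

If the patch does not apply cleanly (for example, because the target branch has diverged), `git am --abort` restores the previous state, and `git apply --check <file>` can be used beforehand to test whether it applies without committing anything.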