Based on nutonomy/second.pytorch
git clone https://github.com/hova88/Lidardet.git
It is recommended to use the Anaconda package manager.
First, use Anaconda to install as many of the required packages as possible:
conda create -n pointpillars python=3.7 anaconda
source activate pointpillars
conda install shapely pybind11 protobuf scikit-image numba pillow
conda install pytorch torchvision -c pytorch
conda install google-sparsehash -c bioconda
Then use pip for the packages missing from Anaconda.
pip install --upgrade pip
pip install fire
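Before moving on, it can help to confirm that the core dependencies import cleanly and that PyTorch sees the GPU. The snippet below is only a sanity-check sketch run inside the activated pointpillars environment; it is not part of the original setup.

import fire
import numba
import shapely
import skimage
import torch

# Core libraries should import without errors, and CUDA should be visible to PyTorch.
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("numba", numba.__version__, "| shapely", shapely.__version__, "| skimage", skimage.__version__)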
Finally, install SparseConvNet. This is not required for PointPillars, but the general SECOND code base expects this to be correctly configured.
git clone [email protected]:facebookresearch/SparseConvNet.git
cd SparseConvNet/
bash build.sh
# NOTE: if bash build.sh fails, try bash develop.sh instead
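If you built SparseConvNet, a two-line import check (an optional sketch, not part of the original instructions) confirms the extension compiled correctly:

import sparseconvnet as scn  # raises ImportError if build.sh/develop.sh did not succeed
print(scn.__file__)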
Additionally, you may need to install Boost geometry:
sudo apt-get install libboost-all-dev
You need to add the following environment variables for numba to your ~/.bashrc:
export NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so
export NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so
export NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice
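After sourcing ~/.bashrc, you can check that numba finds the CUDA toolkit through these variables. This is just an optional verification sketch:

from numba import cuda

# Prints the CUDA devices numba can see; reports if none are usable.
cuda.detect()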
Using the nutonomy/second.pytorch Repo to Prepare the Dataset
git clone https://github.com/nutonomy/second.pytorch.git
cd second.pytorch
Add second.pytorch/ to your PYTHONPATH:
export PYTHONPATH=path/to/Lidardet/second.pytorch
Download the KITTI dataset and arrange it in the following directory structure, creating the empty directories as needed:
└── KITTI_DATASET_ROOT
├── training <-- 7481 train data
| ├── image_2 <-- for visualization
| ├── calib
| ├── label_2
| ├── velodyne
| └── velodyne_reduced <-- empty directory
└── testing <-- 7518 test data
├── image_2 <-- for visualization
├── calib
├── velodyne
└── velodyne_reduced <-- empty directory
Note: PointPillars' protos use KITTI_DATASET_ROOT=/data/sets/kitti_second/
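The velodyne_reduced folders start out empty, so they may need to be created by hand. A small sketch, assuming KITTI_DATASET_ROOT is the /data/sets/kitti_second path from the note above (adjust it to your own layout):

import os

KITTI_DATASET_ROOT = "/data/sets/kitti_second"  # placeholder; use your dataset root
for split in ("training", "testing"):
    os.makedirs(os.path.join(KITTI_DATASET_ROOT, split, "velodyne_reduced"), exist_ok=True)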
python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT
python create_data.py create_reduced_point_cloud --data_path=KITTI_DATASET_ROOT
python create_data.py create_groundtruth_database --data_path=KITTI_DATASET_ROOT
The config file needs to be edited to point at the dataset root and the info files created above:
train_input_reader: {
  ...
  database_sampler {
    database_info_path: "/path/to/kitti_dbinfos_train.pkl"
    ...
  }
  kitti_info_path: "/path/to/kitti_infos_train.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
...
eval_input_reader: {
  ...
  kitti_info_path: "/path/to/kitti_infos_val.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
Train the Model
cd path/to/Lidardet/
export PYTHONPATH=path/to/Lidardet
python ./train.py train --config_path=./params/configs/pointpillars_kitti_car_xy16.yaml --model_dir=/path/to/logs/XXX
- If you want to train a new model, make sure "/path/to/model_dir" doesn't exist.
- If "/path/to/model_dir" does exist, training will be resumed from the last checkpoint.
- Training only supports a single GPU.
- Training uses a batch size of 2, which should fit in memory on most standard GPUs.
- On a single 2070 Ti, training xyres_16 takes approximately 30 hours for 1000 epochs.
- Detection results will be saved in model_dir/eval_results/step_xxx.
- By default, results are stored as a result.pkl file; to save them in the official KITTI label format instead, use --pickle_result=False. A sketch for inspecting result.pkl follows below.
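To inspect a result.pkl offline, it can be loaded with the standard pickle module. The path and the assumption that the file holds a list of per-frame detection dicts are illustrative, so the snippet only reports what it finds:

import pickle

# Substitute your actual model_dir and step directory.
with open("/path/to/model_dir/eval_results/step_xxx/result.pkl", "rb") as f:
    detections = pickle.load(f)

print(type(detections))
# Peek at the first entry to see which fields the evaluation stored.
if isinstance(detections, (list, tuple)) and detections:
    first = detections[0]
    print(first.keys() if isinstance(first, dict) else first)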