Tutorial for Torch Points3D library
Torch Points3D is a modern library for 3D vision learning on point cloud data. It includes built-in implementations of many tasks (e.g. semantic segmentation, panoptic segmentation, 3D object detection, and scene classification). Since it is still in early development there are still bugs, so please submit an issue when you run into a problem. This is an unofficial tutorial on how to use the library on a few datasets; I will update it as soon as possible to cover all the tasks and datasets. Hope it helps you.
Train VoteNet with torch_points3d
Running the following command triggers the training:
```bash
cd torch-points3d
python train.py task=object_detection models=object_detection/votenet2 \
       model_name=VoteNetPaper data=object_detection/scannet-fixed
```
Here is a short explanation of what all the arguments mean:
| argument | meaning | reference |
|---|---|---|
| `task` | the point cloud training task (referred to as `<task-name>` below): `segmentation`, `registration`, `panoptic`, or `object_detection` | `./conf/task/*.yaml`, where `*` is the task name |
| `models` | the model family for the task, given as `<task-name>/<method-name>`; for example, `object_detection/votenet2` is the `votenet2` model family inside the `object_detection` task | see `./conf/models/<task-name>/<method-name>.yaml` for the detailed configuration |
| `model_name` | the exact name of the model to train | listed in `./conf/models/<task-name>/<method-name>.yaml` |
| `data` | the dataset configuration | `./conf/data/<task-name>/<dataset>.yaml` |
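As a further illustration of how these arguments compose, a semantic segmentation run could look like the command below. The specific model and dataset names are assumptions on my side and may differ between versions, so check the yaml files under `./conf/` before running it:

```bash
python train.py task=segmentation models=segmentation/pointnet2 \
       model_name=pointnet2_charlesssg data=segmentation/shapenet-fixed
```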
::warning:: Please use the `-fixed` dataset configuration for non-sparse-conv models!
Non-sparse-conv methods require a fixed number of points per sample in order to align and stack the batches.
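To make the constraint concrete, here is a small, library-independent sketch in plain PyTorch (not torch-points3d internals): PointNet-style backbones consume dense batches of shape (B, N, 3), which can only be built when every cloud has the same number of points N, whereas sparse-conv models work on concatenated coordinates with a batch index and therefore tolerate varying sizes.

```python
import torch

# Minimal illustration (not torch-points3d code): dense batching used by
# PointNet-style backbones needs every sample to have the same point count.
fixed = [torch.rand(20000, 3) for _ in range(4)]   # "-fixed" data: 20k points each
batch = torch.stack(fixed)                         # works -> shape (4, 20000, 3)
print(batch.shape)

variable = [torch.rand(n, 3) for n in (18000, 21000)]  # raw scans: varying sizes
try:
    torch.stack(variable)                          # fails: tensor sizes differ
except RuntimeError as err:
    print("cannot stack variable-size clouds:", err)
```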
Issues encountered during VoteNet2 training
- [x] The dataset must match the method: use the `-fixed` dataset config for PointNet backbone models.
- [x] Ran into a bug when using multi-thread preprocessing; setting `process_workers=1` in `segmentation/scannet.py` solved the problem (see the sketch after this list).
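For reference, the workaround amounts to hard-coding a single preprocessing worker. The snippet below is a hypothetical sketch of the change; the exact variable and surrounding code in `segmentation/scannet.py` differ between versions, so treat it as an illustration rather than the literal upstream edit:

```python
# Hypothetical sketch (not the literal upstream code): inside
# torch_points3d/datasets/segmentation/scannet.py, force single-process
# preprocessing instead of a pool of workers to avoid the bug above.
process_workers = 1  # was > 1; multi-worker preprocessing crashed during setup
```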