Datasets for the 3D Semantic Segmentation to the Open World (3DOW) Challenge


Overview

Instructions

    1. Develop your model on the SubKITTI dataset.
    2. Use your optimized model to obtain results on the AugKITTI dataset.
    3. Submit your results in the required format on the 3DOW Challenge page.

    Label definition:

    0-unknown, 1-car, 2-sign, 3-trunk, 4-plants, 5-pole, 6-fence, 7-building, 8-bike, 9-road.
    In the training dataset (SubKITTI), all points carry labels 0-9.
    The test dataset (AugKITTI) additionally contains unseen (out-of-distribution) objects that do not belong to any of the defined classes.


Download

Training dataset:

  • Name: SubKITTI
  • Description: A subset of the SemanticKITTI data.

Test dataset:

  • Name: AugKITTI
  • Description: An augmented dataset built from SemanticKITTI and SemanticPOSS.

Evaluation program: evaluate.py


Evaluation

    For 3D semantic segmentation tasks, we evaluate the outputs by:

  • accuracy: The accuracy, i.e. the proportion of observations classified correctly by your model, is evaluated over the ten categories as a whole.
  • IoU: For a given category, an observation belonging to it is considered positive, while all others are considered negative. TP (true positive) is the number of positive observations that the model correctly predicts as positive. FP (false positive) is the number of negative observations that the model incorrectly predicts as positive. FN (false negative) is the number of positive observations that the model incorrectly predicts as negative.
    IoU (Intersection over Union) = TP / (TP + FP + FN)
    The IoU of each of the nine object categories is evaluated separately.

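As a concrete reference, the accuracy and per-class IoU defined above can be sketched in NumPy as follows. This is an illustrative sketch, not the official evaluate.py; the function and argument names are assumptions:

```python
import numpy as np

def evaluate_segmentation(pred, gt, num_classes=10):
    """Overall accuracy and per-class IoU for integer label arrays.

    pred, gt: per-point predicted and ground-truth labels in [0, num_classes).
    IoU is reported for the nine object categories (labels 1-9); a category
    absent from both pred and gt gets NaN.
    """
    pred = np.asarray(pred)
    gt = np.asarray(gt)

    # Accuracy over all ten categories as a whole.
    accuracy = float(np.mean(pred == gt))

    ious = {}
    for c in range(1, num_classes):
        tp = np.sum((pred == c) & (gt == c))   # correctly predicted positives
        fp = np.sum((pred == c) & (gt != c))   # negatives predicted positive
        fn = np.sum((pred != c) & (gt == c))   # positives predicted negative
        denom = tp + fp + fn
        ious[c] = tp / denom if denom > 0 else float('nan')
    return accuracy, ious
```

For example, with `pred = [1, 1, 2, 0]` and `gt = [1, 2, 2, 0]`, the accuracy is 0.75 and the IoU of classes 1 and 2 is 0.5 each.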
    For OOD detection, we evaluate the confidence score by:

  • AUROC (area under the ROC curve): Given a confidence threshold, observations can be classified by your model as either ID (positive) or OOD (negative). TPR (true positive rate) is the proportion of positive observations that are classified as positive; FPR (false positive rate) is the proportion of negative observations that are classified as positive. A ROC (Receiver Operating Characteristic) curve is created by plotting the (FPR, TPR) pairs for every possible decision threshold of the model.

    We also provide an AUROC for each predicted class, which reflects the reliability of the model when it assigns input data to that class. This per-class AUROC is evaluated over all data predicted into the class. There are two special cases:
    1. If none of the ID data is predicted into the class, the AUROC is 0 (everything predicted into the class is OOD).
    2. If none of the OOD data is predicted into the class, the AUROC is 1 (everything predicted into the class is ID).
    Confidence scores range from 0 to 1, and thresholds are sampled uniformly at intervals of 0.01.
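The AUROC computation, including the 0.01 threshold sampling and the two special cases above, can be sketched as follows. This is an illustrative sketch, not the official evaluate.py:

```python
import numpy as np

def auroc(confidence, is_id):
    """AUROC over thresholds sampled uniformly at 0.01 intervals in [0, 1].

    confidence: per-point scores in [0, 1]; is_id: True for ID (positive) points.
    """
    confidence = np.asarray(confidence, dtype=float)
    is_id = np.asarray(is_id, dtype=bool)
    n_pos = int(is_id.sum())
    n_neg = int((~is_id).sum())
    if n_pos == 0:   # special case 1: no ID data in this class -> AUROC = 0
        return 0.0
    if n_neg == 0:   # special case 2: no OOD data in this class -> AUROC = 1
        return 1.0

    points = []
    for t in np.linspace(0.0, 1.0, 101):        # thresholds 0.00, 0.01, ..., 1.00
        pred_pos = confidence >= t              # predicted ID at this threshold
        tpr = np.sum(pred_pos & is_id) / n_pos
        fpr = np.sum(pred_pos & ~is_id) / n_neg
        points.append((fpr, tpr))
    points.sort()                               # ascending FPR for integration

    # Trapezoidal area under the (FPR, TPR) curve.
    return float(sum((x1 - x0) * (y1 + y0) / 2
                     for (x0, y0), (x1, y1) in zip(points, points[1:])))
```

With perfectly separated scores, e.g. ID points at confidence 0.9 and OOD points at 0.1, the function returns 1.0.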


Cite

      @inproceedings{behley2019arxiv,
      author = {J. Behley and M. Garbade and A. Milioto and J. Quenzel and S. Behnke and C. Stachniss and J. Gall},
      title = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}},
      booktitle = {Proc. of the IEEE International Conf. on Computer Vision (ICCV)},
      year = {2019}
      }
      @inproceedings{pan2020semanticposs,
      author={Pan, Yancheng and Gao, Biao and Mei, Jilin and Geng, Sibo and Li, Chengkun and Zhao, Huijing},
      title={SemanticPOSS: A point cloud dataset with large quantity of dynamic instances},
      booktitle={2020 IEEE Intelligent Vehicles Symposium (IV)},
      pages={687--693}, year={2020}
      }
      @inproceedings{geiger2012cvpr,
      author = {A. Geiger and P. Lenz and R. Urtasun},
      title = {{Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite}},
      booktitle = {Proc.~of the IEEE Conf.~on Computer Vision and Pattern Recognition (CVPR)},
      pages = {3354--3361},
      year = {2012}
      }

License

      These datasets are released under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.