This document has instructions for running Mask R-CNN inference.
Download the COCO 2017 dataset using the `download_dataset.sh` script. Export the `DATASET_DIR` environment variable to specify the directory where the dataset will be downloaded; this environment variable will be used again when running the quickstart scripts.
```bash
cd <path to your clone of the model zoo>/quickstart/object_detection/pytorch/maskrcnn/inference/cpu
export DATASET_DIR=<directory where the dataset will be saved>
bash download_dataset.sh
```
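As a quick sanity check once the download finishes, confirm the expected COCO 2017 layout is in place. This is a minimal sketch, assuming the script leaves the standard `annotations/` and `val2017/` directories under `DATASET_DIR`:

```bash
# Verify the COCO 2017 validation annotations and images are present
ls $DATASET_DIR/annotations/instances_val2017.json
ls $DATASET_DIR/val2017 | head -3
```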
| DataType | Mode | Throughput | Latency | Accuracy |
|----------|------|------------|---------|----------|
| FP32 | imperative | `bash inference_throughput.sh fp32 imperative` | `bash inference_realtime.sh fp32 imperative` | `bash accuracy.sh fp32 imperative` |
| BF16 | imperative | `bash inference_throughput.sh bf16 imperative` | `bash inference_realtime.sh bf16 imperative` | `bash accuracy.sh bf16 imperative` |
| BF32 | imperative | `bash inference_throughput.sh bf32 imperative` | `bash inference_realtime.sh bf32 imperative` | `bash accuracy.sh bf32 imperative` |
| FP32 | jit | `bash inference_throughput.sh fp32 jit` | `bash inference_realtime.sh fp32 jit` | `bash accuracy.sh fp32 jit` |
| BF16 | jit | `bash inference_throughput.sh bf16 jit` | `bash inference_realtime.sh bf16 jit` | `bash accuracy.sh bf16 jit` |
| BF32 | jit | `bash inference_throughput.sh bf32 jit` | `bash inference_realtime.sh bf32 jit` | `bash accuracy.sh bf32 jit` |
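Each cell in the table is the quickstart command for that precision and mode. For example, to measure BF16 throughput in `jit` mode:

```bash
bash inference_throughput.sh bf16 jit
```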
Follow the instructions below to set up your bare metal environment on either Linux or Windows. Once the setup is done, the Model Zoo can be used to run a quickstart script. Ensure that you have a clone of the Model Zoo GitHub repository:
```bash
git clone https://github.com/IntelAI/models.git
```
Follow the link to install Miniconda and build PyTorch, IPEX, TorchVision, and Jemalloc.
- Install dependencies:

  ```bash
  pip install yacs opencv-python pycocotools defusedxml cityscapesscripts
  conda install intel-openmp
  ```
- Install the model:

  ```bash
  cd models/object_detection/pytorch/maskrcnn/maskrcnn-benchmark/
  python setup.py develop
  ```
- Download the pretrained model:

  ```bash
  cd <path to your clone of the model zoo>/quickstart/object_detection/pytorch/maskrcnn/inference/cpu
  export CHECKPOINT_DIR=<directory where the pretrained model will be saved>
  bash download_model.sh
  ```
- Set Jemalloc preload for better performance:

  After the Jemalloc setup, set the following environment variables (a quick way to verify the preload is sketched after this list):

  ```bash
  export LD_PRELOAD="<path to the jemalloc directory>/lib/libjemalloc.so":$LD_PRELOAD
  export MALLOC_CONF="oversize_threshold:1,background_thread:true,metadata_thp:auto,dirty_decay_ms:9000000000,muzzy_decay_ms:9000000000"
  ```
- Set IOMP preload for better performance:

  IOMP should be installed in your conda environment. Set the following environment variable:

  ```bash
  export LD_PRELOAD=<path to the intel-openmp directory>/lib/libiomp5.so:$LD_PRELOAD
  ```
- Set the following environment variable to use AMX if you are running on Sapphire Rapids (SPR). A quick CPU-support check is sketched after this list.

  ```bash
  export DNNL_MAX_CPU_ISA=AVX512_CORE_AMX
  ```
- Run the model (see the loop sketch after this list for sweeping every precision/mode combination):

  ```bash
  cd models

  # Set environment variables
  export DATASET_DIR=<path to the COCO dataset>
  export CHECKPOINT_DIR=<path to the downloaded pretrained model>
  export OUTPUT_DIR=<path to an output directory>
  export MODE=<set to 'jit' or 'imperative'>

  # Run a quickstart script (for example, FP32 batch inference with jit)
  bash quickstart/object_detection/pytorch/maskrcnn/inference/cpu/inference_throughput.sh fp32 jit
  ```
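To confirm the Jemalloc preload from the setup list above is taking effect, jemalloc can print an allocator statistics report at process exit. This is a minimal sketch, assuming Linux and the same library path used during setup:

```bash
# Minimal sanity check: with jemalloc preloaded and stats_print enabled,
# a statistics report is written to stderr when the process exits.
# (Assumes the jemalloc path from the setup step above.)
LD_PRELOAD="<path to the jemalloc directory>/lib/libjemalloc.so" \
MALLOC_CONF="stats_print:true" \
python -c "pass"
```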
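Before forcing `DNNL_MAX_CPU_ISA=AVX512_CORE_AMX`, it is worth confirming that the CPU actually reports AMX support. On Linux, AMX shows up as `amx_tile`, `amx_bf16`, and `amx_int8` flags in `/proc/cpuinfo`:

```bash
# List any AMX feature flags the kernel reports; prints nothing on CPUs
# without AMX (e.g., pre-Sapphire Rapids parts).
grep -o 'amx[_a-z0-9]*' /proc/cpuinfo | sort -u
```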
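To sweep every precision and mode combination from the table above rather than running one script at a time, the quickstart script can be wrapped in a small loop. This is a sketch, assuming `DATASET_DIR`, `CHECKPOINT_DIR`, and `OUTPUT_DIR` are already exported as in the run step:

```bash
# Run the throughput quickstart for each precision/mode pair in the table above.
cd models
for precision in fp32 bf16 bf32; do
  for mode in imperative jit; do
    MODE=$mode bash quickstart/object_detection/pytorch/maskrcnn/inference/cpu/inference_throughput.sh $precision $mode
  done
done
```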
If not already set up, please follow the instructions for environment setup on Windows.
- Install dependencies:

  ```bash
  pip install yacs opencv-python pycocotools defusedxml cityscapesscripts
  conda install intel-openmp
  ```
- Using Windows CMD.exe, run the following (a worked example with sample paths follows this list):

  ```cmd
  cd models

  :: Set environment variables
  set DATASET_DIR=<path to the COCO dataset>
  set CHECKPOINT_DIR=<path to the downloaded pretrained model>
  set OUTPUT_DIR=<path to the directory where log files will be written>
  set MODE=<set to 'jit' or 'imperative'>

  :: Run a quickstart script for FP32 precision (inference_realtime, inference_throughput, or accuracy)
  bash quickstart\object_detection\pytorch\maskrcnn\inference\cpu\batch_inference_baremetal.sh fp32 jit
  ```
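For concreteness, here is the same sequence with hypothetical example paths (the directory values are placeholders, not defaults; the script name and arguments are taken from the step above):

```cmd
:: Hypothetical paths for illustration; adjust them for your machine.
cd models
set DATASET_DIR=C:\data\coco
set CHECKPOINT_DIR=C:\models\maskrcnn
set OUTPUT_DIR=C:\logs\maskrcnn
set MODE=jit
bash quickstart\object_detection\pytorch\maskrcnn\inference\cpu\batch_inference_baremetal.sh fp32 jit
```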