Movidius, Raspberry Pi, and Tensorflow

Train a Tensorflow Keras deep neural network and conduct image classification on an edge device (Raspberry Pi) with hardware acceleration from the Intel Neural Compute Stick 2.
By using Docker, training and conversion of models can be done with simple `make` commands.
Download the repository
git clone https://github.com/byarbrough/movrasten.git && cd movrasten
Place your images into the data/train folder. Each class of images should be in its own folder. For example, running `tree` on a three-class dataset should yield:
data
├── test
│   ├── 01
│   ├── 02
│   └── 03
└── train
    ├── 01
    ├── 02
    └── 03
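Before training, it can help to sanity-check that the layout matches this structure. A minimal sketch (the `list_classes` helper is hypothetical, not part of the repo):

```python
import os

def list_classes(train_dir):
    """Return the sorted class folder names found under train_dir."""
    return sorted(
        d for d in os.listdir(train_dir)
        if os.path.isdir(os.path.join(train_dir, d))
    )

# Only runs if the expected data directory is actually present.
if os.path.isdir("data/train"):
    print("classes:", list_classes("data/train"))
```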
The Makefile does not yet implement automated testing, so it is fine if data/test is empty.
Also place one or more images directly into the data/infer folder, as this is what will be used for the demo.
Development was done with BelgiumTSC_Training (171.3 MB) and BelgiumTSC_Testing (76.5 MB) from the Belgium Traffic Sign Dataset, but most images should work.
To build the Docker image, train a model, convert that model to 32-bit OpenVINO format, and run classification on the images in data/infer/*, simply call

make all
These stages can also be run independently. To show all options, use:

make help
Keep in mind that a container must first be running (`make run`) before it can be used for training or conversion.
The Keras models are saved in models/ as both .h5 and .pb. The models in OpenVINO format are saved in models/openvino. If using `make convert_16` for inference on the Raspberry Pi, make sure to copy all three files (.bin, .mapping, .xml) to the Pi.
The minimum version is 2019.2.242. The latest version can be found at the Intel® Open Source Technology Center.
Follow Install OpenVINO™ toolkit for Raspbian* OS, or use these consolidated steps:
cd ~/Downloads/
wget https://download.01.org/opencv/2019/openvinotoolkit/R2/l_openvino_toolkit_runtime_raspbian_p_2019.2.242.tgz
sudo mkdir -p /opt/intel/openvino
sudo tar -xf l_openvino_toolkit_runtime_raspbian_p_2019.2.242.tgz --strip 1 -C /opt/intel/openvino
sudo apt install cmake
source /opt/intel/openvino/bin/setupvars.sh
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
You should see the output:

[setupvars.sh] OpenVINO environment initialized
Then set up the USB rules:
sudo usermod -a -G users "$(whoami)"
Log out and back in, then run:
sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh
At this point, the following should execute cleanly:
python3 -c "from openvino.inference_engine import IENetwork, IEPlugin"
Once the files have been copied from models/openvino to the Raspberry Pi, conduct inference on a single image with:
python classify/classification_sample.py -m <path to model>.xml -nt 5 -i <path to test image> -d MYRIAD
This is a little complicated because there are several things that need to be installed, potentially on multiple devices.
Get the code
git clone https://github.com/byarbrough/movrasten.git && cd movrasten
Install the Python requirements
pip install -r requirements.txt
This may require the sudo or --user options.
Intel's OpenVINO Toolkit is a unified AI framework for computer vision designed to work with CPU, GPU, NCS, and FPGA through a single API. Neat!
Depending on the version of the Neural Compute Stick in use, the installation changes. I tested on the NCS 2, which requires OpenVINO, not the SDK. According to Intel Support, the version can be determined with
lsusb | grep 03e7
where 2150 corresponds to version 1 and 2485 means you are using an NCS 2.
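That mapping is small enough to capture in a couple of lines. A hypothetical helper, based only on the two product IDs mentioned above (03e7 is Intel's USB vendor ID):

```python
# Product IDs under Intel's 03e7 vendor ID, as printed by lsusb.
NCS_PRODUCT_IDS = {
    "2150": "NCS 1",
    "2485": "NCS 2",
}

def ncs_version(product_id):
    """Map a USB product ID string to a Neural Compute Stick version."""
    return NCS_PRODUCT_IDS.get(product_id, "unknown device")

print(ncs_version("2485"))  # NCS 2
```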
This guide follows Get Started with Intel NCS 2
- Download OpenVINO from https://software.intel.com/en-us/openvino-toolkit/choose-download. This has been tested with version 2019.3.376
- Follow the instructions to install it. For Linux:
cd ~/Downloads
tar xvf l_openvino_toolkit_<VERSION>.tgz
cd l_openvino_toolkit_<VERSION>
sudo -E ./install_openvino_dependencies.sh
./install_GUI.sh
# Follow the GUI to install
Install dependencies
cd /opt/intel/openvino/install_dependencies
sudo -E ./install_openvino_dependencies.sh
For all others, see Intel's getting started guide.
- Set the environment variable
source /opt/intel/openvino/bin/setupvars.sh
This will need to be done every time a terminal is opened. I like to create a symbolic link within movrasten or its parent directory with `ln -s /opt/intel/openvino/bin/setupvars.sh env-ncs` so that I can call `source env-ncs` and don't have to remember where setupvars.sh is. Alternatively, .bashrc can be modified to include that line.
- Install the USB driver
cd /opt/intel/openvino/install_dependencies
./install_NCS_udev_rules.sh
Install the prerequisites
cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites/
This can be done in a virtual environment with the optional venv tag.
./install_prerequisites.sh venv tf
- Demo! Make sure the NCS is plugged in!
cd /opt/intel/openvino/deployment_tools/demo
./demo_squeezenet_download_convert_run.sh -d MYRIAD
You should see `Demo completed successfully`.
- Train the model
- Freeze the model
- Convert the model to OpenVINO format
- Run inference on the model
A model can be trained with basic/tr_image.py:
cd basic
python tr_image.py <path to training directory>
This will output an .h5 file in the Keras format and a frozen .pb file.
If you trained with the provided script then you already have a frozen .pb file. If you are using a pretrained model, consider freezing it.
This requires the prerequisites to be installed.
Convert a frozen Tensorflow model:
python /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model model.pb -b 1 --data_type FP16 --scale 255 --reverse_input_channels
`-b` is the batch size. The --scale value must match whatever rescaling was done in training.
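To see why these must agree: training divides each pixel by 255 to map [0, 255] into [0, 1], and the Model Optimizer's `--scale N` bakes a divide-by-N into the converted model. A minimal sketch (function names are illustrative, not from the repo; the exact preprocessing in tr_image.py may differ):

```python
def training_preprocess(pixel):
    """Rescaling applied during training: map [0, 255] to [0, 1]."""
    return pixel / 255.0

def mo_scale(pixel, scale=255.0):
    """What the Model Optimizer bakes in with --scale: divide by scale."""
    return pixel / scale

# Both transforms must feed the network identical values.
print(training_preprocess(128) == mo_scale(128))  # True
```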
Input channels are reversed (BGR), which caused me a lot of suffering until I saw it somewhere in the docs!
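The mismatch exists because OpenCV loads images as BGR while Keras/PIL pipelines train on RGB; `--reverse_input_channels` makes the converted model do the swap itself. The swap is just reversing the channel order, as this pure-Python sketch with a single made-up pixel shows:

```python
def reverse_channels(pixel):
    """Swap RGB <-> BGR for one (r, g, b) pixel tuple."""
    return pixel[::-1]

rgb = (10, 20, 30)  # red, green, blue
print(reverse_channels(rgb))  # (30, 20, 10)
```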
Inference requires both the .bin and .xml files to be in the same directory.
python classify/classification_sample.py -m <path to model>.xml -nt 5 -i <path to test image> -d MYRIAD
On the desktop, inference should work with either MYRIAD or CPU for the -d option.
Copy both the .bin and .xml files from the desktop to the Pi. Then the same inference command will work!
On the Raspberry Pi inference will only work for MYRIAD.