Training and Inference of Tensorflow 2 Object Detection API on OpenVino with EML Tools

This folder contains a template project for running inference with trained, exported TF2ODA models on OpenVino. The following procedure provides instructions to set up and run one or more networks and to extract the evaluation results of the runs. All evaluations are compatible with the EML Tools.

Setup

Prerequisites

  1. Set up the task spooler on the target device. Instructions can be found here: https://github.com/embedded-machine-learning/scripts-and-guides/blob/main/guides/task_spooler_manual.md (a quick check is sketched after this list)
  2. Install OpenVino. In this implementation, we used openvino_2021.4.582.
  3. Set up sendmail according to this guide: https://github.com/embedded-machine-learning/scripts-and-guides/blob/main/guides/SENDMAIL.md
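
To quickly verify that the task spooler from step 1 is reachable, a check like the following can be used. This is only a sketch; the socket path is the one used later in this guide and may differ on your device.

# Point the task spooler client to the shared socket used later in this guide
export TS_SOCKET="/srv/ts_socket/GPU.socket"
export TS_TMPDIR=~/logs

# On some distributions the client binary is called tsp instead of ts
ts -l                           # list the queue; an empty list means the server is reachable
ts echo "task spooler works"    # submit a trivial test job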

Dataset

For validating the tool chain, download the small validation set from kaggle: https://www.kaggle.com/alexanderwendt/oxford-pets-cleaned-for-eml-tools

It contains two small sets that are used for training and inference validation, organized in a structure that is compatible with the EML Tools. Put it into a folder structure like /srv/cdl-eml/datasets/dataset-oxford-pets-cleaned/
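
If you have the Kaggle CLI configured, the download and extraction can look roughly like this (a sketch; the target path is the example location from above and may differ on your system):

# Download the cleaned Oxford Pets set (requires a configured Kaggle API token)
kaggle datasets download -d alexanderwendt/oxford-pets-cleaned-for-eml-tools

# Unpack it into the shared dataset folder (the archive name may differ)
mkdir -p /srv/cdl-eml/datasets/dataset-oxford-pets-cleaned
unzip oxford-pets-cleaned-for-eml-tools.zip -d /srv/cdl-eml/datasets/dataset-oxford-pets-cleaned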

Generate the EML Tools Directory Structure and Set Up the TF2ODA and OpenVino Environments

The following steps are only necessary if you set up the EML Tools for the first time on a device.

  1. Create a folder for your datasets. Usually, multiple users use one folder for all datasets to be able to share them. Later on, in the training and inference scripts, you will need the path to the dataset.

  2. Create the EML Tools folder structure, e.g. eml-tools. The structure can be found here: https://github.com/embedded-machine-learning/eml-tools#interface-folder-structure. Most of the following steps are also performed by the script generate_workspace_tf2oda_openvino, which is listed below.

#!/bin/bash


#1. Create a folder for your datasets. Usually, multiple users use one folder for all datasets to be able to share them. Later on, in the 
#training and inference scripts, you will need the path to the dataset.
#2. Create the EML tools folder structure, e.g. ```eml-tools```. The structure can be found here: https://github.com/embedded-machine-learning/eml-tools#interface-folder-structure
ROOTFOLDER=`pwd`

#In your root directory, create the structure. Sample code
mkdir -p eml_projects
mkdir -p venv

#3. Clone the EML tools repository into your workspace
EMLTOOLSFOLDER=./eml-tools
if [ ! -d "$EMLTOOLSFOLDER" ] ; then
  git clone https://github.com/embedded-machine-learning/eml-tools.git "$EMLTOOLSFOLDER"
else 
  echo $EMLTOOLSFOLDER already exists
fi

#4. Create the task spooler script to be able to use the correct task spooler on the device. In our case, just copy
#./init_ts.sh

# Project setup
#5. Create a virtual environment for TF2ODA in your venv folder. The venv folder is put outside of the project folder to 
#avoid copying lots of small files when you copy the project folder. Conda would also be a good alternative.
# From root
cd $ROOTFOLDER

cd ./venv

TF2ODAENV=tf24_py36
if [ ! -d "$TF2ODAENV" ] ; then
  virtualenv -p python3.8 $TF2ODAENV
  source ./$TF2ODAENV/bin/activate

  # Install necessary libraries
  python -m pip install --upgrade pip
  pip install --upgrade setuptools cython wheel
  
  # Install EML libraries
  pip install lxml xmltodict tqdm beautifulsoup4 pycocotools numpy pandas matplotlib pillow
  
  # Install TF2ODA specifics
  #pip install tensorflow==2.4.1
  
  cd $ROOTFOLDER
  
  echo "# Install protobuf"
  PROTOC_ZIP=protoc-3.14.0-linux-x86_64.zip
  curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/$PROTOC_ZIP
  unzip -o $PROTOC_ZIP -d protobuf
  rm -f $PROTOC_ZIP
  
  echo "# Clone the tensorflow models repository"
  git clone https://github.com/tensorflow/models.git
  cd models/research/
  cp object_detection/packages/tf2/setup.py .
  python -m pip install .
  
  echo "# Add object detection and slim to the python path"
  export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
  
  echo "# Prepare TF2 proto files"
  ../../protobuf/bin/protoc object_detection/protos/*.proto --python_out=.

  echo "# Test installation"
  # If all tests are OK or skipped, then the installation was successful
  python object_detection/builders/model_builder_tf2_test.py
  
  echo "# Test if Tensorflow works with CUDA on the machine. For TF 2.4.1, you have to use CUDA 11.0"
  python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
  
  echo "Important information: If there are any library errors, you have to install the correct versions manually. TFODAPI does install the latest version of "
  echo "tensorflow. However, in this script Tensorflow 2.4.1 is desired. Then, you have to uninstall the newer versions and replace with current versions."

  echo "# Installation complete"
  
else 
  echo $TF2ODAENV already exists
fi

cd $ROOTFOLDER
source ./venv/$TF2ODAENV/bin/activate

#6. Create a virtual environment for OpenVino 2021.4 with Python 3.6, as OpenVino 2021.4 does not work with Python 3.8
# From root
cd $ROOTFOLDER
cd ./venv
OPENVINOVENVFOLDER=openvino_tf2_py36
if [ ! -d "$OPENVINOVENVFOLDER" ] ; then
  virtualenv -p python3.6 $OPENVINOVENVFOLDER
  source $OPENVINOVENVFOLDER/bin/activate

  # Install necessary libraries
  python -m pip install --upgrade pip
  pip install --upgrade setuptools cython wheel

  # Install EML libraries
  pip install lxml xmltodict tqdm beautifulsoup4 pycocotools pandas absl-py

  # Install OpenVino libraries
  pip install onnx-simplifier networkx defusedxml progress requests
  
  # Install Tensorflow
  pip install tensorflow
else 
  echo $OPENVINOVENVFOLDER already exists
fi

cd $ROOTFOLDER

echo "Created the virtual environments for TF2ODA inference and OpenVino inference"

Note: TensorFlow also has to be installed in the OpenVino environment, because it is needed to execute the model conversion.
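
Assuming both environments were created with the script above and OpenVino is installed under /opt/intel/openvino_2021.4.582, a quick sanity check could look like this (a sketch, run from the EML root folder):

# Check the TF2ODA environment
source ./venv/tf24_py36/bin/activate
python -c "import tensorflow as tf; print(tf.__version__)"
deactivate

# Check the OpenVino environment; the setupvars.sh path depends on your installation
source ./venv/openvino_tf2_py36/bin/activate
source /opt/intel/openvino_2021.4.582/bin/setupvars.sh
python -c "import tensorflow as tf; print(tf.__version__)"
python -c "from openvino.inference_engine import IECore; print('Inference Engine available')"
deactivate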

Project setup

  1. Go to your project folder, e.g. ./eml_projects, and create a project folder, e.g. ./tf2oda-oxford-pets

  2. Copy the scripts from this repository to that folder and execute chmod 777 *.sh to be able to run the scripts. One of the scripts is the task spooler script ./init_ts.sh, which can be shared by multiple EML projects.

  3. Run ./setup_dirs_openvino.sh to generate all necessary folders.

  4. Copy your exported models from your training, e.g. from https://github.com/embedded-machine-learning/hwmodule-tf2oda-server, to ./exported-models. These steps are summarized in the sketch below.
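
The project setup above roughly corresponds to the following commands (a sketch, run from the EML root folder; the project name and the source paths are examples):

cd ./eml_projects
mkdir -p tf2oda-oxford-pets
cd tf2oda-oxford-pets

# Copy the scripts of this repository into the project folder (adapt the source path)
cp /path/to/hwmodule-tf2oda-openvino/*.sh .
chmod 777 *.sh

# Generate the folder structure and copy the exported models
./setup_dirs_openvino.sh
cp -r /path/to/training/exported-models/* ./exported-models/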

Modification of script files

The next step is to adapt the script files to the current environment.

Adapt Task Spooler Script

In init_ts.sh, either adapt

export TS_SOCKET="/srv/ts_socket/GPU.socket"
chmod 777 /srv/ts_socket/GPU.socket
export TS_TMPDIR=~/logs

to your task spooler path, or call another task spooler script in your EML Tools root instead:

. ../../init_ts.sh

Adapt Environment Scripts for Tensorflow Inference

In init_env.sh, adapt the following part to your venv folder or conda implementation.

PROJECTROOT=`pwd`
ENVROOT=../..

source $ENVROOT/venv/tf24_py36/bin/activate
cd $ENVROOT/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
echo New python path $PYTHONPATH

cd $PROJECTROOT

In init_env.sh, the line source $ENVROOT/venv/tf24_py36/bin/activate has to match your venv folder or conda implementation.

In init_env_openvino.sh, adapt source ../../venv/openvino_tf2_py36/bin/activate to match your openvino virtual environment.
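
For orientation, a minimal init_env_openvino.sh could look roughly like this, assuming the venv name and installation path used in this guide:

# Activate the OpenVino virtual environment
source ../../venv/openvino_tf2_py36/bin/activate

# Load the OpenVino environment variables (adapt to your installation)
source /opt/intel/openvino_2021.4.582/bin/setupvars.sh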

OpenVino Conversion of Tensorflow Exported Models to OpenVino IR

The first script to adapt is convert_tf2_to_ir_TEMPLATE.sh. It converts the exported models into OpenVino IR.

  1. Edit the script and adapt the following constants for your conversion:
# Constant Definition
[email protected]    #Change to your email address
SCRIPTPREFIX=../../eml-tools   #No need to change this
HARDWARENAME=IntelNUC    #Use your HW identifier

#Openvino installation directory for the model optimizer
OPENVINOINSTALLDIR=/opt/intel/openvino_2021.4.582   #Use your OpenVino installation
PRECISIONLIST="FP16 FP32"   # Set the precisions to convert to. Default: FP16 and FP32.
  2. If you want to use the script manually, do the following; otherwise, you can use add_folder_conv_ir.sh instead. Copy and rename convert_tf2_to_ir_TEMPLATE.sh to convert_tf2_to_ir_[MODELNAME].sh, e.g. convert_tf2_to_ir_tf2oda_ssdmobilenetv2_300x300_pets_D100.sh, where MODELNAME is an exact match of the folder name of the model in exported-models. This script will then only be used to convert this model. Note that the MODELNAME is extracted from the file name, and information about the implementation is extracted from it for the evaluation. A rough sketch of the underlying Model Optimizer call is shown below.
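
For orientation, the conversion performed by the template roughly corresponds to a Model Optimizer call like the following (a sketch with the standard flags for TF2 Object Detection API SSD models; the transformations config, paths, and output layout depend on the model, the template, and your OpenVino installation):

source /opt/intel/openvino_2021.4.582/bin/setupvars.sh

MODELNAME=tf2oda_ssdmobilenetv2_300x300_pets_D100   # example model folder in exported-models

python /opt/intel/openvino_2021.4.582/deployment_tools/model_optimizer/mo_tf.py \
  --saved_model_dir exported-models/$MODELNAME/saved_model \
  --tensorflow_object_detection_api_pipeline_config exported-models/$MODELNAME/pipeline.config \
  --transformations_config /opt/intel/openvino_2021.4.582/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v2.0.json \
  --reverse_input_channels \
  --data_type FP16 \
  --output_dir exported-models-openvino/$MODELNAME/FP16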

Inference Script OpenVino

The script openvino_inf_eval_saved_model_TEMPLATE.sh executes the TEMPLATE network on OpenVino.

Adapt the following constants for your environment:

SCRIPTPREFIX=../../eml-tools   #No need to change this
HARDWARENAME=IntelNUC     #Set your hardware id
DATASET=/home/intel-nuc/eml-tools/datasets/dataset-oxford-pets-val-debug   #Validation dataset
LABELMAP=label_map.pbtxt

#Openvino installation directory for the inferrer (not necessary the same as the model optimizer)
OPENVINOINSTALLDIR=/opt/intel/openvino_2021.4.582   #Set your OpenVino version
APIMODE=sync   # Use sync mode to get the detailed layer reports too
HARDWARETYPELIST="CPU GPU MYRIAD"  #Set the hardware, which you want to do inference on

Important Notice: TF2ODA EfficientDet does run, but does not return any useful bounding boxes.
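
A manually prepared OpenVino inference script can be submitted to the task spooler like this (a sketch; the model name is an example, and the add-folder script described below automates these steps):

. ./init_ts.sh
ts ./openvino_inf_eval_saved_model_tf2oda_ssdmobilenetv2_300x300_pets_D100.sh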

Inference Script TensorFlow

The script tf2oda_inf_eval_saved_model_TEMPLATE.sh executes and evaluates the TEMPLATE model on TF2.

For inference, TEMPLATE has to be replaced by the name of the network that shall be evaluated. In case you do not use the add_folder... scripts, you can prepare the scripts manually. First, copy tf2oda_inf_eval_saved_model_TEMPLATE.sh and rename it to fit your network, e.g. tf2oda_inf_eval_saved_model_tf2oda_ssdmobilenetv2_300x300_pets_s1000.sh. The script uses the model name to load the configuration from ./jobs.

For each network to be evaluated, the following constants have to be adapted:

[email protected] # Set your email to get notified
SCRIPTPREFIX=../../eml-tools    # No need to change
DATASET=/srv/cdl-eml/datasets/dataset-oxford-pets-val-debug   #Set this dataset as the validation dataset
HARDWARENAME=IntelNUC   # Set your hardware name

Put the adapted scripts into ./jobs. From there, the scripts can be started, for example via the task spooler as sketched below.
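
A manual preparation and start of a single TF2 inference job could look roughly like this (a sketch; the model name is an example, and depending on the template, the TEMPLATE placeholder inside the copied file may also have to be replaced, which is what the add-folder scripts described below do):

MODELNAME=tf2oda_ssdmobilenetv2_300x300_pets_s1000

# Create a model-specific copy of the template and place it in ./jobs
cp tf2oda_inf_eval_saved_model_TEMPLATE.sh jobs/tf2oda_inf_eval_saved_model_$MODELNAME.sh
chmod 777 jobs/tf2oda_inf_eval_saved_model_$MODELNAME.sh

# Start it via the task spooler
. ./init_ts.sh
ts ./jobs/tf2oda_inf_eval_saved_model_$MODELNAME.sh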

Send Mail

To get notifications when jobs start and stop, the program can send mails to the user. Each sendmail script consists of the prefix sendmail_ and an action-hardware part. The script reads the latter part and uses it as the subject and content of the mail. Here is an example script for sending a mail when the TF2 inference starts on the IntelNUC: sendmail_Start_TF2_IntelNUC.sh

For the scripts to work properly, the constants inside them need to be adapted to your setup.
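
What such a sendmail script does internally can be sketched as follows, assuming sendmail is configured as described in the linked guide; the recipient address is a placeholder and the exact message format may differ in the actual scripts:

#!/bin/bash
# Derive the mail subject from the script name, e.g. sendmail_Start_TF2_IntelNUC.sh -> Start_TF2_IntelNUC
SUBJECT=$(basename "$0" .sh | sed 's/^sendmail_//')
RECIPIENT=your.name@example.com   # placeholder, set your own address

# Send a short notification mail with the action-hardware part as subject and content
printf "Subject: %s\n\n%s\n" "$SUBJECT" "$SUBJECT" | sendmail "$RECIPIENT"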

Add Folder Jobs

add_folder_conv_ir.sh adds conversion jobs to the task spooler. Nothing needs to be changed here.

add_folder_inftf2_jobs.sh loads all converted IR models from ./exported-models-openvino and puts their execution scripts into the task spooler. The execution script templates work like this: the add-jobs script makes a copy of tf2oda_inf_eval_saved_model_TEMPLATE.sh and replaces TEMPLATE with the model name. Then, it adds the generated scripts to the task spooler. No script adaptations are necessary.

add_folder_infopenvino_jobs.sh loads all converted IR models from ./exported-models-openvino and puts their execution scripts into the task spooler. The execution script templates work like this: the add-jobs script makes a copy of openvino_inf_eval_saved_model_TEMPLATE.sh and replaces TEMPLATE with the model name. Then, it adds the generated scripts to the task spooler. No script adaptations are necessary. A rough sketch of how such an add-folder script works is shown below.
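
The sketch below illustrates the principle (loop over the converted models, instantiate the template, queue the jobs); it follows the folder and template names of this project but is an illustration, not the exact script:

. ./init_ts.sh   # initialize the task spooler socket

for MODELDIR in exported-models-openvino/*/ ; do
  MODELNAME=$(basename "$MODELDIR")

  # Instantiate the template for this model
  sed "s/TEMPLATE/$MODELNAME/g" openvino_inf_eval_saved_model_TEMPLATE.sh \
    > openvino_inf_eval_saved_model_$MODELNAME.sh
  chmod 777 openvino_inf_eval_saved_model_$MODELNAME.sh

  # Queue the job
  ts ./openvino_inf_eval_saved_model_$MODELNAME.sh
done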

Running the system

  1. Convert the models you want to run with ./add_folder_conv_ir.sh and wait for the jobs to finish.
  2. Run ./add_all_inf.sh to add the jobs of both TF2ODA and OpenVino to the task spooler. As an alternative, you can run the separate add-folder job scripts add_folder_inftf2_jobs.sh and add_folder_infopenvino_jobs.sh.

The results can be found in ./results.
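
While the jobs are running, the queue and the job outputs can be inspected with the task spooler:

ts -l        # list queued, running, and finished jobs
ts -c <id>   # print the output of a finished job
ts -t <id>   # follow the output of a running job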

Common Problems

Task Spooler Blocked

If the task spooler freezes or is blocked, the following error message is shown:

=== Init task spooler ===
Setup task spooler socket for GPU.
chmod: changing permissions of '/srv/ts_socket/GPU.socket': Operation not permitted
task spooler output directory: /home/wendt/logs
Task spooler initialized /srv/ts_socket/GPU.socket
(tf24) [wendt@eda02 graz-pedestrian]$ ts -l
c: cannot connect to the server
(tf24) [wendt@eda02 graz-pedestrian]$

The cause is that a user has blocked the task spooler socket and nobody else has access rights. It has to be released by that user or by a sudo user. The solution is to put the following command line into the task spooler script: chmod 777 /srv/ts_socket/GPU.socket
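
To release a blocked socket manually, the blocking user or a sudo user can run, for example:

# Make the socket accessible for other users again
sudo chmod 777 /srv/ts_socket/GPU.socket

# If the task spooler server itself hangs, the blocking user can kill it; it restarts on the next ts call
ts -K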

Windows Instead of Linux EOL in Files

Note: If you get the error /bin/bash^M: bad interpreter or other strange execution problems, your scripts might use Windows line endings (EOL). To correct this, change the EOL to Unix (LF).
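
The line endings of the scripts can be converted, for example, with one of the following commands:

dos2unix *.sh              # if dos2unix is installed
sed -i 's/\r$//' *.sh      # alternative without additional tools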

Embedded Machine Learning Laboratory

This repository is part of the Embedded Machine Learning Laboratory at TU Wien. For more useful guides and various scripts for many different platforms, visit our EML Tools: https://github.com/embedded-machine-learning/eml-tools.

Our newest projects can be viewed on our webpage: https://eml.ict.tuwien.ac.at/
