Commit f9e2637

Remove A3C FP32, SSD-VGG16 FP32 and Int8, Inception V4 Int8, Inception ResNet V2 Int8, and RFCN Int8 (#150)

dmsuehir authored and karthikvadla committed Feb 6, 2019
1 parent 65d1dd2 commit f9e2637

Showing 107 changed files with 4 additions and 13,897 deletions.
7 changes: 2 additions & 5 deletions benchmarks/README.md
@@ -18,19 +18,16 @@ dependencies to be installed:
| Adversarial Networks | TensorFlow | DCGAN | Inference | [FP32](adversarial_networks/tensorflow/dcgan/README.md#fp32-inference-instructions) |
| Classification | TensorFlow | Wide & Deep | Inference | [FP32](classification/tensorflow/wide_deep/README.md#fp32-inference-instructions) |
| Content Creation | TensorFlow | DRAW | Inference | [FP32](content_creation/tensorflow/draw/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | Inception ResNet V2 | Inference | [Int8](image_recognition/tensorflow/inception_resnet_v2/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/inception_resnet_v2/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | Inception ResNet V2 | Inference | [FP32](image_recognition/tensorflow/inception_resnet_v2/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | Inception V3 | Inference | [Int8](image_recognition/tensorflow/inceptionv3/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/inceptionv3/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | Inception V4 | Inference | [Int8](image_recognition/tensorflow/inceptionv4/README.md#int8-inference-instructions) |
| Image Recognition | TensorFlow | MobileNet V1 | Inference | [FP32](image_recognition/tensorflow/mobilenet_v1/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | ResNet 101 | Inference | [Int8](image_recognition/tensorflow/resnet101/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/resnet101/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | ResNet 50 | Inference | [Int8](image_recognition/tensorflow/resnet50/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/resnet50/README.md#fp32-inference-instructions) |
| Image Recognition | TensorFlow | SqueezeNet | Inference | [FP32](image_recognition/tensorflow/squeezenet/README.md#fp32-inference-instructions) |
| Image Segmentation | TensorFlow | 3D UNet | Inference | [FP32](image_segmentation/tensorflow/3d_unet/README.md#fp32-inference-instructions) |
| Image Segmentation | TensorFlow | Mask R-CNN | Inference | [FP32](image_segmentation/tensorflow/maskrcnn/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | Fast R-CNN | Inference | [FP32](object_detection/tensorflow/fastrcnn/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | R-FCN | Inference | [Int8](object_detection/tensorflow/rfcn/README.md#int8-inference-instructions) [FP32](object_detection/tensorflow/rfcn/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | R-FCN | Inference | [FP32](object_detection/tensorflow/rfcn/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | SSD-MobileNet | Inference | [FP32](object_detection/tensorflow/ssd-mobilenet/README.md#fp32-inference-instructions) |
| Object Detection | TensorFlow | SSD-VGG16 | Inference | [Int8](object_detection/tensorflow/ssd-vgg16/README.md#int8-inference-instructions) [FP32](object_detection/tensorflow/ssd-vgg16/README.md#fp32-inference-instructions) |
| Recommendation | TensorFlow | NCF | Inference | [FP32](recommendation/tensorflow/ncf/README.md#fp32-inference-instructions) |
| Reinforcement Learning | TensorFlow | A3C | Inference | [FP32](reinforcement_learning/tensorflow/a3c/README.md#fp32-inference-instructions) |
| Text-to-Speech | TensorFlow | WaveNet | Inference | [FP32](text_to_speech/tensorflow/wavenet/README.md#fp32-inference-instructions) |
87 changes: 2 additions & 85 deletions benchmarks/common/tensorflow/start.sh
@@ -155,22 +155,6 @@ function 3d_unet() {
fi
}

# A3C model
function a3c() {
if [ ${PRECISION} == "fp32" ]; then

pip install opencv-python
export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}

CMD="${CMD} --checkpoint=${CHECKPOINT_DIRECTORY}"

PYTHONPATH=${PYTHONPATH} CMD=${CMD} run_model
else
echo "PRECISION=${PRECISION} is not supported for ${MODEL_NAME}"
exit 1
fi
}
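The removed `a3c` function above follows the same guard every per-model function in `start.sh` uses: run only for a supported precision, otherwise print an error and fail fast. A minimal standalone sketch of that guard (the variable values here are hypothetical, chosen only to illustrate the unsupported-precision path):

```shell
# Sketch: fail-fast precision guard used by the per-model functions.
MODEL_NAME="example_model"   # hypothetical model name for illustration
PRECISION="int8"             # hypothetical; this example model only supports fp32

check_precision() {
  if [ "${PRECISION}" = "fp32" ]; then
    echo "running ${MODEL_NAME} at ${PRECISION}"
  else
    # Unsupported precision: report and signal failure to the caller.
    echo "PRECISION=${PRECISION} is not supported for ${MODEL_NAME}"
    return 1
  fi
}

status=0
output=$(check_precision) || status=$?
echo "${output}"
```

This commit removes entire functions (like `a3c`) rather than precision branches, so unsupported models now fail at the dispatch step instead of inside a per-model guard.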

# DCGAN model
function dcgan() {
if [ ${PRECISION} == "fp32" ]; then
@@ -289,9 +273,7 @@ function inception_resnet_v2() {
exit 1
fi

if [ ${PRECISION} == "int8" ]; then
PYTHONPATH=${PYTHONPATH} CMD=${CMD} run_model
elif [ ${PRECISION} == "fp32" ]; then
if [ ${PRECISION} == "fp32" ]; then
# Add on --in-graph and --data-location for fp32 accuracy inference
if [ ${MODE} == "inference" ] && [ ${ACCURACY_ONLY} == "True" ]; then
CMD="${CMD} --in-graph=${IN_GRAPH} --data-location=${DATASET_LOCATION}"
@@ -305,21 +287,6 @@ function inception_resnet_v2() {
fi
}

# inceptionv4 model
function inceptionv4() {
if [ ${PRECISION} == "int8" ]; then
# For accuracy, dataset location is required
if [ "${DATASET_LOCATION_VOL}" == None ] && [ ${ACCURACY_ONLY} == "True" ]; then
echo "No dataset directory specified, accuracy cannot be calculated."
exit 1
fi
PYTHONPATH=${PYTHONPATH} CMD=${CMD} run_model
else
echo "PRECISION=${PRECISION} is not supported for ${MODEL_NAME}"
exit 1
fi
}

# Mask R-CNN model
function maskrcnn() {
if [ ${PRECISION} == "fp32" ]; then
@@ -451,16 +418,7 @@ function rfcn() {
split_arg="--split=${split}"
fi

if [ ${PRECISION} == "int8" ]; then
number_of_steps_arg=""

if [ -n "${number_of_steps}" ] && [ ${BENCHMARK_ONLY} == "True" ]; then
number_of_steps_arg="--number_of_steps=${number_of_steps}"
fi

CMD="${CMD} ${number_of_steps_arg} ${split_arg}"

elif [ ${PRECISION} == "fp32" ]; then
if [ ${PRECISION} == "fp32" ]; then
if [[ -z "${config_file}" ]] && [ ${BENCHMARK_ONLY} == "True" ]; then
echo "R-FCN requires --config_file arg to be defined"
exit 1
@@ -522,41 +480,6 @@ function ssd_mobilenet() {
fi
}

# SSD-VGG16 model
function ssd_vgg16() {
# In-graph is required
if [ "${IN_GRAPH}" == None ] ; then
echo "In graph must be specified!"
exit 1
fi

# For accuracy, dataset location is required, see README for more information.
if [ "${DATASET_LOCATION_VOL}" == "None" ] && [ ${ACCURACY_ONLY} == "True" ]; then
echo "No dataset directory specified, accuracy cannot be calculated."
exit 1
fi

if [ "${DATASET_LOCATION_VOL}" == "None" ] && [ ${BENCHMARK_ONLY} == "True" ]; then
DATASET_LOCATION=""
fi

if [ ${NOINSTALL} != "True" ]; then
pip install opencv-python
fi

if [ ${PRECISION} == "int8" ]; then
CMD="${CMD} --data-location=${DATASET_LOCATION}"
elif [ ${PRECISION} == "fp32" ]; then
CMD="${CMD} --in-graph=${IN_GRAPH} \
--data-location=${DATASET_LOCATION}"
else
echo "PRECISION=${PRECISION} is not supported for ${MODEL_NAME}"
exit 1
fi

PYTHONPATH=${PYTHONPATH} CMD=${CMD} run_model
}

# Wavenet model
function wavenet() {
if [ ${PRECISION} == "fp32" ]; then
@@ -612,8 +535,6 @@ echo "Log output location: ${LOGFILE}"
MODEL_NAME=$(echo ${MODEL_NAME} | tr 'A-Z' 'a-z')
if [ ${MODEL_NAME} == "3d_unet" ]; then
3d_unet
elif [ ${MODEL_NAME} == "a3c" ]; then
a3c
elif [ ${MODEL_NAME} == "dcgan" ]; then
dcgan
elif [ ${MODEL_NAME} == "draw" ]; then
@@ -624,8 +545,6 @@ elif [ ${MODEL_NAME} == "inceptionv3" ]; then
inceptionv3
elif [ ${MODEL_NAME} == "inception_resnet_v2" ]; then
inception_resnet_v2
elif [ ${MODEL_NAME} == "inceptionv4" ]; then
inceptionv4
elif [ ${MODEL_NAME} == "maskrcnn" ]; then
maskrcnn
elif [ ${MODEL_NAME} == "mobilenet_v1" ]; then
@@ -642,8 +561,6 @@ elif [ ${MODEL_NAME} == "squeezenet" ]; then
squeezenet
elif [ ${MODEL_NAME} == "ssd-mobilenet" ]; then
ssd_mobilenet
elif [ ${MODEL_NAME} == "ssd-vgg16" ]; then
ssd_vgg16
elif [ ${MODEL_NAME} == "wavenet" ]; then
wavenet
elif [ ${MODEL_NAME} == "wide_deep" ]; then
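The dispatch chain edited above lower-cases `MODEL_NAME` with `tr 'A-Z' 'a-z'` before matching, so mixed-case names like `DCGAN` still route to the right handler. A minimal standalone sketch of that pattern — shown here with a `case` statement rather than the script's `if/elif` chain, and with only two hypothetical handlers:

```shell
# Sketch of the case-insensitive dispatch pattern used in start.sh:
# normalize the model name to lowercase, then route to a per-model handler.
dispatch_model() {
  local name
  name=$(echo "$1" | tr 'A-Z' 'a-z')
  case "${name}" in
    dcgan)   echo "running dcgan" ;;
    wavenet) echo "running wavenet" ;;
    *)       echo "unsupported model: ${name}"; return 1 ;;
  esac
}

result=$(dispatch_model "DCGAN")
echo "${result}"
```

Because matching happens after normalization, removing a model (as this commit does for `a3c`, `inceptionv4`, and `ssd-vgg16`) only requires deleting its branch — any casing of the removed name then falls through to the error path.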
162 changes: 0 additions & 162 deletions benchmarks/image_recognition/tensorflow/inception_resnet_v2/README.md
@@ -2,170 +2,8 @@

This document has instructions for how to run Inception ResNet V2 for the
following modes/precisions:
* [Int8 inference](#int8-inference-instructions)
* [FP32 inference](#fp32-inference-instructions)

## Int8 Inference Instructions

1. Clone this [intelai/models](https://github.com/IntelAI/models)
repository:

```
$ git clone https://github.com/IntelAI/models.git
```

This repository includes launch scripts for running benchmarks and
an optimized version of the Inception ResNet V2 model code.

2. A link to download the pre-trained model is coming soon.

3. Build a docker image using master of the official
[TensorFlow](https://github.com/tensorflow/tensorflow) repository with
`--config=mkl`. More instructions on
[how to build from source](https://software.intel.com/en-us/articles/intel-optimization-for-tensorflow-installation-guide#inpage-nav-5).

4. If you would like to run Inception ResNet V2 inference and test for
accuracy, you will need the full ImageNet dataset. Benchmarking for latency
and throughput does not require the ImageNet dataset.

Register and download the
[ImageNet dataset](http://image-net.org/download-images).

Once you have the raw ImageNet dataset downloaded, you will need to convert
it to the TFRecord format. This is done using the
[build_imagenet_data.py](https://github.com/tensorflow/models/blob/master/research/inception/inception/data/build_imagenet_data.py)
script. There are instructions in the header of the script explaining
its usage.

After the script has completed, you should have a directory containing
the sharded dataset, similar to:

```
$ ll /home/myuser/datasets/ImageNet_TFRecords
-rw-r--r--. 1 user 143009929 Jun 20 14:53 train-00000-of-01024
-rw-r--r--. 1 user 144699468 Jun 20 14:53 train-00001-of-01024
-rw-r--r--. 1 user 138428833 Jun 20 14:53 train-00002-of-01024
...
-rw-r--r--. 1 user 143137777 Jun 20 15:08 train-01022-of-01024
-rw-r--r--. 1 user 143315487 Jun 20 15:08 train-01023-of-01024
-rw-r--r--. 1 user 52223858 Jun 20 15:08 validation-00000-of-00128
-rw-r--r--. 1 user 51019711 Jun 20 15:08 validation-00001-of-00128
-rw-r--r--. 1 user 51520046 Jun 20 15:08 validation-00002-of-00128
...
-rw-r--r--. 1 user 52508270 Jun 20 15:09 validation-00126-of-00128
-rw-r--r--. 1 user 55292089 Jun 20 15:09 validation-00127-of-00128
```
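A full conversion produces 1024 train shards and 128 validation shards, as the listing above suggests. A quick sanity check on the shard counts might look like the sketch below; it builds the expected layout with empty placeholder files in a temporary directory, but the same two `ls | wc -l` counts can be run against your real dataset directory:

```shell
# Sketch: verify the expected shard counts after TFRecord conversion.
# A temp dir with empty placeholder files stands in for the real dataset.
DATA_DIR=$(mktemp -d)
for i in $(seq -f "%05g" 0 1023); do : > "${DATA_DIR}/train-${i}-of-01024"; done
for i in $(seq -f "%05g" 0 127); do : > "${DATA_DIR}/validation-${i}-of-00128"; done

# Count the shards the same way you would on the real directory.
train_count=$(ls "${DATA_DIR}"/train-* | wc -l)
val_count=$(ls "${DATA_DIR}"/validation-* | wc -l)
echo "train shards: ${train_count}, validation shards: ${val_count}"
```

If either count comes up short, re-run `build_imagenet_data.py` before starting an accuracy run, since missing shards will skew the reported Top1/Top5 numbers.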

5. Next, navigate to the `benchmarks` directory in your local clone of
the [intelai/models](https://github.com/IntelAI/models) repo from step 1.
The `launch_benchmark.py` script in the `benchmarks` directory is
used for starting a benchmarking run in an optimized TensorFlow docker
container. It has arguments to specify which model, framework, mode,
precision, and docker image to use, along with your path to the ImageNet
TF Records that you generated in step 4.

Substitute in your own `--data-location` (from step 4, for accuracy
only), `--in-graph` pre-trained model file path (from step 2),
and the name/tag for your docker image (from step 3).

Inception ResNet V2 can be run for accuracy, latency benchmarking, or
throughput benchmarking. Use one of the examples below, depending on
your use case.

For accuracy (using your `--data-location`, `--accuracy-only` and
`--batch-size 100`):

```
python launch_benchmark.py \
--model-name inception_resnet_v2 \
--precision int8 \
--mode inference \
--framework tensorflow \
--accuracy-only \
--batch-size 100 \
--docker-image tf_int8_docker_image \
--in-graph /home/myuser/inception_resnet_v2_int8_pretrained_model.pb \
--data-location /home/myuser/datasets/ImageNet_TFRecords
```

For latency (using `--benchmark-only`, `--socket-id 0` and `--batch-size 1`):

```
python launch_benchmark.py \
--model-name inception_resnet_v2 \
--precision int8 \
--mode inference \
--framework tensorflow \
--benchmark-only \
--batch-size 1 \
--socket-id 0 \
--docker-image tf_int8_docker_image \
--in-graph /home/myuser/inception_resnet_v2_int8_pretrained_model.pb
```

For throughput (using `--benchmark-only`, `--socket-id 0` and `--batch-size 128`):

```
python launch_benchmark.py \
--model-name inception_resnet_v2 \
--precision int8 \
--mode inference \
--framework tensorflow \
--benchmark-only \
--batch-size 128 \
--socket-id 0 \
--docker-image tf_int8_docker_image \
--in-graph /home/myuser/inception_resnet_v2_int8_pretrained_model.pb
```

Note that the `--verbose` flag can be added to any of the above commands
to get additional debug output.

6. The log file is saved to the
`models/benchmarks/common/tensorflow/logs` directory. Below are
examples of what the tail of your log file should look like for the
different configs.

Example log tail when running for accuracy:

```
Processed 49800 images. (Top1 accuracy, Top5 accuracy) = (0.8015, 0.9523)
Processed 49900 images. (Top1 accuracy, Top5 accuracy) = (0.8016, 0.9524)
Processed 50000 images. (Top1 accuracy, Top5 accuracy) = (0.8015, 0.9524)
lscpu_path_cmd = command -v lscpu
lscpu located here: /usr/bin/lscpu
Ran inference with batch size 100
Log location outside container: /home/myuser/intelai/models/benchmarks/common/tensorflow/logs/benchmark_inception_resnet_v2_inference_int8_20190104_193854.log
```

Example log tail when benchmarking for latency:
```
Iteration 39: 0.052 sec
Iteration 40: 0.052 sec
Average time: 0.052 sec
Batch size = 1
Latency: 52.347 ms
Throughput: 19.103 images/sec
lscpu_path_cmd = command -v lscpu
lscpu located here: /usr/bin/lscpu
Ran inference with batch size 1
Log location outside container: /home/myuser/intelai/models/benchmarks/common/tensorflow/logs/benchmark_inception_resnet_v2_inference_int8_20190104_194938.log
```

Example log tail when benchmarking for throughput:
```
Iteration 39: 0.993 sec
Iteration 40: 1.023 sec
Average time: 0.996 sec
Batch size = 128
Throughput: 128.458 images/sec
lscpu_path_cmd = command -v lscpu
lscpu located here: /usr/bin/lscpu
Ran inference with batch size 128
Log location outside container: /home/myuser/intelai/models/benchmarks/common/tensorflow/logs/benchmark_inception_resnet_v2_inference_int8_20190104_195504.log
```


## FP32 Inference Instructions

1. Clone this [intelai/models](https://github.com/IntelAI/models)

This file was deleted.
