Commit accca70 (1 parent: 8c4c7c9): 9 changed files with 604 additions and 42 deletions.

* Adding README files for Intel® Data Center Flex Series GPUs (#125)
* fix incorrect links (#127)
* bump ipython to fix CVE (#128)

Signed-off-by: WafaaT <[email protected]>
Co-authored-by: Clayne Robison <[email protected]>

# Model Zoo for Intel® Architecture Workloads Optimized for the Intel® Data Center GPU Flex Series

This document provides links to step-by-step instructions on how to leverage Model Zoo Docker containers to run optimized, open-source deep learning inference workloads using Intel® Extension for PyTorch* and Intel® Extension for TensorFlow* on the [Intel® Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/data-center-gpu/flex-series/overview.html).

## Base Containers

| AI Framework | Extension | Documentation |
| -----------------------------| ------------- | ----------------- |
| PyTorch | Intel® Extension for PyTorch* | [Intel® Extension for PyTorch* Container](https://github.com/IntelAI/models/blob/master/quickstart/ipex-tool-container/gpu/devcatalog.md) |
| TensorFlow | Intel® Extension for TensorFlow* | [Intel® Extension for TensorFlow* Container](https://github.com/IntelAI/models/blob/master/quickstart/tf-tool-container/gpu/devcatalog.md) |

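For example, the PyTorch base container can be pulled directly; this is the same `xpu-flex` image used in the IPEX tools guide later in this document. The linked pages above remain the authoritative reference for each container's tags:

```
docker pull intel/intel-extension-for-pytorch:xpu-flex
```
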
## Optimized Workloads

The table below provides links to run each workload in a Docker container. The containers are optimized for Linux*.

| Model | Framework | Mode | Documentation | Dataset |
| ----------------------------| ---------- | ----------| ------------------- | ------------ |
| [ResNet 50 v1.5](https://github.com/tensorflow/models/tree/v2.11.0/official/legacy/image_classification/resnet) | TensorFlow | Inference | [INT8](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/tensorflow/resnet50v1_5/inference/gpu/devcatalog.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) |
| [ResNet 50 v1.5](https://arxiv.org/pdf/1512.03385.pdf) | PyTorch | Inference | [INT8](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/pytorch/resnet50v1_5/inference/gpu/DEVCATALOG_FLEX.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) |
| [SSD-MobileNet v1](https://arxiv.org/pdf/1704.04861.pdf) | PyTorch | Inference | [INT8](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/devcatalog.md) | [COCO 2017](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/README.md#datasets) |
| [YOLO v4](https://arxiv.org/pdf/2004.10934.pdf) | PyTorch | Inference | [INT8](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/yolov4/inference/gpu/devcatalog.md) | [COCO 2017](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/README.md#datasets) |
| [SSD-MobileNet](https://arxiv.org/pdf/1704.04861.pdf) | TensorFlow | Inference | [INT8](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/tensorflow/ssd-mobilenet/inference/gpu/devcatalog.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) |
---

102 additions: quickstart/image_recognition/pytorch/resnet50v1_5/inference/gpu/DEVCATALOG_FLEX.md

# Running ResNet50 v1.5 INT8 Inference on the Intel® Data Center GPU Flex Series using Intel® Extension for PyTorch*

## Overview

This document has instructions for running ResNet50 v1.5 inference using Intel® Extension for PyTorch* on Intel GPUs.

## Requirements
| Item | Detail |
| ------ | ------- |
| Host machine | Intel® Data Center GPU Flex Series |
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
| Software | Docker* Installed |

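Once the driver is installed, a quick sanity check (not part of the official setup) is to list the DRI device nodes that are later mounted into the container with `--device=/dev/dri`; the exact device names vary by system:

```
ls -l /dev/dri
# expect card* and renderD* entries if the GPU driver is loaded
```
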
## Get Started

## Download Datasets

The [ImageNet](http://www.image-net.org/) validation dataset is used.

Download and extract the ImageNet2012 dataset from http://www.image-net.org/,
then move the validation images into labeled subfolders using
[the valprep.sh shell script](https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh).

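A minimal sketch of these steps, assuming `ILSVRC2012_img_val.tar` has already been downloaded manually from image-net.org (the site requires registration, so there is no direct download command here):

```
mkdir -p imagenet/val
mv ILSVRC2012_img_val.tar imagenet/val/
cd imagenet/val
tar -xf ILSVRC2012_img_val.tar
wget https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
bash valprep.sh   # moves the 50,000 validation JPEGs into per-class subfolders
cd ../..
```
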
After running the data prep script, your folder structure should look something like this:

```
imagenet
└── val
    ├── ILSVRC2012_img_val.tar
    ├── n01440764
    │   ├── ILSVRC2012_val_00000293.JPEG
    │   ├── ILSVRC2012_val_00002138.JPEG
    │   ├── ILSVRC2012_val_00003014.JPEG
    │   ├── ILSVRC2012_val_00006697.JPEG
    │   └── ...
    └── ...
```
The folder that contains the `val` directory should be set as the
`DATASET_DIR`
(for example: `export DATASET_DIR=/home/<user>/imagenet`).

## Quick Start Scripts

| Script name | Description |
|-------------|-------------|
| `inference_block_format.sh` | Runs ResNet50 inference (block format) for the specified precision (int8) |

## Run Using Docker

### Set up Docker Image

```
docker pull intel/image-recognition:pytorch-flex-gpu-resnet50v1-5-inference
```
### Run Docker Image
The ResNet50 v1.5 inference container includes the scripts, model, and libraries needed to run INT8 inference. To run the `inference_block_format.sh` quickstart script using this container, you'll need to provide a volume mount for the ImageNet dataset, as well as an output directory where log files will be written.

```
export PRECISION=int8
export OUTPUT_DIR=<path to output directory>
export DATASET_DIR=<path to the preprocessed imagenet dataset>
export SCRIPT=quickstart/inference_block_format.sh
DOCKER_ARGS=${DOCKER_ARGS:---rm -it}
IMAGE_NAME=intel/image-recognition:pytorch-flex-gpu-resnet50v1-5-inference
# Look up the video and render group IDs so the container can access the GPU
VIDEO=$(getent group video | sed -E 's,^video:[^:]*:([^:]*):.*$,\1,')
RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,')
test -z "$RENDER" || RENDER_GROUP="--group-add ${RENDER}"
docker run \
  -v <your-local-dir>:/workspace \
  --group-add ${VIDEO} \
  ${RENDER_GROUP} \
  --device=/dev/dri \
  --ipc=host \
  --env PRECISION=${PRECISION} \
  --env OUTPUT_DIR=${OUTPUT_DIR} \
  --env DATASET_DIR=${DATASET_DIR} \
  --env http_proxy=${http_proxy} \
  --env https_proxy=${https_proxy} \
  --env no_proxy=${no_proxy} \
  --volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
  --volume ${DATASET_DIR}:${DATASET_DIR} \
  ${DOCKER_ARGS} \
  ${IMAGE_NAME} \
  /bin/bash $SCRIPT
```
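
When the script finishes, its logs are written to the directory exported as `OUTPUT_DIR` (mounted into the container above); a quick check such as the following, while not part of the official flow, confirms the run produced output:

```
ls ${OUTPUT_DIR}
```
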
## Documentation and Sources

[GitHub* Repository](https://github.com/IntelAI/models/tree/master/dockerfiles/model_containers)

## Support
Support for Intel® Extension for PyTorch* is available via the [Intel® AI Analytics Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz). Additionally, the Intel® Extension for PyTorch* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-pytorch/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.

## License Agreement

LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the [license file](https://github.com/IntelAI/models/tree/master/third_party) for additional details.
---

87 additions

# Optimizations for Intel® Data Center GPU Flex Series using Intel® Extension for PyTorch*

## Overview

This document has instructions for running Intel® Extension for PyTorch* (IPEX) for
GPU in a container.

## Requirements
| Item | Detail |
| ------ | ------- |
| Host machine | Intel® Data Center GPU Flex Series |
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
| Software | Docker* Installed |

## Get Started

### Installing the Intel® Extension for PyTorch*
#### Docker pull command:

`docker pull intel/intel-extension-for-pytorch:xpu-flex`

### Running the container:

Run the following commands to start the IPEX GPU tools container. You can use the `-v` option to mount your
local directory into the container. The `-v` argument can be omitted if you do not need
access to a local directory in the container. Pass the video and render groups to your
Docker container so that the GPU is accessible.
```
IMAGE_NAME=intel/intel-extension-for-pytorch:xpu-flex
DOCKER_ARGS=${DOCKER_ARGS:---rm -it}
# Look up the video and render group IDs so the container can access the GPU
VIDEO=$(getent group video | sed -E 's,^video:[^:]*:([^:]*):.*$,\1,')
RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,')
test -z "$RENDER" || RENDER_GROUP="--group-add ${RENDER}"
docker run \
  -v <your-local-dir>:/workspace \
  --group-add ${VIDEO} \
  ${RENDER_GROUP} \
  --device=/dev/dri \
  --ipc=host \
  -e http_proxy=$http_proxy \
  -e https_proxy=$https_proxy \
  -e no_proxy=$no_proxy \
  ${DOCKER_ARGS} \
  ${IMAGE_NAME} \
  bash
```

#### Verify that the XPU is accessible from PyTorch:
You are now inside the container. Run the following command to verify that the XPU is visible to PyTorch:
```
python -c "import torch;print(torch.device('xpu'))"
```
Sample output:
```
xpu
```
Then, verify that the XPU device is available to IPEX:
```
python -c "import intel_extension_for_pytorch as ipex;print(ipex.xpu.is_available())"
```
Sample output:
```
True
```
Finally, use the following command to check whether oneMKL support is enabled by default:
```
python -c "import intel_extension_for_pytorch as ipex;print(ipex.xpu.has_onemkl())"
```
Sample output:
```
True
```

## Summary and Next Steps
You are now inside a container with Python 3.9, PyTorch, and IPEX preinstalled. You can run your own scripts
on the Intel GPU.

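As a minimal, hypothetical smoke test (not part of the official guide), the following runs a small forward pass on the GPU from inside the container; the model and tensor shapes are arbitrary:

```
python - <<'EOF'
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device with PyTorch

# Arbitrary toy model and input, moved to the GPU
model = torch.nn.Linear(8, 4).to("xpu").eval()
x = torch.randn(2, 8, device="xpu")

with torch.no_grad():
    y = model(x)

print(y.device)  # expected: xpu:0
EOF
```
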
## Documentation and Sources

[GitHub* Repository](https://github.com/intel/intel-extension-for-pytorch/tree/master/docker)

## Support
Support for Intel® Extension for PyTorch* is available via the [Intel® AI Analytics Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz). Additionally, the Intel® Extension for PyTorch* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-pytorch/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.