This document provides details and information on more advanced usages of the model-builder script. For basic usage instructions, see the README.md file.
Environment variables can be set before running the model-builder script in order to provide custom settings for the model packages and containers.
| Variable | Default value | Description |
|---|---|---|
| `MODEL_PACKAGE_DIR` | `../output` | Directory where model package `.tar.gz` files are located |
| `LOCAL_REPO` | `model-zoo` | Local images will be built as `${LOCAL_REPO}:tag`. Tags are defined by the spec yml files |
| `TENSORFLOW_TAG` | `2.3.0` | Tag of the intel-optimized-tensorflow image to use as the base for versioned containers |
| `TAG_PREFIX` | `${TENSORFLOW_TAG}` | Prefix used for the image tags (typically this will be the TF version) |
| `MODEL_WORKSPACE` | `/workspace` | Location where the model package will be extracted in the model container |
| `IMAGE_LIST_FILE` | None | Specify a file path where the list of built images/tags will be written. This is used by automated build scripts. |
You can set these environment variables to customize the model-builder settings. For example:

```
MODEL_PACKAGE_DIR=/tmp/model_packages model-builder
```
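The defaulting behavior in the table above can be sketched as follows (a minimal illustration of how unset variables fall back to their defaults; this is not the actual model-builder code):

```python
import os

# Resolve each setting from the environment, falling back to the
# documented default when the variable is unset.
model_package_dir = os.environ.get("MODEL_PACKAGE_DIR", "../output")
local_repo = os.environ.get("LOCAL_REPO", "model-zoo")
tensorflow_tag = os.environ.get("TENSORFLOW_TAG", "2.3.0")
# TAG_PREFIX defaults to whatever TENSORFLOW_TAG resolved to.
tag_prefix = os.environ.get("TAG_PREFIX", tensorflow_tag)

# Local images are built as ${LOCAL_REPO}:tag, so with the defaults a
# tag like "2.3.0-resnet50-fp32-inference" yields:
image = f"{local_repo}:{tag_prefix}-resnet50-fp32-inference"
print(image)
```

With no variables set, this prints `model-zoo:2.3.0-resnet50-fp32-inference`; the `resnet50-fp32-inference` suffix here is just an illustrative spec name.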
When `init-spec` is run, the model spec yaml file's documentation section has a `text_replace` dictionary that defines keyword and value pairs that will be replaced when the final README.md is generated. The final README.md can be generated using either `model-builder generate-documentation <model>` or `model-builder make <model>`. The `text_replace` section is optional; if it doesn't exist, no text replacement will happen when documentation is generated.

By default, when `init-spec` is run, the following text replacement options will be defined in the model's spec yaml file:
| Keyword | Value |
|---|---|
| `<model name>` | The model's name formatted to be written in sentences (like `ResNet50` or `SSD-MobileNet`) |
| `<precision>` | The model's precision formatted to be written in sentences (like `FP32` or `Int8`) |
| `<mode>` | The mode for the model package/container (`inference` or `training`) |
| `<use_case>` | The model's use case formatted as it is in the model zoo directory structure (like `image_recognition` or `object_detection`) |
| `<model-precision-mode>` | The model spec name, which consists of the model name, precision, and mode, as it's formatted in file names (like `resnet50-fp32-inference`) |
An example of what this looks like in the spec yaml is below:

```
documentation:
  ...
  text_replace:
    <mode>: inference
    <model name>: SSD-ResNet34
    <precision>: FP32
    <use case>: object_detection
    <package url>:
    <package name>: ssd-resnet34-fp32-inference.tar.gz
    <package dir>: ssd-resnet34-fp32-inference
    <docker image>:
```
Note: Please make sure to fill in the package url and docker image once they have been uploaded and pushed to a repo.
After `init-spec` is run, these values can be changed (for example, if the `<model name>` is not formatted correctly).
The documentation fragments use the keywords. For example, title.md has:

```
<!--- 0. Title -->
# <model name> <precision> <mode>
```
When the documentation is generated, the text substitution will happen and the generated README.md will have the values filled in:

```
<!--- 0. Title -->
# SSD-ResNet34 FP32 inference
```
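The substitution step amounts to simple string replacement over each fragment; a minimal Python sketch of the idea (hypothetical helper, not the actual assembler code):

```python
# Keyword/value pairs, as they would appear in the spec yaml's
# text_replace section (values here match the SSD-ResNet34 example).
text_replace = {
    "<model name>": "SSD-ResNet34",
    "<precision>": "FP32",
    "<mode>": "inference",
}

def apply_text_replace(fragment: str, replacements: dict) -> str:
    """Replace each keyword in the fragment with its configured value."""
    for keyword, value in replacements.items():
        fragment = fragment.replace(keyword, value)
    return fragment

title = apply_text_replace("# <model name> <precision> <mode>", text_replace)
print(title)  # -> # SSD-ResNet34 FP32 inference
```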
The model-builder command will build packages by calling `docker run` on the tf-tools container, passing in arguments to assembler.py. This internal call looks like the following:

```
docker run --rm -u 503:20 -v <path-to-models-repo>/tools/docker:/tf -v $PWD:/tf/models tf-tools python3 assembler.py --release dockerfiles --build_packages --model_dir=models --output_dir=models/output
```
For single targets such as `bert-large-fp32-training`, the model-builder adds an argument:

```
--only_tags_matching=.*bert-large-fp32-training$
```
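The `--only_tags_matching` value is a regular expression matched against image tags, so the trailing `$` anchors the match to the end of the tag. A small demonstration (the tag names are illustrative):

```python
import re

# The regex passed via --only_tags_matching for a single target.
pattern = re.compile(r".*bert-large-fp32-training$")

tags = [
    "2.3.0-bert-large-fp32-training",
    "2.3.0-bert-large-fp32-inference",
]

# Only tags ending in the target spec name are selected.
matches = [t for t in tags if pattern.match(t)]
print(matches)  # -> ['2.3.0-bert-large-fp32-training']
```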
The model-builder command will construct Dockerfiles by calling `docker run` on the tf-tools container, passing in arguments to assembler.py. This internal call looks like the following:

```
docker run --rm -u 503:20 -v <path-to-models-repo>/tools/docker:/tf -v <path-to-models-repo>/dockerfiles:/tf/dockerfiles tf-tools python3 assembler.py --release dockerfiles --construct_dockerfiles
```
For single targets such as `bert-large-fp32-training`, the model-builder adds an argument:

```
--only_tags_matching=.*bert-large-fp32-training$
```
The model-builder command will build images by calling `docker run` on the tf-tools container, passing in arguments to assembler.py. This internal call looks like the following:

```
docker run --rm -v <path-to-models-repo>/tools/docker:/tf -v /var/run/docker.sock:/var/run/docker.sock tf-tools python3 assembler.py --arg _TAG_PREFIX=2.3.0 --arg http_proxy= --arg https_proxy= --arg TENSORFLOW_TAG=2.3.0 --arg PACKAGE_DIR=model_packages --arg MODEL_WORKSPACE=/workspace --repository model-zoo --release versioned --build_images --only_tags_matching=.*bert-large-fp32-training$ --quiet
```
For single targets such as `bert-large-fp32-training`, the model-builder adds an argument:

```
--only_tags_matching=.*bert-large-fp32-training$
```