Specify precision for the OpenVINO core object
Recent processors such as Sapphire Rapids default the inference precision to BF16,
which degrades the accuracy of F32 models run with the default configuration.

Specify the inference precision explicitly so that accuracy stays consistent
across processors.

Tracked-On: OAM-111649
Signed-off-by: Anoob Anto K <[email protected]>
akodanka authored and sysopenci committed Aug 7, 2023
1 parent 2c3dfa8 commit d94091d
Showing 1 changed file with 2 additions and 0 deletions.
IENetwork.cpp (2 additions, 0 deletions):

@@ -39,6 +39,7 @@ bool IENetwork::createNetwork(std::shared_ptr<ov::Model> network, const std::str
         ALOGE("Invalid Network pointer");
         return false;
     } else {
+        ie.set_property(deviceStr, {{ov::hint::inference_precision.name(), "f32"}});
         ov::CompiledModel compiled_model = ie.compile_model(network, deviceStr);
         ALOGD("createNetwork is done....");
 #if __ANDROID__
@@ -76,6 +77,7 @@ void IENetwork::loadNetwork(const std::string& modelName) {
 
     ALOGD("loading infer request for Intel Device Type : %s", deviceStr.c_str());
 
+    ie.set_property(deviceStr, {{ov::hint::inference_precision.name(), "f32"}});
     ov::CompiledModel compiled_model = ie.compile_model(modelName, deviceStr);
     mInferRequest = compiled_model.create_infer_request();
     isLoaded = true;
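For reference, the same precision hint can be exercised in a standalone program. The sketch below is illustrative only, not part of the patch: it assumes the OpenVINO 2.x C++ runtime and the CPU device, "model.xml" is a hypothetical model path, and the local names stand in for the ie/deviceStr members used in IENetwork.cpp.

// Minimal sketch, assuming OpenVINO 2.x: pin f32 inference precision on the
// core before compiling a model, mirroring the change in IENetwork.cpp.
#include <openvino/openvino.hpp>
#include <string>

int main() {
    ov::Core core;                     // counterpart of the patch's "ie" object
    const std::string device = "CPU";  // counterpart of "deviceStr"

    // Without this hint, BF16-capable processors (e.g. Sapphire Rapids) may
    // execute an F32 model in BF16 by default, affecting accuracy.
    core.set_property(device, ov::hint::inference_precision(ov::element::f32));

    // The hint applies to models compiled afterwards on that device.
    ov::CompiledModel compiled = core.compile_model("model.xml", device);  // hypothetical path
    ov::InferRequest request = compiled.create_infer_request();
    // ... set input tensors and call request.infer() as usual ...
    return 0;
}

The hint form ov::hint::inference_precision(ov::element::f32) and the string-keyed AnyMap form used in the patch ({{ov::hint::inference_precision.name(), "f32"}}) set the same property.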
