Add QNN EP HTP shared memory allocator #23136

Open · wants to merge 43 commits into base: main

Conversation

@edgchen1 (Contributor) commented Dec 18, 2024

Description

Adds a QNN EP HTP shared memory allocator.

The HTP shared memory allocator (HtpSharedMemoryAllocator) calls into the rpcmem shared library (libcdsprpc.so/.dll) to allocate and free memory that can be shared between the HTP and the CPU.

The allocator can be enabled by setting the QNN EP option enable_htp_shared_memory_allocator to 1. QNNExecutionProvider::CreatePreferredAllocators() will then return an instance of HtpSharedMemoryAllocator.
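
For illustration, here is a minimal sketch of enabling the option through the C++ API. The provider name "QNN" and the standard backend_path option are existing QNN EP conventions; the backend library and model paths are placeholder values:

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env;
  Ort::SessionOptions session_options;

  // Enable the HTP shared memory allocator alongside the usual QNN EP options.
  session_options.AppendExecutionProvider(
      "QNN",
      {{"backend_path", "libQnnHtp.so"},               // HTP backend library (QnnHtp.dll on Windows)
       {"enable_htp_shared_memory_allocator", "1"}});  // option added in this PR

  Ort::Session session(env, ORT_TSTR("model.onnx"), session_options);
  return 0;
}
```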

For each QNN context, we also need to register and unregister memory handles in order to use the HTP shared memory. This memory handle management is added to QnnBackendManager, which also manages the QNN context handles.

For more information about using HTP shared memory with QNN, see: https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/htp_shared_buffer_tutorial.html#shared-buffer-tutorial
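
As a rough sketch of the flow described in that tutorial (not this PR's actual implementation, which loads libcdsprpc dynamically rather than linking it; the heap id and flag values are the conventional ones from the Hexagon SDK rpcmem headers and should be treated as assumptions):

```cpp
#include <cstddef>
#include <cstdint>

// rpcmem API exported by libcdsprpc.so / libcdsprpc.dll.
extern "C" {
void* rpcmem_alloc(int heap_id, uint32_t flags, int size);
void rpcmem_free(void* ptr);
int rpcmem_to_fd(void* ptr);
}

constexpr int kRpcMemHeapIdSystem = 25;      // RPCMEM_HEAP_ID_SYSTEM (assumed value)
constexpr uint32_t kRpcMemDefaultFlags = 1;  // RPCMEM_DEFAULT_FLAGS (assumed value)

// Allocate a buffer that both CPU and HTP can access, and return the file descriptor
// that QNN memory registration (QnnMem_register on a QNN context) needs.
void* AllocateHtpSharedBuffer(size_t size_in_bytes, int& out_fd) {
  void* buffer = rpcmem_alloc(kRpcMemHeapIdSystem, kRpcMemDefaultFlags,
                              static_cast<int>(size_in_bytes));
  if (buffer == nullptr) {
    return nullptr;
  }
  out_fd = rpcmem_to_fd(buffer);  // this fd goes into the Qnn_MemDescriptor_t used for registration
  return buffer;
}
```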

Limitations:

  • HTP shared memory usage is only supported for graph inputs and outputs. Intermediate values are not supported.
  • An allocation is assigned to a single shared memory buffer. The allocator is not smart enough to have multiple allocations share a single shared memory buffer.

Motivation and Context

Improve performance by using HTP shared memory to avoid overhead from copying data between CPU and NPU.

edgchen1 and others added 30 commits November 5, 2024 15:12
… declarations and definitions for IAllocator::TensorAlloc().
github-actions bot left a comment

You can commit the suggested changes from lintrunner.

onnxruntime/core/providers/qnn/qnn_allocator.cc (review thread outdated, resolved)
@jywu-msft requested a review from HectorSVC December 18, 2024 23:23
@@ -63,6 +65,12 @@ size_t GetElementSizeByType(ONNXTensorElementDataType elem_type) {
return pos->second;
}

size_t GetQnnTensorDataSize(gsl::span<const uint32_t> shape, Qnn_DataType_t element_type) {
ORT_ENFORCE(!shape.empty(), "Empty shape not allowed."); // TODO can we just treat empty shape as a scalar?
Contributor Author

this check is copied from the original implementation here:

ORT_RETURN_IF(dims.empty(), "Tensor dimensions is nullptr");

I'm not sure if it's needed

@edgchen1 marked this pull request as ready for review January 6, 2025 23:14
@edgchen1 changed the title from "[WIP] Add QNN EP HTP shared memory allocator" to "Add QNN EP HTP shared memory allocator" Jan 6, 2025
@HectorSVC added the ep:QNN (issues related to QNN execution provider) label Jan 7, 2025
Comment on lines +1706 to +1707
// - QNN context handle is still valid. This should be true as long as QNN contexts are not freed from
// anywhere other than the destructor.
Contributor Author

This should be true as long as QNN contexts are not freed from anywhere other than the destructor.

it seems kind of brittle to depend on this.

@@ -1098,6 +1099,38 @@ TEST_F(QnnHTPBackendTests, EPOffloadsGraphIOQuantDequant) {
}
}

TEST_F(QnnHTPBackendTests, UseHtpSharedMemoryAllocatorForInputs) {
#if !defined(__ANDROID__) && !defined(_WIN32)
Contributor

QC devices for Windows are Arm64 based, so you can check defined(__aarch64__) || defined(_M_ARM64).
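
A sketch of the suggested condition (just the guard; whatever the test does inside it is unchanged, and whether it selects or skips the test depends on the surrounding code):

```cpp
// Platforms where an HTP-capable Qualcomm device is plausible: Android, or Arm64 builds
// (__aarch64__ for GCC/Clang targets, _M_ARM64 for MSVC / Windows on Snapdragon).
#if defined(__ANDROID__) || defined(__aarch64__) || defined(_M_ARM64)
// ... shared memory allocator test body ...
#endif
```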

@@ -1098,6 +1099,38 @@ TEST_F(QnnHTPBackendTests, EPOffloadsGraphIOQuantDequant) {
}
}

TEST_F(QnnHTPBackendTests, UseHtpSharedMemoryAllocatorForInputs) {
Contributor

We should also have some code to demonstrate how this feature gets used from user code.
Here are some IOBinding examples for other EPs:

#if defined(USE_CUDA) || defined(USE_TENSORRT)
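
For reference, a minimal sketch of what such user code might look like with IOBinding. The memory-info name "QnnHtpShared" and the tensor names/shape below are placeholder assumptions, not values confirmed in this thread:

```cpp
#include <array>
#include <onnxruntime_cxx_api.h>

// Bind graph inputs/outputs to HTP shared memory so Run() can avoid CPU<->NPU copies.
void RunWithHtpSharedMemory(Ort::Session& session) {
  // The name must match what HtpSharedMemoryAllocator reports; "QnnHtpShared" is assumed here.
  Ort::MemoryInfo htp_mem_info("QnnHtpShared", OrtAllocatorType::OrtDeviceAllocator,
                               /*device_id*/ 0, OrtMemTypeDefault);
  Ort::Allocator htp_allocator(session, htp_mem_info);

  // Allocate the input tensor directly from the shared memory allocator.
  std::array<int64_t, 2> shape{1, 3};
  Ort::Value input = Ort::Value::CreateTensor<float>(htp_allocator, shape.data(), shape.size());
  // ... fill input via input.GetTensorMutableData<float>() ...

  Ort::IoBinding binding(session);
  binding.BindInput("input_name", input);           // placeholder tensor name
  binding.BindOutput("output_name", htp_mem_info);  // let ORT allocate the output in shared memory
  session.Run(Ort::RunOptions{}, binding);
}
```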

@@ -6,6 +6,8 @@
#include <string_view>

#include "core/common/hash_combine.h"
#include "core/framework/ortdevice.h"
#include "core/session/onnxruntime_c_api.h" // for OrtMemType, OrtAllocatorType
Contributor

nit: it's a weird dependency for something in framework to rely on something in session, but I'm not sure there's a good way to avoid it.


struct AllocationRecord {
SharedMemoryInfo shared_memory_info;
InlinedVector<AllocationCleanUpFn, 1> clean_up_fns;
Contributor

Do we expect more than one cleanup func?

Comment on lines +34 to +35
marker.fill('\0');
allocator_ptr = nullptr;
Contributor

Should we limit doing the fill to a debug build? I'm not sure how many allocations QNN makes or whether there's any meaningful perf cost.
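
One possible shape for that, if the scrub is kept only for debug builds (a sketch, not a measured recommendation):

```cpp
#if !defined(NDEBUG)
    // Only pay for the scrub in debug builds; release builds skip it.
    marker.fill('\0');
#endif
    allocator_ptr = nullptr;
```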


namespace {

struct AllocationHeader {
Contributor

Would be great to add a comment describing the overall setup and how it uses this header.
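
Something along these lines could serve as that comment (inferred from the surrounding code, so the details should be checked against the actual layout):

```cpp
// Layout of each rpcmem-backed shared memory block:
//   [ AllocationHeader | padding up to the allocation alignment | bytes returned to the caller ]
// The header stores a marker (to sanity-check that a pointer passed to Free() really came
// from this allocator) and a pointer back to the owning allocator, so the allocator can
// recover its bookkeeping for the block (e.g. the AllocationRecord with shared memory info
// and clean-up callbacks) from just the user-visible address.
```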

const size_t allocation_offset = AllocationOffsetFromStartOfHeader();
const size_t shared_memory_block_size_in_bytes = allocation_offset + requested_size;

// rpcmem_alloc() has an int size parameter. make sure we don't overflow.
Contributor

Can we use SafeInt?
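
A sketch of that suggestion, assuming core/common/safeint.h is available in this file:

```cpp
#include "core/common/safeint.h"

// SafeInt performs the range check: it reports an error via ORT's SafeInt error policy
// instead of overflowing if the total size doesn't fit in rpcmem_alloc()'s int parameter.
const int rpcmem_size_in_bytes = SafeInt<int>(allocation_offset) + requested_size;
```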

htp_arch,
soc_model,
enable_htp_weight_sharing);
static const std::string QNN_HTP_SHARED_MEMORY_ALLOCATOR_ENABLED = "enable_htp_shared_memory_allocator";
Contributor

Should this be more user visible?

Comment on lines +67 to +68
SharedContext(const SharedContext&) = delete;
SharedContext& operator=(const SharedContext&) = delete;
Contributor

ORT_DISALLOW_COPY_ASSIGNMENT_AND_MOVE?
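
That is, something like the sketch below, using the macro from core/common/common.h (note it also deletes the move operations, which may or may not be wanted here):

```cpp
class SharedContext {
 public:
  ORT_DISALLOW_COPY_ASSIGNMENT_AND_MOVE(SharedContext);
  // ...
};
```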

Labels
ep:QNN (issues related to QNN execution provider)
4 participants