
feat: Add TensorRT support for GNNs #4016

Status: Open · wants to merge 7 commits into base: main
Conversation

benjaminhuth (Member) commented on Jan 9, 2025

This currently cannot be compiled in the CI.

--- END COMMIT MESSAGE ---


Summary by CodeRabbit

Release Notes

  • New Features

    • Added TensorRT support for edge classification in the ExaTrkX plugin.
    • Introduced a new TensorRTEdgeClassifier for GPU-accelerated inference.
  • Infrastructure

    • Updated build configuration to support TensorRT integration.
    • Added a new build job for TensorRT-enabled components.
  • Technical Improvements

    • Enhanced plugin with GPU-accelerated inference capabilities.
    • Integrated TensorRT runtime and execution context for efficient processing.

coderabbitai (bot) commented on Jan 9, 2025

Warning

Rate limit exceeded

@benjaminhuth has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 8 minutes and 38 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.


📥 Commits

Reviewing files that changed from the base of the PR and between 09ce4b2 and 45ffd7b.

📒 Files selected for processing (1)
  • .gitlab-ci.yml (1 hunks)

Walkthrough

This PR adds a TensorRT-powered edge classification capability to the Acts library. The changes span multiple files, introducing a TensorRTEdgeClassifier for GPU-accelerated machine learning inference in track finding. They include CI configuration updates, Python bindings, CMake modifications, and the core implementation files supporting TensorRT-based edge classification.

Changes

  • .gitlab-ci.yml: Added build_gnn_tensorrt job with TensorRT Docker image and dependency configuration
  • Examples/Python/src/ExaTrkXTrackFinding.cpp: Introduced TensorRTEdgeClassifier Python bindings
  • Plugins/ExaTrkX/CMakeLists.txt: Added TensorRT package discovery and library linking
  • Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/TensorRTEdgeClassifier.hpp: Defined TensorRTEdgeClassifier header with configuration and interface
  • Plugins/ExaTrkX/src/TensorRTEdgeClassifier.cpp: Implemented TensorRTEdgeClassifier with TensorRT inference logic
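For orientation, below is a minimal sketch of the TensorRT inference lifecycle that such a classifier follows, pieced together from the API calls visible in the review comments further down (initLibNvInferPlugins, deserializeCudaEngine, createExecutionContext, enqueueV3). The file name, tensor name, and error handling are illustrative placeholders, not code from this PR:

#include <NvInfer.h>
#include <NvInferPlugin.h>

#include <fstream>
#include <iostream>
#include <memory>
#include <stdexcept>
#include <vector>

// Minimal logger implementation required by the TensorRT runtime.
class TrtLogger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) {
      std::cerr << "[TensorRT] " << msg << "\n";
    }
  }
};

int main() {
  TrtLogger logger;

  // Register the standard TensorRT plugin library (needed for custom ops).
  if (!initLibNvInferPlugins(&logger, "")) {
    throw std::runtime_error("Failed to initialize TensorRT plugins");
  }

  // Read a serialized engine from disk ("model.engine" is a placeholder).
  std::ifstream file("model.engine", std::ios::binary | std::ios::ate);
  if (!file) {
    throw std::runtime_error("Cannot open engine file");
  }
  const std::streamsize size = file.tellg();
  file.seekg(0, std::ios::beg);
  std::vector<char> blob(static_cast<std::size_t>(size));
  file.read(blob.data(), size);

  // Deserialize the engine and create an execution context.
  std::unique_ptr<nvinfer1::IRuntime> runtime{
      nvinfer1::createInferRuntime(logger)};
  std::unique_ptr<nvinfer1::ICudaEngine> engine{
      runtime->deserializeCudaEngine(blob.data(), blob.size())};
  if (!engine) {
    throw std::runtime_error("Engine deserialization failed");
  }
  std::unique_ptr<nvinfer1::IExecutionContext> context{
      engine->createExecutionContext()};

  // At inference time, device buffers are bound by tensor name and the
  // execution is enqueued on a CUDA stream:
  //   context->setTensorAddress("input", devicePtr);
  //   context->enqueueV3(stream);
  //   cudaStreamSynchronize(stream);
}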

Suggested Labels

automerge

Suggested Reviewers

  • paulgessinger

Poem

In circuits of silicon bright, 🖥️
TensorRT dances with neural might, 🤖
Edges classified with grace,
GPU's computational embrace,
Track finding's quantum delight! 🚀



github-actions (bot) added the labels Component - Examples (Affects the Examples module) and Component - Plugins (Affects one or more Plugins) on Jan 9, 2025
github-actions (bot) added this to the next milestone on Jan 9, 2025
sonarqubecloud (bot) commented on Jan 9, 2025

github-actions (bot) commented on Jan 9, 2025

📊 Physics performance monitoring for 09ce4b2

🟥 summary not found!

benjaminhuth marked this pull request as ready for review on January 15, 2025 at 13:53
coderabbitai (bot) left a comment
Actionable comments posted: 7

🧹 Nitpick comments (4)
Plugins/ExaTrkX/src/TensorRTEdgeClassifier.cpp (1)

98-100: Prefer ACTS logging over std::cout.

For consistency within the codebase, replace std::cout with ACTS logging macros.

Apply this diff to use the logging framework:

 ~TimePrinter() {
-  std::cout << name << ": " << milliseconds(t0, t1) << std::endl;
+  ACTS_INFO(name << ": " << milliseconds(t0, t1) << " ms");
 }
Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/TensorRTEdgeClassifier.hpp (2)

38-41: Consider marking the destructor override.

Since the base class has a virtual destructor, marking the derived-class destructor override is good practice.

Apply this diff for clarity:

-  ~TensorRTEdgeClassifier();
+  ~TensorRTEdgeClassifier() override;

49-58: Ensure member variables are initialized in declaration order.

Initialize member variables in the order they are declared to avoid compiler warnings (e.g. -Wreorder).

Ensure that m_cfg is initialized before m_trtLogger, as declared.
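As a minimal illustration of the rule (the member names mirror those mentioned above; the surrounding types are stand-ins, not the actual ACTS declarations):

#include <memory>

struct Config {};
struct Logger {};

class TensorRTEdgeClassifierSketch {
 public:
  // The initializer list below matches the declaration order. Listing
  // m_trtLogger before m_cfg there would trigger -Wreorder warnings,
  // because members are always initialized in declaration order
  // regardless of the initializer list order.
  explicit TensorRTEdgeClassifierSketch(Config cfg)
      : m_cfg(cfg), m_trtLogger(std::make_unique<Logger>()) {}

 private:
  Config m_cfg;                         // declared first, initialized first
  std::unique_ptr<Logger> m_trtLogger;  // declared second, initialized second
};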

Examples/Python/src/ExaTrkXTrackFinding.cpp (1)

110-128: Use a more specific logger name.

For clarity and consistency, use a distinct logger name for TensorRTEdgeClassifier.

Apply this diff to specify the logger name:

                       return std::make_shared<Alg>(
-                          c, getDefaultLogger("EdgeClassifier", lvl));
+                          c, getDefaultLogger("TensorRTEdgeClassifier", lvl));
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c0e65bc and f5a819a.

📒 Files selected for processing (5)
  • .gitlab-ci.yml (1 hunks)
  • Examples/Python/src/ExaTrkXTrackFinding.cpp (2 hunks)
  • Plugins/ExaTrkX/CMakeLists.txt (1 hunks)
  • Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/TensorRTEdgeClassifier.hpp (1 hunks)
  • Plugins/ExaTrkX/src/TensorRTEdgeClassifier.cpp (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: CI Bridge / build_gnn_tensorrt
  • GitHub Check: linux_physmon
  • GitHub Check: linux_examples_test
  • GitHub Check: missing_includes
  • GitHub Check: linux_ubuntu_extra (ubuntu2204_clang, 20)
  • GitHub Check: build_debug
🔇 Additional comments (3)
Examples/Python/src/ExaTrkXTrackFinding.cpp (1)

126-126: Verify the missing configuration member useEdgeFeatures.

The configuration is inconsistent with the other classifiers; include useEdgeFeatures if required.

Ensure that all necessary configuration options are included.

Plugins/ExaTrkX/CMakeLists.txt (2)

Line range hint 1-38: This CMake configuration is well structured.

It follows consistent patterns across the different backends, with clear organization.


26-38: Specify version constraints for TensorRT.

The TensorRT package lookup has no version constraint, which can lead to compatibility issues with incompatible releases.

Apply this change:

-    find_package(TensorRT REQUIRED)
+    find_package(TensorRT 8.6 REQUIRED)


void *outputMem{nullptr};
std::size_t outputSize = edgeIndex.size(1) * sizeof(float);
cudaMalloc(&outputMem, outputSize);
⚠️ Potential issue

Check the return value of cudaMalloc.

Ensure that memory allocation on the GPU is successful before proceeding.

Apply this diff to check cudaMalloc:

-cudaMalloc(&outputMem, outputSize);
+cudaError_t err = cudaMalloc(&outputMem, outputSize);
+if (err != cudaSuccess) {
+  ACTS_ERROR("cudaMalloc failed: " << cudaGetErrorString(err));
+  // Handle the error appropriately.
+}
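More generally, such checks are often factored into a helper macro so that every CUDA runtime call is verified. A generic sketch, not an existing ACTS helper (the macro name CUDA_CHECK is illustrative):

#include <cuda_runtime.h>

#include <sstream>
#include <stdexcept>

// Evaluate a CUDA runtime call and throw with a descriptive message on
// failure.
#define CUDA_CHECK(call)                                          \
  do {                                                            \
    cudaError_t err = (call);                                     \
    if (err != cudaSuccess) {                                     \
      std::ostringstream oss;                                     \
      oss << #call << " failed: " << cudaGetErrorString(err);     \
      throw std::runtime_error(oss.str());                        \
    }                                                             \
  } while (0)

// Usage with the allocation from the snippet above:
//   void* outputMem = nullptr;
//   CUDA_CHECK(cudaMalloc(&outputMem, outputSize));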

Comment on lines +67 to +68
auto status = initLibNvInferPlugins(m_trtLogger.get(), "");
assert(status);
⚠️ Potential issue

Implement proper error handling for plugin initialization.

Relying on assert(status) is insufficient, especially in release builds where asserts are compiled out. Check the return status and handle errors gracefully.

Apply this diff to handle the error:

 auto status = initLibNvInferPlugins(m_trtLogger.get(), "");
-assert(status);
+if (!status) {
+  ACTS_ERROR("Failed to initialize TensorRT plugins.");
+  // Handle the error appropriately, perhaps throw an exception or return an error code.
+}


Comment on lines +81 to +83
m_engine.reset(m_runtime->deserializeCudaEngine(engineData.data(), fsize));

m_context.reset(m_engine->createExecutionContext());
⚠️ Potential issue

Handle deserialization errors.

Check if m_engine is successfully created after deserialization. Handle errors to avoid null pointer dereferences.

Apply this diff to verify the engine:

 m_engine.reset(m_runtime->deserializeCudaEngine(engineData.data(), fsize));
+if (!m_engine) {
+  ACTS_ERROR("Failed to deserialize CUDA engine.");
+  // Handle the error appropriately.
+}


Comment on lines +155 to +156
auto status = m_context->enqueueV3(stream);
cudaStreamSynchronize(stream);
⚠️ Potential issue

Verify the inference execution status.

Check the return status of enqueueV3 to confirm that the inference executed successfully.

Apply this diff to handle inference errors:

 auto status = m_context->enqueueV3(stream);
+if (!status) {
+  ACTS_ERROR("Inference execution failed.");
+  // Handle the error appropriately.
+}
 cudaStreamSynchronize(stream);


Comment on lines +76 to +78
std::ifstream engineFile(m_cfg.modelPath);
engineFile.read(engineData.data(), fsize);

⚠️ Potential issue

Check for file opening and reading errors.

Ensure the model file opens successfully before reading. Handle any file I/O errors to prevent unexpected behavior.

Apply this diff to add error handling:

 std::ifstream engineFile(m_cfg.modelPath);
+if (!engineFile.is_open()) {
+  ACTS_ERROR("Failed to open model file: " << m_cfg.modelPath);
+  // Handle the error appropriately.
+}
 engineFile.read(engineData.data(), fsize);
+if (!engineFile) {
+  ACTS_ERROR("Failed to read model data from: " << m_cfg.modelPath);
+  // Handle the error appropriately.
+}
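These checks can also be bundled into a small helper that loads the whole engine file. A sketch that assumes only the m_cfg.modelPath name from the diff; the free-function form is illustrative, since the real code keeps this logic in the constructor:

#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Read a serialized TensorRT engine into memory with explicit error checks.
std::vector<char> readEngineFile(const std::string& modelPath) {
  std::ifstream file(modelPath, std::ios::binary | std::ios::ate);
  if (!file.is_open()) {
    throw std::runtime_error("Failed to open model file: " + modelPath);
  }
  const std::streamsize fsize = file.tellg();
  file.seekg(0, std::ios::beg);
  std::vector<char> data(static_cast<std::size_t>(fsize));
  if (!file.read(data.data(), fsize)) {
    throw std::runtime_error("Failed to read model data from: " + modelPath);
  }
  return data;
}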


const ExecutionContext &execContext = {}) override;

Config config() const { return m_cfg; }
torch::Device device() const override { return torch::kCUDA; };
⚠️ Potential issue

Return a proper torch::Device object.

The current implementation returns a bare device type rather than an explicit torch::Device instance. Make the construction explicit for clarity.

Apply this diff to return the correct device:

-  torch::Device device() const override { return torch::kCUDA; };
+  torch::Device device() const override { return torch::Device(torch::kCUDA); }
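For reference, torch::Device can be constructed explicitly from a device type and an optional device index; torch::kCUDA also converts implicitly to torch::Device, which is why the original line compiles. A small sketch:

#include <torch/torch.h>

int main() {
  torch::Device d0(torch::kCUDA);     // default device index
  torch::Device d1(torch::kCUDA, 1);  // explicitly select the second GPU
  // Implicit conversion from the bare device type also works:
  torch::Device d2 = torch::kCUDA;
}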


.gitlab-ci.yml Outdated
Comment on lines 189 to 222
build_gnn_tensorrt:
  stage: build
  image: nvcr.io/nvidia/tensorrt:24.12-py3
  variables:
    DEPENDENCY_URL: https://acts.web.cern.ch/ACTS/ci/ubuntu-24.04/deps.$DEPENDENCY_TAG.tar.zst

  cache:
    key: ccache-${CI_JOB_NAME}-${CI_COMMIT_REF_SLUG}-${CCACHE_KEY_SUFFIX}
    fallback_keys:
      - ccache-${CI_JOB_NAME}-${CI_DEFAULT_BRANCH}-${CCACHE_KEY_SUFFIX}
    when: always
    paths:
      - ${CCACHE_DIR}

  tags:
    - docker-gpu-nvidia

  script:
    - apt-get update -y
    - git clone $CLONE_URL src
    - cd src
    - git checkout $HEAD_SHA
    - source CI/dependencies.sh
    - cd ..
    - mkdir build
    - >
      cmake -B build -S src
      -DACTS_BUILD_PLUGIN_EXATRKX=ON
      -DACTS_EXATRKX_ENABLE_TENSORRT=ON
      -DPython_EXECUTABLE=$(which python3)
      -DCMAKE_CUDA_ARCHITECTURES="75;86"

⚠️ Potential issue

This CI job configuration is incomplete. Several improvements are needed:

  1. The build command after the cmake configuration step is missing.
  2. A testing stage for the TensorRT functionality should be defined.
  3. Artifacts for downstream jobs should be configured.
  4. The CUDA architectures should be aligned with the other ExaTrkX jobs.

Apply these changes:

 build_gnn_tensorrt:
   stage: build
   image: nvcr.io/nvidia/tensorrt:24.12-py3
   variables:
     DEPENDENCY_URL: https://acts.web.cern.ch/ACTS/ci/ubuntu-24.04/deps.$DEPENDENCY_TAG.tar.zst
+    TORCH_CUDA_ARCH_LIST: "8.0 8.6 8.9 9.0"

   cache:
     key: ccache-${CI_JOB_NAME}-${CI_COMMIT_REF_SLUG}-${CCACHE_KEY_SUFFIX}
     fallback_keys:
       - ccache-${CI_JOB_NAME}-${CI_DEFAULT_BRANCH}-${CCACHE_KEY_SUFFIX}
     when: always
     paths:
       - ${CCACHE_DIR}

+  artifacts:
+    paths:
+      - build/
+    exclude:
+      - build/**/*.o
+    expire_in: 6 hours

   tags:
     - docker-gpu-nvidia

   script:
     - apt-get update -y
     - git clone $CLONE_URL src
     - cd src
     - git checkout $HEAD_SHA
     - source CI/dependencies.sh
     - cd ..
     - mkdir build
     - >
       cmake -B build -S src
       -DACTS_BUILD_PLUGIN_EXATRKX=ON
       -DACTS_EXATRKX_ENABLE_TENSORRT=ON
       -DPython_EXECUTABLE=$(which python3)
       -DCMAKE_CUDA_ARCHITECTURES="75;86"
+    
+    - ccache -z
+    - cmake --build build -- -j6
+    - ccache -s

+test_gnn_tensorrt:
+  stage: test
+  needs:
+    - build_gnn_tensorrt
+  image: nvcr.io/nvidia/tensorrt:24.12-py3
+  tags:
+    - docker-gpu-nvidia
+  script:
+    - apt-get update -y
+    - git clone $CLONE_URL src
+    - cd src
+    - git checkout $HEAD_SHA
+    - source CI/dependencies.sh
+    - cd ..
+    - ctest --test-dir build -R TensorRT
Labels
Component - Examples (Affects the Examples module) · Component - Plugins (Affects one or more Plugins)
Projects
None yet

1 participant