feat: Add CUDA connected components & track building #4015
base: main
Conversation
Walkthrough
Enhanced, the ExaTrkX plugin has been. CUDA-based track building capabilities introduced, they are. Multiple files modified; new CUDA utilities, track building classes, and connected components algorithms added. Support for GPU-accelerated track finding, the modifications extend, with updates to build configurations, Python bindings, and unit testing infrastructure.
Changes
Possibly related PRs
Suggested Labels
Suggested Reviewers
Poem
📜 Recent review details
Configuration used: CodeRabbit UI
📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (2)
⏰ Context from checks skipped due to timeout of 90000ms (6)
Looks like a great start, some comments though.
Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/ConnectedComponents.cuh (review thread resolved, outdated)
template <typename T>
__device__ void swap(T &a, T &b) {
  T tmp = a;
  a = b;
  b = tmp;
}
I don't know what the type of TEdge is, but if it is big you might want to implement this using moves.
Actually, since this is a __device__ function, is there any realistic scenario where an object used on device has a move constructor? I naively would not expect this, but I don't have a lot of experience here...
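For concreteness, a minimal sketch of the move-based variant being discussed, assuming TEdge is movable and libcu++ is available (cuda::std::move from <cuda/std/utility> is callable in __device__ code):

#include <cuda/std/utility>

template <typename T>
__device__ void swap(T &a, T &b) {
  T tmp = cuda::std::move(a);  // moves instead of copies, if T supports it
  a = cuda::std::move(b);
  b = cuda::std::move(tmp);
}

For trivially copyable types the compiler generates the same code either way, so this only matters if TEdge owns resources.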
Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/ConnectedComponents.cuh (review thread resolved, outdated)
Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/CudaUtils.cuh (review thread resolved, outdated)
std::vector<std::vector<int>> CudaTrackBuilding::operator()(
    std::any /*nodes*/, std::any edges, std::any weights,
    std::vector<int>& spacepointIDs, const ExecutionContext& execContext) {
This vector can be const.
I think there is some dumb reason that the ONNX runtime accepts only mutable pointers or so... Probably in that case it would be better to just copy the data, but I wouldn't touch it in this PR.
Ah but this is graph building... so indeed it could be const
Actionable comments posted: 6
🧹 Nitpick comments (6)
Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/ConnectedComponents.cuh (1)
139-144: Clarify comments, you must.
Hard to understand, these comments are. Elaborate further, to aid future readers and maintainers.
Tests/UnitTests/Plugins/ExaTrkX/ConnectedComponentCudaTests.cu (1)
257-259: Use the test framework's logging, prefer you should.
Instead of 'std::cout', the test framework's logging facilities utilize. Cleaner and more consistent, it will be.
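A sketch of what that would look like, assuming the file uses Boost.Test like the other Acts unit tests (nLabels is a placeholder name):

#include <boost/test/unit_test.hpp>

BOOST_AUTO_TEST_CASE(LoggingSketch) {
  int nLabels = 3;  // placeholder value, for illustration only
  // instead of: std::cout << "number of labels: " << nLabels << std::endl;
  BOOST_TEST_MESSAGE("number of labels: " << nLabels);
}

BOOST_TEST_MESSAGE output only appears when the log level allows it (e.g. --log_level=message), which keeps default test output clean.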
Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/CudaTrackBuilding.hpp (1)
31-34: Document the types within std::any, crucial it is.
For the nodes, edges, and edge_weights parameters, document the expected types within std::any you must. Help future Jedi understand the interface, this documentation will.
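A sketch of such documentation; the concrete wrapped types are an assumption inferred from the Torch-based stages, not verified here:

/// @param nodes   std::any wrapping the node feature tensor (unused here)
/// @param edges   std::any wrapping an integer edge-index tensor of shape [2, nEdges]
/// @param weights std::any wrapping a float tensor of per-edge scores
std::vector<std::vector<int>> operator()(std::any nodes, std::any edges,
                                         std::any weights,
                                         std::vector<int> &spacepointIDs,
                                         const ExecutionContext &execContext);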
Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/CudaUtils.cuh (3)
17-25: Synchronize wisely, you must.
Unnecessary synchronization in cudaAssert, performance impact it may have. Consider making synchronization optional through a parameter, wisdom this would be.

-inline void cudaAssert(cudaError_t code, const char *file, int line) {
+inline void cudaAssert(cudaError_t code, const char *file, int line,
+                       bool sync = false) {
   if (code != cudaSuccess) {
     std::stringstream ss;
     ss << "CUDA error: " << cudaGetErrorString(code) << ", " << file << ":"
        << line;
     throw std::runtime_error(ss.str());
   }
-  cudaDeviceSynchronize();
+  if (sync) {
+    cudaDeviceSynchronize();
+  }
 }
27-42: Parallel printing, implement you should.
Sequential printing in a CUDA kernel, efficient it is not. Consider implementing parallel reduction for better performance, hmmmm.
51-59: Excessive synchronization in CUDA_PRINTV, I sense.
Two synchronizations you have: one before kernel launch, one after. Only after kernel launch, synchronize you must.

 #define CUDA_PRINTV(ptr, size)            \
   do {                                    \
     std::cout << #ptr << ": ";            \
-    CUDA_CHECK(cudaDeviceSynchronize());  \
     cudaPrintArray<<<1, 1>>>(ptr, size);  \
     CUDA_CHECK(cudaGetLastError());       \
     CUDA_CHECK(cudaDeviceSynchronize());  \
     std::cout << std::endl;               \
   } while (0)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- .gitlab-ci.yml (1 hunks)
- Examples/Python/src/ExaTrkXTrackFinding.cpp (2 hunks)
- Plugins/ExaTrkX/CMakeLists.txt (1 hunks)
- Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/CudaTrackBuilding.hpp (1 hunks)
- Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/ConnectedComponents.cuh (1 hunks)
- Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/CudaUtils.cuh (1 hunks)
- Plugins/ExaTrkX/src/CudaTrackBuilding.cu (1 hunks)
- Tests/UnitTests/Plugins/ExaTrkX/CMakeLists.txt (1 hunks)
- Tests/UnitTests/Plugins/ExaTrkX/ConnectedComponentCudaTests.cu (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (21)
- GitHub Check: merge-sentinel
- GitHub Check: CI Bridge / build_linux_ubuntu
- GitHub Check: CI Bridge / lcg_106a: [alma9, clang16]
- GitHub Check: CI Bridge / lcg_106a: [alma9, gcc13]
- GitHub Check: CI Bridge / lcg_105: [alma9, clang16]
- GitHub Check: CI Bridge / lcg_106a: [alma9, gcc14]
- GitHub Check: CI Bridge / lcg_105: [alma9, gcc13]
- GitHub Check: CI Bridge / linux_ubuntu_2204_clang
- GitHub Check: CI Bridge / build_exatrkx
- GitHub Check: CI Bridge / linux_ubuntu_2204
- GitHub Check: CI Bridge / clang_tidy
- GitHub Check: CI Bridge / build_exatrkx_cpu
- GitHub Check: CI Bridge / build_exatrkx_cpu
- GitHub Check: CI Bridge / clang_tidy
- GitHub Check: unused_files
- GitHub Check: linux_ubuntu_extra (ubuntu2204_clang, 20)
- GitHub Check: missing_includes
- GitHub Check: linux_ubuntu_extra (ubuntu2204, 20)
- GitHub Check: macos
- GitHub Check: build_debug
- GitHub Check: linux_ubuntu
🔇 Additional comments (10)
Plugins/ExaTrkX/src/CudaTrackBuilding.cu (2)
22-22: Pass 'spacepointIDs' as const reference, you should.
By making 'spacepointIDs' a const reference, unintended modifications you prevent, and performance you may improve.
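Concretely, the const-reference version of the operator() signature quoted earlier would read as follows (a sketch; the matching declaration in CudaTrackBuilding.hpp needs the same change):

std::vector<std::vector<int>> CudaTrackBuilding::operator()(
    std::any /*nodes*/, std::any edges, std::any weights,
    const std::vector<int> &spacepointIDs, const ExecutionContext &execContext);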
58-58: Unnecessary 'cudaGetLastError()' call, this is.
After 'cudaStreamSynchronize', calling 'CUDA_CHECK(cudaGetLastError());' may not be needed.
Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/ConnectedComponents.cuh (4)
39-40: Use a do-while loop here, you could.
Simplify the loop structure, a do-while loop would. Clearer and more intuitive, it might be.
131-131: Rename macro 'CUDA_CHECK', we should.
Conflicts with other code, 'CUDA_CHECK' may cause. 'ACTS_CUDA_CHECK' instead, consider using to namespace properly.
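The rename would be mechanical; a sketch, reusing the cudaAssert helper shown elsewhere in this review:

#define ACTS_CUDA_CHECK(ans)               \
  do {                                     \
    cudaAssert((ans), __FILE__, __LINE__); \
  } while (0)

The ACTS_ prefix matches how the project prefixes its other macros and avoids clashes with third-party headers that define their own CUDA_CHECK.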
136-137: Error checking, forget not to add.
After the kernel launch at line 137, 'CUDA_CHECK(cudaGetLastError());' include, you should, to catch errors promptly.
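The requested pattern, sketched with placeholder kernel and argument names:

someKernel<<<numBlocks, blockSize>>>(devLabels, nNodes);  // placeholders
CUDA_CHECK(cudaGetLastError());       // catches launch-configuration errors
CUDA_CHECK(cudaDeviceSynchronize());  // surfaces errors from the kernel itself

cudaGetLastError() reports launch failures immediately, while errors raised during kernel execution only appear at the next synchronization point.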
165-165: Unnecessary error check, this might be.
Calling 'CUDA_CHECK(cudaGetLastError());' here, perhaps redundant it is. Evaluate if needed, you should.
Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/CudaTrackBuilding.hpp (1)
37-42: Well structured, the private section is. Hmmmm.
Proper encapsulation and const correctness, maintained they are. The Force, strong with this code it is.
Examples/Python/src/ExaTrkXTrackFinding.cpp (1)
115-131: ⚠️ Potential issue: Missing endif directive, found I have.
Past wisdom speaks true - the endif directive for ACTS_EXATRKX_WITH_CUDA, missing it is. Add it you must, or chaos in compilation there will be.

 }
+#endif
 #endif

Likely invalid or redundant comment.
Tests/UnitTests/Plugins/ExaTrkX/CMakeLists.txt (1)
7-7: Well placed, the new test is.
Following the established pattern, this addition does. The Force of testing, stronger it makes.
Plugins/ExaTrkX/CMakeLists.txt (1)
5-5: Wise addition of CUDA source, I sense.
Correctly placed within the CUDA-enabled block, this source file is. Private visibility, appropriate it maintains. Hmmmm.
// TODO not sure why there is an issue that is not detected in the unit tests
numberLabels += 1;
Investigate the TODO comment, you must.
An issue not detected in unit tests, the increment of 'numberLabels' suggests. Rather than adjusting manually, find and fix the root cause, we should.
Assist you, can I. Help investigate this issue or open a new GitHub issue, would you like?
cudaMalloc(&cudaSrc, src.size() * sizeof(int));
cudaMalloc(&cudaTgt, tgt.size() * sizeof(int));
cudaMemcpy(cudaSrc, src.data(), src.size() * sizeof(int),
           cudaMemcpyHostToDevice);
Check return values of CUDA functions, you should.
For 'cudaMalloc' and 'cudaMemcpy', check return values to ensure success, you must. Error handling, improve this will.
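Applied to the snippet above, using the CUDA_CHECK macro from CudaUtils.cuh that this review discusses:

CUDA_CHECK(cudaMalloc(&cudaSrc, src.size() * sizeof(int)));
CUDA_CHECK(cudaMalloc(&cudaTgt, tgt.size() * sizeof(int)));
CUDA_CHECK(cudaMemcpy(cudaSrc, src.data(), src.size() * sizeof(int),
                      cudaMemcpyHostToDevice));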
CudaTrackBuilding(const Config &cfg, std::unique_ptr<const Logger> logger)
    : m_cfg(cfg),
      m_logger(std::move(logger)),
      m_device(torch::Device(torch::kCUDA)) {}
💡 Codebase verification
Hmmmm, wisdom in error handling I sense, but better approach we must take.
Follow the ways of TorchMetricLearning and TorchEdgeClassifier, you should. Device ID in configuration and ACTS_WARNING they use, yes.
- m_device(torch::Device(torch::kCUDA)) {}
+ m_device([&]() {
+   if (cfg.deviceID >= 0) {
+     return torch::Device(torch::kCUDA, cfg.deviceID);
+   }
+   ACTS_WARNING("GPU device " << cfg.deviceID << " not available");
+   throw std::runtime_error("Invalid CUDA device requested");
+ }()) {}
- Add deviceID to Config struct, you must
- Consistent with codebase patterns, this approach is
🔗 Analysis chain
Handle CUDA device initialization gracefully, young padawan must.
In constructor, catch CUDA errors you should, when device initialization fails it might. Use CUDA_CHECK macro from CudaUtils.cuh, wisdom this would be.
- m_device(torch::Device(torch::kCUDA)) {}
+ m_device([&]() {
+   int deviceCount = 0;
+   CUDA_CHECK(cudaGetDeviceCount(&deviceCount));
+   if (deviceCount == 0) {
+     throw std::runtime_error("No CUDA devices available");
+   }
+   return torch::Device(torch::kCUDA);
+ }()) {}
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:
#!/bin/bash
# Search for CudaUtils.cuh and its content
fd -e cuh "CudaUtils" --exec cat {}
# Look for CUDA device initialization patterns
rg -A 2 "torch::Device.*kCUDA"
# Search for CUDA_CHECK usage
rg "CUDA_CHECK"
# Find error handling patterns with CUDA
rg "throw.*CUDA|CUDA.*error"
Length of output: 5533
Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/CudaUtils.cuh (review thread resolved, outdated)
Actionable comments posted: 0
🧹 Nitpick comments (3)
Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/CudaUtils.cuh (1)
17-25: Synchronization after each error check, unnecessary it might be.
Performance impact, cudaDeviceSynchronize() has. Move it to where truly needed, we should. For error checking alone, cudaGetLastError() sufficient it is.

 inline void cudaAssert(cudaError_t code, const char *file, int line) {
   if (code != cudaSuccess) {
     std::stringstream ss;
     ss << "CUDA error: " << cudaGetErrorString(code) << ", " << file << ":"
        << line;
     throw std::runtime_error(ss.str());
   }
-  cudaDeviceSynchronize();
 }
Plugins/ExaTrkX/src/CudaTrackBuilding.cu (1)
20-22: Type safety compromised by std::any, it is.
Consider strongly typed parameters or a template approach, we should. Performance overhead and runtime type checking, std::any brings.
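A sketch of the strongly typed alternative hinted at here; the signature is illustrative, and the torch::Tensor types are an assumption based on the Torch backend:

std::vector<std::vector<int>> operator()(const torch::Tensor &nodes,
                                         const torch::Tensor &edges,
                                         const torch::Tensor &weights,
                                         std::vector<int> &spacepointIDs,
                                         const ExecutionContext &execContext);

The trade-off: std::any keeps the stage interface backend-agnostic, while concrete types move errors from run time to compile time.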
Examples/Python/src/ExaTrkXTrackFinding.cpp (1)
116-133: Empty Config struct, future parameters it may need.
Consider configuration options for the following (a sketch follows the list):
- CUDA device selection
- Memory allocation limits
- Stream configuration
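A sketch of what such a Config could hold; every field here is a suggestion derived from the bullets above, not existing code:

struct Config {
  int deviceID = 0;                // CUDA device selection
  std::size_t maxMemoryBytes = 0;  // memory allocation limit, 0 = unlimited
  bool useDedicatedStream = true;  // stream configuration
};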
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- .gitlab-ci.yml (1 hunks)
- Examples/Python/src/ExaTrkXTrackFinding.cpp (2 hunks)
- Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/ConnectedComponents.cuh (1 hunks)
- Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/CudaUtils.cuh (1 hunks)
- Plugins/ExaTrkX/src/CudaTrackBuilding.cu (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/ConnectedComponents.cuh
🚧 Files skipped from review as they are similar to previous changes (1)
- .gitlab-ci.yml
⏰ Context from checks skipped due to timeout of 90000ms (9)
- GitHub Check: merge-sentinel
- GitHub Check: build_debug
- GitHub Check: unused_files
- GitHub Check: linux_ubuntu_extra (ubuntu2204_clang, 20)
- GitHub Check: macos
- GitHub Check: linux_ubuntu_extra (ubuntu2204, 20)
- GitHub Check: linux_ubuntu
- GitHub Check: docs
- GitHub Check: missing_includes
🔇 Additional comments (4)
Plugins/ExaTrkX/include/Acts/Plugins/ExaTrkX/detail/CudaUtils.cuh (1)
29-32: Wise implementation of the macro, this is.
Safe macro practices followed, they are. Do-while(0) wrapper used correctly, it is.
Plugins/ExaTrkX/src/CudaTrackBuilding.cu (3)
32-35: Defensive programming, strong with this one is.
Early return on empty edges, wise decision it is. Clear warning message provided, it has.
42-44: Async allocation with proper error checking, good it is.
Consistent use of ACTS_CUDA_CHECK macro, commendable it is. Memory management pattern established well, it has been.
49-50: ⚠️ Potential issue: Root cause of label adjustment, investigate we must.
Manual increment of numberLabels, a workaround it seems. Find the true cause in connectedComponentsCuda, we should. Unit tests strengthen, we must.
Quality Gate passed
Adds an implementation of graph connected components in CUDA, with unit tests.
Adds a track-building module that uses that implementation.
Depends on #4014, #4012
--- END COMMIT MESSAGE ---
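For readers new to the technique, a tiny host-side union-find illustrating the connected-components step that this PR implements on the GPU (this sketch is not the PR's CUDA code):

#include <cstddef>
#include <numeric>
#include <vector>

// Find the root of x with path halving, flattening the tree as we go.
int findRoot(std::vector<int> &parent, int x) {
  while (parent[x] != x) {
    x = parent[x] = parent[parent[x]];
  }
  return x;
}

// Label each of n nodes with the root of its component, given edges
// (src[i], tgt[i]); nodes sharing a label form one track candidate.
std::vector<int> connectedComponents(int n, const std::vector<int> &src,
                                     const std::vector<int> &tgt) {
  std::vector<int> parent(n);
  std::iota(parent.begin(), parent.end(), 0);  // every node its own root
  for (std::size_t i = 0; i < src.size(); ++i) {
    parent[findRoot(parent, src[i])] = findRoot(parent, tgt[i]);
  }
  std::vector<int> label(n);
  for (int v = 0; v < n; ++v) {
    label[v] = findRoot(parent, v);  // fully resolve labels
  }
  return label;
}

For example, with n = 5 and edges (0,1), (1,2), (3,4), nodes {0,1,2} share one label and {3,4} another, i.e. two track candidates.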
Summary by CodeRabbit
New Features
Tests
Infrastructure