fix rmm managed memory resource initialization to resolve some intermittent memory issues #787
It looks like rmm.reinitialize destroys the memory resources assigned to all devices other than the current device. Initializing the way this PR does, and doing it only once per process, avoids that. I think the intermittent CUDA errors were due to hit-or-miss uses of destroyed resources in the C++ RMM operations, so this should address the issue.
Also, in umap, the current device was set before the memory resource was initialized; it needs to be the other way around. A hedged sketch of the intended setup follows below.
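As a rough illustration of the once-per-process initialization described above (not the PR's actual code), the sketch below assigns a managed memory resource to every visible GPU via `rmm.mr.set_per_device_resource` instead of calling `rmm.reinitialize`, and only selects the current device afterwards. The `initialize_rmm_once` helper and `_rmm_initialized` flag are hypothetical names used for the example.

```python
# Illustrative sketch only; helper and flag names are hypothetical.
import cupy
import rmm

_rmm_initialized = False  # module-level guard so setup runs once per process


def initialize_rmm_once():
    """Assign a managed memory resource to every device exactly once.

    Using rmm.mr.set_per_device_resource avoids rmm.reinitialize, which
    (per the description above) tears down the resources assigned to all
    devices other than the current one.
    """
    global _rmm_initialized
    if _rmm_initialized:
        return
    for device_id in range(cupy.cuda.runtime.getDeviceCount()):
        rmm.mr.set_per_device_resource(device_id, rmm.mr.ManagedMemoryResource())
    _rmm_initialized = True


def run_on_device(device_id):
    # Initialize the memory resources before selecting the current device,
    # matching the ordering fix described for umap above.
    initialize_rmm_once()
    cupy.cuda.Device(device_id).use()
    # ... RMM-backed work on this device ...
```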
Also, pin numpy < 1 in the README.