Fix hard-coded double precision in test_functions to default dtype #2597
Conversation
Hi @AVHopp! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged. If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
I am aware that I still need to sign the CLA and will do so soon.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff           @@
##             main    #2597   +/-   ##
=======================================
  Coverage   99.98%   99.98%
=======================================
  Files         196      196
  Lines       17367    17367
=======================================
  Hits        17365    17365
  Misses          2        2

☔ View full report in Codecov by Sentry.
I have signed the CLA, but it seems like the test was not retriggered. Does this just take some more time, or did something go wrong with my signing?
No need to re-test. I can now import the PR and land it even if GitHub says that check failed.
@Balandat has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
@Balandat @esantorella Pinging both of you since we already started discussions on #2596.

TL;DR: Changed to use `torch.get_default_dtype()` instead of the hard-coded double precision. As proposed there, I have re-implemented this now so that the dtype is handled explicitly.

For better reviewing, each commit just touches a single file. However, I'd be more than happy to rebase/squash the commits at a later point; since I could not find any guidelines regarding whether or not this is done here, I thought I'd better just have a clean but "true" history for now 😄

I tested the code again locally using `pytest -ra`. Looking forward to your feedback/comments :)
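As a rough sketch of what such explicit dtype handling could look like (a hypothetical simplification for illustration only; `ToyTestProblem` and its bounds are made up and do not reflect the actual code in `test_functions/base.py`):

```python
import torch
from torch import nn


class ToyTestProblem(nn.Module):
    """Minimal stand-in for a synthetic test function (not botorch code)."""

    _bounds = [(-5.0, 10.0), (0.0, 15.0)]

    def __init__(self, dtype=None):
        super().__init__()
        # Fall back to the global default dtype instead of hard-coding double.
        dtype = dtype if dtype is not None else torch.get_default_dtype()
        self.register_buffer(
            "bounds",
            torch.tensor(self._bounds, dtype=dtype).transpose(-1, -2),
        )


torch.set_default_dtype(torch.float32)
print(ToyTestProblem().bounds.dtype)  # torch.float32
```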
I think it's fine to have these as one commit. But it doesn't really matter for the branch history here; the way we merge things into the main branch in botorch is that our GitHub bot will auto-squash every change from this PR into a single commit.
@Balandat has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Thanks!
Summary:

## Motivation

This PR removes the `as_tensor` argument of the `set_tensors_from_ndarray_1d` function in `numpy_utils.py`. The reason for this change is that the previous implementation
- messed up the float precision: even when trying to use `float32` precision, it always transformed floats into `float64` precision
- only used the default function anyway

This change was discussed previously with Balandat and esantorella in #2596.

### Have you read the [Contributing Guidelines on pull requests](https://github.com/pytorch/botorch/blob/main/CONTRIBUTING.md#pull-requests)?

Yes, I have read it.

Pull Request resolved: #2615

Test Plan: I checked the format using `ufmt format .` and verified that all tests are still running using `pytest -ra`.

## Related PRs

This PR is related to #2597 and #2596, fixing a similar issue (`float32` being broken by some internal calculations).

Reviewed By: Balandat

Differential Revision: D65537491

Pulled By: saitcakmak

fbshipit-source-id: f178a761691ce092a1b06d020158ea367fc6c99e
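To illustrate the float-precision issue mentioned in the summary above, here is a small self-contained sketch (not botorch code) showing how converting NumPy float64 data with a plain `torch.as_tensor` call ignores a `float32` default dtype, while passing the default dtype explicitly preserves it:

```python
import numpy as np
import torch

torch.set_default_dtype(torch.float32)

array = np.zeros(3)  # NumPy arrays default to float64

# Plain conversion keeps the NumPy dtype, silently promoting to float64.
print(torch.as_tensor(array).dtype)  # torch.float64

# Converting with the default dtype keeps float32 precision intact.
print(torch.as_tensor(array, dtype=torch.get_default_dtype()).dtype)  # torch.float32
```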
Motivation
This PR replaces the hard-coded double precision that was used in the initialization of `test_functions/base.py` with `torch.get_default_dtype()`. See #2596 for more details.
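As a simplified before/after sketch of the kind of change described here (illustrative only, not the actual diff in `test_functions/base.py`):

```python
import torch

_bounds = [(-5.0, 10.0), (0.0, 15.0)]

# Before: the bounds tensor was always created in double precision,
# regardless of the globally configured default dtype.
bounds_old = torch.tensor(_bounds, dtype=torch.double).transpose(-1, -2)

# After: the tensor follows torch.get_default_dtype(), so float32 users
# get float32 test functions.
bounds_new = torch.tensor(
    _bounds, dtype=torch.get_default_dtype()
).transpose(-1, -2)

print(bounds_old.dtype, bounds_new.dtype)  # torch.float64 torch.float32 (default)
```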
Have you read the Contributing Guidelines on pull requests?
Yes, I have read it.
Test Plan
I ran code formatting via `ufmt` and checked the code via `pytest -ra`. All tests related to the changes here were adjusted in the second commit of the branch. Locally, two tests failed for me, but it seems to me that these are not related to the fix implemented here. If it turns out they are, I'd be more than happy to further adjust.
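For illustration, a minimal pytest-style sketch of how such a dtype fix could be exercised, assuming a synthetic test function like `Branin` that exposes a `bounds` buffer (a hypothetical test, not one of the tests adjusted in this PR):

```python
import torch
from botorch.test_functions import Branin


def test_bounds_follow_default_dtype():
    # After the fix, a freshly constructed test function should pick up
    # whatever default dtype is active at construction time.
    for dtype in (torch.float32, torch.float64):
        torch.set_default_dtype(dtype)
        assert Branin().bounds.dtype == dtype
```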
Related PRs
None, but #2596 is related.