
Fix hard-coded double precision in test_functions to default dtype #2597

Closed
AVHopp wants to merge 13 commits into main from fix/bounds_precision_test_functions

Conversation

@AVHopp (Contributor) commented Oct 28, 2024

Motivation

This PR replaces the hard-coded double precision used during initialization in test_functions/base.py with torch.get_default_dtype().

See #2596 for more details.

Have you read the Contributing Guidelines on pull requests?

Yes, I have read it.

Test Plan

I ran code formatting via ufmt and checked the code via pytest -ra. All tests related to the changes here were adjusted in the second commit of the branch.
Locally, two tests failed for me, but they do not appear to be related to the fix implemented here. If it turns out they are, I'd be more than happy to adjust further.

Related PRs

None, but #2596 is related.

@facebook-github-bot (Contributor)

Hi @AVHopp!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@AVHopp (Contributor, Author) commented Oct 28, 2024

I am aware that I still need to sign the CLA and will do so soon.

codecov bot commented Oct 28, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.98%. Comparing base (9d37e90) to head (6ddd51f).
Report is 6 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2597   +/-   ##
=======================================
  Coverage   99.98%   99.98%           
=======================================
  Files         196      196           
  Lines       17367    17367           
=======================================
  Hits        17365    17365           
  Misses          2        2           


@AVHopp (Contributor, Author) commented Oct 28, 2024

I have signed the CLA, but it seems the check was not retriggered. Does this just take some more time, or did something go wrong when I signed it?

@Balandat (Contributor)

> I have signed the CLA, but it seems the check was not retriggered. Does this just take some more time, or did something go wrong when I signed it?

No need to re-test. I can now import the PR and land it even if GitHub says that check failed.

@facebook-github-bot (Contributor)

@Balandat has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

facebook-github-bot added the CLA Signed label on Oct 28, 2024
@AVHopp (Contributor, Author) commented Oct 29, 2024

@Balandat @esantorella Pinging both of you since we already started discussions on #2596.

TL;DR: Changed to use an explicit dtype argument in __init__. Looking forward to feedback; also, how should we deal with the branch history?

As proposed there, I have re-implemented this so that there is an explicit dtype argument in all of the test functions. I just added it as an argument to every explicit __init__ function, always defaulting it to torch.double. Not sure if there is a more elegant way; if so, feel free to point me to it :)

For easier reviewing, each commit touches just a single file. I'd be more than happy to rebase/squash the commits at a later point, but since I could not find any guidelines on whether that is done here, I thought I'd keep a clean but "true" history for now 😄

I tested the code again locally using pytest, and also in my own code where I originally observed this problem, and everything seems to work.

Looking forward to your feedback/comments :)

Review comments (now resolved) on: botorch/test_functions/base.py, botorch/test_functions/multi_fidelity.py, botorch/test_functions/synthetic.py
@Balandat (Contributor)

> For easier reviewing, each commit touches just a single file. I'd be more than happy to rebase/squash the commits at a later point, but since I could not find any guidelines on whether that is done here, I thought I'd keep a clean but "true" history for now 😄

I think it's fine to have these as one commit. But it doesn't really matter for the branch history here; the way we merge things into botorch's main branch is that our GitHub bot will auto-squash every change from this PR into a single commit.

@facebook-github-bot (Contributor)

@Balandat has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@Balandat (Contributor)

Thanks!

@facebook-github-bot (Contributor)

@Balandat merged this pull request in 10f6342.

AVHopp deleted the fix/bounds_precision_test_functions branch on November 6, 2024
facebook-github-bot pushed a commit that referenced this pull request Nov 6, 2024
Summary:
## Motivation

This PR removes the `as_tensor` argument of the `set_tensors_from_ndarray_1d` function in `numpy_utils.py`.

The reason for this change is that the previous implementation
- messed up the float precision: even when trying to use `float32` precision, it always transformed floats into `float64`, and
- only ever used the default function anyway.

This change was discussed previously with Balandat and esantorella in #2596.

### Have you read the [Contributing Guidelines on pull requests](https://github.com/pytorch/botorch/blob/main/CONTRIBUTING.md#pull-requests)?

Yes, I have read it.

Pull Request resolved: #2615

Test Plan:
I checked the format using `ufmt format .` and verified that all tests are still running using `pytest -ra`.

## Related PRs

This PR is related to #2597 and #2596, fixing a similar issue (`float32` being broken by some internal calculations).

Reviewed By: Balandat

Differential Revision: D65537491

Pulled By: saitcakmak

fbshipit-source-id: f178a761691ce092a1b06d020158ea367fc6c99e