experimental_enable_bitpacked_activations failure for same padding #541
Comments
Thanks for the issue :) This is expected behaviour if the first convolution uses 'same' padding with zero padding values: the optimized kernels cannot write bitpacked output for 'same-zero' padding, so an `LceQuantize` op is inserted between the two convolutions.
Thanks for the clarification! In this case, would it be a good idea to raise a warning in the converter (i.e. when using `experimental_enable_bitpacked_activations`)? Also, if the reference implementation supports bitpacked output with 'same-zero' padding, shouldn't this cause a runtime error in the optimized implementation instead (and raise a warning as well during conversion)?
Sometime soon we want to make `experimental_enable_bitpacked_activations` the default behaviour. I completely agree that we should work out a good way to raise warnings for models which won't convert in an 'optimal' way -- essentially any egregious violation of our model optimisation guide. It's slightly complicated for a few reasons:
^I've copied the above into issue #542.
Passing `pad_values=1` to the convolution layers ('same-one' instead of 'same-zero' padding) avoids this and lets the activations stay bitpacked.
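A minimal sketch of that suggestion, assuming larq's `QuantConv2D` layer and its `pad_values` argument (illustrative only, not code from the thread):

```python
import larq as lq

# 'same-one' padding: pad with +1 instead of 0, so the padded values are
# themselves valid binary activations and the output can stay bitpacked.
conv = lq.layers.QuantConv2D(
    32, 3,
    padding="same",
    pad_values=1.0,  # default is 0.0, i.e. 'same-zero' padding
    input_quantizer="ste_sign",
    kernel_quantizer="ste_sign",
    kernel_constraint="weight_clip",
)
```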
I understand that this feature is experimental, but I ran into the following issue: it looks like when two binary convolutions follow each other, an `LceQuantize` op is injected if the first convolution has 'same' padding. The padding of the second convolution seems to have no effect on the outcome.
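For reference, a minimal sketch of a model along these lines, assuming larq's `QuantConv2D` layers and the `experimental_enable_bitpacked_activations` flag of `larq_compute_engine.convert_keras_model` from the issue title:

```python
import larq as lq
import larq_compute_engine as lce
import tensorflow as tf

# Two binary convolutions back to back; the first uses 'same' (zero)
# padding, which is what appears to trigger the extra LceQuantize op.
model = tf.keras.Sequential([
    lq.layers.QuantConv2D(
        32, 3, padding="same",
        input_quantizer="ste_sign", kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip", input_shape=(32, 32, 3),
    ),
    lq.layers.QuantConv2D(
        32, 3, padding="same",  # changing this padding has no effect
        input_quantizer="ste_sign", kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
    ),
])

flatbuffer = lce.convert_keras_model(
    model, experimental_enable_bitpacked_activations=True
)
with open("two_binary_convs_same_padding.tflite", "wb") as f:
    f.write(flatbuffer)
```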
Here are the output flatbuffers: larq_bitpacked_padding_bug.zip
Package versions:
tensorflow: 2.3.0
larq: 0.10.1 (from pypi)
larq-compute-engine: 0.4.3 (from pypi)