Leaky Quantized ReLU fix #961
Conversation
… new layer easier
… and default values to general attributes
…near activation embedded in qkeras layers without activation in k_to_hls converter
…ot produce leakyrelu activations when negative slope is provided
… the layer templates
…thers test scripts
hls4ml/model/types.py
Outdated
@@ -300,7 +300,7 @@ def saturation_mode(self, mode):
     def __str__(self):
         args = [self.width, self.integer, self.rounding_mode, self.saturation_mode, self.saturation_bits]
         args = ','.join([str(arg) for arg in args if arg is not None])
-        typestring = '{signed}fixed<{args}>'.format(signed='u' if not self.signed else '', args=args)
+        typestring = 'ap_{signed}fixed<{args}>'.format(signed='u' if not self.signed else '', args=args)
We don't have 'ap_' or 'ac_' prefixes in our internal typestrings. This change should be reverted.
I have removed the ap_ here and fixed the vivado_accelerator writer, which gave issues with the compiler because my_project_axi.hpp did not use the HLS type definition but the internal typestrings directly. However, that fix has been removed from this branch to keep the fixes separate.
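For context, a minimal illustration of the convention being discussed: the internal typestring stays prefix-free and only the backend-specific writer adds the ap_/ac_ prefix. This is a simplified sketch with stand-in class and function names, not hls4ml's actual implementation.

```python
# Illustrative sketch only: simplified stand-ins, not hls4ml's actual classes.
class SimpleFixedType:
    """Internal, backend-agnostic representation of a fixed-point type."""

    def __init__(self, width, integer, signed=True):
        self.width = width
        self.integer = integer
        self.signed = signed

    def __str__(self):
        # Internal typestring: no 'ap_'/'ac_' prefix here.
        sign = '' if self.signed else 'u'
        return f'{sign}fixed<{self.width},{self.integer}>'


def to_backend_cpp(precision, backend='Vivado'):
    """Only the backend-specific writer decides on the HLS-library prefix."""
    prefix = {'Vivado': 'ap_', 'Vitis': 'ap_', 'Quartus': 'ac_'}[backend]
    return prefix + str(precision)


print(SimpleFixedType(16, 6))                  # fixed<16,6>
print(to_backend_cpp(SimpleFixedType(16, 6)))  # ap_fixed<16,6>
```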
…r regarding ap_fixed types temporarily directly in the vivado accelerator writer
… not according to the negative slope
Looks good, needs cleanup. Thanks Marco!
.gitignore
Outdated
@@ -14,3 +14,4 @@ docs/autodoc/*
 hls4mlprj_*
 *~
 *.ipynb_checkpoints/
+*.bak
Unrelated change
Exception removed, my bad; it was there for the byproducts of running pre-commit manually.
@@ -68,16 +68,16 @@ def __init__(self, config, model_inputs, model_outputs):
         if out_axi_t not in ['float', 'double']:
             self.output_type = self._next_factor8_type(config.backend.convert_precision_string(out_axi_t))

-        if self.input_type == 'float':
+        if str(self.input_type) == 'float':
Unrelated change, shouldn't be part of this PR
I have removed all the fixes concerning the vivado_accelerator backend and writer (small stuff). Should I open another PR for those?
hls4ml/converters/keras/qkeras.py
Outdated
@@ -147,6 +146,10 @@ def parse_qactivation_layer(keras_layer, input_names, input_shapes, data_reader):
         layer['slope_prec'] = FixedPrecisionType(width=2, integer=0, signed=False)
         layer['shift_prec'] = FixedPrecisionType(width=2, integer=0, signed=False)
         layer['activation'] = activation_config['class_name'].replace('quantized_', 'hard_')
+    elif activation_config['class_name'] == 'quantized_relu' and activation_config['config']['negative_slope'] != 0:
+        layer['class_name'] = 'QLeakyReLU'
Calling it QLeakyReLU is only complicating things down the line. Without Q there's no need for changes to keras_to_hls.py and layers.py.
You are right; I wanted to be explicit when writing the code for my own sake, but indeed using LeakyReLU makes the fix simpler.
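To illustrate the direction agreed on here, a hedged sketch of what the simpler mapping could look like. This is a hypothetical fragment following the field names in the diff above; the exact strings and attribute names are assumptions, not the merged hls4ml code.

```python
# Hypothetical converter fragment, not the actual hls4ml converter code.
def map_quantized_relu(layer, activation_config):
    cfg = activation_config['config']
    if activation_config['class_name'] == 'quantized_relu' and cfg.get('negative_slope', 0) != 0:
        # Reuse the already supported LeakyReLU path (ParametrizedActivation),
        # so no new QLeakyReLU class is needed in keras_to_hls.py or layers.py.
        layer['class_name'] = 'LeakyReLU'
        layer['activation'] = 'leaky_relu'
        layer['activ_param'] = cfg['negative_slope']
    return layer
```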
hls4ml/converters/keras_to_hls.py
Outdated
@@ -208,6 +208,8 @@ def parse_keras_model(model_arch, reader):
     ]
     # Recurrent layers
     recurrent_layers = ['SimpleRNN', 'LSTM', 'GRU']
+    # QActivation layers
Not needed with the change above
hls4ml/converters/keras_to_hls.py
Outdated
-    if 'activation' in layer and layer['class_name'] not in activation_layers + recurrent_layers:  # + qkeras_layers:
+    if (
+        'activation' in layer
+        and layer['class_name'] not in activation_layers + recurrent_layers + qactivation_layers
not needed
hls4ml/utils/config.py
Outdated
     signed = False
     integer -= 1
+    if quantizer['config']['negative_slope'] != 0.0:
+        print(quantizer)
Get rid of the print statement and the commented-out code.
Print and commented code removed
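For reference, a simplified sketch of the precision logic this utils/config.py thread touches (not the actual hls4ml helper): a plain quantized_relu output can stay unsigned, but a non-zero negative_slope means the output can be negative, so the sign bit has to be kept.

```python
# Simplified sketch of the idea, not the actual hls4ml config helper.
def relu_output_precision(bits, integer, negative_slope=0.0):
    """Return (signed, width, integer_bits) for a quantized_relu output."""
    signed = negative_slope != 0.0  # leaky outputs can be negative, so keep a sign bit
    if not signed:
        integer -= 1  # a plain ReLU output is non-negative, so the sign bit can be dropped
    return signed, bits, integer


print(relu_output_precision(10, 5))                       # (False, 10, 4)
print(relu_output_precision(10, 5, negative_slope=0.25))  # (True, 10, 5)
```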
@@ -38,8 +38,14 @@ def write_axi_wrapper(self, model):
         newline += f'static const unsigned N_IN = {inp.size()};\n'
         newline += f'static const unsigned N_OUT = {out.size()};\n'
         if self.vivado_accelerator_config.get_interface() == 'axi_stream':
-            newline += f'typedef {inp_axi_t} T_in;\n'
-            newline += f'typedef {out_axi_t} T_out;\n'
+            if 'fixed' in str(inp_axi_t):
Unrelated to this PR
test/pytest/q_leaky_relu_test.py
Outdated
def test_quantizer(randX_1000_1, quantizer, backend, io_type):
    '''
    Test a single quantizer as an Activation function.
    Checks the type inference through the conversion is correct without just
Not sure I understand this sentence :-)
I am sorry, it does not make any sense; I think it was a copy-and-paste mishap.
test/pytest/q_leaky_relu_test.py
Outdated
)
@pytest.mark.parametrize('backend', ['Vivado', 'Vitis', 'Quartus'])
@pytest.mark.parametrize('io_type', ['io_parallel', 'io_stream'])
def test_quantizer(randX_1000_1, quantizer, backend, io_type):
This should have a proper name, and could be fused with the existing qkeras test file. Or split along with the other qactivation tests. But whatever the solution ends up being, it should be properly named with a test_ prefix.
@@ -272,6 +272,7 @@ def randX_1000_1():
     (quantized_relu(8, 4)),
     (quantized_relu(10)),
     (quantized_relu(10, 5)),
+    (quantized_relu(10, 5, negative_slope=0.25)),
Aren't you already testing this in the other test?
In order to create the test suite requested for the PR, I added the line to the already present qkeras test; however, due to problems with the environment required for the pytests, some issues not related to my fix came up. So I have written a separate script just to show that the "new" layer passes an accuracy test. If the test_qkeras.py test passes when called in the automatic workflow, my other script can be removed.
Sorry for the delay, but what is the status of this? It looks to me like maybe I can approve and merge it.
I was testing this and the changes work; however, the PR includes unnecessary changes and the test doesn't do anything (the data is all positive, so it passes even without the changes in the converter). I distilled the changes and made a proper test in https://github.com/vloncar/hls4ml/tree/qrelu_negative_slope
I made #987 as a continuation of this PR
Description
The quantized_relu activation layer included in the QKeras library lets the user set the negative_slope option, which effectively changes the activation function from the usual ReLU to a LeakyReLU. This change was not picked up by hls4ml, meaning that the information was lost when the HLS model was created by the library. In order to fix this behaviour I have added the QLeakyReLU to the list of supported layers in model/layers.py, using the ParametrizedActivation implementation, mimicking the LeakyReLU already present and so following an implementation path similar to the "non-leaky" quantized_relu.
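As a plain functional reference for the behaviour described above, ignoring QKeras quantization details, this is the mapping the fix is meant to preserve (illustrative sketch only, not the QKeras implementation):

```python
import numpy as np

def leaky_relu(x, negative_slope=0.25):
    # f(x) = x for x >= 0 and negative_slope * x for x < 0;
    # with negative_slope = 0 this reduces to the ordinary ReLU.
    return np.where(x >= 0, x, negative_slope * x)

print(leaky_relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [-0.5 -0.125 0. 1.5]
```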
Other changes were made to model/profiling.py and utils/config.py to make them compatible with the new layer.
A couple of other fixes were made in backends/vivado_accelerator/vivado_accelerator_config.py (a missing "casting" function when comparing complex objects with strings) and in model/types.py (adding the ap_ prefix to the HLS fixed-precision type due to errors raised by the Vivado compiler).
Type of change
Tests
The quantized_relu with a negative_slope different from the default one was added to the pytest routine covering QKeras layers. A specific test script has also been added to the pytest directory; the results from the new implementation are asserted to be equal to the QKeras layer's output with a relative tolerance of 0.00001.
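A minimal sketch of the kind of accuracy check described above, assuming the public QKeras and hls4ml APIs; the model shape, output directory name, and data generation here are illustrative, not the PR's actual script.

```python
import numpy as np
from qkeras import QActivation, quantized_relu
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model

import hls4ml


def test_quantized_relu_negative_slope():
    # QKeras model containing only the "leaky" quantized ReLU under test
    x_in = Input(shape=(1,))
    x_out = QActivation(quantized_relu(10, 5, negative_slope=0.25))(x_in)
    model = Model(inputs=x_in, outputs=x_out)

    # Include negative inputs, otherwise the leaky branch is never exercised
    X = np.random.uniform(-8, 8, size=(1000, 1)).astype(np.float32)
    y_keras = model.predict(X)

    config = hls4ml.utils.config_from_keras_model(model, granularity='name')
    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        output_dir='hls4mlprj_qrelu_negative_slope',
        io_type='io_parallel',
        backend='Vivado',
    )
    hls_model.compile()
    y_hls = hls_model.predict(X).reshape(y_keras.shape)

    # Relative tolerance matching the one quoted in the PR description
    np.testing.assert_allclose(y_hls, y_keras, rtol=1e-5)
```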
Checklist
I have run pre-commit on the files I edited or added.