This could improve accuracy, but it would require much more memory: in this example it is around 16 kbit of additional memory for an accuracy of 91.31%.
1x1 convolution and average pooling:
loss: 0.7989 - accuracy: 0.7441

+sequential stats---------------------------------------------------------------------------------------------+
| Layer                      Input prec.           Outputs  # 1-bit  # 32-bit  Memory  1-bit MACs  32-bit MACs |
|                                  (bit)                        x 1       x 1    (kB)                          |
+-------------------------------------------------------------------------------------------------------------+
| quant_conv2d                         -   (-1, 26, 26, 8)       72         0    0.01           0        48672 |
| batch_normalization                  -   (-1, 26, 26, 8)        0        16    0.06           0            0 |
| quant_conv2d_1                       1  (-1, 24, 24, 16)     1152         0    0.14      663552            0 |
| batch_normalization_1                -  (-1, 24, 24, 16)        0        32    0.12           0            0 |
| max_pooling2d                        -  (-1, 12, 12, 16)        0         0       0           0            0 |
| quant_conv2d_2                       1  (-1, 10, 10, 32)     4608         0    0.56      460800            0 |
| batch_normalization_2                -  (-1, 10, 10, 32)        0        64    0.25           0            0 |
| max_pooling2d_1                      -    (-1, 5, 5, 32)        0         0       0           0            0 |
| quant_conv2d_3                       1    (-1, 5, 5, 64)     2048         0    0.25       51200            0 |
| batch_normalization_3                -    (-1, 5, 5, 64)        0       128    0.50           0            0 |
| quant_conv2d_4                       1    (-1, 5, 5, 10)      640         0    0.08       16000            0 |
| global_average_pooling2d             -          (-1, 10)        0         0       0           ?            ? |
| activation                           -          (-1, 10)        0         0       0           ?            ? |
+-------------------------------------------------------------------------------------------------------------+
| Total                                                        8520       240    1.98     1191552        48672 |
+-------------------------------------------------------------------------------------------------------------+
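As a sanity check, the head rows of the table above can be reproduced by hand. A minimal sketch (plain Python, no Larq required; the layer shapes are taken from the table) of the 1x1-convolution head's parameter and MAC counts:

```python
# Head of this model: a binarized 1x1 convolution mapping the 5x5x64 feature
# map to 10 channels, followed by global average pooling.
h, w, c_in, c_out = 5, 5, 64, 10  # last feature map size and class count

# A 1x1 convolution has one weight per (input channel, output channel) pair.
conv_params_1bit = 1 * 1 * c_in * c_out        # 640 binary weights
conv_memory_kb = conv_params_1bit / 8 / 1024   # ~0.08 kB, as in the table

# Each of the h*w output positions needs c_in MACs per output channel.
conv_macs_1bit = h * w * c_in * c_out          # 16000 binary MACs

print(conv_params_1bit, conv_macs_1bit, round(conv_memory_kb, 2))
```

This matches the `quant_conv2d_4` row: 640 one-bit weights, 0.08 kB, 16000 one-bit MACs.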
Fully connected layer (e7a12d5):
loss: 3.3418 - accuracy: 0.9131

+sequential stats------------------------------------------------------------------------------------------+
| Layer                   Input prec.           Outputs  # 1-bit  # 32-bit  Memory  1-bit MACs  32-bit MACs |
|                               (bit)                        x 1       x 1    (kB)                          |
+----------------------------------------------------------------------------------------------------------+
| quant_conv2d                      -   (-1, 26, 26, 8)       72         0    0.01           0        48672 |
| batch_normalization               -   (-1, 26, 26, 8)        0        16    0.06           0            0 |
| quant_conv2d_1                    1  (-1, 24, 24, 16)     1152         0    0.14      663552            0 |
| batch_normalization_1             -  (-1, 24, 24, 16)        0        32    0.12           0            0 |
| max_pooling2d                     -  (-1, 12, 12, 16)        0         0       0           0            0 |
| quant_conv2d_2                    1  (-1, 10, 10, 32)     4608         0    0.56      460800            0 |
| batch_normalization_2             -  (-1, 10, 10, 32)        0        64    0.25           0            0 |
| max_pooling2d_1                   -    (-1, 5, 5, 32)        0         0       0           0            0 |
| quant_conv2d_3                    1    (-1, 5, 5, 64)     2048         0    0.25       51200            0 |
| batch_normalization_3             -    (-1, 5, 5, 64)        0       128    0.50           0            0 |
| flatten                           -        (-1, 1600)        0         0       0           0            0 |
| quant_dense                       1          (-1, 10)    16000         0    1.95       16000            0 |
| activation                        -          (-1, 10)        0         0       0           ?            ? |
+----------------------------------------------------------------------------------------------------------+
| Total                                                    23880       240    3.85     1191552        48672 |
+----------------------------------------------------------------------------------------------------------+
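The memory gap between the two heads can likewise be checked by hand. A small sketch (plain Python, numbers taken from the two tables above) comparing the fully connected head against the 1x1-convolution head:

```python
# Fully connected head: flatten the 5x5x64 feature map to 1600 values,
# then a binarized dense layer with 10 outputs.
flat_in, n_classes = 5 * 5 * 64, 10

dense_params_1bit = flat_in * n_classes          # 16000 binary weights
dense_memory_kb = dense_params_1bit / 8 / 1024   # ~1.95 kB, as in the table
dense_macs_1bit = flat_in * n_classes            # 16000 binary MACs

# The 1x1-convolution head only needs one weight per channel pair.
conv_params_1bit = 64 * 10                       # 640 binary weights

# 15360 extra bits, i.e. the roughly 16 kbit of additional memory noted above.
extra_bits = dense_params_1bit - conv_params_1bit

print(dense_params_1bit, round(dense_memory_kb, 2), extra_bits)
```

The MAC count is identical in both heads (16000 one-bit MACs), so the trade-off is purely accuracy versus weight memory.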