Hi,
I'm experiencing some strange visual artifacts when viewing multiscale Neuroglancer data, and I'm trying to determine whether this issue originates from our data or from Neuroglancer itself.
Previously, the neuroglancer_precomputed driver incorrectly encoded jpeg-
and png-format chunks when the x and y dimensions of the chunk size
differed. In particular, chunks were encoded with the y dimension as the
image width and x * z as the height, even though the data was actually
stored with x as the inner-most dimension. All pixels were still stored
in the correct linear order, so both tensorstore and neuroglancer could
decode them, but the image data was misaligned with respect to image
rows: individual chunks did not display correctly in a normal image
viewer, and compression performed poorly due to this misalignment. Since
png is lossless, the poor compression merely increased file size, but
with jpeg it also tended to introduce extreme artifacts.
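The row misalignment described above can be sketched with a small numpy example (the dimensions here are hypothetical, chosen only so that x != y and the bug is visible):

```python
import numpy as np

# Hypothetical chunk dimensions; the bug only manifests when x != y.
x, y, z = 4, 2, 3

# Voxel data stored with x as the inner-most (fastest-varying) dimension.
flat = np.arange(x * y * z, dtype=np.uint8)

# Buggy encoding: width = y, height = x * z. The linear pixel order is
# preserved, but image rows no longer coincide with contiguous runs
# along x, so the 2-D image looks scrambled and compresses poorly.
buggy = flat.reshape(x * z, y)

# Correct encoding: width = x, height = y * z. Each image row is one
# contiguous run along the inner-most x dimension.
correct = flat.reshape(y * z, x)

# Every row of the correct layout is a consecutive run of x values:
assert (np.diff(correct, axis=1) == 1).all()
```

Both layouts contain the same bytes in the same order, which is why decoding still worked; only the row boundaries (and hence the visual appearance and compressibility of each chunk image) differ.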
This commit also enables support for 16-bit PNG images with up to 4
channels.
Fixes google/neuroglancer#677.
PiperOrigin-RevId: 708415255
Change-Id: I057cc3fc4f073026e8bf96cc543be5a769c626c2
Here is an example of this issue:
Issue Details:
Any insights or suggestions would be greatly appreciated!