You may find 64^3 chunks provide better visualization performance than 96^3, since for cross-section views you end up loading 50% more data for the initial display with 96^3 chunks. You may also find it helpful to use jpeg compression for uint8 data --- I see some volumes are using n5 with gzip compression --- though I know we already discussed the lack of n5/zarr jpeg compression support in Neuroglancer and tensorstore.
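The 50% figure follows from chunk geometry: a cross-section view has to load every chunk the slicing plane touches, and each chunk is fetched in full, so data transferred scales with the chunk's depth along the slicing axis. A quick back-of-the-envelope sketch (the 1024-pixel view size and uint8 assumption are illustrative, not from the thread):

```python
import math

def bytes_loaded_for_slice(view_px: int, chunk: int, bytes_per_voxel: int = 1) -> int:
    """Bytes fetched to render one axis-aligned cross-section of a
    view_px x view_px region, assuming every intersected chunk is
    downloaded in full (uint8 by default)."""
    chunks_per_axis = math.ceil(view_px / chunk)
    return chunks_per_axis ** 2 * chunk ** 3 * bytes_per_voxel

small = bytes_loaded_for_slice(1024, 64)   # 16 x 16 = 256 chunks of 64^3
large = bytes_loaded_for_slice(1024, 96)   # 11 x 11 = 121 chunks of 96^3
print(small, large, large / small)
```

Ignoring edge effects, the ratio is simply 96/64 = 1.5, i.e. 50% more data per slice; with chunk-boundary rounding on a 1024-pixel view it comes out slightly higher (about 1.6x).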
I admit I picked 96^3 somewhat arbitrarily in an attempt to balance loading / visualization performance against the cost of a lot of tiny objects in storage. I should do a more thorough comparison.
As for datatypes, I think all the raw uint8 data are stored with jpeg compression in precomputed format, albeit unsharded (until tensorstore has a better API for saving sharded volumes with parallelism over shard files :) ). For some of the analysis volumes (predictions) I opted for the lossless compression in case someone wants to download those volumes for re-analysis.
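For reference, selecting jpeg encoding for a precomputed volume in tensorstore comes down to one field in the spec. A minimal sketch of such a spec for an unsharded uint8 image volume (the path, dimensions, resolution, and quality value are hypothetical, and the exact field names are from memory, so check them against the tensorstore `neuroglancer_precomputed` driver docs):

```python
# Sketch of a tensorstore spec for an unsharded, jpeg-encoded
# precomputed uint8 volume. All concrete values below are
# illustrative assumptions, not taken from the actual dataset.
spec = {
    "driver": "neuroglancer_precomputed",
    "kvstore": {"driver": "file", "path": "/tmp/example_volume/"},  # hypothetical path
    "multiscale_metadata": {"type": "image", "num_channels": 1},
    "scale_metadata": {
        "size": [2048, 2048, 2048],   # hypothetical volume size
        "resolution": [8, 8, 8],      # nm per voxel (illustrative)
        "chunk_size": [64, 64, 64],   # per the 64^3 suggestion above
        "encoding": "jpeg",           # lossy; only appropriate for raw uint8 data
        "jpeg_quality": 90,
    },
    "create": True,
    "dtype": "uint8",
}

# import tensorstore as ts
# store = ts.open(spec).result()  # then write arrays into `store`
```

Omitting a sharding config in `scale_metadata` yields the unsharded layout mentioned above; lossless volumes would instead use `"encoding": "raw"` with a separate compression layer.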
jpeg compression for n5/zarr would be great. I have a stalled PR that would enable this for n5: zarr-developers/zarr-python#577. As soon as that gets some traction I will probably open an issue / PR on neuroglancer to support it.