code bug #15

Open
cindy12-gao opened this issue Feb 15, 2023 · 2 comments
@cindy12-gao

I ran the script on my own data. The N5 attributes are `{"compression":{"type":"gzip","useZlib":false,"level":-1},"pixelResolution":[0.18026115154068303,0.18026115154068303,0.9994],"downsamplingFactors":[1,1,1],"blockSize":[64,64,5],"dataType":"uint8","dimensions":[3904,3884,14]}`. There is always an error in numpy's `_methods.py`, line 80:
```python
def _count_reduce_items(arr, axis, keepdims=False, where=True):
    # fast-path for the default case
    if where is True:
        # no boolean mask given, calculate items according to axis
        if axis is None:
            axis = tuple(range(arr.ndim))
        elif not isinstance(axis, tuple):
            axis = (axis,)
        items = 1
        for ax in axis:  # edit by gaoxinwei
            items *= arr.shape[mu.normalize_axis_index(ax, arr.ndim)]
        items = nt.intp(items)
```
The axis is 0, so it stops inside the `for ax in axis` loop.
Could you help me find the reason? Thank you!
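For context, `_count_reduce_items` is numpy's internal helper that counts how many elements a reduction such as `np.mean` averages over. A minimal sketch of one way this code path can go wrong (an illustration only, not necessarily the failure in this issue): if the reduced axis has size 0, the item count is 0 and the mean becomes NaN with a runtime warning.

```python
import warnings
import numpy as np

# Hypothetical illustration: an array whose first axis is empty,
# matching the dtype from the N5 attributes above (uint8).
arr = np.empty((0, 3, 4), dtype=np.uint8)

with warnings.catch_warnings():
    # np.mean over the empty axis goes through _count_reduce_items,
    # which counts 0 items; numpy warns "Mean of empty slice".
    warnings.simplefilter("ignore", RuntimeWarning)
    result = np.mean(arr, axis=0)

print(result.shape)            # (3, 4)
print(np.isnan(result).all())  # True: mean of zero items is NaN
```

Checking the shape of the array reaching the reduction (e.g. printing `fix_lowres_data.shape`) would rule this case in or out.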

The main script is as follows:

```python
import numpy as np
import zarr, tifffile
from bigstream.align import alignment_pipeline
from bigstream.transform import apply_transform

# file paths to tutorial data
# replace the capitalized text below with the path to your copy of the bigstream repository
fix_path = 'D:/bigatream/r1.n5'
mov_path = 'D:/bigatream/r2.n5'

# create Zarr file objects
fix_zarr = zarr.open(store=zarr.N5Store(fix_path), mode='r')
mov_zarr = zarr.open(store=zarr.N5Store(mov_path), mode='r')

# get pointers to the low res scale level
# still just pointers, no data loaded into memory yet
fix_lowres = fix_zarr['/lowres']
mov_lowres = mov_zarr['/lowres']

# we need the voxel spacings for the low res data sets
# we can compute them from the low res data set metadata
fix_meta = fix_lowres.attrs.asdict()
mov_meta = mov_lowres.attrs.asdict()
fix_lowres_spacing = np.array(fix_meta['pixelResolution']) * fix_meta['downsamplingFactors']
mov_lowres_spacing = np.array(mov_meta['pixelResolution']) * mov_meta['downsamplingFactors']
fix_lowres_spacing = fix_lowres_spacing[::-1]  # put in zyx order to be consistent with image data
mov_lowres_spacing = mov_lowres_spacing[::-1]

# read small image data into memory as numpy arrays
fix_lowres_data = fix_lowres[...]
mov_lowres_data = mov_lowres[...]

# sanity check: print the voxel spacings and lowres dataset shapes
print(fix_lowres_spacing, mov_lowres_spacing)
print(fix_lowres_data.shape, mov_lowres_data.shape)

# get pointers to the high res scale level
fix_highres = fix_zarr['/highres']
mov_highres = mov_zarr['/highres']

# we need the voxel spacings for the high res data sets
# we can compute them from the high res data set metadata
fix_meta = fix_highres.attrs.asdict()
mov_meta = mov_highres.attrs.asdict()
fix_highres_spacing = np.array(fix_meta['pixelResolution']) * fix_meta['downsamplingFactors']
mov_highres_spacing = np.array(mov_meta['pixelResolution']) * mov_meta['downsamplingFactors']
fix_highres_spacing = fix_highres_spacing[::-1]
mov_highres_spacing = mov_highres_spacing[::-1]

# sanity check: print the voxel spacings and highres dataset shapes
print(fix_highres_spacing, mov_highres_spacing)
print(fix_highres.shape, mov_highres.shape)

# define arguments for the feature point and ransac stage (you'll understand these later)
ransac_kwargs = {'blob_sizes': [6, 20]}

# define arguments for the gradient descent stage (you'll understand these later)
affine_kwargs = {
    'shrink_factors': (2,),
    'smooth_sigmas': (2.5,),
    'optimizer_args': {
        'learningRate': 0.25,
        'minStep': 0.,
        'numberOfIterations': 400,
    },
}

# define the alignment steps
steps = [('ransac', ransac_kwargs), ('affine', affine_kwargs)]

# execute the alignment
affine = alignment_pipeline(
    fix_lowres_data, mov_lowres_data,
    fix_lowres_spacing, mov_lowres_spacing,
    steps,
)

# resample the moving image data using the transform you found
aligned = apply_transform(
    fix_lowres_data, mov_lowres_data,
    fix_lowres_spacing, mov_lowres_spacing,
    transform_list=[affine,],
)

# write results
np.savetxt('./affine.mat', affine)
tifffile.imsave('./affine_lowres.tiff', aligned)

# load precomputed result (handy to use later if you've already run the cell)
affine = np.loadtxt('./affine.mat')
```
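The spacing computation in the script can be checked standalone against the N5 attributes quoted at the top of this issue: voxel spacing is `pixelResolution` times `downsamplingFactors`, reversed into zyx order.

```python
import numpy as np

# Illustration only: the N5 attributes quoted in this issue.
attrs = {
    "pixelResolution": [0.18026115154068303, 0.18026115154068303, 0.9994],
    "downsamplingFactors": [1, 1, 1],
}

# spacing = pixelResolution * downsamplingFactors, then flip to zyx
# order to be consistent with the image data layout.
spacing = np.array(attrs["pixelResolution"]) * attrs["downsamplingFactors"]
spacing_zyx = spacing[::-1]  # z first: [0.9994, 0.18026..., 0.18026...]
print(spacing_zyx)
```

If the printed spacing looks wrong (e.g. all zeros or the wrong order), the alignment stages downstream would receive bad physical coordinates.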

@GFleishman
Member

Hi Cindy - thank you for trying bigstream, and I'm sorry for the very long delay before reading and attempting to help with this issue. I am not only the developer of bigstream but also use it every day on data from various sources, so it is very hard to find enough time to do all that work and also help users with issues. But I can try to help you now.

But it's not clear to me from what you've shown in this issue exactly where the error is happening. The exception you shared at the top of your message is not from my own code; it's from a library that bigstream uses. And you haven't shown exactly which line of the bigstream tutorial notebook throws this exception, so I can't really debug the issue with what you've given me.

If you are still working with bigstream and getting errors, screenshots of your Jupyter Notebook session, showing the cell that throws the exception and the full message that is printed out, would be more helpful.
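One way to capture the information requested above is to wrap the failing call so the complete stack trace is printed, including the frames that lead from the notebook into bigstream and down to numpy's `_methods.py`. A small hypothetical helper (not part of bigstream):

```python
import traceback

def run_with_traceback(fn, *args, **kwargs):
    """Call fn; on failure, print the full stack trace before re-raising.

    The trace shows every intermediate frame, so the exact bigstream
    call that reaches numpy's internal reduction code becomes visible.
    """
    try:
        return fn(*args, **kwargs)
    except Exception:
        traceback.print_exc()
        raise
```

For example, running `run_with_traceback(alignment_pipeline, fix_lowres_data, mov_lowres_data, fix_lowres_spacing, mov_lowres_spacing, steps)` in the failing cell would pinpoint which stage triggers the exception.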

@cindy12-gao
Author

cindy12-gao commented Jul 29, 2023 via email
