code bug #15
Hi Cindy - thank you for trying bigstream, and I'm sorry for the very long delay before reading and attempting to help with this issue. I am not only the developer of bigstream but am also using it every day on data from various sources, so it is very hard to find enough time to complete all that work and help users with issues. But I can now try to help you some.

However, it's not clear to me from what you've shown in this issue exactly where the error is happening. The actual exception you shared at the top of your message is not from my own code; it's from some library that bigstream uses. And you haven't shown me exactly which line from the bigstream tutorial notebook throws this exception, so I can't really debug this issue with what you've given me. If you are still working with bigstream and getting errors, screenshots of your Jupyter Notebook session showing the cell which throws the exception and the message that is printed out would be more helpful.
Hi Greg,
Sorry for the late reply. I actually tried to reply by e-mail, but I don't know why the e-mail was returned.
The script is as follows:
```python
import os
import numpy as np
import zarr, tifffile
from bigstream.align import alignment_pipeline
from bigstream.transform import apply_transform

Path = 'F:/selectz_test_file_converter/1-10_c1'
C2_Path = 'F:/selectz_test_file_converter/1-10_c2'
Outpath = 'F:/selectz_test_file_converter/1-10_c1_output/'
Outpath_C2 = 'F:/selectz_test_file_converter/1-10_c2_output/'
timepoint = os.listdir('F:/selectz_test_file_converter/1-10_c1')

for item in range(len(timepoint)):
    fix_path = Path + '/' + timepoint[0]
    mov_path = Path + '/' + timepoint[item]
    mov_path_C2 = C2_Path + '/' + timepoint[item]

    # create Zarr file objects
    fix_zarr = zarr.open(store=zarr.N5Store(fix_path), mode='r')
    mov_zarr = zarr.open(store=zarr.N5Store(mov_path), mode='r')
    mov_zarr_C2 = zarr.open(store=zarr.N5Store(mov_path_C2), mode='r')
    # mov_zarr_channel2 = zarr.open(store=zarr.N5Store(mov_path_channel2), mode='r')

    # get pointers to the high res scale level
    fix_highres = fix_zarr['/high']
    mov_highres = mov_zarr['/high']
    mov_highres_channel2 = mov_zarr_C2['/high']

    fix_meta = fix_highres.attrs.asdict()
    # mov_meta = mov_highres.attrs.asdict()
    mov_meta = fix_meta
    mov_meta_C2 = fix_meta
    # mov_meta_channel2 = mov_highres_channel2.attrs.asdict()

    fix_highres_spacing = np.array(fix_meta['pixelResolution']) * fix_meta['downsamplingFactors']
    mov_highres_spacing = np.array(mov_meta['pixelResolution']) * mov_meta['downsamplingFactors']
    mov_highres_spacing_channel2 = np.array(mov_meta_C2['pixelResolution']) * mov_meta_C2['downsamplingFactors']
    fix_highres_spacing = fix_highres_spacing[::-1]
    mov_highres_spacing = mov_highres_spacing[::-1]
    mov_highres_spacing_channel2 = mov_highres_spacing_channel2[::-1]

    # ransac_kwargs = {'blob_sizes': [6, 20]}
    # affine_kwargs = {
    #     'shrink_factors': (2.5,),
    #     'smooth_sigmas': (3.0,),
    #     'optimizer_args': {
    #         'learningRate': 0.25,
    #         'minStep': 0.,
    #         'numberOfIterations': 200,
    #     },
    # }
    affine_kwargs = {
        'shrink_factors': (2,),
        'smooth_sigmas': (2.5,),
        'optimizer_args': {
            'learningRate': 0.25,
            'minStep': 0.,
            'numberOfIterations': 200,
        },
    }
    steps = [('affine', affine_kwargs)]

    affine = alignment_pipeline(
        fix_highres, mov_highres,
        fix_highres_spacing, mov_highres_spacing,
        steps,
    )
    aligned = apply_transform(
        fix_highres, mov_highres,
        fix_highres_spacing, mov_highres_spacing,
        transform_list=[affine,],
    )

    # write results
    file_save_name = Outpath + timepoint[item] + '.tiff'
    np.savetxt('./affine.mat', affine)
    tifffile.imsave(file_save_name, aligned)

    aligned = apply_transform(
        fix_highres, mov_highres_channel2,
        fix_highres_spacing, mov_highres_spacing_channel2,
        transform_list=[affine,],
    )

    # write results
    file_save_name2 = Outpath_C2 + timepoint[item] + '.tiff'
    np.savetxt('./affine.mat', affine)
    tifffile.imsave(file_save_name2, aligned)
```
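Since the maintainer asked which call actually raises the exception, one way to pin it down is to wrap the pipeline call and print the full traceback. This is a minimal sketch, not bigstream-specific; `run_pipeline` here is a hypothetical stand-in for the `alignment_pipeline(...)` call in the script above:

```python
import traceback

def run_pipeline():
    # hypothetical stand-in for the alignment_pipeline(...) call above
    raise ValueError("example failure inside a library call")

tb_text = ""
try:
    run_pipeline()
except Exception:
    # capture the complete stack, including library frames, so the
    # failing line in the calling script is visible
    tb_text = traceback.format_exc()
    print(tb_text)
```

Pasting the captured traceback into the issue shows both the bigstream line in your script and the library frame where the exception originates.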
I ran the script on my own data. The N5 parameters are `{"compression":{"type":"gzip","useZlib":false,"level":-1},"pixelResolution":[0.18026115154068303,0.18026115154068303,0.9994],"downsamplingFactors":[1,1,1],"blockSize":[64,64,5],"dataType":"uint8","dimensions":[3904,3884,14]}`. There is always an error in numpy's `_methods.py`, line 80:
```python
def _count_reduce_items(arr, axis, keepdims=False, where=True):
    # fast-path for the default case
    if where is True:
        # no boolean mask given, calculate items according to axis
        if axis is None:
            axis = tuple(range(arr.ndim))
        elif not isinstance(axis, tuple):
            axis = (axis,)
        items = 1
        for ax in axis:  # edit by gaoxinwei
            items *= arr.shape[mu.normalize_axis_index(ax, arr.ndim)]
        items = nt.intp(items)
```
The `axis` value is 0, and it stopped at the `for ax in axis` line.
Could you help me find the reason? Thank you!
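As a sanity check before debugging further, the voxel spacing the script derives from the N5 attributes posted above can be verified in isolation (values copied from that metadata):

```python
import numpy as np

# attributes copied from the N5 metadata posted above
meta = {
    "pixelResolution": [0.18026115154068303, 0.18026115154068303, 0.9994],
    "downsamplingFactors": [1, 1, 1],
}

# same computation as in the script: multiply, then reverse to zyx order
spacing = np.array(meta["pixelResolution"]) * meta["downsamplingFactors"]
spacing = spacing[::-1]

print(spacing)  # z spacing first, then y and x
```

If this prints a sensible three-element array, the spacing inputs to `alignment_pipeline` are not the problem and the error is more likely in the image data or library versions.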
The main script is as follows:
```python
import numpy as np
import zarr, tifffile
from bigstream.align import alignment_pipeline
from bigstream.transform import apply_transform

# file paths to tutorial data
# replace the capitalized text below with the path to your copy of the bigstream repository
fix_path = 'D:/bigatream/r1.n5'
mov_path = 'D:/bigatream/r2.n5'

# create Zarr file objects
fix_zarr = zarr.open(store=zarr.N5Store(fix_path), mode='r')
mov_zarr = zarr.open(store=zarr.N5Store(mov_path), mode='r')

# get pointers to the low res scale level
# still just pointers, no data loaded into memory yet
fix_lowres = fix_zarr['/lowres']
mov_lowres = mov_zarr['/lowres']

# we need the voxel spacings for the low res data sets
# we can compute them from the low res data set metadata
fix_meta = fix_lowres.attrs.asdict()
mov_meta = mov_lowres.attrs.asdict()
fix_lowres_spacing = np.array(fix_meta['pixelResolution']) * fix_meta['downsamplingFactors']
mov_lowres_spacing = np.array(mov_meta['pixelResolution']) * mov_meta['downsamplingFactors']
fix_lowres_spacing = fix_lowres_spacing[::-1]  # put in zyx order to be consistent with image data
mov_lowres_spacing = mov_lowres_spacing[::-1]

# read small image data into memory as numpy arrays
fix_lowres_data = fix_lowres[...]
mov_lowres_data = mov_lowres[...]

# sanity check: print the voxel spacings and lowres dataset shapes
print(fix_lowres_spacing, mov_lowres_spacing)
print(fix_lowres_data.shape, mov_lowres_data.shape)

# get pointers to the high res scale level
fix_highres = fix_zarr['/highres']
mov_highres = mov_zarr['/highres']

# we need the voxel spacings for the high res data sets
# we can compute them from the high res data set metadata
fix_meta = fix_highres.attrs.asdict()
mov_meta = mov_highres.attrs.asdict()
fix_highres_spacing = np.array(fix_meta['pixelResolution']) * fix_meta['downsamplingFactors']
mov_highres_spacing = np.array(mov_meta['pixelResolution']) * mov_meta['downsamplingFactors']
fix_highres_spacing = fix_highres_spacing[::-1]
mov_highres_spacing = mov_highres_spacing[::-1]

# sanity check: print the voxel spacings and highres dataset shapes
print(fix_highres_spacing, mov_highres_spacing)
print(fix_highres.shape, mov_highres.shape)

# define arguments for the feature point and ransac stage (you'll understand these later)
ransac_kwargs = {'blob_sizes': [6, 20]}

# define arguments for the gradient descent stage (you'll understand these later)
affine_kwargs = {
    'shrink_factors': (2,),
    'smooth_sigmas': (2.5,),
    'optimizer_args': {
        'learningRate': 0.25,
        'minStep': 0.,
        'numberOfIterations': 400,
    },
}

# define the alignment steps
steps = [('ransac', ransac_kwargs), ('affine', affine_kwargs)]

# execute the alignment
affine = alignment_pipeline(
    fix_lowres_data, mov_lowres_data,
    fix_lowres_spacing, mov_lowres_spacing,
    steps,
)

# resample the moving image data using the transform you found
aligned = apply_transform(
    fix_lowres_data, mov_lowres_data,
    fix_lowres_spacing, mov_lowres_spacing,
    transform_list=[affine,],
)

# write results
np.savetxt('./affine.mat', affine)
tifffile.imsave('./affine_lowres.tiff', aligned)

# load precomputed result (handy to use later if you've already run the cell)
affine = np.loadtxt('./affine.mat')
```
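The save/load step at the end of the script can be checked on its own: a 4x4 affine written with `np.savetxt` round-trips cleanly through `np.loadtxt`. A minimal sketch using a temporary file (the translation values here are arbitrary examples):

```python
import os
import tempfile

import numpy as np

# an example 4x4 affine: identity with an arbitrary small translation
affine = np.eye(4)
affine[:3, 3] = [1.0, 2.0, 3.0]

# write and read back, as the tutorial does with './affine.mat'
path = os.path.join(tempfile.mkdtemp(), "affine.mat")
np.savetxt(path, affine)
loaded = np.loadtxt(path)

# the matrix survives the text round-trip
assert loaded.shape == (4, 4)
assert np.allclose(affine, loaded)
```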