The conversion of literature measurements into average densities and the fitting steps of the pipeline take (a lot of) time because they require an estimate of the volume, cell density and neuron density of each individual region of the annotation atlas (see the function measurement_to_average_density here, which leverages compute_region_volumes and calls compute_region_densities twice).
However, these estimations could be sped up if they were performed together with the filtering of the regions' voxels, and if the results were stored in files (CSV or JSON) to be re-used (e.g. for fitting). Additionally, composition rules (parent/children region relations) can speed up the process if the regions are processed from leaf regions up to major regions.
I wonder whether this should be a separate, isolated step of the pipeline run before the conversion of literature measurements, or integrated directly into that step. Also, is it worth creating yet more intermediate files just to speed up the fitting step?
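To illustrate the first idea, here is a minimal sketch of computing the voxel volume and cell count of every annotated region in a single pass over the atlas, so the results can be cached and re-used by later steps. The function name `compute_region_stats` and its signature are hypothetical, not the pipeline's actual API, and the sketch assumes region ids are small non-negative integers (for sparse or very large ids, `np.unique(..., return_inverse=True)` would be used instead of `np.bincount`):

```python
import numpy as np

def compute_region_stats(annotation, cell_counts, voxel_volume):
    """One pass over the annotation volume: voxel and cell counts per id.

    annotation: 3D int array of region ids (the annotation atlas)
    cell_counts: 3D float array of cell counts per voxel
    voxel_volume: volume of a single voxel (e.g. in mm^3)
    Returns {region_id: (volume, cell_count, density)} for every id present.
    """
    flat_ann = annotation.ravel()
    flat_cells = cell_counts.ravel()
    # Count voxels and sum cells per annotation id in one vectorized pass.
    voxel_counts = np.bincount(flat_ann)
    cell_sums = np.bincount(flat_ann, weights=flat_cells)
    stats = {}
    for rid in np.nonzero(voxel_counts)[0]:
        if rid == 0:  # 0 is assumed to mark voxels outside any region
            continue
        volume = voxel_counts[rid] * voxel_volume
        cells = float(cell_sums[rid])
        stats[int(rid)] = (volume, cells, cells / volume)
    return stats
```

The returned dictionary is straightforward to dump to CSV or JSON for re-use in the fitting step; aggregating these per-id counts up the region hierarchy is a separate concern.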
4818ae2 improves the computation of cell densities and volumes for the "literature measurements into average densities" step.
It could be even faster if the ids were processed from leaf regions up to major regions; I will look into it.
In the end, I am not so sure anymore that it is worth the effort to sort the regions.
For info, the idea was to compute the densities of each region from its direct children only, instead of from all of its descendants.
Since the regions would be sorted from leaf to main regions, each calculation could rely on the previous ones.
Effectively, this would have meant sorting the output of get_hierarchy_info differently.
This would have avoided computing the full list of descendants for each region and would have sped up the summation of cell and volume counts, especially for regions with many children. However, this process is fast anyway, so I am not sure it is worth all the refactoring.
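For reference, the leaf-to-root aggregation described above can be sketched as follows. Rather than explicitly sorting the regions, memoized recursion realizes the same order implicitly: a region's total is its own voxel count plus the already-computed totals of its direct children. The helper name `aggregate_counts` and the dictionary-based hierarchy are hypothetical illustrations, not the pipeline's data structures:

```python
def aggregate_counts(hierarchy, own_counts):
    """Aggregate counts up the region tree using only direct children.

    hierarchy: {region_id: [direct child ids]} ([] for leaf regions)
    own_counts: {region_id: count of voxels annotated with exactly that id}
    Returns {region_id: total count over the region and all its descendants},
    without ever enumerating a region's full descendant list.
    """
    totals = {}

    def total(rid):
        # Memoization: each region is computed once, children before parents.
        if rid not in totals:
            totals[rid] = own_counts.get(rid, 0) + sum(
                total(child) for child in hierarchy.get(rid, [])
            )
        return totals[rid]

    for rid in hierarchy:
        total(rid)
    return totals
```

With this scheme, the cost of the summation is proportional to the number of parent-child edges rather than to the total number of (region, descendant) pairs.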
I also believe now that it is not worth storing the results for the other steps (fitting and inhibitory neuron computation), as I initially thought, since most of the per-region voxel filtering is required in those steps anyway.
That said, it can still be useful to store the results for further analysis.