Lookup table for literature integration and fitting #64

Open
drodarie opened this issue Feb 13, 2024 · 3 comments · May be fixed by #65
@drodarie (Collaborator)
The conversion of literature measurements into average densities and the fitting steps of the pipeline take a lot of time, because they require an estimate of the volume and of the cell and neuron densities of each individual region of the annotation atlas (see the function measurement_to_average_density here, which leverages compute_region_volumes and calls compute_region_densities twice).

However, these estimations could be sped up if they were done together, since the filtering of each region's voxels could be shared, and if the results were stored in files (CSV or JSON) to be re-used (e.g. for fitting). Additionally, composition rules (parent/children region relations) can speed up the process if the regions are treated from leaf regions up to major regions.

I wonder whether this should be a separate, isolated step of the pipeline run before the conversion of literature measurements, or integrated directly into that step. Also, is it worth creating yet more intermediate files to speed up the fitting step?
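A minimal sketch of the lookup-table idea, assuming the annotation atlas is a 3D integer array of region ids and a uniform voxel volume (the function name build_region_lookup and both parameters are hypothetical, not part of the pipeline's API): a single pass over the annotation counts the voxels of every region at once, instead of filtering the volume once per region.

```python
import numpy as np

def build_region_lookup(annotation, voxel_volume):
    """Hypothetical sketch: count the voxels of every region id in one
    pass over the annotation volume, rather than filtering the volume
    separately for each region as the current per-region loop does."""
    ids, counts = np.unique(annotation, return_counts=True)
    return {
        int(region_id): {
            "voxel_count": int(count),
            "volume": float(count) * voxel_volume,
        }
        for region_id, count in zip(ids, counts)
    }
```

The resulting dictionary is what would be dumped to a JSON or CSV file and re-used by the fitting step, which is the "intermediate file" trade-off discussed above.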

@drodarie drodarie self-assigned this Feb 15, 2024
@drodarie (Collaborator, Author)

4818ae2 improves the computation of cell densities and volumes for the "literature measurements into average densities" step.
It could be even faster if the ids were treated from leaf regions up to major regions; I will look into it.

@drodarie (Collaborator, Author)

In the end, I am not so sure anymore that it is worth the effort to sort the regions.

For info, the idea was to compute the densities of each region based on its direct children only, instead of all of its children.
Since the regions would be sorted from leaf regions up to main regions, each calculation would rely on the previous ones.
In practice, this would have meant sorting the output of get_hierarchy_info differently.
It would have avoided computing the full list of descendants for each region and sped up the summing of cell and volume counts, especially for regions with many children. However, this process is fast anyway, so I am not sure it is worth all the refactoring.
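The direct-children idea above can be sketched as follows, under assumed inputs that are not the pipeline's actual data structures: a parent map (region id to parent id, None for the root) and each region's own voxel count. Processing regions from deepest to shallowest means every region only adds its already-final total to its direct parent, so no region ever enumerates its full set of descendants.

```python
def aggregate_counts(parent_of, own_counts):
    """Hypothetical sketch of the leaf-to-root aggregation: each region's
    total is its own voxel count plus the totals of its direct children.

    parent_of:  region id -> parent region id (None for the root).
    own_counts: region id -> count of voxels labelled with that id itself.
    """
    totals = dict(own_counts)

    def depth(region):
        # Distance from the root; used only to order the traversal.
        d = 0
        while parent_of[region] is not None:
            region = parent_of[region]
            d += 1
        return d

    # Deepest regions first, so a child's total is final before it is
    # folded into its parent.
    for region in sorted(totals, key=depth, reverse=True):
        parent = parent_of[region]
        if parent is not None:
            totals[parent] += totals[region]
    return totals
```

Each region is visited once and touches only its parent, instead of every ancestor summing over all of its descendants, which is where the speed-up for regions with many children would come from.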

@drodarie (Collaborator, Author)

I also now believe that it is not worth storing the results for the other steps (fitting and inhibitory neuron computation), as I initially thought, since most of the per-region voxel filtering is required in those steps anyway.
That said, it can still be useful to store the results for further analysis.

@drodarie drodarie linked a pull request Feb 16, 2024 that will close this issue