
mount field to process new data assets in a pipeline #111

Closed
arielleleon opened this issue May 28, 2024 · 4 comments · Fixed by #116

arielleleon commented May 28, 2024

Is your feature request related to a problem? Please describe.
I cannot specify a mount point for a capsule in the current architecture. This is required to trigger a pipeline with a new data asset.

Describe the solution you'd like
I would like a mount field added to the BasicJobUploadConfig so that transfer-service can specify a mount point when a pipeline is processed.
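
Roughly what I have in mind, just as a sketch (the surrounding fields and the `mount` name here are placeholders, not the real schema):

```python
from typing import Optional

from pydantic import BaseModel, Field


class BasicJobUploadConfig(BaseModel):
    """Sketch of the upload job config; only `mount` is the proposed change,
    and the other fields are illustrative placeholders."""

    s3_bucket: str = Field(..., description="Bucket the data asset is uploaded to")
    process_capsule_id: Optional[str] = Field(
        default=None, description="Capsule or pipeline to trigger after upload"
    )
    # Proposed addition: optional, so existing jobs stay valid.
    mount: Optional[str] = Field(
        default=None,
        description="Mount point to attach the new data asset to when the pipeline runs",
    )
```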

dyf (Member) commented Jun 3, 2024

@arielleleon @jtyoung84 I'm wondering if we also want to have a library of configurations that know how to set mount points and other properties by default. For example, the ecephys pipeline cares about two mount points (one for the data, one for the noise classifier model). The single-plane-ophys and multiplane-ophys pipelines would likewise have their own.

My concern with this is that I think the trigger capsule needs to know about it as well.
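
To make that concrete, something like this sketch (platform keys and mount names are illustrative only, not real configuration values):

```python
# Sketch of per-platform defaults; platform keys and mount names are
# illustrative, not the actual values any pipeline uses.
DEFAULT_PIPELINE_MOUNTS = {
    "ecephys": [
        {"mount": "ecephys", "purpose": "raw data asset"},
        {"mount": "noise_classifier_model", "purpose": "noise classifier model"},
    ],
    "single-plane-ophys": [
        {"mount": "ophys", "purpose": "raw data asset"},
    ],
    "multiplane-ophys": [
        {"mount": "ophys", "purpose": "raw data asset"},
    ],
}


def default_mounts_for(platform: str) -> list:
    """Return the default mounts for a platform, or an empty list if unknown."""
    return DEFAULT_PIPELINE_MOUNTS.get(platform, [])
```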

jtyoung84 self-assigned this Jun 3, 2024
arielleleon (Author) commented

@dyf @jtyoung84 I'm fine with a default configuration per platform / modality. But do we need to care about the trigger capsule anymore? It looks like the aind-data-transfer basic-job can take in any capsule id.

dyf (Member) commented Jun 3, 2024

@arielleleon If a scientist wants to trigger reprocessing without re-uploading themselves, one way to do that is to use the trigger capsule app panel. It is (debatably) convenient to be able to do this within Code Ocean rather than learning to use the API.

jtyoung84 (Contributor) commented

@arielleleon Currently, the trigger capsule is run by default. It registers a data asset, runs a pipeline, monitors the pipeline, and captures the results once the pipeline is finished. We can update the BasicUploadJobConfigs class to handle other cases, although I'd like to keep things as backwards compatible as possible.
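
For reference, that flow is roughly the following (placeholder helpers only, not the capsule's real functions):

```python
# Rough sketch of the flow above; every helper is a placeholder stub standing
# in for Code Ocean API calls, not the capsule's real implementation.


def register_data_asset(upload_location: str) -> str:
    """Placeholder: register the uploaded files as a data asset, return its id."""
    raise NotImplementedError


def run_pipeline(pipeline_id: str, data_asset_id: str, mount: str) -> str:
    """Placeholder: start the pipeline with the asset attached at `mount`."""
    raise NotImplementedError


def wait_for_completion(computation_id: str) -> None:
    """Placeholder: poll until the pipeline computation finishes."""
    raise NotImplementedError


def capture_results(computation_id: str) -> str:
    """Placeholder: capture the results as a derived data asset, return its id."""
    raise NotImplementedError


def trigger(upload_location: str, pipeline_id: str, mount: str) -> str:
    """Register -> run -> monitor -> capture, mirroring the steps above."""
    data_asset_id = register_data_asset(upload_location)
    computation_id = run_pipeline(pipeline_id, data_asset_id, mount=mount)
    wait_for_completion(computation_id)
    return capture_results(computation_id)
```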

@dyf I discussed this with Alessio. I think it should be fine to have defaults in the trigger capsule repo itself. I'm probably fine with having the defaults checked into the repo, but we can pull them from a database if needed.
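
For example, the defaults could be a file checked into the repo behind a single loader, so swapping in a database later only touches one place (the file name here is hypothetical):

```python
import json
from pathlib import Path

# Hypothetical file of per-platform mount defaults checked into the repo.
DEFAULTS_FILE = Path(__file__).parent / "pipeline_mount_defaults.json"


def load_mount_defaults() -> dict:
    """Load per-platform mount defaults from the checked-in JSON file.

    If the defaults move to a database later, only this loader needs to change.
    """
    with DEFAULTS_FILE.open() as f:
        return json.load(f)
```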
