Evaluation is done via seml, which is run based on yaml config files.
These yaml files also contain information on:

- the datasets used (`path_adata`; the adata contains already-selected genes (the same in all runs to ensure results are comparable), normalized and count data, and the metadata used for integration (system + batch) and evaluation (e.g. cell types)),
- covariates (`group_key`, `batch_key`, `system_key`),
- where the results are saved (`path_save`, `output_dir`; the directories `path_save`, `path_save/logs`, and `path_save/integration` should be created manually in advance),
- additional seml configs,
- param configs and their names (necessary for easy plotting/evaluation afterwards, where different methods are compared),
- the random seeds to be used.

It would be best if a new method is simply added to the existing yaml files; then everything will use the right datasets and be saved to the right location, making results easy to compare afterwards.
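As a sketch of what such a config might look like (following seml's standard `seml`/`fixed`/`grid` layout; all paths and concrete values below are hypothetical placeholders, not the actual configs from the repo):

```yaml
# Hypothetical seml config sketch; the keys mirror those described above,
# but the values are placeholders.
seml:
  executable: run_seml.py
  output_dir: /path/to/results/logs

fixed:
  path_adata: /path/to/adata.h5ad  # preprocessed adata: selected genes, normalized + count data
  path_save: /path/to/results/     # create path_save, path_save/logs, path_save/integration beforehand
  group_key: cell_type             # used for evaluation
  batch_key: batch                 # used for integration
  system_key: system               # used for integration

grid:
  seed:
    type: choice
    options: [0, 1, 2]
```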
The arguments from the config are passed to the script that runs the python process (https://github.com/theislab/cross_species_prediction/blob/main/notebooks/eval/run_seml.py), which calls the integration script for my models or scvi; similarly, a new script to be called could be added for other methods (e.g. scGLUE).
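The dispatch could look roughly like this (a minimal sketch, not the actual code in `run_seml.py`; the script names and the `run_integration` helper are hypothetical):

```python
import subprocess  # in the real pipeline the script would be launched as a subprocess

# Map a method name to the script that runs its integration.
# All script names here are placeholders; adding support for a new
# method (e.g. scGLUE) would just mean adding another entry.
INTEGRATION_SCRIPTS = {
    "scvi": "run_scvi.py",
    "csp": "run_csp.py",        # placeholder for the repo's own models
    "scglue": "run_scglue.py",  # a new script added for scGLUE
}

def run_integration(method, params):
    """Build the command that calls the integration script for a method."""
    script = INTEGRATION_SCRIPTS[method]
    args = [f"--{key}={value}" for key, value in params.items()]
    # Returned for inspection here; the real pipeline would run it,
    # e.g. via subprocess.run(cmd, check=True).
    cmd = ["python", script] + args
    return cmd

print(run_integration("scvi", {"path_adata": "adata.h5ad", "seed": 0}))
```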
The script called by seml passes params to the integration scripts; only the parameters actually used by the integration script should be passed on, otherwise it won't work as it is currently set up.
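One way to keep this constraint explicit is to filter the seml params down to the integration entry point's own signature before forwarding them (a sketch, assuming the entry point is a Python function; `integrate` and `filter_params` are hypothetical names, not the repo's actual API):

```python
import inspect

def integrate(path_adata, batch_key, system_key, seed=0):
    """Hypothetical integration entry point; echoes its inputs."""
    return {"path_adata": path_adata, "batch_key": batch_key,
            "system_key": system_key, "seed": seed}

def filter_params(func, params):
    """Keep only the params the integration function actually accepts."""
    accepted = set(inspect.signature(func).parameters)
    return {k: v for k, v in params.items() if k in accepted}

# seml passes more params than the integration script uses
# (e.g. group_key, which is only needed for evaluation);
# dropping the extras avoids an unexpected-keyword-argument error.
all_params = {"path_adata": "adata.h5ad", "batch_key": "batch",
              "system_key": "system", "seed": 1, "group_key": "cell_type"}
result = integrate(**filter_params(integrate, all_params))
print(result)
```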
The results are then read from the shared directory for each dataset separately (for all successfully finished runs) and plotted in https://github.com/theislab/cross_species_prediction/blob/main/notebooks/eval/eval_summary_integration.py. I save jupyter notebooks only on the server, not in git (I use jupytext to convert ipynb to py). My code is in /lustre/groups/ml01/code/karin.hrovatin/cross_species_prediction
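Collecting the finished runs could be sketched like this (assuming a hypothetical layout in which each run writes a pickled metrics dict into `path_save/integration/`; the actual file format in the repo may differ):

```python
import pickle
from pathlib import Path

def collect_results(path_save):
    """Load the metrics of all successfully finished runs for one dataset.

    Assumes each run wrote a pickled dict of metrics into
    path_save/integration/ (hypothetical layout).
    """
    results = []
    for f in sorted(Path(path_save, "integration").glob("*.pkl")):
        with open(f, "rb") as fh:
            results.append(pickle.load(fh))
    return results
```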