Release/113 #951

Merged: 61 commits, merged on Sep 3, 2024

Commits
d924e50
Changed GENCODE Basic tag to 'gencode_basic' as per ENSINT-1885
sgiorgetti Apr 26, 2024
a0b629a
Minor fixes
Apr 29, 2024
ec58c92
Missing semicolon
Apr 29, 2024
6b97c44
Change glob parameters
Apr 29, 2024
5d99a70
Remove debugging
Apr 29, 2024
577c6ba
Fixes to prevent warnings
Apr 29, 2024
a29d373
Fix file paths
Apr 30, 2024
7fb4c04
Keep original files if no species file
May 2, 2024
c05a7d4
Merge pull request #920 from TamaraNaboulsi/xref/new_python_pipeline
dpopleton May 2, 2024
74296b6
Updated HGNC custom download URL
jmgonzmart May 10, 2024
66f0ff4
Merge pull request #923 from jmgonzmart/release/113
vinay-ebi May 10, 2024
0c7c41f
moved ensembl/xrefs to ensembl/production/xrefs
vinay-ebi May 15, 2024
0393357
Update xref.config
vinay-ebi May 15, 2024
4391762
base load changed to ensembl.production.xrefs
vinay-ebi May 16, 2024
fde8b7b
Merge pull request #925 from Ensembl/feature/nf_xref
vinay-ebi May 16, 2024
6f63ae1
Update requirements.txt
JAlvarezJarreta May 16, 2024
15b4ead
Updated default resources from 100mb to 1gb
dpopleton May 20, 2024
d9ac953
Merge pull request #928 from Ensembl/update/increase_rc
dpopleton May 20, 2024
2c33d61
Merge pull request #921 from Ensembl/main
vinay-ebi May 20, 2024
7859b31
Merge pull request #926 from JAlvarezJarreta/patch-2
vinay-ebi May 20, 2024
2021302
Updated JSON remodeler to stop Experimental push on scalar is now for…
dpopleton May 23, 2024
d1f6dbf
Merge branch 'release/113' into merge_conflicts
vinay-ebi May 24, 2024
cdce7ed
Merge pull request #930 from Ensembl/merge_conflicts
dpopleton May 24, 2024
3290f21
Update xref_all_sources.json for RGD
vinay-ebi May 24, 2024
808b346
Merge pull request #931 from Ensembl/xref_resource_update
vinay-ebi May 24, 2024
975fa4a
Update tag names and info relating to gencode genesets
nwillhoft May 28, 2024
95c1320
Bugfix for files not being overwritten
May 29, 2024
ba155dc
Merge pull request #929 from Ensembl/update/file_dump_perl_compatibility
dpopleton May 29, 2024
1c15ef9
Merge pull request #933 from TamaraNaboulsi/xref/bugfix
vinay-ebi May 29, 2024
3ad6a02
Fix for when no species file is found
Jun 4, 2024
700c96f
Merge pull request #934 from TamaraNaboulsi/xref/bugfix
vinay-ebi Jun 4, 2024
9586a89
Update ProteinFeatures analysis
vinay-ebi Jun 4, 2024
f065884
Update ProteinFeatures_conf.pm
vinay-ebi Jun 4, 2024
2b7fded
Merge pull request #935 from Ensembl/bugifx/proteinfeatures
vinay-ebi Jun 5, 2024
d283cf0
Update xref_sources.json
vinay-ebi Jun 19, 2024
d74f87e
Update xref_all_sources.json
vinay-ebi Jun 19, 2024
4c8fcf0
Merge pull request #938 from Ensembl/bugfix/update_xenbase
vinay-ebi Jun 19, 2024
b2da23c
Fixed as per ENSPROD-9493
sgiorgetti Jun 20, 2024
7463aca
Merge pull request #939 from Ensembl/fix113/alphafold-displaylabel
vinay-ebi Jun 21, 2024
7e4aca5
Fix use of keys on a scalar
jgtate Jun 24, 2024
a3ff2b4
Fixes for 113 issues
Jun 26, 2024
39c3a57
Merge pull request #941 from TamaraNaboulsi/xref/fixes
dpopleton Jun 27, 2024
6549a72
Merge pull request #940 from Ensembl/fix-keys-on-scalar
jgtate Jul 1, 2024
ee5da99
Update SourceFactory.pm
vinay-ebi Jul 5, 2024
2976823
Merge pull request #942 from Ensembl/bug/experimenta_scalar
vinay-ebi Jul 5, 2024
d1ce293
Updated Base class with slurm default resource 1GB
vinay-ebi Jul 19, 2024
772caf1
Update Typo GB
vinay-ebi Jul 19, 2024
ce884ca
Merge pull request #918 from Ensembl/fix113/gencode_basic
vinay-ebi Jul 22, 2024
14500c9
Merge pull request #932 from nwillhoft/fix113/update_tag_info
vinay-ebi Jul 22, 2024
2b01517
decompress upidump.lis.gz file before load to hive db
vinay-ebi Jul 22, 2024
434e865
delete the upidump file after loading into hive db
vinay-ebi Jul 22, 2024
693c899
Update modules/Bio/EnsEMBL/Production/Pipeline/ProteinFeatures/LoadUn…
vinay-ebi Jul 22, 2024
51cc0c3
Update modules/Bio/EnsEMBL/Production/Pipeline/ProteinFeatures/LoadUn…
vinay-ebi Jul 22, 2024
7cbce1e
Merge pull request #943 from Ensembl/feature/slurm_resource
vinay-ebi Jul 22, 2024
515063a
Merge pull request #946 from Ensembl/bugfix/pf_uniparc_gunzip
vinay-ebi Jul 23, 2024
0eae3f6
Include human and mouse symlinks
pblins Jul 25, 2024
3542ca1
include Mouse and Human symlinks
Jul 25, 2024
ac6c741
include Mouse and Human symlinks
Jul 25, 2024
8380bd1
Merge pull request #947 from pblins/include-human-and-mouse
vinay-ebi Jul 25, 2024
bbc1362
Update ProteinFeatures_conf.pm
vinay-ebi Aug 13, 2024
d26c79e
Merge pull request #949 from Ensembl/bugfix/pf_analysis_lc
vinay-ebi Aug 13, 2024

Files changed

@@ -164,7 +164,7 @@ sub run {
-db => 'alphafold',
-db_version => $alpha_version,
-db_file => $self->param('db_dir') . '/accession_ids.csv',
-display_label => 'AlphaFold DB import',
-display_label => 'AFDB-ENSP mapping',
-displayable => '1',
-description => 'Protein features based on AlphaFold predictions, mapped with GIFTS or UniParc'
);
3 changes: 2 additions & 1 deletion modules/Bio/EnsEMBL/Production/Pipeline/GTF/DumpFile.pm
@@ -383,7 +383,8 @@ feature for the position of this on the genome
- cds_start_NF: the coding region start could not be confirmed
- mRNA_end_NF: the mRNA end could not be confirmed
- mRNA_start_NF: the mRNA start could not be confirmed.
- basic: the transcript is part of the gencode basic geneset
- gencode_basic: the transcript is part of the gencode basic geneset
- gencode_primary: the transcript is part of the gencode primary geneset
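
For illustration, a hypothetical attributes column from a dumped GTF line carrying the renamed tag described above (the stable IDs are made up; gencode_primary would appear the same way):

gene_id "ENSG00000000001"; transcript_id "ENST00000000001"; tag "gencode_basic";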

Comments

@@ -218,7 +218,7 @@ sub all_hashes {
} ## end foreach my $slice (@slices)

for my $seq_type (keys %$batch) {
for my $attrib_table (keys $batch->{$seq_type}) {
for my $attrib_table (keys %{$batch->{$seq_type}}) {
$attribute_adaptor->store_batch_on_Object($attrib_table, $batch->{$seq_type}->{$attrib_table}, 1000);
}
}
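
For context, a minimal standalone sketch of the class of fix made here (and in similar hunks below), not the pipeline's actual code: calling keys directly on a hash reference was an experimental feature that became fatal in Perl 5.24, so the reference has to be dereferenced explicitly with %{ ... }.

use strict;
use warnings;

my $batch = { dna => { seq_region_attrib => [ 1, 2, 3 ] } };    # toy data, not the pipeline's structures

for my $seq_type ( keys %{$batch} ) {                           # 'keys $batch' dies on modern Perl
    for my $attrib_table ( keys %{ $batch->{$seq_type} } ) {
        print "$seq_type / $attrib_table\n";
    }
}
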
@@ -292,7 +292,10 @@ sub merge_xrefs {
$obj->{$dbname} = [];
}
for my $ann ( @{ $subobj->{$dbname} } ) {
push $obj->{$dbname}, $self->copy_hash($ann);
if (ref($obj->{$dbname}) ne 'ARRAY') {
$obj->{$dbname} = [];
}
push @{ $obj->{$dbname} }, $self->copy_hash($ann);
}
}
}
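
The same restriction applies to push: pushing onto a reference directly ('push $obj->{$dbname}, ...') is likewise fatal on modern Perl, hence the explicit @{ ... } dereference and the defensive re-initialisation above. A minimal hypothetical sketch:

my $obj    = {};
my $dbname = 'GO';                                              # made-up source name
$obj->{$dbname} = [] unless ref( $obj->{$dbname} ) eq 'ARRAY';
push @{ $obj->{$dbname} }, { primary_id => 'GO:0005515' };      # 'push $obj->{$dbname}, ...' would die
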
@@ -59,7 +59,7 @@ sub write_output {
my $compara_param = $self->param('compara');
my $cleanup_dir = $self->param('cleanup_dir');

foreach my $pair (keys $sp_config) {
foreach my $pair (keys %{$sp_config}) {
my $compara = $sp_config->{$pair}->{'compara'};
if (defined $compara_param && $compara ne $compara_param) {
print STDERR "Skipping $compara\n";
44 changes: 17 additions & 27 deletions modules/Bio/EnsEMBL/Production/Pipeline/PipeConfig/Base_conf.pm
@@ -66,14 +66,14 @@ sub beekeeper_extra_cmdline_options {
sub resource_classes {
my $self = shift;

## String it together
my %time = (
H => ' --time=1:00:00',
D => ' --time=1-00:00:00',
W => ' --time=7-00:00:00'
);

## Sting it together
my %time = (H => ' --time=1:00:00',
D => ' --time=1-00:00:00',
W => ' --time=7-00:00:00',);

my %memory = ('100M' => '100',
'200M' => '200',
my %memory = (
'500M' => '500',
'1GB' => '1000',
'2GB' => '2000',
@@ -89,40 +89,30 @@ sub resource_classes {
);

my $dq = ' --partition=datamover';

my %output = (
#Default is a duplicate of 100M
'default' => { 'LSF' => '-q ' . $self->o('production_queue'), 'SLURM' => $time{'H'} . ' --mem=' . $memory{'100M'} . 'm' },
'default_D' => { 'LSF' => '-q ' . $self->o('production_queue'), 'SLURM' => $time{'D'} . ' --mem=' . $memory{'100M'} . 'm' },
'default_W' => { 'LSF' => '-q ' . $self->o('production_queue'), 'SLURM' => $time{'W'} . ' --mem=' . $memory{'100M'} . 'm' },
'default' => { 'SLURM' => $time{'H'} . ' --mem=' . $memory{'1GB'} . 'm' },
'default_D' => { 'SLURM' => $time{'D'} . ' --mem=' . $memory{'1GB'} . 'm' },
'default_W' => { 'SLURM' => $time{'W'} . ' --mem=' . $memory{'1GB'} . 'm' },
#Data mover nodes
'dm' => { 'LSF' => '-q ' . $self->o('datamover_queue'), 'SLURM' => $dq . $time{'H'} . ' --mem=' . $memory{'100M'} . 'm' },
'dm_D' => { 'LSF' => '-q ' . $self->o('datamover_queue'), 'SLURM' => $dq . $time{'D'} . ' --mem=' . $memory{'100M'} . 'm' },
'dm_W' => { 'LSF' => '-q ' . $self->o('datamover_queue'), 'SLURM' => $dq . $time{'W'} . ' --mem=' . $memory{'100M'} . 'm' },
'dm32_D' => { 'LSF' => '-q ' . $self->o('datamover_queue') . ' -M 32000 -R "rusage[mem=32000]"', 'SLURM' => $dq . $time{'D'} . ' --mem=' . $memory{'32GB'} . 'm' },
'dmMAX_D' => { 'LSF' => '-q ' . $self->o('datamover_queue') . ' -M 200000 -R "rusage[mem=200000]"', 'SLURM' => $dq . $time{'D'} . ' --mem=' . $memory{'200GB'} . 'm' },
'dm' => { 'SLURM' => $dq . $time{'H'} . ' --mem=' . $memory{'1GB'} . 'm' },
'dm_D' => { 'SLURM' => $dq . $time{'D'} . ' --mem=' . $memory{'1GB'} . 'm' },
'dm_W' => { 'SLURM' => $dq . $time{'W'} . ' --mem=' . $memory{'1GB'} . 'm' },
'dm32_D' => { 'SLURM' => $dq . $time{'D'} . ' --mem=' . $memory{'32GB'} . 'm' },
'dmMAX_D' => { 'SLURM' => $dq . $time{'D'} . ' --mem=' . $memory{'200GB'} . 'm' },
);
#Create a dictionary of all possible time and memory combinations. Format would be:
#2G={
# 'SLURM' => ' --time=1:00:00 --mem=2000m',
# 'LSF' => '-q $self->o(production_queue) -M 2000 -R "rusage[mem=2000]"'
# };

while ((my $time_key, my $time_value) = each(%time)) {
while ((my $memory_key, my $memory_value) = each(%memory)) {
if ($time_key eq 'H') {
$output{$memory_key} = { 'LSF' => '-q ' . $self->o('production_queue') . ' -M ' . $memory_value . ' -R "rusage[mem=' . $memory_value . ']"',
'SLURM' => $time_value . ' --mem=' . $memory_value . 'm' }
$output{$memory_key} = { 'SLURM' => $time_value . ' --mem=' . $memory_value . 'm' };
}
else {
$output{$memory_key . '_' . $time_key} = { 'LSF' => '-q ' . $self->o('production_queue') . ' -M ' . $memory_value . ' -R "rusage[mem=' . $memory_value . ']"',
'SLURM' => $time_value . ' --mem=' . $memory_value . 'm' }
$output{$memory_key . '_' . $time_key} = { 'SLURM' => $time_value . ' --mem=' . $memory_value . 'm' };
}
}
}

return \%output;

}

1;
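
For reference, a rough sketch of the entries the loop above now generates for one memory class, using the %time and %memory values shown in this hunk (LSF specifications are no longer emitted):

'2GB'   => { 'SLURM' => ' --time=1:00:00 --mem=2000m' },
'2GB_D' => { 'SLURM' => ' --time=1-00:00:00 --mem=2000m' },
'2GB_W' => { 'SLURM' => ' --time=7-00:00:00 --mem=2000m' },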
@@ -75,7 +75,7 @@ sub default_options {

interpro_file => 'names.dat',
interpro2go_file => 'interpro2go',
uniparc_file => 'upidump.lis',
uniparc_file => 'upidump.lis.gz',
mapping_file => 'idmapping_selected.tab.gz',

# Files are retrieved and stored locally with the same name.
@@ -227,6 +227,30 @@ sub default_options {
ipscan_xml => 'TMHMM',
ipscan_lookup => 0,
},
{
db => 'Phobius',
ipscan_lookup => 1,
ipscan_name => 'Phobius',
ipscan_xml => 'PHOBIUS',
logic_name => 'phobius',
program => 'InterProScan',
},
{
db => 'SignalP_GRAM_POSITIVE',
ipscan_lookup => 1,
ipscan_name => 'SignalP_GRAM_POSITIVE',
ipscan_xml => 'SIGNALP_GRAM_POSITIVE',
logic_name => 'signalp_gram_positive',
program => 'InterProScan',
},
{
db => 'SignalP_GRAM_NEGATIVE',
ipscan_lookup => 1,
ipscan_name => 'SignalP_GRAM_NEGATIVE',
ipscan_xml => 'SIGNALP_GRAM_NEGATIVE',
logic_name => 'signalp_gram_negative',
program => 'InterProScan',
},
#seg replaces low complexity regions in protein sequences with X characters(https://rothlab.ucdavis.edu/genhelp/seg.html)
{
logic_name => 'seg',
@@ -173,6 +173,7 @@ sub pipeline_analyses {
base_path => $self->o('base_path'),
release => $self->o('release')
},
-max_retry_count => 0,
-flow_into => {
'2->A' => 'dump_xref',
'A->1' => 'schedule_mapping'
@@ -187,6 +188,7 @@
release => $self->o('release'),
config_file => $self->o('config_file')
},
-max_retry_count => 0,
-flow_into => { 2 => 'align_factory' },
-rc_name => '1GB',
},
@@ -21,16 +21,27 @@ package Bio::EnsEMBL::Production::Pipeline::ProteinFeatures::LoadUniParc;

use strict;
use warnings;

use IO::Uncompress::Gunzip qw(gunzip $GunzipError);
use File::Basename;

use base ('Bio::EnsEMBL::Production::Pipeline::Common::Base');

sub run {
my ($self) = @_;
my $uniparc_file = $self->param_required('uniparc_file_local');


if (-e $uniparc_file) {

#check if uniparc file is compressed
if ($uniparc_file =~ /\.gz$/){
my $uniparc_file_decompress = $uniparc_file;
$uniparc_file_decompress =~ s/\.gz$//;
gunzip $uniparc_file => $uniparc_file_decompress or $self->throw("gunzip failed: $GunzipError");
#delete compressed file .gz
unlink $uniparc_file or $self->throw("unable to delete $uniparc_file: $!");
$uniparc_file = $uniparc_file_decompress;
}

my $dbh = $self->hive_dbh;
my $sql = "LOAD DATA LOCAL INFILE '$uniparc_file' INTO TABLE uniparc FIELDS TERMINATED BY ' '";
$dbh->do($sql) or self->throw($dbh->errstr);
@@ -41,9 +52,14 @@ sub run {
my $index_2 = 'ALTER TABLE uniparc ADD KEY md5sum_idx (md5sum) USING HASH';
$dbh->do($index_2) or self->throw($dbh->errstr);

#delete upidump file from pipeline direcotry after loading into hive db
unlink $uniparc_file or $self->throw("unable to delete $uniparc_file: $!");

} else {
$self->throw("Checksum file '$uniparc_file' does not exist");
}


}

1;
14 changes: 11 additions & 3 deletions modules/Bio/EnsEMBL/Production/Pipeline/Xrefs/Alignment.pm
@@ -84,10 +84,18 @@ sub run {
$exe =~ s/\n//g;
my $command_string = sprintf ("%s --showalignment FALSE --showvulgar FALSE --ryo '%s' --gappedextension FALSE --model 'affine:local' %s --subopt no --query %s --target %s --querychunktotal %s --querychunkid %s", $exe, $ryo, $method, $source, $target, $max_chunks, $chunk);
my $output = `$command_string`;
my @hits = grep {$_ =~ /^xref/} split "\n", $output; # not all lines in output are alignments

while (my $hit = shift @hits) {
print $fh $hit . "\n";
if ($? == 0) {
my @hits = grep {$_ =~ /^xref/} split "\n", $output; # not all lines in output are alignments

while (my $hit = shift @hits) {
print $fh $hit . "\n";
}
} else {
my $job = $self->input_job();
$job->adaptor()->db()->get_LogMessageAdaptor()->store_job_message($job->dbID(), $output, 'WORKER_ERROR');

throw("Exonerate failed with exit_code: $?\n");
}

$fh->close();
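
For context, a minimal standalone sketch of the error handling added above: after backticks, $? holds the child's wait status, so zero means success and the real exit code sits in the high byte (the command here is a placeholder, not the pipeline's exonerate invocation).

my $output = `some_command 2>&1`;                  # placeholder command
if ( $? != 0 ) {
    my $exit_code = $? >> 8;
    die "command failed with exit code $exit_code: $output";
}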
17 changes: 12 additions & 5 deletions modules/Bio/EnsEMBL/Production/Pipeline/Xrefs/ScheduleSource.pm
@@ -127,21 +127,28 @@ sub run {
} else {
# Create list of files
opendir(my $dir_handle, $file_name);
my @list_files = readdir($dir_handle);
my @temp_list_files = readdir($dir_handle);
closedir($dir_handle);

my @list_files;
foreach my $file (@temp_list_files) {
next if ($file =~ /^\./);
push(@list_files, $file_name . "/" . $file);
}
if ($preparse) { @list_files = $preparse; }

# For Uniprot and Refseq, files might have been split by species
if (!$preparse && ($name =~ /^Uniprot/ || $name =~ /^RefSeq_peptide/ || $name =~ /^RefSeq_dna/)) {
my $file_prefix = ($name =~ /SPTREMBL/ ? 'uniprot_trembl' : ($name =~ /SWISSPROT/ ? 'uniprot_sprot' : ($name =~ /_dna/ ? 'refseq_rna' : 'refseq_protein')));
@list_files = glob($file_name . "/**/" . $file_prefix . "-" . $species_id);
$_ = basename(dirname($_)) . "/" . basename($_) foreach (@list_files);
my @species_list_files = glob($file_name . "/**/**/**/**/" . $file_prefix . "-" . $species_id);
if (scalar(@species_list_files) > 0) {
@list_files = @species_list_files;
}
}

foreach my $file (@list_files) {
next if ($file =~ /^\./);
$file =~ s/\n//;
$file = $file_name . "/" . $file;
if (!-f $file) { next; }
if (defined $release_file and $file eq $release_file) { next; }

$dataflow_params = {
@@ -203,7 +203,7 @@
{
"name" : "HGNC",
"parser" : "HGNCParser",
"file" : "https://www.genenames.org/cgi-bin/download?col=gd_hgnc_id&col=gd_app_sym&col=gd_app_name&col=gd_prev_sym&col=gd_aliases&col=gd_pub_eg_id&col=gd_pub_ensembl_id&col=gd_pub_refseq_ids&col=gd_ccds_ids&col=gd_lsdb_links&status=Approved&status_opt=2&where=&order_by=gd_app_sym_sort&format=text&limit=&hgnc_dbtag=on&submit=submit",
"file" : "https://www.genenames.org/cgi-bin/download/custom?col=gd_hgnc_id&col=gd_app_sym&col=gd_app_name&col=gd_prev_sym&col=gd_aliases&col=gd_pub_eg_id&col=gd_pub_ensembl_id&col=gd_pub_refseq_ids&col=gd_ccds_ids&col=gd_lsdb_links&status=Approved&status_opt=2&where=&order_by=gd_app_sym_sort&format=text&limit=&hgnc_dbtag=on&submit=submit",
"db" : "ccds",
"priority" : 3
}
@@ -226,7 +226,7 @@
{
"name" : "Xenbase",
"parser" : "XenopusJamboreeParser",
"file" : "http://ftp.xenbase.org/pub/GenePageReports/GenePageEnsemblModelMapping.txt",
"file" : "http://ftp.xenbase.org/pub/GenePageReports/GenePageEnsemblModelMapping_4.1.txt",
"priority" : 1
},
{
@@ -241,7 +241,7 @@
{
"name" : "HGNC",
"parser" : "HGNCParser",
"file" : "https://www.genenames.org/cgi-bin/download?col=gd_hgnc_id&col=gd_app_sym&col=gd_app_name&col=gd_prev_sym&col=gd_aliases&col=gd_pub_eg_id&col=gd_pub_ensembl_id&col=gd_pub_refseq_ids&col=gd_ccds_ids&col=gd_lsdb_links&status=Approved&status_opt=2&where=&order_by=gd_app_sym_sort&format=text&limit=&hgnc_dbtag=on&submit=submit",
"file" : "https://www.genenames.org/cgi-bin/download/custom?col=gd_hgnc_id&col=gd_app_sym&col=gd_app_name&col=gd_prev_sym&col=gd_aliases&col=gd_pub_eg_id&col=gd_pub_ensembl_id&col=gd_pub_refseq_ids&col=gd_ccds_ids&col=gd_lsdb_links&status=Approved&status_opt=2&where=&order_by=gd_app_sym_sort&format=text&limit=&hgnc_dbtag=on&submit=submit",
"db" : "ccds",
"priority" : 3
}
@@ -254,7 +254,7 @@
{
"name" : "Xenbase",
"parser" : "XenopusJamboreeParser",
"file" : "http://ftp.xenbase.org/pub/GenePageReports/GenePageEnsemblModelMapping.txt",
"file" : "http://ftp.xenbase.org/pub/GenePageReports/GenePageEnsemblModelMapping_4.1.txt",
"priority" : 1
},
{
Expand All @@ -269,7 +269,7 @@
{
"name" : "HGNC",
"parser" : "HGNCParser",
"file" : "https://www.genenames.org/cgi-bin/download?col=gd_hgnc_id&col=gd_app_sym&col=gd_app_name&col=gd_prev_sym&col=gd_aliases&col=gd_pub_eg_id&col=gd_pub_ensembl_id&col=gd_pub_refseq_ids&col=gd_ccds_ids&col=gd_lsdb_links&status=Approved&status_opt=2&where=&order_by=gd_app_sym_sort&format=text&limit=&hgnc_dbtag=on&submit=submit",
"file" : "https://www.genenames.org/cgi-bin/download/custom?col=gd_hgnc_id&col=gd_app_sym&col=gd_app_name&col=gd_prev_sym&col=gd_aliases&col=gd_pub_eg_id&col=gd_pub_ensembl_id&col=gd_pub_refseq_ids&col=gd_ccds_ids&col=gd_lsdb_links&status=Approved&status_opt=2&where=&order_by=gd_app_sym_sort&format=text&limit=&hgnc_dbtag=on&submit=submit",
"db" : "ccds",
"priority" : 3
}