A tool to implement Genomic Prediction Experiments using Deep Learning

lauzingaretti/DeepGP




Harnessing deep learning for complex trait Genomic Prediction (GP) in plant breeding programs

Laura Zingaretti ([email protected]) and Miguel Pérez-Enciso ([email protected])


A Python-based tool to implement Genomic Prediction experiments using Deep Learning

The Software

The DeepGP package implements Multilayer Perceptron (MLP) networks, Convolutional Neural Networks (CNN), Ridge Regression and Lasso Regression for Genomic Prediction purposes. Our package takes advantage of the Autonomio Talos [Computer software] (2019) (http://github.com/autonomio/talos) functionalities to optimize the network hyperparameters. It also implements uni- and multi-trait experiments with a set of normalization options. We believe it can be a useful tool for people who want an easy way to implement Deep Learning models for predicting complex traits. For a more 'didactic' site on DL, please check: https://github.com/miguelperezenciso/DLpipeline

Citation

If you find this resource useful, please cite:

Zingaretti LM, Gezan SA, Ferrão LFV, Osorio LF, Monfort A, Muñoz PR, Whitaker VM, Pérez-Enciso M. 2020. Exploring Deep Learning for Complex Trait Genomic Prediction in Polyploid Outcrossing Species. Frontiers in Plant Science 11:25

Pérez-Enciso M, Zingaretti LM. 2019. A Guide on Deep Learning for Complex Trait Genomic Prediction. Genes, 10, 553.

and possibly

Bellot P, De Los Campos G, Pérez-Enciso M. 2018. Can Deep Learning Improve Genomic Prediction of Complex Human Traits? Genetics 210:809-819.

Introduction

Deep Learning (DL) techniques comprise a heterogeneous collection of Machine Learning algorithms that have excelled at prediction tasks. All DL algorithms employ multiple layers of neurons, and numerous architectures have been proposed. DL is relatively easy to implement (https://keras.io/why-use-keras/), but it is not 'plug and play': its performance depends heavily on the hyperparameter combination.

Genomic Selection is the breeding strategy of predicting complex traits from genome-wide genetic markers, and it is standard in many animal and plant breeding schemes. Its powerful predictive ability makes DL a suitable tool for these prediction problems. Here, we deliver a tool to predict complex traits from whole-genome SNP information in an easy way. This is possible thanks to the Keras API (https://keras.io/) and TensorFlow (https://www.tensorflow.org/), which allow all intricacies to be encapsulated in very simple statements. TensorFlow is a machine-learning library developed by Google. We also take advantage of scikit-learn (https://scikit-learn.org/stable/) to provide classical penalized models for prediction purposes.

The novelty of our tool lies in the fact that it allows tuning the most important hyperparameters of CNN and MLP architectures.

GP main features

  • It allows tuning MLP and CNN architectures for GP purposes.
  • It uses a random search for hyperparameter optimization.
  • It is easy to use.
  • It delivers all the parameters of the evaluated models, allowing you to explore the influence of the hyperparameters.

What is a hyperparameter in DL jargon?

Hyperparameters are variables that need to be set before a DL algorithm can be applied to a dataset. Even a simple neural network has numerous hyperparameters to configure. The optimal combination of hyperparameters depends on the specific problem, so a suitable set must be found each time. This makes our tool a good solution for applying DL to Genomic Prediction problems.

Deep Learning hyperparameters

| Term | Definition |
| --- | --- |
| Activation function | The mathematical function f that produces a neuron's output f(w'x + b), where w is a weight vector, x is an input vector, and b is the bias, a scalar. Both w and b are estimated for every neuron. |
| Batch | In Stochastic Gradient Descent algorithms, each of the sample partitions within a given epoch. |
| Dropout | A given percentage of neuron outputs is set to zero. The percentage is kept constant, but the specific neurons are randomly sampled in every iteration. The goal of dropout is to avoid overfitting. |
| Kernel = Filter = Tensor | In DL terminology, a kernel is a multidimensional array of weights. |
| Learning rate | Specifies the step size of each gradient update. |
| Loss | The loss function quantifies the difference between observed and predicted target variables. |
| Neural layer | 'Neurons' are arranged in layers, i.e., groups of neurons that take the output of the previous group of neurons as input. |
| Neuron | The basic unit of a DL algorithm. A 'neuron' takes as input a list of variable values (x) multiplied by 'weights' (w) and produces as output the non-linear transformation f(w'x + b), where f is the activation function and b is the bias. Both w and b are estimated for each neuron such that the loss is minimized across the whole network. |
| Optimizer | The algorithm that finds the weights (w and b) minimizing the loss function. Most DL optimizers are based on Stochastic Gradient Descent (SGD). |
| Pooling | A pooling function substitutes the output of the network at a given location with a summary statistic of the neighboring outputs. It is one of the crucial steps in a CNN architecture. The most common pooling operations are maximum, mean and median. |
| Weight regularization | An excess of parameters (weights, w) may produce 'overfitting': the model fits the observed data very well but predicts new, unobserved data poorly. To avoid this, weights are estimated subject to constraints, a strategy called 'penalization' or 'regularization'. The two most common regularizations are the L1 and L2 norms, which restrict the sum of the absolute values of w (L1) or of the squared values (L2). |
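To make the neuron and activation-function definitions above concrete, here is a minimal pure-Python sketch of a single neuron computing f(w'x + b) with a ReLU activation. This is illustrative only, not DeepGP code; real frameworks such as Keras vectorize this computation over whole layers.

```python
# A single artificial neuron: output = f(w'x + b).
# Illustrative pure-Python sketch, not DeepGP internals.

def relu(z):
    """ReLU activation: max(0, z)."""
    return max(0.0, z)

def neuron_output(w, x, b, activation=relu):
    """Compute f(w'x + b) for one neuron."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return activation(z)

# Example: three SNP inputs coded 0/1/2, with arbitrary weights and bias.
w = [0.5, -0.3, 0.2]   # weights (estimated during training)
x = [2, 0, 1]          # one individual's SNP genotypes
b = 0.1                # bias (also estimated)
print(neuron_output(w, x, b))  # w'x + b = 1.3 (up to float rounding)
```

Training a network amounts to finding the w and b of every neuron that minimize the loss; dropout and weight regularization, defined above, constrain that search to avoid overfitting.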

List of input parameters

The next two tables list all of the DeepGenomic.py input parameters, which are parsed using argparse.

| Input parameter | Required | Description |
| --- | --- | --- |
| Xtr | Yes | The training predictor matrix: a .csv file with samples in rows and SNPs in columns, coded from 0 up to the ploidy level. The file may contain row and column names. Usage: `--Xtr /path/to/XtrainData.csv` |
| ytr | Yes | The training y matrix containing the traits to be predicted: a .csv file with samples in rows and traits in columns. The file may contain row and column names, and row names should match those of Xtr. The matrix must contain at least one column, in which case a single trait is evaluated. Both continuous and categorical traits are allowed. Usage: `--ytr /path/to/ytrainData.csv` |
| Xval | No | The validation predictor matrix: a .csv file with samples in rows and SNPs in columns, coded from 0 up to the ploidy level. The file may contain row and column names. This matrix must contain the same SNP set as Xtr. If no validation set is given, the software internally splits the data into training and validation sets. Usage: `--Xval /path/to/XvalData.csv` |
| yval | No | The validation y matrix containing the traits to be predicted: a .csv file with samples in rows and traits in columns. The file may contain row and column names, and row names should match those of Xval. The matrix must contain the same traits as ytr. Usage: `--yval /path/to/yvalData.csv` |
| cat | No | Normalizes the training dataset by a categorical variable, e.g. when a breeding program needs to be categorized by trial. To use it, ytr must contain an extra categorical column indicating the trial each observation belongs to. Usage: `--cat Trial_Cat`, where Trial_Cat is the name of the categorical variable. |
| scale | No (default: False) | Boolean flag indicating whether the y matrix should be scaled. It only takes effect if `--scale` is passed on the command line. |
| dummy | No (default: False) | Boolean flag indicating whether the SNP matrix should be converted into one-hot encoding. Usage: `--dummy` |
| categorical | No (default: False) | Boolean flag indicating whether the traits in the y matrix are categorical. Usage: `--categorical` |
| trait | No | Indicates which trait to evaluate. Use it when your y matrix has more than one trait and you do not want to evaluate all of them. Usage: `--trait name_of_the_trait` |
| output | Yes | The folder that will contain the outputs (a table with the evaluated models, the weights of the best models and a plot of the model's predictive performance). Usage: `--output /path/to/output/folder/` |
| Method | Yes | A mutually exclusive group indicating which model to run. Usage: `--mlp` for Multilayer Perceptron networks, `--cnn` for Convolutional Neural Networks, `--lasso` for classical lasso modeling, or `--ridge` for the ridge regression approach. |
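As an illustration of what the `--dummy` option does conceptually, here is a minimal pure-Python sketch of one-hot encoding a vector of SNP genotypes coded from 0 up to the ploidy level. The function name is hypothetical; DeepGP's actual implementation may differ.

```python
# One-hot (dummy) encoding of SNP genotypes, as selected by --dummy.
# Illustrative sketch only; not DeepGP's internal code.

def one_hot_snp(genotypes, ploidy=2):
    """Encode a list of genotypes (each in 0..ploidy) as one-hot vectors.

    Each genotype g becomes a vector of length ploidy + 1 with a 1 at
    position g and 0 elsewhere.
    """
    encoded = []
    for g in genotypes:
        row = [0] * (ploidy + 1)
        row[g] = 1
        encoded.append(row)
    return encoded

# Example: one diploid individual genotyped at three SNPs.
print(one_hot_snp([0, 2, 1]))
# [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```

For a diploid species each SNP column thus expands into three binary columns; for higher ploidy levels the expansion is correspondingly wider.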

The next table lists the parameters required to tune an MLP (`--mlp`) or CNN (`--cnn`) network, i.e. for the hyperparameter-tuning process using the Talos tool. These parameters are ignored if `--ridge` or `--lasso` is chosen as Method.

| Input parameter | Description |
| --- | --- |
| lr | A list of learning rates to evaluate during Neural Network (NN) hyperparameter tuning. Usage: `--lr 0.001 0.025 0.1`, or a single value, e.g. `--lr 0.0001`. Default: `--lr 0.0025`. |
| dr1 | A list of dropout rates for the first layer to evaluate during NN hyperparameter tuning. Usage: `--dr1 0.001 0 0.1` or just `--dr1 0`. Default: `--dr1 0`. |
| dr2 | A list of dropout rates for the hidden layers to evaluate during NN hyperparameter tuning. Usage: `--dr2 0.001 0 0.1` or just `--dr2 0`. Default: `--dr2 0`. |
| reg1 | A list of L2 weight-regularization values for the first layer to evaluate during NN hyperparameter tuning. Usage: `--reg1 0.001 0 0.1` or just `--reg1 0`. Default: `--reg1 0`. |
| reg2 | A list of L2 weight-regularization values for the hidden layers to evaluate during NN hyperparameter tuning. Usage: `--reg2 0.001 0 0.1` or just `--reg2 0`. Default: `--reg2 0`. |
| reg3 | A list of L2 weight-regularization values for the last layer to evaluate during NN hyperparameter tuning. Usage: `--reg3 0.001 0 0.1` or just `--reg3 0`. Default: `--reg3 0`. |
| hn | A list of integers with the numbers of hidden neurons to evaluate during NN hyperparameter tuning. Usage: `--hn 8 16 32` or just `--hn 64`. Default: `--hn 8`. |
| hl | A list of integers with the numbers of hidden layers in the fully connected part of the CNN or MLP network. Usage: `--hl 1 5 7` or just `--hl 3`. Default: `--hl 3`. |
| N_neurons_FL | A list of integers with the numbers of neurons in the first layer of the MLP architecture. Usage: `--N_neurons_FL 1 4 10` or just `--N_neurons_FL 32`. Default: `--N_neurons_FL 8`. Only used with `--mlp`. |
| epochs | A list of integers with the numbers of epochs to evaluate. Usage: `--epochs 50 100 150` or just `--epochs 1000`. Default: `--epochs 50`. |
| optimizer | A list of strings with the optimizer functions to consider. You can choose among Adam, Nadam and sgd. Usage: `--optimizer Adam sgd` or just `--optimizer Adam`. Default: `--optimizer Adam sgd Nadam`. |
| bs | A list of integers with the batch sizes to consider. Usage: `--bs 16 32` or just `--bs 32`. Default: `--bs 16`. |
| act1 | A list of strings with the activation functions to consider. For a fully connected (MLP) network this is the activation used in the fully connected layers; for a CNN it is the activation of the convolutional layers. Choose one or more from: relu, elu, linear, softmax, tanh, softplus. Usage: `--act1 tanh linear` or just `--act1 tanh`. Default: all six options. |
| act2 | A list of strings with the activation functions to consider in the fully connected layers of a CNN network. Choose one or more from: relu, elu, linear, softmax, tanh, softplus. Usage: `--act2 tanh linear` or just `--act2 tanh`. Default: all six options. Only used with `--cnn`. |
| nconv | A list of integers with the numbers of convolutional layers to consider. Usage: `--nconv 1 5` or just `--nconv 1`. Default: `--nconv 1`. Only used with `--cnn`. |
| ps | A list of integers with the pool sizes for the convolutions. Usage: `--ps 1 5` or just `--ps 1`. Default: `--ps 1`. Only used with `--cnn`. |
| ks | A list of integers with the kernel sizes for the convolutions. Usage: `--ks 3 5 7` or just `--ks 3`. Default: `--ks 3`. Only used with `--cnn`. |
| ns | A list of integers with the stride lengths for the convolutions. Usage: `--ns 3 5 7` or just `--ns 3`. Default: `--ns 1`. Only used with `--cnn`. |
| nfilters | A list of integers with the numbers of filters (convolution operations) in each convolutional layer. Usage: `--nfilters 16 32 128` or just `--nfilters 32`. Default: `--nfilters 8`. Only used with `--cnn`. |
| gridProp | The proportion of hyperparameter combinations sampled in the random search. Please choose this proportion carefully and make sure at least one model is evaluated, i.e. if the hyperparameter grid contains 10 models, gridProp should be at least 0.1. Usage: `--gridProp 0.01`. Default: `--gridProp 1`, which means all models are evaluated. Only used with `--cnn` or `--mlp`. |
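To clarify how `gridProp` interacts with the size of the hyperparameter grid, here is a minimal pure-Python sketch of a random search that samples a fraction of all hyperparameter combinations. The grid contents and variable names are illustrative, not DeepGP internals.

```python
import itertools
import random

# Random search over a hyperparameter grid, as controlled by --gridProp.
# Hypothetical grid for illustration only.
grid = {
    "lr": [0.001, 0.025, 0.1],
    "hn": [8, 16, 32],
    "optimizer": ["Adam", "sgd"],
}

# Full Cartesian product: 3 * 3 * 2 = 18 candidate models.
combos = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]

grid_prop = 0.25  # like --gridProp 0.25
# At least one model must be evaluated, hence max(1, ...).
n_models = max(1, int(len(combos) * grid_prop))
sampled = random.sample(combos, n_models)

print(len(combos))   # 18 candidate models in the full grid
print(n_models)      # 4 models actually trained and evaluated
```

This is why gridProp must not fall below 1 / (number of combinations): with 18 candidate models, any proportion below about 0.056 would round down to zero models without the `max(1, ...)` guard.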

📈 Genomic Prediction Examples

Click on any of these items to access examples showing how to run the desired DL option.


💾 Usage

Please clone this repo into the DeepGP folder:

git clone https://github.com/lauzingaretti/DeepGP.git DeepGP

cd DeepGP

Install the dependencies:

pip3 install -r requirements.txt

Then run the program, e.g.:

python3 Deep_genomic.py --Xtr /path/to/Xtrain.csv --ytr /path/to/Ytrain.csv --Xval /path/to/Xval.csv --yval /path/to/Yval.csv --trait name_of_the_trait --gridProp 1 --output /path/to/outputFolder/ --cnn --lr 0.025 0.01 --dr1 0 0.01 --dr2 0 --reg1 0.0001 --nconv 3 --act1 linear tanh --ks 3 --hl 1 5 --optimizer Adam --act2 tanh --dummy --scale
