AIDE


This repository fundamentally serves to increase the accessibility of protein engineering tasks that fall into the following category:

$$\hat{y}=f(X)$$

Here, $X$ is a set of proteins, e.g. their sequences and optionally structures. $y$ is a property of the protein that is difficult to measure, such as binding affinity, stability, or catalytic activity. $\hat{y}$ is the predicted value of $y$ given $X$.

Existing models $f$ in the literature are varied, and a huge amount of work has gone into designing clever algorithms that leverage labeled and unlabeled data. For example, models differ (non-exhaustively) in whether they require labeled training data, an MSA, a wild-type reference sequence, fixed-length inputs, or structural information.

The variety and nuance of each of these means that each application is a bespoke, independent codebase, generally inaccessible to those with little or no coding experience. Some applications alleviate the second problem by hosting web servers. Compounding the problem is a lack of standardization in API across applications: individual codebases can be extremely poorly documented or hard to use due to hasty development that minimizes time to publication.

The goals of this project are succinctly as follows:

  • Create a generalizable, unit-tested API for protein prediction tasks that is compatible with scikit-learn. This API allows those who are familiar with the gold standard of ML libraries to conduct protein prediction tasks in much the same way you'd see in an intro-to-ML Medium article. Further, it makes it much easier for bespoke strategies to be accessed and compared; any new method whose authors wrap their code in the API is easily accessible to the community without spending hours studying the codebase.
  • Use API components to create a DVC-tracked pipeline for protein prediction tasks. This pipeline allows those with zero software experience to conduct protein prediction tasks with a few simple commands. After (optionally) editing a config file and inputting their training data and putative proteins, they can train and get predictions as simply as executing dvc repro.

API examples:

The following should look and feel like canonical sklearn tasks/code. See the demo folder for more details and executable examples. Also see the colab notebook to play with some of its capabilities in the cloud. Finally, check out the notebooks in showcase, where we conduct two full protein prediction optimization and scoring tasks on real data that are larger than small example sets.

Checking which protein models are available given the data you have

import pandas as pd
from aide_predict.utils.checks import check_model_compatibility

exp = pd.read_csv('exp.csv')
seqs = ProteinSequences.from_list(exp['sequence'].tolist())
wt = ProteinSequence("MQYKLILNGKTLKGETTTEAVDAATAEKVFKQYANDNGVDGEWTYDDATKTFTVTELEVLFQGPLDPNSMATYEVLCEVARKLGTDDREVVLFLLNVFIPQPTLAQLIGALRALKEEGRLTFPLLAECLFRAGRRDLLRDLLHLDPRFLERHLAGTMSYFSPYQLTVLHVDGELCARDIRSLIFLSKDTIGSRSTPQTFLHWVYCMENLDLLGPTDVDALMSMLRSLSRVDLQRQVQTLMGLHLSGPSHSQHYRHTPLEHHHHHH", id='WT')

check_model_compatibility(
    training_msa=None,
    training_sequences=seqs,
    wt=wt,
)
>>>{'compatible': ['ESM2Embedding',
  'ESM2LikelihoodWrapper',
  'KmerEmbedding',
  'OneHotProteinEmbedding'],
 'incompatible': ['EVMutationWrapper',
  'HMMWrapper',
  'MSATransformerEmbedding',
  'MSATransformerLikelihoodWrapper',
  'OneHotAlignedEmbedding',
  'SaProtEmbedding',
  'SaProtLikelihoodWrapper',
  'VESPAWrapper']}

In silico mutagenesis using MSATransformer

import pandas as pd

# data preparation
wt = ProteinSequence(
    "LADDRTLLMAGVSHDLRTPLTRIRLATEMMSEQDGYLAESINKDIEECNAIIEQFIDYLR",
)
msa = ProteinSequences.from_fasta("data/msa.fasta")
library = wt.saturation_mutagenesis()
mutations = library.ids
print(mutations[0])
>>> 'L1A'

# model fitting
model = MSATransformerLikelihoodWrapper(
   wt=wt,
   marginal_method="masked_marginal"
)
model.fit(msa)

# make predictions for each mutated sequence
predictions = model.predict(library)

results = pd.DataFrame({'mutation': mutations, 'sequence': library, 'prediction': predictions})
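A natural follow-up is to rank the library by predicted score when selecting variants to test. A minimal sketch using standard pandas:

# rank variants by predicted score and inspect the top candidates
top = results.sort_values('prediction', ascending=False).head(10)
print(top[['mutation', 'prediction']])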

Compare a couple of zero shot predictors against experimental data

import pandas as pd

# data preparation
data = pd.read_csv("data/experimental_data.csv")
X = ProteinSequences.from_list(data['sequence'])
y = data['experimental_value']
wt = X['my_id_for_WT']
msa = ProteinSequences.from_fasta("data/msa.fasta")

# model definitions
evmut = EVMutation(wt=wt, metadata_folder='./tmp/evm')
evmut.fit(msa)
esm2 = ESM2LikelihoodWrapper(wt=wt, model_checkpoint='esm2_t33_650M_UR50S')
esm2.fit([])  # zero-shot model: no training sequences required
models = {'evmut': evmut, 'esm2': esm2}

# model scoring
for name, model in models.items():
    score = model.score(X, y)
    print(f"{name} score: {score}")

Train a supervised model to predict activity on an experimental combinatorial library, and test on sequences with greater mutational depth than seen in training

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# data preparation
data = pd.read_csv("data/experimental_data.csv").set_index('id')
sequences = ProteinSequences.from_dict(data['sequence'].to_dict())
sequences.aligned
>>> True
sequences.fixed_length
>>> True

wt = sequences['my_id_for_WT']
data['sequence'] = sequences
data['mutational_depth'] = data['sequence'].apply(lambda x: len(x.mutated_positions(wt)))
test = data[data['mutational_depth'] > 5]
train = data[data['mutational_depth'] <= 5]
train_X, train_y = train['sequence'], train['experimental_value']
test_X, test_y = test['sequence'], test['experimental_value']

# embed protein sequences
# use mean-pooled embeddings of ESM2
embedder = ESM2Embedding(pool=True)
train_X = embedder.fit_transform(train_X)
test_X = embedder.transform(test_X)

# model fitting
model = RandomForestRegressor()
model.fit(train_X, train_y)

# model scoring
train_score = model.score(train_X, train_y)
test_score = model.score(test_X, test_y)
print(f"Train score: {train_score}, Test score: {test_score}")

Train a supervised predictor on a set of homologs, focusing only on positions of known importance, wrap the entire process into an sklearn pipeline including some standard sklearn transformers, and make predictions for a new set of homologs

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import VarianceThreshold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# data preparation
data = pd.read_csv("data/experimental_data.csv")
data.set_index('id', inplace=True)
sequences = ProteinSequences.from_dict(data['sequence'].to_dict())
y_train = data['experimental_value']

wt = sequences['my_id_for_WT']
wt_important_positions = np.array([20, 21, 22, 33, 45]) # zero indexed, known from analysis elsewhere
sequences.aligned
>>> False
sequences.fixed_length
>>> False

# align the training sequences and get the important positions
msa = sequences.align_all()
msa.fixed_length
>>> False
msa.aligned
>>> True

wt_alignment_mapping = msa.get_alignment_mapping()['my_id_for_WT']
aligned_important_positions = wt_alignment_mapping[wt_important_positions]

# model definitions
embedder = OneHotAlignedEmbedding(important_positions=aligned_important_positions).fit(msa)
scaler = StandardScaler()
feature_selector = VarianceThreshold(threshold=0.2)
predictor = RandomForestRegressor()
pipeline = Pipeline([
    ('embedder', embedder),
    ('scaler', scaler),
    ('feature_selector', feature_selector),
    ('predictor', predictor)
])

# model fitting
pipeline.fit(sequences, y_train)

# score new unaligned homologs
new_homologs = ProteinSequences.from_fasta("data/new_homologs.fasta")
y_pred = pipeline.predict(new_homologs)
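Since the whole workflow is a scikit-learn Pipeline, it can be persisted and reloaded with standard tooling. A small sketch with joblib, assuming the fitted components are picklable:

import joblib

# save the fitted pipeline (embedder, scaler, selector, and predictor together)
joblib.dump(pipeline, "pipeline.joblib")

# later: reload and predict on additional unaligned homologs
pipeline = joblib.load("pipeline.joblib")
y_pred = pipeline.predict(new_homologs)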

Supported tools

Call aide_predict.utils.checks.get_supported_tools() to see the tools that are available based on your environment. The base package has few dependencies and correspondingly few tools. Additional tools can be accessed with additional dependency steps. This choice was made to reduce dependency clashes for the codebase. For example, the base package does not include pytorch, but the environment can be extended with "requirements-transformers.txt" to access ESM2 embeddings and log-likelihood predictors.

Available Tools

You can always check which modules are installed/available to you by running get_supported_tools(). The following is a list of tools that are available. Models marked with a * require additional dependencies or environments to be installed; see Installation.

Data Structures and Utilities

  • Protein Sequence and Structure data structures
  • StructureMapper - A utility for mapping a folder of PDB structures to sequences

Prediction Models

  1. HMM (Hidden Markov Model)

    • Computes statistics over matching columns in an MSA, treating each column independently but allowing for alignment of query sequences before scoring (see the sketch after this list)
    • Requires MSA for fitting
    • Can handle aligned sequences during inference
  2. EVMutation

    • Computes pairwise couplings between AAs in an MSA for select positions well represented in the MSA; variants are scored by the change in coupling energy.
    • Requires MSA for fitting
    • Requires wild-type sequence for inference
    • Requires fixed-length sequences
  3. ESM2 Likelihood Wrapper*

    • Pretrained PLM (BERT-style) model for protein sequences; scores variants according to masked, mutant, or wild-type marginal likelihoods. Mutant marginal computes likelihoods in the context of the mutant sequence, while masked and wild-type marginal compute likelihoods in the context of the wild-type sequence. These methods are approximations of the joint likelihood.
    • Can handle aligned sequences
    • Requires additional dependencies (see requirements-transformers.txt)
  4. SaProt Likelihood Wrapper*

    • ESM except using a size-400 vocabulary including local structure tokens from Foldseek's VAE. The authors only used masked marginal, but we've made wild-type, mutant, and masked marginals available.
    • Requires fixed-length sequences
    • Uses WT structure if structures of sequences are not passed
    • Requires additional dependencies:
      • requirements-transformers.txt
  5. MSA Transformer Likelihood Wrapper*

    • Like ESM but with a transformer model that is trained on MSAs. The variant is placed at the top position in the MSA and scores are computed along that row. Wild-type, mutant, and masked marginals are available.
    • Requires MSA for fitting
    • Requires wild-type sequence during inference
    • Requires additional dependencies (see requirements-fair-esm.txt)
  6. VESPA*

    • Conservation head model trained on PLM embeddings, with logistic regression used to predict whether a mutation is detrimental.
    • Requires wild type; only works for single point mutations
    • Requires fixed-length sequences
    • Requires additional dependencies (see requirements-vespa.txt)
  7. EVE*

    • VAE trained on an MSA; learns the conditional distribution of amino acids. The latent space tends to be bimodal for deleterious vs. neutral mutations.
    • Requires MSA for fitting
    • Requires fixed-length sequences
    • Requires an independent EVE environment; see Installation.
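All of the above share the wrapper interface used in the API examples. As a concrete sketch for the HMM, assuming HMMWrapper takes no required constructor arguments and exposes the standard fit/predict methods like the other wrappers:

# fit the profile HMM on an MSA, then score query sequences
# (the no-argument constructor here is an assumption)
hmm = HMMWrapper()
hmm.fit(msa)
scores = hmm.predict(query_sequences)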

Embeddings for Downstream ML

  1. One Hot Protein Embedding

    • Columnwise one hot encoding of amino acids for a fixed length set of sequences
    • Requires fixed-length sequences
    • Position specific
  2. One Hot Aligned Embedding

    • Columnwise one hot encoding including gaps for sequences aligned to an MSA.
    • Requires MSA for fitting
    • Position specific
  3. Kmer Embedding

    • Counts of observed amino acid k-mers in the sequences (see the sketch after this list)
    • Allows for variable length sequences
  4. ESM2 Embedding*

    • Pretrained PLM (BERT-style) model for protein sequences; outputs embeddings for each amino acid in the sequence from the last transformer layer.
    • Position specific
    • Requires additional dependencies (see requirements-transformers.txt)
  5. SaProt Embedding*

    • ESM except using a size 400 vocabulary including local structure tokens from Foldseek's VAE. AA embeddings from the last layer of the transformer are used.
    • Position specific
    • Requires additional dependencies:
      • requirements-transformers.txt
      • foldseek executable must be available in the PATH
  6. MSA Transformer Embedding*

    • Like ESM but with a transformer model that is trained on MSAs. The embeddings are computed for each amino acid in the query sequence in the context of an existing MSA.
    • Requires MSA for fitting
    • Requires fixed-length sequences
    • Requires additional dependencies (see requirements-fair-esm.txt)
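The embedders share the scikit-learn transformer interface, so they plug directly into downstream models. A sketch using KmerEmbedding on variable-length sequences; the no-argument constructor is an assumption:

from sklearn.ensemble import RandomForestRegressor

# embed variable-length sequences as k-mer count vectors
embedder = KmerEmbedding()  # constructor options assumed; check the docstring
X_feat = embedder.fit_transform(sequences)

# any standard regressor can be fit on the resulting feature matrix
model = RandomForestRegressor().fit(X_feat, y_train)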

Each model in this package is implemented as a subclass of ProteinModelWrapper, which provides a consistent interface for all models. The specific behaviors (e.g., requiring MSA, fixed-length sequences, etc.) are implemented using mixins, making it easy to understand and extend the functionality of each model.
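As an illustration of the intended extension pattern (the mixin and method names below are hypothetical, not the package's actual class names):

# Hypothetical sketch of wrapping a new method. RequiresMSAMixin and the
# underscore method names are illustrative assumptions; consult the codebase
# for the real mixin classes and hooks.
class MyModelWrapper(RequiresMSAMixin, ProteinModelWrapper):
    def _fit(self, X, y=None):
        # fit the underlying model on an MSA here
        return self

    def _transform(self, X):
        # return per-sequence scores or features here
        ...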

Installation

conda env create -f environment.yaml
pip install .

Installation of additional modules

Tools that require additional dependencies can be installed with the corresponding requirements file. See above for those files. For example, to access VESPA:

pip install -r requirements-vespa.txt

Some tools were deemed too heavy in terms of their environment to be included as a pip module. These require manual setup; see below.

Installation of EVE

To access the EVE module, first clone the repo (NOT inside of AIDE):

git clone https://github.com/OATML/EVE.git

IMPORTANT: set the environment variable EVE_REPO to the path of the cloned repo. This is used by AIDE to import EVE modules as it is not installable.

Build a new conda environment according to the instructions/.yaml file there.

We recommend testing that the environment is set up correctly and that the package is using any GPUs by running their example script and observing the log.

IMPORTANT: set the environment variable EVE_CONDA_ENV to the name of the conda environment you created for EVE. This is used by AIDE to activate the EVE environment.
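One way to set both variables for the current Python session (the path and environment name below are placeholders):

import os

# placeholders: point these at your EVE clone and conda environment
os.environ["EVE_REPO"] = "/path/to/EVE"
os.environ["EVE_CONDA_ENV"] = "eve_env"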

Confirm AIDE now has access to the EVE module:

from aide_predict import get_supported_tools
get_supported_tools()

Tests

Continuous integration only runs base module tests, e.g. pytest -v -m "not slow and not optional"

Additional tests are available to check the scientific output of wrapped models against expected values, such as:

  • Score of ESM2 log likelihood, MSATransformer, SaProt, VESPA, and EVE against the ENVZ_ECOLI_Ghose benchmark of ProteinGym
  • run with pytest -v tests/not_base_models

TODO:

Third party software

  1. EVCouplings is a dependency; their software is used to avoid redundant code writing, and as a result EVMutation is available as a base module.
  2. Of course, many of the tools here simply wrap the work of others - see above.

Citations and Acknowledgements

No software or code with viral licenses was used in the creation of this project.

The following deserve credit as they are either directly wrapped within AIDE, serve as code inspiration (noted in modules when necessary), or are used for testing:

  1. Frazer, J. et al. Disease variant prediction with deep generative models of evolutionary data. Nature 599, 91–95 (2021).
  2. Hopf, T. A. et al. The EVcouplings Python framework for coevolutionary sequence analysis. Bioinformatics 35, 1582–1584 (2019).
  3. Notin, P. et al. Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval. Preprint at https://doi.org/10.48550/arXiv.2205.13760 (2022).
  4. Rao, R. et al. MSA Transformer. Preprint at https://doi.org/10.1101/2021.02.12.430858 (2021).
  5. Hopf, T. A. et al. Mutation effects predicted from sequence co-variation. Nat. Biotechnol. 35, 128–135 (2017).
  6. Hsu, C., Nisonoff, H., Fannjiang, C. & Listgarten, J. Learning protein fitness models from evolutionary and assay-labeled data. Nat. Biotechnol. 40, 1114–1122 (2022).
  7. Meier, J. et al. Language models enable zero-shot prediction of the effects of mutations on protein function. Preprint at https://doi.org/10.1101/2021.07.09.450648 (2021).
  8. Verkuil, R. et al. Language models generalize beyond natural proteins. Preprint at https://doi.org/10.1101/2022.12.21.521521 (2022).
  9. Su, J. et al. SaProt: Protein Language Modeling with Structure-aware Vocabulary. Preprint at https://doi.org/10.1101/2023.10.01.560349 (2023).
  10. Marquet, C. et al. Embeddings from protein language models predict conservation and variant effects. Hum. Genet. 141, 1629–1647 (2022).
  11. Eddy, S. R. Accelerated Profile HMM Searches. PLOS Computational Biology 7, e1002195 (2011).
  12. Pedregosa, F. et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
  13. Notin, P. et al. ProteinGym: Large-Scale Benchmarks for Protein Fitness Prediction and Design.

License

This project is licensed under the MIT License.
