Snakemake rehaul #35
Conversation
edge_feats: none # label, onehot, or none
drugs:
  max_num_atoms: 150
  node_feats: label # label, onehot, glycan
Another option is "IUPAC".
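To illustrate how such a config value could be consumed, here is a hypothetical dispatch sketch. The function and variable names below are illustrative and not taken from the repository; only the option names ("label", "onehot", "glycan", "iupac") come from the config comment and the review note.

```python
# Hypothetical sketch of dispatching on the node_feats config option;
# the encoder names and functions are illustrative, not from the repo.
from typing import Callable, Dict, List


def label_encoding(atom: str) -> int:
    """Map an atom/monosaccharide symbol to an integer label (illustrative vocabulary)."""
    vocab = {"C": 0, "N": 1, "O": 2, "S": 3}
    return vocab.get(atom, len(vocab))


NODE_FEATURISERS: Dict[str, Callable] = {
    "label": label_encoding,
    # "onehot", "glycan" and "iupac" would register their own encoders here
}


def featurise_nodes(atoms: List[str], node_feats: str = "label"):
    """Featurise drug nodes according to the node_feats config value."""
    try:
        encoder = NODE_FEATURISERS[node_feats]
    except KeyError as err:
        raise ValueError(f"Unknown node_feats option: {node_feats!r}") from err
    return [encoder(atom) for atom in atoms]
```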
The esm script raises the following error:
Traceback (most recent call last):
File "/home/rjo21/Desktop/rindti/.snakemake/scripts/tmphxhv1pa3.prot_esm.py", line 38, in <module>
prots = generate_esm_python(prots)
File "/home/rjo21/Desktop/rindti/.snakemake/scripts/tmphxhv1pa3.prot_esm.py", line 22, in generate_esm_python
results = model(batch_tokens, repr_layers=[33], return_contacts=True)
File "/home/rjo21/anaconda3/envs/gpcr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/rjo21/anaconda3/envs/gpcr/lib/python3.9/site-packages/esm/model.py", line 155, in forward
x, attn = layer(x, self_attn_padding_mask=padding_mask, need_head_weights=need_head_weights)
File "/home/rjo21/anaconda3/envs/gpcr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/rjo21/anaconda3/envs/gpcr/lib/python3.9/site-packages/esm/modules.py", line 107, in forward
x, attn = self.self_attn(
File "/home/rjo21/anaconda3/envs/gpcr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/rjo21/anaconda3/envs/gpcr/lib/python3.9/site-packages/esm/multihead_attention.py", line 359, in forward
attn_weights = torch.bmm(q, k.transpose(1, 2))
RuntimeError: [enforce fail at CPUAllocator.cpp:68] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 60985180160 bytes. Error code 12 (Cannot allocate memory)

I also added a restriction on the sequence length in generate_esm_python; it should be 1022. I know ESM input is supposed to allow a length of 1024, but that doesn't work and raises an error stating the sequence is too long.
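For reference, a minimal sketch of how such a length cap in generate_esm_python could look. The function name and the prots argument come from the traceback (prots is assumed here to be a mapping of protein IDs to sequences); everything else is an assumption. ESM-1b adds BOS and EOS tokens, so capping sequences at 1022 residues keeps the tokenised input within the model's 1024-position limit. Unlike the batched call in the traceback, this sketch embeds one sequence at a time.

```python
# Minimal sketch, assuming prots maps protein IDs to amino-acid sequences.
import torch
import esm

MAX_SEQ_LEN = 1022  # 1024 positions minus the BOS and EOS tokens ESM-1b adds


def generate_esm_python(prots: dict) -> dict:
    """Embed protein sequences with ESM-1b, truncating overlong sequences."""
    model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
    batch_converter = alphabet.get_batch_converter()
    model.eval()

    embeddings = {}
    for name, seq in prots.items():
        # Truncate to stay within the model's positional limit
        seq = seq[:MAX_SEQ_LEN]
        _, _, batch_tokens = batch_converter([(name, seq)])
        with torch.no_grad():  # no gradients needed for feature extraction
            results = model(batch_tokens, repr_layers=[33], return_contacts=True)
        # Per-residue representations, dropping the BOS/EOS positions
        embeddings[name] = results["representations"][33][0, 1 : len(seq) + 1]
    return embeddings
```

Embedding one sequence at a time under torch.no_grad() should also keep the attention matrices small, which may help avoid the ~60 GB allocation shown above when a whole batch of long sequences goes through torch.bmm at once.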
Closes #28, closes #25, closes #26, closes #27, closes #19.
Massive overall rehaul; should be much more extensible and saner to use.