Commit 7ea6395

Update flex_attention.py
danielhanchen committed Sep 3, 2024
1 parent 315136a commit 7ea6395
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion unsloth/kernels/flex_attention.py
@@ -47,7 +47,7 @@
 pass
 
 # Logit softcapping
-@torch.compile(fullgraph = False, dynamic = True, options = torch_compile_options)
+@torch.compile(fullgraph = True, dynamic = True, options = torch_compile_options)
 def slow_attention_softcapping(Q, K, V, causal_mask, self, bsz, q_len):
     n_heads = self.num_heads
     head_dim = self.head_dim
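For context: with fullgraph = True, torch.compile must trace the decorated function into a single graph and raises on any graph break, instead of silently splitting the graph and falling back to eager execution as fullgraph = False permits. The sketch below illustrates the decorator usage this commit changes; the softcapping body and the torch_compile_options dict are hypothetical stand-ins, not the repository's actual code (the real options dict is defined earlier in flex_attention.py).

import torch

# Hypothetical stand-in for the torch_compile_options dict referenced in
# the diff; the real dict lives elsewhere in flex_attention.py.
torch_compile_options = {
    "epilogue_fusion" : True,
    "max_autotune"    : True,
    "shape_padding"   : True,
}

# Minimal sketch of tanh logit softcapping on attention scores, compiled
# as one graph. fullgraph = True makes torch.compile error out on graph
# breaks rather than splitting the function into multiple graphs.
@torch.compile(fullgraph = True, dynamic = True, options = torch_compile_options)
def softcapped_scores(Q, K, softcap = 50.0):
    # (bsz, n_heads, q_len, head_dim) @ (bsz, n_heads, head_dim, k_len)
    scores = torch.matmul(Q, K.transpose(-2, -1)) / (Q.shape[-1] ** 0.5)
    # Smoothly cap the logits to [-softcap, softcap] via tanh
    return softcap * torch.tanh(scores / softcap)

# Usage sketch: Q, K shaped (bsz, n_heads, q_len, head_dim)
Q = torch.randn(1, 8, 16, 64)
K = torch.randn(1, 8, 16, 64)
out = softcapped_scores(Q, K)  # (1, 8, 16, 16) capped attention logits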
