FIX: Small regression in BNB LoRA output #2238
Merged
Our regression tests reveal that the 8bit LoRA BNB regression test is failing. To reproduce, run:
```
pytest tests/regression/test_regression.py -s --regression -k test_lora_8bit
```
The regression was introduced in #2122. We didn't notice this earlier because of other failing tests in the nightly CI.
The cause of the error is subtle. In the original code, we would calculate the LoRA output, convert the dtype if necessary, then add it to the base output. After the mentioned PR, we calculate the LoRA output, add it to the base output, then convert the dtype if necessary (code). The difference is very small on a per-layer basis, but it can accumulate over the layers, leading to a significant difference in outputs, as witnessed by the regression test.

This PR rolls back this specific part of the PR (both for 8bit and 4bit) while leaving the main change of that PR intact.
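For illustration, here is a minimal sketch of why the two orderings differ numerically. The tensor names and dtypes are hypothetical stand-ins, not the actual PEFT code; the point is only where the rounding to the lower-precision dtype happens:

```python
import torch

# Hypothetical setup: the base layer output is in float16, while the
# LoRA branch is computed in float32.
base_output = torch.randn(8, 16, dtype=torch.float16)
lora_output = torch.randn(8, 16, dtype=torch.float32)

# Original (and restored) order: convert the LoRA output first,
# then add in the base dtype.
result_before = base_output + lora_output.to(base_output.dtype)

# Order after #2122: add first (PyTorch promotes the sum to float32),
# then convert the result down to the base dtype.
result_after = (base_output + lora_output).to(base_output.dtype)

# The per-layer discrepancy is tiny, but it can compound across layers.
print((result_before - result_after).abs().max())
```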