Fix configuration errors
binkjakub committed Jan 3, 2025
1 parent dfa47af commit 19acaed
Showing 3 changed files with 5 additions and 8 deletions.
2 changes: 1 addition & 1 deletion configs/fine_tuning.yaml
@@ -28,7 +28,7 @@ training_args:
 report_to: "wandb"
 seed: 46
 packing: True
-torch_compile: True
+torch_compile: False # setting to True might cause libcuda.so errors
 
 peft_args:
 r: 8
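The torch_compile change above can be illustrated with a small, hypothetical sketch of how such a flag might gate compilation. The dict mirrors the training_args fragment from fine_tuning.yaml; the maybe_compile helper is not from this repository and is shown only to make the flag's effect concrete:

```python
# Hypothetical sketch (not from this repository): gating torch.compile
# behind the torch_compile flag from training_args in fine_tuning.yaml.
training_args = {
    "report_to": "wandb",
    "seed": 46,
    "packing": True,
    "torch_compile": False,  # True can trigger libcuda.so errors on some clusters
}

def maybe_compile(model, cfg):
    """Compile the model only when the config explicitly enables it."""
    if cfg.get("torch_compile"):
        import torch  # imported lazily; only needed when compiling
        return torch.compile(model)
    return model  # flag is False: return the model unchanged
```

With torch_compile set to False, the helper is a no-op and the model is used uncompiled, which is the behavior the commit opts into.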
3 changes: 0 additions & 3 deletions configs/model/llama_3.1_8b_instruct.yaml
@@ -1,11 +1,8 @@
name: meta-llama/Llama-3.1-8B-Instruct
tokenizer_name: ${.name}

adapter_path: null

max_seq_length: 32_000
batch_size: 1
padding: longest
use_4bit: false

use_unsloth: false
8 changes: 4 additions & 4 deletions slurm/run_on_cluster.sh
@@ -2,11 +2,11 @@
 
 #SBATCH --job-name=juddges_sft
 #SBATCH --output=logs/%j-%x.log
-#SBATCH --time=72:00:00
+#SBATCH --time=48:00:00
 #SBATCH --nodes=1
-#SBATCH --gpus-per-node=1
-#SBATCH --cpus-per-gpu=8
-#SBATCH --mem=64G
+#SBATCH --gpus-per-node=4
+#SBATCH --cpus-per-gpu=4
+#SBATCH --mem=128G
 # NOTE: You can reconfigure the above parameters to your needs in the sbatch call.
 # NOTE: All env variables must be exported to be available after calling srun.
 # NOTE: You may need to specify some NCCL args in .env file depending on your cluster configuration
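The first NOTE in the script refers to the fact that flags passed to sbatch on the command line take precedence over #SBATCH directives inside the script. A hypothetical submission overriding the new defaults (the resource values here are illustrative, not from the repository):

```shell
# Hypothetical call: command-line flags override the script's #SBATCH
# directives, so resources can be tuned without editing the file.
sbatch --time=24:00:00 --gpus-per-node=2 --mem=64G slurm/run_on_cluster.sh
```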
