1 parent cdd928d commit 1c1c619
llama2_70b_lora/README.md
@@ -84,7 +84,7 @@ accelerate launch --config_file configs/default_config.yaml scripts/train.py \
     --seed 1234 \
     --lora_target_modules "qkv_proj,o_proj"
 ```
-where the Accelerate config file is [this one](https://github.com/regisss/lora/blob/main/configs/default_config.yaml).
+where the Accelerate config file is [this one](https://github.com/mlcommons/training/blob/master/llama2_70b_lora/configs/default_config.yaml).
 
 > Using flash attention with `--use_flash_attn` is necessary for training on 8k-token sequences.
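
For readers skimming the diff, here is a minimal sketch of the launch command this hunk sits inside, assembled only from the hunk header and the flags visible above. The README lists further training arguments that are omitted here, and the placement of `--use_flash_attn` is an assumption based on the note about 8k-token sequences.

```bash
# Sketch only: flags beyond those visible in this hunk are omitted, and the
# position of --use_flash_attn (needed for 8k-token sequences per the note above) is assumed.
accelerate launch --config_file configs/default_config.yaml scripts/train.py \
    --use_flash_attn \
    --seed 1234 \
    --lora_target_modules "qkv_proj,o_proj"
```

The `configs/default_config.yaml` referenced here is the Accelerate config file whose link this commit repoints to the MLCommons training repository.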