
Commit 1c1c619

Authored by hiwotadese (Hiwot Kassa) and Hiwot Kassa

fix llama2_70b_lora broken link for Accelerate config file in the readme (#766)

Co-authored-by: Hiwot Kassa <[email protected]>

1 parent: cdd928d

File tree

1 file changed: +1, −1 lines


llama2_70b_lora/README.md

Lines changed: 1 addition & 1 deletion
@@ -84,7 +84,7 @@ accelerate launch --config_file configs/default_config.yaml scripts/train.py \
     --seed 1234 \
     --lora_target_modules "qkv_proj,o_proj"
 ```
-where the Accelerate config file is [this one](https://github.com/regisss/lora/blob/main/configs/default_config.yaml).
+where the Accelerate config file is [this one](https://github.com/mlcommons/training/blob/master/llama2_70b_lora/configs/default_config.yaml).

 > Using flash attention with `--use_flash_attn` is necessary for training on 8k-token sequences.
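For context, the `--config_file` passed to `accelerate launch` is a Hugging Face Accelerate configuration file. The sketch below shows what such a file typically looks like for a single-node multi-GPU run; it is illustrative only, with assumed values (`num_processes: 8`, `bf16`), not the actual contents of `configs/default_config.yaml` in the mlcommons/training repository.

```yaml
# Illustrative Accelerate config (assumed values, not the repo's actual file)
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU   # single node, multiple GPUs
mixed_precision: bf16         # assumed precision setting
num_machines: 1
num_processes: 8              # assumed: one process per GPU
machine_rank: 0
main_training_function: main
use_cpu: false
```

A file like this is normally generated interactively with `accelerate config`, then passed explicitly via `accelerate launch --config_file <path> ...` as in the command shown in the diff above.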
