Commit a51b6cc
[Docs] Fix typos (#7451)
* Fix typos
* Fix typos
* Fix typos

Co-authored-by: Sayak Paul <[email protected]>
1 parent 3bce0f3 commit a51b6cc

38 files changed: +65 -65 lines

docs/source/en/training/controlnet.md (+1 -1)

@@ -88,7 +88,7 @@ accelerate config default

 Or if your environment doesn't support an interactive shell, like a notebook, you can use:

-```bash
+```py
 from accelerate.utils import write_basic_config

 write_basic_config()
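
This change recurs across the training guides below: the snippet is Python, not shell, so the fence language becomes `py`. For reference, the corrected block is:

```py
# Writes a default Accelerate config file without an interactive prompt
from accelerate.utils import write_basic_config

write_basic_config()
```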

docs/source/en/training/custom_diffusion.md (+2 -2)

@@ -54,7 +54,7 @@ accelerate config default

 Or if your environment doesn't support an interactive shell, like a notebook, you can use:

-```bash
+```py
 from accelerate.utils import write_basic_config

 write_basic_config()

@@ -84,7 +84,7 @@ Many of the basic parameters are described in the [DreamBooth](dreambooth#script
 - `--freeze_model`: freezes the key and value parameters in the cross-attention layer; the default is `crossattn_kv`, but you can set it to `crossattn` to train all the parameters in the cross-attention layer
 - `--concepts_list`: to learn multiple concepts, provide a path to a JSON file containing the concepts
 - `--modifier_token`: a special word used to represent the learned concept
-- `--initializer_token`:
+- `--initializer_token`: a special word used to initialize the embeddings of the `modifier_token`

 ### Prior preservation loss
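
The newly documented pair works together: `--modifier_token` adds a placeholder token and `--initializer_token` supplies the existing word whose embedding seeds it. A minimal sketch of that initialization (not taken from the training script; the model id and token values are illustrative):

```py
import torch
from transformers import CLIPTextModel, CLIPTokenizer

# Illustrative checkpoint; the script loads whatever you pass it
tokenizer = CLIPTokenizer.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="text_encoder")

modifier_token, initializer_token = "<new1>", "ktn"  # illustrative values

# Register the placeholder and grow the embedding matrix to fit it
tokenizer.add_tokens(modifier_token)
text_encoder.resize_token_embeddings(len(tokenizer))

modifier_id = tokenizer.convert_tokens_to_ids(modifier_token)
initializer_id = tokenizer.convert_tokens_to_ids(initializer_token)

# Seed the new embedding with the initializer token's embedding
embeddings = text_encoder.get_input_embeddings().weight
with torch.no_grad():
    embeddings[modifier_id] = embeddings[initializer_id].clone()
```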

docs/source/en/training/dreambooth.md (+2 -2; the change in the second hunk is whitespace-only)

@@ -67,7 +67,7 @@ accelerate config default

 Or if your environment doesn't support an interactive shell, like a notebook, you can use:

-```bash
+```py
 from accelerate.utils import write_basic_config

 write_basic_config()

@@ -180,7 +180,7 @@ elif args.pretrained_model_name_or_path:
     revision=args.revision,
     use_fast=False,
 )
-
+
 # Load scheduler and models
 noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
 text_encoder = text_encoder_cls.from_pretrained(

docs/source/en/training/instructpix2pix.md (+2 -2)

@@ -51,7 +51,7 @@ accelerate config default

 Or if your environment doesn't support an interactive shell, like a notebook, you can use:

-```bash
+```py
 from accelerate.utils import write_basic_config

 write_basic_config()

@@ -89,7 +89,7 @@ The dataset preprocessing code and training loop are found in the [`main()`](htt

 As with the script parameters, a walkthrough of the training script is provided in the [Text-to-image](text2image#training-script) training guide. Instead, this guide takes a look at the InstructPix2Pix relevant parts of the script.

-The script begins by modifing the [number of input channels](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L445) in the first convolutional layer of the UNet to account for InstructPix2Pix's additional conditioning image:
+The script begins by modifying the [number of input channels](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L445) in the first convolutional layer of the UNet to account for InstructPix2Pix's additional conditioning image:

 ```py
 in_channels = 8
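
The excerpt stops at `in_channels = 8`; for context, the pattern behind the linked code (a sketch, which may differ in detail from the pinned revision) widens `conv_in` from 4 to 8 channels while keeping the pretrained weights:

```py
import torch
import torch.nn as nn
from diffusers import UNet2DConditionModel

# Illustrative checkpoint; the script uses --pretrained_model_name_or_path
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

in_channels = 8  # 4 latent channels + 4 conditioning-image channels
out_channels = unet.conv_in.out_channels
unet.register_to_config(in_channels=in_channels)

with torch.no_grad():
    new_conv_in = nn.Conv2d(
        in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding
    )
    new_conv_in.weight.zero_()  # new conditioning channels start at zero
    new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight)  # keep pretrained weights
    unet.conv_in = new_conv_in
```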

docs/source/en/training/kandinsky.md (+3 -3; the last two hunks only strip trailing whitespace)

@@ -59,7 +59,7 @@ accelerate config default

 Or if your environment doesn't support an interactive shell, like a notebook, you can use:

-```bash
+```py
 from accelerate.utils import write_basic_config

 write_basic_config()

@@ -235,7 +235,7 @@ accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \
   --validation_prompts="A robot pokemon, 4k photo" \
   --report_to="wandb" \
   --push_to_hub \
-  --output_dir="kandi2-prior-pokemon-model"
+  --output_dir="kandi2-prior-pokemon-model"
 ```

 </hfoption>

@@ -259,7 +259,7 @@ accelerate launch --mixed_precision="fp16" train_text_to_image_decoder.py \
   --validation_prompts="A robot pokemon, 4k photo" \
   --report_to="wandb" \
   --push_to_hub \
-  --output_dir="kandi2-decoder-pokemon-model"
+  --output_dir="kandi2-decoder-pokemon-model"
 ```

 </hfoption>

docs/source/en/training/lcm_distill.md (+2 -2; the second hunk is a whitespace-only change at the end of the file)

@@ -53,7 +53,7 @@ accelerate config default

 Or if your environment doesn't support an interactive shell, like a notebook, you can use:

-```bash
+```py
 from accelerate.utils import write_basic_config

 write_basic_config()

@@ -252,4 +252,4 @@ The SDXL training script is discussed in more detail in the [SDXL training](sdxl
 Congratulations on distilling a LCM model! To learn more about LCM, the following may be helpful:

 - Learn how to use [LCMs for inference](../using-diffusers/lcm) for text-to-image, image-to-image, and with LoRA checkpoints.
-- Read the [SDXL in 4 steps with Latent Consistency LoRAs](https://huggingface.co/blog/lcm_lora) blog post to learn more about SDXL LCM-LoRA's for super fast inference, quality comparisons, benchmarks, and more.
+- Read the [SDXL in 4 steps with Latent Consistency LoRAs](https://huggingface.co/blog/lcm_lora) blog post to learn more about SDXL LCM-LoRA's for super fast inference, quality comparisons, benchmarks, and more.

docs/source/en/training/sdxl.md (+1 -1)

@@ -59,7 +59,7 @@ accelerate config default

 Or if your environment doesn't support an interactive shell, like a notebook, you can use:

-```bash
+```py
 from accelerate.utils import write_basic_config

 write_basic_config()

docs/source/en/training/t2i_adapters.md (+1 -1)

@@ -53,7 +53,7 @@ accelerate config default

 Or if your environment doesn't support an interactive shell, like a notebook, you can use:

-```bash
+```py
 from accelerate.utils import write_basic_config

 write_basic_config()

docs/source/en/training/text2image.md (+1 -1)

@@ -69,7 +69,7 @@ accelerate config default

 Or if your environment doesn't support an interactive shell, like a notebook, you can use:

-```bash
+```py
 from accelerate.utils import write_basic_config

 write_basic_config()

docs/source/en/training/text_inversion.md (+1 -1)

@@ -67,7 +67,7 @@ accelerate config default

 Or if your environment doesn't support an interactive shell, like a notebook, you can use:

-```bash
+```py
 from accelerate.utils import write_basic_config

 write_basic_config()

docs/source/en/training/unconditional_training.md (+1 -1)

@@ -51,7 +51,7 @@ accelerate config default

 Or if your environment doesn't support an interactive shell like a notebook, you can use:

-```bash
+```py
 from accelerate.utils import write_basic_config

 write_basic_config()

docs/source/en/training/wuerstchen.md (+2 -2; the second hunk is a whitespace-only change)

@@ -53,7 +53,7 @@ accelerate config default

 Or if your environment doesn't support an interactive shell, like a notebook, you can use:

-```bash
+```py
 from accelerate.utils import write_basic_config

 write_basic_config()

@@ -173,7 +173,7 @@ pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torc

 caption = "A cute bird pokemon holding a shield"
 images = pipeline(
-    caption,
+    caption,
     width=1024,
     height=1536,
     prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS,

examples/community/README.md (+1 -1)

@@ -935,7 +935,7 @@ image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
 ### Checkpoint Merger Pipeline
 Based on the AUTOMATIC1111/webui for checkpoint merging. This is a custom pipeline that merges upto 3 pretrained model checkpoints as long as they are in the HuggingFace model_index.json format.

-The checkpoint merging is currently memory intensive as it modifies the weights of a DiffusionPipeline object in place. Expect atleast 13GB RAM Usage on Kaggle GPU kernels and
+The checkpoint merging is currently memory intensive as it modifies the weights of a DiffusionPipeline object in place. Expect at least 13GB RAM Usage on Kaggle GPU kernels and
 on colab you might run out of the 12GB memory even while merging two checkpoints.

 Usage:-

examples/community/checkpoint_merger.py (+2 -2)

@@ -103,7 +103,7 @@ def merge(self, pretrained_model_name_or_path_list: List[Union[str, os.PathLike]
         print(f"Combining with alpha={alpha}, interpolation mode={interp}")

         checkpoint_count = len(pretrained_model_name_or_path_list)
-        # Ignore result from model_index_json comparision of the two checkpoints
+        # Ignore result from model_index_json comparison of the two checkpoints
         force = kwargs.pop("force", False)

         # If less than 2 checkpoints, nothing to merge. If more than 3, not supported for now.

@@ -217,7 +217,7 @@ def merge(self, pretrained_model_name_or_path_list: List[Union[str, os.PathLike]
             ]
             checkpoint_path_2 = files[0] if len(files) > 0 else None
             # For an attr if both checkpoint_path_1 and 2 are None, ignore.
-            # If atleast one is present, deal with it according to interp method, of course only if the state_dict keys match.
+            # If at least one is present, deal with it according to interp method, of course only if the state_dict keys match.
             if checkpoint_path_1 is None and checkpoint_path_2 is None:
                 print(f"Skipping {attr}: not present in 2nd or 3d model")
                 continue
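
For context, the merge these comments describe boils down to per-tensor interpolation between checkpoints; a minimal sketch (two state dicts with matching keys assumed, not the pipeline's actual implementation):

```py
import torch

def weighted_sum_merge(theta0: dict, theta1: dict, alpha: float) -> dict:
    """Blend two model state dicts: (1 - alpha) * theta0 + alpha * theta1."""
    return {key: torch.lerp(theta0[key].float(), theta1[key].float(), alpha) for key in theta0}
```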

examples/community/latent_consistency_interpolate.py (+9 -9)

@@ -726,7 +726,7 @@ def __call__(
     callback_on_step_end_tensor_inputs (`List`, *optional*):
         The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
         will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
-        `._callback_tensor_inputs` attribute of your pipeine class.
+        `._callback_tensor_inputs` attribute of your pipeline class.
     embedding_interpolation_type (`str`, *optional*, defaults to `"lerp"`):
         The type of interpolation to use for interpolating between text embeddings. Choose between `"lerp"` and `"slerp"`.
     latent_interpolation_type (`str`, *optional*, defaults to `"slerp"`):

@@ -779,7 +779,7 @@ def __call__(
         else:
             batch_size = prompt_embeds.shape[0]
         if batch_size < 2:
-            raise ValueError(f"`prompt` must have length of atleast 2 but found {batch_size}")
+            raise ValueError(f"`prompt` must have length of at least 2 but found {batch_size}")
         if num_images_per_prompt != 1:
             raise ValueError("`num_images_per_prompt` must be `1` as no other value is supported yet")
         if prompt_embeds is not None:

@@ -883,7 +883,7 @@ def __call__(
         ) as batch_progress_bar:
             for batch_index in range(0, bs, process_batch_size):
                 batch_inference_latents = inference_latents[batch_index : batch_index + process_batch_size]
-                batch_inference_embedddings = inference_embeddings[
+                batch_inference_embeddings = inference_embeddings[
                     batch_index : batch_index + process_batch_size
                 ]

@@ -892,7 +892,7 @@ def __call__(
                 )
                 timesteps = self.scheduler.timesteps

-                current_bs = batch_inference_embedddings.shape[0]
+                current_bs = batch_inference_embeddings.shape[0]
                 w = torch.tensor(self.guidance_scale - 1).repeat(current_bs)
                 w_embedding = self.get_guidance_scale_embedding(
                     w, embedding_dim=self.unet.config.time_cond_proj_dim

@@ -901,14 +901,14 @@ def __call__(
                 # 10. Perform inference for current batch
                 with self.progress_bar(total=num_inference_steps) as progress_bar:
                     for index, t in enumerate(timesteps):
-                        batch_inference_latents = batch_inference_latents.to(batch_inference_embedddings.dtype)
+                        batch_inference_latents = batch_inference_latents.to(batch_inference_embeddings.dtype)

                         # model prediction (v-prediction, eps, x)
                         model_pred = self.unet(
                             batch_inference_latents,
                             t,
                             timestep_cond=w_embedding,
-                            encoder_hidden_states=batch_inference_embedddings,
+                            encoder_hidden_states=batch_inference_embeddings,
                             cross_attention_kwargs=self.cross_attention_kwargs,
                             return_dict=False,
                         )[0]

@@ -924,8 +924,8 @@ def __call__(
                         callback_outputs = callback_on_step_end(self, index, t, callback_kwargs)

                         batch_inference_latents = callback_outputs.pop("latents", batch_inference_latents)
-                        batch_inference_embedddings = callback_outputs.pop(
-                            "prompt_embeds", batch_inference_embedddings
+                        batch_inference_embeddings = callback_outputs.pop(
+                            "prompt_embeds", batch_inference_embeddings
                         )
                         w_embedding = callback_outputs.pop("w_embedding", w_embedding)
                         denoised = callback_outputs.pop("denoised", denoised)

@@ -939,7 +939,7 @@ def __call__(
                         step_idx = index // getattr(self.scheduler, "order", 1)
                         callback(step_idx, t, batch_inference_latents)

-                denoised = denoised.to(batch_inference_embedddings.dtype)
+                denoised = denoised.to(batch_inference_embeddings.dtype)

                 # Note: This is not supported because you would get black images in your latent walk if
                 # NSFW concept is detected
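
Beyond the spelling fixes, this pipeline's `embedding_interpolation_type` and `latent_interpolation_type` options mentioned above choose between linear and spherical interpolation. A minimal sketch of slerp (the general technique, not this pipeline's exact implementation):

```py
import torch

def slerp(v0: torch.Tensor, v1: torch.Tensor, t: float, dot_threshold: float = 0.9995) -> torch.Tensor:
    """Spherical linear interpolation between two tensors for t in [0, 1]."""
    dot = torch.sum((v0 / torch.norm(v0)) * (v1 / torch.norm(v1)))
    if torch.abs(dot) > dot_threshold:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return torch.lerp(v0, v1, t)
    theta = torch.acos(dot.clamp(-1.0, 1.0))
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)
```

Interpolating prompt embeddings or latents along the sphere rather than a straight line tends to give smoother latent walks, which is why slerp is the default for latents here.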

examples/community/lpw_stable_diffusion_xl.py (+4 -4)

@@ -164,7 +164,7 @@ def get_prompts_tokens_with_weights(clip_tokenizer: CLIPTokenizer, prompt: str):
     text_tokens (list)
         A list contains token ids
     text_weight (list)
-        A list contains the correspodent weight of token ids
+        A list contains the correspondent weight of token ids

 Example:
     import torch

@@ -1028,7 +1028,7 @@ def get_timesteps(self, num_inference_steps, strength, device, denoising_start=N
         # because `num_inference_steps` might be even given that every timestep
         # (except the highest one) is duplicated. If `num_inference_steps` is even it would
         # mean that we cut the timesteps in the middle of the denoising step
-        # (between 1st and 2nd devirative) which leads to incorrect results. By adding 1
+        # (between 1st and 2nd derivative) which leads to incorrect results. By adding 1
         # we ensure that the denoising process always ends after the 2nd derivate step of the scheduler
         num_inference_steps = num_inference_steps + 1

@@ -1531,7 +1531,7 @@ def __call__(
     callback_on_step_end_tensor_inputs (`List`, *optional*):
         The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
         will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
-        `._callback_tensor_inputs` attribute of your pipeine class.
+        `._callback_tensor_inputs` attribute of your pipeline class.

 Examples:

@@ -2131,7 +2131,7 @@ def inpaint(
         **kwargs,
     )

-    # Overrride to properly handle the loading and unloading of the additional text encoder.
+    # Override to properly handle the loading and unloading of the additional text encoder.
     def load_lora_weights(self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs):
         # We could have accessed the unet config from `lora_state_dict()` too. We pass
         # it here explicitly to be able to tell that it's coming from an SDXL
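
The `get_prompts_tokens_with_weights` docstring fixed above pairs each token id with an emphasis weight; a minimal sketch of how such per-token weights are commonly applied to embeddings (assumed shapes and rescaling strategy, not this pipeline's exact code):

```py
import torch

def apply_token_weights(token_embeds: torch.Tensor, weights: list) -> torch.Tensor:
    """Scale each token embedding by its weight, then rescale so the overall
    mean matches the unweighted embeddings (a common emphasis trick).

    token_embeds: (seq_len, dim) tensor; weights: one float per token.
    """
    w = torch.tensor(weights, dtype=token_embeds.dtype).unsqueeze(-1)  # (seq_len, 1)
    original_mean = token_embeds.float().mean()
    weighted = token_embeds * w
    return weighted * (original_mean / weighted.float().mean())
```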

examples/community/mixture_tiling.py (+2 -2)

@@ -196,7 +196,7 @@ def __call__(
     guidance_scale_tiles: specific weights for classifier-free guidance in each tile.
     guidance_scale_tiles: specific weights for classifier-free guidance in each tile. If None, the value provided in guidance_scale will be used.
     seed_tiles: specific seeds for the initialization latents in each tile. These will override the latents generated for the whole canvas using the standard seed parameter.
-    seed_tiles_mode: either "full" "exclusive". If "full", all the latents affected by the tile be overriden. If "exclusive", only the latents that are affected exclusively by this tile (and no other tiles) will be overrriden.
+    seed_tiles_mode: either "full" "exclusive". If "full", all the latents affected by the tile be overriden. If "exclusive", only the latents that are affected exclusively by this tile (and no other tiles) will be overriden.
     seed_reroll_regions: a list of tuples in the form (start row, end row, start column, end column, seed) defining regions in pixel space for which the latents will be overriden using the given seed. Takes priority over seed_tiles.
     cpu_vae: the decoder from latent space to pixel space can require too mucho GPU RAM for large images. If you find out of memory errors at the end of the generation process, try setting this parameter to True to run the decoder in CPU. Slower, but should run without memory issues.

@@ -325,7 +325,7 @@ def __call__(
         if accepts_eta:
             extra_step_kwargs["eta"] = eta

-        # Mask for tile weights strenght
+        # Mask for tile weights strength
         tile_weights = self._gaussian_weights(tile_width, tile_height, batch_size)

         # Diffusion timesteps
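
The `_gaussian_weights` call referenced above builds the per-pixel blending mask for overlapping tiles; a rough sketch of the idea (the variance choices are assumptions, not the pipeline's exact constants):

```py
import numpy as np

def gaussian_tile_weights(tile_width: int, tile_height: int) -> np.ndarray:
    """Weights that peak at the tile center and decay toward the edges,
    so overlapping tiles blend smoothly when their outputs are averaged."""
    midpoint_x = (tile_width - 1) / 2
    midpoint_y = (tile_height - 1) / 2
    x = np.exp(-((np.arange(tile_width) - midpoint_x) ** 2) / (2 * (tile_width / 4) ** 2))
    y = np.exp(-((np.arange(tile_height) - midpoint_y) ** 2) / (2 * (tile_height / 4) ** 2))
    return np.outer(y, x)  # shape (tile_height, tile_width)
```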

examples/community/pipeline_animatediff_controlnet.py (+2 -2)

@@ -832,15 +832,15 @@ def __call__(
     clip_skip (`int`, *optional*):
         Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
         the output of the pre-final layer will be used for computing the prompt embeddings.
-    allback_on_step_end (`Callable`, *optional*):
+    callback_on_step_end (`Callable`, *optional*):
         A function that calls at the end of each denoising steps during the inference. The function is called
         with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
         callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
         `callback_on_step_end_tensor_inputs`.
     callback_on_step_end_tensor_inputs (`List`, *optional*):
         The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
         will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
-        `._callback_tensor_inputs` attribute of your pipeine class.
+        `._callback_tensor_inputs` attribute of your pipeline class.

 Examples:
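
The corrected `callback_on_step_end` entry describes the standard diffusers step-end callback signature; a minimal sketch of one such callback (the guidance tweak shown is illustrative, not from this pipeline):

```py
# Sketch of a step-end callback matching the documented signature.
def turn_off_guidance(pipe, step: int, timestep: int, callback_kwargs: dict) -> dict:
    if step == 20:  # illustrative cutoff; tune for your schedule
        pipe._guidance_scale = 1.0  # disable classifier-free guidance from here on
    return callback_kwargs

# Hypothetical usage:
# pipe(..., callback_on_step_end=turn_off_guidance,
#      callback_on_step_end_tensor_inputs=["latents"])
```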

examples/community/pipeline_demofusion_sdxl.py (+1 -1)

@@ -1280,7 +1280,7 @@ def __call__(

         return output_images

-    # Overrride to properly handle the loading and unloading of the additional text encoder.
+    # Override to properly handle the loading and unloading of the additional text encoder.
     def load_lora_weights(self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs):
         # We could have accessed the unet config from `lora_state_dict()` too. We pass
         # it here explicitly to be able to tell that it's coming from an SDXL

examples/community/pipeline_sdxl_style_aligned.py (+1 -1)

@@ -887,7 +887,7 @@ def get_timesteps(self, num_inference_steps, strength, device, denoising_start=N
         # because `num_inference_steps` might be even given that every timestep
         # (except the highest one) is duplicated. If `num_inference_steps` is even it would
         # mean that we cut the timesteps in the middle of the denoising step
-        # (between 1st and 2nd devirative) which leads to incorrect results. By adding 1
+        # (between 1st and 2nd derivative) which leads to incorrect results. By adding 1
         # we ensure that the denoising process always ends after the 2nd derivate step of the scheduler
         num_inference_steps = num_inference_steps + 1