docs/source/en/training/custom_diffusion.md (+2 −2)

@@ -54,7 +54,7 @@ accelerate config default
 Or if your environment doesn't support an interactive shell, like a notebook, you can use:
-```bash
+```py
 from accelerate.utils import write_basic_config
 write_basic_config()
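The relabel from `bash` to `py` is right: the snippet is Python, not shell. As a quick check, a minimal sketch of that call in a notebook cell (the optional argument shown is an assumption based on Accelerate's documented defaults):

```py
# Write a default Accelerate config file so training can run without the
# interactive `accelerate config` prompt (useful in notebooks).
from accelerate.utils import write_basic_config

write_basic_config(mixed_precision="fp16")  # assumed optional knob; defaults to "no"
```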
@@ -84,7 +84,7 @@ Many of the basic parameters are described in the [DreamBooth](dreambooth#script
 - `--freeze_model`: freezes the key and value parameters in the cross-attention layer; the default is `crossattn_kv`, but you can set it to `crossattn` to train all the parameters in the cross-attention layer
 - `--concepts_list`: to learn multiple concepts, provide a path to a JSON file containing the concepts
 - `--modifier_token`: a special word used to represent the learned concept
-- `--initializer_token`:
+- `--initializer_token`: a special word used to initialize the embeddings of the `modifier_token`
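Since `--concepts_list` expects a JSON file, here is a hedged sketch of how one might be assembled; the keys follow the Custom Diffusion example's format, while the paths, prompts, and `<new1>`/`<new2>` tokens are illustrative:

```py
# Build a concepts_list JSON for multi-concept Custom Diffusion training.
# <new1>/<new2> play the role of --modifier_token values.
import json

concepts_list = [
    {
        "instance_prompt": "photo of a <new1> cat",
        "class_prompt": "cat",
        "instance_data_dir": "./data/cat",            # hypothetical paths
        "class_data_dir": "./real_reg/samples_cat",
    },
    {
        "instance_prompt": "photo of a <new2> wooden pot",
        "class_prompt": "wooden pot",
        "instance_data_dir": "./data/wooden_pot",
        "class_data_dir": "./real_reg/samples_wooden_pot",
    },
]

with open("concepts_list.json", "w") as f:
    json.dump(concepts_list, f, indent=4)
```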
docs/source/en/training/instructpix2pix.md (+2 −2)

@@ -51,7 +51,7 @@ accelerate config default
 Or if your environment doesn't support an interactive shell, like a notebook, you can use:
-```bash
+```py
 from accelerate.utils import write_basic_config
 write_basic_config()
@@ -89,7 +89,7 @@ The dataset preprocessing code and training loop are found in the [`main()`](htt
 As with the script parameters, a walkthrough of the training script is provided in the [Text-to-image](text2image#training-script) training guide. Instead, this guide takes a look at the InstructPix2Pix relevant parts of the script.
-The script begins by modifing the [number of input channels](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L445) in the first convolutional layer of the UNet to account for InstructPix2Pix's additional conditioning image:
+The script begins by modifying the [number of input channels](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L445) in the first convolutional layer of the UNet to account for InstructPix2Pix's additional conditioning image:
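The sentence fixed above describes a concrete surgery on the model. A hedged sketch of it, approximating the logic at the linked line rather than quoting it: the latent image has 4 channels, and concatenating the encoded conditioning image doubles that to 8, so the first conv layer is rebuilt with the pretrained weights copied in and the new channels zero-initialized.

```py
# Expand the UNet's first conv layer from 4 to 8 input channels so the
# noisy latents can be concatenated with the conditioning image's latents.
import torch
import torch.nn as nn
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"  # example checkpoint
)

in_channels = 8                          # 4 latent + 4 conditioning channels
out_channels = unet.conv_in.out_channels
unet.register_to_config(in_channels=in_channels)

with torch.no_grad():
    new_conv_in = nn.Conv2d(
        in_channels, out_channels,
        unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding,
    )
    new_conv_in.weight.zero_()                                   # new channels start at zero
    new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight)   # keep pretrained weights
    unet.conv_in = new_conv_in
```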
docs/source/en/training/lcm_distill.md (+2 −2)

@@ -53,7 +53,7 @@ accelerate config default
 Or if your environment doesn't support an interactive shell, like a notebook, you can use:
-```bash
+```py
 from accelerate.utils import write_basic_config
 write_basic_config()
@@ -252,4 +252,4 @@ The SDXL training script is discussed in more detail in the [SDXL training](sdxl
 Congratulations on distilling a LCM model! To learn more about LCM, the following may be helpful:
 - Learn how to use [LCMs for inference](../using-diffusers/lcm) for text-to-image, image-to-image, and with LoRA checkpoints.
-- Read the [SDXL in 4 steps with Latent Consistency LoRAs](https://huggingface.co/blog/lcm_lora) blog post to learn more about SDXL LCM-LoRA's for super fast inference, quality comparisons, benchmarks, and more.
\ No newline at end of file
+- Read the [SDXL in 4 steps with Latent Consistency LoRAs](https://huggingface.co/blog/lcm_lora) blog post to learn more about SDXL LCM-LoRA's for super fast inference, quality comparisons, benchmarks, and more.
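The first bullet points at the inference guide; a hedged sketch of what that looks like for text-to-image, where the model ID is one example LCM checkpoint:

```py
# Few-step text-to-image with a latent consistency model: swap in
# LCMScheduler and use very few steps with low guidance.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=4,   # LCMs converge in 4-8 steps
    guidance_scale=1.0,      # little or no classifier-free guidance
).images[0]
```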
@@ -937,3 +937,3 @@
 Based on the AUTOMATIC1111/webui for checkpoint merging. This is a custom pipeline that merges upto 3 pretrained model checkpoints as long as they are in the HuggingFace model_index.json format.
-The checkpoint merging is currently memory intensive as it modifies the weights of a DiffusionPipeline object in place. Expect atleast 13GB RAM Usage on Kaggle GPU kernels and
+The checkpoint merging is currently memory intensive as it modifies the weights of a DiffusionPipeline object in place. Expect at least 13GB RAM Usage on Kaggle GPU kernels and
 on colab you might run out of the 12GB memory even while merging two checkpoints.
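For reference, a hedged sketch of how this community checkpoint-merger pipeline is typically driven; the model IDs and interpolation settings are illustrative, modeled on the community README's example:

```py
# Merge HuggingFace-format checkpoints in place; RAM-heavy, as warned above.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", custom_pipeline="checkpoint_merger"
)
merged_pipe = pipe.merge(
    ["CompVis/stable-diffusion-v1-4", "prompthero/openjourney"],
    interp="sigmoid",  # interpolation strategy between the two checkpoints
    alpha=0.4,         # how far to move toward the second checkpoint
)
```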
examples/community/mixture_tiling.py (+2 −2)

@@ -196,7 +196,7 @@ def __call__(
 guidance_scale_tiles: specific weights for classifier-free guidance in each tile.
 guidance_scale_tiles: specific weights for classifier-free guidance in each tile. If None, the value provided in guidance_scale will be used.
 seed_tiles: specific seeds for the initialization latents in each tile. These will override the latents generated for the whole canvas using the standard seed parameter.
-seed_tiles_mode: either "full" "exclusive". If "full", all the latents affected by the tile be overriden. If "exclusive", only the latents that are affected exclusively by this tile (and no other tiles) will be overrriden.
+seed_tiles_mode: either "full" "exclusive". If "full", all the latents affected by the tile be overriden. If "exclusive", only the latents that are affected exclusively by this tile (and no other tiles) will be overriden.
 seed_reroll_regions: a list of tuples in the form (start row, end row, start column, end column, seed) defining regions in pixel space for which the latents will be overriden using the given seed. Takes priority over seed_tiles.
 cpu_vae: the decoder from latent space to pixel space can require too mucho GPU RAM for large images. If you find out of memory errors at the end of the generation process, try setting this parameter to True to run the decoder in CPU. Slower, but should run without memory issues.
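To ground the `seed_tiles_mode` parameter documented in that docstring, a hedged usage sketch of the tiling pipeline; the grid layout, seeds, and scheduler settings are illustrative, modeled on the community README:

```py
# Tiled generation: `prompt` is a list of lists, one inner list per row of
# tiles; per-tile seeds can override the canvas-wide `seed`.
from diffusers import DiffusionPipeline, LMSDiscreteScheduler

scheduler = LMSDiscreteScheduler(
    beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
)
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="mixture_tiling",
    scheduler=scheduler,
).to("cuda")

image = pipe(
    prompt=[["a forest", "a misty mountain", "a lake at dawn"]],  # 1 row x 3 tiles
    tile_height=640,
    tile_width=640,
    tile_row_overlap=0,
    tile_col_overlap=256,
    seed=7178915308,
    seed_tiles=[[100, 200, 300]],  # one seed per tile
    seed_tiles_mode="full",        # "full" overrides every latent the tile touches
    guidance_scale=8,
    num_inference_steps=50,
)["images"][0]
```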