BREAKING CHANGE: we have 3 different images now instead of just one: base, sdxl and sd3
* ci: use branch name for creating dev releases
* ci: replace "/" with "-" to have a valid tag name
* ci: correctly handle the tag name
* ci: build an image that contains sd3 using docker bake
* ci: use "set" instead of "args"
* ci: use "env" instead of "set"
* ci: use variables instead of args
* ci: set variables directly for the targets
* ci: write the secrets into the GITHUB_ENV
* ci: handle env variables correctly
* ci: use env variables from GitHub Variables
* ci: added back to env
* ci: print out env
* ci: adding the vars directly into the workflow
* ci: example workflow for sd3
* ci: renamed DOCKERHUB_REPO to DOCKERHUB_REPOSITORY
* ci: removed quotes for DOCKERHUB_REPOSITORY
* ci: only use DOCKERHUB_REPO in bake
* ci: added vars into sd3 target
* ci: added direct target
* ci: back to basics
* ci: multi-stage build to not expose the HUGGINGFACE_ACCESS_TOKEN
* ci: write everything into GITHUB_ENV again
* ci: use correct name for final stage
* ci: use correct runner
* fix: make sure to use the latest versions of all packages
* ci: simplified variables for all targets
* docs: added 3 images, updated build your own image
* docs: updated TOC
* ci: updated name
* ci: use docker bake to publish 3 images instead of just 1
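Several of the CI commits above deal with turning a branch name into a valid Docker tag and building the three images in one go with docker bake. A minimal sketch of both steps (the branch name and bake target names are hypothetical, not the repo's exact values; the bake invocation is commented out because it needs a Docker daemon):

```shell
# A branch name such as "feat/sd3" is not a valid Docker tag because of the
# "/", so the dev workflow replaces it with "-":
BRANCH="feat/sd3"                            # hypothetical branch name
DEV_TAG="dev-$(echo "$BRANCH" | tr '/' '-')"
echo "$DEV_TAG"                              # prints: dev-feat-sd3

# The three images can then be built and pushed in one invocation via bake
# (target names assumed to mirror the image suffixes):
# docker buildx bake --push base sdxl sd3
```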
Dockerfile: 28 additions & 11 deletions
@@ -1,4 +1,4 @@
-#Use Nvidia CUDA base image
+#Stage 1: Base image with common dependencies
 FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04 as base

 # Prevents prompts from packages asking for user input during installation
@@ -24,16 +24,9 @@ RUN git clone https://github.com/comfyanonymous/ComfyUI.git /comfyui
 # Change working directory to ComfyUI
 WORKDIR /comfyui

-ARG SKIP_DEFAULT_MODELS
-# Download checkpoints/vae/LoRA to include in image.
-RUN if [ -z "$SKIP_DEFAULT_MODELS" ]; then wget -O models/checkpoints/sd_xl_base_1.0.safetensors https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors; fi
-RUN if [ -z "$SKIP_DEFAULT_MODELS" ]; then wget -O models/vae/sdxl_vae.safetensors https://huggingface.co/stabilityai/sdxl-vae/resolve/main/sdxl_vae.safetensors; fi
-RUN if [ -z "$SKIP_DEFAULT_MODELS" ]; then wget -O models/vae/sdxl-vae-fp16-fix.safetensors https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/resolve/main/sdxl_vae.safetensors; fi
-
 # Install ComfyUI dependencies
-RUN pip3 install --no-cache-dir torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 \
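One of the commits above, "multi-stage build to not expose the HUGGINGFACE_ACCESS_TOKEN", is what the restructured Dockerfile enables. The idea can be sketched as follows; the stage name `final` and local tag are assumptions, and the docker commands are shown commented out because they need a local daemon:

```shell
IMAGE="runpod-worker-comfy:3.0.0-sd3"   # hypothetical local tag

# The token is passed as a build arg, but only an intermediate stage uses it;
# tagging the final stage means the token-bearing layers are left behind:
# docker build --build-arg HUGGINGFACE_ACCESS_TOKEN="$HF_TOKEN" \
#   --target final -t "$IMAGE" .

# Sanity check that no layer metadata in the published image mentions the token:
# docker history --no-trunc "$IMAGE" | grep HUGGINGFACE_ACCESS_TOKEN
echo "$IMAGE"
```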
-- 🐳 Use the latest release of the image for your worker: [timpietruskyblibla/runpod-worker-comfy:2.1.3](https://hub.docker.com/r/timpietruskyblibla/runpod-worker-comfy)
+- 🐳 Choose one of the three available images for your serverless endpoint:
+  - `timpietruskyblibla/runpod-worker-comfy:3.0.0-base`: doesn't contain any checkpoints, just a clean ComfyUI image
+  - `timpietruskyblibla/runpod-worker-comfy:3.0.0-sdxl`: contains the checkpoints and VAE for Stable Diffusion XL
+  - `timpietruskyblibla/runpod-worker-comfy:3.0.0-sd3`: contains the medium checkpoint for Stable Diffusion 3
 - ⚙️ [Set the environment variables](#config)
 - ℹ️ [Use the Docker image on RunPod](#use-the-docker-image-on-runpod)

+- `<version>-sd3`: contains the checkpoint [sd3_medium_incl_clips_t5xxlfp8.safetensors](https://huggingface.co/stabilityai/stable-diffusion-3-medium) for Stable Diffusion 3
 - [Bring your own models](#bring-your-own-models)
 - Based on [Ubuntu + NVIDIA CUDA](https://hub.docker.com/r/nvidia/cuda)
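The README section above lists three image variants; pulling one of them locally looks like this (the tag is copied from the diff, and the pull itself is commented out since it needs network access and a Docker daemon):

```shell
TAG="timpietruskyblibla/runpod-worker-comfy:3.0.0-sdxl"   # one of the three tags from the diff
# docker pull "$TAG"
echo "$TAG"
```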
@@ -98,10 +104,10 @@ This is only needed if you want to upload the generated picture to AWS S3. If yo
 - In the dialog, configure:
   - Template Name: `runpod-worker-comfy` (it can be anything you want)
   - Template Type: serverless (change template type to "serverless")
-  - Container Image: `<dockerhub_username>/<repository_name>:tag`, in this case: `timpietruskyblibla/runpod-worker-comfy:2.1.3` (or `dev` if you want to have the development release)
+  - Container Image: `<dockerhub_username>/<repository_name>:tag`, in this case: `timpietruskyblibla/runpod-worker-comfy:3.0.0-sd3` (or `-base` for a clean image or `-sdxl` for Stable Diffusion XL)
   - Container Registry Credentials: You can leave everything as it is, as this repo is public
   - Note: You can also not configure it, the images will then stay in the worker. In order to have them stored permanently, [we have to add the network volume](https://github.com/blib-la/runpod-worker-comfy/issues/1)
 - Click on `Save Template`
 - Navigate to [`Serverless > Endpoints`](https://www.runpod.io/console/serverless/user/endpoints) and click on `New Endpoint`
@@ -112,7 +118,7 @@ This is only needed if you want to upload the generated picture to AWS S3. If yo
 - Max Workers: `3` (whatever makes sense for you)
 - Idle Timeout: `5` (you can leave the default)
 - Flash Boot: `enabled` (doesn't cost more, but provides faster boot of our worker, which is good)
-- Advanced: If you are using a Network Volume, select it under `Select Network Volume`. Otherwise leave the defaults.
+- (optional) Advanced: If you are using a Network Volume, select it under `Select Network Volume`. Otherwise leave the defaults.
 - Select a GPU that has some availability
 - GPUs/Worker: `1`
 - Click `deploy`
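Once the endpoint is deployed, a job can be submitted to it over the RunPod serverless API. A hedged sketch of a synchronous request (the endpoint ID is a placeholder, the payload shows only the minimal shape rather than a real ComfyUI workflow, and the curl call is commented out because it needs a valid API key):

```shell
ENDPOINT_ID="your-endpoint-id"          # placeholder, shown on the endpoint page
PAYLOAD='{"input":{"workflow":{}}}'     # minimal shape; a real request carries a full ComfyUI workflow

# Submit a job synchronously (requires a RunPod API key in RUNPOD_API_KEY):
# curl -s -X POST "https://api.runpod.ai/v2/${ENDPOINT_ID}/runsync" \
#   -H "Authorization: Bearer ${RUNPOD_API_KEY}" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
echo "$PAYLOAD"
```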
@@ -283,15 +289,21 @@ If you prefer to include your models directly in the Docker image, follow these
 > Ensure to specify `--platform linux/amd64` to avoid errors on RunPod, see [issue #13](https://github.com/blib-la/runpod-worker-comfy/issues/13).

 ## Local testing
@@ -385,14 +397,19 @@ The repo contains two workflows that publish the image to Docker hub using GitHu
 - [dev.yml](.github/workflows/dev.yml): Creates the image and pushes it to Docker hub with the `dev` tag on every push to the `main` branch
 - [release.yml](.github/workflows/release.yml): Creates the image and pushes it to Docker hub with the `latest` and the release tag. It will only be triggered when you create a release on GitHub

-If you want to use this, you should add these secrets to your repository:
+If you want to use this, you should add these **secrets** to your repository:

 | Configuration Variable | Description | Example Value |
"text": "comic illustration of a white unicorn with a golden horn and pink mane and tail standing amidst a colorful and magical fantasy landscape. The background is filled with pastel-colored mountains and fluffy clouds and colorful balloons and stars. There are vibrant rainbows arching across the sky. The ground is adorned with oversized, candy-like plants, trees shaped like lollipops, and swirling ice cream cones. The scene is bathed in soft, dreamy light, giving it an enchanting and otherworldly feel. 4k, high resolution",