Hi, there are a few variations on what you're asking. For example, do you want to use the input image as a source and ask the model to edit something in it, or to use it as a reference? For the first use case you can study InstructPix2Pix, and for the latter ones you can study IP-Adapters, Tile ControlNets, or Flux Redux. Not all of them have training scripts, but you can learn the theory behind them.
-
Hello,
I'm asking this question as a total beginner to the world of LLMs.
There are plenty of tutorials and notebooks out there for fine-tuning an existing Stable Diffusion model, but I'm having a hard time finding an equivalent for fine-tuning a pre-trained model when the task consists of providing an input image along with a text prompt to generate an output image.
Could someone point me in the right direction here?
Thanks in advance!