The generations I currently do follow this workflow:

Start:

  • Text-based image generation to find a good base image
  • Take the output and scale it up 4x to get more pixels to work with

Circular part:

  • Img2img inpainting on the masked area only, at 1024x1024, to achieve more detail
  • Change the text prompt or add a LoRA to get specific results for the masked area
  • Save the resulting image, remove the mask, and draw a new mask on the updated image

End:

  • When the results are good enough, scale down by 2x and save the image
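To make the control flow explicit, the workflow above can be sketched as a loop in which each inpainting pass consumes the previous pass's output. The stage functions below are hypothetical stand-ins (they just record what was done to the image), not real ComfyUI or Automatic1111 calls:

```python
def generate_base(prompt: str) -> str:
    """Start: text-to-image pass to find a good base image."""
    return f"base({prompt})"

def upscale(image: str, factor: int) -> str:
    """Start: scale up to get more pixels to work with."""
    return f"upscale{factor}x({image})"

def inpaint_masked(image: str, mask: str, prompt: str) -> str:
    """Circular part: img2img inpaint on the masked region at 1024x1024."""
    return f"inpaint({image}, mask={mask}, prompt={prompt})"

def downscale(image: str, factor: int) -> str:
    """End: scale back down before saving."""
    return f"downscale{factor}x({image})"

def run_workflow(base_prompt: str, mask_passes: list[tuple[str, str]]) -> str:
    # Start
    image = upscale(generate_base(base_prompt), factor=4)
    # Circular part: each pass feeds on the previous pass's output
    for mask, region_prompt in mask_passes:
        image = inpaint_masked(image, mask, region_prompt)
    # End
    return downscale(image, factor=2)

result = run_workflow("castle at dusk",
                      [("face_mask", "detailed face"),
                       ("sky_mask", "storm clouds")])
print(result)
```

The key point is that the loop variable `image` is reassigned on every pass, which is exactly the "output becomes input" step that is easy in Automatic1111's tabs but awkward to express in a static ComfyUI graph.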

I can get the Start and End parts working, but the circular part, where the output image is used as input, doesn't seem to work in ComfyUI no matter how I try, short of doing tons of manual steps.

Right now, in the Automatic1111 UI, I can simply send the output image to the next tab, or back to the inpaint image input, whenever I'm happy with one of the masked results.

I'd love to be able to use the output as input in ComfyUI as well, if that's possible.
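One way to avoid the manual steps, if you're willing to drive ComfyUI through its HTTP API (the `POST /prompt` endpoint on the default local server), is to rewrite the API-format workflow JSON between runs so the `LoadImage` node points at the last saved output. A minimal sketch; the node id `"10"` and the filenames are hypothetical, and a real workflow dict would contain many more nodes:

```python
import copy

def feed_output_back(workflow: dict, load_node_id: str, output_filename: str) -> dict:
    """Return a copy of an API-format ComfyUI workflow with the given
    LoadImage node rewired to read the previous run's output file."""
    wf = copy.deepcopy(workflow)
    node = wf[load_node_id]
    assert node["class_type"] == "LoadImage"
    node["inputs"]["image"] = output_filename
    return wf

# Tiny API-format fragment standing in for an exported workflow
workflow = {
    "10": {"class_type": "LoadImage", "inputs": {"image": "initial.png"}},
}

next_round = feed_output_back(workflow, "10", "ComfyUI_00042_.png")
# next_round could then be queued by POSTing {"prompt": next_round} as JSON
# to http://127.0.0.1:8188/prompt -- assuming a default local ComfyUI server.
```

Each iteration of the circular part then becomes: queue the rewired workflow, wait for the output file, and call `feed_output_back` again with the new filename and the next mask/prompt changes.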