The simplest way to self-host ControlNet SD 1.5 Inpaint. Launch a dedicated cloud GPU server running Lab Station OS to download and serve the model using any compatible app or framework.
Download model weights for local inference. Must be used with a compatible app, notebook, or codebase. May run slowly, or not work at all, depending on your system resources, particularly GPU(s) and available VRAM.
ControlNet SD 1.5 Inpaint enables selective image editing through dual-mask training (random and optical flow masks) and preserves unmodified areas. It handles thick input strokes and offers three control modes to balance prompt vs. control image influence. Built on Stable Diffusion 1.5 with specific inpainting optimizations.
ControlNet SD 1.5 Inpaint is a specialized model within the ControlNet 1.1 family, designed to provide precise control over image inpainting tasks while working with Stable Diffusion 1.5. The model represents a significant advancement in controlled image generation, particularly in areas requiring selective image modification and restoration.
The model maintains the same architecture as ControlNet 1.0, as noted in the ControlNet v1.1 repository. Its training dataset consists of a balanced mix of 50% random masks and 50% random optical flow occlusion masks, enabling it to handle both traditional inpainting tasks and video optical flow warping scenarios. This dual-purpose training approach makes it particularly versatile for various applications.
A key architectural feature is its ability to process input images through a neural network to create "detectmaps," which then guide the Stable Diffusion image generation process. The model demonstrates robust performance with thick scribbles up to 24 pixels wide in a 512-pixel canvas, making it practical for real-world applications.
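To make this concrete, the sketch below shows one way the control image can be prepared and fed to the model with the Hugging Face diffusers library. The checkpoint IDs, file names, and the convention of marking masked pixels with -1.0 follow diffusers' published example for this model; treat them as assumptions rather than the only supported workflow.

```python
# Minimal sketch (assumes the Hugging Face diffusers library and the
# lllyasviel/control_v11p_sd15_inpaint checkpoint; adjust IDs and paths as needed).
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler

def make_inpaint_condition(image: Image.Image, mask: Image.Image) -> torch.Tensor:
    """Build the inpaint control tensor: pixels under the mask are set to -1."""
    img = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    m = np.array(mask.convert("L")).astype(np.float32) / 255.0
    img[m > 0.5] = -1.0                                   # mark pixels to be repainted
    img = np.expand_dims(img, 0).transpose(0, 3, 1, 2)    # HWC -> NCHW
    return torch.from_numpy(img)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()                           # keeps peak VRAM use modest

source = Image.open("photo.png").resize((512, 512))
mask = Image.open("mask.png").resize((512, 512))          # white = area to repaint
control = make_inpaint_condition(source, mask)

result = pipe(
    "a red brick wall, high quality",
    image=control,
    num_inference_steps=30,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
result.save("inpainted.png")
```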
The Inpaint model received a significant update in May 2023 that improved its ability to preserve unmasked areas, addressing a common concern in inpainting applications. This feature ensures that modifications remain confined to the intended areas, providing more precise control over the final output.
The model can be used as part of a larger workflow, supporting integration with other ControlNet models through the Multi-ControlNet feature. This capability allows for complex image manipulations combining multiple control types, such as inpainting with pose estimation or depth control.
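As an illustration of such a workflow, the diffusers API accepts a list of ControlNets together with a matching list of conditioning images. The sketch below combines the Inpaint model with an OpenPose model; the model IDs and conditioning scales are illustrative assumptions, and it reuses the make_inpaint_condition helper from the previous sketch.

```python
# Sketch only: Multi-ControlNet in diffusers, pairing inpaint with OpenPose.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnets = [
    ControlNetModel.from_pretrained(
        "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
    ),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# inpaint_control uses the make_inpaint_condition helper from the earlier sketch;
# pose_image is the output of an OpenPose preprocessor saved to disk.
inpaint_control = make_inpaint_condition(Image.open("photo.png"), Image.open("mask.png"))
pose_image = Image.open("pose.png")

result = pipe(
    "a person dancing in a park",
    image=[inpaint_control, pose_image],           # one conditioning image per ControlNet
    controlnet_conditioning_scale=[1.0, 0.8],      # per-model influence weights
    num_inference_steps=30,
).images[0]
```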
The primary method for using ControlNet SD 1.5 Inpaint is through the Automatic1111 Stable Diffusion WebUI with the ControlNet extension. The model supports various memory optimization options, including a save_memory = True setting in config.py for users with 8GB GPUs, as detailed in the GitHub documentation.
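For reference, the low-VRAM switch in the original repository is a single flag; the commented lines below show rough diffusers-side equivalents (illustrative, not exhaustive).

```python
# In the lllyasviel/ControlNet repository, low-VRAM mode is one flag in config.py:
save_memory = True

# Rough diffusers-side equivalents (assumes a pipeline object named `pipe`):
# pipe.enable_model_cpu_offload()     # move submodules to the GPU only when needed
# pipe.enable_attention_slicing()     # trade some speed for lower peak VRAM
```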
For optimal results, users can adjust the balance between text prompts and control images using the extension's three control modes: Balanced, My prompt is more important, and ControlNet is more important.
These modes are particularly useful when working with experimental models or when seeking specific output characteristics.
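When working through diffusers rather than the WebUI, a comparable balance is usually struck with controlnet_conditioning_scale, guidance_scale, and guess_mode. The sketch below is an approximate analogue only, not the WebUI setting itself; parameter values are illustrative, and it assumes the pipe and control objects from the first sketch.

```python
# Illustrative only: approximating the prompt-vs-control balance in diffusers.
# Assumes `pipe` and `control` were built as in the first sketch.

# Balanced: default behaviour, prompt and control weighted as trained.
img_balanced = pipe("a tiled kitchen floor", image=control).images[0]

# Favour the prompt: weaken the ControlNet signal.
img_prompt = pipe(
    "a tiled kitchen floor", image=control, controlnet_conditioning_scale=0.5
).images[0]

# Favour the control image: guess_mode lets the ControlNet drive even with an empty prompt.
img_control = pipe(
    "", image=control, guess_mode=True, guidance_scale=3.0
).images[0]
```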
Within the ControlNet 1.1 family, which includes fourteen different models, the Inpaint variant stands out for its specialized focus on selective image modification. While other variants like Canny, Depth, and OpenPose focus on specific aspects of image control (edges, depth information, and pose estimation respectively), the Inpaint model specifically addresses the challenge of seamless image modification and restoration.
The model family has also evolved to include more efficient implementations through Control-LoRA variants, which reduce model size significantly while maintaining functionality. However, these variants may have limitations when working with manually created sketches or new content generated without source images.