The simplest way to self-host ControlNet SD 1.5 MLSD. Launch a dedicated cloud GPU server running Lab Station OS to download and serve the model using any compatible app or framework.
Download model weights for local inference. Must be used with a compatible app, notebook, or codebase. May run slowly, or not work at all, depending on your system resources, particularly GPU(s) and available VRAM.
ControlNet SD 1.5 MLSD is a Stable Diffusion variant optimized for detecting and preserving straight lines in images. It uses Mobile Line Segment Detection to process control images, making it effective for architectural visualization and geometric designs. The model employs smart resampling to maintain consistent scaling across resolutions.
ControlNet SD 1.5 MLSD is a specialized variant of the ControlNet neural network architecture designed to work with Stable Diffusion 1.5. This model focuses specifically on Mobile Line Segment Detection (MLSD), enabling precise control over straight line features in image generation. The model is part of the broader ControlNet project, which adds conditional control to text-to-image diffusion models.
The model's architecture follows the ControlNet 1.1 framework, which is an improvement over ControlNet 1.0, offering enhanced robustness and result quality. It works by taking an additional input image (the "control" image) and using it to guide the generation process. For the MLSD variant specifically, the model processes straight lines detected in the input image to guide the final image generation. This is achieved through a preprocessor (annotator) that converts input images into detectmaps, which are then fed into the ControlNet model.
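As an illustration of this flow, the sketch below uses the Hugging Face diffusers and controlnet_aux packages to run an MLSD annotator and feed the resulting detectmap into a ControlNet-conditioned SD 1.5 pipeline. The model IDs, prompt, and input URL are assumptions for demonstration, not part of this page.

```python
# Sketch: MLSD annotator -> detectmap -> ControlNet-guided generation.
# Assumes the diffusers and controlnet_aux packages; model IDs and the
# input image URL are illustrative placeholders.
import torch
from controlnet_aux import MLSDdetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# 1. Preprocess: detect straight lines in the source photo.
mlsd = MLSDdetector.from_pretrained("lllyasviel/Annotators")
source = load_image("https://example.com/building.jpg")  # placeholder URL
detectmap = mlsd(source)

# 2. Load the MLSD ControlNet and attach it to an SD 1.5 pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_mlsd", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# 3. Generate: the detectmap constrains straight-line structure,
#    while the prompt controls content and style.
image = pipe(
    "a modern glass office tower at dusk, photorealistic",
    image=detectmap,
    num_inference_steps=20,
).images[0]
image.save("mlsd_guided.png")
```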
The implementation includes smart resampling algorithms that ensure pixel-perfect control image scaling regardless of input resolution. This feature is particularly valuable when working with manually created control images or when maintaining consistency across different resolutions is crucial, as discussed in the sd-webui-controlnet repository.
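The exact resampling logic lives in the extension itself; as a rough illustration of why the filter choice matters for thin-line detectmaps, the snippet below (an assumption, not the extension's code) contrasts nearest-neighbor scaling, which keeps one-pixel lines crisp, with a smoothing filter that blurs them.

```python
# Illustration only: how the resampling filter affects a thin-line
# detectmap when scaling it to the generation resolution.
# This is not the sd-webui-controlnet implementation.
from PIL import Image

detectmap = Image.open("mlsd_detectmap.png")  # placeholder path
target_size = (768, 768)

# Nearest-neighbor keeps 1-pixel lines sharp but can shift their positions.
crisp = detectmap.resize(target_size, Image.NEAREST)

# Lanczos produces soft gray edges, which weakens the line signal.
smooth = detectmap.resize(target_size, Image.LANCZOS)

crisp.save("detectmap_nearest.png")
smooth.save("detectmap_lanczos.png")
```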
The MLSD variant excels at identifying and extracting straight lines from input images, making it particularly effective for generating images of architectural structures and man-made objects with prominent linear features. This specialization distinguishes it from other variants in the ControlNet family, such as Canny (general edge detection), Depth, OpenPose (human pose), and Scribble, each of which conditions generation on a different kind of structural input.
The model can be used within the Automatic1111 Stable Diffusion web UI via the sd-webui-controlnet extension, allowing for complex image manipulations by combining multiple ControlNets with other techniques such as LoRAs.
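A comparable multi-ControlNet workflow is also possible outside the web UI. The sketch below, assuming diffusers and the listed model IDs, combines the MLSD ControlNet with a Canny ControlNet by passing lists of models, control images, and per-model conditioning scales; the file names and weights are illustrative.

```python
# Sketch: combining the MLSD ControlNet with a second ControlNet (Canny)
# in diffusers. Model IDs, images, and scales are illustrative assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

mlsd_net = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_mlsd", torch_dtype=torch.float16
)
canny_net = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=[mlsd_net, canny_net],  # multiple ControlNets applied together
    torch_dtype=torch.float16,
).to("cuda")

mlsd_map = load_image("mlsd_detectmap.png")    # placeholder detectmaps,
canny_map = load_image("canny_detectmap.png")  # already preprocessed

image = pipe(
    "an interior atrium with glass railings",
    image=[mlsd_map, canny_map],
    # Per-ControlNet weights: let MLSD dominate, Canny assist.
    controlnet_conditioning_scale=[0.9, 0.4],
    num_inference_steps=20,
).images[0]
image.save("multi_controlnet.png")
```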
For optimal performance, a few parameter configurations are recommended. The model supports several control modes (in the sd-webui-controlnet extension: Balanced, My prompt is more important, and ControlNet is more important) that determine how strongly generation follows the text prompt versus the control image. Users working with 8GB GPUs should also set save_memory = True in config.py; a comparable low-VRAM setup for diffusers is sketched below.
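The save_memory flag applies to the original ControlNet codebase. For users running the model through diffusers instead, a rough low-VRAM equivalent (an assumption, not this page's recommendation) is to offload idle modules to the CPU and slice attention, as below.

```python
# Sketch: low-VRAM settings when running ControlNet SD 1.5 MLSD via diffusers
# on roughly 8 GB of GPU memory. This mirrors the spirit of save_memory = True;
# the exact savings will vary by system.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_mlsd", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)

# Keep only the active sub-module on the GPU, moving the rest to CPU RAM.
pipe.enable_model_cpu_offload()

# Compute attention in slices to cap peak memory at the cost of some speed.
pipe.enable_attention_slicing()
```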
The model was included in the sd-webui-controlnet 1.1.400 update released on September 4th, 2023, designed for webui versions 1.6.0 and later. It incorporates data augmentation techniques, such as random left-right flipping, to improve generalization capabilities.