Flex.1 Alpha is an 8B parameter text-to-image model featuring 8 double transformer blocks and a trained guidance embedder that eliminates the need for CFG. It evolved from FLUX.1 through strategic pruning and compression, maintaining output quality while reducing computational complexity. Compatible with standard inference engines and optimized for fine-tuning.
Flex.1 Alpha represents a significant advancement in text-to-image AI models, featuring an 8-billion parameter rectified flow transformer architecture. The model operates with fewer double transformer blocks compared to its relative FLUX.1-dev (8 versus 19), making it more efficient while maintaining strong performance capabilities. Originally developed as a fine-tune of FLUX.1-schnell, the model retains the Apache 2.0 license, ensuring open accessibility for researchers and developers.
A distinctive feature of Flex.1 Alpha is its trained guidance embedder, which eliminates the need for Classifier-Free Guidance (CFG) during image generation. This innovation streamlines the generation process while maintaining high-quality outputs. The model can process text descriptions up to 512 tokens in length, providing substantial flexibility for detailed prompts.
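The contrast can be sketched in a few lines. The code below is illustrative only, not Flex.1 Alpha's actual implementation: classic CFG needs two model evaluations per denoising step and combines them arithmetically, while an embedded-guidance model takes the guidance scale as an extra input and needs only one pass.

```python
def cfg_combine(uncond_pred, cond_pred, guidance_scale):
    """Classic CFG: extrapolate from the unconditional prediction
    toward the conditional one. Requires TWO model evaluations."""
    return [u + guidance_scale * (c - u)
            for u, c in zip(uncond_pred, cond_pred)]

def embedded_guidance_step(model, latents, prompt_emb, guidance_scale):
    """With a trained guidance embedder, the scale is fed into the
    network as conditioning, so ONE forward pass per step suffices."""
    return model(latents, prompt_emb, guidance_scale)

if __name__ == "__main__":
    uncond = [1.0, 2.0]
    cond = [3.0, 4.0]
    print(cfg_combine(uncond, cond, 2.0))  # -> [5.0, 6.0]

    # Stand-in for a real diffusion transformer that consumes the scale:
    toy_model = lambda latents, emb, g: [g * x for x in latents]
    print(embedded_guidance_step(toy_model, [1.0], None, 3.5))  # -> [3.5]
```

This is why eliminating CFG roughly halves inference cost at a given step count: each denoising step calls the network once instead of twice.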
Flex.1 Alpha began as a training adapter for FLUX.1-schnell, designed specifically to enable LoRA training. The adapter was merged into FLUX.1-schnell, and the merged model was trained further on images generated by that model to improve compression, producing OpenFLUX.1, which went through ten distinct versions.
The development process also included extensive pruning experiments, producing unreleased 7B and 4B parameter versions; the final 8B configuration was informed by the pruning strategy of flux.1-lite-8B-alpha. A further enhancement was the training of a separate guidance embedder, which can be used optionally, improving both flexibility and trainability.
Flex.1 Alpha is compatible with most inference engines that support FLUX.1-dev, including Diffusers and ComfyUI. For ComfyUI, place the Flex.1-alpha.safetensors file in the checkpoints folder. The model was designed with fine-tuning in mind; optimal results are achieved by bypassing the guidance embedder during fine-tuning.
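As a convenience, the placement step could be scripted as below. The paths are assumptions: the text only says the file goes in the checkpoints folder, and `~/ComfyUI/models/checkpoints` is ComfyUI's default layout, so adjust `comfyui_dir` to your install.

```python
import shutil
from pathlib import Path

def install_checkpoint(weights="Flex.1-alpha.safetensors",
                       comfyui_dir=Path.home() / "ComfyUI"):
    """Move downloaded weights into ComfyUI's checkpoints folder.

    Assumes ComfyUI's default models/checkpoints layout; both the
    weights path and comfyui_dir are illustrative defaults.
    """
    dest_dir = Path(comfyui_dir) / "models" / "checkpoints"
    dest_dir.mkdir(parents=True, exist_ok=True)
    src = Path(weights)
    if src.exists():
        shutil.move(str(src), str(dest_dir / src.name))
    return dest_dir / src.name

if __name__ == "__main__":
    print(install_checkpoint())  # no-op if the weights file is absent
```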
Day 1 LoRA training support is available through the AI-Toolkit, providing users with robust tools for model customization and enhancement. Example configurations for training LoRAs can be found in the AI-Toolkit example config.
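For readers unfamiliar with what LoRA training adds, here is the general technique in miniature (this sketches LoRA itself, not AI-Toolkit's internals): rather than fine-tuning a full weight matrix W, two small low-rank factors A and B are trained, and the effective weight becomes W + (alpha / r) * B @ A.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_weight(W, A, B, alpha, r):
    """Effective weight under LoRA: W + (alpha / r) * (B @ A).

    A has shape (r, in_features), B has shape (out_features, r);
    only A and B are trained, W stays frozen.
    """
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

if __name__ == "__main__":
    W = [[1.0, 0.0], [0.0, 1.0]]  # frozen 2x2 base weight
    A = [[1.0, 0.0]]              # rank r=1, in_features=2
    B = [[0.0], [2.0]]            # out_features=2, rank r=1
    print(lora_weight(W, A, B, alpha=1.0, r=1))  # -> [[1.0, 0.0], [2.0, 1.0]]
```

Because only A and B (r × in plus out × r parameters) are optimized, LoRA fine-tuning needs far less memory than full fine-tuning, which is what makes day-1 customization of an 8B model practical.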