
Stable Diffusion

Stable Diffusion is a cutting-edge text-to-image model that generates photo-realistic images from text descriptions. It is built on the Latent Diffusion Model architecture and was trained on a subset of the LAION-5B database using a frozen CLIP ViT-L/14 text encoder. With an 860M-parameter UNet and a 123M-parameter text encoder, the model is relatively lightweight and can run on a GPU with at least 10GB of VRAM.

Stable Diffusion enables users to generate striking images within seconds simply by providing a text prompt. The model is designed to empower individual creativity, producing compelling imagery from nothing more than a short text description.

Thanks to a generous compute donation from Stability AI and support from LAION, Stable Diffusion pushes the boundaries of text-to-image generation and empowers billions of people to create art like never before. To learn more about Stable Diffusion and how to use it, refer to the model card and documentation provided by its creators.
