Run Your Own AI Image Generator: Local Setup with Docker and Open WebUI


Overview

We've all been there: you need a few images for a project, fire up an AI image service, and suddenly you're counting credits, worrying about prompt privacy, or battling a "safe content" filter that rejects your perfectly reasonable request for a dragon in a business suit. What if you could skip the cloud entirely and run the whole pipeline on your own machine—with a sleek chat interface to boot?

Source: www.docker.com

That's exactly what Docker Model Runner now enables. With just a couple of terminal commands, you can pull an image-generation model, connect it to Open WebUI, and start generating images from a chat interface—all fully local, fully private, and entirely under your control.

In this guide, you'll build your own private DALL-E replacement. No cloud subscription, no data leaving your computer, just your hardware and a few open‑source pieces. Let's dive in.

Prerequisites

Before you begin, make sure you have the following:

  1. A recent version of Docker Desktop (or Docker Engine with the Model Runner plugin) installed and up to date
  2. Roughly 8 GB of free disk space for the default Stable Diffusion model (the download is about 7 GB)
  3. Enough free RAM to load the model; a supported GPU is optional but speeds up generation considerably

Step‑by‑Step Instructions

1. Pull an Image Generation Model

Docker Model Runner uses a compact packaging format called DDUF (DDUF Diffusion Unified Format) to distribute image generation models through Docker Hub, just like any other OCI artifact. The model is stored locally as a single file that bundles every component of a diffusion pipeline: text encoder, VAE, UNet (or DiT), and scheduler config.
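As an aside, the DDUF spec defines the container as a ZIP-based archive, so if you grab a .dduf file from Hugging Face you can list its contents with nothing but Python's standard library. A quick sketch (the file name is illustrative):

```python
# list_dduf.py -- sketch: DDUF is a ZIP-based container, so the standard
# zipfile module can enumerate the bundled components. The path you pass
# in is whatever .dduf file you have on disk.
import zipfile

def list_components(path):
    """Return the file names bundled inside a DDUF archive."""
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()

# Usage:
#   for name in list_components("stable-diffusion-xl-base-1.0-FP16.dduf"):
#       print(name)
```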

Open a terminal and pull the Stable Diffusion model:

docker model pull stable-diffusion

This downloads the model (about 7 GB for the default variant). You can verify it was pulled correctly:

docker model inspect stable-diffusion

You should see output similar to:

{
    "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
    "tags": ["docker.io/ai/stable-diffusion:latest"],
    "created": 1768470632,
    "config": {
        "format": "diffusers",
        "architecture": "diffusers",
        "size": "6.94GB",
        "diffusers": {
            "dduf_file": "stable-diffusion-xl-base-1.0-FP16.dduf",
            "layout": "dduf"
        }
    }
}

What's happening under the hood? The DDUF file bundles all necessary components—the text encoder for understanding your prompt, the VAE for decoding latent images, and the core denoising U-Net (or DiT for newer models). Docker Model Runner will unpack and load this model at runtime, exposing an OpenAI‑compatible API endpoint (including POST /v1/images/generations).
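Because the endpoint speaks the OpenAI images API, any HTTP client can drive it directly, without Open WebUI in the middle. Here's a minimal Python sketch; the base URL is an assumption (use whatever address Docker Model Runner reports on your machine), and the request fields follow the standard OpenAI schema:

```python
# generate.py -- hedged sketch: BASE_URL is an assumption; check the
# address your local Docker Model Runner actually listens on.
import base64
import json
import urllib.request

BASE_URL = "http://localhost:12434/engines/v1"  # assumed default, verify locally

def build_request(prompt, model="stable-diffusion", size="1024x1024"):
    """Build an OpenAI-style images/generations payload."""
    return {
        "model": model,
        "prompt": prompt,
        "n": 1,
        "size": size,
        "response_format": "b64_json",  # ask for base64 instead of a URL
    }

def generate_image(prompt, **kwargs):
    """POST the prompt and return the decoded image bytes."""
    data = json.dumps(build_request(prompt, **kwargs)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/images/generations",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style image responses put results in a "data" list
    return base64.b64decode(body["data"][0]["b64_json"])

# Usage (requires the model to be running):
#   png = generate_image("a dragon wearing a business suit, digital art")
#   open("dragon.png", "wb").write(png)
```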

2. Launch Open WebUI

This is the magic trick. Docker Model Runner has a built‑in launch command that knows exactly how to wire up Open WebUI against the local inference endpoint. Run:

docker model launch openwebui

That's it. Docker Model Runner will start the inference backend for the model you downloaded and automatically spin up the Open WebUI container, connecting them together. After a few moments, you'll see a URL—typically http://localhost:8080—that you can open in your browser.
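If you're scripting around the launch step, a small poll loop can tell you when the UI is actually reachable. A sketch, assuming the usual 8080 default (substitute whatever URL the launch command printed):

```python
# wait_for_webui.py -- sketch: polls a URL until it answers. 8080 is the
# port Open WebUI commonly binds, but use the address from your launch output.
import time
import urllib.error
import urllib.request

def is_up(url, timeout=2.0):
    """True if the URL answers with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def wait_for(url="http://localhost:8080", attempts=30, delay=2.0):
    """Poll until the UI responds or we give up."""
    for _ in range(attempts):
        if is_up(url):
            return True
        time.sleep(delay)
    return False
```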

3. Generate Images from the Chat Interface

Once Open WebUI loads, you'll see a familiar chat interface. To generate an image:

  1. Type your prompt in the chat box (e.g., "a dragon wearing a business suit, digital art").
  2. Open WebUI automatically detects the model's image‑generation capability and creates an image from your text.
  3. The result appears inline—no need to switch tabs or manage separate tools.

Every request goes directly to your local model. No data is sent to external servers. Your prompts remain private, and there are no credit limits.

Source: www.docker.com

4. Additional Tips: Switching Models and Monitoring

You can pull multiple models and switch between them. To see which models you have locally:

docker model list

To remove a model you no longer need:

docker model rm stable-diffusion

Docker Model Runner also supports other models beyond Stable Diffusion. Check Docker Hub for available DDUF artifacts. If you stop the Open WebUI container, you can restart it with the same docker model launch openwebui command—it will reuse the already pulled model.

Common Mistakes

Insufficient Memory

The most frequent issue is running out of RAM. If the model fails to load or you see OOM errors, close other applications, use a smaller model (some DDUF variants are 2–4 GB), or add more swap space. Always check your system memory usage with docker stats or your OS monitor.

GPU Driver Problems

If you have an NVIDIA GPU but the model runs on the CPU (very slow), make sure Docker can see the GPU. For Docker Desktop on Windows, enable the WSL2 backend and install a current NVIDIA driver on the host. On Linux, install the NVIDIA Container Toolkit (the older nvidia-docker2 package is deprecated) and restart the Docker daemon. Verify with docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi.

Model Not Pulled

Forgetting to pull the model before launching Open WebUI will cause an error. Run docker model pull stable-diffusion first. You can confirm the pull succeeded with docker model inspect.

Port Conflicts

If port 8080 is already in use, you'll see a bind error. The quick fix is to stop the conflicting process; alternatively, remap the port with a Docker Compose override (advanced) or pass a different port to the launch command if your version supports it.
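A plain TCP probe tells you whether anything is already listening before you launch; this Python sketch works on any platform (shell tools like lsof -i :8080 do the same job):

```python
# port_check.py -- sketch: tries to connect to the port; a successful
# connection means something is already listening there.
import socket

def port_in_use(port, host="127.0.0.1"):
    """True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# Usage:
#   if port_in_use(8080):
#       print("Port 8080 is taken -- stop the other service first.")
```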

Summary

You now have a fully local, private AI image generator running on your own hardware—no cloud, no credits, no privacy concerns. By combining Docker Model Runner's ability to pull and serve DDUF‑packaged models with Open WebUI's chatbot interface, you've created a seamless image‑creation experience that respects your data.

We covered pulling a model with docker model pull, launching Open WebUI with docker model launch openwebui, and generating images directly from the chat. We also highlighted common pitfalls like memory limits, GPU setup, and port conflicts.

This setup is ideal for developers, designers, and anyone who wants to experiment with AI image generation without vendor lock‑in. As Docker Model Runner evolves, expect more models and features; for now, you have a powerful tool right at your fingertips. Happy generating!
