InvokeAI Guide (2026): The Professional Stable Diffusion Studio for Artists
InvokeAI is the Stable Diffusion application that thinks like a creative tool, not a settings panel. The unified canvas combines painting with AI generation in a single workspace. The Workflow editor is a node-based pipeline builder. The Model Manager handles installations, license tracking, and updates. Multi-user authentication makes it shareable. And the Apache 2.0 license makes it commercial-friendly in ways A1111 (AGPL) and ComfyUI (GPL) are not.
For digital artists, illustrators, and design studios, InvokeAI is the most natural fit among open-source SD UIs. This guide covers everything: installation across platforms, the unified canvas, model management, the Workflow editor, multi-user setup, Flux / SD 3.5 / SDXL support, API integration, and where InvokeAI does or doesn't belong vs the alternatives.
Table of Contents
- What InvokeAI Is
- Why the Unified Canvas Matters
- InvokeAI vs A1111 / Forge / ComfyUI
- Apache 2.0 License Implications
- Hardware Requirements
- Installation: Official Installer
- Docker Deployment
- The Generation Tab
- The Unified Canvas
- The Workflow Editor
- Model Manager
- LoRAs and Embeddings
- ControlNet and IPAdapter
- Flux / SD 3.5 / SDXL Support
- Multi-User Setup
- API Integration
- Performance Tuning
- Production Studio Use Cases
- Troubleshooting
- FAQ
What InvokeAI Is {#what-it-is}
InvokeAI is an open-source Stable Diffusion application maintained by Invoke (the company behind the invoke.com hosted service) and the community. It began as a fork of the CompVis reference Stable Diffusion code and has evolved into the most polished artist-focused SD UI of 2026.
Core components:
- Generation tab — quick prompt-to-image with curated controls
- Unified canvas — Photoshop-like workspace with brushes, masks, layers, generate
- Workflow editor — node-based pipeline builder
- Model Manager — install, update, organize, license-track models
- Gallery — browse, organize, and tag previous outputs
- Multi-user — auth, queues, per-user galleries
Project: github.com/invoke-ai/InvokeAI. License: Apache 2.0.
Why the Unified Canvas Matters {#unified-canvas}
Traditional SD UI workflow:
- Generate base image in txt2img
- Open in Photoshop / GIMP for compositing
- Bring back to img2img / inpaint
- Repeat
Each round-trip loses canvas state, layer information, and context. The unified canvas keeps everything in one workspace:
- Paint base shapes / colors
- Generate over your sketch
- Mask regions and inpaint
- Paint final touches
- All on the same canvas, with full undo history
For artists who use Stable Diffusion as one tool in a larger creative process, this is a fundamentally different (and better) workflow than A1111's tabbed forms.
InvokeAI vs A1111 / Forge / ComfyUI {#comparison}
| Property | InvokeAI | A1111 | Forge | ComfyUI |
|---|---|---|---|---|
| UX style | Canvas + workflow | Tabbed | Tabbed | Node graph |
| License | Apache 2.0 | AGPL | AGPL | GPL |
| Multi-user | Native | Single | Single | None |
| Model manager | Polished | Basic | Basic | None |
| Workflow editor | Node-based | Scripts | Scripts | Native |
| Unified canvas | Yes | No | No | No |
| Speed (RTX 4090 SDXL) | 4-5 sec | 4 sec | 3 sec | 3 sec |
| Best for | Artists, studios | Power users | A1111 + Flux | Workflow architects |
For a single-artist workstation: InvokeAI for canvas-driven work, A1111/Forge for prompt-driven work. For studios: InvokeAI for artist seats, ComfyUI for production.
Apache 2.0 License Implications {#license}
InvokeAI's Apache 2.0 license is the most commercial-friendly among major SD UIs:
- ✅ Use in commercial / proprietary products
- ✅ Modify and redistribute (with attribution)
- ✅ Bundle into closed-source applications
- ✅ Sell as a paid service
- ✅ No copyleft requirement to release derivative source
A1111 and Forge use AGPL, so derivative works must be open-sourced; ComfyUI uses GPL, which carries the same copyleft obligation. For commercial deployment without a source-release obligation, InvokeAI is the right choice.
Hardware Requirements {#requirements}
| GPU VRAM | Capability |
|---|---|
| 6 GB | SDXL via dynamic offload |
| 12 GB | SDXL fast, Flux Dev FP8 |
| 16 GB | SD 3.5 Large, smooth Flux |
| 24 GB | All models, large LoRA stacks |
System RAM 16 GB minimum, 32 GB recommended. Disk 100 GB+ for full studio setup with multiple base models.
Installation: Official Installer {#installation}
```bash
# Linux / Mac
curl https://raw.githubusercontent.com/invoke-ai/InvokeAI/main/installer/install.sh | bash

# Windows: download installer from invoke-ai.github.io and run
```
The installer handles the Python venv, PyTorch with CUDA / ROCm / MPS detection, dependencies, and Model Manager initialization. The first run downloads ~5 GB of base SDXL models. Total install time: 15-30 minutes.
After install:
```bash
invokeai-web
# Browser opens at http://localhost:9090
```
Docker Deployment {#docker}
```bash
docker run -d --gpus all \
  --name invokeai \
  -p 9090:9090 \
  -v ~/invokeai:/invokeai \
  -e INVOKEAI_ROOT=/invokeai \
  ghcr.io/invoke-ai/invokeai:latest
```
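For longer-lived single-host deployments, the same container can be expressed as a Compose file. This is a sketch, not an official template: image, port, volume, and environment variable are taken from the `docker run` command above, and the GPU reservation assumes the NVIDIA Container Toolkit is installed.

```yaml
# docker-compose.yml sketch (assumes NVIDIA Container Toolkit for GPU access)
services:
  invokeai:
    image: ghcr.io/invoke-ai/invokeai:latest
    ports:
      - "9090:9090"
    volumes:
      - ~/invokeai:/invokeai
    environment:
      - INVOKEAI_ROOT=/invokeai
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Start it with `docker compose up -d`; the volume keeps models and the gallery outside the container so image upgrades don't lose state.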
For multi-user / Kubernetes deployments, the official Helm chart provides StatefulSet + PVC + Ingress. See InvokeAI docs for Pro features (OAuth / SSO).
The Generation Tab {#generation}
For quick image generation:
- Pick base model from dropdown
- Type positive + negative prompt
- Set image size (presets for SDXL aspect ratios)
- Optional: add LoRA(s) via the LoRA picker
- Click Invoke
Generated images appear in the right panel and persist in the gallery. Re-use prompts via the gallery → "Use this image as basis" or right-click → Recall parameters.
For batch generation, set Image Number to 4 or 8; Invoke runs the batch sequentially with the same prompt.
The Unified Canvas {#canvas}
Open Canvas mode. The workspace:
- Layers panel — each generation / mask / paint as a layer
- Brush tool — traditional painting
- Mask tool — for inpainting regions
- Generation tools — Outpaint, Inpaint, Generate (txt2img on canvas region)
- Bounding box — define what region to generate
Workflow:
- Drop a base image (or generate one to start)
- Position the bounding box over the area to regenerate
- Mask region with the mask tool
- Type prompt, click Generate
- Result fills the masked region; remaining canvas unchanged
- Continue: paint adjustments, mask another region, regenerate, etc.
For outpainting: drag the bounding box outside the image bounds and Generate fills the new area while matching the existing edge.
This canvas-centric workflow is what makes InvokeAI distinctive.
The Workflow Editor {#workflow-editor}
Switch to Workflows tab. The node graph editor:
- Drag nodes from the left panel (Loaders, Conditioning, Latents, Image, Math, Custom)
- Wire outputs to inputs
- Save as JSON for reuse / sharing
- Trigger from Generation tab via "Use Workflow"
Sample workflow: Base SDXL → Refiner SDXL → ESRGAN 4x upscale → ADetailer face fix → Save. Build once, run repeatedly with different prompts.
The community shares workflows on the InvokeAI Discord and the official examples gallery. Drop a JSON to import.
Compared with ComfyUI workflows, InvokeAI workflows are conceptually similar but integrate more tightly with the canvas and the Generation tab.
Model Manager {#model-manager}
The Model Manager tab handles:
- Browse and install from Hugging Face / Civitai (built-in search)
- Track license per model (commercial / non-commercial / specific terms)
- Auto-convert formats (.ckpt → .safetensors)
- Organize by base architecture (SD 1.5, SDXL, SD 3.5, Flux)
- Enable / disable without deleting
- Per-user permissions in multi-user mode
Adding a model:
- Search HF / Civitai → click Install
- Or drop a URL / local path
- InvokeAI downloads, validates, registers in the Generation tab dropdown
For LoRAs / VAEs / ControlNet / Embeddings, the Model Manager tracks all of them in unified storage.
LoRAs and Embeddings {#loras}
In Generation tab:
- LoRA dropdown — pick one or more
- Per-LoRA strength slider (0.0-2.0, default 1.0)
- Add / remove LoRAs via the + button
Trigger words: include them in the prompt manually (InvokeAI doesn't auto-inject them).
Civitai LoRAs work via Model Manager → Add → Civitai URL. Most SDXL / SD 1.5 LoRAs work unchanged.
ControlNet and IPAdapter {#controlnet}
Generation tab → Advanced → ControlNet section:
- Drop reference image
- Pick preprocessor (canny, depth, openpose, etc.)
- Pick ControlNet model (auto-suggested based on base model architecture)
- Set Weight (0.0-2.0)
Multiple ControlNet units stack. IPAdapter is in the same panel for image-prompt conditioning.
In Canvas mode, the bounding box / mask can serve as ControlNet input automatically (e.g., generate within painted region using the painted shapes as canny conditioning).
Flux / SD 3.5 / SDXL Support {#models}
InvokeAI 5.x supports:
| Model | Status | VRAM (FP8 / BF16) |
|---|---|---|
| Flux Dev | ✅ | 12 / 24 GB |
| Flux Schnell | ✅ | 12 / 24 GB |
| SD 3.5 Large | ✅ | 16 GB |
| SD 3.5 Medium | ✅ | 10 GB |
| SDXL Base | ✅ | 8 GB |
| SDXL Lightning / Hyper | ✅ | 8 GB |
| SD 1.5 | ✅ | 4 GB |
| Pony Diffusion v6 XL | ✅ (custom checkpoint) | 8 GB |
Switch models via the dropdown — canvas state persists across model switches.
Multi-User Setup {#multi-user}
For shared / studio deployments:
```bash
invokeai-configure --root /path/to/invokeai --multi-user
```
This enables:
- User registration / login
- Per-user galleries (private)
- Shared model library (read-only for users, writable for admins)
- Queue isolation (each user has their own generation queue)
- Optional OAuth / SSO via Pro features
For full RBAC and audit logging, the InvokeAI Pro tier (paid) is the right path; the open-source version supports basic multi-user.
API Integration {#api}
InvokeAI exposes a REST API at /api/v1/... and OpenAPI docs at /docs. Endpoints:
- POST /api/v1/queue/.../enqueue_batch — queue a generation
- GET /api/v1/images — list generated images
- POST /api/v1/workflows — submit workflow
- GET /api/v1/models — list available models
For OpenAI-compatible image generation (/v1/images/generations), front InvokeAI with LocalAI, which can wrap any backend.
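As a minimal sketch, the endpoints above can be exercised with curl. The version-check path /api/v1/app/version and the default port 9090 are assumptions based on a typical local install; confirm the exact routes against the live OpenAPI docs at /docs.

```shell
#!/bin/sh
# Probe a local InvokeAI instance, then list its registered models.
# Assumption: default port 9090 and the /api/v1/app/version health path;
# verify against http://localhost:9090/docs on your install.
HOST="${INVOKEAI_HOST:-http://localhost:9090}"

if curl -sf "$HOST/api/v1/app/version" >/dev/null 2>&1; then
  echo "InvokeAI reachable at $HOST"
  # Returns JSON; pipe through jq for readable output if installed.
  curl -s "$HOST/api/v1/models"
else
  echo "InvokeAI not reachable at $HOST"
fi
```

Set INVOKEAI_HOST to point the script at a remote or Docker-hosted instance.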
Performance Tuning {#tuning}
Default settings work well. Advanced tuning in invokeai.yaml:
```yaml
InvokeAI:
  Generation:
    sequential_guidance: false  # parallel CFG
    attention_type: torch-sdp   # use PyTorch SDPA
    precision: bfloat16         # vs float16
    device: cuda                # cuda / mps / cpu / rocm
```
For tight VRAM, set max_loaded_models: 1 to keep only one model in memory.
Production Studio Use Cases {#studio}
Real studios use InvokeAI for:
- Concept art teams — unified canvas + multi-user shared models
- Marketing studios — workflow templates for brand-consistent variants
- Game studios — texture / asset generation with custom model managers
- Photography post-processing — canvas-based selective AI enhancement
- Educational programs — multi-student access on one server
For VFX / film production with complex pipelines, ComfyUI is more common. InvokeAI is the right answer for traditional digital art workflows.
Troubleshooting {#troubleshooting}
| Symptom | Cause | Fix |
|---|---|---|
| Slow first generation | Models load lazily on first use | Wait it out; subsequent generations are fast |
| Canvas tools unresponsive | Browser extensions blocking | Disable ad blockers on InvokeAI domain |
| OOM with Flux | VRAM tight | Switch to FP8 model |
| Workflow node missing | Custom node not installed | Install via Model Manager → Workflows |
| Multi-user login fails | Auth backend not configured | Run invokeai-configure with --multi-user |
| AMD GPU slow | Vulkan path | Use ROCm PyTorch — see AMD ROCm guide |
FAQ {#faq}
See answers to common InvokeAI questions below.
Sources: InvokeAI GitHub | Invoke hosted service | InvokeAI documentation | Internal benchmarks on RTX 4090 and RX 7900 XTX.