
Fooocus Guide (2026): The Easiest Stable Diffusion UI for Beautiful Images

May 1, 2026
22 min read
LocalAimaster Research Team

Fooocus is the simplest path from "I want beautiful images" to actually having them. lllyasviel built it as the antidote to the parameter overload of A1111 and ComfyUI: opinionated SDXL defaults, automatic prompt expansion, curated style presets, and one-click Speed / Quality / Extreme Speed performance modes. Type a short prompt, click Generate, get a polished image. No sampler dropdowns, no CFG sliders, no negative prompt required.

For everyone who is not a Stable Diffusion power user — photographers exploring AI, content creators on deadlines, designers prototyping concepts, hobbyists generating wallpaper — Fooocus is the right starting point. This guide covers everything: installation, the simplified UI, performance presets, style system, image prompt and inpainting, LoRA / checkpoint management, the Fooocus-MRE community fork, and where Fooocus does or doesn't fit.

Table of Contents

  1. What Fooocus Is
  2. Fooocus vs A1111 / Forge / ComfyUI
  3. Hardware Requirements
  4. Installation: Windows, Linux, Mac
  5. The Simplified UI Tour
  6. Performance Presets (Speed, Quality, Extreme Speed)
  7. Aspect Ratio Picker
  8. Style System
  9. Prompt Expansion
  10. Image Prompt and Image-to-Image
  11. Inpainting
  12. Using Custom Checkpoints (Pony, Illustrious)
  13. Adding LoRAs
  14. Advanced Settings
  15. Fooocus-MRE Community Fork
  16. API Mode
  17. Tuning by GPU
  18. When Fooocus Is Not the Right Tool
  19. Troubleshooting
  20. FAQ


What Fooocus Is {#what-it-is}

Fooocus (lllyasviel/Fooocus) is a Python application built on top of the same backend that powers SD Forge — the dynamic UNet patcher, ControlNet integration, and SDXL pipeline. The difference is the user interface: where A1111 and Forge expose every knob, Fooocus exposes ~5 high-level controls and bakes good defaults into everything else.

The design thesis: most users want beautiful images, not parameter exploration. Default to "good enough that you don't need to tune" and provide escape hatches under an Advanced panel for the rest.

Project: github.com/lllyasviel/Fooocus. Maintenance status: lllyasviel paused active development mid-2024; the Fooocus-MRE fork (mashb1t/Fooocus) carries forward in 2026.


Fooocus vs A1111 / Forge / ComfyUI {#comparison}

| Property | Fooocus | A1111 | Forge | ComfyUI |
|---|---|---|---|---|
| UX style | Curated, simple | Tabbed, configurable | Tabbed, configurable | Node graph |
| Default-to-good | Yes | No | No | No |
| Sampler picker | Hidden (per style) | Visible | Visible | Visible |
| Prompt expansion | Built-in | Extension | Extension | Custom node |
| SDXL native | Yes | Yes | Yes | Yes |
| Flux native | Limited (MRE fork) | Limited | Yes | Yes |
| ControlNet UX | Simplified | Full extension | Built-in | Per-node |
| Inpainting | Built-in (good) | Full | Full | Per-node |
| Best for | Beginners, content creators, fast iteration | Power users with extension needs | A1111 users wanting speed + Flux | Workflow architects |

For photographers/artists who want to make beautiful pictures rather than learn diffusion theory, Fooocus is the rational choice.


Hardware Requirements {#requirements}

| GPU VRAM | Capability |
|---|---|
| 4 GB | Possible with --always-low-vram (slow) |
| 6-8 GB | SDXL via dynamic offload (~25 sec/image) |
| 12 GB | SDXL comfortable (~10 sec) |
| 16 GB | SDXL fast (~6 sec) |
| 24 GB | All workflows fast (~3-5 sec) |

System RAM: 16 GB minimum. Disk: ~30 GB for the SDXL base model, a few LoRAs, and outputs. Apple Silicon is supported via MPS.



Installation: Windows, Linux, Mac {#installation}

Windows (one-click)

# Download and unzip Fooocus_win64.zip from the releases page
# Run run.bat

The zip includes Python and dependencies. Total size ~5 GB unzipped.

Linux

git clone https://github.com/lllyasviel/Fooocus
cd Fooocus
python3.10 -m venv venv && source venv/bin/activate
pip install -r requirements_versions.txt
python launch.py

For the MRE fork: replace the clone URL with https://github.com/mashb1t/Fooocus.

Mac (Apple Silicon)

git clone https://github.com/lllyasviel/Fooocus
cd Fooocus
./run_anime.sh   # or run_realistic.sh — Mac-friendly entry points

MPS is auto-detected. Performance lags NVIDIA — Draw Things (MLX-native) is faster on Mac for image gen.


The Simplified UI Tour {#ui-tour}

After launch, browse to http://localhost:7865. The UI:

  • Top bar: prompt input + Generate button
  • Left: previous outputs gallery
  • Right (optional): Advanced panel toggle
  • Bottom dropdowns: Performance, Aspect Ratio, Style, Image Number

Type a prompt, click Generate. That is the entire required workflow.


Performance Presets (Speed, Quality, Extreme Speed) {#performance-presets}

| Preset | Steps | Sampler | Speed (RTX 4090) | Use |
|---|---|---|---|---|
| Speed (default) | 30 | DPM++ 2M Karras | ~5 sec | Daily use |
| Quality | 60 | DPM++ 2M Karras | ~10 sec | Final outputs |
| Extreme Speed | 8 | LCM | ~1.5 sec | Rapid iteration |
| Lightning | 4 | LCM Karras | ~1 sec | Real-time |
| Hyper SD | 4 | DPM++ 2M Hyper | ~1 sec | New preset in MRE |

Speed is the right default. Switch to Quality for final renders, Extreme Speed for brainstorming dozens of variations.
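If you script Fooocus (for example through the community API covered later), the preset table reduces to plain data. A minimal sketch; the dictionary below just restates the table above, and the timings are the RTX 4090 estimates, not guarantees:

```python
# The performance presets as plain data (numbers from the table above;
# timings are RTX 4090 estimates, not guarantees).
PRESETS = {
    "Speed":         {"steps": 30, "sampler": "DPM++ 2M Karras", "sec_per_image": 5.0},
    "Quality":       {"steps": 60, "sampler": "DPM++ 2M Karras", "sec_per_image": 10.0},
    "Extreme Speed": {"steps": 8,  "sampler": "LCM",             "sec_per_image": 1.5},
}

def batch_time(preset: str, n_images: int) -> float:
    """Rough wall-clock estimate for a batch, ignoring model load time."""
    return PRESETS[preset]["sec_per_image"] * n_images

print(batch_time("Extreme Speed", 20))  # → 30.0
```

Twenty Extreme Speed variants cost about as much time as three Quality renders, which is why the brainstorm-then-refine loop works so well.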


Aspect Ratio Picker {#aspect-ratio}

Fooocus exposes ~12 SDXL-trained aspect ratios:

  • 1024 × 1024 (square, default)
  • 1152 × 896, 896 × 1152
  • 1216 × 832, 832 × 1216
  • 1344 × 768, 768 × 1344
  • 1536 × 640, 640 × 1536
  • plus a few more

These match the resolutions SDXL was trained on. Off-spec ratios produce worse images. The UI nudges you toward the trained ratios; do not fight this.
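If you are targeting a specific output shape, the useful move is to snap it to the nearest trained ratio rather than fight the picker. A small sketch of that logic; the resolution list mirrors the picker options above:

```python
# Snap an arbitrary target size to the nearest SDXL-trained resolution.
# The list mirrors the ratios the Fooocus picker exposes (subset shown above).
SDXL_RESOLUTIONS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def snap_to_trained(width: int, height: int) -> tuple[int, int]:
    """Pick the trained resolution whose aspect ratio is closest to width/height."""
    target = width / height
    return min(SDXL_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - target))

print(snap_to_trained(1920, 1080))  # 16:9 request → (1344, 768)
```

Generate at the snapped resolution, then upscale or crop to the exact size you need.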


Style System {#styles}

The Style dropdown contains 100+ pre-baked combinations: prompt prefixes/suffixes + recommended sampler/scheduler + style-appropriate negatives. Top picks:

  • Fooocus V2 — default refined Fooocus look
  • Fooocus Photograph — photorealistic
  • Fooocus Cinematic — film-style
  • Fooocus Anime — clean 2D
  • MRE Manga — manga panel style
  • MRE Bad Dream — surrealist
  • SAI Anime / SAI Cinematic / SAI Fantasy Art — Stability's presets
  • Photo Long Exposure / Photo Macro / Photo Polaroid — specific photo styles

Stack multiple styles by ticking several checkboxes. Each style appends its prompt fragments and parameters; combinations can reinforce each other or clash, depending on how well the styles align.


Prompt Expansion {#prompt-expansion}

Fooocus uses a small GPT-2 model fine-tuned on Stable Diffusion prompts to automatically expand short user inputs.

Input: "a samurai"
Expanded: "a samurai, cinematic photo, ultra-detailed, sharp focus, 35mm film, masterpiece, beautiful lighting, depth of field"

The expansion is non-deterministic and changes per generation, providing built-in variety. Disable in Advanced → Performance → Disable Prompt Expansion if you want raw control.

For users coming from "negative prompt soup" workflows, prompt expansion delivers most of that value with zero effort.


Image Prompt and Image-to-Image {#image-prompt}

The Input Image panel (top right) accepts a reference image. Modes:

  • Image Prompt — match the reference's style/composition (similar to IPAdapter)
  • PyraCanny — sketch / line art conditioning
  • CPDS — depth-aware composition matching
  • FaceSwap — copy a face from reference
  • Inpaint — mask and regenerate part of the reference

You can stack up to 4 image prompts simultaneously, each with its own Weight and Stop At sliders.

Under the hood: Fooocus dispatches to ControlNet / IPAdapter as appropriate. The UI hides the model picker and just offers the high-level capability.


Inpainting {#inpainting}

In Input Image → Inpaint:

  1. Drop reference image
  2. Paint mask with the brush tool
  3. Pick mode:
    • Improve Detail — preserve content, enhance quality
    • Modify Content — replace with prompt
  4. Optional: type prompt for the masked area
  5. Generate

Fooocus auto-tunes denoising strength, sampler, and outpainting expansion based on the chosen mode. For deep inpainting workflows (multi-step refinement, dedicated inpaint checkpoints), use A1111 or ComfyUI.


Using Custom Checkpoints (Pony, Illustrious) {#custom-checkpoints}

Place SDXL .safetensors checkpoints in:

Fooocus/models/checkpoints/

This covers Pony Diffusion V6 XL, Illustrious XL, RealVisXL, and other SDXL-family models. Fooocus auto-detects them on the next launch; pick one from the Base Model dropdown in the Advanced panel.

For Pony / Illustrious specifically, you may need to disable Fooocus prompt expansion (it adds tokens these anime-focused models don't respond to well); the toggle is in Advanced → Performance.


Adding LoRAs {#loras}

Place .safetensors LoRAs in:

Fooocus/models/loras/

Advanced panel → LoRAs section: pick up to 5 with individual weights (default 0.5). Stacking 3+ strong LoRAs typically over-conditions the output; stick to 1-2 with strength 0.6-1.0 unless you know what you're doing.

For trigger-word LoRAs (most Civitai LoRAs), include the trigger word in your prompt — Fooocus does not auto-inject.
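A quick way to confirm files landed where Fooocus expects them, assuming the default folder layout (adjust FOOOCUS_ROOT to wherever you cloned or unzipped Fooocus):

```python
from pathlib import Path

# Sketch: list the checkpoints/LoRAs Fooocus will see on next launch.
# FOOOCUS_ROOT assumes you run this next to your Fooocus folder.
FOOOCUS_ROOT = Path("Fooocus")

def list_models(kind: str) -> list[str]:
    """kind is 'checkpoints' or 'loras'; returns sorted .safetensors names."""
    folder = FOOOCUS_ROOT / "models" / kind
    if not folder.is_dir():
        return []
    return sorted(p.name for p in folder.glob("*.safetensors"))

print(list_models("loras"))
```

If a freshly downloaded LoRA doesn't show up here, it won't show up in the UI either; check the extension and the folder before suspecting Fooocus.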


Advanced Settings {#advanced}

The Advanced panel exposes everything Fooocus normally hides:

  • Base Model and 5 LoRA slots
  • Refiner Model + Refiner Switch (Base/Refiner crossover step)
  • Sampler / Scheduler (overrides preset)
  • Sharpness / Guidance Scale (CFG)
  • ADM Guidance (positive/negative scales)
  • Sampler-specific parameters
  • Prompt expansion enable/disable
  • Custom negative prompt

Most users should never touch this. When you do, you typically have a specific reason — chasing a particular aesthetic that the defaults don't hit.


Fooocus-MRE Community Fork {#mre}

mashb1t's fork (https://github.com/mashb1t/Fooocus) is the actively-developed branch in 2026. Adds:

  • Hyper-SD presets (4-step)
  • Lightning presets
  • More style options
  • Enhanced detailers (face / hand fix)
  • Flux experimental support
  • SD 3.5 support
  • API improvements

Install the same way as original Fooocus. UI is essentially unchanged.


API Mode {#api}

Fooocus has a community API project: mrhan1993/Fooocus-API provides REST endpoints wrapping Fooocus generation:

git clone https://github.com/mrhan1993/Fooocus-API
cd Fooocus-API
pip install -r requirements.txt
python main.py

Endpoints: POST /v1/generation/text-to-image, POST /v1/generation/image-prompt, POST /v1/generation/inpaint-outpaint. OpenAPI docs at /docs.
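A minimal client sketch against a local Fooocus-API instance. The field names and the default port 8888 follow the project's README; treat them as assumptions and confirm against the /docs page of your install:

```python
import json
from urllib import request

# Build and send a Fooocus-API text-to-image request. Field names and the
# default port 8888 follow the project's README; confirm against your /docs.
def build_payload(prompt: str) -> dict:
    return {
        "prompt": prompt,
        "performance_selection": "Speed",       # Speed / Quality / Extreme Speed
        "aspect_ratios_selection": "1152*896",  # one of the trained ratios
        "image_number": 1,
        "async_process": False,
    }

def generate(prompt: str, base_url: str = "http://127.0.0.1:8888") -> bytes:
    req = request.Request(
        f"{base_url}/v1/generation/text-to-image",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # requires Fooocus-API running locally
        return resp.read()

print(json.dumps(build_payload("a lighthouse at dusk"), indent=2))
```

With async_process set to true the endpoint returns a job ID instead of blocking, which is what you want for batch scripting.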

For an OpenAI-compatible image generation endpoint (/v1/images/generations), use LocalAI, which can wrap an SDXL backend.


Tuning by GPU {#tuning}

RTX 3060 12 GB

Default Fooocus works. Performance: ~12-15 sec/image at Speed preset. Use Extreme Speed for iteration.

RTX 4090 24 GB

Default Fooocus works flawlessly. Generate 4 images at once via Image Number = 4 — still ~5 sec per image (parallel batching).

RTX 5090 32 GB

Same as 4090 but ~2x faster.

Tight VRAM (4-6 GB)

python launch.py --always-low-vram

SDXL still works, ~25-40 sec/image. Use Extreme Speed preset for ~10 sec.

Apple M4 Max

Default works via MPS. Performance: ~22-28 sec at Speed. For better Mac speed, switch to Draw Things (MLX-native).

AMD RX 7900 XTX

Use the lshqqytiger AMD-friendly fork, or install the ROCm build of PyTorch first. Performance: ~7-9 sec at Speed.


When Fooocus Is Not the Right Tool {#not-right}

  • Need Flux Dev / SD 3.5 with full feature support — use Forge or ComfyUI.
  • Need fine-grained workflow control (multi-stage refinement, regional prompting, custom node graphs) — ComfyUI.
  • Need specific A1111 extensions (Roop, advanced training, custom scripts) — A1111.
  • Need video generation (Wan, HunyuanVideo) — ComfyUI.
  • Need API-first deployment with full SDXL feature set — A1111 with --api or LocalAI.
  • Want to learn the diffusion stack — start with A1111 or ComfyUI; Fooocus hides too much.

For everyone else: Fooocus.


Troubleshooting {#troubleshooting}

| Symptom | Cause | Fix |
|---|---|---|
| Black images | NaN VAE | Disable half-VAE in Advanced |
| OOM | VRAM too tight | Add --always-low-vram |
| Style not applying | Wrong checkpoint | Some styles assume base SDXL; switch checkpoint |
| LoRA has no effect | Trigger word missing | Add trigger word to prompt |
| Slow first generation | Lazy load + prompt expansion model download | Subsequent runs are faster |
| Pony / Illustrious looks wrong | Prompt expansion adds incompatible tokens | Disable prompt expansion |
| Mac: incompatible PyTorch | Wrong Python version | Use Python 3.10 only |

FAQ {#faq}

See answers to common Fooocus questions below.


Sources: Fooocus GitHub (original) | Fooocus-MRE | Fooocus-API | Internal benchmarks RTX 3060, 4090, RX 7900 XTX, M4 Max.




Published: May 1, 2026 · Last updated: May 1, 2026


Written by Pattanaik Ramswarup, Creator of Local AI Master.