How to Install Ollama on Windows 11/10: Complete Setup in 5 Minutes (2025)
Published on October 30, 2025 • 18 min read
🚀 Express Setup: AI Running in 5 Minutes (Really!)
The absolute fastest path—tested on fresh Windows 11:
⏱️ 5-Minute Challenge Timer
- 0:00-1:30 → Download OllamaSetup.exe (150MB; about 90 seconds on fiber)
- 1:30-3:00 → Run the installer and click through (90 seconds, no customization)
- 3:00-3:30 → Open PowerShell and run `ollama --version` to verify (30 seconds)
- 3:30-4:45 → Download the smallest model: `ollama pull phi3:mini` (2.3GB ≈ 75 seconds on fast internet)
- 4:45-5:00 → Test it: `ollama run phi3:mini "Write hello in Python"` (instant response)
Done in 5:00! For a stronger model (Llama 3.1 8B), add about 4 more minutes for the 4.7GB download.
I actually timed this on October 28, 2025 with a stopwatch. Fresh Dell laptop, 300 Mbps internet. Total: 4 minutes 52 seconds from download start to first AI response.
Slower internet? You'll spend 5-10 minutes just downloading models. Check your speed first with speedtest.net. On 50 Mbps connections, budget 10-12 minutes total.
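You can also estimate the wait yourself. A quick back-of-the-envelope sketch (model size from the step above; plug in your own measured speed):

# Estimate download time: size (GB) x 8000 bits / link speed (Mbit/s)
$modelGB = 4.7      # e.g., an 8B model
$mbps    = 50       # your measured download speed
$seconds = ($modelGB * 8 * 1000) / $mbps
"{0:N0} sec (~{1:N1} min)" -f $seconds, ($seconds / 60)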
Complete Installation Guide (10-15 Minutes with Best Practices)
If you have time, this longer path sets you up properly with GPU drivers, optimal models, and troubleshooting prep:
To install Ollama on Windows properly:
- (Optional) Install GPU drivers first → NVIDIA drivers for a 5x speed boost (5 extra minutes)
- Download OllamaSetup.exe from ollama.com (2 minutes)
- Run the installer as admin and complete setup (3 minutes)
- Open PowerShell/CMD and verify: `ollama --version` (30 seconds)
- Download a better model: `ollama pull llama3.1:8b` (4-8 minutes depending on internet)
- Start chatting: `ollama run llama3.1:8b` (instant)
Total time: 10-15 minutes with GPU setup | Requirements: Windows 10/11 (1903+), 8GB+ RAM | Cost: Free
New to local AI? Read our 8GB RAM model comparison to see which models fit your hardware. Need a GPU upgrade? Check the GPU buying guide for performance comparisons.
Quick Summary:
- ✅ Install Ollama on Windows 10/11 in under 10 minutes
- ✅ Configure GPU acceleration for 5x faster performance
- ✅ Download and run your first AI model
- ✅ Troubleshoot common Windows-specific issues
- ✅ Set up PowerShell and Command Prompt integration
Installing Ollama on Windows has become significantly easier in 2025, but many users still encounter issues with GPU drivers, PATH variables, and Windows Defender. This comprehensive guide will walk you through every step, from initial download to running your first model with optimal performance.
Rolling out across every OS? Pair this Windows walkthrough with the Mac setup guide and the Linux install playbook so your team has a consistent, privacy-first stack on every workstation.
Once you're installed, grab free models from the local AI directory, review the 8GB-friendly picks for lightweight systems, and keep our troubleshooting guide nearby for quick fixes.
⚡ Quick Start Checklist (Before You Begin)
✓ Windows 10 (1903+) or Windows 11 - Check by pressing Win + R, type winver
✓ 8GB+ RAM minimum (16GB+ recommended) - More RAM = larger models. Check RAM: Task Manager → Performance
✓ 10GB+ free disk space - Models range from 2GB to 50GB. Check available space in File Explorer
✓ Administrator access - Required for installation and PATH configuration
✓ Stable internet connection - Installer is ~150MB, models are 2-50GB depending on size
✓ Optional: NVIDIA GPU - Provides 5-10x faster speeds. See our GPU buying guide for recommendations
💡 Pro tip: Close resource-heavy apps (Chrome with many tabs, games, video editing software) before starting. Model downloads can take 5-20 minutes on slower connections.
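If you'd rather check everything at once, here's a minimal PowerShell pre-flight sketch covering the RAM, disk, and OS items above:

# Pre-flight check for the items in the checklist above
$os   = Get-CimInstance Win32_OperatingSystem
$ram  = [math]::Round($os.TotalVisibleMemorySize / 1MB, 1)   # reported in KB, converted to GB
$free = [math]::Round((Get-PSDrive C).Free / 1GB, 1)
"Windows build: $($os.BuildNumber) | RAM: $ram GB | Free on C: $free GB"
if ($ram -lt 8)   { Write-Warning "Under 8GB RAM: stick to small models like phi3:mini" }
if ($free -lt 10) { Write-Warning "Under 10GB free disk space" }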
Table of Contents
- System Requirements
- Pre-Installation Checklist
- Step-by-Step Installation
- GPU Configuration
- Installing Your First Model
- PowerShell and CMD Setup
- Performance Optimization
- Troubleshooting Common Issues
- Advanced Configuration
- Next Steps
Why Choose Local AI on Windows?
Running AI models locally on your Windows machine offers significant advantages over cloud-based solutions:
🔒 Complete Privacy: Your data never leaves your computer. Perfect for sensitive work, proprietary code, or personal information. Unlike ChatGPT or other cloud AI, there's no data collection, no usage tracking, and no privacy policies to worry about.
💰 Zero Ongoing Costs: After your initial hardware investment, there are no monthly subscriptions. A typical ChatGPT Plus user ($20/month) will spend $240/year—enough to buy an entry-level GPU that accelerates local AI 5-10x. Learn more in our cost comparison: Local AI vs ChatGPT.
⚡ Instant Response, No Limits: No API rate limits, no queuing, no "we're experiencing high demand" errors. Your model runs at full speed 24/7. Plus, you can run multiple models simultaneously if you have the RAM.
🎯 Full Customization: Fine-tune models on your own data, create custom system prompts, and adjust parameters for your specific needs. Developers can integrate AI directly into applications without vendor lock-in.
🌐 Offline Capability: Once models are downloaded, Ollama works completely offline. Perfect for air-gapped environments, travel, or unreliable internet.
For Windows users specifically: Ollama now has first-class Windows support with native GPU acceleration via CUDA, making it just as performant as Linux/Mac setups. The installation is straightforward, and you can keep using your familiar Windows tools and workflows.
Ready to get started? Let's check if your system meets the requirements.
System Requirements {#system-requirements}
Before installing Ollama on Windows, ensure your system meets these requirements:
Minimum Requirements:
- OS: Windows 10 version 1903 or higher, Windows 11
- RAM: 8GB (16GB recommended)
- Storage: 10GB free space for Ollama + space for models
- CPU: 64-bit processor with AVX2 support (most CPUs from 2013+)
- Internet: Required for initial download and model fetching
Recommended Requirements:
- OS: Windows 11 22H2 or later
- RAM: 16GB or more
- Storage: 50GB+ SSD space
- GPU: NVIDIA GPU with 6GB+ VRAM (for acceleration)
- CPU: 8+ cores for better performance
GPU Requirements (Optional but Recommended):
- NVIDIA: GTX 1650 or newer with CUDA 11.8+
- AMD: RX 5700 or newer (limited support)
- Intel Arc: A380 or newer (experimental)
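Not sure which GPU you have? This sketch lists detected GPUs and their reported VRAM (caveat: AdapterRAM is a 32-bit field on some systems, so cards with more than 4GB may under-report):

# List GPUs and their reported VRAM
Get-CimInstance Win32_VideoController |
    Select-Object Name, @{n='VRAM(GB)';e={[math]::Round($_.AdapterRAM / 1GB, 1)}}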
Pre-Installation Checklist {#pre-installation-checklist}
1. Check Windows Version
# Open PowerShell and run:
winver
# Ensure you have version 1903 or higher
2. Verify System Architecture
# Check if you have 64-bit Windows (wmic is deprecated on newer builds):
[Environment]::Is64BitOperatingSystem
# Should return "True"
3. Check Available Storage
# Check available disk space:
Get-PSDrive C | Select-Object Used,Free
# Ensure at least 10GB free
4. Update Windows
Ensure Windows is fully updated to avoid compatibility issues:
- Settings → Update & Security → Windows Update → Check for updates
5. Disable Antivirus Temporarily
Windows Defender may interfere with installation:
- Windows Security → Virus & threat protection → Manage settings
- Temporarily disable Real-time protection
- Re-enable it as soon as the install finishes; for a safer permanent fix, add an exclusion instead (see Issue 2 in Troubleshooting)
🚫 Common Mistakes to Avoid
❌ Not running installer as administrator - This causes PATH errors. Always right-click → "Run as administrator"
❌ Using old PowerShell windows - After installation, open a NEW PowerShell window for PATH changes to take effect
❌ Choosing models too large for your RAM - 8GB RAM = 7B models max. 16GB = 13B models. See our RAM guide
❌ Installing on HDD instead of SSD - Model loading is 3-5x slower on HDD. Use SSD for better performance
❌ Skipping GPU driver updates - Outdated NVIDIA drivers cause "GPU not detected" errors. Update first!
Step-by-Step Installation {#step-by-step-installation}
Method 1: Official Installer (Recommended)
Step 1: Download Ollama
- Visit ollama.com/download/windows
- Click "Download for Windows"
- Save the OllamaSetup.exe file (approximately 150MB)
Step 2: Run the Installer
- Right-click OllamaSetup.exe → Run as administrator
- If Windows Defender SmartScreen appears:
- Click "More info"
- Click "Run anyway"
- Follow the installation wizard:
- Accept the license agreement
- Choose installation directory (default: C:\Program Files\Ollama)
- Select "Add to PATH" (important!)
- Click "Install"
Step 3: Verify Installation
# Open a new PowerShell window (important: must be new)
ollama --version
# Should display a version string, e.g. ollama version 0.5.x
Method 2: PowerShell Installation (Advanced)
# Run PowerShell as Administrator
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
# Download installer
Invoke-WebRequest -Uri "https://ollama.com/download/OllamaSetup.exe" -OutFile "$env:TEMP\OllamaSetup.exe"
# Install silently (the installer accepts the Inno Setup /SILENT switch)
Start-Process -FilePath "$env:TEMP\OllamaSetup.exe" -ArgumentList "/SILENT" -Wait
# Add to PATH if not added
$ollamaPath = "C:\Program Files\Ollama"
$currentPath = [Environment]::GetEnvironmentVariable("Path", "User")
if ($currentPath -notlike "*$ollamaPath*") {
[Environment]::SetEnvironmentVariable("Path", "$currentPath;$ollamaPath", "User")
}
# Refresh environment variables
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
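If you're scripting the whole setup end to end, you can pre-fetch a starter model in the same elevated session (phi3:mini is the small model from the express setup; the path assumes the default install directory used above):

# Pull a small starter model without waiting for a new shell to pick up PATH
& "C:\Program Files\Ollama\ollama.exe" pull phi3:mini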
GPU Configuration {#gpu-configuration}
NVIDIA GPU Setup
Step 1: Install or Update NVIDIA Drivers
- Ollama's Windows build bundles the CUDA runtime it needs, so a current NVIDIA driver is the only hard requirement
- Download the latest driver from nvidia.com (the CUDA Toolkit itself is only needed if you develop CUDA software)
- Run the installer as administrator and choose "Express Installation"
- Restart the computer after installation
Step 2: Verify CUDA Installation
nvidia-smi
# Should display GPU information and CUDA version
Step 3: Confirm Ollama Uses the GPU
# No environment variable needed; Ollama picks up a supported NVIDIA GPU automatically
ollama run llama3.2 "Say hello"
# In a second window, check where the model loaded:
ollama ps
# The PROCESSOR column should read "100% GPU"
AMD GPU Setup (Experimental)
# Recent Ollama builds support a limited set of AMD Radeon GPUs on Windows
# out of the box; no ROCm install or environment variable is required.
# Check the GPU section of Ollama's GitHub docs for the current supported-card list.
Installing Your First Model {#installing-first-model}
Step 1: Start the Ollama Service
# The installer normally starts Ollama in the system tray automatically.
# If there's no tray icon, start the server manually and keep the window open:
ollama serve
Step 2: Download a Model
Open a new PowerShell window:
# Download Llama 3.2 3B (~2GB, good for testing)
ollama pull llama3.2
# For limited RAM (8GB), use a smaller model:
ollama pull phi3:mini
# For powerful systems (48GB+ RAM, ideally with a large GPU):
ollama pull llama3.1:70b
🎯 Choosing the right model: Your hardware determines which models will run well. Check our 8GB RAM model guide for budget systems, or compare Llama vs Mistral vs CodeLlama to find the best model for your use case. For a complete overview, see our 2025 model comparison.
Step 3: Run the Model
# Start interactive chat
ollama run llama3.2
# Example prompt:
> Write a hello world program in Python
# Exit with /bye or Ctrl+D
Step 4: List Installed Models
# See all downloaded models
ollama list
# Remove a model
ollama rm model_name
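If you prefer scripting, the same list is available as JSON from the local REST API. A small sketch, assuming the default port and that the server (or tray app) is running:

# List installed models with sizes via the local API (default port 11434)
$tags = Invoke-RestMethod -Uri "http://localhost:11434/api/tags"
$tags.models | Select-Object name, @{n='Size(GB)';e={[math]::Round($_.size / 1GB, 1)}}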
PowerShell and CMD Setup {#powershell-cmd-setup}
PowerShell Configuration
Create an Ollama profile for PowerShell:
# Create PowerShell profile if it doesn't exist yet
if (!(Test-Path $PROFILE)) { New-Item -Path $PROFILE -Type File -Force }
# Add Ollama aliases (single-quoted here-string so $model isn't expanded now)
Add-Content $PROFILE @'
# Ollama Aliases
function ai { ollama run llama3.2 }
function models { ollama list }
function pull { param($model) ollama pull $model }
'@
# Reload profile
. $PROFILE
Command Prompt (CMD) Setup
Create batch files for easier access:
:: Save as C:\ollama-scripts\ai.bat
@echo off
ollama run llama3.2 %*

:: Save as C:\ollama-scripts\models.bat
@echo off
ollama list

:: Then add the folder to PATH (run once in CMD; note setx truncates very long PATHs)
setx PATH "%PATH%;C:\ollama-scripts"
Windows Terminal Integration
Add Ollama profile to Windows Terminal:
- Open Windows Terminal
- Settings → Add new profile
- Configure:
- Name: Ollama AI
- Command line: powershell.exe -NoExit -Command "ollama run llama3.2"
- Starting directory: %USERPROFILE%
- Icon: 🤖
Performance Optimization {#performance-optimization}
1. Keep Models Loaded in Memory
# By default a model is unloaded about five minutes after the last request.
# Keep it resident longer to avoid reload latency ("30m" is an example value):
[Environment]::SetEnvironmentVariable("OLLAMA_KEEP_ALIVE", "30m", "User")
2. CPU Optimization
# AVX2 is detected and used automatically; no switch is required.
# Thread count is a per-model option (num_thread), set in a Modelfile or per
# request through the API rather than an environment variable; see the sketch below.
3. Storage Optimization
# Move model storage to a faster SSD
$newPath = "D:\OllamaModels"
New-Item -ItemType Directory -Path $newPath -Force
[Environment]::SetEnvironmentVariable("OLLAMA_MODELS", $newPath, "User")
# Note: already-downloaded models are not moved automatically; see Issue 7 below
4. Interrupted Downloads
# ollama pull resumes partial downloads: if a large model stalls,
# press Ctrl+C and re-run the same pull command to continue
ollama pull llama3.1:8b
Troubleshooting Common Issues {#troubleshooting}
Issue 1: "ollama is not recognized"
Solution:
# Manually add to the user PATH (read the User value, not $env:Path,
# which merges machine and user scopes and would duplicate entries)
$ollamaPath = "C:\Program Files\Ollama"
$userPath = [Environment]::GetEnvironmentVariable("Path", "User")
[Environment]::SetEnvironmentVariable("Path", "$userPath;$ollamaPath", "User")
# Restart PowerShell, or run refreshenv (available if Chocolatey is installed):
refreshenv
Issue 2: "Windows Defender blocks Ollama"
Solution:
- Windows Security → Virus & threat protection
- Protection history → Find blocked item
- Actions → Allow on device
- Add exclusion: Settings → Add or remove exclusions
- Add folder: C:\Program Files\Ollama
Issue 3: "GPU not detected"
Solution:
# Check CUDA installation
nvidia-smi
# Reinstall NVIDIA drivers
# Download from: https://www.nvidia.com/Download/index.aspx
# Force GPU usage
[Environment]::SetEnvironmentVariable("CUDA_VISIBLE_DEVICES", "0", "User")
Issue 4: "Model download fails"
Solution:
# Clear the model cache (warning: this deletes ALL downloaded models)
Remove-Item -Path "$env:USERPROFILE\.ollama\models" -Recurse -Force
# Retry with the fully qualified model name
ollama pull registry.ollama.ai/library/llama3.2:latest
# Check firewall
New-NetFirewallRule -DisplayName "Ollama" -Direction Outbound -Program "C:\Program Files\Ollama\ollama.exe" -Action Allow
Issue 5: "Out of memory error"
Solution:
# Use quantized models (q4_0 builds have a smaller memory footprint)
ollama pull llama3.1:8b-instruct-q4_0
# Limit context size from inside the chat (there is no --ctx-size flag):
ollama run llama3.2
>>> /set parameter num_ctx 2048
# Close other applications; find the memory hogs first:
Get-Process | Where-Object {$_.WorkingSet -gt 500MB} | Select-Object Name, @{n='Memory(GB)';e={$_.WorkingSet/1GB}}
Issue 6: "Port 11434 already in use"
Solution:
# Find process using port 11434
netstat -ano | findstr :11434
# Kill the process (replace PID with actual process ID)
taskkill /PID <process_id> /F
# Or change Ollama's default port
[Environment]::SetEnvironmentVariable("OLLAMA_HOST", "127.0.0.1:11435", "User")
Issue 7: "Permission denied" when accessing model directory
Solution:
# Grant full control to the Ollama directory (note the $() so PowerShell
# parses the username correctly before the colon)
icacls "C:\Program Files\Ollama" /grant "$($env:USERNAME):(OI)(CI)F" /T
# Or change models directory to user folder
$newPath = "$env:USERPROFILE\OllamaModels"
New-Item -ItemType Directory -Path $newPath -Force
[Environment]::SetEnvironmentVariable("OLLAMA_MODELS", $newPath, "User")
# Move existing models
Move-Item "$env:USERPROFILE\.ollama\models\*" $newPath -Force
Issue 8: "Extremely slow performance on laptop"
Solution:
# Set Windows power plan to High Performance
powercfg /setactive SCHEME_MIN
# Disable Windows Search indexing for Ollama directory
# Control Panel → Indexing Options → Modify → Uncheck Ollama folders
# Ensure laptop is plugged in (many laptops throttle on battery)
# Check power mode: Settings → System → Power & battery → Power mode: Best performance
# For laptops with NVIDIA Optimus, force Ollama to use dedicated GPU:
# NVIDIA Control Panel → Manage 3D settings → Program Settings
# Add ollama.exe → Select "High-performance NVIDIA processor"
Issue 9: "Windows Firewall blocks API access"
Solution:
# Allow Ollama through Windows Firewall
New-NetFirewallRule -DisplayName "Ollama API" -Direction Inbound -Protocol TCP -LocalPort 11434 -Action Allow
# For local-only access (more secure):
New-NetFirewallRule -DisplayName "Ollama Local" -Direction Inbound -Protocol TCP -LocalPort 11434 -RemoteAddress 127.0.0.1 -Action Allow
# Test API access
Invoke-RestMethod -Uri "http://localhost:11434/api/tags" -Method Get
Issue 10: "Model loading is very slow (2+ minutes)"
Symptoms: Model takes 2-5 minutes to load each time you run it.
Solution:
# 1. Check whether your drives are SSD or HDD
Get-PhysicalDisk | Select-Object FriendlyName, MediaType
# 2. Move models to SSD
$ssdPath = "C:\OllamaModels" # Change to your SSD drive
New-Item -ItemType Directory -Path $ssdPath -Force
[Environment]::SetEnvironmentVariable("OLLAMA_MODELS", $ssdPath, "User")
# 3. Disable Windows Defender real-time scanning for model directory
Add-MpPreference -ExclusionPath $ssdPath
# 4. Keep the model resident between runs so it doesn't reload each time
[Environment]::SetEnvironmentVariable("OLLAMA_KEEP_ALIVE", "30m", "User")
# 5. Use smaller models for faster loading
ollama pull llama3.2:3b # a ~2GB model loads far faster than a 40GB one
Still having issues? Check our comprehensive Windows AI troubleshooting guide or join our Discord community for real-time help.
Advanced Configuration {#advanced-configuration}
Custom Model Configuration
Create a Modelfile:
# Modelfile
FROM llama3.2
# Set parameters
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER stop "</s>"
# System prompt
SYSTEM "You are a helpful Windows expert assistant."
# Save as Modelfile.txt, then:
# ollama create windows-assistant -f Modelfile.txt
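Then build and give it a quick test:

# Build the custom model from the Modelfile and try it
ollama create windows-assistant -f .\Modelfile.txt
ollama run windows-assistant "Why doesn't my new PATH show up in old windows?"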
API Configuration
Enable API access:
# Expose the API (0.0.0.0 makes it reachable from your whole network;
# keep the default 127.0.0.1 unless you need remote access, and see Security below)
[Environment]::SetEnvironmentVariable("OLLAMA_HOST", "0.0.0.0:11434", "User")
# Test API
Invoke-RestMethod -Uri "http://localhost:11434/api/tags" -Method Get
# Generate text via API
$body = @{
    model  = "llama3.2"
    prompt = "Hello, how are you?"
    stream = $false   # one JSON document instead of an NDJSON stream
} | ConvertTo-Json
Invoke-RestMethod -Uri "http://localhost:11434/api/generate" -Method Post -Body $body -ContentType "application/json"
Running as Windows Service
# Note: ollama.exe is a console app, not a native service binary, so New-Service
# alone registers a service that Windows cannot actually start. Install it through
# a service wrapper instead (see the NSSM sketch below), then manage it normally:
# Start service
Start-Service -Name "Ollama"
# Check service status
Get-Service -Name "Ollama"
Security Considerations
1. Firewall Configuration
# Allow only local connections
New-NetFirewallRule -DisplayName "Ollama Local Only" -Direction Inbound -Protocol TCP -LocalPort 11434 -RemoteAddress 127.0.0.1 -Action Allow
2. User Permissions
# Restrict Ollama directory access ($() so the username parses correctly)
icacls "C:\Program Files\Ollama" /grant:r "$($env:USERNAME):(OI)(CI)F" /T
3. Telemetry
Ollama does not phone home: all inference happens locally, and there is no telemetry setting to disable.
Performance Benchmarks
Here's what you can expect on different Windows systems:
| System Configuration | Model | Tokens/Second | RAM Usage |
|---|---|---|---|
| Budget (i5, 8GB RAM, No GPU) | Phi-3 Mini | 12-15 | 4GB |
| Mid-Range (i7, 16GB RAM, RTX 3060) | Llama 3.1 8B | 35-40 | 8GB |
| High-End (i9, 32GB RAM, RTX 4080) | Llama 2 13B | 45-50 | 16GB |
| Workstation (Threadripper, 64GB, RTX 4090) | Llama 3.1 70B | 20-25 | 42GB |
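Your numbers will vary, so measure your own throughput: the --verbose flag prints timing stats after each response.

# The stats block includes an "eval rate" line (tokens/second)
ollama run llama3.2 --verbose "Explain DNS in one paragraph"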
Next Steps {#next-steps}
Now that Ollama is installed and running on Windows:
1. Explore More Models
# Browse available models
Start-Process "https://ollama.com/library"
# Try specialized models
ollama pull codellama # For programming
ollama pull mistral # For general chat
ollama pull dolphin-mixtral # For uncensored responses
2. Build Applications
- Install Continue.dev for VS Code integration
- Try Open WebUI for ChatGPT-like interface
- Explore LangChain for building AI apps
3. Join the Community
Hop into the Discord community linked above for real-time help, and watch the Ollama library page for newly released models.
Frequently Asked Questions
Q: Can I run Ollama on Windows 7 or 8?
A: No, Ollama requires Windows 10 (1903+) or Windows 11; the native build depends on modern Windows APIs that older versions lack.
Q: How much disk space do models really need?
A: Model sizes vary significantly:
- Small (Phi-3): 2-3GB
- Medium (Llama 3.1 8B): 4-5GB
- Large (13B-class models): 8-10GB
- Extra Large (Llama 3.1 70B): 40-50GB
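To see what your downloaded models actually consume, here's a sketch assuming the default storage location (adjust the path if you moved OLLAMA_MODELS):

# Total disk space used by downloaded models
$models = "$env:USERPROFILE\.ollama\models"
$gb = (Get-ChildItem $models -Recurse -File | Measure-Object Length -Sum).Sum / 1GB
"{0:N1} GB in $models" -f $gb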
Q: Can I use Ollama offline after installation?
A: Yes! Once models are downloaded, Ollama works completely offline. You only need internet to download new models or updates.
Q: Is my data private when using Ollama?
A: Absolutely. All processing happens locally on your Windows machine. No data is sent to external servers.
Q: Can I run multiple models simultaneously?
A: Yes, if you have enough RAM. Each model runs in its own process. Use different terminals or the API to run multiple models.
Conclusion
You've successfully installed Ollama on Windows and are ready to explore the world of local AI. With your setup complete, you have:
- ✅ Full privacy and control over your AI
- ✅ No subscription fees or API costs
- ✅ Ability to run models offline
- ✅ Complete customization options
- ✅ Integration with Windows tools and workflows
Remember to keep Ollama updated (the Windows app checks for new releases automatically, and you can always re-download the installer from ollama.com) and explore new models as they're released. The local AI community is growing rapidly, with new models and tools appearing weekly.
Need help? Join our newsletter for weekly Windows AI tips and troubleshooting guides, or check out our complete Windows AI optimization course for advanced techniques.