
Installation Guide

Complete Ollama Installation Guide for Windows (2025 Step-by-Step)

Published on January 30, 2025 • 18 min read • Local AI Master

Quick Summary:

  • ✅ Install Ollama on Windows 10/11 in under 10 minutes
  • ✅ Configure GPU acceleration for 5x faster performance
  • ✅ Download and run your first AI model
  • ✅ Troubleshoot common Windows-specific issues
  • ✅ Set up PowerShell and Command Prompt integration

Installing Ollama on Windows has become significantly easier in 2025, but many users still encounter issues with GPU drivers, PATH variables, and Windows Defender. This comprehensive guide will walk you through every step, from initial download to running your first model with optimal performance.

Table of Contents

  1. System Requirements
  2. Pre-Installation Checklist
  3. Step-by-Step Installation
  4. GPU Configuration
  5. Installing Your First Model
  6. PowerShell and CMD Setup
  7. Performance Optimization
  8. Troubleshooting Common Issues
  9. Advanced Configuration
  10. Next Steps

System Requirements {#system-requirements}

Before installing Ollama on Windows, ensure your system meets these requirements:

Minimum Requirements:

  • OS: Windows 10 version 1903 or higher, Windows 11
  • RAM: 8GB (16GB recommended)
  • Storage: 10GB free space for Ollama + space for models
  • CPU: 64-bit processor with AVX2 support (most CPUs from 2013+)
  • Internet: Required for initial download and model fetching

Recommended Requirements:

  • OS: Windows 11 22H2 or later
  • RAM: 16GB or more
  • Storage: 50GB+ SSD space
  • GPU: NVIDIA GPU with 6GB+ VRAM (for acceleration)
  • CPU: 8+ cores for better performance

GPU Requirements (Optional but Recommended):

  • NVIDIA: GTX 1650 or newer with CUDA 11.8+
  • AMD: RX 5700 or newer (limited support)
  • Intel Arc: A380 or newer (experimental)
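Not sure which GPU you have? You can list the installed adapters from PowerShell (note: the AdapterRAM field is a 32-bit value on many systems, so VRAM above 4GB may be under-reported; for NVIDIA cards, nvidia-smi gives exact numbers):

# List installed GPUs with driver version and approximate VRAM
Get-CimInstance Win32_VideoController |
    Select-Object Name, DriverVersion,
        @{n='VRAM(GB)'; e={[math]::Round($_.AdapterRAM / 1GB, 1)}}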

Pre-Installation Checklist {#pre-installation-checklist}

1. Check Windows Version

# Open PowerShell and run:
winver
# Ensure you have version 1903 or higher

2. Verify System Architecture

# Check if you have 64-bit Windows (wmic is deprecated on newer builds):
wmic os get osarchitecture
# Or, in PowerShell:
(Get-CimInstance Win32_OperatingSystem).OSArchitecture
# Should return "64-bit"

3. Check Available Storage

# Check available disk space (Get-PSDrive reports bytes, so convert to GB):
Get-PSDrive C | Select-Object @{n='Used(GB)';e={[math]::Round($_.Used/1GB)}}, @{n='Free(GB)';e={[math]::Round($_.Free/1GB)}}
# Ensure at least 10GB free

4. Update Windows

Ensure Windows is fully updated to avoid compatibility issues:

  • Settings → Update & Security → Windows Update → Check for updates

5. Disable Antivirus Temporarily

Windows Defender's real-time protection can slow down or block the installer:

  • Windows Security → Virus & threat protection → Manage settings
  • Temporarily disable Real-time protection, and re-enable it as soon as installation finishes (or use the exclusion shown below instead)
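A safer alternative to turning protection off is adding a Defender exclusion for the install directory using the built-in Defender cmdlets (run PowerShell as administrator; adjust the path if you install elsewhere):

# Exclude the Ollama install directory from real-time scanning
Add-MpPreference -ExclusionPath "C:\Program Files\Ollama"

# Confirm the exclusion was recorded
(Get-MpPreference).ExclusionPath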

Step-by-Step Installation {#step-by-step-installation}

Method 1: Graphical Installer (Recommended)

Step 1: Download Ollama

  1. Visit ollama.com/download/windows
  2. Click "Download for Windows"
  3. Save the OllamaSetup.exe file (a few hundred MB; size varies by release)

Step 2: Run the Installer

  1. Double-click OllamaSetup.exe (use Run as administrator if you want an all-users install)
  2. If Windows Defender SmartScreen appears:
    • Click "More info"
    • Click "Run anyway"
  3. Follow the installation wizard:
    • Accept the license agreement
    • Choose the installation directory (per-user installs default to %LOCALAPPDATA%\Programs\Ollama; all-users installs typically use C:\Program Files\Ollama)
    • Make sure the installer adds Ollama to PATH (important!)
    • Click "Install"

Step 3: Verify Installation

# Open a new PowerShell window (important: must be new)
ollama --version
# Should display something like: ollama version is 0.X.Y

Method 2: PowerShell Installation (Advanced)

# Run PowerShell as Administrator
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

# Download installer
Invoke-WebRequest -Uri "https://ollama.com/download/OllamaSetup.exe" -OutFile "$env:TEMP\OllamaSetup.exe"

# Install silently (/S is the NSIS-style flag; if your build uses an Inno
# Setup installer, /SILENT or /VERYSILENT is the equivalent)
Start-Process -FilePath "$env:TEMP\OllamaSetup.exe" -ArgumentList "/S" -Wait

# Add to PATH if not added
$ollamaPath = "C:\Program Files\Ollama"
$currentPath = [Environment]::GetEnvironmentVariable("Path", "User")
if ($currentPath -notlike "*$ollamaPath*") {
    [Environment]::SetEnvironmentVariable("Path", "$currentPath;$ollamaPath", "User")
}

# Refresh environment variables
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")

GPU Configuration {#gpu-configuration}

NVIDIA GPU Setup

Step 1: Install NVIDIA Drivers

Ollama ships with the CUDA runtime it needs, so a current NVIDIA driver is usually sufficient; the full CUDA Toolkit is only required if you build CUDA software yourself.

  1. Download the latest driver from nvidia.com (or the CUDA Toolkit 12.3 if you want the full development stack)
  2. Run the installer as administrator
  3. Choose "Express Installation"
  4. Restart your computer after installation

Step 2: Verify CUDA Installation

nvidia-smi
# Should display GPU information and CUDA version

Step 3: Verify GPU Acceleration

Ollama detects supported NVIDIA GPUs automatically once the driver is installed; no environment variable is required to enable CUDA.

# Run a model with timing output
ollama run llama3.2 --verbose
# The stats printed after each response show your effective eval rate;
# the server log also reports the detected GPU at startup
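You can also confirm where a loaded model is running with ollama ps:

# With a model loaded in another window, check its placement
ollama ps
# The PROCESSOR column reads "100% GPU" when inference runs on CUDA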

AMD GPU Setup (Experimental)

Recent Ollama builds ship native ROCm support for a limited set of Radeon cards (mainly RX 6000/7000 series); there is no separate switch to enable it, and supported cards are picked up automatically. For cards just outside the support list, some users report success spoofing the GPU architecture. This is an unofficial community workaround, not a documented setting:

# Example: present a gfx1031 card as gfx1030 (unsupported workaround)
[Environment]::SetEnvironmentVariable("HSA_OVERRIDE_GFX_VERSION", "10.3.0", "User")

Installing Your First Model {#installing-first-model}

Step 1: Make Sure the Ollama Server Is Running

The Windows app starts the server in the background automatically (look for the tray icon). If it is not running, start it manually:

# Start the server manually (skip this if the tray app is already running;
# a second instance fails with an "address already in use" error)
ollama serve
# Keep this window open

Step 2: Download a Model

Open a new PowerShell window:

# Download Llama 3.2 3B (about a 2GB download, good for testing)
ollama pull llama3.2

# For limited RAM (8GB), use a smaller model:
ollama pull phi3:mini

# For powerful systems (roughly 40GB of free RAM/VRAM; ~40GB download):
ollama pull llama3.1:70b

Step 3: Run the Model

# Start interactive chat
ollama run llama3.2

# Example prompt:
> Write a hello world program in Python

# Exit with /bye or Ctrl+D

Step 4: List Installed Models

# See all downloaded models
ollama list

# Remove a model
ollama rm model_name

PowerShell and CMD Setup {#powershell-cmd-setup}

PowerShell Configuration

Create an Ollama profile for PowerShell:

# Create PowerShell profile
New-Item -Path $PROFILE -Type File -Force

# Add Ollama aliases (single-quoted here-string so $model is written to the
# profile literally instead of being expanded now)
Add-Content $PROFILE @'
# Ollama Aliases
function ai { ollama run llama3.2 }
function models { ollama list }
function pull { param($model) ollama pull $model }
'@

# Reload profile
. $PROFILE
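With the profile loaded, the shortcuts work in any new PowerShell session:

ai                 # opens an interactive chat with llama3.2
models             # lists downloaded models
pull phi3:mini     # downloads a model by name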

Command Prompt (CMD) Setup

Create batch files for easier access, saved in C:\ollama-scripts:

:: C:\ollama-scripts\ai.bat
@echo off
ollama run llama3.2 %*

:: C:\ollama-scripts\models.bat
@echo off
ollama list

Then add the folder to your PATH. Note that setx writes the current combined PATH back into the user variable and silently truncates values over 1024 characters; the [Environment]:: approach used earlier is safer:

setx PATH "%PATH%;C:\ollama-scripts"

Windows Terminal Integration

Add Ollama profile to Windows Terminal:

  1. Open Windows Terminal
  2. Settings → Add new profile
  3. Configure:
    • Name: Ollama AI
    • Command line: powershell.exe -NoExit -Command "ollama run llama3.2"
    • Starting directory: %USERPROFILE%
    • Icon: 🤖

Performance Optimization {#performance-optimization}

1. Keep Models Loaded

Memory use is determined by the model you load, but you can control how long a model stays resident after the last request, which avoids reload latency:

# Keep models in memory for 24 hours after the last request ("-1" = indefinitely)
[Environment]::SetEnvironmentVariable("OLLAMA_KEEP_ALIVE", "24h", "User")

2. CPU Optimization

Thread count is a per-model parameter rather than an environment variable, and AVX2 is used automatically on CPUs that support it:

# Set CPU threads for the current chat session (from inside ollama run):
# /set parameter num_thread 8

# Or bake it into a custom model in a Modelfile:
# PARAMETER num_thread 8

3. Storage Optimization

# Move model storage to a faster or larger drive
$newPath = "D:\OllamaModels"
New-Item -ItemType Directory -Path $newPath -Force
[Environment]::SetEnvironmentVariable("OLLAMA_MODELS", $newPath, "User")
# Move any existing models across, then restart Ollama to pick up the new path
Move-Item "$env:USERPROFILE\.ollama\models\*" $newPath -ErrorAction SilentlyContinue

4. Network Notes

There is no documented download-timeout setting; ollama pull resumes interrupted transfers, so if a large model download stalls, simply re-run it:

# Re-running pull continues from where the transfer stopped
ollama pull llama3.1:70b

Troubleshooting Common Issues {#troubleshooting}

Issue 1: "ollama is not recognized"

Solution:

# Manually add to the user PATH (appending $env:Path would also copy every
# machine-level entry into the user variable)
$ollamaPath = "C:\Program Files\Ollama"
$userPath = [Environment]::GetEnvironmentVariable("Path", "User")
[Environment]::SetEnvironmentVariable("Path", "$userPath;$ollamaPath", "User")

# Restart PowerShell, or if you use Chocolatey:
refreshenv

Issue 2: "Windows Defender blocks Ollama"

Solution:

  1. Windows Security → Virus & threat protection
  2. Protection history → Find blocked item
  3. Actions → Allow on device
  4. Add exclusion: Settings → Add or remove exclusions
  5. Add folder: C:\Program Files\Ollama

Issue 3: "GPU not detected"

Solution:

# Check CUDA installation
nvidia-smi

# Reinstall NVIDIA drivers
# Download from: https://www.nvidia.com/Download/index.aspx

# Pin Ollama to a specific GPU (0 = the first GPU)
[Environment]::SetEnvironmentVariable("CUDA_VISIBLE_DEVICES", "0", "User")

Issue 4: "Model download fails"

Solution:

# Clear the model cache (warning: this deletes every downloaded model)
Remove-Item -Path "$env:USERPROFILE\.ollama\models" -Recurse -Force

# Retry using the fully qualified model name
ollama pull registry.ollama.ai/library/llama3.2:latest

# Check firewall
New-NetFirewallRule -DisplayName "Ollama" -Direction Outbound -Program "C:\Program Files\Ollama\ollama.exe" -Action Allow

Issue 5: "Out of memory error"

Solution:

# Use a quantized model for a smaller memory footprint (available tags vary
# by model; check the library page)
ollama pull llama3.1:8b-instruct-q4_0

# Limit context size from inside an interactive session (ollama run has no
# --ctx-size flag; num_ctx can also be set in a Modelfile):
# /set parameter num_ctx 2048

# Close other applications
Get-Process | Where-Object {$_.WorkingSet -gt 500MB} | Select-Object Name, @{n='Memory(GB)';e={$_.WorkingSet/1GB}}

Advanced Configuration {#advanced-configuration}

Custom Model Configuration

Create a Modelfile:

# Modelfile
FROM llama3.2

# Set parameters
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER stop "</s>"

# System prompt
SYSTEM "You are a helpful Windows expert assistant."

# Save as Modelfile (any filename works with -f), then:
# ollama create windows-assistant -f Modelfile
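Building and testing the custom model then looks like this:

# Build the custom model from the Modelfile
ollama create windows-assistant -f Modelfile

# Inspect the result; parameters and system prompt should match the file
ollama show windows-assistant

# Chat with it
ollama run windows-assistant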

API Configuration

The REST API is enabled by default and listens on http://localhost:11434. Change the bind address only if other machines need access:

# Expose the API beyond localhost (0.0.0.0 makes it reachable from your whole
# network; keep this behind a firewall)
[Environment]::SetEnvironmentVariable("OLLAMA_HOST", "0.0.0.0:11434", "User")

# Test API
Invoke-RestMethod -Uri "http://localhost:11434/api/tags" -Method Get

# Generate text via API
$body = @{
    model = "llama3.2"
    prompt = "Hello, how are you?"
} | ConvertTo-Json

Invoke-RestMethod -Uri "http://localhost:11434/api/generate" -Method Post -Body $body -ContentType "application/json"

Running as a Windows Service

ollama.exe is a console application, not a native service binary, so registering it directly with New-Service typically fails to start (error 1053). The simplest options are letting the tray app launch at login (the default) or wrapping the server with a service manager such as NSSM:

# Using NSSM (https://nssm.cc) to run "ollama serve" as a service
nssm install Ollama "C:\Program Files\Ollama\ollama.exe" serve
nssm start Ollama

# Check service status
Get-Service -Name "Ollama"

Security Considerations

1. Firewall Configuration

# Allow only local connections
New-NetFirewallRule -DisplayName "Ollama Local Only" -Direction Inbound -Protocol TCP -LocalPort 11434 -RemoteAddress 127.0.0.1 -Action Allow

2. User Permissions

# Restrict Ollama directory access to the current user (braces keep PowerShell
# from swallowing the colon after the variable name)
icacls "C:\Program Files\Ollama" /grant:r "${env:USERNAME}:(OI)(CI)F" /T

3. Telemetry

Ollama does not send prompts or model data to external servers, and current releases document no telemetry setting to disable; the only outbound traffic is model downloads and the app's update check.

Performance Benchmarks

Here's what you can expect on different Windows systems:

  • Budget (i5, 8GB RAM, no GPU): Phi-3 Mini, 12-15 tokens/s, ~4GB RAM
  • Mid-Range (i7, 16GB RAM, RTX 3060): Llama 3.1 8B, 35-40 tokens/s, ~8GB RAM
  • High-End (i9, 32GB RAM, RTX 4080): Llama 2 13B, 45-50 tokens/s, ~16GB RAM
  • Workstation (Threadripper, 64GB, RTX 4090): Llama 3.1 70B, 20-25 tokens/s, ~42GB RAM
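To measure your own numbers, run any model with --verbose and read the eval rate line printed after the response:

# Timing stats (prompt eval rate and eval rate, in tokens/s) print after the reply
ollama run llama3.2 "Summarize what memory bandwidth means." --verbose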

Next Steps {#next-steps}

Now that Ollama is installed and running on Windows:

1. Explore More Models

# Browse available models
Start-Process "https://ollama.com/library"

# Try specialized models
ollama pull codellama  # For programming
ollama pull mistral    # For general chat
ollama pull dolphin-mixtral  # For uncensored responses

2. Build Applications

Use the local REST API on port 11434 (see API Configuration above), or the official ollama libraries for Python and JavaScript, to script against your models.

3. Join the Community

The Ollama GitHub repository and Discord are the main venues for release notes, troubleshooting, and new-model announcements.


Frequently Asked Questions

Q: Can I run Ollama on Windows 7 or 8?

A: No, Ollama requires Windows 10 (version 1903 or later) or Windows 11 because it depends on APIs that older Windows versions lack.

Q: How much disk space do models really need?

A: Model sizes vary significantly:

  • Small (Phi-3 Mini, Llama 3.2 3B): 2-3GB
  • Medium (Llama 3.1 8B): 4-5GB
  • Large (Llama 2 13B): 7-10GB
  • Extra Large (Llama 3.1 70B): 40-50GB
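To see what your downloaded models actually occupy on disk (assuming the default store location; substitute your OLLAMA_MODELS path if you moved it):

# Total size of the local model store, in GB
$models = "$env:USERPROFILE\.ollama\models"
"{0:N1} GB" -f ((Get-ChildItem $models -Recurse -File | Measure-Object Length -Sum).Sum / 1GB)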

Q: Can I use Ollama offline after installation?

A: Yes! Once models are downloaded, Ollama works completely offline. You only need internet to download new models or updates.

Q: Is my data private when using Ollama?

A: Absolutely. All processing happens locally on your Windows machine. No data is sent to external servers.

Q: Can I run multiple models simultaneously?

A: Yes, if you have enough RAM. Each model runs in its own process. Use different terminals or the API to run multiple models.
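The number of models the server keeps in memory at once is capped by the OLLAMA_MAX_LOADED_MODELS setting (subject to available RAM/VRAM):

# Allow up to 2 models to stay loaded simultaneously (restart Ollama afterwards)
[Environment]::SetEnvironmentVariable("OLLAMA_MAX_LOADED_MODELS", "2", "User")

# See what is currently loaded and where each model is running
ollama ps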


Conclusion

You've successfully installed Ollama on Windows and are ready to explore the world of local AI. With your setup complete, you have:

  • ✅ Full privacy and control over your AI
  • ✅ No subscription fees or API costs
  • ✅ Ability to run models offline
  • ✅ Complete customization options
  • ✅ Integration with Windows tools and workflows

Remember to keep Ollama current (the Windows app checks for updates automatically, or you can re-run the installer) and explore new models as they're released. The local AI community is growing rapidly, with new models and tools appearing weekly.


Need help? Join our newsletter for weekly Windows AI tips and troubleshooting guides, or check out our complete Windows AI optimization course for advanced techniques.


📅 Published: January 30, 2025 • 🔄 Last Updated: September 24, 2025 • ✓ Manually Reviewed

Written by Pattanaik Ramswarup

AI Engineer & Dataset Architect | Creator of the 77,000 Training Dataset

I've personally trained over 50 AI models from scratch and spent 2,000+ hours optimizing local AI deployments. My 77K dataset project revolutionized how businesses approach AI training. Every guide on this site is based on real hands-on experience, not theory. I test everything on my own hardware before writing about it.

✓ 10+ Years in ML/AI✓ 77K Dataset Creator✓ Open Source Contributor

Recommended GPUs for Ollama on Windows

While Ollama runs on CPU, adding a compatible GPU provides 5-10x faster inference speeds. These NVIDIA GPUs are tested and verified for Windows Ollama installations.

Affiliate Disclosure: This post contains affiliate links. As an Amazon Associate and partner with other retailers, we earn from qualifying purchases at no extra cost to you. This helps support our mission to provide free, high-quality local AI education. We only recommend products we have tested and believe will benefit your local AI setup.

Best GPUs for Local AI Acceleration

⭐ Recommended

NVIDIA RTX 4060 Ti 16GB

Best budget GPU for local AI with ample VRAM

  • 16GB VRAM for large models
  • CUDA cores for AI acceleration
  • Runs 13B models smoothly
  • Low power consumption

NVIDIA RTX 4070 Ti Super

Excellent price/performance for serious AI work

  • 16GB VRAM
  • Superior CUDA performance
  • Handles 30B models
  • DLSS 3 support

NVIDIA RTX 4090 24GB

Professional-grade AI workstation GPU

  • 24GB VRAM for 70B models
  • Fastest inference speeds
  • Professional AI training
  • Future-proof investment
