Ollama is great for running LLMs locally, but it doesn’t ask where you want to store your 5GB–10GB+ models. On Macs with limited internal SSD space, that’s a problem.
Here’s how I tracked down Ollama’s hidden model cache, moved it to an external drive, and kept everything running smoothly — even on my M1 Mac.
TL;DR
Models are quietly stored in ~/.ollama/models. Move that folder to an external drive, symlink it back, and the CLI keeps working.
Why This Guide?
Other guides (like this gist) touch on the symlink trick, but I wanted a single walkthrough that covers locating the cache, moving it, and verifying everything still runs.
What You’ll Need
- A Mac with Ollama installed
- An external drive with enough free space for your models (5–15GB+)
- Basic comfort with Terminal
Step-by-Step Instructions
1. Locate Ollama’s Storage Folder
Ollama doesn’t surface this anywhere in the app, but by default it saves models to:
~/.ollama/models
Check if it exists:
ls -lah ~/.ollama/models
You’ll likely see folders and files totaling 5–15GB, depending on how many models you’ve pulled.
2. Create a Destination on Your External Drive
Plug in your external drive and make a folder for Ollama:
mkdir -p /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama
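Before running the mv in the next step, it's worth confirming the drive is actually mounted and has room for 5–15GB of models. A minimal guard, using the same placeholder path as above:

```shell
# Abort early if the external drive isn't mounted at the expected path.
dest="/Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama"

if [ -d "$(dirname "$dest")" ]; then
  df -h "$(dirname "$dest")"   # check free space before moving anything
else
  echo "external drive not mounted - plug it in first" >&2
fi
```

If the drive isn't mounted, `mv` would otherwise happily create a regular folder under /Volumes on your internal disk, which defeats the whole point.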
3. Move the Models
mv ~/.ollama/models /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama
This frees up the internal storage; the models now live at /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama/models.
4. Symlink It Back
Ollama expects the folder to be at ~/.ollama/models, so we link the moved folder back into place:
ln -s /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama/models ~/.ollama/models
This way, the CLI keeps working as if nothing changed.
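If you want to see the mechanics before touching real model files, the whole move-and-symlink pattern can be rehearsed with throwaway temp directories:

```shell
# Stand-ins for ~/.ollama and the external drive (throwaway dirs).
home_side=$(mktemp -d)
drive_side=$(mktemp -d)

mkdir -p "$home_side/models"
echo "fake-model-blob" > "$home_side/models/weights"

# Same two steps as above: move, then link back.
mv "$home_side/models" "$drive_side/models"
ln -s "$drive_side/models" "$home_side/models"

# Reads through the old path resolve through the link.
cat "$home_side/models/weights"   # prints: fake-model-blob
```

The file physically lives under `drive_side`, but anything reading the old path never notices.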
5. Test It
List models:
ollama list
Run a model:
ollama run deepseek-r1:7b
If it responds — your symlink works.
Alternative: The OLLAMA_MODELS Environment Variable
Instead of a symlink, you can point Ollama at the drive with the OLLAMA_MODELS variable, but it’s less reliable in current builds:
echo 'export OLLAMA_MODELS="/Volumes/2USBEXT/DEV/ollama"' >> ~/.zshrc
source ~/.zshrc
Note that this only affects processes started from your shell; the Ollama menu-bar app doesn’t read ~/.zshrc, which is part of why the symlink is the safer option. Use the variable only if you want to switch between different drives easily.
Bonus Section: Best Models for M1 Macs
I tested a bunch, and these are fast + stable:
Model | Use Case | Size | Notes |
---|---|---|---|
phi3 | Chat + reasoning | ~2.2GB | Fast, small, ideal for M1 |
mistral | Chat + code | ~4.1GB | Great all-rounder |
llama3:8b | Smarter conversation | ~4.7GB | Best Meta model <10GB |
codellama | Code gen & fill | ~5.6GB | If you’re building code tools |
gemma:2b | Lightweight anything | ~1.4GB | Tiny model for light tasks |
Pull with:
ollama pull phi3
ollama pull mistral
ollama pull llama3:8b
All of them will now download straight to your external drive thanks to the symlink.
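To double-check that a fresh pull really landed on the external drive (assuming you used the symlink approach above), resolve the link and measure it:

```shell
link="$HOME/.ollama/models"

if [ -L "$link" ]; then
  echo "models live at: $(readlink "$link")"
  du -sh "$link/"   # trailing slash makes du follow the symlink
else
  echo "no symlink at $link - models are still on the internal disk"
fi
```

If the reported size grows after a pull while your internal free space stays put, the redirect is doing its job.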
Cleanup Tip
Want to find where that space went?
du -sh ~/.ollama/*
Or, if you’re hunting for stray GGUF files left behind by other tools (note: Ollama itself stores its weights as sha256-named blobs under ~/.ollama/models/blobs, so they won’t match *.gguf):
sudo find /Users -name "*.gguf" 2>/dev/null
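To see exactly which files are eating the space, size-sort everything under ~/.ollama; the pipeline is standard find/du/sort, so it works the same on any directory:

```shell
# Show the five largest files under ~/.ollama, biggest last.
find ~/.ollama -type f -exec du -h {} + 2>/dev/null | sort -h | tail -n 5
```

The `-h` flag on sort orders human-readable sizes (4.0K, 2.2G, …) correctly, so multi-gigabyte blobs land at the bottom of the list.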
Final Thoughts
Ollama is fast and powerful, but its CLI assumes you’re fine giving up 10–20GB of SSD space without asking.
This guide puts that control back in your hands — and hopefully saves a few MacBook lives in the process.