The Ultimate Guide: Move Ollama Model Storage to an External Drive on macOS (M1/M2 Friendly)

Ollama is great for running LLMs locally, but it doesn’t ask where you want to store your 5GB–10GB+ models. On Macs with limited internal SSD space, that’s a problem.

Here’s how I tracked down Ollama’s hidden model cache, moved it to an external drive, and kept everything running smoothly — even on my M1 Mac.

TL;DR

Models are secretly stored in ~/.ollama/models
You can move them to an external drive
Then symlink the folder so Ollama still works
Bonus: how to pull new models directly to the external
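
In command form (the placeholders are explained in the walk-through below):

mkdir -p /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama
mv ~/.ollama/models /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama/
ln -s /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama/models ~/.ollama/models
ollama list   # sanity check
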
Why This Guide?

Other guides (like this gist) touch on this, but they:

Don’t cover external drives
Don’t verify model integrity after moving
Don’t explain what to do if OLLAMA_MODELS fails
Don’t include performance-aware model suggestions for M1 users
And absolutely don’t document the whirlwind debugging process to get here 😤

What You’ll Need
A Mac (Intel or M1/M2)
Ollama installed
At least one pulled model (so you can move it)
An external drive (mine was /Volumes/2USBEXT)
Terminal access
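
You can sanity-check these from Terminal before starting:

ollama --version   # Ollama is installed
ollama list        # you have at least one pulled model
ls /Volumes        # your external drive is mounted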

Step-by-Step Instructions

1. Locate Ollama’s Storage Folder

Ollama doesn’t mention this during setup, but it saves models to:

~/.ollama/models

Check if it exists:

ls -lah ~/.ollama/models

You’ll likely see folders and files totaling 5–15GB, depending on how many models you’ve pulled.
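
Before moving anything, check how big the cache actually is and make sure the external drive has room for it:

du -sh ~/.ollama/models          # total size of the model cache
df -h /Volumes/_EXTERNAL_DRIVE_NAME_   # free space on the external drive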

2. Create a Destination on Your External Drive

Plug in your external drive and make a folder for Ollama:

mkdir -p /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama
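
One gotcha worth checking: NTFS-formatted drives mount read-only on macOS, so a quick write test saves pain later (the file name here is arbitrary):

touch /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama/.write-test && rm /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama/.write-test

If that errors out, reformat the drive as APFS or exFAT before going further.
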
3. Move the Models

Quit the Ollama menu bar app first so nothing is writing to the folder mid-move, then:

mv ~/.ollama/models /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama/

Since the ollama folder already exists from step 2, the cache lands at .../ollama/models, and you’ve now freed up that internal storage.
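
The Why section promised integrity verification, so here it is. Ollama names every blob after its own SHA-256 digest, which means you can re-hash each file and compare it to its filename. A minimal zsh loop, assuming the sha256-<digest> naming recent builds use (older builds used sha256:<digest>; adjust the pattern if yours differ):

cd /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama/models/blobs
for f in sha256-*; do
  expected="${f#sha256-}"                          # digest from the filename
  actual=$(shasum -a 256 "$f" | awk '{print $1}')  # digest of the contents
  [ "$actual" = "$expected" ] && echo "OK  $f" || echo "CORRUPT  $f"
done

If you’d rather not trust mv’s copy-then-delete across filesystems, copy with rsync -a first, run the check, and delete the original only once everything comes back OK.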

4. Symlink It Back

Ollama expects the folder at ~/.ollama/models, so we trick it:

ln -s /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama/models ~/.ollama/models

Note the link points at the moved models folder itself, not its parent ollama folder.

This way, the CLI keeps working as if nothing changed.
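
You can confirm the link resolves where you expect:

ls -ld ~/.ollama/models    # shows: models -> /Volumes/.../ollama/models
readlink ~/.ollama/models  # prints just the target path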

5. Test It

List models:

ollama list

Run a model:

ollama run deepseek-r1:7b

If it responds — your symlink works.

Alternative: The OLLAMA_MODELS Environment Variable

You can set an environment variable instead of a symlink, but it’s been less reliable in current builds:

echo 'export OLLAMA_MODELS="/Volumes/2USBEXT/DEV/ollama/models"' >> ~/.zshrc
source ~/.zshrc

Use it only if you want to switch between different drives easily.
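
Part of why it’s flaky: the Ollama menu bar app is launched by Finder, not your shell, so it never reads ~/.zshrc. Ollama’s FAQ recommends setting environment variables for the app with launchctl instead:

launchctl setenv OLLAMA_MODELS "/Volumes/2USBEXT/DEV/ollama/models"

Then quit and reopen the Ollama app so it picks the value up.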

Bonus Section: Best Models for M1 Macs

I tested a bunch, and these are fast + stable:

| Model     | Use Case             | Size   | Notes                          |
|-----------|----------------------|--------|--------------------------------|
| phi3      | Chat + reasoning     | ~2.2GB | Fast, small, ideal for M1      |
| mistral   | Chat + code          | ~4.1GB | Great all-rounder              |
| llama3:8b | Smarter conversation | ~4.7GB | Best Meta model under 10GB     |
| codellama | Code gen & fill      | ~5.6GB | If you’re building code tools  |
| gemma:2b  | Lightweight anything | ~1.4GB | Tiny model for light tasks     |

Pull with:

ollama pull phi3
ollama pull mistral
ollama pull llama3

All of them will now download straight to your external drive thanks to the symlink.
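
A quick way to confirm a fresh pull really lands on the external drive (gemma:2b is just a small test model):

du -sh /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama/models   # note the size
ollama pull gemma:2b
du -sh /Volumes/_EXTERNAL_DRIVE_NAME_/_PATH_TO_/ollama/models   # should grow by ~1.4GB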

Cleanup Tip

Want to find where that space went?

du -sh ~/.ollama/*

Or if you’re really unsure where model files are hiding (note this catches GGUF files from tools like llama.cpp or LM Studio; Ollama’s own blobs have no extension):

sudo find /Users -name "*.gguf" 2>/dev/null
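
Since Ollama’s blobs are extensionless sha256-* files, a more Ollama-specific sweep looks like this (the +100M size filter is arbitrary, just to skip tiny manifests):

sudo find /Users /Volumes -type f -name "sha256-*" -size +100M 2>/dev/null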

Final Thoughts

Ollama is fast and powerful, but its CLI assumes you’re fine giving up 10–20GB of SSD space without asking.

This guide puts that control back in your hands — and hopefully saves a few MacBook lives in the process.
