January 10, 2026, 2:24 PM · 4 min read

Accessing Windows Ollama from WSL without the Networking Headache

I live in the terminal. Whether I'm building data pipelines or AI tools like this Local RAG project, WSL (Windows Subsystem for Linux) is my home.

But when I started working with local LLMs using Ollama, I hit a wall.

The Problem: WSL Memory Tax

By default, WSL is capped at 50% of your Windows RAM.

If you install Ollama inside WSL, you're loading 4GB+ of model weights into that restricted memory pool, leaving practically nothing for your actual application, Docker containers, or vector DBs.
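
You can see the cap for yourself from inside WSL (exact numbers depend on your Windows build and any .wslconfig overrides):

bash
# Inside WSL: "total" is the ceiling for everything Linux-side
free -h
# On a 32GB machine with default settings, expect roughly 16GB here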

The smarter play: Install Ollama on Windows (where it has full access to RAM and GPU) and just call the API from WSL.

But there's a catch.

The Networking Nightmare

WSL can't just talk to localhost:11434 on Windows. You have to use the Windows Host IP.

You can find it manually:

bash
ip route show | grep default | awk '{print $3}'
# Output: 172.25.16.1 (this changes constantly!)
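
With that address (and the Windows-side setup covered below), you can hit the Ollama API straight from WSL. For example:

bash
# Substitute whatever IP the command above printed on your machine
curl http://172.25.16.1:11434/api/tags   # lists the models installed on the Windows side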

The issue? This IP changes every time you reboot or switch networks. Hardcoding it in your .env file works for exactly one session before breaking everything.

🛠️ The Fix: Auto-Update Script

I got tired of manually editing .env files every morning, so I wrote a script to do it for me.

The concept is simple:

  1. Detect the dynamic Windows IP from inside WSL
  2. Find my project's .env file
  3. Regex-replace the OLLAMA_BASE_URL line automatically

Here's the core logic:

bash
# Get the dynamic host IP
HOST_IP=$(ip route show | grep default | awk '{print $3}')

# Update .env
sed -i "s|^OLLAMA_BASE_URL=.*|OLLAMA_BASE_URL=http://$HOST_IP:11434|" .env
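
The full script on GitHub adds a bit of guard-railing around this. A minimal sketch of the same idea (the file handling and messages here are my own illustration, not a copy of the repo):

bash
#!/usr/bin/env bash
# Sketch: point OLLAMA_BASE_URL at the current Windows host IP
ENV_FILE="${1:-.env}"

if [ ! -f "$ENV_FILE" ]; then
  echo "No $ENV_FILE found, nothing to update" >&2
  exit 1
fi

HOST_IP=$(ip route show | grep default | awk '{print $3}')
sed -i "s|^OLLAMA_BASE_URL=.*|OLLAMA_BASE_URL=http://$HOST_IP:11434|" "$ENV_FILE"
echo "OLLAMA_BASE_URL -> http://$HOST_IP:11434"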

Now, every time I run my backend, the environment config is instantly synced. Zero manual work.

📦 View the Full Script on GitHub


🚀 Setup Guide (5 Minutes)

1. Install Ollama on Windows

Don't install it in WSL. Download the Windows installer from ollama.com.

2. Expose to Network

By default, Ollama only listens on localhost. We need it to listen on all interfaces.

  1. Quit Ollama from the taskbar.
  2. Open Windows Environment Variables.
  3. Add User Variable: OLLAMA_HOST = 0.0.0.0
  4. Restart Ollama.

3. Update Windows Firewall

Run this in PowerShell (Admin) to allow WSL traffic:

powershell
New-NetFirewallRule -DisplayName "Ollama Allow WSL" -Direction Inbound -LocalPort 11434 -Protocol TCP -Action Allow
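
With the variable set and the firewall rule in place, a quick sanity check from WSL should now succeed (reusing the host-IP trick from earlier):

bash
HOST_IP=$(ip route show | grep default | awk '{print $3}')
curl http://$HOST_IP:11434/api/version   # should return Ollama's version as JSON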

4. Add the Script

Download the script to your project:

bash
wget https://raw.githubusercontent.com/smaxiso/ollama-wsl-hack/main/update_ip.sh
chmod +x update_ip.sh

Now, just run ./update_ip.sh before starting your app, or add it to your package.json / Makefile.
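
For example, a tiny wrapper does the job (the start command below is just a placeholder for however you launch your backend):

bash
#!/usr/bin/env bash
# dev.sh: refresh the Windows host IP, then start the app
./update_ip.sh
npm run dev   # swap in your own start command (uvicorn, make run, etc.)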


Why Do This?

  • Zero WSL RAM Overhead: Keep your Linux environment lightweight.
  • Native GPU Access: Windows handles the drivers; WSL just sends HTTP requests.
  • Set & Forget: No more "Connection Refused" errors after a reboot.

I use this daily for local development. It's a small hack, but it saves me from debugging the same connectivity issue over and over.

Get the script here and let me know if it helps your workflow!

Topics
#Ollama · #WSL2 · #Local LLM · #Windows · #Linux · #DevOps · #Productivity · #Bash Scripting · #AI Engineering
Thanks for reading!