Josh Pocock
Last update: 23/08/2024

Run Any Ollama AI Model with n8n 🦙🧠

Tired of paying for AI models? 😩 Here's how to run them locally with n8n! 🤯

This is your one-stop shop for integrating powerful, open-source AI models like Llama 3.1 directly into your n8n workflows.

No more relying on expensive cloud services! 🚀

Setting the Stage 🎬

What you’ll need:

  • A basic understanding of n8n (it’s like Zapier or Make, but better! 😉)
  • A server with Docker installed (I recommend Coolify – super easy to use!)

Why this matters:

  • Cost savings: 💰 Stop paying for every API call and use powerful models for free.
  • Privacy: 🔐 Keep your data on your own server, under your control.
  • Customization: 🔧 Fine-tune models to your exact needs.

Installing Ollama: Your AI Powerhouse 🔌

  1. Get the Docker image:
   docker pull ollama/ollama
  2. Run Ollama (CPU only):
   docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
  • The -v flag keeps your downloaded models in a named volume so they survive container restarts, and --name makes the container easy to reference later.
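Before wiring anything up to n8n, it's worth confirming the Ollama API is actually answering on port 11434. A quick sanity check, assuming you run it on the Docker host itself:

```shell
# Hit the Ollama API root endpoint from the Docker host.
# If the container is up, it should respond with: Ollama is running
curl http://localhost:11434
```

If this fails, check `docker ps` to make sure the container is running and the port mapping is in place.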

Bridging the Gap: Connecting Ollama and n8n 🔗

Think of it like this: you need to get Ollama and n8n talking on the same network. 🗣️

  1. Find your n8n network:
   docker network ls
  • Look for the network your n8n container is using (you can find this in Coolify or Portainer too).
  2. Find your Ollama container ID:
   docker ps
  3. Connect Ollama to your n8n network:
   docker network connect <YOUR_N8N_NETWORK_NAME> <YOUR_OLLAMA_CONTAINER_ID>
  • Replace <YOUR_N8N_NETWORK_NAME> and <YOUR_OLLAMA_CONTAINER_ID> with the values you found.
  4. Restart your containers: This ensures the connection takes effect.
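To double-check that the connection took effect, you can list the containers attached to the n8n network (the placeholder below is the same network name from the steps above):

```shell
# Print the names of all containers attached to the n8n network.
# Your Ollama container should appear in the output.
docker network inspect <YOUR_N8N_NETWORK_NAME> \
  --format '{{range .Containers}}{{.Name}} {{end}}'
```

If Ollama doesn't show up, re-run the `docker network connect` step before moving on.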

Installing Your Desired Model 🧠

  1. Access your Ollama container:
   docker exec -it <YOUR_OLLAMA_CONTAINER_ID> ollama run llama3.1 
  • Replace <YOUR_OLLAMA_CONTAINER_ID> with your Ollama container ID.
  • Replace llama3.1 with your desired model (e.g., mixtral, llama2).
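Note that `ollama run` downloads the model and then drops you into an interactive chat session. If you'd rather just fetch the model for n8n to use, `ollama pull` downloads it without opening a session, and `ollama list` confirms it's installed:

```shell
# Download the model without starting an interactive chat.
docker exec <YOUR_OLLAMA_CONTAINER_ID> ollama pull llama3.1

# Verify the model shows up in the installed-models list.
docker exec <YOUR_OLLAMA_CONTAINER_ID> ollama list
```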

Unleashing the Power: Using Ollama in n8n 🚀

  1. Get your Ollama container’s IP address:
   docker inspect <YOUR_OLLAMA_CONTAINER_ID>
  • Look for the IPAddress field in the output.
  2. In n8n, add an “AI Agent” node.
  3. Select “Ollama Chat” as your chat model.
  4. Enter your Ollama Base URL:
  • Use the format: http://<YOUR_OLLAMA_CONTAINER_IP>:11434
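Since both containers now share a user-defined Docker network, you can usually skip the IP lookup entirely and use the Ollama container's name as the hostname (Docker resolves container names on user-defined networks). You can test either base URL from inside the n8n container — the container names below are placeholders, so substitute your own:

```shell
# Test the connection from inside the n8n container.
# "ollama" is assumed to be the Ollama container's name — adjust if yours differs.
# The n8n image may not ship curl, so busybox wget is used here instead.
docker exec <YOUR_N8N_CONTAINER_NAME> wget -qO- http://ollama:11434
# A healthy setup should answer with: Ollama is running
```

Using the container name also spares you from updating the base URL if the container's IP changes after a restart.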

You did it! 🎉

You’ve unlocked a world of AI possibilities with locally run Ollama models in your n8n workflows! Now go build something amazing! 🤖
