Josh Pocock
Last update : 23/08/2024

Run Any Ollama AI Model with n8n 🦙🧠

Tired of paying for AI models? 😩 This is how to run them locally with n8n! 🤯

This is your one-stop shop for integrating powerful, open-source AI models like Llama 3.1 directly into your n8n workflows.

No more relying on expensive cloud services! 🚀

Setting the Stage 🎬

What you’ll need:

  • A basic understanding of n8n (it’s like Zapier or Make, but better! 😉)
  • A server with Docker installed (I recommend Coolify – super easy to use!)

Why this matters:

  • Cost savings: 💰 Stop paying for every API call and use powerful models for free.
  • Privacy: 🔐 Keep your data on your own server, under your control.
  • Customization: 🔧 Fine-tune models to your exact needs.

Installing Ollama: Your AI Powerhouse 🔌

  1. Get the Docker image:
   docker pull ollama/ollama
  2. Run Ollama (CPU only):
   docker run -d -p 11434:11434 ollama/ollama
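A couple of optional refinements, based on the official Ollama Docker image docs: mounting a volume keeps downloaded models across container restarts, and `--gpus=all` enables GPU acceleration (this assumes an NVIDIA GPU with the NVIDIA Container Toolkit installed on the host). These commands require a running Docker daemon.

```shell
# CPU, with a named volume so models survive restarts:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Or, with NVIDIA GPU acceleration (assumes nvidia-container-toolkit is set up):
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Sanity check — the Ollama root endpoint answers with a short status message:
curl http://localhost:11434
```

Naming the container `ollama` also makes the later `docker exec` and `docker network connect` steps easier, since you can use the name instead of the container ID.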

Bridging the Gap: Connecting Ollama and n8n 🔗

Think of it like this: you need to get Ollama and n8n talking on the same network. 🗣️

  1. Find your n8n network:
   docker network ls
  • Look for the network your n8n container is using (you can find this in Coolify or Portainer too).
  2. Find your Ollama container ID:
   docker ps
  3. Connect Ollama to your n8n network:
   docker network connect <YOUR_N8N_NETWORK_NAME> <YOUR_OLLAMA_CONTAINER_ID>
  • Replace <YOUR_N8N_NETWORK_NAME> and <YOUR_OLLAMA_CONTAINER_ID> with the values you found.
  4. Restart your containers: this ensures the connection takes effect.
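The four steps above can be sketched as one short script. The network and container names here are placeholders — substitute the values you found with `docker network ls` and `docker ps`. This requires a running Docker daemon.

```shell
# Assumed names — replace with your own:
N8N_NETWORK="coolify"        # the network your n8n container uses
OLLAMA_CONTAINER="ollama"    # your Ollama container name or ID

# Attach Ollama to n8n's network:
docker network connect "$N8N_NETWORK" "$OLLAMA_CONTAINER"

# Verify — the network's container list should now include Ollama:
docker network inspect "$N8N_NETWORK" --format '{{range .Containers}}{{.Name}} {{end}}'

# Restart so the connection takes effect:
docker restart "$OLLAMA_CONTAINER"
```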

Installing Your Desired Model 🧠

  1. Access your Ollama container:
   docker exec -it <YOUR_OLLAMA_CONTAINER_ID> ollama run llama3.1 
  • Replace <YOUR_OLLAMA_CONTAINER_ID> with your Ollama container ID.
  • Replace llama3.1 with your desired model (e.g., mixtral, llama2).
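Note that `ollama run` both downloads the model and drops you into an interactive chat prompt. If you only want to download the model for n8n to use, `ollama pull` does that without the interactive session. The container name `ollama` below is an assumption — use your own container ID.

```shell
# Download the model without opening an interactive chat:
docker exec ollama ollama pull llama3.1

# Confirm the model is installed and available:
docker exec ollama ollama list
```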

Unleashing the Power: Using Ollama in n8n 🚀

  1. Get your Ollama container’s IP address:
   docker inspect <YOUR_OLLAMA_CONTAINER_ID>
  • Look for the IPAddress field in the output.
  2. In n8n, add an “AI Agent” node.

  3. Select “Ollama Chat” as your chat model.

  4. Enter your Ollama Base URL:

  • Use the format: http://<YOUR_OLLAMA_CONTAINER_IP>:11434
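Before wiring it into n8n, it's worth smoke-testing the endpoint with a direct call to Ollama's `/api/generate` endpoint. The IP below is an assumption — use the `IPAddress` value from `docker inspect` (the `--format` flag pulls it out directly). Requires a running Ollama container.

```shell
# Extract just the container IP instead of scanning the full inspect output
# (assumed container name "ollama"):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' ollama

# Smoke test the API — assumed IP, replace with yours:
OLLAMA_URL="http://172.18.0.5:11434"
curl "$OLLAMA_URL/api/generate" \
  -d '{"model": "llama3.1", "prompt": "Say hello in five words.", "stream": false}'
```

Tip: on a user-defined Docker network, containers can usually resolve each other by name, so a Base URL like http://ollama:11434 (using your container's name) tends to be more robust than a raw IP, which can change across restarts.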

You did it! 🎉

You’ve unlocked a world of AI possibilities with locally run Ollama models in your n8n workflows! Now go build something amazing! 🤖
