Tired of paying for AI models? 😩 This is how to run them locally with n8n! 🤯
This is your one-stop shop for integrating powerful, open-source AI models like Llama 3.1 directly into your n8n workflows.
No more relying on expensive cloud services! 🚀
Setting the Stage 🎬
What you’ll need:
- A basic understanding of n8n (it’s like Zapier or Make, but better! 😉)
- A server with Docker installed (I recommend Coolify – super easy to use!)
Why this matters:
- Cost savings: 💰 Stop paying per API call; beyond your own server, the models themselves cost nothing to run.
- Privacy: 🔐 Keep your data on your own server, under your control.
- Customization: 🔧 Fine-tune models to your exact needs.
Installing Ollama: Your AI Powerhouse 🔌
- Get the Docker image:
docker pull ollama/ollama
- Run Ollama (CPU only):
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
The -v flag persists downloaded models in a named volume (so they survive container removal), and --name gives the container a predictable name.
- For Nvidia GPUs: Check Ollama’s documentation for specific instructions (https://hub.docker.com/r/ollama/ollama).
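For reference, at the time of writing the GPU command on that page looks like this (it assumes the NVIDIA Container Toolkit is already installed on your host):
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama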
Bridging the Gap: Connecting Ollama and n8n 🔗
Think of it like this: you need to get Ollama and n8n talking on the same network. 🗣️
- Find your n8n network:
docker network ls
- Look for the network your n8n container is using (you can find this in Coolify or Portainer too).
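If the list is long, you can filter it by name (assuming your network name actually contains “n8n”):
docker network ls --filter name=n8n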
- Find your Ollama container ID:
docker ps
- Connect Ollama to your n8n network:
docker network connect <YOUR_N8N_NETWORK_NAME> <YOUR_OLLAMA_CONTAINER_ID>
- Replace <YOUR_N8N_NETWORK_NAME> and <YOUR_OLLAMA_CONTAINER_ID> with the values you found.
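For example, with a hypothetical network named coolify and container ID a1b2c3d4, the command would be:
docker network connect coolify a1b2c3d4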
- Restart your containers: This ensures the connection takes effect.
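The simplest way is docker restart, using the same container IDs (or names) as before:
docker restart <YOUR_OLLAMA_CONTAINER_ID>
docker restart <YOUR_N8N_CONTAINER_ID>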
Installing Your Desired Model 🧠
- Run your model inside the Ollama container (the first run downloads it automatically):
docker exec -it <YOUR_OLLAMA_CONTAINER_ID> ollama run llama3.1
- Replace <YOUR_OLLAMA_CONTAINER_ID> with your Ollama container ID.
- Replace llama3.1 with your desired model (e.g., mixtral, llama2).
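If you’d rather download a model without opening an interactive chat session, ollama pull does the download only, and ollama list confirms what’s installed:
docker exec -it <YOUR_OLLAMA_CONTAINER_ID> ollama pull llama3.1
docker exec -it <YOUR_OLLAMA_CONTAINER_ID> ollama list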
Unleashing the Power: Using Ollama in n8n 🚀
- Get your Ollama container’s IP address:
docker inspect <YOUR_OLLAMA_CONTAINER_ID>
- Look for the IPAddress field in the output.
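The inspect output is a big JSON blob; if you want just the IP, a Go-template filter (purely a convenience) works too:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <YOUR_OLLAMA_CONTAINER_ID>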
- In n8n, add an “AI Agent” node.
- Select “Ollama Chat” as your chat model.
- Enter your Ollama Base URL, using the format:
http://<YOUR_OLLAMA_CONTAINER_IP>:11434
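Tip: before saving, you can sanity-check that Ollama is reachable. Its /api/tags endpoint lists installed models, so a quick curl from the host (or any container on the same network) should return JSON:
curl http://<YOUR_OLLAMA_CONTAINER_IP>:11434/api/tags
Also worth knowing: on a user-defined Docker network, containers can usually reach each other by container name, so a Base URL like http://<YOUR_OLLAMA_CONTAINER_NAME>:11434 often works too and survives IP changes across restarts.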
You did it! 🎉
You’ve unlocked a world of AI possibilities with locally run Ollama models in your n8n workflows! Now go build something amazing! 🤖
Resources 🧰
- Ollama Docker Hub: https://hub.docker.com/r/ollama/ollama – Get Ollama and explore different models.
- Ollama Website: https://ollama.com/ – Learn more about Ollama and its capabilities.
- Coolify: https://coolify.io/self-hosted – Simplify your Docker experience with this awesome tool.
- Portainer: https://www.portainer.io/ – Another great GUI for managing Docker containers.
- n8n: https://n8n.io/ – The ultimate workflow automation tool!