Why I Split My Coolify Server in Two
My Coolify server kept dying during deployments. The fix was stupidly simple - separate the UI from the workload.
My Coolify server kept dying. Every time I tried to deploy something new or make a config change, the whole thing would lock up. CPU at 100%, can't even SSH in, just dead.
I blamed Docker because I don't know what else to blame and Docker is always a safe bet.
The Pattern
It was always the same: I'd be doing something in the Coolify UI while a deployment was running, and the server would just... give up. The UI runs on the same box as all the apps, and when Docker decides to eat all the RAM building some image, everything else suffers.
I kept upgrading the VPS. 2 cores became 4. 4GB RAM became 8. Then 16. Still dying.
The Idea
Rather than keep throwing money at a single server, I thought: what if I split them?
- Tiny VPS - Just runs the Coolify UI
- Beefy VPS - Runs all the actual applications
The UI doesn't need much. It's just a web interface. The apps need the resources.
Setting It Up
Created two Hetzner VPSs in the same private network:
- UI server: 2 cores, 4GB RAM (€4/month)
- App server: 4 cores, 16GB RAM (€15/month)
Installed Coolify on the tiny one, pointed my domain at it. Then added the beefy server as a "remote server" in Coolify's settings.
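For reference, the install step is just Coolify's documented one-liner, run on the UI server (check coolify.io/docs for the current command before piping anything into bash):

```shell
# On the small UI server only - installs Coolify and its dependencies.
# Review the script before running it; piping straight into bash is a trust decision.
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
```

The app server doesn't need this; as far as I can tell, Coolify sets up what it needs there over SSH once you add it as a remote server.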
Hit a snag immediately - Coolify created an SSH key but then couldn't connect to the app server. It was using the public IP instead of the private network IP. Switched to the private IP and the default key, connection established.
Network Bullshit
Then hit a weirder issue. The app server could reach most of the internet but couldn't connect to GitHub's container registry (ghcr.io). Deployments were failing because it couldn't pull Docker images.
Checked everything:
- Firewalls looked fine
- DNS was working
- Could ping github.com
- But ghcr.io specifically? Nope.
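That checklist is easy to script. Here's a minimal sketch of the reachability loop I was effectively running by hand (port 443 and the 5-second timeout are my choices, and `/dev/tcp` is a bash-ism, so run it with bash):

```shell
#!/usr/bin/env bash
# Check DNS resolution + TCP connect to a host using bash's /dev/tcp pseudo-device
reachable() {
  local host=$1 port=$2
  timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

for host in github.com ghcr.io; do
  if reachable "$host" 443; then
    echo "$host: reachable"
  else
    echo "$host: UNREACHABLE"
  fi
done
```

Seeing github.com succeed while ghcr.io timed out from the same box is what pointed me at a routing problem rather than DNS or firewall rules.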
Turns out Hetzner was having a partial network issue affecting specific routes. The UI server on the same network could reach GitHub fine. The app server couldn't.
I spent way too long on this before just deleting the app server and creating a new one. Worked immediately. Sometimes it's faster to start fresh than debug networking ghosts.
Was It Worth It?
Absolutely. The UI is responsive now. I can click around Coolify while deploys are running without everything grinding to a halt. When Docker decides to max out the CPU building images, it's on a separate box - the UI keeps working.
The setup is cleaner too:
- UI server handles orchestration and the web interface
- App server just runs containers
- If one dies, the other keeps going
Cost me maybe €5 more per month and saved hours of frustration.
If You're Doing This
A few tips:
- Put both servers in the same private network - way faster than going over the public internet
- Use private IPs for the connection between them
- Don't skimp on the app server - that's where your stuff actually runs
- The UI server can be tiny, genuinely doesn't need much
- Have backups sorted before you start migrating apps
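On the private-IP point: a tiny guard like this (my own helper, nothing to do with Coolify itself) would have caught my public-IP mistake before I saved the server entry:

```shell
# Return success only for RFC 1918 (private) IPv4 addresses
is_private_ip() {
  case "$1" in
    10.*|192.168.*)                        return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;
    *)                                     return 1 ;;
  esac
}

is_private_ip "10.0.0.2" && echo "private - safe to use"
is_private_ip "203.0.113.7" || echo "public - traffic would leave the private network"
```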