Coolify: Deploying your app with Docker Compose
It’s not a walk in the park, but it works
I’ve been trying Coolify over the last week or so, since I had to update some very old servers with very old codebases and Linux kernels. After standing them up (you can use Ansible, or even OpenTofu or Nix if you have started to despise Red Hat), the next step was having a “server framework” that could store and deploy these apps in isolation through Docker, instead of making me do the manual work each time I wanted to deploy a new version.
The isolation is the key part. There was a time when each app got its own VM, mostly because the dependencies and libraries were different, but containerization fixed that problem a long time ago.
After asking around for some direction, I ended up with Coolify, a project built with Laravel that manages “projects” deployed from a given source, like a private git repository or a Docker Compose stack. It’s not something that will rival Kubernetes, Nomad, or Docker Swarm, to name a few, but it’s great when you have web projects that are static, simple, single binaries, or require some Docker magic to be built.
Docker Compose plays its own games
The way Coolify deploys an app based on Docker Compose is relatively simple. It spins up a “coolify-helper” container that fetches the code from a git repository and calls docker compose to build the images and start the services.
Coolify does a lot of “magic” behind the scenes. Most importantly, it parses the docker-compose.yml file and adds extra properties: for example, labels to let Traefik discover the service, volumes set to persist between rolling releases, and some other environment values.
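I haven’t inspected the exact output, but the additions are roughly in this shape; the label names below are standard Traefik ones, and the domain is hypothetical:

# Illustrative sketch of the kind of additions Coolify generates
services:
  app:
    labels:
      - traefik.enable=true                                    # expose through the proxy
      - traefik.http.routers.app.rule=Host(`app.example.com`)  # hypothetical domain
    environment:
      COOLIFY_URL: https://app.example.com  # injected, consumed by APP_URL below
    volumes:
      - app_storage:/app/storage  # named volumes survive rolling releases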
The problem with this is deployment. The helper container brings the code in, but once it’s done, the container is gone, so you won’t be able to mount the code as a volume from the host. In other words, you need to copy your app inside one of the containers at build time (usually as the last step).
Also, if you need to copy data from the root of your project, your Docker Compose file should be at that location. Docker builds can only access files below their context, so you can’t bring in files from outside it (a security feature).
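As a sketch of that restriction, this is the layout the example below assumes:

example/
├── docker-compose.yml    # the build context resolves to this folder
├── docker/
│   └── app/
│       └── Dockerfile    # can COPY anything under example/
└── src/

If the Compose file (and thus the context) lived inside docker/app/ instead, the Dockerfile could not copy files from the project root; Docker rejects any path outside the build context.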
A practical example
Imagine I have a Node application that requires a Valkey instance. I can declare both in a Docker Compose file. This is not new, but there is a small piece of magic in the build key.
# /home/developer/projects/example/docker-compose.yml
version: "3"

services:
  app:
    build:
      context: .
      dockerfile: docker/app/Dockerfile
      target: deployment
    image: app:latest
    environment:
      APP_URL: $COOLIFY_URL
    volumes:
      - app_storage:/app/storage
    expose:
      - "8080"
    depends_on:
      - db

  db:
    image: valkey/valkey
    # ... other things

volumes:
  app_storage:
Notice the context key is the root folder where this docker-compose.yml file is. From there, the dockerfile key points to the Dockerfile used to build the image, located at docker/app/Dockerfile. A good practice is to keep Docker-related files in their own directory.
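That build configuration is roughly equivalent to running this from the project root, using standard docker build flags:

docker build \
  --file docker/app/Dockerfile \
  --target deployment \
  --tag app:latest \
  .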
The contents of the Dockerfile follow the multi-stage approach. This allows us to set the “target”, the stage of the multi-stage build we want to bring up as an image. If we name a stage “deployment”, we can use the target key to select that stage declared in the Dockerfile.
That Dockerfile is something like this:
###########################################################
# Base image
###########################################################
FROM node:22-alpine AS base

WORKDIR /app

###########################################################
# Development image
###########################################################
FROM base AS development

# Allow the developer to set their own UID and GID
ARG UID=1000
ARG GID=1000

# Create the web user so it matches the developer's one.
# Alpine ships neither groupadd/useradd nor bash, so install
# them from the "shadow" package and use /bin/sh as the shell;
# -o allows reusing UID/GID 1000, which the official Node
# images already assign to their built-in "node" user.
RUN apk add --no-cache shadow && \
    groupadd -g $GID -o web && \
    useradd -m -u $UID -g $GID -o -s /bin/sh web

# Configure some debug flags here and there
ENV APP_DEBUG=true

# Become the web user
USER web

# Run the app as a self-contained server
ENTRYPOINT ["npm", "run", "dev-server", "--", "--foreground"]

###########################################################
# Deployment image
###########################################################
FROM base AS deployment

# Create the web user as 1000:1000
RUN apk add --no-cache shadow && \
    groupadd -g 1000 -o web && \
    useradd -m -u 1000 -g 1000 -o -s /bin/sh web

# Copy the application into the container as the "web" user
COPY --chown=1000:1000 . /app

# Become the web user
USER web

# Install the dependencies and build the app
RUN npm ci && npm run production

# Run the production server
ENTRYPOINT ["npm", "run", "server", "--", "--foreground"]
This Dockerfile is very oversimplified, but you can see the build stages being based on another stage, the base image.
Developers can use this container by pointing the target at the development stage image, while on deployment (through the docker-compose.yml file) we point it at the deployment stage image in the build configuration. No need to use different files when both environments are based on the same image.
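For the development side, here is a sketch of how that selection could look, assuming a docker-compose.override.yml file, which docker compose merges automatically when present; the bind mount and build args are my own illustration:

# docker-compose.override.yml
services:
  app:
    build:
      target: development    # use the development stage instead
      args:
        UID: 1000            # match your local id -u
        GID: 1000            # match your local id -g
    volumes:
      - .:/app               # bind-mount the source for live editing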
Notice the deployment stage image is what copies the project files and builds the application, instead of using a bind mount in the Docker Compose file. This is because the code retrieved from the source is ephemeral, and the only thing that will remain is the container itself.