I have a Docker container that runs an app that...
Created on: February 20, 2026
I have a Docker container that runs an app that basically listens on port 80 over HTTP and responds to requests. If the app crashes, what happens with the container?
Here is my docker-compose.yml file:
```yaml
version: '3.8'

services:
  autoheal:
    image: willfarrell/autoheal:latest
    container_name: autoheal
    restart: always
    environment:
      - AUTOHEAL_CONTAINER_LABEL=all
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  hid-amico-interface:
    build: .
    container_name: hid-amico-interface
    ports:
      - "${HID_AMICO_PUSH_PORT:-8080}:8080"
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
    environment:
      # Override environment variables if needed
      - HID_AMICO_PUSH_PORT=${HID_AMICO_PUSH_PORT:-8080}
      - PG_HOST=${PG_DATABASE}
      - PG_USER=${PG_PORT:-5432}
      - PG_PASSWORD=${DKEY}
      - ENABLE_CONSOLE_LOG=${ENABLE_FILE_LOG:-true}
      - INTERVAL_MS=${CONTRACT_ID:-0}
      - RUN_LOOP=${RUN_LOOP:-true}
    volumes:
      # Mount logs directory to persist logs
      - ./logs:/app/logs
      # Optional: mount .env file if you want to modify it without rebuilding
      # - ./.env:/app/.env:ro
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "node", "-e", "const http = require('http'); const options = { host: 'localhost', port: process.env.HID_AMICO_PUSH_PORT || 8080, path: '/push?deviceId=healthcheck', timeout: 2000 }; const req = http.request(options, (res) => { process.exit(res.statusCode === 200 ? 0 : 1); }); req.on('error', () => process.exit(1)); req.end();"]
      interval: 3m
      timeout: 10s
      retries: 3
      start_period: 40s
```
In Docker, a container’s lifetime is tied to its PID 1 process (the main process started by the image/command). So what happens depends on what you mean by “app crashes”.
Case 1: if your Node app is the container's main process and it crashes/exits, then:
Because you have restart: unless-stopped, Docker will restart the container automatically (unless you explicitly stopped it with docker stop). So: app crash → container exits → Docker restarts it.
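If you want to see this behavior in isolation, here's a small throwaway demo (hypothetical container name, public alpine image) whose main process exits every few seconds and gets brought back by the restart policy:

```bash
# Throwaway demo: the main process exits after 5 seconds, so the container exits too
docker run -d --name crash-demo --restart unless-stopped alpine sh -c 'sleep 5; exit 1'

# RestartCount climbs each time Docker brings the container back
docker inspect --format '{{.State.Status}} restarts={{.RestartCount}}' crash-demo

# docker stop marks it "stopped by user", so it stays down until you start it again
docker stop crash-demo
docker rm crash-demo
```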
Case 2: sometimes the HTTP server stops responding, or the app is hung, but the Node process doesn't exit.
In that situation:
The healthcheck will start failing and the container becomes unhealthy. But restart policies trigger when the container exits, not when a healthcheck fails. This is where the autoheal service comes in: you're running willfarrell/autoheal, which can restart containers that are unhealthy, but only if they are labeled according to its config.
You set:
```yaml
environment:
  - AUTOHEAL_CONTAINER_LABEL=all
```
That means autoheal looks for containers with a label like autoheal=all (or similar depending on the image behavior). Your hid-amico-interface service currently does not have any labels, so autoheal likely won’t touch it even if it turns unhealthy.
So in Case 2, without the right label, you get: app stuck → container still running → healthcheck unhealthy → no restart (unless autoheal is correctly enabled for that container).
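You can see this state from the CLI; an unhealthy container still shows as running, just flagged:

```bash
# List containers whose healthcheck is currently failing
docker ps --filter health=unhealthy

# They appear as "Up ... (unhealthy)" - running, but nothing restarts them by default
```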
Add a label to the service so autoheal restarts it when unhealthy. Example:
```yaml
hid-amico-interface:
  # ...
  labels:
    - "autoheal=all"
```
(If you wanted a different selector than all, you’d match whatever you set in AUTOHEAL_CONTAINER_LABEL.)
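A quick way to verify the label is actually on the running container (assuming the key is autoheal, matching the example above):

```bash
# Prints the value of the "autoheal" label; empty output means the label is missing
docker inspect --format '{{index .Config.Labels "autoheal"}}' hid-amico-interface
```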
To check the container's status, restart count, and health:

```bash
docker ps -a
docker inspect hid-amico-interface --format='{{.State.Status}} {{.RestartCount}} {{.State.Health.Status}}'
```

To follow the logs:

```bash
docker logs -f hid-amico-interface
```
To summarize:

- Process exits → restart: unless-stopped restarts the container.
- Process hangs but stays alive → the container only becomes unhealthy; restart policies do nothing.
- To restart on unhealthy → you need something like autoheal, and you must label the target container appropriately.

If you tell me what "crash" looks like in your case (process exits vs stops responding), I can suggest the most reliable restart strategy (restart policy vs healthcheck+autoheal vs a supervisor).
The app crashes because of memory leaks
Got it — with memory leaks you usually see one of a few outcomes, and Docker behaves differently in each.
Node often dies with something like “JavaScript heap out of memory”, and the main process exits.
What happens:
The container exits, and because of restart: unless-stopped, Docker will restart it automatically. ✅ This is the “good” failure mode (self-healing via restart policy).
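If you want to watch this failure mode end to end, a hypothetical repro like the one below (throwaway name, public node image, tiny heap cap) leaks on purpose until Node aborts with the heap error, then gets restarted by the policy:

```bash
# Leak on purpose: keep pushing ~8 MB arrays until the 64 MB heap cap is hit
docker run -d --name heap-crash-demo --restart unless-stopped \
  -e NODE_OPTIONS=--max-old-space-size=64 \
  node:20-alpine node -e "const a = []; setInterval(() => a.push(new Array(1e6).fill('x')), 10);"

# After the crash you should see the heap error in the logs and a rising restart count
docker logs --tail 60 heap-crash-demo
docker inspect --format '{{.State.Status}} restarts={{.RestartCount}}' heap-crash-demo

# Clean up when done
docker rm -f heap-crash-demo
```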
If the container exceeds its memory limit, the Linux kernel may kill the process (OOM kill). That usually makes PID 1 die too.
What happens:
The container exits → restart: unless-stopped kicks in → the container restarts. You can confirm OOM kills with:
```bash
docker inspect hid-amico-interface --format='OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}'
```
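If you want to see a kernel OOM kill in isolation, a common trick (hypothetical container name, hard 64 MB limit) is to run something that allocates memory until the kernel kills it:

```bash
# tail /dev/zero buffers an endless "line" and eats memory until the kernel kills it
docker run --name oom-demo -m 64m --memory-swap 64m alpine tail /dev/zero
echo "exit code: $?"   # typically 137 = 128 + SIGKILL

# The OOM kill is recorded in the container state
docker inspect --format 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' oom-demo
docker rm oom-demo
```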
Sometimes memory pressure makes the app hang, stop responding, or become extremely slow, but the process technically stays alive.
What happens:
The healthcheck will likely start failing → the container becomes unhealthy (but keeps running). So in this mode, you only get automatic recovery if something external restarts unhealthy containers (like your autoheal).
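To watch those health transitions as they happen (rather than polling docker ps), you can stream Docker's events for that container:

```bash
# Watch for health_status events from the app container
docker events --filter container=hid-amico-interface --filter event=health_status
```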
Your autoheal container will only restart containers that match its label rule. You set:
```yaml
AUTOHEAL_CONTAINER_LABEL=all
```
That means your app container must have a matching label (commonly autoheal=all). Right now your hid-amico-interface service has no labels, so autoheal likely won’t restart it when it becomes unhealthy.
Add:
```yaml
hid-amico-interface:
  # ...
  labels:
    - "autoheal=all"
```
deploy.resources memory limits don’t apply in normal docker-composeThis is a big one.
In Compose (non-Swarm), the deploy: section is ignored, so your 512M limit probably is not enforced unless you are using Docker Swarm.
If you want an actual memory limit with docker compose, use mem_limit (supported in the classic compose implementation) or the newer equivalent depending on your setup.
Common practical option:
```yaml
hid-amico-interface:
  mem_limit: 512m
```
Then you’ll reliably get OOM kills → container exits → restart policy restarts it.
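Once the container is recreated with mem_limit, you can confirm the limit is really enforced (0 means no limit; 536870912 bytes is 512 MiB):

```bash
docker inspect --format 'Memory={{.HostConfig.Memory}}' hid-amico-interface
```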
You can also cap Node's own heap so a leak ends in a clean "heap out of memory" crash instead of creeping upward. (Example for a 512MB container; you leave headroom for native memory.)

```yaml
environment:
  - NODE_OPTIONS=--max-old-space-size=384
```
To see what actually happened after a crash:

```bash
docker ps
docker inspect hid-amico-interface --format='Status={{.State.Status}} Health={{.State.Health.Status}} Restart={{.RestartCount}} OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}'
docker logs --tail=200 hid-amico-interface
```
If you tell me whether you’re running plain docker compose or Swarm, I’ll give you the exact compose snippet to enforce memory limits correctly in your environment.
plain docker-compose
With plain docker-compose (non-Swarm), here’s what really happens with memory leaks in your setup.
First, deploy.resources memory limits are ignored: in normal docker-compose, the whole deploy: section does nothing. So right now your container likely has no memory limit, and a leak can grow until it hurts the host. That also means you may not get clean OOM kills/restarts when you expect them.
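You can see the current situation with docker stats: with no limit set, the LIMIT column is simply the host's total memory, which is why a leak can keep growing:

```bash
# One-shot snapshot of memory usage vs. limit for the app container
docker stats --no-stream hid-amico-interface
```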
If the leak makes the Node process exit (heap crash or kernel OOM kill), then thanks to restart: unless-stopped, Docker will restart the container. ✅ This is the case where your restart policy helps.
If instead the app wedges but the process stays alive, the healthcheck marks the container unhealthy, but plain docker-compose never restarts unhealthy containers on its own. So without a helper, it can sit there unhealthy forever.
Use mem_limit (compose v2 supports it in practice):
```yaml
hid-amico-interface:
  build: .
  container_name: hid-amico-interface
  ports:
    - "${HID_AMICO_PUSH_PORT:-8080}:8080"
  mem_limit: 512m
  restart: unless-stopped
  environment:
    - HID_AMICO_PUSH_PORT=${HID_AMICO_PUSH_PORT:-8080}
    - NODE_OPTIONS=--max-old-space-size=384
    # ... your other env vars ...
```
Why add NODE_OPTIONS?
It makes Node crash with a clear "heap out of memory" error once the heap passes the cap, so the process exits and Docker brings the container back via restart: unless-stopped. (For a 512MB container, 384 is a reasonable starting point to leave headroom for native memory, buffers, etc.)
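If you want to confirm the cap is actually picked up by the running process (assuming node is on the container's PATH and NODE_OPTIONS is inherited by docker exec), you can ask V8 for its heap limit:

```bash
# Should print a value roughly in line with the 384 MB cap
docker exec hid-amico-interface node -e "console.log(Math.round(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024) + ' MB')"
```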
Right now, your app service has no label, so autoheal likely won’t restart it.
Add:
```yaml
hid-amico-interface:
  # ...
  labels:
    - "autoheal=all"
```
Then if your healthcheck fails (app wedged), autoheal can restart it.
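You can confirm autoheal is actually doing something by following its logs; it should log a message when it restarts an unhealthy container:

```bash
docker logs -f autoheal
```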
Check whether OOM kills are happening and whether health is failing:
```bash
docker inspect hid-amico-interface --format \
  'Status={{.State.Status}} Health={{if .State.Health}}{{.State.Health.Status}}{{else}}none{{end}} Restart={{.RestartCount}} OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}'
```
- OOMKilled=true or ExitCode=137 usually indicates the container hit a memory kill.
- Health=unhealthy while Status=running means it's stuck and compose won't restart it by itself.

If you paste the docker inspect ... output after a failure, I can tell you exactly which failure mode you're hitting (Node heap crash vs kernel OOM vs "wedged but alive") and tune the limits accordingly.
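If the failure is rare, one simple approach (a minimal sketch, with a hypothetical log file name) is to append a status line periodically so you have a trail to look at after the next crash:

```bash
# Append memory usage and container state every 60 seconds
while true; do
  {
    date '+%F %T'
    docker stats --no-stream --format '{{.Name}} mem={{.MemUsage}}' hid-amico-interface
    docker inspect hid-amico-interface --format 'status={{.State.Status}} oom={{.State.OOMKilled}} exit={{.State.ExitCode}}'
  } >> container-health.log
  sleep 60
done
```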