The claim sounds neat: Docker Compose is for local development, and production should use Kubernetes, Helm, and Argo CD. It is also incomplete.
The more honest answer is this:
- Docker Compose is excellent for development
- Docker Compose can also be a reasonable production choice on a simple single server
- Kubernetes becomes more attractive when your system is larger, more distributed, or needs stronger operational guarantees
- Helm and Argo CD are not alternatives to Docker Compose; they are layers that become useful when you are already operating in Kubernetes
That distinction matters because teams often over-upgrade too early. They trade a working deployment model for an impressive one, then spend months paying an operational tax they did not need.
This article was researched and audited against primary documentation on March 31, 2026.
TL;DR
- Docker’s own documentation explicitly says Compose works across development, testing, CI, and production environments, so “Compose is only for local dev” is not an official limitation.
- Compose is especially strong when one machine is enough: a web app, API, worker, reverse proxy, and database on a single server can be perfectly reasonable with good backups, health checks, pinned images, restart policies, and monitoring.
- Kubernetes is better when you need cluster-level behavior: multi-node scheduling, self-healing across nodes, service discovery across changing pods, autoscaling, rolling updates, and higher availability patterns.
- Helm is a packaging and templating layer for Kubernetes resources. It helps you manage charts, defaults, and environment-specific values once you are already on Kubernetes.
- Argo CD is a GitOps deployment tool for Kubernetes. It continuously compares what is in Git with what is running in the cluster and syncs drift.
- The right question is not “Which tool is more production-grade?” It is “How much operational complexity does this system actually need today?”
What you’ll learn here
- Why the “Compose is only for dev” claim exists in the first place
- Where Docker Compose is genuinely strong in production
- Where Kubernetes becomes the better fit
- What Helm adds on top of Kubernetes
- What Argo CD adds on top of Helm and manifests
- Common myths versus reality
- Common issues when one project uses Compose for development and Kubernetes for production
- A practical decision framework for teams choosing between these paths
The short answer: this is really two different deployment philosophies
At a high level, these tools solve different problems:
| Tool | Core job | Best fit |
|---|---|---|
| Docker Compose | Define and run a multi-container application from one model | Local development, ephemeral environments, and simple single-host production |
| Kubernetes | Orchestrate containers across nodes while maintaining desired state | Complex, multi-service, highly available, or multi-node production |
| Helm | Package Kubernetes resources and parameterize them | Teams already deploying to Kubernetes |
| Argo CD | Continuously reconcile Kubernetes state from Git | Teams using GitOps on Kubernetes |
This means the original claim mixes up application definition with cluster orchestration and GitOps delivery.
Compose answers, “What containers make up this app, and how do they run together?”
Kubernetes answers, “How do I operate this app reliably across a cluster?”
Helm answers, “How do I package and configure those Kubernetes resources cleanly?”
Argo CD answers, “How do I make Git the source of truth for what runs in the cluster?”
That stack makes sense together, but only if your problem actually requires the stack.
Why people say “Compose is only for local development”
The phrase usually comes from a real observation, just pushed too far.
Compose is fantastic for development because it is:
- fast to understand
- fast to run
- close to the application code
- good at spinning up dependent services like databases, caches, queues, and local reverse proxies
Docker’s documentation leans into this strength, especially around local workflows and Compose-based app definitions. That leads many engineers to mentally file Compose under “developer tooling.”
But Docker’s official docs also say Compose works across environments, including production. Docker also documents how to configure Compose applications for production and even provides Compose Bridge as a path for transforming Compose applications toward Kubernetes manifests.
So the more accurate reading is:
- Compose is developer-friendly first
- Compose is not limited to development only
- Compose is not a full replacement for a cluster orchestrator when your production needs become cluster-shaped
That last point is the real issue.
Where Docker Compose is a very good production choice
If your production environment is a single server, Compose can be pragmatic, clear, and sufficient.
Think about systems like these:
- an internal dashboard for one company
- an early-stage SaaS with one API, one worker, PostgreSQL, and Redis
- a marketing site plus CMS backend
- a small B2B integration service with modest traffic
- a side business or solo product where operational simplicity matters more than theoretical scalability
In those cases, Compose gives you several practical advantages:
1. Simpler operations
A single `compose.yaml` is often easier to understand than a set of Deployments, Services, Ingresses, ConfigMaps, Secrets, HPAs, RBAC rules, and Helm values files.
That simplicity is not trivial. It affects:
- onboarding time
- debugging speed
- incident response
- how many moving parts can fail
2. Lower operational tax
Kubernetes solves real problems, but it also introduces real cost:
- cluster provisioning and upgrades
- ingress controllers
- storage classes
- secret management patterns
- observability setup
- RBAC and policy layers
- networking complexity
- more YAML and more abstractions
If you do not need those capabilities yet, you may just be buying ceremony.
3. Good enough isolation and repeatability
For many small production systems, the big win is not multi-node orchestration. It is simply:
- consistent container images
- reproducible startup
- stable networking between services
- explicit environment variables and volumes
- easy restarts after host reboot
Compose provides those things well.
4. A clean path from local to simple production
Using the same application model in local development and on a production host can reduce accidental drift.
Here is a realistic single-server example:
```yaml
services:
  app:
    image: ghcr.io/acme/myapp:2026-03-31.1
    restart: unless-stopped
    env_file:
      - .env.production
    depends_on:
      - db
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 5

  worker:
    image: ghcr.io/acme/myapp:2026-03-31.1
    command: ["node", "worker.js"]
    restart: unless-stopped
    env_file:
      - .env.production
    depends_on:
      - db
      - redis

  db:
    image: postgres:16
    restart: unless-stopped
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

  redis:
    image: redis:7
    restart: unless-stopped

  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    depends_on:
      - app

volumes:
  postgres_data:
```
That setup is not “toy infrastructure.” It can run a real business if:
- traffic is moderate
- downtime tolerance is reasonable
- failover across multiple nodes is not a hard requirement
- the team can manage backups, logs, alerts, and host maintenance competently
The caveat is important: single-server Compose is not the same thing as highly available infrastructure. If the host dies, your app is down until that host returns or you restore elsewhere. For many products, especially early on, that is an acceptable trade. For others, it is not.
The hidden boundary: when production stops looking like one machine
This is where Kubernetes starts to win clearly.
Once your system needs to behave like a cluster, Compose becomes progressively more manual and brittle.
Typical signals:
- you need multiple nodes
- you need higher availability than one server can provide
- you want rolling updates across many replicas
- you need service discovery across changing workloads
- you want automatic rescheduling after node failure
- different services need independent scaling behavior
- you operate many microservices across several environments
- different teams need a shared operational platform
That is exactly the kind of environment Kubernetes was built for.
What Kubernetes gives you that Compose does not give you as naturally
Kubernetes is not just “more YAML.” It is a control plane for distributed systems.
From the official docs, Kubernetes is designed to manage containerized workloads and services while maintaining desired state. The current Kubernetes docs also explicitly describe self-healing behavior such as:
- restarting failed containers
- replacing failed pods
- rescheduling workloads when nodes fail
- removing unhealthy pods from Service endpoints
That is a different operational model than “run these containers on this host.”
A simple mental model
```text
Docker Compose
  -> one machine
  -> start containers together
  -> restart containers on that host

Kubernetes
  -> many machines
  -> schedule workloads across nodes
  -> maintain desired replicas
  -> route traffic to healthy pods
  -> replace failed workloads automatically
```
Why that matters in real production
Imagine an API that needs six replicas across three nodes.
With Kubernetes, the platform already gives you first-class concepts for:
- desired replica count
- service discovery
- rolling updates
- readiness gating
- autoscaling
- node-aware rescheduling
With Compose, once you move beyond a single host, you start assembling those guarantees from outside the tool or changing platforms entirely.
That does not make Compose “bad.” It just means the problem has changed.
Helm is not a replacement for Compose
This is one of the most common sources of confusion.
Helm is not an orchestrator. Helm is a packaging system for Kubernetes.
A helpful way to think about it:
```text
Compose
  -> app definition for Docker environments

Kubernetes manifests
  -> raw runtime definition for Kubernetes

Helm
  -> packaging + templating + versioning layer over Kubernetes manifests
```
Helm becomes useful when you already know:
- you are deploying to Kubernetes
- you want reusable charts
- you need clean environment-specific configuration
For example, a Helm chart can package:
- a Deployment
- a Service
- an Ingress
- ConfigMaps and Secrets references
- autoscaling configuration
- probes and resource requests
With different values for staging and production.
Example:
```yaml
# values.yaml
image:
  repository: ghcr.io/acme/myapp
  tag: "2026-03-31.1"

replicaCount: 3

resources:
  requests:
    cpu: "250m"
    memory: "512Mi"
  limits:
    cpu: "1"
    memory: "1Gi"
```
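Those values only matter because a chart template consumes them. The excerpt below is a hypothetical sketch of such a template, not taken from any particular chart; the file path and resource names are illustrative:

```yaml
# templates/deployment.yaml (excerpt; chart layout is hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: myapp
          # image assembled from values.yaml, overridable per environment
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
```

Running `helm install` or `helm upgrade` with a different values file renders the same template into environment-specific manifests.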
That is valuable when the platform itself is Kubernetes. It is unnecessary if your whole production system is one server with Compose.
Argo CD is also not a replacement for Compose
Argo CD is even further downstream.
Argo CD’s job is not to define your application architecture. Its job is to reconcile Kubernetes state from Git.
According to the official docs, Argo CD follows the GitOps pattern by using Git as the source of truth for desired application state, then automates deployment of that desired state into target environments. Its automated sync model means CI can push a Git change instead of holding direct deployment credentials to the cluster.
That is powerful, especially for teams that want:
- pull-based deployments
- auditable environment changes
- cluster drift detection
- safer separation between CI and cluster credentials
- clear deployment history in Git
But again, Argo CD matters after you are already in Kubernetes land.
If your app runs on one virtual machine with Compose, Argo CD is not the missing piece.
A realistic architecture comparison
Option A: simple single-server production with Compose
```text
Developer push
      |
      v
CI builds image
      |
      v
Registry
      |
      v
SSH / pull / restart on one server
      |
      v
Docker host
  -> reverse proxy
  -> app
  -> worker
  -> database
  -> cache
```
This is often enough for a surprisingly long time.
Option B: cluster-based production with Kubernetes, Helm, and Argo CD
```text
Developer push
      |
      v
CI builds image
      |
      v
Registry + manifest update in Git
      |
      v
Argo CD detects Git change
      |
      v
Helm renders chart values for environment
      |
      v
Kubernetes reconciles desired state
  -> Deployments
  -> Services
  -> Ingress
  -> Autoscaling
  -> multi-node scheduling
```
This is better when you need stronger guarantees, more services, more environments, or more team-scale coordination.
Myths vs reality
| Myth | Reality |
|---|---|
| Compose is only for local development. | Docker’s docs explicitly describe Compose across environments, including production. |
| If it is production, Kubernetes is automatically the right answer. | Only if your production needs are cluster-shaped enough to justify the operational cost. |
| Helm replaces Compose. | Helm packages Kubernetes manifests. It solves a different problem. |
| Argo CD replaces CI/CD. | Argo CD complements CI by reconciling Kubernetes state from Git. CI still builds, tests, and usually publishes artifacts. |
| Using Compose in production is unprofessional. | Using a simpler tool for a simpler system is often the more professional choice. |
| Kubernetes always reduces ops work. | Kubernetes automates many things, but it also creates platform work that smaller teams may not need yet. |
Trade-offs in plain English
Here is the practical trade:
Compose optimizes for simplicity
You give up some cluster-grade guarantees, but you gain:
- lower cognitive load
- faster onboarding
- faster debugging
- cheaper infrastructure
- fewer abstractions between the app and the operator
Kubernetes optimizes for control and resilience at scale
You gain:
- declarative desired state
- cluster scheduling
- self-healing behavior
- service abstractions for dynamic workloads
- horizontal scaling patterns
- stronger building blocks for availability
But you pay with:
- more infrastructure
- more platform concepts
- more configuration surface
- more ways to misconfigure the system
This is why “best practice” without context becomes expensive.
Real-world scenarios
Scenario 1: internal business application
You have:
- 1 API
- 1 PostgreSQL instance
- 1 queue worker
- maybe Redis
- a small team
- moderate traffic
Recommendation:
- start with Docker Compose on a well-managed server
- add backups, alerts, image pinning, health checks, TLS, and host hardening
Why:
- the operational overhead of Kubernetes is probably not justified yet
Scenario 2: early-stage SaaS with growing traffic
You have:
- one customer-facing API
- background jobs
- staging and production
- some uptime expectations
- likely future need for horizontal scale
Recommendation:
- Compose can still work early
- move to Kubernetes when scaling pressure, team count, or availability requirements make single-host operations risky
Why:
- this is the gray zone where good teams can reasonably choose either path
Scenario 3: multi-service platform with several teams
You have:
- many services
- multiple environments
- independent deploy cadences
- stronger security boundaries
- rolling upgrade expectations
- node failures you want the platform to absorb
Recommendation:
- Kubernetes + Helm
- Argo CD if you want GitOps and in-cluster reconciliation
Why:
- the platform problem is now real, and Kubernetes is designed for it
Scenario 4: regulated or high-change environment
You have:
- strong audit needs
- frequent environment changes
- desire for Git-based approvals and drift control
Recommendation:
- Kubernetes with Helm and Argo CD
Why:
- GitOps starts paying back much more clearly here
Common issues when one project uses Compose for development and Kubernetes for production
This is a very common setup:
- Docker Compose for local development
- Kubernetes for staging or production
There is nothing wrong with that split. In fact, it is often a healthy operating model. But it creates a subtle risk: teams start assuming that “working in Compose” means “ready for Kubernetes.”
The tricky part is that Compose and Kubernetes reward different habits.
- Compose is optimized for fast local feedback on one machine
- Kubernetes is optimized for replicated, dynamic workloads running across a cluster
That means some problems do not show up until you cross the boundary.
1. Dependency startup works locally, then fails in the cluster
This is one of the most common surprises.
Docker’s startup-order docs are explicit: Compose starts services in dependency order, but it does not wait until a dependency is actually ready unless you use health-aware conditions. So a local stack can already be fragile if the app starts before the database can really accept connections.
Kubernetes is stricter in a different way: separate Deployments do not give you a simple global startup order. The platform expects your application to tolerate retries, readiness delays, and temporarily unavailable dependencies.
Typical symptom:
- the app starts fine in `docker compose up`
- the Kubernetes Pod starts, but immediately throws connection errors to Postgres, Redis, or another API
What usually helps:
- use Compose health checks locally, not just `depends_on`
- in Kubernetes, use `startupProbe` and `readinessProbe` correctly
- if a Pod truly must wait for something, use an `initContainer` for dependency checks
- still make the app retry connections gracefully instead of assuming perfect startup order
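On the Compose side, the health-aware condition looks roughly like this. The sketch below uses illustrative service names and the standard `pg_isready` check shipped in the Postgres image:

```yaml
# compose.yaml sketch: gate app startup on a real readiness signal,
# not just container start order (service names are illustrative)
services:
  app:
    image: ghcr.io/acme/myapp:latest
    depends_on:
      db:
        condition: service_healthy   # wait until db's healthcheck passes

  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
```

This makes the local stack behave a little more like the cluster: the app only starts once the database actually accepts connections.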
How to troubleshoot:
- run `docker compose logs` and confirm whether the dependency is merely running or actually ready
- run `kubectl describe pod <pod>` and inspect probe failures and Events
- run `kubectl logs <pod>` and look for connection refused, timeout, or DNS failures
2. Migrations run during app startup and create race conditions
This is a sharp edge worth calling out, and it is a very real one.
A lot of projects start with an entrypoint like this:
```shell
./migrate-db && ./start-app
```
In Compose, that often works because there may only be one app container running.
In Kubernetes, a Deployment with 2 or more replicas can turn that pattern into a race:
```text
Pod A starts
  -> runs migration
Pod B starts
  -> runs the same migration

Result
  -> lock contention
  -> duplicate DDL
  -> partial rollout
  -> CrashLoopBackOff or stuck deployment
```
This is where teams often reach for an `initContainer`. That can help with setup that should happen once per Pod, but it does not make the migration a one-time cluster-wide action. If you have 3 replicas, the init container still runs 3 times.
The safer patterns are usually:
- a dedicated Kubernetes Job for schema migration
- a CI/CD step that runs migrations before the Deployment rollout
- a Helm hook or Argo CD sync hook when your process is already built around those tools
- migration tooling that is idempotent and uses database-level locking where possible
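A dedicated migration Job might look roughly like the sketch below. The image, command, and Secret name are placeholders, not conventions from any specific project:

```yaml
# migration as a one-time, cluster-wide Job, run before the app rollout
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-migrate
spec:
  backoffLimit: 2               # retry a couple of times, then fail the pipeline
  template:
    spec:
      restartPolicy: Never      # Jobs must not restart containers in place
      containers:
        - name: migrate
          image: ghcr.io/acme/myapp:2026-03-31.1
          command: ["./migrate-db"]        # placeholder migration entrypoint
          envFrom:
            - secretRef:
                name: app-db-credentials   # hypothetical Secret name
```

CI can apply this Job, wait for it to complete, and only then roll out the Deployment, which removes the per-replica race entirely.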
The operational idea is simple:
```text
Build image
     |
     v
Run one migration task
     |
     v
Only then roll out app replicas
```
How to troubleshoot:
- look for repeated migration attempts in multiple Pod logs
- check for database lock timeouts, “relation already exists”, or concurrent DDL errors
- inspect rollout status with `kubectl rollout status deployment/<name>`
- if using a Job, inspect it directly with `kubectl get jobs` and `kubectl logs job/<name>`
3. Local bind mounts hide problems that the real image still has
Compose-based development often mounts source code, config files, and generated assets from the host machine.
That is great for iteration speed, but it can hide production problems such as:
- the built image is missing a file
- the container expects a writable path that does not exist
- a script works locally only because it is mounted from the repo
Then the same app reaches Kubernetes and fails even though it “worked in Docker.”
What usually helps:
- test the image locally without bind mounts before promoting it
- treat the image as the deployable artifact, not the local filesystem
- make writable paths explicit
- if data must persist, use a proper persistent volume or external storage, not the container filesystem
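Compose's override convention is one way to keep dev-time bind mounts out of the definition you promote. The split below is a sketch with illustrative paths:

```yaml
# compose.override.yaml -- dev-only bind mounts (paths are illustrative)
#
# `docker compose up` merges this file over compose.yaml automatically,
# while `docker compose -f compose.yaml up` runs the image exactly as built,
# which is a cheap local approximation of what Kubernetes will see.
services:
  app:
    volumes:
      - ./src:/app/src
```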
How to troubleshoot:
- compare local Compose behavior with a plain `docker run` or a Compose override that removes bind mounts
- inspect the image contents directly
- in Kubernetes, check whether the failing path should come from the image, a Secret, a ConfigMap, or a PersistentVolumeClaim
4. Networking assumptions drift between Compose and Kubernetes
Compose gives every service a name on the app network. Docker's networking docs show that services can discover each other by service name, such as `db`.
Kubernetes also supports service discovery, but the abstraction is different:
- Pods are ephemeral
- Services provide stable access to changing backends
- DNS and namespace boundaries matter more
This creates a few common mistakes:
- using `localhost` for another service
- hardcoding Pod IPs
- assuming `db` in local dev means the same thing as production DNS in another namespace
- relying on environment-injected service variables instead of DNS and explicit config
What usually helps:
- make connection settings explicit: `DATABASE_HOST`, `REDIS_URL`, `API_BASE_URL`
- connect to Kubernetes Services, not Pod IPs
- prefer DNS-based service discovery
- keep environment names consistent where you can, even if the underlying platform differs
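In a Deployment, that explicitness can look like the fragment below. The Service names, namespace, and variable names are illustrative:

```yaml
# container env wired to Service DNS names, never Pod IPs (values illustrative)
env:
  - name: DATABASE_HOST
    # <service>.<namespace>.svc.cluster.local is the stable in-cluster name
    value: "db.myapp-prod.svc.cluster.local"
  - name: REDIS_URL
    value: "redis://redis.myapp-prod.svc.cluster.local:6379"
```

Locally, the same variables can simply point at the Compose service names, so the application code never changes between environments.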
How to troubleshoot:
- run `kubectl get svc` to confirm the Service really exists
- run `kubectl exec <pod> -- nslookup <service-name>` to test DNS resolution
- compare local Compose service names with Kubernetes Service names and namespaces
5. Secrets and config handling drift between environments
Compose often starts with `.env`, `env_file`, and maybe a mounted config file.
Kubernetes usually pushes teams toward:
- Secrets
- ConfigMaps
- mounted files
- environment injection from cluster resources
The failure mode is not always dramatic. Sometimes the app starts, but with the wrong configuration shape:
- a Secret name is wrong
- a ConfigMap key is missing
- a value that exists locally was never created in the cluster
- a config file path differs between environments
What usually helps:
- define a clear runtime contract for required environment variables and files
- keep production configuration explicit in Git or in your external secret system
- avoid relying on undocumented local `.env` behavior
- validate configuration on startup and fail clearly
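On the Kubernetes side, referencing Secrets explicitly makes a missing value fail loudly at Pod startup rather than at first use. A sketch with hypothetical names:

```yaml
# env drawn from a named Secret (names are illustrative); if the Secret or key
# does not exist in the namespace, the Pod fails with CreateContainerConfigError
# instead of starting with a silently wrong configuration
env:
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-secrets
        key: database-password
```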
How to troubleshoot:
- use `docker compose config` to inspect the effective local config
- use `kubectl describe pod <pod>` to inspect referenced config and recent Events
- verify that required Secrets and ConfigMaps exist in the correct namespace
6. Probes and resource limits turn a healthy app into a restart loop
Compose usually does not subject your app to Kubernetes-style liveness, readiness, startup, CPU, and memory constraints.
So an app that looks fine locally may fail in cluster because:
- startup is slower than the probe allows
- readiness is checked before caches or models are loaded
- memory limits are too low
- CPU throttling makes warmup much slower than expected
This is especially common for:
- applications with heavy startup
- apps that run migrations or cache warmup on boot
- AI services loading models or large dependencies
What usually helps:
- use `startupProbe` for slow boot sequences
- use `readinessProbe` to gate traffic until the app is actually ready
- use `livenessProbe` more conservatively
- set resource requests and limits based on real measurements, not guesses
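Put together, a slow-booting container might be configured roughly like this. All thresholds and values below are illustrative starting points, not recommendations:

```yaml
# probe and resource sketch for a slow-starting container (values illustrative)
containers:
  - name: app
    image: ghcr.io/acme/myapp:latest
    startupProbe:                  # tolerates up to 30 x 10s = 5 min of boot time
      httpGet:
        path: /health
        port: 3000
      failureThreshold: 30
      periodSeconds: 10
    readinessProbe:                # gates traffic until the app is really ready
      httpGet:
        path: /health
        port: 3000
      periodSeconds: 10
    resources:
      requests:
        cpu: "250m"
        memory: "512Mi"
      limits:
        memory: "1Gi"
```

The `startupProbe` disables the other probes until it succeeds, which is exactly what heavy-startup apps need to avoid being killed mid-boot.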
How to troubleshoot:
- run `kubectl describe pod <pod>` and check for probe failures and `OOMKilled`
- inspect restart counts and Events
- compare app startup time locally versus in-cluster under actual limits
7. Local state works on one container, then breaks with replicas
Single-host Compose setups can accidentally normalize patterns like:
- writing uploads to the container filesystem
- storing sessions in memory
- caching important state on one instance
Those shortcuts can survive for a while on one machine. They break much faster in Kubernetes when multiple replicas are serving traffic.
Symptoms:
- one Pod sees a file that another Pod does not
- sessions disappear when requests land on another replica
- background jobs behave inconsistently across Pods
What usually helps:
- move shared state to the database, Redis, object storage, or a correctly designed persistent volume
- treat containers as replaceable
- assume any single Pod can disappear
How to troubleshoot:
- trace where state is actually written
- check whether that state must be shared across replicas
- verify whether the application is depending on local disk or process memory in ways that only worked on one host
The practical lesson
Using Compose for development and Kubernetes for production is not a smell. It is often the right split.
The real discipline is to avoid letting local assumptions silently become production assumptions.
If I were hardening one project across both environments, I would check these first:
- app startup and dependency retry behavior
- one-time migrations versus per-Pod initialization
- image correctness without bind mounts
- service discovery and DNS expectations
- config and secret contract completeness
- probe settings and resource budgets
- statefulness hidden inside local filesystems or process memory
A practical decision framework
If you are deciding today, ask these questions in order.
1. Can one server handle the workload and failure model?
If yes, Compose stays on the table.
If no, you are moving toward Kubernetes territory.
2. Do you need multi-node scheduling or automatic rescheduling after node loss?
If yes, Kubernetes is the stronger fit.
3. Do you need many replicas, controlled rollouts, or autoscaling?
If yes, Kubernetes is usually worth the added complexity.
4. Are you operating many services across teams and environments?
If yes, Kubernetes usually provides better long-term structure.
5. Do you specifically want GitOps, drift detection, and pull-based delivery?
If yes, that points to Argo CD, but only once Kubernetes is already your platform.
6. Do you need chart packaging and environment-specific manifest reuse?
If yes, that points to Helm, again only once Kubernetes is in the picture.
In short:
```text
Single host + modest complexity
  -> Compose is reasonable

Multiple nodes + higher availability + many services
  -> Kubernetes becomes reasonable

Kubernetes + reusable packaging
  -> add Helm

Kubernetes + Git as deployment source of truth
  -> add Argo CD
```
So, what should most teams actually do?
My recommendation is intentionally boring:
- start with Compose when your product and team are still simple
- adopt Kubernetes when your operational needs clearly justify it
- adopt Helm when raw Kubernetes manifests start becoming repetitive and hard to manage
- adopt Argo CD when GitOps and declarative reconciliation are worth the extra machinery
That sequence is usually healthier than starting with the full stack because it is fashionable.
There is also a useful strategic point here: premature platform complexity is a form of technical debt. It just looks more sophisticated than other kinds of debt.
Final conclusion
The original claim gets one thing right: for complex production systems, Kubernetes is often the better platform, and Helm plus Argo CD can make that platform much more manageable.
But it gets the first half wrong by being too absolute.
Docker Compose is not “only for local development.” It is a valid operational choice for a class of production systems, especially when:
- one server is enough
- the team values simplicity
- the architecture is still compact
- uptime requirements are meaningful but not cluster-grade
Kubernetes is not “more correct.” It is more capable and more expensive operationally. That trade is worth it when the system is large enough, critical enough, or distributed enough.
The right deployment tool is not the one with the biggest ecosystem. It is the one that matches your current operational reality without cornering your future.
If you already know your system is headed toward a full GitOps Kubernetes setup, you might also like From Repo to Cluster: An End-to-End Deployment Pipeline for AI SaaS.
Sources
Verified on March 31, 2026 against primary documentation:
- Docker Docs: Docker Compose overview
- Docker Docs: Use Compose in production
- Docker Docs: Control startup order in Compose
- Docker Docs: Networking in Compose
- Docker Docs: Compose Bridge
- Docker Docs: Compose Deploy Specification
- Kubernetes Docs: What is Kubernetes
- Kubernetes Docs: Deployment
- Kubernetes Docs: Job
- Kubernetes Docs: Init Containers
- Kubernetes Docs: Service
- Kubernetes Docs: Horizontal Pod Autoscaling
- Kubernetes Docs: Kubernetes Self-Healing
- Kubernetes Docs: Configure Liveness, Readiness, and Startup Probes
- Kubernetes Docs: Persistent Volumes
- Helm Docs: Helm home
- Helm Docs: Charts
- Argo CD Docs: Overview
- Argo CD Docs: Automated Sync Policy
A few judgments in this article are interpretations of those docs rather than direct claims from the vendors. In particular, the recommendation to keep Compose for simple single-server production and move to Kubernetes when the system becomes multi-node, highly available, or team-complex is an engineering inference based on the capabilities and trade-offs documented above.