Introduction
The container and compute category is where Cloudflare's architectural divergence from the hyperscalers is most pronounced. AWS, Azure, and Google have spent a decade building container orchestration platforms — managed Kubernetes, serverless containers, and rich ecosystems of tools for deploying, scaling, and managing containerized applications.
Cloudflare took a fundamentally different path. Instead of containers, Cloudflare built compute around V8 isolates (Workers) — lighter, faster, and more globally distributed, but more constrained. Cloudflare has more recently added container support for workloads that exceed Workers' limits, but the primary philosophy remains: if a workload can run in an isolate, it should.
This comparison examines three compute models:
- Edge isolates (Cloudflare Workers) — sub-millisecond startup, global deployment, constrained resources
- Managed Kubernetes (EKS, AKS, GKE) — full orchestration, maximum flexibility, operational complexity
- Serverless containers (Fargate, Cloud Run, Container Apps) — container flexibility without cluster management
Understanding when each model wins is more useful than declaring a single winner.
Compute Architecture Spectrum
The four providers offer compute at different points on the complexity-capability spectrum:
Simpler ←——————————————————————————→ More Capable
Workers (isolates) ──→ Cloud Run (serverless containers) ──→ Fargate / Container Apps (serverless containers in a cluster) ──→ Kubernetes (orchestrated containers) ──→ Bare VMs (full control)
Each step right adds capability and operational complexity. Each step left adds simplicity and constraints.
Cloudflare: Edge Compute Model
Workers (V8 Isolates)
Covered in detail in the serverless comparison, Workers are Cloudflare's primary compute primitive. Key constraints for this comparison:
| Constraint | Limit | Impact |
|---|---|---|
| Memory | 128 MB | Cannot run large runtimes (JVM, .NET CLR) or process large datasets in memory |
| CPU time | 30 seconds (paid) | Cannot run long batch jobs or ML inference |
| Languages | JS/TS, WASM | No native Python, Java, Go, Ruby, PHP, .NET |
| File system | None | Cannot read/write files, use temp storage, or load native libraries |
| Network | HTTP(S), plus outbound TCP via `connect()` | No inbound raw TCP/UDP servers, no gRPC server; database wire protocols only over outbound TCP |
| Package size | 10 MB (compressed) | Cannot include large dependencies or ML models |
These constraints are by design — they enable sub-millisecond startup and global deployment. But they also mean Workers cannot run many workloads that containers handle easily.
Cloudflare Containers (Newer)
Cloudflare's container platform (launched 2025) runs OCI-compatible containers on Cloudflare's edge network:
| Feature | Details |
|---|---|
| Container format | OCI-compatible (Docker images) |
| Languages | Any (full Linux environment) |
| Memory | Configurable (larger than Workers' 128MB) |
| Networking | Accessible from Workers via service bindings |
| Scaling | Automatic |
| Locations | Cloudflare edge network |
| Integration | Workers service bindings, R2, D1, KV |
Cloudflare Containers bridge the gap between Workers and traditional container platforms. A typical pattern: Workers handle the request/response edge layer (routing, auth, caching) and delegate compute-heavy operations to containers running on the same network.
Limitations to acknowledge: Cloudflare's container platform is newer than any hyperscaler equivalent. It lacks the ecosystem depth, tooling maturity, and orchestration capabilities of EKS/AKS/GKE. There is no Kubernetes API, no service mesh, no Helm charts, no established operational patterns. For simple containerized workloads, this simplicity is an advantage. For complex microservice architectures, it is a limitation.
The Workers + Containers Pattern
Cloudflare's recommended architecture for complex applications:
User Request
→ Workers (edge: routing, auth, caching, personalization)
→ Container (heavy compute: ML inference, image processing)
→ D1/KV/R2 (data: storage, state, objects)
→ Response (assembled at edge)
This is architecturally elegant: the edge handles what it is good at (low-latency request processing), and containers handle what they are good at (compute-heavy tasks). The pieces run on the same network.
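The split can be sketched as a Worker that routes by path. This is a hypothetical illustration: the binding name `HEAVY_COMPUTE` and its simplified fetch-like shape are assumptions, not Cloudflare's actual Containers API — consult the Containers documentation for the real bindings.

```typescript
// Hypothetical sketch of the Workers + Containers pattern. The binding
// name (HEAVY_COMPUTE) and its fetch-like shape are illustrative
// assumptions, not Cloudflare's actual Containers API.

interface ContainerBinding {
  fetch(url: string): Promise<string>; // simplified: returns the response body
}

interface Env {
  HEAVY_COMPUTE: ContainerBinding; // container reachable from the Worker
}

// Compute-heavy routes go to the container; everything else stays at the edge.
function shouldDelegate(pathname: string): boolean {
  return ["/resize", "/transcode", "/infer"].some((p) => pathname.startsWith(p));
}

async function handleRequest(url: string, env: Env): Promise<string> {
  const { pathname } = new URL(url);
  if (shouldDelegate(pathname)) {
    return env.HEAVY_COMPUTE.fetch(url); // same-network hop to the container
  }
  return "handled at the edge"; // routing, auth, cached reads, personalization
}
```

The point of the sketch is the routing decision: latency-sensitive paths never leave the edge, and only the compute-heavy minority of requests pays the hop to the container.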
AWS Container Ecosystem
AWS has the most comprehensive container ecosystem — three orchestrators, two compute engines, and deep integration across the AWS service catalog.
ECS (Elastic Container Service)
ECS is AWS's proprietary container orchestrator — simpler than Kubernetes, with zero control plane cost.
| Feature | Details |
|---|---|
| Orchestrator | AWS proprietary (not Kubernetes) |
| Control plane | Free |
| Compute options | EC2 (self-managed), Fargate (serverless) |
| Task definition | JSON spec for containers, resources, networking, volumes |
| Service discovery | AWS Cloud Map integration |
| Load balancing | ALB/NLB integration (native) |
| Auto-scaling | Target tracking, step scaling, scheduled |
| Logging | CloudWatch Logs, FireLens (Fluentd/Fluent Bit) |
| Secrets | AWS Secrets Manager, SSM Parameter Store |
| IAM | Task role (per-container IAM permissions) |
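For concreteness, a minimal Fargate task definition might look like the following. The account ID, image, role, and parameter names are hypothetical; the fields shown (task role, secrets from SSM, awslogs) correspond to the rows above.

```json
{
  "family": "api-backend",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "taskRoleArn": "arn:aws:iam::123456789012:role/api-task-role",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/api/db-password"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/api",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "api"
        }
      }
    }
  ]
}
```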
ECS advantages over EKS:
- No control plane cost ($73/month savings)
- Simpler mental model (tasks, services, clusters vs pods, deployments, services, ingress, etc.)
- Deeper AWS integration (IAM task roles, native ALB integration, CloudWatch)
ECS disadvantages:
- AWS-only (not portable to other clouds)
- No Helm, Kustomize, or Kubernetes ecosystem tools
- Smaller community and fewer third-party integrations
EKS (Elastic Kubernetes Service)
EKS is managed Kubernetes on AWS. AWS manages the control plane; you manage worker nodes (or use Fargate for serverless pods).
| Feature | Details |
|---|---|
| Orchestrator | Kubernetes (upstream-compatible) |
| Control plane | $0.10/hour ($73/month) |
| Compute options | EC2 managed node groups, self-managed nodes, Fargate |
| Kubernetes version | Standard K8s versions, typically 1-2 behind upstream |
| Networking | Amazon VPC CNI (pod IPs from VPC), Calico option |
| Service mesh | App Mesh (AWS) or any K8s service mesh (Istio, Linkerd) |
| Ingress | AWS Load Balancer Controller, nginx, Traefik, etc. |
| Storage | EBS CSI, EFS CSI, FSx CSI drivers |
| Monitoring | CloudWatch Container Insights, Prometheus, Grafana |
| GitOps | ArgoCD, Flux (any K8s-compatible) |
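Because EKS is upstream-compatible, standard Kubernetes manifests deploy unchanged. A minimal sketch with hypothetical names and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi
          ports:
            - containerPort: 8080
```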
Fargate (Serverless Compute Engine)
Fargate eliminates node management for both ECS and EKS. You define CPU and memory per task/pod, and Fargate provisions isolated compute.
| Dimension | Fargate |
|---|---|
| vCPU options | 0.25 to 16 vCPU |
| Memory options | 0.5 to 120 GB |
| Pricing (vCPU) | $0.04048/hour ($29.15/month) |
| Pricing (memory) | $0.004445/GB/hour ($3.20/GB/month) |
| Spot pricing | Up to 70% discount, with interruption |
| Storage | Ephemeral (20-200GB), EFS persistent |
| Startup time | 30-60 seconds (image pull dependent) |
Fargate is more expensive per-resource than EC2 but eliminates node provisioning, patching, scaling, and capacity planning. For variable workloads, the simplicity often justifies the premium.
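The premium is easy to quantify from the rates above. A back-of-envelope sketch, compute only (load balancers, data transfer, and storage are extra), using 720 hours/month to match the table's per-month figures:

```typescript
// Fargate on-demand cost from the published Linux/x86 rates above.
// Rates vary by region and change over time; check current AWS pricing.
const FARGATE_VCPU_PER_HOUR = 0.04048;
const FARGATE_GB_PER_HOUR = 0.004445;

function fargateMonthlyCost(vcpu: number, memoryGb: number, hours = 720): number {
  return vcpu * FARGATE_VCPU_PER_HOUR * hours + memoryGb * FARGATE_GB_PER_HOUR * hours;
}

// One always-on 0.5 vCPU / 1 GB task: ~$17.77/month; two tasks ~$35.55.
```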
Azure Container Ecosystem
AKS (Azure Kubernetes Service)
AKS is Azure's managed Kubernetes with a free control plane — the most notable pricing advantage over EKS.
| Feature | Details |
|---|---|
| Control plane | Free tier: no charge. Standard/Premium tiers (paid) add an uptime SLA |
| Compute | Azure VMs (node pools), virtual nodes (ACI) |
| Networking | Azure CNI, kubenet, Azure CNI Overlay |
| Service mesh | Istio (managed add-on), Open Service Mesh |
| Ingress | Azure Application Gateway Ingress Controller, nginx |
| Storage | Azure Disk, Azure Files, Azure Blob CSI drivers |
| Windows containers | Supported (mixed Linux/Windows clusters) |
| GitOps | Flux v2 (managed add-on) |
| Monitoring | Azure Monitor Container Insights, Prometheus |
| Local development | Bridge to Kubernetes (successor to Dev Spaces) |
AKS's free control plane makes it the cheapest managed Kubernetes for experimentation and development. Windows container support is strongest on AKS — relevant for .NET Framework workloads that cannot run on Linux.
Azure Container Apps
Container Apps is Azure's serverless container platform, built on Kubernetes and KEDA but abstracting away all cluster management:
| Feature | Details |
|---|---|
| Abstraction | No cluster, no nodes, no Kubernetes knowledge needed |
| Scaling | 0 to N based on HTTP traffic, events, cron, or custom metrics |
| Revisions | Blue/green deployments, traffic splitting |
| Dapr integration | Service invocation, state management, pub/sub, bindings |
| Networking | VNet integration, custom domains, mTLS between apps |
| Pricing (Consumption) | $0.000012/vCPU-second, $0.000002/GiB-second |
| Pricing (Dedicated) | Reserved compute with fixed pricing |
Container Apps is Azure's most direct competitor to Cloud Run and AWS Fargate on ECS. The Dapr integration is unique — Dapr provides language-agnostic building blocks (service invocation, state stores, pub/sub, bindings) that simplify microservice development without Kubernetes-level complexity.
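Dapr's building blocks are exposed over a local sidecar HTTP API, which is why they are language-agnostic: the app calls its own sidecar over plain HTTP, and the sidecar resolves and forwards to the target. A sketch of how service invocation addresses a peer app — the app IDs here are hypothetical; the URL shape is Dapr's documented v1.0 invoke route:

```typescript
// Dapr service invocation: the app calls its local sidecar, which
// resolves the target app ID and forwards the request (with mTLS).
function daprInvokeUrl(appId: string, method: string, daprPort = 3500): string {
  return `http://localhost:${daprPort}/v1.0/invoke/${appId}/method/${method}`;
}

// e.g. POST to daprInvokeUrl("orders", "create")
// → "http://localhost:3500/v1.0/invoke/orders/method/create"
```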
Google Cloud Container Ecosystem
GKE (Google Kubernetes Engine)
GKE is widely considered the most mature managed Kubernetes platform — unsurprising given that Google created Kubernetes.
| Feature | Details |
|---|---|
| Modes | Standard (you manage nodes) and Autopilot (Google manages everything) |
| Control plane | Free (one zonal cluster) or $0.10/hour (regional/Autopilot) |
| Autopilot pricing | Per-pod vCPU ($0.0445/hour) and memory ($0.0049/GB/hour) |
| Networking | GKE Dataplane V2 (eBPF-based, Cilium) |
| Service mesh | Anthos Service Mesh (managed Istio) |
| Ingress | GKE Gateway API, Google Cloud Load Balancer |
| Multi-cluster | GKE Multi-cluster Services, Anthos |
| Security | Binary Authorization, Workload Identity, Config Sync |
| Monitoring | Google Cloud Managed Prometheus, Cloud Logging |
| Release channels | Rapid, Regular, Stable (automatic upgrades) |
GKE Autopilot deserves special attention: it is the closest thing to "serverless Kubernetes" — you define pods and GKE handles nodes, scaling, security patches, and resource optimization. Pricing is per-pod, and you never interact with nodes. This eliminates the biggest operational burden of Kubernetes (node management) while preserving full Kubernetes API compatibility.
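Autopilot's per-pod billing makes cost a direct function of requested resources. A rough sketch from the rates above, using 720 hours/month for consistency with the other estimates here (it ignores GKE's free tier credit and committed-use discounts):

```typescript
// GKE Autopilot per-pod cost from the published rates above.
// Rates vary by region and change over time; check current GCP pricing.
const AUTOPILOT_VCPU_PER_HOUR = 0.0445;
const AUTOPILOT_GB_PER_HOUR = 0.0049;

function autopilotPodMonthlyCost(vcpu: number, memoryGb: number, hours = 720): number {
  return vcpu * AUTOPILOT_VCPU_PER_HOUR * hours + memoryGb * AUTOPILOT_GB_PER_HOUR * hours;
}

// A 0.25 vCPU / 1 GB pod: ~$11.54/month, with no node to size or patch.
```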
Cloud Run
Cloud Run is Google's serverless container platform and the cleanest "deploy a container, get a URL" experience:
| Feature | Details |
|---|---|
| Input | Container image (any language, any framework) |
| Scaling | 0 to 1,000 instances, automatic |
| Cold start | 1-10 seconds (image-size dependent) |
| Max concurrency | 1,000 requests per instance |
| Max memory | 32 GB |
| Max timeout | 60 minutes |
| Min instances | 0 (scale to zero) or configurable minimum |
| Pricing | $0.00002400/vCPU-second, $0.00000250/GiB-second |
| Free tier | 2M requests, 360K vCPU-seconds, 180K GiB-seconds/month |
| Jobs | Cloud Run Jobs for batch/scheduled tasks |
| Traffic splitting | Automatic with revision-based routing |
Cloud Run's killer feature: concurrency. Unlike AWS Lambda (1 request per instance) or even Fargate (you manage concurrency), Cloud Run handles up to 1,000 concurrent requests per instance. This dramatically reduces the number of instances needed and minimizes cold starts.
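The effect can be estimated with Little's law: average in-flight requests ≈ arrival rate × latency, and the instance count is that divided by per-instance concurrency. A sketch:

```typescript
// Instances needed for a steady load, via Little's law:
// in-flight requests ≈ requests/sec × seconds/request.
function instancesNeeded(rps: number, avgLatencyMs: number, concurrencyPerInstance: number): number {
  const inFlight = (rps * avgLatencyMs) / 1000; // average concurrent requests
  return Math.ceil(inFlight / concurrencyPerInstance);
}

// 1,000 rps at 100 ms ⇒ ~100 concurrent requests:
//   one request per instance (Lambda-style): 100 instances
//   up to 1,000 per instance (Cloud Run):    1 instance
```

Fewer instances means fewer cold starts and better utilization of each instance's CPU and memory, which is where the cost advantage comes from.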
Comparison: When Each Model Wins
Feature Comparison
| Dimension | Workers | Cloudflare Containers | ECS/Fargate | EKS | AKS | GKE | Cloud Run | Container Apps |
|---|---|---|---|---|---|---|---|---|
| Startup time | <1ms | Seconds | 30-60s | Seconds (running pods) | Seconds (running pods) | Seconds (running pods) | 1-10s | 1-10s |
| Global deploy | All 310+ PoPs | Edge locations | Per-region | Per-region | Per-region | Per-region + multi-cluster | Per-region | Per-region |
| Max memory | 128 MB | Configurable | 120 GB | Node-limited | Node-limited | Node/Autopilot-limited | 32 GB | 4 GB (Consumption) |
| Max CPU | N/A (CPU time) | Configurable | 16 vCPU | Node-limited | Node-limited | Node-limited | 8 vCPU | 4 vCPU (Consumption) |
| Languages | JS/TS/WASM | Any | Any | Any | Any | Any | Any | Any |
| Scale to zero | Yes (always ready) | Yes | Fargate: limited, ECS: no | No | No | Autopilot: per-pod | Yes | Yes |
| Kubernetes API | No | No | No (ECS), Yes (EKS) | Yes | Yes | Yes | No | No |
| Service mesh | No | No | App Mesh | Any K8s mesh | Istio, OSM | Anthos Service Mesh | No | Dapr |
| GPU support | No | No | Yes | Yes | Yes | Yes | No | No |
| Persistent storage | KV, D1, R2, DO | R2, D1 | EBS, EFS, FSx | EBS, EFS, FSx | Azure Disk, Files | PD, Filestore | No (Cloud Storage via mount) | Azure Files |
| Windows containers | No | No | Yes | Yes | Yes (best support) | Yes (limited) | No | Yes |
Cost Comparison: Same Workload, Different Platforms
Scenario: API backend serving 10M requests/month, 100ms average processing, 512MB memory
| Platform | Configuration | Monthly Cost |
|---|---|---|
| Workers | Paid plan + CPU time | ~$15-30 |
| Cloud Run | 0.5 vCPU, 512MB, scale-to-zero | ~$25-40 |
| Container Apps | Consumption, 0.5 vCPU, 1GB | ~$20-35 |
| ECS + Fargate | 0.5 vCPU, 1GB, 2 tasks always running | ~$50-70 |
| EKS + EC2 | 2x t3.small nodes + control plane | ~$110-140 |
| AKS + VMs | 2x Standard_B2s nodes | ~$80-100 |
| GKE Autopilot | Per-pod pricing, auto-scaled | ~$60-90 |
Workers is cheapest by a wide margin for workloads that fit its constraints. Serverless containers (Cloud Run, Container Apps) are next. Managed Kubernetes is most expensive due to always-running control plane and node costs — but the cost includes capabilities (service mesh, persistent storage, custom networking) that simpler platforms lack.
Scenario: ML inference service, 4 vCPU, 16GB memory, always running
| Platform | Configuration | Monthly Cost |
|---|---|---|
| Workers | Not viable (128MB limit) | — |
| Cloud Run | 4 vCPU, 16GB, min 1 instance | ~$250-350 |
| ECS + Fargate | 4 vCPU, 16GB, 1 task | ~$200-280 |
| EKS + EC2 | m6i.xlarge spot instance | ~$80-120 |
| GKE + Spot | e2-standard-4 preemptible | ~$70-100 |
For compute-heavy, always-running workloads, self-managed nodes on Kubernetes with spot/preemptible instances are dramatically cheaper than serverless containers. The trade-off is operational complexity.
Calculate Your Costs
Estimates based on published pricing as of February 2026. Actual costs may vary by region, commitment, and usage patterns.
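For request-driven serverless platforms, the active-use component of the first scenario can be approximated as billed instance time × published rates. A sketch using Cloud Run's rates (swap in Container Apps' $0.000012/vCPU-second and $0.000002/GiB-second to compare); it ignores free tiers, per-request fees, minimum instances, and idle billing, so it is a floor, not a bill:

```typescript
// Active-use cost for a request-driven serverless container platform.
// Billed instance time ≈ total request time divided by per-instance
// concurrency; real bills add request fees, min instances, and idle time.
function activeUseCost(
  requests: number,
  avgLatencyMs: number,
  vcpu: number,
  memoryGib: number,
  vcpuRatePerSec: number,
  gibRatePerSec: number,
  concurrency = 1,
): number {
  const billedSeconds = (requests * avgLatencyMs) / 1000 / concurrency;
  return billedSeconds * (vcpu * vcpuRatePerSec + memoryGib * gibRatePerSec);
}

// Scenario above on Cloud Run rates: 10M requests, 100 ms, 0.5 vCPU, 0.5 GiB,
// concurrency 1 → ~$13.25 of active compute before request fees and free tier.
const cloudRunFloor = activeUseCost(10_000_000, 100, 0.5, 0.5, 0.000024, 0.0000025);
```

Raising `concurrency` divides the billed time, which is why Cloud Run's concurrency model lowers the effective cost for I/O-bound services.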
The Container vs Isolate Decision
This is the fundamental architectural choice in this comparison:
Choose Isolates (Workers) When:
- Latency is paramount — 0ms cold start, response from nearest of 310+ locations
- Workloads are stateless HTTP — API proxies, middleware, edge routing, personalization, auth
- Memory needs are modest — under 128MB, no large runtimes or datasets in memory
- Execution is short — under 30 seconds, typically under 1 second
- JavaScript/TypeScript is acceptable — or you can compile to WASM
- Global deployment is valuable — your users are distributed worldwide
Choose Serverless Containers (Cloud Run, Container Apps, Fargate) When:
- You need full language runtimes — Python with ML libraries, Java with Spring, .NET, Go with CGO
- Memory needs are moderate — 512MB to 32GB
- You want container simplicity — deploy an image, get a URL, auto-scale
- Cold starts are tolerable — 1-10 seconds is acceptable for your use case
- You do not need Kubernetes — no service mesh, no custom scheduling, no persistent volumes
- Scale-to-zero matters — pay only when processing requests
Choose Kubernetes (EKS, AKS, GKE) When:
- Complex microservices — service mesh, custom networking, inter-service communication patterns
- Stateful workloads — databases, message queues, cache servers running in the cluster
- GPU workloads — ML training, inference, video processing requiring GPU scheduling
- Custom scheduling — node affinity, taints, tolerations, priority classes
- Maximum control — you want to define every aspect of the deployment, scaling, and networking
- Portability — Kubernetes API is the same across clouds (with provider-specific extensions)
- Existing investment — your team has Kubernetes expertise and established operational practices
The Honest Assessment
Cloudflare's compute model is the most architecturally distinct in cloud computing. Workers are not "containers lite" — they are a different paradigm: globally distributed, sub-millisecond startup, request-level isolation, CPU-time billing. For the workloads they support, Workers deliver better latency at lower cost than any container platform.
But "the workloads they support" is a significant qualifier. You cannot run a Django application on Workers. You cannot run a JVM-based microservice. You cannot process a 1GB file in memory. You cannot train an ML model. For these workloads — which represent the majority of enterprise compute — container platforms (Kubernetes or serverless) remain necessary.
Cloudflare Containers address part of this gap, but the platform is nascent compared to the hyperscaler container ecosystems that have been maturing for a decade. If you need Kubernetes today, EKS, AKS, or GKE are the choices. If you need serverless containers, Cloud Run and Azure Container Apps provide the best developer experience.
The most forward-looking architecture: Workers at the edge for request handling + serverless containers in a region for heavy compute + Kubernetes for complex orchestration when needed. This layered approach matches each workload to the compute model that best serves it — and Cloudflare's container platform may eventually absorb the middle tier as it matures.
Among the hyperscalers: GKE is the most mature Kubernetes (Google built Kubernetes), Cloud Run is the best serverless container experience (simplest deployment, best concurrency model), ECS is the best AWS-native orchestrator (simplest, free control plane), and AKS offers the best value (free control plane, best Windows support). The choice depends more on your existing cloud investment than on container platform quality — all three hyperscalers offer excellent container infrastructure.