
Containers & Compute Compared: Cloudflare Workers/Containers vs AWS ECS/EKS vs Azure AKS vs Google GKE

A deep technical comparison of container and compute platforms — Cloudflare's edge compute model vs AWS ECS/EKS/Fargate, Azure AKS/Container Apps, and Google GKE/Cloud Run. Architecture, orchestration, pricing, and when containers vs edge isolates vs serverless containers win.

By InventiveHQ Team

Introduction

The container and compute category is where Cloudflare's architectural divergence from the hyperscalers is most pronounced. AWS, Azure, and Google have spent a decade building container orchestration platforms — managed Kubernetes, serverless containers, and rich ecosystems of tools for deploying, scaling, and managing containerized applications.

Cloudflare took a fundamentally different path. Instead of containers, Cloudflare built compute around V8 isolates (Workers) — lighter, faster, and more globally distributed, but more constrained. Cloudflare has more recently added container support for workloads that exceed Workers' limits, but the primary philosophy remains: if a workload can run in an isolate, it should.

This comparison examines three compute models:

  1. Edge isolates (Cloudflare Workers) — sub-millisecond startup, global deployment, constrained resources
  2. Managed Kubernetes (EKS, AKS, GKE) — full orchestration, maximum flexibility, operational complexity
  3. Serverless containers (Fargate, Cloud Run, Container Apps) — container flexibility without cluster management

Understanding when each model wins is more useful than declaring a single winner.

Compute Architecture Spectrum

The four providers offer compute at different points on the complexity-capability spectrum:

Simpler ←——————————————————————————→ More Capable

Workers ──→ Cloud Run ──→ Fargate/Container Apps ──→ Kubernetes ──→ Bare VMs
(isolates)   (serverless    (serverless containers     (orchestrated    (full control)
              containers)    in cluster)                 containers)

Each step right adds capability and operational complexity. Each step left adds simplicity and constraints.

Cloudflare: Edge Compute Model

Workers (V8 Isolates)

Covered in detail in the serverless comparison, Workers are Cloudflare's primary compute primitive. Key constraints for this comparison:

| Constraint | Limit | Impact |
|---|---|---|
| Memory | 128 MB | Cannot run large runtimes (JVM, .NET CLR) or process large datasets in memory |
| CPU time | 30 seconds (paid) | Cannot run long batch jobs or ML inference |
| Languages | JS/TS, WASM | No native Python, Java, Go, Ruby, PHP, .NET |
| File system | None | Cannot read/write files, use temp storage, or load native libraries |
| Network | HTTP(S) only (plus TCP connect) | No raw TCP/UDP servers, no gRPC server, no database wire protocols |
| Package size | 10 MB (compressed) | Cannot include large dependencies or ML models |

These constraints are by design — they enable sub-millisecond startup and global deployment. But they also mean Workers cannot run many workloads that containers handle easily.

Cloudflare Containers (Newer)

Cloudflare's container platform (launched 2025) runs OCI-compatible containers on Cloudflare's edge network:

| Feature | Details |
|---|---|
| Container format | OCI-compatible (Docker images) |
| Languages | Any (full Linux environment) |
| Memory | Configurable (larger than Workers' 128 MB) |
| Networking | Accessible from Workers via service bindings |
| Scaling | Automatic |
| Locations | Cloudflare edge network |
| Integration | Workers service bindings, R2, D1, KV |

Cloudflare Containers bridge the gap between Workers and traditional container platforms. A typical pattern: Workers handle the request/response edge layer (routing, auth, caching) and delegate compute-heavy operations to containers running on the same network.

Limitations to acknowledge: Cloudflare's container platform is newer than any hyperscaler equivalent. It lacks the ecosystem depth, tooling maturity, and orchestration capabilities of EKS/AKS/GKE. There is no Kubernetes API, no service mesh, no Helm charts, no established operational patterns. For simple containerized workloads, this simplicity is an advantage. For complex microservice architectures, it is a limitation.

The Workers + Containers Pattern

Cloudflare's recommended architecture for complex applications:

User Request
  → Workers (edge: routing, auth, caching, personalization)
    → Container (heavy compute: ML inference, image processing)
    → D1/KV/R2 (data: storage, state, objects)
  → Response (assembled at edge)

This is architecturally elegant: the edge handles what it is good at (low-latency request processing), and containers handle what they are good at (compute-heavy tasks). The pieces run on the same network.
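The pattern above can be sketched as a Worker that handles cheap work at the edge and hands heavy paths to a bound container. This is a minimal illustration, not Cloudflare's exact API surface: the `HEAVY_COMPUTE` binding name and the `/process` route are hypothetical.

```typescript
// Hypothetical env shape: HEAVY_COMPUTE is a service binding to a container,
// configured in the project's wrangler configuration. Names are illustrative.
interface Env {
  HEAVY_COMPUTE: { fetch(request: Request): Promise<Response> };
}

// Edge layer: routing, auth, and light responses stay in the Worker;
// compute-heavy paths are delegated over the binding.
export async function handleRequest(request: Request, env: Env): Promise<Response> {
  const url = new URL(request.url);

  // Auth check at the edge -- rejected requests never touch a container.
  if (!request.headers.get("Authorization")) {
    return new Response("unauthorized", { status: 401 });
  }

  // Heavy paths (e.g. image processing, ML inference) go to the container.
  if (url.pathname.startsWith("/process")) {
    return env.HEAVY_COMPUTE.fetch(request);
  }

  // Everything else is answered directly at the edge.
  return new Response(JSON.stringify({ ok: true, path: url.pathname }), {
    headers: { "Content-Type": "application/json" },
  });
}

export default { fetch: handleRequest };
```

The key property is that the authorization failure path never leaves the edge, so only authenticated, compute-heavy traffic incurs container cost.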

AWS Container Ecosystem

AWS has the most comprehensive container ecosystem — three orchestrators, two compute engines, and deep integration across the AWS service catalog.

ECS (Elastic Container Service)

ECS is AWS's proprietary container orchestrator — simpler than Kubernetes, with zero control plane cost.

| Feature | Details |
|---|---|
| Orchestrator | AWS proprietary (not Kubernetes) |
| Control plane | Free |
| Compute options | EC2 (self-managed), Fargate (serverless) |
| Task definition | JSON spec for containers, resources, networking, volumes |
| Service discovery | AWS Cloud Map integration |
| Load balancing | ALB/NLB integration (native) |
| Auto-scaling | Target tracking, step scaling, scheduled |
| Logging | CloudWatch Logs, FireLens (Fluentd/Fluent Bit) |
| Secrets | AWS Secrets Manager, SSM Parameter Store |
| IAM | Task role (per-container IAM permissions) |

ECS advantages over EKS:

  • No control plane cost ($73/month savings)
  • Simpler mental model (tasks, services, clusters vs pods, deployments, services, ingress, etc.)
  • Deeper AWS integration (IAM task roles, native ALB integration, CloudWatch)

ECS disadvantages:

  • AWS-only (not portable to other clouds)
  • No Helm, Kustomize, or Kubernetes ecosystem tools
  • Smaller community and fewer third-party integrations

EKS (Elastic Kubernetes Service)

EKS is managed Kubernetes on AWS. AWS manages the control plane; you manage worker nodes (or use Fargate for serverless pods).

| Feature | Details |
|---|---|
| Orchestrator | Kubernetes (upstream-compatible) |
| Control plane | $0.10/hour ($73/month) |
| Compute options | EC2 managed node groups, self-managed nodes, Fargate |
| Kubernetes version | Standard K8s versions, typically 1-2 behind upstream |
| Networking | Amazon VPC CNI (pod IPs from VPC), Calico option |
| Service mesh | App Mesh (AWS) or any K8s service mesh (Istio, Linkerd) |
| Ingress | AWS Load Balancer Controller, nginx, Traefik, etc. |
| Storage | EBS CSI, EFS CSI, FSx CSI drivers |
| Monitoring | CloudWatch Container Insights, Prometheus, Grafana |
| GitOps | ArgoCD, Flux (any K8s-compatible) |

Fargate (Serverless Compute Engine)

Fargate eliminates node management for both ECS and EKS. You define CPU and memory per task/pod, and Fargate provisions isolated compute.

| Dimension | Fargate |
|---|---|
| vCPU options | 0.25 to 16 vCPU |
| Memory options | 0.5 to 120 GB |
| Pricing (vCPU) | $0.04048/hour ($29.15/month) |
| Pricing (memory) | $0.004445/GB/hour ($3.20/GB/month) |
| Spot pricing | Up to 70% discount, with interruption |
| Storage | Ephemeral (20-200 GB), EFS persistent |
| Startup time | 30-60 seconds (image pull dependent) |

Fargate is more expensive per-resource than EC2 but eliminates node provisioning, patching, scaling, and capacity planning. For variable workloads, the simplicity often justifies the premium.
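With the per-vCPU and per-GB rates above, a Fargate task's monthly cost is simple arithmetic. A sketch using the rates quoted in the table and a 720-hour billing month (matching the table's $29.15/vCPU/month figure); this is an estimate, not AWS's billing logic:

```typescript
// Fargate on-demand rates as quoted in the table above.
const VCPU_PER_HOUR = 0.04048; // $/vCPU/hour
const GB_PER_HOUR = 0.004445;  // $/GB/hour

// Monthly cost of one always-running task (720 billing hours assumed).
function fargateMonthlyCost(vcpu: number, memoryGb: number, hours = 720): number {
  return vcpu * VCPU_PER_HOUR * hours + memoryGb * GB_PER_HOUR * hours;
}

// A 0.5 vCPU / 1 GB task running all month:
console.log(fargateMonthlyCost(0.5, 1).toFixed(2)); // → "17.77"
```

Doubling the task count for availability, as in the cost scenario later in this article, roughly doubles this figure before load balancer and data transfer charges.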

Azure Container Ecosystem

AKS (Azure Kubernetes Service)

AKS is Azure's managed Kubernetes with a free control plane — the most notable pricing advantage over EKS.

| Feature | Details |
|---|---|
| Control plane | Free (Standard tier: free. Premium: uptime SLA) |
| Compute | Azure VMs (node pools), virtual nodes (ACI) |
| Networking | Azure CNI, kubenet, Azure CNI Overlay |
| Service mesh | Istio (managed add-on), Open Service Mesh |
| Ingress | Azure Application Gateway Ingress Controller, nginx |
| Storage | Azure Disk, Azure Files, Azure Blob CSI drivers |
| Windows containers | Supported (mixed Linux/Windows clusters) |
| GitOps | Flux v2 (managed add-on) |
| Monitoring | Azure Monitor Container Insights, Prometheus |
| Local development | Bridge to Kubernetes |

AKS's free control plane makes it the cheapest managed Kubernetes for experimentation and development. Windows container support is strongest on AKS — relevant for .NET Framework workloads that cannot run on Linux.

Azure Container Apps

Container Apps is Azure's serverless container platform, built on Kubernetes and KEDA but abstracting away all cluster management:

| Feature | Details |
|---|---|
| Abstraction | No cluster, no nodes, no Kubernetes knowledge needed |
| Scaling | 0 to N based on HTTP traffic, events, cron, or custom metrics |
| Revisions | Blue/green deployments, traffic splitting |
| Dapr integration | Service invocation, state management, pub/sub, bindings |
| Networking | VNet integration, custom domains, mTLS between apps |
| Pricing (Consumption) | $0.000012/vCPU-second, $0.000002/GiB-second |
| Pricing (Dedicated) | Reserved compute with fixed pricing |

Container Apps is Azure's most direct competitor to Cloud Run and AWS Fargate on ECS. The Dapr integration is unique — Dapr provides language-agnostic building blocks (service invocation, state stores, pub/sub, bindings) that simplify microservice development without Kubernetes-level complexity.
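Dapr's building blocks are language-agnostic because they are plain HTTP calls to a local sidecar. A sketch of service invocation (3500 is Dapr's conventional default sidecar port; the `orders` app id and `create` method are hypothetical):

```typescript
// The Dapr sidecar exposes building blocks over local HTTP.
// Default HTTP port is 3500; app id and method below are illustrative.
const DAPR_PORT = 3500;

// Dapr service-invocation URL: /v1.0/invoke/<app-id>/method/<method-name>
function daprInvokeUrl(appId: string, method: string): string {
  return `http://localhost:${DAPR_PORT}/v1.0/invoke/${appId}/method/${method}`;
}

// A caller service would then simply:
//   await fetch(daprInvokeUrl("orders", "create"), { method: "POST", body: payload });
console.log(daprInvokeUrl("orders", "create"));
// → "http://localhost:3500/v1.0/invoke/orders/method/create"
```

Because the caller only speaks HTTP to localhost, the same code works whether the target service is discovered inside Container Apps, AKS, or a local dev environment.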

Google Cloud Container Ecosystem

GKE (Google Kubernetes Engine)

GKE is widely considered the most mature managed Kubernetes platform — unsurprising given that Google created Kubernetes.

| Feature | Details |
|---|---|
| Modes | Standard (you manage nodes) and Autopilot (Google manages everything) |
| Control plane | Free (one zonal cluster) or $0.10/hour (regional/Autopilot) |
| Autopilot pricing | Per-pod vCPU ($0.0445/hour) and memory ($0.0049/GB/hour) |
| Networking | GKE Dataplane V2 (eBPF-based, Cilium) |
| Service mesh | Anthos Service Mesh (managed Istio) |
| Ingress | GKE Gateway API, Google Cloud Load Balancer |
| Multi-cluster | GKE Multi-cluster Services, Anthos |
| Security | Binary Authorization, Workload Identity, Config Sync |
| Monitoring | Google Cloud Managed Prometheus, Cloud Logging |
| Release channels | Rapid, Regular, Stable (automatic upgrades) |

GKE Autopilot deserves special attention: it is the closest thing to "serverless Kubernetes" — you define pods and GKE handles nodes, scaling, security patches, and resource optimization. Pricing is per-pod, and you never interact with nodes. This eliminates the biggest operational burden of Kubernetes (node management) while preserving full Kubernetes API compatibility.

Cloud Run

Cloud Run is Google's serverless container platform and the cleanest "deploy a container, get a URL" experience:

| Feature | Details |
|---|---|
| Input | Container image (any language, any framework) |
| Scaling | 0 to 1,000 instances, automatic |
| Cold start | 1-10 seconds (image-size dependent) |
| Max concurrency | 1,000 requests per instance |
| Max memory | 32 GB |
| Max timeout | 60 minutes |
| Min instances | 0 (scale to zero) or configurable minimum |
| Pricing | $0.00002400/vCPU-second, $0.00000250/GiB-second |
| Free tier | 2M requests, 180K vCPU-seconds, 360K GiB-seconds/month |
| Jobs | Cloud Run Jobs for batch/scheduled tasks |
| Traffic splitting | Automatic with revision-based routing |

Cloud Run's killer feature: concurrency. Unlike AWS Lambda (1 request per instance) or even Fargate (you manage concurrency), Cloud Run handles up to 1,000 concurrent requests per instance. This dramatically reduces the number of instances needed and minimizes cold starts.
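The effect is easy to quantify with Little's law: in-flight requests ≈ arrival rate × latency, and the instance count needed is that divided by per-instance concurrency. A sketch (real autoscalers add headroom and smooth over bursts):

```typescript
// Instances needed to absorb a load, per Little's law:
// concurrent in-flight requests = rps * latencySeconds,
// spread across instances that each handle `concurrencyPerInstance` requests.
function instancesNeeded(
  rps: number,
  latencySeconds: number,
  concurrencyPerInstance: number,
): number {
  return Math.ceil((rps * latencySeconds) / concurrencyPerInstance);
}

// 500 req/s at 200 ms average latency:
console.log(instancesNeeded(500, 0.2, 1));    // Lambda-style (1 req/instance): 100
console.log(instancesNeeded(500, 0.2, 1000)); // Cloud Run (up to 1,000/instance): 1
```

Fewer instances means fewer cold starts and better connection reuse, which is why the concurrency model matters beyond raw cost.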

Comparison: When Each Model Wins

Feature Comparison

| Dimension | Workers | Cloudflare Containers | ECS/Fargate | EKS | AKS | GKE | Cloud Run | Container Apps |
|---|---|---|---|---|---|---|---|---|
| Startup time | <1ms | Seconds | 30-60s | Seconds (running pods) | Seconds (running pods) | Seconds (running pods) | 1-10s | 1-10s |
| Global deploy | All 310+ PoPs | Edge locations | Per-region | Per-region | Per-region | Per-region + multi-cluster | Per-region | Per-region |
| Max memory | 128 MB | Configurable | 120 GB | Node-limited | Node-limited | Node/Autopilot-limited | 32 GB | 4 GB (Consumption) |
| Max CPU | N/A (CPU time) | Configurable | 16 vCPU | Node-limited | Node-limited | Node-limited | 8 vCPU | 4 vCPU (Consumption) |
| Languages | JS/TS/WASM | Any | Any | Any | Any | Any | Any | Any |
| Scale to zero | Yes (always ready) | Yes | Fargate: limited; ECS: no | No | No | Autopilot: per-pod | Yes | Yes |
| Kubernetes API | No | No | No | Yes | Yes | Yes | No | No |
| Service mesh | No | No | App Mesh | Any K8s mesh | Istio, OSM | Anthos Service Mesh | No | Dapr |
| GPU support | No | No | Yes | Yes | Yes | Yes | No | No |
| Persistent storage | KV, D1, R2, DO | R2, D1 | EBS, EFS, FSx | EBS, EFS, FSx | Azure Disk, Files | PD, Filestore | No (Cloud Storage via mount) | Azure Files |
| Windows containers | No | No | Yes | Yes | Yes (best support) | Yes (limited) | No | Yes |

Cost Comparison: Same Workload, Different Platforms

Scenario: API backend serving 10M requests/month, 100ms average processing, 512MB memory

| Platform | Configuration | Monthly Cost |
|---|---|---|
| Workers | Paid plan + CPU time | ~$15-30 |
| Cloud Run | 0.5 vCPU, 512 MB, scale-to-zero | ~$25-40 |
| Container Apps | Consumption, 0.5 vCPU, 1 GB | ~$20-35 |
| ECS + Fargate | 0.5 vCPU, 1 GB, 2 tasks always running | ~$50-70 |
| EKS + EC2 | 2x t3.small nodes + control plane | ~$110-140 |
| AKS + VMs | 2x Standard_B2s nodes | ~$80-100 |
| GKE Autopilot | Per-pod pricing, auto-scaled | ~$60-90 |

Workers is cheapest by a wide margin for workloads that fit its constraints. Serverless containers (Cloud Run, Container Apps) are next. Managed Kubernetes is most expensive due to always-running control plane and node costs — but the cost includes capabilities (service mesh, persistent storage, custom networking) that simpler platforms lack.
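The gap can be framed as a break-even calculation: serverless containers bill only for active serving time, so a fixed-cost cluster only wins once active hours exceed the cluster's monthly bill divided by the serverless hourly rate. A sketch using Cloud Run's quoted per-second rates; the $120/month cluster figure is a stand-in drawn from the EKS range above:

```typescript
// Cloud Run rates quoted earlier, converted to $/hour of active serving
// for a given instance shape.
function cloudRunHourlyRate(vcpu: number, gib: number): number {
  return (vcpu * 0.000024 + gib * 0.0000025) * 3600;
}

// Active hours per month at which serverless cost equals a fixed monthly
// cluster bill (control plane + small nodes).
function breakEvenActiveHours(clusterMonthlyCost: number, vcpu: number, gib: number): number {
  return clusterMonthlyCost / cloudRunHourlyRate(vcpu, gib);
}

// A ~$120/month minimal EKS setup vs a 0.5 vCPU / 512 MiB Cloud Run service:
console.log(breakEvenActiveHours(120, 0.5, 0.5).toFixed(0)); // → "2516"
```

Since a month has only ~730 hours, a service this small never reaches break-even: Cloud Run stays cheaper, which matches the first scenario's table. The picture inverts at the 4 vCPU/16 GB shape in the second scenario, where spot-backed nodes win.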

Scenario: ML inference service, 4 vCPU, 16GB memory, always running

| Platform | Configuration | Monthly Cost |
|---|---|---|
| Workers | Not viable (128 MB limit) | — |
| Cloud Run | 4 vCPU, 16 GB, min 1 instance | ~$250-350 |
| ECS + Fargate | 4 vCPU, 16 GB, 1 task | ~$200-280 |
| EKS + EC2 | m6i.xlarge spot instance | ~$80-120 |
| GKE + Spot | e2-standard-4 preemptible | ~$70-100 |

For compute-heavy, always-running workloads, self-managed nodes on Kubernetes with spot/preemptible instances are dramatically cheaper than serverless containers. The trade-off is operational complexity.

Sample Cost Comparison

For a sample sustained container workload (a fixed number of vCPU-hours and GB-hours per month), estimated costs rank as follows:

| Rank | Platform | Monthly | Yearly | Notes |
|---|---|---|---|---|
| 1 | AWS Fargate | $49.37 | $592.44 | Serverless containers. No cluster management. Per-second billing. |
| 2 | Azure AKS | $52.40 | $628.80 | AKS control plane is free; pay for VM node costs only. Pricing based on D-series VMs. |
| 3 | Cloudflare Workers | $71.00 | $852.03 | Workers are not traditional containers but handle many container use cases. CPU-time billing only (I/O wait is free). |
| 4 | Google GKE / Cloud Run | $104.40 | $1,252.80 | Cloud Run pricing shown (serverless containers). GKE Autopilot is ~50% cheaper for sustained workloads. |

Estimates based on published pricing as of February 2026. Actual costs may vary by region, commitment, and usage patterns.

The Container vs Isolate Decision

This is the fundamental architectural choice in this comparison:

Choose Isolates (Workers) When:

  • Latency is paramount — 0ms cold start, response from nearest of 310+ locations
  • Workloads are stateless HTTP — API proxies, middleware, edge routing, personalization, auth
  • Memory needs are modest — under 128MB, no large runtimes or datasets in memory
  • Execution is short — under 30 seconds, typically under 1 second
  • JavaScript/TypeScript is acceptable — or you can compile to WASM
  • Global deployment is valuable — your users are distributed worldwide

Choose Serverless Containers (Cloud Run, Container Apps, Fargate) When:

  • You need full language runtimes — Python with ML libraries, Java with Spring, .NET, Go with CGO
  • Memory needs are moderate — 512MB to 32GB
  • You want container simplicity — deploy an image, get a URL, auto-scale
  • Cold starts are tolerable — 1-10 seconds is acceptable for your use case
  • You do not need Kubernetes — no service mesh, no custom scheduling, no persistent volumes
  • Scale-to-zero matters — pay only when processing requests

Choose Kubernetes (EKS, AKS, GKE) When:

  • Complex microservices — service mesh, custom networking, inter-service communication patterns
  • Stateful workloads — databases, message queues, cache servers running in the cluster
  • GPU workloads — ML training, inference, video processing requiring GPU scheduling
  • Custom scheduling — node affinity, taints, tolerations, priority classes
  • Maximum control — you want to define every aspect of the deployment, scaling, and networking
  • Portability — Kubernetes API is the same across clouds (with provider-specific extensions)
  • Existing investment — your team has Kubernetes expertise and established operational practices

The Honest Assessment

Cloudflare's compute model is the most architecturally distinct in cloud computing. Workers are not "containers lite" — they are a different paradigm: globally distributed, sub-millisecond startup, request-level isolation, CPU-time billing. For the workloads they support, Workers deliver better latency at lower cost than any container platform.

But "the workloads they support" is a significant qualifier. You cannot run a Django application on Workers. You cannot run a JVM-based microservice. You cannot process a 1GB file in memory. You cannot train an ML model. For these workloads — which represent the majority of enterprise compute — container platforms (Kubernetes or serverless) remain necessary.

Cloudflare Containers address part of this gap, but the platform is nascent compared to the hyperscaler container ecosystems that have been maturing for a decade. If you need Kubernetes today, EKS, AKS, or GKE are the choices. If you need serverless containers, Cloud Run and Azure Container Apps provide the best developer experience.

The most forward-looking architecture: Workers at the edge for request handling + serverless containers in a region for heavy compute + Kubernetes for complex orchestration when needed. This layered approach matches each workload to the compute model that best serves it — and Cloudflare's container platform may eventually absorb the middle tier as it matures.

Among the hyperscalers: GKE is the most mature Kubernetes (Google built Kubernetes), Cloud Run is the best serverless container experience (simplest deployment, best concurrency model), ECS is the best AWS-native orchestrator (simplest, free control plane), and AKS offers the best value (free control plane, best Windows support). The choice depends more on your existing cloud investment than on container platform quality — all three hyperscalers offer excellent container infrastructure.

Frequently Asked Questions


Does Cloudflare support containers?

Cloudflare launched a container platform in 2025 that runs OCI-compatible containers at the edge. However, Cloudflare's primary compute model remains V8 isolates (Workers), which are lighter and faster than containers. Cloudflare Containers are designed for workloads that need full Linux environments, native language runtimes, or larger memory allocations that Workers' 128MB limit cannot accommodate. The container platform is newer and less mature than AWS ECS/EKS, AKS, or GKE.

How do Cloudflare Workers compare to Kubernetes?

They solve different problems. Kubernetes orchestrates containers in a cluster — managing scheduling, scaling, networking, and lifecycle across nodes. Workers are individual request handlers that run in V8 isolates at edge locations. Workers have no concept of pods, services, ingress, or persistent volumes. For simple request/response workloads, Workers is dramatically simpler. For complex microservice architectures with persistent processes, service mesh, and stateful workloads, Kubernetes is necessary.

What is the difference between AWS ECS and EKS?

ECS (Elastic Container Service) is AWS's proprietary container orchestrator — simpler than Kubernetes, deeply integrated with AWS services, no control plane cost. EKS (Elastic Kubernetes Service) is managed Kubernetes on AWS — standard Kubernetes APIs, broader ecosystem compatibility, $0.10/hour control plane cost. Both can use Fargate (serverless) or EC2 (self-managed) for compute. Choose ECS for AWS-native simplicity; choose EKS for Kubernetes ecosystem compatibility and portability.

What is Google Cloud Run, and how does it compare to Workers?

Cloud Run is Google's serverless container platform — you deploy a container image, and Cloud Run handles scaling (including to zero), load balancing, and SSL. It is more capable than Workers (full Linux, any language, up to 32GB memory, 60-minute timeout) but slower (container cold starts vs no cold starts) and regional (not global). Cloud Run is the best choice when you need container flexibility with serverless simplicity on GCP.

How much does managed Kubernetes cost?

The control plane: EKS costs $0.10/hour ($73/month), AKS is free (control plane), GKE Autopilot includes control plane in pod pricing, GKE Standard is free (one zonal cluster) or $0.10/hour for regional. Node compute is additional: EC2/Azure VMs/GCE instances at standard pricing. Fargate/serverless costs more per-vCPU but eliminates node management. A minimal production Kubernetes cluster typically costs $150-500/month before application workloads.

When should I use serverless containers instead of Kubernetes?

Use serverless containers (Fargate, Cloud Run, Azure Container Apps) when: you have simple scaling requirements, want zero infrastructure management, run HTTP-based workloads, and prefer per-request/per-second pricing. Use Kubernetes (EKS, AKS, GKE) when: you need service mesh, custom scheduling, stateful workloads (databases, message queues), GPU workloads, complex networking, or want maximum control and portability. Kubernetes is more powerful but requires significantly more operational expertise.

Is Azure Container Apps the same as AKS?

No. Azure Container Apps is a serverless container platform built on top of Kubernetes and KEDA (event-driven autoscaling). It abstracts away cluster management — you deploy containers without configuring nodes, networking, or Kubernetes resources. AKS is managed Kubernetes where you manage the cluster configuration, node pools, and Kubernetes resources. Container Apps is simpler but less flexible; AKS is more powerful but more complex. Container Apps is Microsoft's answer to Cloud Run and AWS Fargate on ECS.

Can Cloudflare Workers replace Kubernetes?

For certain workloads, yes. If your application is a stateless HTTP API, a web proxy, an edge middleware layer, or a data aggregation service — and it fits within Workers' constraints (128MB memory, 30s execution, JS/TS/WASM) — Workers can replace containers with better latency and simpler operations. For microservices requiring full language runtimes, large memory, persistent processes, gRPC service mesh, or complex orchestration, Kubernetes is still necessary.

What is GKE Autopilot?

GKE Autopilot is Google's fully managed Kubernetes mode where Google manages nodes, scaling, and infrastructure. You only define pods and Kubernetes resources — GKE handles the rest. Pricing is per-pod (vCPU and memory) with no node management overhead. Autopilot is more expensive per-resource than Standard mode but eliminates node provisioning, patching, and right-sizing. It is the closest Kubernetes experience to 'serverless' while remaining fully Kubernetes-compatible.

How does container image size affect cold starts?

Directly and significantly. A 50MB container image might have a 500ms cold start on Cloud Run; a 2GB image might take 10-15 seconds. Kubernetes mitigates this by keeping pods running (no cold starts for established services), but initial deployments and scaling events are slower with large images. Workers avoid this entirely — V8 isolates do not load container images. If cold starts matter, minimize image size (use Alpine/distroless bases) or use Workers for latency-sensitive paths.
