Containers transformed how we build and deploy applications—but they also introduced new attack surfaces. A vulnerable base image, misconfigured Kubernetes cluster, or exposed Docker socket can compromise your entire infrastructure.
This guide covers container security best practices across the full lifecycle: build, ship, and run.
Why Container Security Is Different
Containers create unique security challenges:
- Ephemeral workloads - Traditional security tools expect persistent servers
- Shared kernel - Container isolation is weaker than VM isolation
- Image sprawl - Thousands of images mean thousands of potential vulnerabilities
- Complex orchestration - Kubernetes adds RBAC, networking, and secrets management
- Fast deployment cycles - Security can't slow down CI/CD pipelines
The goal: secure containers without sacrificing the speed and agility they provide.
Build: Secure Your Images
Security starts at the build stage: any vulnerability baked into an image ships to every environment that runs it.
Use Minimal Base Images
Smaller images have fewer vulnerabilities:
| Base Image | Size | Packages |
|---|---|---|
| Ubuntu | ~77 MB | ~100 |
| Alpine | ~5 MB | ~15 |
| Distroless | ~2 MB | Runtime only |
| Scratch | 0 | Nothing |
Dockerfile example with distroless:
```dockerfile
# Build stage
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o /app/server

# Production stage - distroless
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/server /server
USER nonroot:nonroot
ENTRYPOINT ["/server"]
```
Scan Images for Vulnerabilities
Integrate scanning into your CI/CD pipeline:
GitHub Actions with Trivy:
```yaml
- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: 'myapp:${{ github.sha }}'
    format: 'sarif'
    severity: 'CRITICAL,HIGH'
    exit-code: '1' # Fail build on critical vulnerabilities
```
Popular scanning tools (a local scan sketch follows this list):
- Trivy - Fast, comprehensive, open source
- Grype - Anchore's open source scanner
- Snyk Container - Developer-friendly with fix suggestions
- Clair - Open source scanner behind Quay (originally from CoreOS)
- AWS ECR scanning - Native for ECR users
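For example, a minimal local scan with Trivy or Grype, assuming both are installed and using the hypothetical image tag myapp:1.0:

```bash
# Scan a local image with Trivy and fail (non-zero exit) on critical or high findings
trivy image --severity CRITICAL,HIGH --exit-code 1 myapp:1.0

# The same gate with Grype
grype myapp:1.0 --fail-on critical
```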
Don't Run as Root
The default container user is root. Change it:
```dockerfile
# Create non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Set ownership
COPY --chown=appuser:appgroup . /app

# Switch to non-root user
USER appuser
CMD ["./app"]
```
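It's worth verifying that the built image really runs as a non-root user. A quick check, assuming the hypothetical tag myapp:1.0 and an image that ships BusyBox or coreutils:

```bash
# Show the user configured in the image metadata
docker inspect --format '{{.Config.User}}' myapp:1.0

# Run the image and print the effective UID/GID
docker run --rm --entrypoint id myapp:1.0
```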
Pin Image Versions
Avoid :latest tags—they create unpredictable deployments:
```dockerfile
# Bad - unpredictable
FROM node:latest

# Good - pinned version
FROM node:20.10.0-alpine3.19

# Better - pinned digest
FROM node@sha256:abc123...
```
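To find the digest for a tag, pull it once and read the repository digest Docker recorded; the value can then be copied into the FROM line:

```bash
# Resolve a tag to its immutable digest
docker pull node:20.10.0-alpine3.19
docker inspect --format '{{index .RepoDigests 0}}' node:20.10.0-alpine3.19
# Prints something like node@sha256:<digest> - use that in your FROM line
```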
Remove Unnecessary Tools
Don't include shells, package managers, or debugging tools in production images:
```dockerfile
# Multi-stage build keeps only the binary
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN go build -o /app/server

FROM scratch
COPY --from=builder /app/server /server
ENTRYPOINT ["/server"]
```
Ship: Secure Your Registry
Your container registry is a critical asset—compromise it and attackers can inject malicious images.
Use Private Registries
Don't pull base images straight from public registries in production; mirror them into a private registry you control (see the sketch after this list):
- AWS ECR - Integrated with IAM
- Azure Container Registry - Integrated with Entra ID
- Google Artifact Registry - Integrated with IAM
- Harbor - Self-hosted with vulnerability scanning
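For example, mirroring a public base image into AWS ECR so builds only reference the private copy; the account ID, region, and repository name are placeholders:

```bash
# Authenticate Docker to the private ECR registry
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Create a repository for the mirrored base image (one-time)
aws ecr create-repository --repository-name base/node

# Mirror the public image and build FROM the private copy from now on
docker pull node:20.10.0-alpine3.19
docker tag node:20.10.0-alpine3.19 123456789012.dkr.ecr.us-east-1.amazonaws.com/base/node:20.10.0-alpine3.19
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/base/node:20.10.0-alpine3.19
```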
Enable Image Signing
Verify images are from trusted sources:
Cosign (Sigstore):
```bash
# Sign an image
cosign sign --key cosign.key myregistry/myapp:v1.0

# Verify before deployment
cosign verify --key cosign.pub myregistry/myapp:v1.0
```
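Cosign also supports keyless signing, where the signature is tied to an OIDC identity (for example a GitHub Actions workflow) and recorded in Sigstore's transparency log. A sketch using Cosign 2.x; the identity pattern is a placeholder for your CI:

```bash
# Keyless signing - triggers an OIDC flow and logs the signature in Rekor
cosign sign --yes myregistry/myapp:v1.0

# Keyless verification pinned to the expected signer identity and issuer
cosign verify \
  --certificate-identity-regexp 'https://github.com/myorg/.*' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  myregistry/myapp:v1.0
```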
Kubernetes admission with Cosign:
```yaml
apiVersion: policy.sigstore.dev/v1alpha1
kind: ClusterImagePolicy
metadata:
  name: require-signatures
spec:
  images:
  - glob: "myregistry.com/**"
  authorities:
  - keyless:
      url: https://fulcio.sigstore.dev
```
Implement Registry Access Controls
Limit who can push and pull images:
Example AWS ECR repository policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPush",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/ci-cd-role"
      },
      "Action": [
        "ecr:PutImage",
        "ecr:InitiateLayerUpload"
      ]
    }
  ]
}
```
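Assuming the policy above is saved as ecr-policy.json and the repository is named myapp, it can be attached and reviewed like this:

```bash
# Attach the repository policy
aws ecr set-repository-policy \
  --repository-name myapp \
  --policy-text file://ecr-policy.json

# Confirm what is in effect
aws ecr get-repository-policy --repository-name myapp
```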
Run: Secure Your Runtime
Even secure images can be exploited at runtime. Apply defense-in-depth.
Kubernetes Pod Security Standards
Enforce baseline security for all pods:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
The restricted profile enforces:
- Non-root containers
- No privilege escalation
- All capabilities dropped (only NET_BIND_SERVICE may be added back)
- A RuntimeDefault or Localhost seccomp profile
- Restricted volume types
A read-only root filesystem is not part of the standard, so set readOnlyRootFilesystem in the container security context yourself (shown in the next section).
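You can also apply these labels to an existing namespace with kubectl; a quick sketch, including a server-side dry run that flags workloads that would violate the profile:

```bash
# Apply (or update) the restricted Pod Security Standard labels
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted \
  --overwrite

# Preview violations without changing anything
kubectl label --dry-run=server --overwrite namespace production \
  pod-security.kubernetes.io/enforce=restricted
```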
Apply Security Contexts
Define security settings per pod or container:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: myapp:v1.0
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
    resources:
      limits:
        cpu: "500m"
        memory: "128Mi"
```
Implement Network Policies
Default Kubernetes networking allows all pod-to-pod communication. Restrict it:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```
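To spot-check enforcement (assuming your CNI implements NetworkPolicy; the service name is illustrative), launch a throwaway pod without the frontend label and confirm the API is unreachable. Keep in mind that the default-deny policy above also blocks egress, so workloads that need outbound traffic, including DNS lookups, will need explicit egress rules:

```bash
# With default-deny in place, an unlabeled pod should fail to reach the API
kubectl run np-test --rm -it --restart=Never --image=busybox:1.36 -n production \
  -- wget -qO- -T 5 http://api.production.svc.cluster.local:8080 \
  || echo "blocked as expected"
```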
Protect the Kubernetes API Server
The API server is the control plane—protect it:
- Enable RBAC (Role-Based Access Control)
- Use network policies to restrict API access
- Enable audit logging
- Rotate service account tokens
- Disable anonymous authentication
```yaml
# RBAC example - least privilege
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
- kind: ServiceAccount
  name: monitoring-sa
  namespace: production
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
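To confirm the binding grants exactly what you intended (and nothing more), impersonate the service account with kubectl auth can-i:

```bash
# Should print "yes"
kubectl auth can-i list pods -n production \
  --as=system:serviceaccount:production:monitoring-sa

# Should print "no" - the role grants read-only access
kubectl auth can-i delete pods -n production \
  --as=system:serviceaccount:production:monitoring-sa
```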
Secure Secrets Management
Don't store secrets in environment variables or ConfigMaps:
```yaml
# Use external secrets operators
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
spec:
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: database-secret
  data:
  - secretKey: password
    remoteRef:
      key: prod/database
      property: password
```
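Assuming the External Secrets Operator is installed and the aws-secrets-manager ClusterSecretStore is configured, the upstream secret might be created and the sync verified like this (names and the value are placeholders):

```bash
# Create the source secret in AWS Secrets Manager
aws secretsmanager create-secret \
  --name prod/database \
  --secret-string '{"password":"change-me"}'

# Once the operator reconciles, a regular Kubernetes Secret appears
kubectl get secret database-secret -o jsonpath='{.data.password}' | base64 -d
```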
Enable Runtime Protection
Detect and block malicious behavior at runtime:
Falco rules example:
```yaml
- rule: Shell Spawned in Container
  desc: Detect shell spawned in a container
  condition: >
    spawned_process and
    container and
    shell_procs
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name
    shell=%proc.name parent=%proc.pname)
  priority: WARNING
```
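One common way to run Falco on a cluster is via its official Helm chart; a minimal sketch (values and rule tuning will vary by environment):

```bash
# Install Falco into its own namespace
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco --namespace falco --create-namespace

# Tail alerts, e.g. to watch the shell-in-container rule fire
kubectl logs -n falco -l app.kubernetes.io/name=falco -f
```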
Cloud Provider Container Security
AWS (EKS)
- ECR image scanning - Automatic vulnerability scanning (enabling it is sketched after this list)
- GuardDuty for EKS - Runtime threat detection
- Pod Identity - IAM roles for service accounts
- Security groups for pods - Network segmentation
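For example, ECR scanning can be enabled per repository at creation time, or registry-wide with enhanced (Amazon Inspector based) scanning; the repository name is a placeholder and the rules JSON is a sketch:

```bash
# Basic scan-on-push for a single repository
aws ecr create-repository --repository-name myapp \
  --image-scanning-configuration scanOnPush=true

# Enhanced scanning across the whole registry
aws ecr put-registry-scanning-configuration \
  --scan-type ENHANCED \
  --rules '[{"scanFrequency":"SCAN_ON_PUSH","repositoryFilters":[{"filter":"*","filterType":"WILDCARD"}]}]'
```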
Azure (AKS)
- Microsoft Defender for Containers - Full lifecycle protection
- Azure Policy for AKS - Enforce pod security standards
- Workload identity - Microsoft Entra ID for pods
- Network policies - Azure CNI or Calico
GCP (GKE)
- Binary Authorization - Require signed images
- Container Threat Detection - Runtime monitoring
- Workload Identity - GCP IAM for pods
- GKE Autopilot - Hardened by default
Security Checklist
| Practice | Priority |
|---|---|
| Scan images for vulnerabilities | Critical |
| Run containers as non-root | Critical |
| Use minimal base images | High |
| Enable Pod Security Standards | High |
| Implement network policies | High |
| Sign and verify images | High |
| Apply security contexts | High |
| Protect secrets | High |
| Enable runtime protection | Medium |
| Enable audit logging | Medium |
Frequently Asked Questions
What's the biggest container security risk?
Vulnerable base images are the most common issue. Public images often contain known CVEs that attackers actively exploit. Always scan images before deployment and use minimal base images to reduce attack surface.
Should I use Docker or Kubernetes security features?
Both. Docker security (non-root users, capabilities, seccomp) applies at the container level. Kubernetes security (RBAC, network policies, pod security) applies at the orchestration level. They're complementary, not alternatives.
How do I handle container vulnerabilities in production?
Implement a vulnerability management process: scan images in CI/CD, block critical vulnerabilities from deploying, continuously scan running containers, and have a process for emergency patching. Not every vulnerability needs immediate action—prioritize by exploitability and exposure.
Is container isolation as strong as VM isolation?
No. Containers share the host kernel, so a kernel vulnerability can affect all containers. VMs have stronger isolation through hypervisor separation. For high-security workloads, consider gVisor, Kata Containers, or running containers in VMs.
How do I secure the Docker socket?
Never expose the Docker socket to containers—it's equivalent to root access to the host. If you need container management from within containers, use Kubernetes APIs instead, or carefully restrict access with tools like Docker socket proxies.
Take Action
- Audit your images - Scan all production images for vulnerabilities
- Enable Pod Security Standards - Apply the restricted profile to namespaces
- Implement network policies - Start with default deny, allow explicitly
- Secure your registry - Enable scanning, access controls, and image signing
- Add runtime protection - Deploy Falco or your cloud provider's container security
For more cloud security guidance, see our comprehensive guide: 30 Cloud Security Tips for 2026.
