Envoy Gateway Chart
Modern Kubernetes Gateway API implementation powered by Envoy Proxy. Envoy Gateway provides a high-performance, extensible gateway for managing ingress traffic with native support for HTTP, HTTPS, gRPC, TCP, and UDP protocols.
Key Features
- **Profile Presets**: Production-ready deployment configurations
  - Development profile (single replica, minimal resources)
  - Production HA profile (DaemonSet, anti-affinity)
- **Gateway API Native**: First-class Gateway API support
  - Working Gateway and HTTPRoute examples
  - Automated GatewayClass creation
  - Support for all Gateway API v1.2.1 resources
  - Default `Gateway` resource (when `gateway.create: true`)
- **Certgen Job**: Automatic internal TLS cert generation (pre-install hook)
  - Runs as a Helm pre-install hook
  - Generates TLS certs for internal EG components
- **SecurityPolicy**: Native JWT, OIDC, API Key, CORS authentication
  - JWT with remote JWKS support
  - OIDC with provider integration (Google, Auth0, etc.)
  - CORS configuration at the Gateway or HTTPRoute level
  - API Key authentication
- **BackendTrafficPolicy**: Retries, circuit breaking, timeouts, health checks
  - Configurable retry policies per route
  - Circuit breaker with connection limits
  - Active health checks
- **ClientTrafficPolicy**: Connection limits, TLS version control, HTTP/2 tuning
  - Minimum/maximum TLS version enforcement
  - Connection limit configuration
  - Request timeout settings
- **Rate Limiting**: Distributed rate limiting with Redis
  - `helmforge/redis` subchart (standalone topology)
  - External Redis support for bring-your-own setups
  - Pre-configured rate limit presets (API, Strict)
- **Comprehensive Observability**: Production-grade monitoring
  - Prometheus ServiceMonitors for metrics
  - PrometheusRule with 6 alert rules
  - Pre-built Grafana dashboards
  - Structured access logs (JSON/text)
- **Security Hardening**: Zero-trust network policies
  - NetworkPolicies for controller, proxy, and Redis
  - PodSecurityStandards (restricted mode)
  - Non-root containers with minimal capabilities
Production-Ready Capabilities
- ✅ High availability with leader election and anti-affinity
- ✅ Horizontal Pod Autoscaler for proxy scaling
- ✅ PodDisruptionBudgets for maintenance safety
- ✅ Resource limits and requests tuning per profile
- ✅ Health checks (liveness and readiness probes)
- ✅ Security contexts (runAsNonRoot, drop ALL capabilities)
- ✅ Support for Deployment and DaemonSet proxy kinds (operator-managed)
- ✅ Comprehensive RBAC for Gateway API and Envoy Gateway CRDs
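Several of these capabilities are toggled directly through chart values. A minimal sketch (keys follow the configuration reference later in this document; verify against the chart's values.yaml before use):

```yaml
# Sketch: enabling the HA-related capabilities via values
controller:
  replicaCount: 2          # leader election between replicas
proxy:
  autoscaling:
    enabled: true          # HPA for proxy scaling
    minReplicas: 2
    maxReplicas: 10
highAvailability:
  enabled: true
  podDisruptionBudget:
    minAvailable: 1        # maintenance safety
security:
  podSecurityStandards: true
```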
Installation
Prerequisites
- Kubernetes 1.24+ (Gateway API v1 support)
- Helm 3.8+
- Gateway API CRDs installed (see Quick Start)
HTTPS Repository (Recommended)
helm repo add helmforge https://repo.helmforge.dev
helm repo update
helm install envoy-gateway helmforge/envoy-gateway
OCI Registry
helm install envoy-gateway oci://ghcr.io/helmforgedev/helm/envoy-gateway --version 1.3.0
Verify Installation
kubectl get pods -l app.kubernetes.io/name=envoy-gateway
kubectl get gatewayclass envoy-gateway
kubectl get gateway,httproute -A
Quick Start
1. Install Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml
2. Choose a Profile
Development Profile
Perfect for local testing on k3d, kind, or minikube:
helm install envoy-gateway helmforge/envoy-gateway \
--set profile=dev
Profile Configuration:
- 1 replica controller
- Minimal resources (100m CPU, 128Mi memory)
- No TLS (HTTP only)
- Example Gateway and HTTPRoute created
- EG operator provisions proxy pods automatically
Production HA Profile
For production deployments with high availability:
helm install envoy-gateway helmforge/envoy-gateway \
--set profile=production-ha
Profile Configuration:
- 2 replica controller with leader election
- Proxy as DaemonSet (`proxy.kind: DaemonSet`) — EG operator provisions one proxy per node
- Production resources (1000m CPU, 1Gi memory)
- Anti-affinity rules
- PodDisruptionBudgets
3. Test the Gateway
Wait for the Gateway to be ready:
kubectl wait --for=condition=Programmed gateway/envoy-gateway -n default --timeout=300s
Get the Gateway IP and test:
# Get the Gateway IP (EG creates the service automatically)
export GATEWAY_IP=$(kubectl get gateway envoy-gateway -o jsonpath='{.status.addresses[0].value}')
# Or find the EG-provisioned service (name is dynamic: envoy-<namespace>-<gateway-name>-<uid>)
kubectl get svc -l gateway.envoyproxy.io/owning-gateway-name=envoy-gateway
# Test the example HTTPRoute
curl -H "Host: example.local" http://$GATEWAY_IP/
Note: Envoy Gateway is a Kubernetes operator. When a `Gateway` resource exists, EG automatically provisions proxy pods and creates a service with a dynamic name (`envoy-<namespace>-<gateway-name>-<uid>`). There is no manually created `envoy-gateway-proxy` service.
Expected response from the example backend (httpbin):
{
"args": {},
"headers": {
"Host": "example.local",
...
},
"url": "http://example.local/"
}
Feature Deep-Dives
Profile Presets
Profile presets provide opinionated, production-ready configurations for different environments. Profiles override specific values to match deployment requirements.
Available Profiles
| Profile | Replicas | Proxy Kind | Resources | TLS | Use Case |
|---|---|---|---|---|---|
| dev | 1 | Deployment | Minimal | No | Local development |
| production-ha | 2 + DaemonSet | DaemonSet | Production | Optional | Production |
| custom | Configurable | Configurable | Configurable | Optional | Full control |
How Profiles Work
Profiles use Helm template helpers to override values at render time:
# values.yaml
profile: production-ha
# Translates to:
controller:
replicaCount: 2 # via envoy-gateway.controller.replicaCount helper
resources:
requests:
cpu: 1000m
memory: 1Gi
proxy:
kind: DaemonSet # EG operator provisions DaemonSet proxies per node
highAvailability:
enabled: true
Switching Profiles
Upgrade an existing deployment to a different profile:
helm upgrade envoy-gateway helmforge/envoy-gateway \
--set profile=production-ha \
--reuse-values
Profile Customization
Override individual values while using a profile:
helm install envoy-gateway helmforge/envoy-gateway \
--set profile=production-ha \
--set controller.replicaCount=3 \
--set proxy.resources.requests.memory=2Gi
Custom values take precedence over profile defaults.
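For repeatable installs, the same overrides can live in a values file instead of `--set` flags (a sketch; keys match the chart's configuration reference):

```yaml
# my-values.yaml — production-ha base with targeted overrides
profile: production-ha
controller:
  replicaCount: 3
proxy:
  resources:
    requests:
      memory: 2Gi
```

Install with `helm install envoy-gateway helmforge/envoy-gateway -f my-values.yaml`.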
Gateway API Examples
The chart includes working Gateway API examples for quick learning and validation.
Example Resources Created
When `gatewayAPI.examples.enabled=true` (default):
- **Gateway** (`envoy-gateway-example`)
  - HTTP listener on port 80
  - HTTPS listener on port 443 (if cert-manager enabled)
  - Allows routes from all namespaces
- **HTTPRoute** (`envoy-gateway-example`)
  - Routes traffic to example backend (httpbin)
  - Hostname: `example.local`
  - Path prefix: `/`
- **Backend Deployment and Service** (`example-backend`)
  - httpbin container for testing
  - ClusterIP service on port 80
- **Certificate** (if cert-manager enabled)
  - TLS certificate for `example.local`
  - Wildcard support: `*.example.local`
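The example Gateway corresponds roughly to the following manifest (an illustrative sketch, not the chart's exact rendered output):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: envoy-gateway-example
spec:
  gatewayClassName: envoy-gateway   # chart's default GatewayClass name
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All                 # routes allowed from all namespaces
```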
Accessing Examples
HTTP Access:
# Get Gateway IP (EG provisions the service automatically)
export GATEWAY_IP=$(kubectl get gateway envoy-gateway -o jsonpath='{.status.addresses[0].value}')
# Or find the EG-provisioned service by label
kubectl get svc -l gateway.envoyproxy.io/owning-gateway-name=envoy-gateway
# Test HTTP
curl -H "Host: example.local" http://$GATEWAY_IP/
HTTPS Access (production-ha profile):
# Test HTTPS (with self-signed cert warning)
curl -k -H "Host: example.local" https://$GATEWAY_IP/
Port-forward (for local development):
# Get the EG-provisioned service name first
EG_SVC=$(kubectl get svc -l gateway.envoyproxy.io/owning-gateway-name=envoy-gateway -o name | head -1)
kubectl port-forward $EG_SVC 8080:80
curl -H "Host: example.local" http://localhost:8080/
Inspecting Example Resources
# View Gateway configuration
kubectl describe gateway envoy-gateway-example
# View HTTPRoute configuration
kubectl describe httproute envoy-gateway-example
# Check certificate status (if cert-manager enabled)
kubectl get certificate envoy-gateway-example-tls
kubectl describe certificate envoy-gateway-example-tls
Disabling Examples
For production deployments, disable examples:
helm install envoy-gateway helmforge/envoy-gateway \
--set gatewayAPI.examples.enabled=false
Certificate Management
Certgen Job (Internal TLS)
The chart ships a certgen job that runs as a pre-install Helm hook. It automatically generates TLS certificates for internal Envoy Gateway components (controller ↔ envoy xDS communication). No manual setup is required.
The certgen binary creates four hardcoded secrets in the release namespace:
| Secret | Type | Purpose |
|---|---|---|
| `envoy-gateway` | `kubernetes.io/tls` | Controller TLS (xDS server) |
| `envoy` | `kubernetes.io/tls` | Proxy TLS (xDS client) |
| `envoy-rate-limit` | `kubernetes.io/tls` | Rate limit service TLS |
| `envoy-oidc-hmac` | Opaque | OIDC HMAC key |
Important: These secret names are hardcoded by the EG certgen binary — they are always named as above regardless of your Helm release name. The controller deployment mounts the `envoy-gateway` secret at `/certs`.
The certgen job runs with `--disable-topology-injector` to skip patching MutatingWebhookConfiguration resources (the topology injector webhook is not deployed by this chart).
# Verify the certgen job ran successfully
kubectl get job -l app.kubernetes.io/component=certgen
kubectl logs -l app.kubernetes.io/component=certgen
# Verify all 4 secrets were created
kubectl get secrets | grep -E "envoy-gateway|^envoy |envoy-rate-limit|envoy-oidc-hmac"
xDS Internal Service
The Envoy Gateway controller serves the xDS gRPC API on port 18000. The chart creates a dedicated Kubernetes Service named exactly `envoy-gateway` (namespace-scoped) that exposes this port:
`envoy-gateway.<release-namespace>.svc.cluster.local:18000`
Why the hardcoded name? EG proxy pods receive a bootstrap config generated by the EG operator. This bootstrap config hardcodes the xDS server address as `envoy-gateway.<namespace>.svc.cluster.local:18000`. The Service name must match exactly — this is an upstream EG convention.
The chart also exposes port 18002 (wasm OCI cache) on this service.
| Port | Protocol | Purpose |
|---|---|---|
| 8080 | HTTP | Admin API (internal) |
| 8081 | HTTP | Health probes (/healthz, /readyz) and Prometheus metrics |
| 18000 | gRPC | xDS server (for proxy communication) |
| 18002 | gRPC | Wasm OCI cache |
# Verify the xDS service exists
kubectl get svc envoy-gateway -n <release-namespace>
# Expected: ClusterIP service with ports 18000/TCP, 18002/TCP
# Test xDS connectivity (gRPC)
kubectl port-forward svc/envoy-gateway 18000:18000 -n <release-namespace>
# If proxy is connected, controller logs show: "open delta watch" messages
#### HTTPS Gateway Listeners with External cert-manager
For HTTPS Gateway listeners, you can use [cert-manager](https://cert-manager.io/) to provision TLS secrets independently of this chart. The `certificates.certManager` chart integration has been removed — instead, create the `Certificate` resource separately and reference the resulting secret in your Gateway listener.
**Step 1 — Create a ClusterIssuer:**
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: letsencrypt-prod-key
solvers:
- http01:
gateway:
parentRefs:
- name: envoy-gateway
namespace: default
```
**Step 2 — Create a Certificate:**
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: my-gateway-tls
namespace: default
spec:
secretName: my-gateway-tls
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
dnsNames:
- api.example.com
- '*.api.example.com'
**Step 3 — Reference the secret in your Gateway:**
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: my-gateway
spec:
gatewayClassName: envoy-gateway
listeners:
- name: https
protocol: HTTPS
port: 443
tls:
mode: Terminate
certificateRefs:
- kind: Secret
name: my-gateway-tls
Or enable the HTTPS listener via chart values:
gateway:
create: true
listeners:
https:
enabled: true
Rate Limiting
Distributed rate limiting with Redis backend for API protection.
Architecture
┌─────────────┐
│ Gateway │
│ (Envoy) │
└──────┬──────┘
│
├─────────────┐
│ │
▼ ▼
┌────────────┐ ┌─────────┐
│ Backend │ │ Redis │
│ Service │ │ (Rate │
└────────────┘ │ Limits) │
└─────────┘
Enabling Rate Limiting
Rate limiting state is stored in Redis. The chart uses helmforge/redis as a subchart dependency — no separate Redis installation required.
Option 1: Redis subchart (recommended)
helm install envoy-gateway helmforge/envoy-gateway \
--set rateLimiting.enabled=true \
--set redis.enabled=true \
--set rateLimiting.presets.api=true
This deploys an `envoy-gateway-redis` StatefulSet using the standalone topology from the `helmforge/redis` chart. Redis is available at `<release>-redis.<namespace>.svc.cluster.local:6379`.
Option 2: External Redis (bring your own)
helm install envoy-gateway helmforge/envoy-gateway \
--set rateLimiting.enabled=true \
--set redis.enabled=false \
--set rateLimiting.externalRedis.host=redis.example.com \
--set rateLimiting.externalRedis.port=6379
Redis Subchart Configuration
The `redis:` section is forwarded directly to the `helmforge/redis` chart.
Persistent storage (default: `enabled: true`, 1Gi):
redis:
enabled: true
architecture: standalone
auth:
enabled: true
password: 'changeme' # or use existingSecret
standalone:
persistence:
enabled: true
size: 2Gi
storageClass: fast-ssd
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
Disable persistence (test/dev environments):
redis:
enabled: true
auth:
enabled: false
standalone:
persistence:
enabled: false
External Redis with authentication:
redis:
enabled: false
rateLimiting:
externalRedis:
host: redis.example.com
port: 6379
auth:
enabled: true
secretName: redis-auth
secretKey: password
Create the secret:
kubectl create secret generic redis-auth \
--from-literal=password=your-redis-password
Rate Limit Presets
The chart includes two pre-configured rate limit policies:
API Preset (100 requests/minute per IP):
rateLimiting:
presets:
api: true
Creates BackendTrafficPolicy:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: envoy-gateway-ratelimit-api
spec:
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- headers:
- name: x-real-ip
type: Distinct
limit:
requests: 100
unit: Minute
Strict Preset (10 requests/minute per IP):
rateLimiting:
presets:
strict: true
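The strict preset presumably mirrors the API preset's BackendTrafficPolicy with a lower limit. A sketch of the expected shape (the actual rendered policy name and rules may differ):

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: envoy-gateway-ratelimit-strict   # assumed name, mirroring the API preset
spec:
  rateLimit:
    type: Global
    global:
      rules:
        - clientSelectors:
            - headers:
                - name: x-real-ip
                  type: Distinct
          limit:
            requests: 10
            unit: Minute
```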
Attaching Rate Limits to Routes
Attach a rate limit policy to an HTTPRoute using ExtensionRef:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: my-api
spec:
parentRefs:
- name: my-gateway
rules:
- matches:
- path:
type: PathPrefix
value: /api
backendRefs:
- name: api-service
port: 80
filters:
- type: ExtensionRef
extensionRef:
group: gateway.envoyproxy.io
kind: BackendTrafficPolicy
name: envoy-gateway-ratelimit-api
Custom Rate Limit Policies
Create a custom BackendTrafficPolicy for specific needs:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: custom-ratelimit
spec:
targetRef:
kind: HTTPRoute
name: my-api
rateLimit:
type: Global
global:
rules:
# 1000 requests/minute for authenticated users
- clientSelectors:
- headers:
- name: authorization
type: Exists
limit:
requests: 1000
unit: Minute
# 100 requests/minute for anonymous users
- clientSelectors:
- headers:
- name: x-real-ip
type: Distinct
limit:
requests: 100
unit: Minute
Monitoring Rate Limits
Check Redis status:
# With subchart (StatefulSet named <release>-redis)
kubectl get statefulset envoy-gateway-redis -n <namespace>
kubectl exec envoy-gateway-redis-0 -- redis-cli info clients
View rate limit metrics:
kubectl port-forward svc/envoy-gateway-controller 8081:8081
curl http://localhost:8081/metrics | grep ratelimit
Observability
Comprehensive monitoring with Prometheus and Grafana integration.
Components
- ServiceMonitors: Scrape metrics from controller (proxy service name is dynamic — managed by EG operator)
- PrometheusRule: 6 alert rules for common issues
- Grafana Dashboard: Pre-built dashboard for request metrics
- Access Logs: Structured logs in JSON or text format
Enabling Monitoring
Full monitoring stack:
helm install envoy-gateway helmforge/envoy-gateway \
--set monitoring.enabled=true \
--set monitoring.prometheus.serviceMonitor=true \
--set monitoring.prometheus.prometheusRule=true \
--set monitoring.grafana.dashboards=true
Prometheus Integration
ServiceMonitors scrape metrics from:
- **Controller** (port 8081): `envoy_gateway_*` metrics
  - Reconciliation stats
  - Go runtime metrics
- **Proxy** (port 19000): `envoy_cluster_*` metrics
  - Request rate, latency, errors
  - Connection pool stats
  - Circuit breaker status

Note: The proxy service name is dynamic (`envoy-<namespace>-<gateway-name>-<uid>`); a proxy ServiceMonitor is not included because of this dynamic naming. Use pod-level port-forward or a custom ServiceMonitor with label selectors.
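Since port 19000 is the proxy admin port on each pod, a PodMonitor that selects on the stable owning-gateway label is one way around the dynamic naming. A sketch, assuming the Prometheus Operator CRDs are installed and that proxy pods carry this label (verify the port against your pods):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: envoy-proxy-metrics
spec:
  selector:
    matchLabels:
      # Stable EG-managed label; survives the dynamic service naming
      gateway.envoyproxy.io/owning-gateway-name: envoy-gateway
  podMetricsEndpoints:
    - targetPort: 19000          # Envoy admin port (assumed)
      path: /stats/prometheus
```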
Access metrics:
# Controller metrics
kubectl port-forward svc/envoy-gateway-controller 8081:8081
curl http://localhost:8081/metrics
# Proxy metrics (find pod via EG label)
PROXY_POD=$(kubectl get pod -l gateway.envoyproxy.io/owning-gateway-name=envoy-gateway -o name | head -1)
kubectl port-forward $PROXY_POD 19000:19000
curl http://localhost:19000/stats/prometheus
Alert Rules
The chart includes 6 PrometheusRule alerts:
| Alert | Threshold | Severity | Description |
|---|---|---|---|
| EnvoyGatewayHighErrorRate | >5% 5xx | Warning | High error rate on backend |
| EnvoyGatewayHighLatency | p99 >1s | Warning | High response latency |
| EnvoyGatewayCircuitBreakerOpen | >0 | Warning | Circuit breaker triggered |
| EnvoyGatewayControllerDown | 5 min | Critical | Controller unavailable |
| EnvoyGatewayHighConnectionCount | >1000 | Warning | High active connections |
| EnvoyGatewayRateLimitExceeded | >10 req/s | Info | Rate limit rejections |
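As an illustration of the shape of these rules, the high-error-rate alert would look roughly like this (a sketch only; the chart's actual expression, metric names, and labels may differ):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: envoy-gateway
spec:
  groups:
    - name: envoy-gateway
      rules:
        - alert: EnvoyGatewayHighErrorRate
          # Illustrative expression: share of 5xx upstream responses > 5%
          expr: |
            sum(rate(envoy_cluster_upstream_rq_xx{envoy_response_code_class="5"}[5m]))
              / sum(rate(envoy_cluster_upstream_rq_total[5m])) > 0.05
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: High 5xx error rate on backend
```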
View alerts:
kubectl get prometheusrule envoy-gateway
kubectl describe prometheusrule envoy-gateway
Grafana Dashboard
The chart creates a ConfigMap with a pre-built Grafana dashboard:
Dashboard Panels:
- Request rate per route
- Error rate (4xx, 5xx)
- Latency percentiles (p50, p95, p99)
- Active connections
- Circuit breaker status
- Rate limit rejections
Import Dashboard:
The dashboard is automatically discovered by Grafana Operator via the grafana_dashboard: "1" label.
Manual import:
# Extract dashboard JSON
kubectl get configmap envoy-gateway-grafana-dashboard -o jsonpath='{.data.envoy-gateway-overview\.json}' > dashboard.json
# Import to Grafana UI
# Dashboards → Import → Upload JSON file
Access Logs
Enable structured access logs:
monitoring:
accessLogs:
enabled: true
format: json # or "text"
View access logs:
kubectl logs -l app.kubernetes.io/component=proxy -f
JSON format example:
{
"start_time": "2024-04-09T13:00:00.000Z",
"method": "GET",
"path": "/api/users",
"protocol": "HTTP/1.1",
"response_code": 200,
"response_flags": "-",
"bytes_received": 0,
"bytes_sent": 1234,
"duration": 45,
"upstream_service_time": "42",
"x_forwarded_for": "10.0.0.5",
"user_agent": "curl/7.68.0",
"request_id": "f47ac10b-58cc-4372-a567-0e02b2c3d479"
}
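For quick triage, the JSON logs can be filtered with standard tools. A sketch that flags server errors, shown against an inline sample (in practice, pipe `kubectl logs` output in):

```shell
# Match access-log lines whose response_code is 5xx.
# In practice: kubectl logs -l app.kubernetes.io/component=proxy | grep -E '"response_code": *5[0-9]{2}'
sample='{"method": "GET", "path": "/api/users", "response_code": 503, "duration": 45}'
echo "$sample" | grep -E '"response_code": *5[0-9]{2}'
```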
Custom Metrics
Expose custom metrics via Envoy’s stats endpoint:
# View all available metrics
kubectl port-forward <proxy-pod> 19000:19000
curl http://localhost:19000/stats
# Filter specific metrics
curl http://localhost:19000/stats?filter=http.downstream_rq
Security
Zero-trust network policies and security hardening.
NetworkPolicies
When `security.networkPolicies=true`, the chart creates NetworkPolicies for:
Controller Policy:
- Ingress: Allow from proxy pods only
- Egress: Allow to Kubernetes API (443, 6443) and DNS (53)
Proxy Policy:
- Ingress: Allow from all namespaces (ingress traffic)
- Egress: Allow to controller, backends, and Redis (if enabled)
Redis Policy (if rate limiting enabled):
- Ingress: Allow from controller and proxy only
- Egress: None (no external connections needed)
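The Redis policy, for example, would take roughly this shape (a sketch; the pod label selectors are illustrative and should be checked against the chart's rendered templates):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: envoy-gateway-redis
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: redis        # assumed subchart label
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: envoy-gateway   # assumed label
      ports:
        - port: 6379
  # No egress rules: with Egress listed in policyTypes, all outbound
  # traffic from Redis is denied.
```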
Enabling NetworkPolicies
helm install envoy-gateway helmforge/envoy-gateway \
--set security.networkPolicies=true
View policies:
kubectl get networkpolicy
kubectl describe networkpolicy envoy-gateway-controller
Testing NetworkPolicies
Verify connectivity from a test pod:
# Test pod in same namespace (should succeed)
kubectl run test --rm -it --image=curlimages/curl -- sh
curl http://<eg-provisioned-service>   # dynamic name; find it with: kubectl get svc -l gateway.envoyproxy.io/owning-gateway-name=envoy-gateway
# Test Redis access from unauthorized pod (should fail)
kubectl run unauthorized --rm -it --image=redis:alpine -- sh
redis-cli -h envoy-gateway-redis ping
# Expected: connection refused or timeout
Pod Security Standards
The chart enforces Pod Security Standards (restricted mode):
security:
podSecurityStandards: true
Security Context (controller and proxy):
podSecurityContext:
runAsNonRoot: true
runAsUser: 65532
fsGroup: 65532
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ['ALL']
readOnlyRootFilesystem: true
RBAC
The chart creates comprehensive RBAC resources. The controller requires cluster-wide permissions because it manages proxy infrastructure across all namespaces.
ClusterRole Permissions:
| API Group | Resources | Verbs |
|---|---|---|
"" (core) | namespaces, nodes, pods, endpoints | get, list, watch |
"" (core) | services, secrets, configmaps, serviceaccounts | get, list, watch, create, update, patch, delete, deletecollection |
apps | deployments, daemonsets | get, list, watch, create, update, patch, delete, deletecollection |
autoscaling | horizontalpodautoscalers | get, list, watch, create, update, patch, delete, deletecollection |
batch | jobs | get, list, watch, create, update, patch, delete, deletecollection |
policy | poddisruptionbudgets | get, list, watch, create, update, patch, delete, deletecollection |
discovery.k8s.io | endpointslices | get, list, watch |
rbac.authorization.k8s.io | roles, rolebindings | get, list, watch, create, update, patch, delete |
gateway.networking.k8s.io | gateways, httproutes, gatewayclasses, grpcroutes, etc. | get, list, watch, create, update, patch, delete |
gateway.envoyproxy.io | envoyproxies, securitypolicies, backendtrafficpolicies, etc. | get, list, watch, create, update, patch, delete |
"" (core) | events | create, patch |
Note on `deletecollection`: The EG operator calls `deletecollection` before reconciling proxy infrastructure (deployments, daemonsets, HPA, jobs, PDB). This is an upstream EG behavior for clean reconciliation.
Namespace Role (leader election):
- ConfigMaps and Leases for leader election
- Events for audit trail
Certgen ClusterRole (pre-install hook):
The certgen job runs with a separate ClusterRole (not Role) because `mutatingwebhookconfigurations` is a cluster-scoped resource:
| API Group | Resources | Verbs |
|---|---|---|
| "" (core) | secrets | get, list, create, update, patch |
| admissionregistration.k8s.io | mutatingwebhookconfigurations | get, update, patch |
View RBAC:
kubectl get clusterrole <release-name>-envoy-gateway
kubectl describe clusterrole <release-name>-envoy-gateway
kubectl get rolebinding -n <namespace>
Security Best Practices
1. **Use NetworkPolicies in production**: `--set security.networkPolicies=true`
2. **Enable TLS** with an external cert-manager Certificate resource referencing your Gateway listener’s `certificateRefs`.
3. **Restrict service access** with Gateway allowedRoutes:
   ```yaml
   listeners:
     - allowedRoutes:
         namespaces:
           from: Same # Only same namespace
   ```
4. **Use JWT authentication** (via SecurityPolicy):
   ```yaml
   apiVersion: gateway.envoyproxy.io/v1alpha1
   kind: SecurityPolicy
   spec:
     jwt:
       providers:
         - name: auth0
           issuer: https://your-tenant.auth0.com/
   ```
5. **Monitor security events**: `kubectl get events --field-selector reason=NetworkPolicy`
SecurityPolicy — Native Authentication
Envoy Gateway’s SecurityPolicy CRD provides authentication and authorization at the Gateway or HTTPRoute level — a major advantage over nginx-based controllers.
JWT Authentication
securityPolicy:
create: true
jwt:
enabled: true
providers:
- name: auth0
issuer: https://your-tenant.auth0.com/
audiences:
- your-api-audience
remoteJWKS:
uri: https://your-tenant.auth0.com/.well-known/jwks.json
OIDC Authentication
securityPolicy:
create: true
oidc:
enabled: true
provider:
issuer: https://accounts.google.com
clientID: your-client-id
clientSecret:
name: oidc-secret # kubectl create secret generic oidc-secret --from-literal=clientSecret=...
redirectURL: https://myapp.example.com/oauth2/callback
scopes:
- openid
- email
- profile
CORS
securityPolicy:
create: true
cors:
enabled: true
allowOrigins:
- 'https://app.example.com'
allowMethods:
- GET
- POST
- PUT
- DELETE
allowHeaders:
- Authorization
- Content-Type
maxAge: 86400
API Key
securityPolicy:
create: true
apiKey:
enabled: true
credentials:
- name: my-api-keys # Secret with API keys
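The referenced Secret holds the accepted keys. Its exact data layout is defined by Envoy Gateway's API key auth; as an illustrative sketch (the entry format shown is an assumption — consult the EG SecurityPolicy documentation for the required shape):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-api-keys
type: Opaque
stringData:
  # "client-name: key" entries are an assumed layout
  client-a: supersecretkey123
```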
BackendTrafficPolicy — Traffic Management
Configure retries, circuit breaking, timeouts, and health checks per Gateway or HTTPRoute.
Retries with Circuit Breaking
backendTrafficPolicy:
create: true
retries:
enabled: true
numRetries: 3
perTryTimeout: '10s'
retryOn:
httpStatusCodes:
- 502
- 503
- 504
triggers:
- connect-failure
- retriable-status-codes
circuitBreaker:
enabled: true
maxConnections: 1024
maxParallelRequests: 1024
Timeouts
backendTrafficPolicy:
create: true
timeout:
http:
requestTimeout: '30s'
idleTimeout: '60s'
Active Health Checks
backendTrafficPolicy:
create: true
healthCheck:
active:
enabled: true
timeout: '1s'
interval: '3s'
unhealthyThreshold: 3
healthyThreshold: 1
http:
path: '/health'
expectedStatuses:
- 200
ClientTrafficPolicy — Listener Configuration
Configure connection limits, TLS settings, and HTTP protocol options at the listener level.
TLS Hardening
clientTrafficPolicy:
create: true
tls:
minVersion: 'TLSv1.3'
maxVersion: 'TLSv1.3'
Connection Limits
clientTrafficPolicy:
create: true
connection:
connectionLimit:
value: 10000
Timeouts
clientTrafficPolicy:
create: true
timeout:
http:
requestReceivedTimeout: '10s'
Configuration Reference
Global Values
| Parameter | Description | Default |
|---|---|---|
| `profile` | Profile preset (`dev`/`production-ha`/`custom`) | `custom` |
| `nameOverride` | Override chart name | `""` |
| `fullnameOverride` | Override full name | `""` |
| `imagePullSecrets` | Image pull secrets | `[]` |
Controller Configuration
| Parameter | Description | Default |
|---|---|---|
| `controller.replicaCount` | Number of controller replicas | `1` |
| `controller.image.repository` | Controller image | `docker.io/envoyproxy/gateway` |
| `controller.image.tag` | Controller image tag | `v1.7.1` |
| `controller.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `controller.resources.requests.cpu` | CPU request | `100m` |
| `controller.resources.requests.memory` | Memory request | `128Mi` |
| `controller.resources.limits.cpu` | CPU limit | `500m` |
| `controller.resources.limits.memory` | Memory limit | `512Mi` |
| `controller.nodeSelector` | Node selector | `{}` |
| `controller.tolerations` | Tolerations | `[]` |
| `controller.affinity` | Affinity rules | `{}` |
Proxy Configuration
| Parameter | Description | Default |
|---|---|---|
| `proxy.kind` | EG proxy kind: Deployment or DaemonSet (managed by EG operator) | `Deployment` |
| `proxy.replicaCount` | Number of proxy replicas (Deployment only) | `1` |
| `proxy.image.repository` | Proxy image | `docker.io/envoyproxy/envoy` |
| `proxy.image.tag` | Proxy image tag | `distroless-v1.33.0` |
| `proxy.resources.requests.cpu` | CPU request | `100m` |
| `proxy.resources.requests.memory` | Memory request | `128Mi` |
| `proxy.resources.limits.cpu` | CPU limit | `1000m` |
| `proxy.resources.limits.memory` | Memory limit | `1Gi` |
| `proxy.autoscaling.enabled` | Enable HPA | `false` |
| `proxy.autoscaling.minReplicas` | Minimum replicas | `2` |
| `proxy.autoscaling.maxReplicas` | Maximum replicas | `10` |
Gateway API Examples
| Parameter | Description | Default |
|---|---|---|
| `gatewayAPI.examples.enabled` | Create example resources | `true` |
| `gatewayAPI.examples.namespace` | Namespace for examples | `""` (release namespace) |
Gateway Resource
| Parameter | Description | Default |
|---|---|---|
| `gateway.create` | Create default Gateway resource | `true` |
| `gateway.listeners.http.enabled` | Enable HTTP listener | `true` |
| `gateway.listeners.https.enabled` | Enable HTTPS listener | `false` |
Certgen Job
| Parameter | Description | Default |
|---|---|---|
| `certgen.image.tag` | Certgen image tag | `v1.7.1` |
SecurityPolicy
| Parameter | Description | Default |
|---|---|---|
| `securityPolicy.create` | Create SecurityPolicy resource | `false` |
| `securityPolicy.jwt.enabled` | Enable JWT authentication | `false` |
| `securityPolicy.oidc.enabled` | Enable OIDC authentication | `false` |
| `securityPolicy.cors.enabled` | Enable CORS | `false` |
BackendTrafficPolicy
| Parameter | Description | Default |
|---|---|---|
| `backendTrafficPolicy.create` | Create BackendTrafficPolicy | `false` |
| `backendTrafficPolicy.retries.enabled` | Enable request retries | `false` |
| `backendTrafficPolicy.circuitBreaker.enabled` | Enable circuit breaker | `false` |
ClientTrafficPolicy
| Parameter | Description | Default |
|---|---|---|
| `clientTrafficPolicy.create` | Create ClientTrafficPolicy | `false` |
Rate Limiting
| Parameter | Description | Default |
|---|---|---|
| `rateLimiting.enabled` | Enable rate limiting | `false` |
| `rateLimiting.externalRedis.host` | External Redis host (when `redis.enabled=false`) | `""` |
| `rateLimiting.externalRedis.port` | External Redis port | `6379` |
| `rateLimiting.presets.api` | Enable API preset (100 req/min per IP) | `false` |
| `rateLimiting.presets.strict` | Enable strict preset (10 req/min per IP) | `false` |
Redis Subchart
Redis is deployed as a `helmforge/redis` subchart.
All `redis.*` values are forwarded to the redis chart — refer to its documentation for the full list.
| Parameter | Description | Default |
|---|---|---|
| `redis.enabled` | Deploy `helmforge/redis` subchart | `false` |
| `redis.architecture` | Redis topology | `standalone` |
| `redis.auth.enabled` | Enable password authentication | `true` |
| `redis.auth.password` | Redis password (auto-generated if empty) | `""` |
| `redis.standalone.persistence.enabled` | Enable persistent storage | `true` |
| `redis.standalone.persistence.size` | PVC size | `1Gi` |
Monitoring
| Parameter | Description | Default |
|---|---|---|
| `monitoring.enabled` | Enable monitoring | `false` |
| `monitoring.prometheus.serviceMonitor` | Create ServiceMonitors | `true` |
| `monitoring.prometheus.prometheusRule` | Create PrometheusRule | `false` |
| `monitoring.grafana.dashboards` | Create Grafana dashboards | `false` |
| `monitoring.accessLogs.enabled` | Enable access logs | `true` |
| `monitoring.accessLogs.format` | Access log format (`json`/`text`) | `json` |
Security
| Parameter | Description | Default |
|---|---|---|
| `security.networkPolicies` | Enable NetworkPolicies | `false` |
| `security.podSecurityStandards` | Enable PodSecurityStandards | `true` |
High Availability
| Parameter | Description | Default |
|---|---|---|
| `highAvailability.enabled` | Enable HA mode | `false` |
| `highAvailability.podDisruptionBudget.minAvailable` | PDB min available | `1` |
RBAC and ServiceAccount
| Parameter | Description | Default |
|---|---|---|
| `serviceAccount.create` | Create ServiceAccount | `true` |
| `serviceAccount.name` | ServiceAccount name | `""` |
| `rbac.create` | Create RBAC resources | `true` |
GatewayClass
| Parameter | Description | Default |
|---|---|---|
| `gatewayClass.name` | GatewayClass name | `envoy-gateway` |
| `gatewayClass.create` | Create GatewayClass | `true` |
Common Scenarios
Scenario 1: Local Development (k3d)
Deploy Envoy Gateway for local testing:
# Install Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml
# Install with dev profile
helm install envoy-gateway helmforge/envoy-gateway \
--set profile=dev
# Port-forward for local access (find EG-provisioned service first)
EG_SVC=$(kubectl get svc -l gateway.envoyproxy.io/owning-gateway-name=envoy-gateway -o name | head -1)
kubectl port-forward $EG_SVC 8080:80
# Test
curl -H "Host: example.local" http://localhost:8080/
Configuration:
- 1 replica controller and proxy
- Minimal resources (works on laptops)
- HTTP only (no TLS overhead)
- Example Gateway and HTTPRoute included
Scenario 2: HTTPS with External cert-manager
Deploy with HTTPS using cert-manager for TLS provisioning:
# Prerequisites: cert-manager installed
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
# Install chart with HTTPS listener enabled
helm install envoy-gateway helmforge/envoy-gateway \
--set gateway.listeners.https.enabled=true
# Create a self-signed ClusterIssuer
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
EOF
# Create a Certificate (cert-manager creates the secret)
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: envoy-gateway-tls
  namespace: default
spec:
  secretName: envoy-gateway-tls
  issuerRef:
    name: selfsigned
    kind: ClusterIssuer
  dnsNames:
  - example.local
EOF
# Wait for certificate
kubectl wait --for=condition=Ready certificate/envoy-gateway-tls --timeout=300s
# Test HTTPS
export GATEWAY_IP=$(kubectl get gateway envoy-gateway -o jsonpath='{.status.addresses[0].value}')
curl -k -H "Host: example.local" https://$GATEWAY_IP/
Configuration:
- EG operator provisions proxy automatically
- HTTPS listener with self-signed TLS certificate
- cert-manager manages certificate lifecycle independently of the chart
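For the HTTPS listener to serve this certificate, the Gateway must reference the secret created by cert-manager via `certificateRefs`. A minimal sketch of the rendered listener (the chart's `gateway.listeners.https` settings control the actual output; the secret name must match `Certificate.spec.secretName`):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: envoy-gateway
spec:
  gatewayClassName: envoy-gateway
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: envoy-gateway-tls   # created by cert-manager from the Certificate above
```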
Scenario 3: Production HA with Let’s Encrypt
Deploy for production with high availability and production TLS:
# Create Let's Encrypt ClusterIssuer
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    - http01:
        ingress:
          class: envoy-gateway
EOF
# Install with production-ha profile
helm install envoy-gateway helmforge/envoy-gateway \
--set profile=production-ha \
--set gateway.listeners.https.enabled=true \
--set gatewayAPI.examples.enabled=false
Configuration:
- 2 replica controller (leader election)
- Proxy as DaemonSet (`proxy.kind: DaemonSet`); the EG operator provisions one per node
- Production resources (1000m CPU, 1Gi memory)
- Let’s Encrypt TLS certificates (managed externally via cert-manager)
- Anti-affinity for controller and proxy
- PodDisruptionBudgets
- No examples (production-only)
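Since the chart no longer manages certificates, request the production certificate from cert-manager directly, the same way as in Scenario 2 but against the `letsencrypt-prod` issuer. A hedged sketch (hostname and namespace are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: envoy-gateway-tls
  namespace: default
spec:
  secretName: envoy-gateway-tls   # referenced by the Gateway HTTPS listener
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - example.com                   # must be publicly resolvable for the HTTP-01 challenge
```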
Scenario 4: API Gateway with Rate Limiting
Deploy as API gateway with rate limiting and monitoring:
helm install envoy-gateway helmforge/envoy-gateway \
--set profile=production-ha \
--set rateLimiting.enabled=true \
--set redis.enabled=true \
--set rateLimiting.presets.api=true \
--set monitoring.enabled=true \
--set monitoring.prometheus.serviceMonitor=true \
--set monitoring.prometheus.prometheusRule=true \
--set monitoring.grafana.dashboards=true \
--set security.networkPolicies=true
Create HTTPRoute with rate limiting:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
  - name: envoy-gateway-example
  hostnames:
  - api.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api/v1
    backendRefs:
    - name: api-service
      port: 8080
    filters:
    - type: ExtensionRef
      extensionRef:
        group: gateway.envoyproxy.io
        kind: BackendTrafficPolicy
        name: envoy-gateway-ratelimit-api
Features Enabled:
- Rate limiting: 100 requests/minute per IP
- Redis persistence for distributed limits
- Prometheus metrics and alerts
- Grafana dashboard
- NetworkPolicies for security
- HA deployment
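The `envoy-gateway-ratelimit-api` preset corresponds roughly to a BackendTrafficPolicy like the following sketch (based on the client-selector shape used in the troubleshooting section below; the exact manifest the chart renders may differ):

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: envoy-gateway-ratelimit-api
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: api-route
  rateLimit:
    type: Global            # distributed limits, backed by Redis
    global:
      rules:
      - clientSelectors:
        - headers:
          - name: x-real-ip   # one bucket per distinct client IP
            type: Distinct
        limit:
          requests: 100
          unit: Minute
```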
Scenario 5: Multi-Tenancy with Namespace Isolation
Deploy with namespace-based routing isolation:
# Install Envoy Gateway
helm install envoy-gateway helmforge/envoy-gateway \
--set profile=production-ha \
--set gatewayAPI.examples.enabled=false
# Create Gateway with namespace isolation
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: tenant-gateway
  namespace: gateway-system
spec:
  gatewayClassName: envoy-gateway
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            tenant: enabled
EOF
Label tenant namespaces:
kubectl label namespace tenant-a tenant=enabled
kubectl label namespace tenant-b tenant=enabled
Create HTTPRoutes in tenant namespaces:
# tenant-a namespace
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: tenant-a-route
  namespace: tenant-a
spec:
  parentRefs:
  - name: tenant-gateway
    namespace: gateway-system
  hostnames:
  - tenant-a.example.com
  rules:
  - backendRefs:
    - name: tenant-a-service
      port: 80
Multi-Tenancy Features:
- Namespace-based route isolation
- Label selectors for tenant control
- Centralized Gateway management
- Per-tenant HTTPRoutes
Troubleshooting
Proxy Pods Stuck at 1/2 Ready
Symptom: Proxy pods show 1/2 Running — envoy container not ready, startup probe fails on port 19003
Diagnosis:
kubectl describe pod <proxy-pod> | grep -A5 "Events:"
# Look for: Startup probe failed: Get "http://<ip>:19003/ready": connection refused
kubectl logs <proxy-pod> -c envoy | tail -20
# Look for: DeltaAggregatedResources gRPC config stream to xds_cluster closed: no healthy upstream
Root Cause: The Envoy proxy cannot connect to the EG controller xDS server. The proxy bootstrap config hardcodes the xDS address as envoy-gateway.<namespace>.svc.cluster.local:18000. If this Service doesn’t exist or port 18000 is not exposed, the proxy never receives its configuration and stays not ready.
Solution:
# Verify the xDS service exists
kubectl get svc envoy-gateway -n <release-namespace>
# If missing: chart version is outdated — upgrade to 1.3.0+
# Verify port 18000 is exposed
kubectl get svc envoy-gateway -n <release-namespace> -o jsonpath='{.spec.ports}'
# Should show port 18000 (xDS) and 18002 (wasm)
# Once the service exists, proxy pods reconnect automatically
kubectl get pods -n <release-namespace> -w
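If you need to compare against what the chart's service-controller template should produce, the xDS Service looks roughly like this (the selector labels are an assumption and must match your controller pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: envoy-gateway          # this name is hardcoded in the proxy bootstrap config
spec:
  selector:
    app.kubernetes.io/component: controller   # assumption: match your controller pod labels
  ports:
  - name: grpc                 # xDS
    port: 18000
    targetPort: 18000
  - name: wasm
    port: 18002
    targetPort: 18002
```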
Controller Startup Probe Fails (404)
Symptom: Controller pod stuck at 0/1, startup probe fails with HTTP probe failed with statuscode: 404
Diagnosis:
kubectl describe pod <controller-pod> | grep -A3 "Startup\|Readiness\|Liveness"
# Check probe port — must be 8081 (metrics), NOT 8080 (http)
kubectl port-forward <controller-pod> 18081:8081
curl http://localhost:18081/healthz # Should return 200
curl http://localhost:18081/readyz # Should return 200
Root Cause: EG controller serves health endpoints (/healthz, /readyz) on port 8081 (metrics/admin port). Port 8080 serves a different internal API and returns 404 for health paths.
Solution: Upgrade to chart version 1.3.0+ which correctly configures probes on port metrics (8081).
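For reference, the corrected probe configuration on the controller container looks like this sketch (the 1.3.0+ chart renders the equivalent):

```yaml
# Controller container probes: health endpoints live on the metrics port (8081),
# not on port 8080, which serves a different internal API.
startupProbe:
  httpGet:
    path: /healthz
    port: 8081
livenessProbe:
  httpGet:
    path: /healthz
    port: 8081
readinessProbe:
  httpGet:
    path: /readyz
    port: 8081
```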
Gateway Programmed=False: No Address Assigned
Symptom: kubectl get gateway shows PROGRAMMED: False with message “No addresses have been assigned to the Gateway”
Diagnosis:
kubectl describe gateway <gateway-name> -n <namespace>
# Look for: "No addresses have been assigned to the Gateway"
kubectl get svc -n <namespace>
# Check if proxy LoadBalancer service has EXTERNAL-IP: <pending>
kubectl get pods -n kube-system | grep svclb
# Check if klipper-lb pods are Pending
Common Cause in k3d: The proxy service type is LoadBalancer with port 80. If Traefik is running, klipper-lb already owns port 80 and can’t schedule another service on the same port.
Solution for k3d:
# Option 1: Remove Traefik to free port 80
helm -n kube-system uninstall traefik traefik-crd
# Option 2: Use ClusterIP for the proxy service
helm install envoy-gateway helmforge/envoy-gateway \
--set proxy.service.type=ClusterIP
# Option 3: Port-forward for testing (no IP needed)
EG_SVC=$(kubectl get svc -l gateway.envoyproxy.io/owning-gateway-namespace=<namespace> -o name | head -1)
kubectl port-forward $EG_SVC 8080:80
curl -H "Host: example.local" http://localhost:8080/
Note: Even when `PROGRAMMED=False` due to a missing IP, the proxy IS serving traffic as long as the proxy pods are `2/2 Running`. Test via port-forward or NodePort to verify.
Gateway Not Ready
Symptom: Gateway status shows Pending or NotReady
Diagnosis:
kubectl describe gateway envoy-gateway-example
kubectl get events --field-selector involvedObject.name=envoy-gateway-example
Common Causes:
- GatewayClass not found:
  kubectl get gatewayclass
  # If missing: helm upgrade to recreate
- Controller not running:
  kubectl get pods -l app.kubernetes.io/component=controller
  kubectl logs -l app.kubernetes.io/component=controller
- Insufficient RBAC permissions:
  kubectl describe clusterrole envoy-gateway
  # Check for missing permissions in logs
Solution:
# Recreate Gateway
kubectl delete gateway envoy-gateway-example
kubectl apply -f <gateway-manifest>
# Or reinstall chart
helm upgrade envoy-gateway helmforge/envoy-gateway --reuse-values
HTTPRoute Not Working
Symptom: 404 errors or no route to backend
Diagnosis:
kubectl describe httproute envoy-gateway-example
kubectl get httproute envoy-gateway-example -o yaml
Common Causes:
- Parent Gateway not specified:
  spec:
    parentRefs:  # Must match Gateway name and namespace
    - name: envoy-gateway-example
      namespace: default
- Hostname mismatch:
  # Test with the correct hostname
  curl -H "Host: example.local" http://$GATEWAY_IP/
  # Not: curl http://$GATEWAY_IP/
- Backend service not found:
  kubectl get svc example-backend
  # If missing, check HTTPRoute spec.rules[].backendRefs
- Route not attached to Gateway:
  kubectl get httproute envoy-gateway-example -o jsonpath='{.status.parents[0].conditions}'
  # Look for "Accepted": true
Solution:
# Verify backend service exists
kubectl get svc -l app=example-backend
# Check route status
kubectl get httproute -o wide
# View proxy logs
kubectl logs -l app.kubernetes.io/component=proxy
TLS Certificate Issues
Symptom: Internal EG components fail to communicate (xDS TLS errors) or HTTPS Gateway listener has no certificate
Diagnosis:
# Check certgen job (internal TLS)
kubectl get job -l app.kubernetes.io/component=certgen
kubectl logs -l app.kubernetes.io/component=certgen
# Check external certificates (if using cert-manager for HTTPS listeners)
kubectl get certificate
kubectl describe certificate envoy-gateway-tls
kubectl get certificaterequest
Common Causes:
- Certgen job failed:
  kubectl describe job -l app.kubernetes.io/component=certgen
  # Check for RBAC or image pull errors
- External cert-manager not installed (for HTTPS Gateway listeners):
  kubectl get pods -n cert-manager
  # If missing: install cert-manager separately
- Issuer not found:
  kubectl get clusterissuer
  kubectl describe clusterissuer letsencrypt-prod
- ACME challenge failed (Let's Encrypt):
  kubectl get challenges
  kubectl describe challenge <challenge-name>
- DNS not resolving:
  dig example.com
  # Should point to the Gateway IP
Solution:
# Re-run certgen (delete and reinstall chart or run job manually)
helm upgrade --install envoy-gateway helmforge/envoy-gateway --reuse-values
# Check cert-manager logs (for external HTTPS certificates)
kubectl logs -n cert-manager -l app=cert-manager
# Delete and recreate certificate
kubectl delete certificate envoy-gateway-tls
kubectl apply -f <certificate-manifest>
# Force renewal
kubectl delete secret envoy-gateway-tls
# Certificate controller will recreate
Rate Limiting Not Working
Symptom: No rate limit errors despite exceeding limits
Diagnosis:
kubectl get backendtrafficpolicy
kubectl describe backendtrafficpolicy envoy-gateway-ratelimit-api
kubectl get statefulset envoy-gateway-redis
Common Causes:
- Redis not running:
  kubectl get pods -l app.kubernetes.io/component=redis
  kubectl logs envoy-gateway-redis-0
- Rate limit policy not attached:
  kubectl get httproute <route-name> -o yaml | grep -A 10 filters
  # Should see ExtensionRef with BackendTrafficPolicy
- Wrong client selector:
  # The rate limit matches on the x-real-ip header;
  # ensure the proxy passes the client IP correctly
  spec:
    rateLimit:
      global:
        rules:
        - clientSelectors:
          - headers:
            - name: x-real-ip  # Check header name
              type: Distinct
Solution:
# Check Redis connectivity
kubectl exec -it envoy-gateway-redis-0 -- redis-cli ping
# View rate limit config
kubectl get configmap envoy-gateway-ratelimit-config -o yaml
# Test rate limit
for i in {1..150}; do
curl -H "Host: example.local" http://$GATEWAY_IP/
sleep 0.5
done
# Should see 429 (Too Many Requests) after 100 requests
High Memory Usage
Symptom: Proxy pods using excessive memory
Diagnosis:
kubectl top pods -l app.kubernetes.io/component=proxy
kubectl describe pod <proxy-pod>
Common Causes:
- Too many active connections:
  kubectl port-forward <proxy-pod> 19000:19000
  curl http://localhost:19000/stats | grep downstream_cx_active
- Large request/response bodies:
  # Check max buffer size in the EnvoyProxy config
  kubectl get envoyproxy -o yaml
- Memory leaks:
  # Check proxy logs for errors
  kubectl logs <proxy-pod> | grep -i "memory\|oom"
Solution:
# Increase memory limits
helm upgrade envoy-gateway helmforge/envoy-gateway \
--set proxy.resources.limits.memory=2Gi \
--reuse-values
# Enable HPA for scaling
helm upgrade envoy-gateway helmforge/envoy-gateway \
--set proxy.autoscaling.enabled=true \
--set proxy.autoscaling.maxReplicas=10 \
--reuse-values
# Restart proxy pods (EG operator manages the proxy deployment/daemonset — use rollout on the EG-provisioned resource)
PROXY_DEPLOY=$(kubectl get deployment -l gateway.envoyproxy.io/owning-gateway-name=envoy-gateway -o name | head -1)
kubectl rollout restart $PROXY_DEPLOY
Monitoring Not Working
Symptom: No metrics in Prometheus or Grafana
Diagnosis:
kubectl get servicemonitor
kubectl describe servicemonitor envoy-gateway-controller
kubectl get prometheusrule
Common Causes:
- Prometheus Operator not installed:
  kubectl get pods -n monitoring -l app=prometheus-operator
- ServiceMonitor not scraped:
  # Check Prometheus targets
  kubectl port-forward -n monitoring svc/prometheus-operated 9090:9090
  # Visit http://localhost:9090/targets and look for "envoy-gateway" targets
- Wrong labels on ServiceMonitor:
  kubectl get servicemonitor envoy-gateway-controller -o yaml
  # Check spec.selector.matchLabels matches the Service labels
- Metrics endpoint not accessible:
  kubectl port-forward svc/envoy-gateway-controller 8081:8081
  curl http://localhost:8081/metrics
  # Should return Prometheus metrics
Solution:
# Verify ServiceMonitor is created
helm upgrade envoy-gateway helmforge/envoy-gateway \
--set monitoring.enabled=true \
--set monitoring.prometheus.serviceMonitor=true \
--reuse-values
# Check Prometheus config
kubectl get prometheus -n monitoring -o yaml | grep serviceMonitorSelector
# Manually test metrics scrape
kubectl run curl --rm -it --image=curlimages/curl -- \
curl http://envoy-gateway-controller.default.svc:8081/metrics
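If the chart-rendered ServiceMonitor is missing or mislabeled, a minimal hand-written equivalent looks like this sketch (both label selectors are assumptions and must match your Prometheus setup and the controller Service):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: envoy-gateway-controller
  labels:
    release: prometheus        # assumption: match your Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: envoy-gateway   # assumption: match the controller Service labels
  endpoints:
  - port: metrics              # the Service port serving /metrics on 8081
    path: /metrics
```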
NetworkPolicy Blocking Traffic
Symptom: Connections timeout after enabling NetworkPolicies
Diagnosis:
kubectl get networkpolicy
kubectl describe networkpolicy envoy-gateway-proxy
kubectl logs <pod> | grep -i "connection refused\|timeout"
Common Causes:
- Missing egress rule for backend:
  # NetworkPolicy doesn't allow proxy-to-backend traffic
  kubectl describe networkpolicy envoy-gateway-proxy | grep -A 20 Egress
- DNS not allowed:
  # Proxy can't resolve backend service names
  kubectl exec <proxy-pod> -- nslookup example-backend
- Wrong pod selectors:
  # NetworkPolicy doesn't match actual pod labels
  kubectl get pods --show-labels
Solution:
# Test without NetworkPolicies first
helm upgrade envoy-gateway helmforge/envoy-gateway \
--set security.networkPolicies=false \
--reuse-values
# If working, add custom NetworkPolicy for your backend
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-proxy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app.kubernetes.io/component: proxy
    ports:
    - protocol: TCP
      port: 8080
EOF
# Re-enable NetworkPolicies
helm upgrade envoy-gateway helmforge/envoy-gateway \
--set security.networkPolicies=true \
--reuse-values
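If the failure was DNS related (cause 2 above), the proxy also needs an egress rule permitting DNS. A hedged sketch (the proxy pod labels are an assumption and must match the EG-provisioned pods):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-proxy-dns
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: proxy   # assumption: match the proxy pod labels
  policyTypes:
  - Egress
  egress:
  - ports:                                 # allow DNS to any destination
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```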
Version History
Version 1.3.0 (Current)
Functional validation release with k3d-verified fixes for xDS connectivity, health probes, certgen, and RBAC.
Patch Fixes (k3d validation):
The following issues were discovered and fixed during k3d functional validation:
- certgen: Changed `Role` to `ClusterRole` (required for cluster-scoped `mutatingwebhookconfigurations`)
- certgen: Added `--disable-topology-injector` flag (skips MutatingWebhookConfiguration patching when the webhook is not deployed)
- rbac: Added `deletecollection` to `apps`, `autoscaling`, `batch`, `policy` groups
- rbac: Added `nodes`, `pods`, `endpoints`, `endpointslices`, `roles`, `rolebindings` to ClusterRole
- deployment-controller: Fixed health probes to use port `8081` (metrics), not `8080`
- deployment-controller: Exposed ports `18000` (xDS) and `18002` (wasm) on the container
- service-controller: Added a dedicated Service named `envoy-gateway` exposing port `18000` for xDS
Version 1.0.0
Complete architectural redesign for EG v1.7.1 with operator-managed proxy provisioning and new policy CRDs.
Breaking Changes:
- `proxy.mode` renamed to `proxy.kind`
- `profile: staging` removed
- `certificates.certManager` section removed; use external cert-manager directly
- Proxy ServiceMonitor removed (proxy service name is dynamic, managed by the EG operator)
- Proxy Deployment/DaemonSet no longer created by the chart; the EG operator provisions them when a `Gateway` resource exists
New Features:
- `certgen` pre-install job for automatic internal TLS cert generation
- Default `Gateway` resource (`gateway.create: true`)
- `SecurityPolicy` CRD support: JWT, OIDC, API Key, CORS
- `BackendTrafficPolicy` CRD: retries, circuit breaking, timeouts, active health checks
- `ClientTrafficPolicy` CRD: connection limits, TLS version control, HTTP/2 settings
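As an illustration of the SecurityPolicy CRD, a minimal JWT policy with a remote JWKS might look like this sketch (the route name, provider name, and URL are placeholders):

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: jwt-auth
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: api-route
  jwt:
    providers:
    - name: example-idp
      remoteJWKS:
        uri: https://idp.example.com/.well-known/jwks.json   # JWKS endpoint of your IdP
```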
Versions Updated:
- EG: v1.7.1 (was v1.0.0)
- Envoy: distroless-v1.33.0 (was v1.29.0)
- Redis: helmforge/redis subchart v1.6.9 (replaces inline StatefulSet)
- Gateway API CRDs: v1.2.1 (was v1.0.0)
- PrometheusRule: 6 alerts (was 7)
Deployment Options:
All features are opt-in and can be enabled individually:
# Full production deployment
helm install envoy-gateway helmforge/envoy-gateway \
--set profile=production-ha \
--set rateLimiting.enabled=true \
--set redis.enabled=true \
--set rateLimiting.presets.api=true \
--set monitoring.enabled=true \
--set monitoring.prometheus.prometheusRule=true \
--set monitoring.grafana.dashboards=true \
--set security.networkPolicies=true \
--set securityPolicy.create=true \
--set backendTrafficPolicy.create=true
Upgrade from 1.0.0:
# Update Helm repo
helm repo update
# Note: proxy.mode is now proxy.kind — update your values before upgrading
helm upgrade envoy-gateway helmforge/envoy-gateway \
--version 1.3.0 \
--set proxy.kind=Deployment \
--reuse-values
# Verify upgrade
kubectl get pods -l app.kubernetes.io/name=envoy-gateway
kubectl get gateway,httproute
Version 1.0.0
First stable release with complete P1 (MVP) and P2 (Production) features.
- Profile presets (dev, staging, production-ha)
- Gateway API examples (Gateway, HTTPRoute, example backend)
- cert-manager integration
- Rate limiting with Redis backend
- Observability bundle (Prometheus ServiceMonitors, PrometheusRule with 7 alerts, Grafana dashboards)
- Security hardening (NetworkPolicies, PodSecurityStandards)
Resources
Official Documentation
- Envoy Gateway: https://gateway.envoyproxy.io/
- Gateway API: https://gateway-api.sigs.k8s.io/
- Envoy Proxy: https://www.envoyproxy.io/
Gateway API Resources
- Gateway API Guides: https://gateway-api.sigs.k8s.io/guides/
- HTTPRoute Reference: https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.HTTPRoute
- GRPCRoute Reference: https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.GRPCRoute
- TCPRoute Reference: https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.TCPRoute
Envoy Gateway CRDs
- BackendTrafficPolicy: https://gateway.envoyproxy.io/latest/api/extension_types/#backendtrafficpolicy
- SecurityPolicy: https://gateway.envoyproxy.io/latest/api/extension_types/#securitypolicy
- EnvoyProxy: https://gateway.envoyproxy.io/latest/api/extension_types/#envoyproxy
cert-manager
- cert-manager Docs: https://cert-manager.io/docs/
- Let’s Encrypt: https://letsencrypt.org/
- ClusterIssuer: https://cert-manager.io/docs/configuration/
Prometheus and Grafana
- Prometheus Operator: https://prometheus-operator.dev/
- ServiceMonitor: https://prometheus-operator.dev/docs/operator/design/#servicemonitor
- PrometheusRule: https://prometheus-operator.dev/docs/operator/design/#prometheusrule
Community
- HelmForge GitHub: https://github.com/helmforgedev/charts
- Report Issues: https://github.com/helmforgedev/charts/issues
- Chart Repository: https://repo.helmforge.dev
Chart Information
| Field | Value |
|---|---|
| Chart Version | 1.3.0 |
| App Version | v1.7.1 |
| Kubernetes Version | >= 1.24 |
| Helm Version | >= 3.8 |
| License | Apache 2.0 |
Source Code: https://github.com/helmforgedev/charts/tree/main/charts/envoy-gateway