openHAB
Helm chart for deploying openHAB on Kubernetes using the official openhab/openhab Docker image. openHAB is an open-source home automation platform that integrates with hundreds of smart home technologies — from Zigbee and Z-Wave to MQTT, KNX, and cloud services.
Single-instance only: openHAB does not support horizontal scaling. This chart enforces `replicaCount: 1` and will fail fast if you attempt to set it higher, preventing accidental data corruption from concurrent writes to shared PVCs.
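The fail-fast guard can be sketched as a Helm template assertion. This is illustrative only; the chart's actual template file name and error message may differ:

```yaml
# templates/_validate.tpl (hypothetical file name)
{{- if ne (int .Values.replicaCount) 1 }}
{{- fail "openhab: replicaCount must be 1; openHAB does not support clustering" }}
{{- end }}
```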
Key Features
- StatefulSet workload for stable, predictable PVC attachment on every restart
- Three persistent volumes (`userdata`, `conf`, `addons`) with independent sizing and storage class configuration
- ConfigMap live-reload for sitemaps, things, and items — changes applied automatically within seconds, no pod restart required
- Correct security context — `fsGroup: 9001` only; `runAsUser`/`runAsGroup` intentionally unset so the entrypoint can bootstrap user creation before dropping privileges via `gosu`
- Smart health probes via `/rest/uuid` (available in openHAB 4.x, no auth required) with a 10-minute startup window for OSGi bundle loading
- Prometheus metrics via `/rest/metrics/prometheus` — pod annotations and ServiceMonitor supported
- Optional Ingress with websocket annotation guidance for the `/rest/events` SSE endpoint
- Optional Karaf SSH console for administrative access to the OSGi runtime
- Fail-fast validation with clear error messages for common misconfigurations
Installation
HTTPS Repository
helm repo add helmforge https://repo.helmforge.dev
helm repo update
helm install my-openhab helmforge/openhab
OCI Registry
helm install my-openhab oci://ghcr.io/helmforgedev/helm/openhab --version 0.1.0
Quick Start
Minimal Deployment
Deploy openHAB with persistent storage and port-forward access:
helm install my-openhab helmforge/openhab
kubectl wait --namespace default pod \
-l app.kubernetes.io/name=openhab \
--for=condition=Ready --timeout=300s
kubectl port-forward svc/my-openhab 8080:8080
Open http://127.0.0.1:8080 and complete the first-boot admin setup wizard.
Custom Timezone and Resources
# values.yaml
env:
TZ: 'Europe/Berlin'
resources:
requests:
cpu: 500m
memory: 512Mi
limits:
cpu: 2000m
memory: 2Gi
helm install my-openhab helmforge/openhab -f values.yaml
First Boot — Admin Setup
openHAB does not support injecting admin credentials via environment variables. On first launch, openHAB presents a setup wizard where you create the administrator account.
Steps:
- Wait for the pod to be Ready (may take 60-120 seconds on first boot)
- Navigate to the web UI (`http://localhost:8080` if using port-forward)
- Click Create an administrator account
- Enter a username and password
- Complete the optional initial configuration steps
Credentials are stored persistently in /openhab/userdata/jsondb/auth.json and survive pod restarts as long as the userdata PVC exists.
Persistent Storage
openHAB requires three persistent directories. The chart creates a PVC for each by default.
| Directory | PVC | Default Size | Contents |
|---|---|---|---|
| /openhab/userdata | <release>-userdata | 5Gi | Runtime state, JSONDB, logs, persistence data |
| /openhab/conf | <release>-conf | 1Gi | Items, things, rules, sitemaps, services |
| /openhab/addons | <release>-addons | 2Gi | Drop-in JAR bindings/addons |
Custom Sizes and Storage Class
persistence:
userdata:
size: 20Gi
storageClass: 'fast-ssd'
conf:
size: 5Gi
storageClass: 'fast-ssd'
addons:
size: 10Gi
Using Existing PVCs
persistence:
userdata:
existingClaim: my-openhab-userdata
conf:
existingClaim: my-openhab-conf
addons:
existingClaim: my-openhab-addons
ConfigMap Live Reload
This is the key differentiator of this chart. openHAB natively monitors /openhab/conf/ using a file watcher. Any change to .sitemap, .things, or .items files is applied automatically within 2-5 seconds — no pod restart required.
The chart mounts Kubernetes ConfigMaps into the conf PVC using subPath, so ConfigMap-managed files coexist with any existing files without overwriting them.
How It Works
Helm values → ConfigMap (K8s) → subPath mount → openHAB file watcher → Live reload (~2-5s)
Sitemaps
Sitemaps define the UI layout for openHAB’s BasicUI and HABPanel:
configMaps:
sitemaps:
enabled: true
files:
myhome.sitemap: |
sitemap myhome label="My Home" {
Frame label="Ground Floor" {
Switch item=Light_GF_Corridor label="Corridor Light"
Switch item=Light_GF_Kitchen label="Kitchen Light"
Text item=Temperature_GF label="Temperature [%.1f °C]"
}
Frame label="Climate" {
Text item=Temperature_Outdoor label="Outdoor [%.1f °C]"
Text item=Humidity_Outdoor label="Humidity [%d %%]"
}
}
Things
Things define physical devices and their channels:
configMaps:
things:
enabled: true
files:
network.things: |
Thing network:pingdevice:router [
hostname="192.168.1.1",
retry=1,
timeout=5000,
refreshInterval=60000
]
mqtt.things: |
Bridge mqtt:broker:mybroker [ host="mosquitto", port=1883, secure=false ] {
Thing mqtt:topic:mysensor "Temperature Sensor" {
Channels:
Type number : temperature [ stateTopic="home/sensor/temperature" ]
Type number : humidity [ stateTopic="home/sensor/humidity" ]
}
}
Items
Items define logical entities visible in the UI and used in rules:
configMaps:
items:
enabled: true
files:
lights.items: |
Switch Light_GF_Corridor "Corridor Light" <light>
Switch Light_GF_Kitchen "Kitchen Light" <light>
climate.items: |
Number:Temperature Temperature_GF "Ground Floor [%.1f %unit%]" <temperature>
Number:Temperature Temperature_Outdoor "Outdoor [%.1f %unit%]" <temperature>
Number:Dimensionless Humidity_Outdoor "Humidity [%d %%]" <humidity>
Applying Changes
After updating ConfigMap values, run helm upgrade. openHAB picks up the changes automatically:
helm upgrade my-openhab helmforge/openhab -f values.yaml
# No restart needed — openHAB reloads configuration within seconds
Prometheus Metrics
openHAB exposes Prometheus-format metrics via the Metrics addon at:
GET /rest/metrics/prometheus (port 8080, no authentication required)
Step 1 — Install the Metrics Addon
Install the addon once via the openHAB UI:
Settings → Add-on Store → Integrations → Metrics → Install
Or via the Karaf console (if enabled):
kubectl port-forward svc/my-openhab-karaf 8101:8101
ssh -p 8101 openhab@localhost
# Inside Karaf:
feature:install openhab-io-metrics
Step 2 — Enable Metrics in Chart Values
Mode 1: Pod annotations — works with any Prometheus that watches pod annotations:
metrics:
enabled: true
podAnnotations:
enabled: true
This adds the following annotations to the pod:
prometheus.io/scrape: 'true'
prometheus.io/path: /rest/metrics/prometheus
prometheus.io/port: '8080'
Mode 2: ServiceMonitor — for Prometheus Operator / kube-prometheus-stack:
metrics:
enabled: true
podAnnotations:
enabled: false
serviceMonitor:
enabled: true
interval: 60s
scrapeTimeout: 10s
# Must match your Prometheus instance's serviceMonitorSelector labels
additionalLabels:
release: prometheus
Verify the Endpoint
kubectl port-forward svc/my-openhab 8080:8080
curl -s http://localhost:8080/rest/metrics/prometheus | head -20
# Expected: Prometheus text format with jvm_*, openhab_* metrics
Metrics Exposed
| Category | Metrics |
|---|---|
| openHAB events | openhab_events_total per topic |
| Bundle states | openhab_bundle_state (32 = active) |
| Thing states | openhab_thing_state (online/offline) |
| Rule executions | openhab_rule_runs_total |
| Threadpool | openhab_threadpool_* (size, active, queue) |
| JVM memory | jvm_memory_used_bytes, jvm_gc_pause_seconds |
| JVM threads | jvm_threads_*, jvm_classes_loaded |
| Process | process_cpu_usage, process_uptime_seconds |
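Once scraping works, the openHAB-specific series can be pulled out of a raw scrape with standard text tools. A minimal sketch on sample output — the metric lines below are illustrative values in the standard Prometheus text format, not captured from a real instance:

```shell
# Sample scrape output (illustrative values, Prometheus text format)
SAMPLE='# HELP jvm_memory_used_bytes Used memory
jvm_memory_used_bytes{area="heap"} 3.2e+08
openhab_thing_state{thing="mqtt:topic:mysensor"} 1
openhab_events_total{topic="openhab/items"} 42'

# Keep only non-comment lines whose metric name starts with openhab_
echo "$SAMPLE" | awk '!/^#/ && /^openhab_/'
```

The same filter works on live output piped from `curl -s http://localhost:8080/rest/metrics/prometheus`.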
Custom Relabelings (ServiceMonitor)
metrics:
serviceMonitor:
enabled: true
relabelings:
- sourceLabels: [__meta_kubernetes_pod_label_app_kubernetes_io_instance]
targetLabel: instance
metricRelabelings:
- sourceLabels: [__name__]
regex: 'jvm_.*'
action: keep
Ingress
openHAB’s /rest/events endpoint uses Server-Sent Events (SSE), which requires long-lived HTTP connections. When using nginx Ingress, add the following annotations for proper support:
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
nginx.ingress.kubernetes.io/proxy-send-timeout: '3600'
nginx.ingress.kubernetes.io/proxy-http-version: '1.1'
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
hosts:
- host: openhab.yourdomain.com
paths:
- path: /
pathType: Prefix
With TLS (cert-manager)
ingress:
enabled: true
ingressClassName: nginx
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
nginx.ingress.kubernetes.io/proxy-send-timeout: '3600'
hosts:
- host: openhab.yourdomain.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: openhab-tls
hosts:
- openhab.yourdomain.com
Security
Security Context
openHAB runs as UID/GID 9001 by default (enforced by the official image). The chart configures this correctly out of the box:
podSecurityContext:
runAsUser: 9001
runAsGroup: 9001
fsGroup: 9001 # Ensures PVC volumes are group-writable by 9001
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: false # openHAB writes to internal dirs at runtime
capabilities:
drop:
- ALL
Why `readOnlyRootFilesystem: false`? openHAB (OSGi/Karaf) writes to several internal directories at runtime (`/openhab/runtime/`, `/openhab/userdata/tmp/`, `/openhab/userdata/cache/`). These cannot be relocated. Mount your persistent data on the three PVCs to ensure durability across restarts.
Karaf SSH Console
The Apache Karaf admin console is disabled by default. When enabled, always access it via kubectl port-forward — never expose port 8101 publicly:
karaf:
enabled: true
service:
type: ClusterIP # Never NodePort or LoadBalancer
port: 8101
kubectl port-forward svc/my-openhab-karaf 8101:8101
ssh -p 8101 openhab@localhost
# Default Karaf password: habopen
Admin Credentials Secret
For operational reference (documentation, tooling), you can store the admin credentials in a Kubernetes Secret:
admin:
secretEnabled: true
username: admin
password: 'strongpassword' # Set via --set or external secret manager
Important: This Secret is for reference only. It does NOT automatically configure openHAB. You still need to complete the first-boot wizard with the same credentials.
Retrieve the stored password:
kubectl get secret my-openhab-admin \
-o jsonpath="{.data.password}" | base64 --decode
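Secret values come back base64-encoded, hence the `--decode` in the command above. A local round-trip demo (using the example password from the values snippet):

```shell
# Kubernetes stores Secret data base64-encoded; decode to recover the value
ENCODED="$(printf '%s' 'strongpassword' | base64)"
printf '%s' "$ENCODED" | base64 --decode
```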
Deployment Scenarios
Scenario 1: Minimal Home Lab
# Simple single-node home deployment
image:
tag: '4.2.2'
env:
TZ: 'Europe/Berlin'
persistence:
userdata:
size: 10Gi
conf:
size: 2Gi
addons:
size: 5Gi
Scenario 2: GitOps-Managed Configuration
Manage all openHAB configuration declaratively via Helm — ideal for teams or reproducible setups:
env:
TZ: 'America/Sao_Paulo'
configMaps:
sitemaps:
enabled: true
files:
default.sitemap: |
sitemap default label="openHAB" {
Frame label="Overview" {
Text item=gTemperature label="Temperature [%.1f °C]"
}
}
things:
enabled: true
files:
network.things: |
Thing network:pingdevice:gateway [ hostname="192.168.1.1" ]
items:
enabled: true
files:
home.items: |
Number:Temperature gTemperature "Temperature [%.1f %unit%]" <temperature>
persistence:
userdata:
size: 10Gi
conf:
size: 2Gi
Scenario 3: Full Production with Ingress and Metrics
ingress:
enabled: true
ingressClassName: nginx
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
nginx.ingress.kubernetes.io/proxy-send-timeout: '3600'
nginx.ingress.kubernetes.io/proxy-http-version: '1.1'
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
hosts:
- host: openhab.yourdomain.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: openhab-tls
hosts:
- openhab.yourdomain.com
admin:
secretEnabled: true
username: admin
password: '' # Set via: --set admin.password=<value>
karaf:
enabled: true
# Prometheus Operator (kube-prometheus-stack)
# Requires: openHAB Metrics addon installed via UI
metrics:
enabled: true
podAnnotations:
enabled: false
serviceMonitor:
enabled: true
interval: 60s
additionalLabels:
release: prometheus
env:
TZ: 'Europe/Berlin'
EXTRA_JAVA_OPTS: '-Xms512m -Xmx1536m'
persistence:
userdata:
size: 20Gi
storageClass: 'fast-ssd'
conf:
size: 5Gi
storageClass: 'fast-ssd'
addons:
size: 10Gi
resources:
requests:
cpu: 500m
memory: 768Mi
limits:
cpu: 4000m
memory: 3Gi
Automated Backup
The chart includes an optional CronJob that creates compressed archives of your openHAB data and uploads them to any S3-compatible object storage using the MinIO client (mc).
How It Works
The backup job runs as two containers sharing an emptyDir volume:
- `backup` init container (alpine) — tars selected directories and writes the archive to `/tmp`
- `upload` container (helmforge/mc) — picks up the archive and uploads it to S3
Both containers run as UID/GID 9001 to match openHAB’s PVC ownership.
Enabling Backup
backup:
enabled: true
schedule: '0 3 * * *' # Daily at 03:00 UTC
s3:
endpoint: 'https://minio.example.com'
bucket: 'openhab-backups'
prefix: 'prod'
accessKey: 'AKIAEXAMPLE'
secretKey: 'supersecretkey'
Using an Existing Secret
Avoid storing credentials in values by referencing a pre-created Secret:
kubectl create secret generic my-s3-creds \
--from-literal=access-key=AKIAEXAMPLE \
--from-literal=secret-key=supersecretkey \
-n openhab
backup:
enabled: true
s3:
endpoint: 'https://minio.example.com'
bucket: 'openhab-backups'
existingSecret: 'my-s3-creds'
What Gets Backed Up
| Directory | Default | Description |
|---|---|---|
| /openhab/userdata | ✅ | JSONDB, persistence data, rules state |
| /openhab/conf | ✅ | Items, things, sitemaps, rules files |
Always excluded from userdata: logs/, tmp/, cache/ (ephemeral — not needed for restore).
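The exclusion behavior can be reproduced locally with plain `tar`. This self-contained demo stages a throwaway tree mirroring the documented layout; the actual backup container's script may differ in details:

```shell
# Stage a throwaway tree mirroring the documented layout
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/openhab/userdata/jsondb" "$ROOT/openhab/userdata/logs" "$ROOT/openhab/conf"
echo '{}' > "$ROOT/openhab/userdata/jsondb/auth.json"
echo 'noise' > "$ROOT/openhab/userdata/logs/openhab.log"
echo 'Switch Demo "Demo"' > "$ROOT/openhab/conf/demo.items"

# Archive userdata + conf, excluding the ephemeral directories
ARCHIVE="$ROOT/openhab-backup-test.tar.gz"
tar -czf "$ARCHIVE" \
    --exclude='openhab/userdata/logs' \
    --exclude='openhab/userdata/tmp' \
    --exclude='openhab/userdata/cache' \
    -C "$ROOT" openhab/userdata openhab/conf

tar -tzf "$ARCHIVE"   # jsondb and conf are present; logs/ is not
```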
If you manage /openhab/conf via ConfigMaps (GitOps), you can skip it:
backup:
include:
userdata: true
conf: false
S3 Compatibility
The uploader (helmforge/mc) is compatible with any S3-compatible service:
| Provider | Endpoint format |
|---|---|
| MinIO | https://minio.example.com |
| AWS S3 | https://s3.amazonaws.com |
| Cloudflare R2 | https://<account>.r2.cloudflarestorage.com |
| Backblaze B2 | https://s3.<region>.backblazeb2.com |
| DigitalOcean Spaces | https://<region>.digitaloceanspaces.com |
Archive Naming
Archives follow this pattern: <archivePrefix>-backup-<YYYY-MM-DD-HHmmss>.tar.gz
Default: openhab-backup-2025-01-15-030000.tar.gz
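The timestamp component is plain UTC date formatting. A sketch of how the name is assembled (the real job's script may differ):

```shell
PREFIX="openhab"   # backup.archivePrefix
STAMP="$(date -u +%Y-%m-%d-%H%M%S)"
ARCHIVE="${PREFIX}-backup-${STAMP}.tar.gz"
echo "$ARCHIVE"
```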
Restore Process
- Download the archive from your S3 bucket.
- Scale down openHAB (required — it holds locks on its data):
kubectl scale statefulset my-openhab -n openhab --replicas=0
- Extract via a temporary pod:
kubectl run restore --rm -it --image=alpine --restart=Never \
--overrides='{"spec":{"volumes":[{"name":"ud","persistentVolumeClaim":{"claimName":"my-openhab-userdata"}}],"containers":[{"name":"r","image":"alpine","command":["sh"],"stdin":true,"tty":true,"volumeMounts":[{"name":"ud","mountPath":"/openhab/userdata"}]}]}}' \
-- sh
# Inside the pod:
# tar -xzf /path/to/openhab-backup-<timestamp>.tar.gz -C /
- Scale back to 1:
kubectl scale statefulset my-openhab -n openhab --replicas=1
Configuration Reference
Image
| Parameter | Description | Default |
|---|---|---|
| image.repository | Image repository | docker.io/openhab/openhab |
| image.tag | Image tag | 4.2.2 |
| image.pullPolicy | Pull policy | IfNotPresent |
Workload
| Parameter | Description | Default |
|---|---|---|
| replicaCount | Must be 1 — clustering not supported | 1 |
| podSecurityContext.runAsUser | UID (required by openHAB image) | 9001 |
| podSecurityContext.runAsGroup | GID (required by openHAB image) | 9001 |
| podSecurityContext.fsGroup | fsGroup for PVC ownership | 9001 |
Service
| Parameter | Description | Default |
|---|---|---|
| service.type | Service type | ClusterIP |
| service.port | HTTP port | 8080 |
Persistence
| Parameter | Description | Default |
|---|---|---|
| persistence.userdata.enabled | Enable userdata PVC | true |
| persistence.userdata.size | PVC size | 5Gi |
| persistence.userdata.storageClass | Storage class | "" |
| persistence.userdata.existingClaim | Use existing PVC | "" |
| persistence.conf.enabled | Enable conf PVC | true |
| persistence.conf.size | PVC size | 1Gi |
| persistence.addons.enabled | Enable addons PVC | true |
| persistence.addons.size | PVC size | 2Gi |
ConfigMaps
| Parameter | Description | Default |
|---|---|---|
| configMaps.sitemaps.enabled | Enable sitemaps ConfigMap | false |
| configMaps.sitemaps.files | Map of filename → content | {} |
| configMaps.things.enabled | Enable things ConfigMap | false |
| configMaps.things.files | Map of filename → content | {} |
| configMaps.items.enabled | Enable items ConfigMap | false |
| configMaps.items.files | Map of filename → content | {} |
Metrics
| Parameter | Description | Default |
|---|---|---|
| metrics.enabled | Enable Prometheus metrics | false |
| metrics.podAnnotations.enabled | Add prometheus.io/* pod annotations | true |
| metrics.serviceMonitor.enabled | Create ServiceMonitor | false |
| metrics.serviceMonitor.namespace | ServiceMonitor namespace | release namespace |
| metrics.serviceMonitor.interval | Scrape interval | 60s |
| metrics.serviceMonitor.scrapeTimeout | Scrape timeout | 10s |
| metrics.serviceMonitor.additionalLabels | Extra labels on ServiceMonitor | {} |
| metrics.serviceMonitor.relabelings | Relabeling rules | [] |
| metrics.serviceMonitor.metricRelabelings | Metric relabeling rules | [] |
Environment
| Parameter | Description | Default |
|---|---|---|
| env.TZ | Timezone | UTC |
| env.EXTRA_JAVA_OPTS | Extra JVM options | "" |
| env.OPENHAB_HTTP_PORT | HTTP port | 8080 |
| env.OPENHAB_HTTPS_PORT | HTTPS port | 8443 |
Optional Components
| Parameter | Description | Default |
|---|---|---|
| karaf.enabled | Enable Karaf SSH console | false |
| karaf.service.port | Karaf SSH port | 8101 |
| admin.secretEnabled | Create admin credentials Secret | false |
| admin.username | Admin username (stored in Secret) | admin |
| admin.password | Admin password (stored in Secret) | "" |
| admin.existingSecret | Use existing Secret | "" |
Backup
| Parameter | Description | Default |
|---|---|---|
| backup.enabled | Enable automated backup CronJob | false |
| backup.schedule | Cron schedule | 0 3 * * * |
| backup.suspend | Suspend the CronJob without deleting it | false |
| backup.concurrencyPolicy | CronJob concurrency policy | Forbid |
| backup.successfulJobsHistoryLimit | Successful job history to retain | 3 |
| backup.failedJobsHistoryLimit | Failed job history to retain | 3 |
| backup.backoffLimit | Job backoff limit | 1 |
| backup.archivePrefix | Archive filename prefix | openhab |
| backup.include.userdata | Back up /openhab/userdata | true |
| backup.include.conf | Back up /openhab/conf | true |
| backup.images.utility.repository | Backup utility image | docker.io/library/alpine |
| backup.images.utility.tag | Backup utility tag | 3.22 |
| backup.images.uploader.repository | S3 uploader image | docker.io/helmforge/mc |
| backup.images.uploader.tag | S3 uploader tag | 1.0.0 |
| backup.resources | Resource requests/limits for backup containers | {} |
| backup.s3.endpoint | S3-compatible endpoint URL | "" |
| backup.s3.bucket | Target bucket name | "" |
| backup.s3.prefix | Key prefix within the bucket | openhab |
| backup.s3.accessKey | S3 access key | "" |
| backup.s3.secretKey | S3 secret key | "" |
| backup.s3.existingSecret | Existing Secret name (keys: access-key, secret-key) | "" |
Troubleshooting
Pod stuck in Init state for several minutes
This is normal on first boot. openHAB loads all OSGi bundles at startup, which takes 60-120 seconds. The chart configures a startup probe with a 10-minute window, so wait for it to elapse before investigating.
kubectl get pod -l app.kubernetes.io/name=openhab \
-o jsonpath='{.items[0].status.containerStatuses[0].started}'
CrashLoopBackOff after PVC reuse
Incorrect permissions on a reused PVC will prevent openHAB from writing to its directories.
kubectl logs -l app.kubernetes.io/name=openhab --previous
# Fix permissions via a temporary pod
kubectl run fix-perms --image=busybox --restart=Never \
--overrides='{"spec":{"volumes":[{"name":"data","persistentVolumeClaim":{"claimName":"my-openhab-userdata"}}],"containers":[{"name":"fix","image":"busybox","command":["chown","-R","9001:9001","/data"],"volumeMounts":[{"name":"data","mountPath":"/data"}]}]}}' \
-- chown -R 9001:9001 /data
ConfigMap changes not reflected
Kubernetes syncs ConfigMaps to pods every ~60 seconds (kubelet --sync-frequency). After the file appears on disk, openHAB applies it within 2-5 seconds. If still not reflected after 2-3 minutes:
# Verify the file exists in the pod
kubectl exec -l app.kubernetes.io/name=openhab -- ls /openhab/conf/sitemaps/
# Check openHAB log for parsing errors
kubectl exec -l app.kubernetes.io/name=openhab -- \
tail -50 /openhab/userdata/logs/openhab.log
403 Forbidden on web UI
The first-boot admin setup wizard has not been completed. Navigate to the UI root and create your administrator account.
Web UI works but real-time updates are broken
The /rest/events SSE endpoint requires long-lived connections. Verify the websocket annotations are present on your Ingress:
kubectl describe ingress my-openhab
# Look for: nginx.ingress.kubernetes.io/proxy-read-timeout: 3600
Prometheus metrics endpoint returns 404
The Metrics addon is not installed or not yet active. Install it via Settings → Add-on Store → Integrations → Metrics.
# Verify the endpoint manually
kubectl port-forward svc/my-openhab 8080:8080
curl -s http://localhost:8080/rest/metrics/prometheus | head -5
No metrics in Prometheus (annotations mode)
Verify the pod has the scrape annotations:
kubectl get pod -l app.kubernetes.io/name=openhab \
-o jsonpath='{.items[0].metadata.annotations}' | jq .
ServiceMonitor not discovered by Prometheus
The additionalLabels on the ServiceMonitor must match the serviceMonitorSelector of your Prometheus instance:
kubectl get prometheus -o jsonpath='{.items[0].spec.serviceMonitorSelector}'
# Then set matching labels in metrics.serviceMonitor.additionalLabels
openHAB log shows bundle resolution errors on startup
Delete the OSGi bundle cache (rebuilt automatically on next start):
kubectl exec -l app.kubernetes.io/name=openhab -- \
rm -rf /openhab/userdata/cache
kubectl rollout restart statefulset my-openhab
High memory usage
Tune the JVM heap. Keep resource limits above the -Xmx value to avoid OOM kills:
env:
EXTRA_JAVA_OPTS: '-Xms256m -Xmx1024m'
resources:
limits:
memory: 1536Mi # must be > Xmx
Pod evicted due to storage pressure
The userdata volume accumulates logs over time. Increase the PVC size or configure log rotation:
persistence:
userdata:
size: 20Gi
Additional Resources
- openHAB Documentation
- openHAB Metrics Addon
- openHAB Community Forum
- openHAB Add-ons
- Docker Hub — openhab/openhab
- Chart Source
- Report an Issue