openHAB

Helm chart for deploying openHAB on Kubernetes using the official openhab/openhab Docker image. openHAB is an open-source home automation platform that integrates with hundreds of smart home technologies — from Zigbee and Z-Wave to MQTT, KNX, and cloud services.

Single-instance only: openHAB does not support horizontal scaling. This chart enforces replicaCount: 1 and will fail fast if you attempt to set it higher, preventing accidental data corruption from concurrent writes to shared PVCs.
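Chart internals vary, but this kind of guard is typically a one-line check in a Helm template. A sketch of how it can be done (illustrative wording; the chart's actual template may differ):

```yaml
{{- /* Illustrative fail-fast guard: reject any replicaCount other than 1 */ -}}
{{- if ne (int .Values.replicaCount) 1 }}
{{- fail "openHAB does not support clustering: replicaCount must be 1" }}
{{- end }}
```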

Key Features

  • StatefulSet workload for stable, predictable PVC attachment on every restart
  • Three persistent volumes (userdata, conf, addons) with independent sizing and storage class configuration
  • ConfigMap live-reload for sitemaps, things, and items — changes applied automatically within seconds, no pod restart required
  • Correct security context — fsGroup: 9001 only; runAsUser/runAsGroup intentionally unset so the entrypoint can bootstrap user creation before dropping privileges via gosu
  • Smart health probes via /rest/uuid (available in openHAB 4.x, no auth required) with a 10-minute startup window for OSGi bundle loading
  • Prometheus metrics via /rest/metrics/prometheus — pod annotations and ServiceMonitor supported
  • Optional Ingress with websocket annotation guidance for the /rest/events SSE endpoint
  • Optional Karaf SSH console for administrative access to the OSGi runtime
  • Fail-fast validation with clear error messages for common misconfigurations

Installation

HTTPS Repository

helm repo add helmforge https://repo.helmforge.dev
helm repo update
helm install my-openhab helmforge/openhab

OCI Registry

helm install my-openhab oci://ghcr.io/helmforgedev/helm/openhab --version 0.1.0

Quick Start

Minimal Deployment

Deploy openHAB with persistent storage and port-forward access:

helm install my-openhab helmforge/openhab
kubectl wait --namespace default pod \
  -l app.kubernetes.io/name=openhab \
  --for=condition=Ready --timeout=300s
kubectl port-forward svc/my-openhab 8080:8080

Open http://127.0.0.1:8080 and complete the first-boot admin setup wizard.

Custom Timezone and Resources

# values.yaml
env:
  TZ: 'Europe/Berlin'

resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: 2000m
    memory: 2Gi

helm install my-openhab helmforge/openhab -f values.yaml

First Boot — Admin Setup

openHAB does not support injecting admin credentials via environment variables. On first launch, openHAB presents a setup wizard where you create the administrator account.

Steps:

  1. Wait for the pod to be Ready (may take 60-120 seconds on first boot)
  2. Navigate to the web UI (http://localhost:8080 if using port-forward)
  3. Click Create an administrator account
  4. Enter username and password
  5. Complete the optional initial configuration steps

Credentials are stored persistently in /openhab/userdata/jsondb/auth.json and survive pod restarts as long as the userdata PVC exists.

Persistent Storage

openHAB requires three persistent directories. The chart creates a PVC for each by default.

| Directory | PVC | Default Size | Contents |
|---|---|---|---|
| /openhab/userdata | <release>-userdata | 5Gi | Runtime state, JSONDB, logs, persistence data |
| /openhab/conf | <release>-conf | 1Gi | Items, things, rules, sitemaps, services |
| /openhab/addons | <release>-addons | 2Gi | Drop-in JAR bindings/addons |

Custom Sizes and Storage Class

persistence:
  userdata:
    size: 20Gi
    storageClass: 'fast-ssd'
  conf:
    size: 5Gi
    storageClass: 'fast-ssd'
  addons:
    size: 10Gi

Using Existing PVCs

persistence:
  userdata:
    existingClaim: my-openhab-userdata
  conf:
    existingClaim: my-openhab-conf
  addons:
    existingClaim: my-openhab-addons

ConfigMap Live Reload

This is the key differentiator of this chart. openHAB natively monitors /openhab/conf/ using a file watcher. Any change to .sitemap, .things, or .items files is applied automatically within 2-5 seconds — no pod restart required.

The chart mounts Kubernetes ConfigMaps into the conf PVC using subPath, so ConfigMap-managed files coexist with any existing files without overwriting them.

How It Works

Helm values  →  ConfigMap (K8s)  →  subPath mount  →  openHAB file watcher  →  Live reload (~2-5s)
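In manifest terms, a subPath mount that layers one ConfigMap-managed file over the conf volume looks roughly like this (an abridged sketch with illustrative names; the chart generates the real manifest):

```yaml
volumeMounts:
  - name: conf                        # PVC backing /openhab/conf
    mountPath: /openhab/conf
  - name: sitemaps-configmap          # ConfigMap projected as a single file
    mountPath: /openhab/conf/sitemaps/myhome.sitemap
    subPath: myhome.sitemap           # only this file is overlaid; other PVC content is untouched
volumes:
  - name: sitemaps-configmap
    configMap:
      name: my-openhab-sitemaps
```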

Sitemaps

Sitemaps define the UI layout for openHAB’s Basic UI and the mobile apps:

configMaps:
  sitemaps:
    enabled: true
    files:
      myhome.sitemap: |
        sitemap myhome label="My Home" {
          Frame label="Ground Floor" {
            Switch item=Light_GF_Corridor label="Corridor Light"
            Switch item=Light_GF_Kitchen  label="Kitchen Light"
            Text   item=Temperature_GF    label="Temperature [%.1f °C]"
          }
          Frame label="Climate" {
            Text item=Temperature_Outdoor label="Outdoor [%.1f °C]"
            Text item=Humidity_Outdoor   label="Humidity [%d %%]"
          }
        }

Things

Things define physical devices and their channels:

configMaps:
  things:
    enabled: true
    files:
      network.things: |
        Thing network:pingdevice:router [
          hostname="192.168.1.1",
          retry=1,
          timeout=5000,
          refreshInterval=60000
        ]
      mqtt.things: |
        Bridge mqtt:broker:mybroker [ host="mosquitto", port=1883, secure=false ] {
          Thing mqtt:topic:mysensor "Temperature Sensor" {
            Channels:
              Type number : temperature [ stateTopic="home/sensor/temperature" ]
              Type number : humidity    [ stateTopic="home/sensor/humidity" ]
          }
        }

Items

Items define logical entities visible in the UI and used in rules:

configMaps:
  items:
    enabled: true
    files:
      lights.items: |
        Switch Light_GF_Corridor "Corridor Light"  <light>
        Switch Light_GF_Kitchen  "Kitchen Light"   <light>
      climate.items: |
        Number:Temperature Temperature_GF       "Ground Floor [%.1f %unit%]" <temperature>
        Number:Temperature Temperature_Outdoor  "Outdoor [%.1f %unit%]"      <temperature>
        Number:Dimensionless Humidity_Outdoor   "Humidity [%d %%]"           <humidity>

Applying Changes

After updating ConfigMap values, run helm upgrade. openHAB picks up the changes automatically:

helm upgrade my-openhab helmforge/openhab -f values.yaml
# No restart needed — openHAB reloads configuration within seconds

Prometheus Metrics

openHAB exposes Prometheus-format metrics via the Metrics addon at:

GET /rest/metrics/prometheus   (port 8080, no authentication required)

Step 1 — Install the Metrics Addon

Install the addon once via the openHAB UI:

Settings → Add-on Store → Integrations → Metrics → Install

Or via the Karaf console (if enabled):

kubectl port-forward svc/my-openhab-karaf 8101:8101
ssh -p 8101 openhab@localhost
# Inside Karaf:
feature:install openhab-io-metrics

Step 2 — Enable Metrics in Chart Values

Mode 1: Pod annotations — works with any Prometheus that watches pod annotations:

metrics:
  enabled: true
  podAnnotations:
    enabled: true

This adds the following annotations to the pod:

prometheus.io/scrape: 'true'
prometheus.io/path: /rest/metrics/prometheus
prometheus.io/port: '8080'

Mode 2: ServiceMonitor — for Prometheus Operator / kube-prometheus-stack:

metrics:
  enabled: true
  podAnnotations:
    enabled: false
  serviceMonitor:
    enabled: true
    interval: 60s
    scrapeTimeout: 10s
    # Must match your Prometheus instance's serviceMonitorSelector labels
    additionalLabels:
      release: prometheus

Verify the Endpoint

kubectl port-forward svc/my-openhab 8080:8080
curl -s http://localhost:8080/rest/metrics/prometheus | head -20
# Expected: Prometheus text format with jvm_*, openhab_* metrics

Metrics Exposed

| Category | Metrics |
|---|---|
| openHAB events | openhab_events_total per topic |
| Bundle states | openhab_bundle_state (32 = active) |
| Thing states | openhab_thing_state (online/offline) |
| Rule executions | openhab_rule_runs_total |
| Threadpool | openhab_threadpool_* (size, active, queue) |
| JVM memory | jvm_memory_used_bytes, jvm_gc_pause_seconds |
| JVM threads | jvm_threads_*, jvm_classes_loaded |
| Process | process_cpu_usage, process_uptime_seconds |
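As an example of putting these metrics to work, a Prometheus Operator alert on Things dropping offline could look like the following sketch. The metric name comes from the table above; the state encoding and label set are assumptions you should verify against your actual scrape output:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: openhab-alerts
spec:
  groups:
    - name: openhab
      rules:
        - alert: OpenhabThingOffline
          # Assumes the exporter encodes offline as 0 — confirm before relying on this.
          expr: openhab_thing_state == 0
          for: 5m
          labels:
            severity: warning
```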

Custom Relabelings (ServiceMonitor)

metrics:
  serviceMonitor:
    enabled: true
    relabelings:
      - sourceLabels: [__meta_kubernetes_pod_label_app_kubernetes_io_instance]
        targetLabel: instance
    metricRelabelings:
      - sourceLabels: [__name__]
        regex: 'jvm_.*'
        action: keep

Ingress

openHAB’s /rest/events endpoint uses Server-Sent Events (SSE), which requires long-lived HTTP connections. When using nginx Ingress, add the following annotations for proper support:

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '3600'
    nginx.ingress.kubernetes.io/proxy-http-version: '1.1'
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
  hosts:
    - host: openhab.yourdomain.com
      paths:
        - path: /
          pathType: Prefix

With TLS (cert-manager)

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '3600'
  hosts:
    - host: openhab.yourdomain.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: openhab-tls
      hosts:
        - openhab.yourdomain.com

Security

Security Context

openHAB runs as UID/GID 9001 by default (enforced by the official image). The chart configures this correctly out of the box:

podSecurityContext:
  runAsUser: 9001
  runAsGroup: 9001
  fsGroup: 9001 # Ensures PVC volumes are group-writable by 9001

securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: false # openHAB writes to internal dirs at runtime
  capabilities:
    drop:
      - ALL

Why readOnlyRootFilesystem: false? openHAB (OSGi/Karaf) writes to several internal directories at runtime (/openhab/runtime/, /openhab/userdata/tmp/, /openhab/userdata/cache/). These cannot be relocated. Mount your persistent data on the three PVCs to ensure durability across restarts.

Karaf SSH Console

The Apache Karaf admin console is disabled by default. When enabled, always access it via kubectl port-forward — never expose port 8101 publicly:

karaf:
  enabled: true
  service:
    type: ClusterIP # Never NodePort or LoadBalancer
    port: 8101

kubectl port-forward svc/my-openhab-karaf 8101:8101
ssh -p 8101 openhab@localhost
# Default Karaf password: habopen

Admin Credentials Secret

For operational reference (documentation, tooling), you can store the admin credentials in a Kubernetes Secret:

admin:
  secretEnabled: true
  username: admin
  password: 'strongpassword' # Set via --set or external secret manager

Important: This Secret is for reference only. It does NOT automatically configure openHAB. You still need to complete the first-boot wizard with the same credentials.

Retrieve the stored password:

kubectl get secret my-openhab-admin \
  -o jsonpath="{.data.password}" | base64 --decode

Deployment Scenarios

Scenario 1: Minimal Home Lab

# Simple single-node home deployment
image:
  tag: '4.2.2'

env:
  TZ: 'Europe/Berlin'

persistence:
  userdata:
    size: 10Gi
  conf:
    size: 2Gi
  addons:
    size: 5Gi

Scenario 2: GitOps-Managed Configuration

Manage all openHAB configuration declaratively via Helm — ideal for teams or reproducible setups:

env:
  TZ: 'America/Sao_Paulo'

configMaps:
  sitemaps:
    enabled: true
    files:
      default.sitemap: |
        sitemap default label="openHAB" {
          Frame label="Overview" {
            Text item=gTemperature label="Temperature [%.1f °C]"
          }
        }
  things:
    enabled: true
    files:
      network.things: |
        Thing network:pingdevice:gateway [ hostname="192.168.1.1" ]
  items:
    enabled: true
    files:
      home.items: |
        Number:Temperature gTemperature "Temperature [%.1f %unit%]" <temperature>

persistence:
  userdata:
    size: 10Gi
  conf:
    size: 2Gi

Scenario 3: Full Production with Ingress and Metrics

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '3600'
    nginx.ingress.kubernetes.io/proxy-http-version: '1.1'
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
  hosts:
    - host: openhab.yourdomain.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: openhab-tls
      hosts:
        - openhab.yourdomain.com

admin:
  secretEnabled: true
  username: admin
  password: '' # Set via: --set admin.password=<value>

karaf:
  enabled: true

# Prometheus Operator (kube-prometheus-stack)
# Requires: openHAB Metrics addon installed via UI
metrics:
  enabled: true
  podAnnotations:
    enabled: false
  serviceMonitor:
    enabled: true
    interval: 60s
    additionalLabels:
      release: prometheus

env:
  TZ: 'Europe/Berlin'
  EXTRA_JAVA_OPTS: '-Xms512m -Xmx1536m'

persistence:
  userdata:
    size: 20Gi
    storageClass: 'fast-ssd'
  conf:
    size: 5Gi
    storageClass: 'fast-ssd'
  addons:
    size: 10Gi

resources:
  requests:
    cpu: 500m
    memory: 768Mi
  limits:
    cpu: 4000m
    memory: 3Gi

Automated Backup

The chart includes an optional CronJob that creates compressed archives of your openHAB data and uploads them to any S3-compatible object storage using the MinIO client (mc).

How It Works

The backup job runs as two containers sharing an emptyDir volume:

  1. backup initContainer (alpine) — tars selected directories and writes the archive to /tmp
  2. upload container (helmforge/mc) — picks up the archive and uploads it to S3

Both containers run as UID/GID 9001 to match openHAB’s PVC ownership.
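Structurally, the generated Job pod looks roughly like this (an abridged sketch; the chart produces the real manifest, and names may differ):

```yaml
spec:
  securityContext:
    runAsUser: 9001                  # match openHAB's PVC ownership
    runAsGroup: 9001
    fsGroup: 9001
  initContainers:
    - name: backup                   # tars the selected directories into /tmp
      image: docker.io/library/alpine:3.22
      volumeMounts:
        - { name: userdata, mountPath: /openhab/userdata, readOnly: true }
        - { name: scratch, mountPath: /tmp }
  containers:
    - name: upload                   # picks up the archive and uploads it via mc
      image: docker.io/helmforge/mc:1.0.0
      volumeMounts:
        - { name: scratch, mountPath: /tmp }
  volumes:
    - name: scratch
      emptyDir: {}                   # shared between the two containers
```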

Enabling Backup

backup:
  enabled: true
  schedule: '0 3 * * *' # Daily at 03:00 UTC

  s3:
    endpoint: 'https://minio.example.com'
    bucket: 'openhab-backups'
    prefix: 'prod'
    accessKey: 'AKIAEXAMPLE'
    secretKey: 'supersecretkey'

Using an Existing Secret

Avoid storing credentials in values by referencing a pre-created Secret:

kubectl create secret generic my-s3-creds \
  --from-literal=access-key=AKIAEXAMPLE \
  --from-literal=secret-key=supersecretkey \
  -n openhab

backup:
  enabled: true
  s3:
    endpoint: 'https://minio.example.com'
    bucket: 'openhab-backups'
    existingSecret: 'my-s3-creds'

What Gets Backed Up

| Directory | Default | Description |
|---|---|---|
| /openhab/userdata | true | JSONDB, persistence data, rules state |
| /openhab/conf | true | Items, things, sitemaps, rules files |

Always excluded from userdata: logs/, tmp/, cache/ (ephemeral — not needed for restore).
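The exclusions map onto plain tar --exclude options. A minimal sketch of the archive step (the helper name is illustrative, not the chart's actual script):

```shell
# Sketch of the backup archive step: tar userdata and conf while skipping
# the ephemeral userdata subdirectories (logs/, tmp/, cache/).
make_backup_archive() {
  src="$1"   # directory containing userdata/ and conf/
  out="$2"   # destination .tar.gz path
  tar -czf "$out" \
      --exclude='userdata/logs' \
      --exclude='userdata/tmp' \
      --exclude='userdata/cache' \
      -C "$src" userdata conf
}
```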

If you manage /openhab/conf via ConfigMaps (GitOps), you can skip it:

backup:
  include:
    userdata: true
    conf: false

S3 Compatibility

The uploader (helmforge/mc) is compatible with any S3-compatible service:

| Provider | Endpoint format |
|---|---|
| MinIO | https://minio.example.com |
| AWS S3 | https://s3.amazonaws.com |
| Cloudflare R2 | https://<account>.r2.cloudflarestorage.com |
| Backblaze B2 | https://s3.<region>.backblazeb2.com |
| DigitalOcean Spaces | https://<region>.digitaloceanspaces.com |

Archive Naming

Archives follow this pattern: <archivePrefix>-backup-<YYYY-MM-DD-HHmmss>.tar.gz

Default: openhab-backup-2025-01-15-030000.tar.gz
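The naming logic can be reproduced with date(1). A sketch (the helper name is illustrative, not the chart's script):

```shell
# Build a backup archive name matching <prefix>-backup-<YYYY-MM-DD-HHmmss>.tar.gz
backup_archive_name() {
  prefix="$1"
  # -u uses UTC, matching the CronJob schedule's timezone
  printf '%s-backup-%s.tar.gz\n' "$prefix" "$(date -u +%Y-%m-%d-%H%M%S)"
}
```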

Restore Process

  1. Download the archive from your S3 bucket.

  2. Scale down openHAB (required — it holds locks on its data):

kubectl scale statefulset my-openhab -n openhab --replicas=0

  3. Extract via a temporary pod:

kubectl run restore --rm -it --image=alpine --restart=Never \
  --overrides='{"spec":{"volumes":[{"name":"ud","persistentVolumeClaim":{"claimName":"my-openhab-userdata"}}],"containers":[{"name":"r","image":"alpine","command":["sh"],"stdin":true,"tty":true,"volumeMounts":[{"name":"ud","mountPath":"/openhab/userdata"}]}]}}' \
  -- sh

# Inside the pod:
# tar -xzf /path/to/openhab-backup-<timestamp>.tar.gz -C /

  4. Scale back up to a single replica:

kubectl scale statefulset my-openhab -n openhab --replicas=1

Configuration Reference

Image

| Parameter | Description | Default |
|---|---|---|
| image.repository | Image repository | docker.io/openhab/openhab |
| image.tag | Image tag | 4.2.2 |
| image.pullPolicy | Pull policy | IfNotPresent |

Workload

| Parameter | Description | Default |
|---|---|---|
| replicaCount | Must be 1 — clustering not supported | 1 |
| podSecurityContext.runAsUser | UID (required by openHAB image) | 9001 |
| podSecurityContext.runAsGroup | GID (required by openHAB image) | 9001 |
| podSecurityContext.fsGroup | fsGroup for PVC ownership | 9001 |

Service

| Parameter | Description | Default |
|---|---|---|
| service.type | Service type | ClusterIP |
| service.port | HTTP port | 8080 |

Persistence

| Parameter | Description | Default |
|---|---|---|
| persistence.userdata.enabled | Enable userdata PVC | true |
| persistence.userdata.size | PVC size | 5Gi |
| persistence.userdata.storageClass | Storage class | "" |
| persistence.userdata.existingClaim | Use existing PVC | "" |
| persistence.conf.enabled | Enable conf PVC | true |
| persistence.conf.size | PVC size | 1Gi |
| persistence.addons.enabled | Enable addons PVC | true |
| persistence.addons.size | PVC size | 2Gi |

ConfigMaps

| Parameter | Description | Default |
|---|---|---|
| configMaps.sitemaps.enabled | Enable sitemaps ConfigMap | false |
| configMaps.sitemaps.files | Map of filename → content | {} |
| configMaps.things.enabled | Enable things ConfigMap | false |
| configMaps.things.files | Map of filename → content | {} |
| configMaps.items.enabled | Enable items ConfigMap | false |
| configMaps.items.files | Map of filename → content | {} |

Metrics

| Parameter | Description | Default |
|---|---|---|
| metrics.enabled | Enable Prometheus metrics | false |
| metrics.podAnnotations.enabled | Add prometheus.io/* pod annotations | true |
| metrics.serviceMonitor.enabled | Create ServiceMonitor | false |
| metrics.serviceMonitor.namespace | ServiceMonitor namespace | release namespace |
| metrics.serviceMonitor.interval | Scrape interval | 60s |
| metrics.serviceMonitor.scrapeTimeout | Scrape timeout | 10s |
| metrics.serviceMonitor.additionalLabels | Extra labels on ServiceMonitor | {} |
| metrics.serviceMonitor.relabelings | Relabeling rules | [] |
| metrics.serviceMonitor.metricRelabelings | Metric relabeling rules | [] |

Environment

| Parameter | Description | Default |
|---|---|---|
| env.TZ | Timezone | UTC |
| env.EXTRA_JAVA_OPTS | Extra JVM options | "" |
| env.OPENHAB_HTTP_PORT | HTTP port | 8080 |
| env.OPENHAB_HTTPS_PORT | HTTPS port | 8443 |

Optional Components

| Parameter | Description | Default |
|---|---|---|
| karaf.enabled | Enable Karaf SSH console | false |
| karaf.service.port | Karaf SSH port | 8101 |
| admin.secretEnabled | Create admin credentials Secret | false |
| admin.username | Admin username (stored in Secret) | admin |
| admin.password | Admin password (stored in Secret) | "" |
| admin.existingSecret | Use existing Secret | "" |

Backup

| Parameter | Description | Default |
|---|---|---|
| backup.enabled | Enable automated backup CronJob | false |
| backup.schedule | Cron schedule | 0 3 * * * |
| backup.suspend | Suspend the CronJob without deleting it | false |
| backup.concurrencyPolicy | CronJob concurrency policy | Forbid |
| backup.successfulJobsHistoryLimit | Successful job history to retain | 3 |
| backup.failedJobsHistoryLimit | Failed job history to retain | 3 |
| backup.backoffLimit | Job backoff limit | 1 |
| backup.archivePrefix | Archive filename prefix | openhab |
| backup.include.userdata | Back up /openhab/userdata | true |
| backup.include.conf | Back up /openhab/conf | true |
| backup.images.utility.repository | Backup utility image | docker.io/library/alpine |
| backup.images.utility.tag | Backup utility tag | 3.22 |
| backup.images.uploader.repository | S3 uploader image | docker.io/helmforge/mc |
| backup.images.uploader.tag | S3 uploader tag | 1.0.0 |
| backup.resources | Resource requests/limits for backup containers | {} |
| backup.s3.endpoint | S3-compatible endpoint URL | "" |
| backup.s3.bucket | Target bucket name | "" |
| backup.s3.prefix | Key prefix within the bucket | openhab |
| backup.s3.accessKey | S3 access key | "" |
| backup.s3.secretKey | S3 secret key | "" |
| backup.s3.existingSecret | Existing Secret name (keys: access-key, secret-key) | "" |

Troubleshooting

Pod stuck in Init state for several minutes

This is normal on first boot. openHAB loads all OSGi bundles at startup, which typically takes 60-120 seconds. The chart configures a startup probe with a 10-minute window, so give the pod time before investigating.

kubectl get pod -l app.kubernetes.io/name=openhab \
  -o jsonpath='{.items[0].status.containerStatuses[0].started}'

CrashLoopBackOff after PVC reuse

Incorrect permissions on a reused PVC will prevent openHAB from writing to its directories.

kubectl logs -l app.kubernetes.io/name=openhab --previous

# Fix permissions via a temporary pod
kubectl run fix-perms --image=busybox --restart=Never \
  --overrides='{"spec":{"volumes":[{"name":"data","persistentVolumeClaim":{"claimName":"my-openhab-userdata"}}],"containers":[{"name":"fix","image":"busybox","command":["chown","-R","9001:9001","/data"],"volumeMounts":[{"name":"data","mountPath":"/data"}]}]}}'

ConfigMap changes not reflected

Kubernetes syncs ConfigMaps to pods every ~60 seconds (kubelet --sync-frequency). After the file appears on disk, openHAB applies it within 2-5 seconds. If still not reflected after 2-3 minutes:

# Verify the file exists in the pod
kubectl exec -l app.kubernetes.io/name=openhab -- ls /openhab/conf/sitemaps/

# Check openHAB log for parsing errors
kubectl exec -l app.kubernetes.io/name=openhab -- \
  tail -50 /openhab/userdata/logs/openhab.log

403 Forbidden on web UI

The first-boot admin setup wizard has not been completed. Navigate to the UI root and create your administrator account.

Web UI works but real-time updates are broken

The /rest/events SSE endpoint requires long-lived connections. Verify the websocket annotations are present on your Ingress:

kubectl describe ingress my-openhab
# Look for: nginx.ingress.kubernetes.io/proxy-read-timeout: 3600

Prometheus metrics endpoint returns 404

The Metrics addon is not installed or not yet active. Install it via Settings → Add-on Store → Integrations → Metrics.

# Verify the endpoint manually
kubectl port-forward svc/my-openhab 8080:8080
curl -s http://localhost:8080/rest/metrics/prometheus | head -5

No metrics in Prometheus (annotations mode)

Verify the pod has the scrape annotations:

kubectl get pod -l app.kubernetes.io/name=openhab \
  -o jsonpath='{.items[0].metadata.annotations}' | jq .

ServiceMonitor not discovered by Prometheus

The additionalLabels on the ServiceMonitor must match the serviceMonitorSelector of your Prometheus instance:

kubectl get prometheus -o jsonpath='{.items[0].spec.serviceMonitorSelector}'
# Then set matching labels in metrics.serviceMonitor.additionalLabels

openHAB log shows bundle resolution errors on startup

Delete the OSGi bundle cache (rebuilt automatically on next start):

kubectl exec -l app.kubernetes.io/name=openhab -- \
  rm -rf /openhab/userdata/cache
kubectl rollout restart statefulset my-openhab

High memory usage

Tune the JVM heap. Keep resource limits above the -Xmx value to avoid OOM kills:

env:
  EXTRA_JAVA_OPTS: '-Xms256m -Xmx1024m'
resources:
  limits:
    memory: 1536Mi # must be > Xmx

Pod evicted due to storage pressure

The userdata volume accumulates logs over time. Increase the PVC size or configure log rotation:

persistence:
  userdata:
    size: 20Gi

Additional Resources

  • Mosquitto — MQTT broker for openHAB smart home integrations
  • MariaDB — Database backend for openHAB JDBC persistence