Is this a critical security issue?
No. This is a startup/availability bug, not a security vulnerability.
Describe the Bug
When deploying the openvoxdb container with a PersistentVolumeClaim (PVC) mounted at
/opt/puppetlabs/server/data/puppetdb/, the JVM fails to start on every fresh deployment.
The OpenVoxDB package configures JVM GC logging via:
-Xlog:gc*:file=/opt/puppetlabs/server/data/puppetdb/logs/puppetdb_gc.log
However, when the PVC is empty (first boot), the logs/ subdirectory does not exist. The
container's entrypoint scripts (10-wait-for-hosts.sh, 20-configure-ssl.sh,
30-certificate-allowlist.sh) do not create this directory before starting the JVM. This
causes a fatal JVM startup error — PuppetDB never starts, and the container enters a
CrashLoopBackOff.
This affects all tested image tags: 8.9.0-main, 8.11.0-main, and 8.12.1-latest.
Workaround: Use the Helm chart's puppetdb.extraInitContainers to create the directory
before the main container starts:
extraInitContainers:
  - name: create-log-dir
    image: busybox:1.37
    command: ["sh", "-c", "mkdir -p /data/logs && chown 999:999 /data/logs"]
    volumeMounts:
      - name: puppetdb-storage
        mountPath: /data
Expected Behavior
The container should create the /opt/puppetlabs/server/data/puppetdb/logs/ directory
(and set correct ownership to UID 999 / puppetdb user) during its entrypoint initialization,
before the JVM is launched. This would allow PuppetDB to start successfully on a fresh PVC
without any external workarounds.
A one-line fix, added to one of the existing entrypoint scripts (e.g.,
30-certificate-allowlist.sh) or to a new script, would suffice:
mkdir -p /opt/puppetlabs/server/data/puppetdb/logs
Steps to Reproduce
- Deploy the openvoxdb container using Kubernetes with a PVC mounted at
/opt/puppetlabs/server/data/puppetdb/ (e.g., via the openvox/puppetserver Helm chart
v10.0.1 with puppetdb.enabled: true)
- Ensure the PVC is empty (fresh deployment, no pre-existing data)
- Set OPENVOXDB_POSTGRES_HOSTNAME to a reachable PostgreSQL instance
- Wait for init containers to pass (pgchecker, wait-puppetserver)
- Observe the puppetdb container crash with a fatal JVM error
- Repeat with any image tag (8.9.0-main, 8.11.0-main, 8.12.1-latest) — same result
Environment
- OpenVoxDB container image: ghcr.io/openvoxproject/openvoxdb:8.12.1-latest
(also reproduced with 8.11.0-main and 8.9.0-main)
- OpenVox Server: ghcr.io/openvoxproject/openvoxserver:8.12.1-main
- Helm chart: openvox/puppetserver v10.0.1
- Kubernetes: GKE v1.33.5-gke.2326000
- Node OS: Container-Optimized OS (cos_containerd)
- Container runtime: containerd 2.0.6
- Storage: GKE standard-rwo (pd.csi.storage.gke.io), ReadWriteOnce PVCs
- PostgreSQL: Cloud SQL PostgreSQL 16 (external, connected via private IP)
Additional Context
The root cause is in the OpenVoxDB Debian package's JVM configuration. The file
/etc/default/puppetdb (or equivalent) contains the -Xlog:gc*:file= argument pointing to
a path under the PVC mount. When the container runs without Kubernetes (e.g., under Docker
Compose with a named volume), the logs/ directory survives first boot because Docker
pre-populates an empty named volume with the image's contents at the mount point. In
Kubernetes, however, a PVC mount overlays the entire directory with an empty volume,
hiding every directory the package installer created.
This is why the issue is Kubernetes-specific: it does not manifest in typical Docker
Compose deployments, where named volumes inherit the packaged directory tree.
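To illustrate the overlay behavior, here is a minimal Pod spec of the kind the Helm chart generates (the Pod and PVC names are hypothetical): mounting an empty PVC at this path hides everything the image shipped underneath it, including logs/:

```yaml
# Hypothetical minimal Pod spec; resource names are illustrative.
# The empty PVC mounted below hides the image's packaged directory
# tree at that path, including the logs/ subdirectory.
apiVersion: v1
kind: Pod
metadata:
  name: openvoxdb-demo
spec:
  containers:
    - name: openvoxdb
      image: ghcr.io/openvoxproject/openvoxdb:8.12.1-latest
      volumeMounts:
        - name: puppetdb-storage
          mountPath: /opt/puppetlabs/server/data/puppetdb
  volumes:
    - name: puppetdb-storage
      persistentVolumeClaim:
        claimName: openvoxdb-data   # hypothetical PVC name
```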
Suggested fix: Add mkdir -p /opt/puppetlabs/server/data/puppetdb/logs to one of the
existing container-entrypoint.d/ scripts, or add a new script
(e.g., 05-create-directories.sh) that runs before the JVM starts.
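A minimal sketch of such a script follows. The filename, the PUPPETDB_LOG_DIR override, and the root/chown guard are illustrative assumptions, not the actual OpenVoxDB entrypoint conventions:

```shell
#!/bin/sh
# Hypothetical /container-entrypoint.d/05-create-directories.sh sketch:
# ensure the GC log directory exists before the JVM parses
# -Xlog:gc*:file=..., since a freshly provisioned PVC mounted at
# /opt/puppetlabs/server/data/puppetdb/ starts out empty.

# PUPPETDB_LOG_DIR is an illustrative override, not a real OpenVoxDB variable.
LOG_DIR="${PUPPETDB_LOG_DIR:-/opt/puppetlabs/server/data/puppetdb/logs}"

mkdir -p "$LOG_DIR"

# Match the package's ownership (puppetdb user, UID 999) when possible.
if [ "$(id -u)" = "0" ] && id puppetdb >/dev/null 2>&1; then
    chown puppetdb:puppetdb "$LOG_DIR"
fi
```

Running it before the JVM starts makes a fresh PVC deployment come up without the init-container workaround.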
Relevant log output
Running /container-entrypoint.d/10-wait-for-hosts.sh
wtfc.sh: waiting 30 seconds for pg_isready -h 10.83.176.3 --port '5432'
wtfc.sh: pg_isready -h 10.83.176.3 --port '5432' finished with expected status 0 after 0 seconds
wtfc.sh: waiting 360 seconds for curl --silent --fail --insecure 'https://openvox-puppetserver-puppet:8140/status/v1/simple' | grep -q '^running$'
wtfc.sh: curl --silent --fail --insecure 'https://openvox-puppetserver-puppet:8140/status/v1/simple' | grep -q '^running$' finished with expected status 0 after 0 seconds
Running /container-entrypoint.d/20-configure-ssl.sh
(/ssl.sh) Certificates (openvoxdb.pem) have already been generated - exiting!
(/ssl.sh) Securing permissions on /opt/puppetlabs/server/data/puppetdb/certs
Setting ownership for /opt/puppetlabs/server/data/puppetdb/certs to puppetdb:puppetdb
Running /container-entrypoint.d/30-certificate-allowlist.sh
[0.001s][error][logging] Error opening log file '/opt/puppetlabs/server/data/puppetdb/logs/puppetdb_gc.log': No such file or directory
[0.001s][error][logging] Initialization of output 'file=/opt/puppetlabs/server/data/puppetdb/logs/puppetdb_gc.log' using options '(null)' failed.
Invalid -Xlog option '-Xlog:gc*:file=/opt/puppetlabs/server/data/puppetdb/logs/puppetdb_gc.log', see error log for details.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.