49 changes: 49 additions & 0 deletions .github/workflows/build-and-push-image.yml
@@ -0,0 +1,49 @@
name: Build and Push Docker Image

on:
push:
branches: [main]
tags: ['v*']
workflow_dispatch:

env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}

jobs:
build-and-push:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write

steps:
- uses: actions/checkout@v4

- name: Log in to Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}

- name: Extract metadata (tags, labels)
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=semver,pattern={{version}}
type=sha,prefix=

- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile_simple
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
GITHUB_TOKEN=${{ secrets.GITHUB_TOKEN }}
Comment on lines +48 to +49

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find and examine Dockerfiles and their usage of GITHUB_TOKEN
fd -HI '^Dockerfile' | head -20

Repository: OpenMS/streamlit-template

Length of output: 97


🏁 Script executed:

# Check for GITHUB_TOKEN references in Dockerfiles
rg -n 'ARG\s+GITHUB_TOKEN|ENV\s+GITHUB_TOKEN|--mount=type=secret|GITHUB_TOKEN' --type dockerfile

Repository: OpenMS/streamlit-template

Length of output: 103


🏁 Script executed:

# Check the workflow file for build-args and secrets usage
rg -n 'build-args:|secrets:' .github/workflows/build-and-push-image.yml -A 3 -B 1

Repository: OpenMS/streamlit-template

Length of output: 205


🏁 Script executed:

# Get context around lines 48-49 in the workflow file
cat -n .github/workflows/build-and-push-image.yml | sed -n '40,60p'

Repository: OpenMS/streamlit-template

Length of output: 487


🏁 Script executed:

# Check Dockerfile_simple
cat -n Dockerfile_simple

Repository: OpenMS/streamlit-template

Length of output: 8542


🏁 Script executed:

# Check regular Dockerfile
cat -n Dockerfile

Repository: OpenMS/streamlit-template

Length of output: 12981


🏁 Script executed:

# Search for GITHUB_TOKEN references in all Dockerfiles
rg 'GITHUB_TOKEN' -n

Repository: OpenMS/streamlit-template

Length of output: 901


Use Docker BuildKit secret mounts instead of build-args for GITHUB_TOKEN.

Lines 48-49 pass the credential through build-args, which embeds it in image layers and cache. Both Dockerfile_simple (line 15) and Dockerfile (line 15) convert the argument to an environment variable that persists in the final image.

Since the token is only needed during build time, use --mount=type=secret with BuildKit to prevent the credential from being written to any image layers.

🔐 Proposed workflow change
       - name: Build and push Docker image
         uses: docker/build-push-action@v5
         with:
           context: .
           file: Dockerfile_simple
           push: true
           tags: ${{ steps.meta.outputs.tags }}
           labels: ${{ steps.meta.outputs.labels }}
-          build-args: |
-            GITHUB_TOKEN=${{ secrets.GITHUB_TOKEN }}
+          secrets: |
+            github_token=${{ secrets.GITHUB_TOKEN }}

Update the Dockerfile to use the secret mount instead of ARG/ENV:

- ARG GITHUB_TOKEN
- ENV GH_TOKEN=${GITHUB_TOKEN}
+ RUN --mount=type=secret,id=github_token \
+   GH_TOKEN=$(cat /run/secrets/github_token) && \
+   if [ -n "$GH_TOKEN" ]; then \
+     gh release download -R ${GITHUB_USER}/${GITHUB_REPO} -p "OpenMS-App.zip" -D /app; \
+   fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/build-and-push-image.yml around lines 48 - 49, Replace the
insecure use of build-args GITHUB_TOKEN in the GitHub Actions workflow and
Dockerfiles: stop passing GITHUB_TOKEN via build-args (symbol: build-args:
GITHUB_TOKEN) and instead configure BuildKit secret mounts in the workflow (use
--secret id=github_token) and update Dockerfile/Dockerfile_simple to remove ARG
GITHUB_TOKEN and ENV GH_TOKEN and use a RUN step that reads
/run/secrets/github_token (use --mount=type=secret,id=github_token) to populate
GH_TOKEN at build time and conditionally run gh release download only when the
secret is present; ensure the workflow supplies the repo secret as the BuildKit
secret rather than a build-arg.

105 changes: 105 additions & 0 deletions .github/workflows/k8s-manifests-ci.yml
@@ -0,0 +1,105 @@
name: K8s Manifests CI

on:
push:
paths:
- 'k8s/**'
pull_request:
paths:
- 'k8s/**'

jobs:
validate-manifests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4

- name: Install kubeconform
run: |
curl -sSL https://github.com/yannh/kubeconform/releases/latest/download/kubeconform-linux-amd64.tar.gz | tar xz
sudo mv kubeconform /usr/local/bin/

- name: Validate K8s manifests (base)
run: |
kubeconform -summary -strict -kubernetes-version 1.28.0 -ignore-filename-pattern 'kustomization.yaml' k8s/base/*.yaml

- name: Install kubectl
uses: azure/setup-kubectl@v3

- name: Kustomize build (template-app overlay)
run: |
kubectl kustomize k8s/overlays/template-app/ > /dev/null
echo "Kustomize build succeeded for template-app"

- name: Validate kustomized output
run: |
kubectl kustomize k8s/overlays/template-app/ | kubeconform -summary -strict -kubernetes-version 1.28.0

integration-test:
runs-on: ubuntu-latest
needs: validate-manifests
strategy:
fail-fast: false
matrix:
dockerfile:
- Dockerfile_simple
- Dockerfile
steps:
- uses: actions/checkout@v4

- name: Check if Dockerfile exists
id: check
run: |
if [ -f "${{ matrix.dockerfile }}" ]; then
echo "exists=true" >> "$GITHUB_OUTPUT"
echo "Found ${{ matrix.dockerfile }}, will run integration test"
else
echo "exists=false" >> "$GITHUB_OUTPUT"
echo "Skipping: ${{ matrix.dockerfile }} not found"
fi

- name: Build Docker image from current code
if: steps.check.outputs.exists == 'true'
run: |
docker build -t openms-streamlit:test -f ${{ matrix.dockerfile }} .

- name: Create kind cluster
if: steps.check.outputs.exists == 'true'
uses: helm/kind-action@v1
with:
cluster_name: test-cluster

- name: Load image into kind cluster
if: steps.check.outputs.exists == 'true'
run: |
kind load docker-image openms-streamlit:test --name test-cluster

- name: Install nginx ingress controller
if: steps.check.outputs.exists == 'true'
run: |
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s

- name: Deploy with Kustomize
if: steps.check.outputs.exists == 'true'
run: |
kubectl kustomize k8s/overlays/template-app/ | \
sed 's|imagePullPolicy: IfNotPresent|imagePullPolicy: Never|g' | \
kubectl apply -f -

- name: Wait for Redis to be ready
if: steps.check.outputs.exists == 'true'
run: |
kubectl wait --for=condition=ready pod -l app=template-app,component=redis --timeout=60s

- name: Verify Redis Service is reachable
if: steps.check.outputs.exists == 'true'
run: |
kubectl run redis-test --image=redis:7-alpine --rm -i --restart=Never -- redis-cli -h template-app-redis ping

- name: Verify all deployments are available
if: steps.check.outputs.exists == 'true'
run: |
kubectl wait --for=condition=available deployment -l app=template-app --timeout=120s || true

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, locate the workflow file and check its structure
fd "k8s-manifests-ci.yml" .github/workflows/

Repository: OpenMS/streamlit-template

Length of output: 107


🏁 Script executed:

# Check the specific line and surrounding context
sed -n '95,110p' .github/workflows/k8s-manifests-ci.yml | cat -n

Repository: OpenMS/streamlit-template

Length of output: 694


🏁 Script executed:

# Look for all instances of kubectl wait and || true patterns in workflows
rg -n "kubectl wait" .github/workflows/

Repository: OpenMS/streamlit-template

Length of output: 536


Remove || true to prevent masking deployment failures in tests.

Line 103 uses || true, which swallows kubectl wait failures and allows the integration test to pass even when deployments never become available. This contradicts other similar verification steps in the workflow (lines 81, 93) that properly fail when conditions aren't met. The step is named "Verify all deployments are available," but the current implementation only reports status without enforcing it.

Proposed fix
-          kubectl wait --for=condition=available deployment -l app=template-app --timeout=120s || true
+          kubectl wait --for=condition=available deployment -l app=template-app --timeout=120s
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/k8s-manifests-ci.yml at line 103, The CI step "Verify all
deployments are available" currently appends "|| true" to the kubectl wait
command ("kubectl wait --for=condition=available deployment -l app=template-app
--timeout=120s || true"), which masks failures; remove the "|| true" so the step
fails on non-zero exit and the workflow correctly fails when deployments do not
become available.
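To make the failure-masking concrete, here is a minimal shell sketch, using `false` as a stand-in for a `kubectl wait` that times out with a non-zero exit status:

```shell
# `false` stands in for a failing `kubectl wait` (non-zero exit status).
false && plain=0 || plain=$?              # raw exit status is preserved: 1

(false || true) && masked=0 || masked=$?  # `|| true` rewrites it to 0,
                                          # so CI marks the step as passed

echo "plain=$plain masked=$masked"        # prints: plain=1 masked=0
```

Because the step's exit status becomes 0, the workflow reports green even when no deployment ever became available, which is exactly the behavior the fix above removes.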

kubectl get pods -l app=template-app
kubectl get services -l app=template-app
2 changes: 0 additions & 2 deletions .streamlit/config.toml
@@ -8,8 +8,6 @@ developmentMode = false
address = "0.0.0.0"
maxUploadSize = 200 #MB
port = 8501 # should be same as configured in deployment repo
enableCORS = false
enableXsrfProtection = false


[theme]
2 changes: 1 addition & 1 deletion clean-up-workspaces.py
@@ -6,7 +6,7 @@
from datetime import datetime

# Define the workspaces directory
workspaces_directory = Path("/workspaces-streamlit-template")
workspaces_directory = Path(os.environ.get("WORKSPACES_DIR", "/workspaces-streamlit-template"))

⚠️ Potential issue | 🟠 Major

Validate WORKSPACES_DIR before using it for recursive deletion.

If WORKSPACES_DIR is empty/invalid, this can target unintended directories and delete data.

🛡️ Proposed safety guard
-workspaces_directory = Path(os.environ.get("WORKSPACES_DIR", "/workspaces-streamlit-template"))
+raw_workspaces_dir = os.environ.get("WORKSPACES_DIR", "/workspaces-streamlit-template").strip()
+if not raw_workspaces_dir:
+    raise ValueError("WORKSPACES_DIR cannot be empty")
+workspaces_directory = Path(raw_workspaces_dir).resolve()
+if not workspaces_directory.exists() or not workspaces_directory.is_dir():
+    raise FileNotFoundError(f"WORKSPACES_DIR is invalid: {workspaces_directory}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@clean-up-workspaces.py` at line 9, The code sets workspaces_directory from
WORKSPACES_DIR without validation; add checks before any recursive deletion:
ensure the env var is non-empty, resolve the Path
(workspaces_directory.resolve()), verify it exists and is a directory, and
confirm it is within an allowed base/whitelist (e.g., startswith or
is_relative_to a known safe base like "/workspaces-streamlit-template") and not
a top-level/root/home path; if any check fails, log an error and abort the
delete operation that uses workspaces_directory.
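A minimal Python sketch of the guard described above; the function name and the root-directory check are illustrative, not part of the existing script:

```python
import os
from pathlib import Path


def resolve_workspaces_dir(default: str = "/workspaces-streamlit-template") -> Path:
    """Validate WORKSPACES_DIR before any recursive deletion runs against it."""
    raw = os.environ.get("WORKSPACES_DIR", default).strip()
    if not raw:
        raise ValueError("WORKSPACES_DIR cannot be empty")
    resolved = Path(raw).resolve()
    # Never allow a filesystem root as the cleanup target.
    if resolved == Path(resolved.anchor):
        raise ValueError(f"WORKSPACES_DIR must not be a filesystem root: {resolved}")
    if not resolved.is_dir():
        raise FileNotFoundError(f"WORKSPACES_DIR is not a directory: {resolved}")
    return resolved
```

With this in place, an empty variable, a root path, or a non-existent directory aborts the run instead of silently deleting from an unintended location.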


# Get the current time in seconds
current_time = time.time()
45 changes: 45 additions & 0 deletions k8s/base/cleanup-cronjob.yaml
@@ -0,0 +1,45 @@
apiVersion: batch/v1
kind: CronJob
metadata:
name: workspace-cleanup
labels:
component: cleanup
spec:
schedule: "0 3 * * *"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 3
jobTemplate:
spec:
template:
metadata:
labels:
component: cleanup
spec:
restartPolicy: OnFailure
containers:
- name: cleanup
image: openms-streamlit
imagePullPolicy: IfNotPresent
command: ["/bin/bash", "-c"]
args:
- |
source /root/miniforge3/bin/activate streamlit-env
exec python clean-up-workspaces.py
env:
- name: WORKSPACES_DIR
value: "/workspaces-streamlit-template"
volumeMounts:
- name: workspaces
mountPath: /workspaces-streamlit-template
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "500m"
volumes:
- name: workspaces
persistentVolumeClaim:
claimName: workspaces-pvc
Comment on lines +18 to +45

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check current security context in cleanup-cronjob.yaml
echo "=== Current state in cleanup-cronjob.yaml ==="
rg -n 'securityContext|allowPrivilegeEscalation|readOnlyRootFilesystem|runAsNonRoot|seccompProfile' k8s/base/cleanup-cronjob.yaml -C2

# Find and examine Dockerfile
echo ""
echo "=== Searching for Dockerfile ==="
fd -HI '^Dockerfile' | head -20

# Check Dockerfile content for USER and relevant directives
echo ""
echo "=== Dockerfile USER/WORKDIR settings ==="
fd -HI '^Dockerfile' | xargs -r head -50

Repository: OpenMS/streamlit-template

Length of output: 5906


Harden pod/container security context for the cleanup job.

Lines 18-45 currently run with default privileges. The Dockerfile defaults to the root user and installs miniforge under /root/miniforge3, leaving avoidable privilege-escalation and runtime-hardening gaps.

🛡️ Proposed hardening patch
         spec:
+          securityContext:
+            seccompProfile:
+              type: RuntimeDefault
           restartPolicy: OnFailure
           containers:
             - name: cleanup
               image: openms-streamlit
               imagePullPolicy: IfNotPresent
+              securityContext:
+                allowPrivilegeEscalation: false
+                capabilities:
+                  drop: ["ALL"]
+                readOnlyRootFilesystem: true
               command: ["/bin/bash", "-c"]

To migrate to runAsNonRoot, the Dockerfile must switch away from the root user and relocate miniforge out of /root/miniforge3.

🧰 Tools
🪛 Checkov (3.2.334)

[medium] 1-45: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[medium] 1-45: Minimize the admission of root containers

(CKV_K8S_23)

🪛 Trivy (0.69.1)

[error] 21-41: Root file system is not read-only

Container 'cleanup' of CronJob 'workspace-cleanup' should set 'securityContext.readOnlyRootFilesystem' to true

Rule: KSV-0014

Learn more

(IaC/Kubernetes)


[error] 21-41: Default security context configured

container workspace-cleanup in default namespace is using the default security context

Rule: KSV-0118

Learn more

(IaC/Kubernetes)


[error] 18-45: Default security context configured

cronjob workspace-cleanup in default namespace is using the default security context, which allows root privileges

Rule: KSV-0118

Learn more

(IaC/Kubernetes)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@k8s/base/cleanup-cronjob.yaml` around lines 18 - 45, The cleanup job runs as
root and sources miniforge from /root, so add pod- and container-level
securityContext to enforce non-root execution: set spec.securityContext with
runAsNonRoot: true and fsGroup (e.g., 1000) and add container.securityContext
with runAsUser set to a non-root uid, readOnlyRootFilesystem: true,
allowPrivilegeEscalation: false, and capabilities.drop: ["ALL"] on the container
named "cleanup" (image openms-streamlit) so the job cannot escalate privileges;
also update the image/Dockerfile to create and switch to a non-root user and
relocate Miniforge out of /root (adjust the container command/args to source the
new path) so runAsNonRoot succeeds. Ensure the WORKSPACES_DIR and volumeMount
remain writable by the chosen fsGroup/runAsUser.

39 changes: 39 additions & 0 deletions k8s/base/configmap.yaml
@@ -0,0 +1,39 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: streamlit-config
data:
settings.json: |
{
"app-name": "OpenMS WebApp Template",
"online_deployment": true,
"enable_workspaces": true,
"workspaces_dir": "..",

⚠️ Potential issue | 🟠 Major

workspaces_dir should match the mounted PVC path in k8s.

Using ".." is risky in-container and can resolve outside the intended workspace mount.

🔧 Proposed fix
-      "workspaces_dir": "..",
+      "workspaces_dir": "/workspaces-streamlit-template",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@k8s/base/configmap.yaml` at line 11, The ConfigMap key "workspaces_dir" is
set to a relative path ("..") which can escape the container workspace; update
the "workspaces_dir" value in k8s/base/configmap.yaml to the exact absolute path
used as the PVC mount in the Pod/Deployment (i.e., the same mountPath configured
on the container), ensuring the key matches the mounted PVC path so the app uses
the intended in-container workspace.
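To see why a relative `workspaces_dir` is fragile, a short sketch of how `".."` resolves against the process working directory rather than the PVC mount; the `/app` working directory is an illustrative assumption, not taken from the image:

```python
import posixpath

# Assume the app's working directory inside the container is /app
# (illustrative -- the actual WORKDIR depends on the image).
cwd = "/app"

relative_setting = ".."                              # value from the ConfigMap
absolute_setting = "/workspaces-streamlit-template"  # the PVC mountPath

resolved_relative = posixpath.normpath(posixpath.join(cwd, relative_setting))
resolved_absolute = posixpath.normpath(posixpath.join(cwd, absolute_setting))

print(resolved_relative)   # "/" -- escapes the intended mount entirely
print(resolved_absolute)   # "/workspaces-streamlit-template"
```

The relative form lands wherever the working directory happens to be, while the absolute form always points at the mounted volume, which is why the fix pins it to the mountPath.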

"queue_settings": {
"default_timeout": 7200,
"result_ttl": 86400
},
"demo_workspaces": {
"enabled": true,
"source_dirs": ["example-data/workspaces"]
},
"max_threads": {
"local": 4,
"online": 2
},
"analytics": {
"matomo": {
"enabled": true,
"url": "https://cdn.matomo.cloud/openms.matomo.cloud",
"tag": "yDGK8bfY"
},
"google-analytics": {
"enabled": false,
"tag": ""
},
"piwik-pro": {
"enabled": false,
"tag": ""
}
}
}
32 changes: 32 additions & 0 deletions k8s/base/ingress.yaml
@@ -0,0 +1,32 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: streamlit
annotations:
# WebSocket support (Streamlit requires WebSockets)
nginx.ingress.kubernetes.io/proxy-read-timeout: "86400"
nginx.ingress.kubernetes.io/proxy-send-timeout: "86400"
nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
# Session affinity (user stays on same pod)
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/affinity-mode: "persistent"
nginx.ingress.kubernetes.io/session-cookie-name: "stroute"
nginx.ingress.kubernetes.io/session-cookie-path: "/"
nginx.ingress.kubernetes.io/session-cookie-samesite: "Lax"
# File upload (no limit)
nginx.ingress.kubernetes.io/proxy-body-size: "0"
Comment on lines +16 to +17

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's read the ingress.yaml file to understand context
cat -n k8s/base/ingress.yaml

Repository: OpenMS/streamlit-template

Length of output: 1419


🏁 Script executed:

# Check all proxy-body-size settings across the codebase
rg -n --type=yaml 'proxy-body-size' k8s

Repository: OpenMS/streamlit-template

Length of output: 146


🏁 Script executed:

# Look for any documentation or comments about upload limits
rg -r 'upload\|body.*size\|file.*size' k8s --type=yaml -i -B 2 -A 2

Repository: OpenMS/streamlit-template

Length of output: 627


🌐 Web query:

nginx proxy-body-size 0 unlimited meaning security implications

💡 Result:

Setting nginx.ingress.kubernetes.io/proxy-body-size: "0" (Ingress-NGINX) effectively removes the request-body size limit by translating to NGINX’s client_max_body_size 0, where 0 means “disable checking of client request body size” (i.e., “unlimited” from NGINX’s perspective). [1], [2]

Security / operational implications of “unlimited”

  • DoS risk via large uploads: Attackers (or buggy clients) can send extremely large request bodies, tying up worker connections and upstream capacity, and increasing load. [1]
  • Disk exhaustion risk (buffering/temp files): NGINX may buffer large bodies (often to disk, depending on other settings), so “unlimited” can fill volumes used for client/proxy temp storage and cause broader outages.
  • Downstream limits still apply: Even if NGINX accepts the body, your app, upstream proxy, load balancer, or other ingress/controller limits may still reject or fail on large payloads; “0” only removes this specific NGINX size check. [1], [2]

Safer practice

Prefer a specific maximum (e.g., 50m, 200m) and only raise it on endpoints that truly need it (uploads), rather than setting 0 globally. [2]

Sources:
[1] NGINX directive client_max_body_size (“0 disables checking”) — nginx.org docs. [1]
[2] Ingress-NGINX annotation proxy-body-size (custom max body size / maps to NGINX limit behavior) — ingress-nginx docs. [2]


🏁 Script executed:

# Check for documentation about file upload requirements or deployment notes
find . -name "README*" -o -name "*.md" | head -20 | xargs grep -l "upload\|ingress\|body" 2>/dev/null || echo "No relevant docs found"

Repository: OpenMS/streamlit-template

Length of output: 218


🏁 Script executed:

# Search for any comments or documentation in k8s directory explaining the unlimited requirement
rg -r "upload\|body.*size" . --type=md -A 3 -B 3

Repository: OpenMS/streamlit-template

Length of output: 50383


🏁 Script executed:

# Read the installation/user guide documentation to understand file upload requirements
cat docs/installation.md 2>/dev/null | head -200

Repository: OpenMS/streamlit-template

Length of output: 3677


🏁 Script executed:

# Check for any application-level upload limits
rg -r "file.*upload\|upload.*size\|st.file_uploader" . --type=py -A 2 -B 2

Repository: OpenMS/streamlit-template

Length of output: 50384


🏁 Script executed:

# Search for actual st.file_uploader usage in Python files (simpler search)
find . -name "*.py" -type f -exec grep -l "file_uploader" {} \;

Repository: OpenMS/streamlit-template

Length of output: 123


🏁 Script executed:

# Check the app.py file directly to understand what the application does
head -100 app.py

Repository: OpenMS/streamlit-template

Length of output: 2225


🏁 Script executed:

# Check the file_upload.py to see what file sizes might be expected
cat content/file_upload.py

Repository: OpenMS/streamlit-template

Length of output: 4580


Set a bounded request body size limit instead of unlimited.

proxy-body-size: "0" removes all body size checking, increasing DoS and resource-exhaustion risk. While file uploads are required for this mass spectrometry application, a bounded limit provides better security. Set a reasonable maximum (e.g., 100m or higher based on typical file sizes).

🔧 Proposed fix
-    nginx.ingress.kubernetes.io/proxy-body-size: "0"
+    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@k8s/base/ingress.yaml` around lines 16 - 17, Replace the unlimited body-size
annotation by setting a bounded value: change the
nginx.ingress.kubernetes.io/proxy-body-size annotation (currently "0") to a
reasonable maximum such as "100m" (or another value appropriate for expected
file sizes) and update the comment that references file upload limits; ensure
the ingress uses the new annotated value so NGINX enforces a finite upload limit
to mitigate DoS/resource exhaustion risks.

# Disable buffering for streaming
nginx.ingress.kubernetes.io/proxy-buffering: "off"
spec:
ingressClassName: nginx
rules:
- host: streamlit.openms.example.de
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: streamlit
port:
number: 8501
Comment on lines +20 to +32

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, examine the target file to confirm the code snippet
cat k8s/base/ingress.yaml

Repository: OpenMS/streamlit-template

Length of output: 1195


🏁 Script executed:

# Verify TLS configuration patterns in ingress manifests
rg -n --type=yaml 'kind:\s*Ingress|^\s*tls:|secretName:' k8s/

Repository: OpenMS/streamlit-template

Length of output: 174


🏁 Script executed:

# Check if there are any TLS/secret configurations related to streamlit
rg -n --type=yaml 'streamlit|tls|TLS|secret' k8s/ | head -50

Repository: OpenMS/streamlit-template

Length of output: 2343


Add TLS configuration to the Ingress.

Public host routing (streamlit.openms.example.de) should include TLS at the ingress layer.

🔧 Proposed fix
 spec:
   ingressClassName: nginx
+  tls:
+    - hosts:
+        - streamlit.openms.example.de
+      secretName: streamlit-tls
   rules:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@k8s/base/ingress.yaml` around lines 20 - 32, Ingress currently exposes host
streamlit.openms.example.de without TLS; update the Ingress spec
(ingressClassName: nginx, host: streamlit.openms.example.de, backend service:
name streamlit, port number 8501) to add a tls section listing the host and a
secretName (e.g., streamlit-tls) and ensure that the corresponding TLS secret
exists in the cluster (or is created via cert-manager); optionally add any nginx
ingress annotations required for TLS/redirects, but the minimal fix is adding
spec.tls with hosts: [streamlit.openms.example.de] and secretName:
<your-tls-secret>.

13 changes: 13 additions & 0 deletions k8s/base/kustomization.yaml
@@ -0,0 +1,13 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- namespace.yaml
- configmap.yaml
- redis.yaml
- workspace-pvc.yaml
- streamlit-deployment.yaml
- streamlit-service.yaml
- rq-worker-deployment.yaml
- ingress.yaml
- cleanup-cronjob.yaml
6 changes: 6 additions & 0 deletions k8s/base/namespace.yaml
@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
name: openms
labels:
app.kubernetes.io/part-of: openms-streamlit
52 changes: 52 additions & 0 deletions k8s/base/redis.yaml
@@ -0,0 +1,52 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
labels:
component: redis
spec:
replicas: 1
selector:
matchLabels:
component: redis
template:
metadata:
labels:
component: redis
spec:
containers:
- name: redis
image: redis:7-alpine
ports:
- containerPort: 6379
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "256Mi"
cpu: "250m"
readinessProbe:
exec:
command: ["redis-cli", "ping"]
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
exec:
command: ["redis-cli", "ping"]
initialDelaySeconds: 15
periodSeconds: 20
Comment on lines +16 to +38

⚠️ Potential issue | 🟠 Major

❓ Verification inconclusive

🏁 Script executed:

cat k8s/base/redis.yaml

Repository: OpenMS/streamlit-template


Exit code: 0

stdout:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    component: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          ports:
            - containerPort: 6379
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "256Mi"
              cpu: "250m"
          readinessProbe:
            exec:
              command: ["redis-cli", "ping"]
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            exec:
              command: ["redis-cli", "ping"]
            initialDelaySeconds: 15
            periodSeconds: 20
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    component: redis
spec:
  type: ClusterIP
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    component: redis

🌐 Web query:

redis:7-alpine readOnlyRootFilesystem kubernetes security context compatibility

💡 Result:

Yes—redis:7-alpine can run with readOnlyRootFilesystem: true in Kubernetes, as long as you provide explicit writable mounts for the paths Redis needs to write to.

What must be writable

  • /data: the official Redis image declares persistence under VOLUME /data (RDB/AOF, etc.). With a read-only root FS, /data must be backed by a PVC or emptyDir. [1]
  • A temp directory (commonly /tmp): many hardened setups mount an emptyDir at /tmp to avoid “read-only file system” errors when software tries to create temp files. (Not Redis-specific, but commonly required in practice when enabling readOnlyRootFilesystem.) [2]

SecurityContext that’s typically compatible

The Redis official image is designed to drop privileges to a non-root redis user by default, and the Alpine variant uses a dedicated redis user (commonly UID 999). [1], [3]
So a restricted container security context usually works, provided your volume permissions align:

securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault
volumeMounts:
  - name: data
    mountPath: /data
  - name: tmp
    mountPath: /tmp
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: redis-data
  - name: tmp
    emptyDir: {}
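The snippet above references a redis-data PVC by name. A minimal claim might look like the following sketch (size and access mode are placeholders; adjust for your cluster and storage class):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data        # must match claimName in the volume above
spec:
  accessModes:
    - ReadWriteOnce       # a single-replica Redis needs only one writer
  resources:
    requests:
      storage: 1Gi        # placeholder size
```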

Common gotchas

  • If you override runAsUser, ensure it matches the image’s Redis UID (or ensure /data is writable via fsGroup/permissions), otherwise Redis may fail to write to /data. [3]
  • If you configure Redis to write anywhere else (custom dir, unix socket path, log file), that path must be on a writable volume; otherwise it will fail under a read-only root filesystem. [2]
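For the volume-permission gotcha above, one common approach is a pod-level fsGroup so the kubelet makes mounted volumes group-writable. The UID/GID 999 below is an assumption based on the Alpine image's redis user; verify it for your image tag:

```yaml
securityContext:          # pod-level
  runAsUser: 999          # redis user in redis:7-alpine (verify for your tag)
  runAsGroup: 999
  fsGroup: 999            # volumes are remounted group-accessible for GID 999
```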

Sources: Kubernetes security context docs (for readOnlyRootFilesystem) [2]; Redis official image docs (persisted data under /data, privilege dropping) [1]; Redis docker-library image user details (Alpine redis user/UID behavior) [3].


Script executed:

# Check if there are any Redis configuration files or persistence setup in the repo
find . -type f \( -name "redis*" -o -name "*redis*" \) | head -20

Repository: OpenMS/streamlit-template
Exit code: 0

stdout:

./k8s/base/redis.yaml

Harden Redis pod/container security context.

The deployment currently runs with default security settings. Please add pod and container security contexts to enforce non-root execution, drop capabilities, and disable privilege escalation.

Note: if enabling readOnlyRootFilesystem: true, ensure both /data and /tmp are mounted as writable volumes (typically emptyDir). Without a writable /data, persistence fails, and a writable /tmp avoids "read-only file system" errors when temp files are created.

🔒 Corrected hardening patch
 spec:
   replicas: 1
   selector:
     matchLabels:
       component: redis
   template:
     metadata:
       labels:
         component: redis
     spec:
+      securityContext:
+        runAsNonRoot: true
+        seccompProfile:
+          type: RuntimeDefault
       containers:
         - name: redis
           image: redis:7-alpine
           ports:
             - containerPort: 6379
+          securityContext:
+            allowPrivilegeEscalation: false
+            capabilities:
+              drop: ["ALL"]
+            readOnlyRootFilesystem: true
+          volumeMounts:
+            - name: redis-data
+              mountPath: /data
+            - name: tmp
+              mountPath: /tmp
           resources:
             requests:
               memory: "64Mi"
               cpu: "50m"
             limits:
               memory: "256Mi"
               cpu: "250m"
           readinessProbe:
             exec:
               command: ["redis-cli", "ping"]
             initialDelaySeconds: 5
             periodSeconds: 10
           livenessProbe:
             exec:
               command: ["redis-cli", "ping"]
             initialDelaySeconds: 15
             periodSeconds: 20
+      volumes:
+        - name: redis-data
+          emptyDir: {}
+        - name: tmp
+          emptyDir: {}
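Note that the patch above backs /data with an emptyDir, so any Redis data is lost when the pod is rescheduled. If persistence matters, the data volume can instead reference a PVC (the claim name below is a placeholder and must exist in the cluster):

```yaml
volumes:
  - name: redis-data
    persistentVolumeClaim:
      claimName: redis-data   # placeholder; create a matching PVC
  - name: tmp
    emptyDir: {}              # /tmp can stay ephemeral
```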
🧰 Tools
🪛 Trivy (0.69.1)

[error] 18-38: Root file system is not read-only

Container 'redis' of Deployment 'redis' should set 'securityContext.readOnlyRootFilesystem' to true

Rule: KSV-0014


(IaC/Kubernetes)


[error] 18-38: Default security context configured

container redis in default namespace is using the default security context

Rule: KSV-0118


(IaC/Kubernetes)


[error] 16-38: Default security context configured

deployment redis in default namespace is using the default security context, which allows root privileges

Rule: KSV-0118


(IaC/Kubernetes)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@k8s/base/redis.yaml` around lines 16-38, add a pod and container
securityContext to the Redis manifest to enforce non-root execution and drop
privileges: set a pod-level securityContext (e.g., runAsNonRoot: true,
runAsUser: 999 to match the image's redis user) and, in the redis container's
securityContext, set runAsNonRoot: true, runAsUser: 999,
allowPrivilegeEscalation: false, readOnlyRootFilesystem: true, and
capabilities: drop: ["ALL"]. Also add writable emptyDir volumes and mounts for
/data and /tmp (volume names declared in spec.volumes and mounted into the
redis container) so Redis can start with a read-only root filesystem. Keep the
container name "redis" and the readinessProbe/livenessProbe blocks unchanged.

---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    component: redis
spec:
  type: ClusterIP
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    component: redis