diff --git a/.github/instructions/copilot-instructions.md b/.github/instructions/copilot-instructions.md new file mode 100644 index 00000000..b54a1632 --- /dev/null +++ b/.github/instructions/copilot-instructions.md @@ -0,0 +1,23 @@ +# Cloud Harness + +Cloud Harness provides software infrastructure and tools for neuroscience data computing and analysis in a monorepo. + +## General concepts + +### Files content + +- `applications`: Cloud Harness custom server applications go here +- `client`: Cloud Harness generated client API +- `deployment`: deployment related scripts and files +- `deployment-configuration`: deployment customization files +- `infrastructure`: infrastructure utilities +- `libraries`: Cloud Harness shared libraries +- `docs`: developer documentation files +- `tools`: Cloud Harness CLI and other tools +- `test`: Cloud Harness test utilities and test code + +Verify which applications/components are in scope and read the specific prompt instructions before proceeding. + +Check the best practices in every in-scope instruction file and in the docs, and apply them when writing code or performing code reviews. +Use this reference for any questions regarding project structure, development workflow, and best practices. +If you have any doubts about where to find information, ask for clarification before proceeding. \ No newline at end of file
diff --git a/.github/instructions/test-e2e.instructions.md b/.github/instructions/test-e2e.instructions.md new file mode 100644 index 00000000..0a89583d --- /dev/null +++ b/.github/instructions/test-e2e.instructions.md @@ -0,0 +1,23 @@ +--- +applyTo: "test/e2e/*" +--- +# Neuroglass Research E2E Tests + +## End-to-end (E2E) Tests +- **Location**: `test/e2e/*.spec.ts` +- **Framework**: Jest + Puppeteer (see existing tests for patterns) + +### Login Flow +- Tests must handle the 2-step login redirect when `APP_URL` points to the accounts domain. +- Use `USERNAME` and `PASSWORD` environment variables for credentials. +- Follow the existing flow: + - Navigate to `APP_URL` and detect redirect to accounts. + - Enter username, submit, wait for password field, enter password, submit. + - Wait for redirect back to the app and confirm route. + +### Stability Requirements (Mandatory) +- **Waiting**: Always use Puppeteer explicit waits (`waitForSelector`, `waitForFunction`, `waitForNavigation`) with timeouts. +- **Selectors**: Rely on stable, custom selectors (see `test/e2e/selectors.ts`). +- **Do not** depend on UI copy, text content, or fragile DOM structure. +- **Avoid fixed sleeps** unless there is no deterministic signal; prefer state-based waits. +- **Resilience**: When possible, guard against flaky overlays (cookie/announcement modals).
diff --git a/.github/instructions/tools.instructions.md b/.github/instructions/tools.instructions.md new file mode 100644 index 00000000..ec4a3789 --- /dev/null +++ b/.github/instructions/tools.instructions.md @@ -0,0 +1,74 @@ +--- +applyTo: "tools/deployment-cli-tools/*" +--- +# Neuroglass Project Developer Reference + +## Environment Setup + +### Required Environment +- **Conda Environment Name**: `ch` +- **Python Version**: 3.12+ +- **Activation Command**: `conda activate ch` + +### Package Managers +- **Frontend**: Yarn (NEVER use npm) +- **Backend**: pip (within conda environment only) + +### Pre-requisites Checklist +- [ ] Conda environment `ch` is activated +- [ ] Correct directory navigation completed +- [ ] Appropriate package manager selected (yarn/pip) + +## Development Workflow + +### Mandatory Pre-Command Steps +1.
**ALWAYS** activate conda environment first: `conda activate ch` +2. Navigate to the appropriate project directory +3. Use yarn for frontend operations, pip for backend operations + +## Project Structure + +### Key Scripts +- `harness-generate` - Generate code +- `harness-deployment` - Generate deployment files: helm charts, ci/cd files, etc. +- `harness-application` - Generate application code (e.g., Django apps) +- `harness-migrate` - Migration helper tool +- `ch_cli_tools` - Python package for deployment and other tools +- `tests` - Unit test utilities and test code + + +## Code Style and Best Practices + +Take the following best practices into account when writing code for the project and while performing code reviews (a short illustrative sketch follows the list): + +- Keep architecture lean: avoid unnecessary layers and abstractions. +- Use utils for stateless pure functions that don't hit external data sources or the ORM. Utils are horizontal and can be used across the project. +- Use helpers to organize pieces of business logic; keep them stateless when possible. +- Use services for business workflows and cross-model coordination. Services are vertical, operating on a single model or a group of related models. +- Keep model logic close to the model when it represents domain rules or invariants. +- Handle exceptions only at the higher level; let lower layers raise. NEVER catch exceptions in helpers or services unless you are adding context and re-raising. +- Cover critical logic with unit tests, especially in helpers and services. Use mocks to isolate units under test. +- Prefer model classes for helpers and services to ensure data validation and clear interfaces. Use typed dicts for structured data that isn't covered by Schema classes. Use plain dicts only to represent real unstructured data. Avoid returning tuples.
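A minimal, illustrative sketch of how this layering could look in Python is shown below. All names in it (`slugify`, `build_sample_payload`, `SampleService`, the injected `repository`) are hypothetical and are not taken from the Cloud Harness codebase; the sketch only shows the utils → helpers → services direction of reuse and where exceptions are allowed to propagate.

```python
# Hypothetical example: module layout and names are illustrative, not part of Cloud Harness.

# util: stateless pure function, no ORM or external data sources (horizontal reuse)
def slugify(value: str) -> str:
    return value.strip().lower().replace(" ", "-")


# helper: a stateless piece of business logic; it raises instead of catching
def build_sample_payload(name: str, owner_id: int) -> dict:
    if not name:
        raise ValueError("name is required")  # let the caller decide how to handle this
    return {"slug": slugify(name), "owner": owner_id}


# service: vertical workflow coordinating a single model (or a group of related models)
class SampleService:
    def __init__(self, repository):
        self.repository = repository  # e.g. an ORM manager, injected to ease unit testing

    def create_sample(self, name: str, owner_id: int):
        payload = build_sample_payload(name, owner_id)  # exceptions propagate upwards
        return self.repository.create(**payload)
```

In a unit test the `repository` can be replaced with a mock, in line with the testing guidance above.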
+ + +## Important Constraints + +### File Creation Rules +- **NEVER** create new README or documentation files unless explicitly requested +- Follow existing documentation patterns when updates are needed + +### Development Server Rules +- **NEVER** run development servers +- **ALWAYS** assume servers are running +- **MUST** ask confirmation before opening browsers + +### Package Management Rules +- **Frontend**: ONLY use yarn, NEVER npm +- **Backend**: ONLY use pip within conda environment +- **ALWAYS** activate `ch` conda environment before any backend work + +### CloudHarness Considerations +- Dependencies may need special handling in development environment +- Follow established patterns for CloudHarness integration + +--- \ No newline at end of file diff --git a/applications/samples/deploy/values.yaml b/applications/samples/deploy/values.yaml index ed04be7c..607c9944 100644 --- a/applications/samples/deploy/values.yaml +++ b/applications/samples/deploy/values.yaml @@ -1,5 +1,6 @@ harness: subdomain: samples + image_name: sampleapp secured: true sentry: true port: 80 @@ -50,6 +51,7 @@ harness: - workflows - events - accounts + - cloudharness-base - common build: - cloudharness-flask diff --git a/ch-166.patch b/ch-166.patch deleted file mode 100644 index 1603e0a0..00000000 --- a/ch-166.patch +++ /dev/null @@ -1,14 +0,0 @@ -diff --git a/tools/deployment-cli-tools/ch_cli_tools/codefresh.py b/tools/deployment-cli-tools/ch_cli_tools/codefresh.py -index 8bcf2b79..3ea43e31 100644 ---- a/tools/deployment-cli-tools/ch_cli_tools/codefresh.py -+++ b/tools/deployment-cli-tools/ch_cli_tools/codefresh.py -@@ -175,8 +175,7 @@ def create_codefresh_deployment_scripts(root_paths, envs=(), include=(), exclude - - if app_config and app_config.dependencies and app_config.dependencies.git: - for dep in app_config.dependencies.git: -- step_name = f"clone_{basename(dep.url).replace('.', '_')}_{basename(dockerfile_relative_to_root).replace('.', '_')}" -- steps[CD_BUILD_STEP_DEPENDENCIES]['steps'][step_name] = clone_step_spec(dep, dockerfile_relative_to_root) -+ steps[CD_BUILD_STEP_DEPENDENCIES]['steps'][f"clone_{basename(dep.url).replace(".", "_")}_{basename(dockerfile_relative_to_root).replace(".", "_")}"] = clone_step_spec(dep, dockerfile_relative_to_root) - - build = None - if build_step in steps: diff --git a/deployment-configuration/codefresh-template-dev.yaml b/deployment-configuration/codefresh-template-dev.yaml index 511602c5..78460d49 100644 --- a/deployment-configuration/codefresh-template-dev.yaml +++ b/deployment-configuration/codefresh-template-dev.yaml @@ -34,7 +34,8 @@ steps: working_directory: . 
commands: - bash cloud-harness/install.sh - - harness-deployment $PATHS -d ${{DOMAIN}} -r ${{REGISTRY}} -rs '${{REGISTRY_SECRET}}' -n ${{NAMESPACE}} --write-env -e $ENV --cache-url '${{IMAGE_CACHE_URL}}' $PARAMS + - export HELM_META_ARGS="$( [ -n "${{HARNESS_CHART_NAME}}" ] && printf -- "--name %s " "${{HARNESS_CHART_NAME}}"; [ -n "${{HARNESS_CHART_VERSION}}" ] && printf -- "--chart-version %s " "${{HARNESS_CHART_VERSION}}"; [ -n "${{HARNESS_APP_VERSION}}" ] && printf -- "--app-version %s" "${{HARNESS_APP_VERSION}}" )" + - harness-deployment $PATHS -d ${{DOMAIN}} -r ${{REGISTRY}} -rs '${{REGISTRY_SECRET}}' -n ${{NAMESPACE}} --write-env -e $ENV --cache-url '${{IMAGE_CACHE_URL}}' $HELM_META_ARGS $PARAMS - cat deployment/.env >> ${{CF_VOLUME_PATH}}/env_vars_to_export - cat ${{CF_VOLUME_PATH}}/env_vars_to_export prepare_deployment_view: diff --git a/deployment-configuration/codefresh-template-prod.yaml b/deployment-configuration/codefresh-template-prod.yaml index 9d75bfed..5511635a 100644 --- a/deployment-configuration/codefresh-template-prod.yaml +++ b/deployment-configuration/codefresh-template-prod.yaml @@ -30,7 +30,8 @@ steps: working_directory: . commands: - bash cloud-harness/install.sh - - harness-deployment $PATHS -t ${{DEPLOYMENT_TAG}} -d ${{DOMAIN}} -r ${{REGISTRY}} -rs '${{REGISTRY_SECRET}}' -n ${{NAMESPACE}} -e $ENV --no-cd $PARAMS + - export HELM_META_ARGS="$( [ -n "${{HARNESS_CHART_NAME}}" ] && printf -- "--name %s " "${{HARNESS_CHART_NAME}}"; [ -n "${{HARNESS_CHART_VERSION}}" ] && printf -- "--chart-version %s " "${{HARNESS_CHART_VERSION}}"; [ -n "${{HARNESS_APP_VERSION}}" ] && printf -- "--app-version %s" "${{HARNESS_APP_VERSION}}" )" + - harness-deployment $PATHS -t ${{DEPLOYMENT_TAG}} -d ${{DOMAIN}} -r ${{REGISTRY}} -rs '${{REGISTRY_SECRET}}' -n ${{NAMESPACE}} -e $ENV --no-cd $HELM_META_ARGS $PARAMS prepare_deployment_view: commands: - "helm template ./deployment/helm --debug -n ${{NAMESPACE}}" diff --git a/deployment-configuration/codefresh-template-stage-onpremise.yaml b/deployment-configuration/codefresh-template-stage-onpremise.yaml deleted file mode 100644 index 8b365a96..00000000 --- a/deployment-configuration/codefresh-template-stage-onpremise.yaml +++ /dev/null @@ -1,162 +0,0 @@ -version: "1.0" -stages: - - prepare - - build - - deploy - - qa - - publish -steps: - main_clone: - title: Clone main repository - type: git-clone - stage: prepare - repo: "${{CF_REPO_OWNER}}/${{CF_REPO_NAME}}" - revision: "${{CF_BRANCH}}" - git: github - post_main_clone: - title: Post main clone - type: parallel - stage: prepare - steps: - clone_cloud_harness: - title: Cloning cloud-harness repository... - type: git-clone - stage: prepare - repo: "https://github.com/MetaCell/cloud-harness.git" - revision: "${{CLOUDHARNESS_BRANCH}}" - working_directory: . - git: github - prepare_deployment: - title: "Prepare helm chart" - image: python:3.12 - stage: prepare - working_directory: . 
- commands: - - bash cloud-harness/install.sh - - harness-deployment $PATHS -t ${{DEPLOYMENT_TAG}} -d ${{DOMAIN}} -r ${{REGISTRY}} -rs ${{REGISTRY_SECRET}} -n ${{NAMESPACE}} -e $ENV --no-cd $PARAMS - prepare_deployment_view: - commands: - - "helm template ./deployment/helm --debug -n ${{NAMESPACE}}" - environment: - - ACTION=auth - - KUBE_CONTEXT=${{NAMESPACE}} - image: codefresh/cfstep-helm:3.6.2 - stage: prepare - title: "View helm chart" - deployment: - stage: deploy - type: helm - working_directory: ./${{CF_REPO_NAME}} - title: Installing chart - arguments: - helm_version: 3.6.2 - chart_name: deployment/helm - release_name: ${{NAMESPACE}} - kube_context: ${{CLUSTER_NAME}} - namespace: ${{NAMESPACE}} - chart_version: ${{DEPLOYMENT_TAG}} - cmd_ps: --wait --timeout 600s --create-namespace - custom_value_files: - - ./deployment/helm/values.yaml - build_test_images: - title: Build test images - type: parallel - stage: qa - steps: [] - when: - condition: - all: - whenVarExists: 'includes("${{SKIP_TESTS}}", "{{SKIP_TESTS}}") == true' - wait_deployment: - stage: qa - title: Wait deployment to be ready - image: codefresh/kubectl - commands: - - kubectl config use-context ${{CLUSTER_NAME}} - - kubectl config set-context --current --namespace=${{NAMESPACE}} - tests_api: - stage: qa - title: Api tests - working_directory: /home/test - image: "${{REGISTRY}}/cloud-harness/test-api:latest" - fail_fast: false - commands: - - echo $APP_NAME - scale: {} - when: - condition: - all: - whenVarExists: 'includes("${{SKIP_TESTS}}", "{{SKIP_TESTS}}") == true' - tests_e2e: - stage: qa - title: End to end tests - working_directory: /home/test - image: "${{REGISTRY}}/cloud-harness/test-e2e:latest" - fail_fast: false - commands: - - yarn test - scale: {} - when: - condition: - all: - whenVarExists: 'includes("${{SKIP_TESTS}}", "{{SKIP_TESTS}}") == true' - manual_tests: - type: pending-approval - stage: publish - title: Manual tests performed - description: Manual tests have been performed and reported - timeout: - duration: 168 - finalState: approved - approval: - type: pending-approval - stage: publish - title: Approve release - description: Approve release and tagging/publication - timeout: - duration: 168 - finalState: approved - publish_helm_chart: - title: Publish Helm chart to artifact registry - stage: publish - image: google/cloud-sdk:alpine - working_directory: . 
- commands: - - echo $GCP_SA_KEY | base64 -d > /tmp/gcp-key.json - - gcloud auth activate-service-account --key-file=/tmp/gcp-key.json - - gcloud auth configure-docker ${{ARTIFACT_REGISTRY_LOCATION}}-docker.pkg.dev - - helm package ./deployment/helm --version ${{DEPLOYMENT_PUBLISH_TAG}} - - helm push ${{NAMESPACE}}-${{DEPLOYMENT_PUBLISH_TAG}}.tgz oci://${{ARTIFACT_REGISTRY_LOCATION}}-docker.pkg.dev/${{GCP_PROJECT_ID}}/${{HELM_REPO}} - - rm /tmp/gcp-key.json - when: - condition: - all: - whenVarExists: 'includes("${{DEPLOYMENT_PUBLISH_TAG}}", "{{DEPLOYMENT_PUBLISH_TAG}}") == false' - whenVarExists2: 'includes("${{ARTIFACT_REGISTRY_LOCATION}}", "{{ARTIFACT_REGISTRY_LOCATION}}") == false' - whenVarExists3: 'includes("${{GCP_PROJECT_ID}}", "{{GCP_PROJECT_ID}}") == false' - whenVarExists4: 'includes("${{HELM_REPO}}", "{{HELM_REPO}}") == false' - whenVarExists5: 'includes("${{GCP_SA_KEY}}", "{{GCP_SA_KEY}}") == false' - publish: - type: parallel - stage: publish - steps: REPLACE_ME - when: - condition: - all: - whenVarExists: 'includes("${{DEPLOYMENT_PUBLISH_TAG}}", "{{DEPLOYMENT_PUBLISH_TAG}}") == false' - git-tag: - title: Performing git tagging - stage: publish - image: alpine/git:latest - commands: - - git tag ${{DEPLOYMENT_PUBLISH_TAG}} - - ORIGIN=$(git remote get-url origin) - - PROTOCOL=https:// - - REPLACEMENT=${PROTOCOL}${{REPO_TOKEN}}@ - - git remote set-url origin ${ORIGIN/$PROTOCOL/$REPLACEMENT} - - git push origin --tags - when: - condition: - all: - whenVarExists: 'includes("${{DEPLOYMENT_PUBLISH_TAG}}", "{{DEPLOYMENT_PUBLISH_TAG}}") == false' - whenVarExists2: 'includes("${{REPO_TOKEN}}", "{{REPO_TOKEN}}") == false' diff --git a/deployment-configuration/codefresh-template-stage.yaml b/deployment-configuration/codefresh-template-stage.yaml index dc474651..82acaabf 100644 --- a/deployment-configuration/codefresh-template-stage.yaml +++ b/deployment-configuration/codefresh-template-stage.yaml @@ -33,7 +33,8 @@ steps: working_directory: . commands: - bash cloud-harness/install.sh - - harness-deployment $PATHS -t ${{DEPLOYMENT_TAG}} -d ${{DOMAIN}} -r ${{REGISTRY}} -rs ${{REGISTRY_SECRET}} -n ${{NAMESPACE}} -e $ENV --no-cd $PARAMS + - export HELM_META_ARGS="$( [ -n "${{HARNESS_CHART_NAME}}" ] && printf -- "--name %s " "${{HARNESS_CHART_NAME}}"; [ -n "${{HARNESS_CHART_VERSION}}" ] && printf -- "--chart-version %s " "${{HARNESS_CHART_VERSION}}"; [ -n "${{HARNESS_APP_VERSION}}" ] && printf -- "--app-version %s" "${{HARNESS_APP_VERSION}}" )" + - harness-deployment $PATHS -t ${{DEPLOYMENT_TAG}} -d ${{DOMAIN}} -r ${{REGISTRY}} -rs ${{REGISTRY_SECRET}} -n ${{NAMESPACE}} -e $ENV --no-cd $HELM_META_ARGS $PARAMS prepare_deployment_view: commands: - "helm template ./deployment/helm --debug -n ${{NAMESPACE}}" diff --git a/deployment-configuration/codefresh-template-test.yaml b/deployment-configuration/codefresh-template-test.yaml index 30b73245..8048bf47 100644 --- a/deployment-configuration/codefresh-template-test.yaml +++ b/deployment-configuration/codefresh-template-test.yaml @@ -33,7 +33,8 @@ steps: working_directory: . 
commands: - bash cloud-harness/install.sh - - harness-deployment $PATHS -n test-${{NAMESPACE_BASENAME}} -d ${{DOMAIN}} -r ${{REGISTRY}} -rs ${{REGISTRY_SECRET}} -e $ENV --write-env --cache-url '${{IMAGE_CACHE_URL}}' -N $PARAMS + - export HELM_META_ARGS="$( [ -n "${{HARNESS_CHART_NAME}}" ] && printf -- "--name %s " "${{HARNESS_CHART_NAME}}"; [ -n "${{HARNESS_CHART_VERSION}}" ] && printf -- "--chart-version %s " "${{HARNESS_CHART_VERSION}}"; [ -n "${{HARNESS_APP_VERSION}}" ] && printf -- "--app-version %s" "${{HARNESS_APP_VERSION}}" )" + - harness-deployment $PATHS -n test-${{NAMESPACE_BASENAME}} -d ${{DOMAIN}} -r ${{REGISTRY}} -rs ${{REGISTRY_SECRET}} -e $ENV --write-env --cache-url '${{IMAGE_CACHE_URL}}' -N $HELM_META_ARGS $PARAMS - cat deployment/.env >> ${{CF_VOLUME_PATH}}/env_vars_to_export - cat ${{CF_VOLUME_PATH}}/env_vars_to_export prepare_deployment_view: diff --git a/deployment/codefresh-test.yaml b/deployment/codefresh-test.yaml index 6840fe44..50707cf9 100644 --- a/deployment/codefresh-test.yaml +++ b/deployment/codefresh-test.yaml @@ -13,29 +13,20 @@ steps: repo: '${{CF_REPO_OWNER}}/${{CF_REPO_NAME}}' revision: '${{CF_BRANCH}}' git: github - post_main_clone: - title: Post main clone - type: parallel - stage: prepare - steps: - clone_cloud_harness: - title: Cloning cloud-harness repository... - type: git-clone - stage: prepare - repo: https://github.com/MetaCell/cloud-harness.git - revision: '${{CLOUDHARNESS_BRANCH}}' - working_directory: . - git: github prepare_deployment: title: Prepare helm chart image: python:3.12 stage: prepare working_directory: . commands: - - bash cloud-harness/install.sh + - bash ./install.sh + - export HELM_META_ARGS="$( [ -n "${{HARNESS_CHART_NAME}}" ] && printf -- "--name + %s " "${{HARNESS_CHART_NAME}}"; [ -n "${{HARNESS_CHART_VERSION}}" ] && printf + -- "--chart-version %s " "${{HARNESS_CHART_VERSION}}"; [ -n "${{HARNESS_APP_VERSION}}" + ] && printf -- "--app-version %s" "${{HARNESS_APP_VERSION}}" )" - harness-deployment . -n test-${{NAMESPACE_BASENAME}} -d ${{DOMAIN}} -r ${{REGISTRY}} -rs ${{REGISTRY_SECRET}} -e test --write-env --cache-url '${{IMAGE_CACHE_URL}}' - -N -i samples + -N $HELM_META_ARGS -i samples - cat deployment/.env >> ${{CF_VOLUME_PATH}}/env_vars_to_export - cat ${{CF_VOLUME_PATH}}/env_vars_to_export prepare_deployment_view: @@ -74,28 +65,28 @@ steps: == true forceNoCache: includes('${{TEST_E2E_TAG_FORCE_BUILD}}', '{{TEST_E2E_TAG_FORCE_BUILD}}') == false - cloudharness-frontend-build: + accounts: type: build stage: build - dockerfile: infrastructure/base-images/cloudharness-frontend-build/Dockerfile + dockerfile: Dockerfile registry: '${{CODEFRESH_REGISTRY}}' buildkit: true build_arguments: - NOCACHE=${{CF_BUILD_ID}} - image_name: cloud-harness/cloudharness-frontend-build - title: Cloudharness frontend build - working_directory: ./. 
+ image_name: cloud-harness/accounts + title: Accounts + working_directory: ./applications/accounts tags: - - '${{CLOUDHARNESS_FRONTEND_BUILD_TAG}}' + - '${{ACCOUNTS_TAG}}' - '${{DEPLOYMENT_PUBLISH_TAG}}-dev' - '${{CF_BRANCH_TAG_NORMALIZED_LOWER_CASE}}' when: condition: any: - buildDoesNotExist: includes('${{CLOUDHARNESS_FRONTEND_BUILD_TAG_EXISTS}}', - '{{CLOUDHARNESS_FRONTEND_BUILD_TAG_EXISTS}}') == true - forceNoCache: includes('${{CLOUDHARNESS_FRONTEND_BUILD_TAG_FORCE_BUILD}}', - '{{CLOUDHARNESS_FRONTEND_BUILD_TAG_FORCE_BUILD}}') == false + buildDoesNotExist: includes('${{ACCOUNTS_TAG_EXISTS}}', '{{ACCOUNTS_TAG_EXISTS}}') + == true + forceNoCache: includes('${{ACCOUNTS_TAG_FORCE_BUILD}}', '{{ACCOUNTS_TAG_FORCE_BUILD}}') + == false cloudharness-base: type: build stage: build @@ -118,28 +109,28 @@ steps: == true forceNoCache: includes('${{CLOUDHARNESS_BASE_TAG_FORCE_BUILD}}', '{{CLOUDHARNESS_BASE_TAG_FORCE_BUILD}}') == false - accounts: + cloudharness-frontend-build: type: build stage: build - dockerfile: Dockerfile + dockerfile: infrastructure/base-images/cloudharness-frontend-build/Dockerfile registry: '${{CODEFRESH_REGISTRY}}' buildkit: true build_arguments: - NOCACHE=${{CF_BUILD_ID}} - image_name: cloud-harness/accounts - title: Accounts - working_directory: ./applications/accounts + image_name: cloud-harness/cloudharness-frontend-build + title: Cloudharness frontend build + working_directory: ./. tags: - - '${{ACCOUNTS_TAG}}' + - '${{CLOUDHARNESS_FRONTEND_BUILD_TAG}}' - '${{DEPLOYMENT_PUBLISH_TAG}}-dev' - '${{CF_BRANCH_TAG_NORMALIZED_LOWER_CASE}}' when: condition: any: - buildDoesNotExist: includes('${{ACCOUNTS_TAG_EXISTS}}', '{{ACCOUNTS_TAG_EXISTS}}') - == true - forceNoCache: includes('${{ACCOUNTS_TAG_FORCE_BUILD}}', '{{ACCOUNTS_TAG_FORCE_BUILD}}') - == false + buildDoesNotExist: includes('${{CLOUDHARNESS_FRONTEND_BUILD_TAG_EXISTS}}', + '{{CLOUDHARNESS_FRONTEND_BUILD_TAG_EXISTS}}') == true + forceNoCache: includes('${{CLOUDHARNESS_FRONTEND_BUILD_TAG_FORCE_BUILD}}', + '{{CLOUDHARNESS_FRONTEND_BUILD_TAG_FORCE_BUILD}}') == false title: Build parallel step 1 build_application_images_1: type: parallel @@ -168,7 +159,7 @@ steps: == true forceNoCache: includes('${{CLOUDHARNESS_FLASK_TAG_FORCE_BUILD}}', '{{CLOUDHARNESS_FLASK_TAG_FORCE_BUILD}}') == false - jupyterhub: + workflows-notify-queue: type: build stage: build dockerfile: Dockerfile @@ -177,21 +168,21 @@ steps: build_arguments: - NOCACHE=${{CF_BUILD_ID}} - CLOUDHARNESS_BASE=${{REGISTRY}}/cloud-harness/cloudharness-base:${{CLOUDHARNESS_BASE_TAG}} - image_name: cloud-harness/jupyterhub - title: Jupyterhub - working_directory: ./applications/jupyterhub + image_name: cloud-harness/workflows-notify-queue + title: Workflows notify queue + working_directory: ./applications/workflows/tasks/notify-queue tags: - - '${{JUPYTERHUB_TAG}}' + - '${{WORKFLOWS_NOTIFY_QUEUE_TAG}}' - '${{DEPLOYMENT_PUBLISH_TAG}}-dev' - '${{CF_BRANCH_TAG_NORMALIZED_LOWER_CASE}}' when: condition: any: - buildDoesNotExist: includes('${{JUPYTERHUB_TAG_EXISTS}}', '{{JUPYTERHUB_TAG_EXISTS}}') - == true - forceNoCache: includes('${{JUPYTERHUB_TAG_FORCE_BUILD}}', '{{JUPYTERHUB_TAG_FORCE_BUILD}}') - == false - workflows-extract-download: + buildDoesNotExist: includes('${{WORKFLOWS_NOTIFY_QUEUE_TAG_EXISTS}}', + '{{WORKFLOWS_NOTIFY_QUEUE_TAG_EXISTS}}') == true + forceNoCache: includes('${{WORKFLOWS_NOTIFY_QUEUE_TAG_FORCE_BUILD}}', + '{{WORKFLOWS_NOTIFY_QUEUE_TAG_FORCE_BUILD}}') == false + cloudharness-django: type: build stage: build dockerfile: Dockerfile @@ -200,44 +191,45 @@ 
steps: build_arguments: - NOCACHE=${{CF_BUILD_ID}} - CLOUDHARNESS_BASE=${{REGISTRY}}/cloud-harness/cloudharness-base:${{CLOUDHARNESS_BASE_TAG}} - image_name: cloud-harness/workflows-extract-download - title: Workflows extract download - working_directory: ./applications/workflows/tasks/extract-download + image_name: cloud-harness/cloudharness-django + title: Cloudharness django + working_directory: ./infrastructure/common-images/cloudharness-django tags: - - '${{WORKFLOWS_EXTRACT_DOWNLOAD_TAG}}' + - '${{CLOUDHARNESS_DJANGO_TAG}}' - '${{DEPLOYMENT_PUBLISH_TAG}}-dev' - '${{CF_BRANCH_TAG_NORMALIZED_LOWER_CASE}}' when: condition: any: - buildDoesNotExist: includes('${{WORKFLOWS_EXTRACT_DOWNLOAD_TAG_EXISTS}}', - '{{WORKFLOWS_EXTRACT_DOWNLOAD_TAG_EXISTS}}') == true - forceNoCache: includes('${{WORKFLOWS_EXTRACT_DOWNLOAD_TAG_FORCE_BUILD}}', - '{{WORKFLOWS_EXTRACT_DOWNLOAD_TAG_FORCE_BUILD}}') == false - samples-secret: + buildDoesNotExist: includes('${{CLOUDHARNESS_DJANGO_TAG_EXISTS}}', '{{CLOUDHARNESS_DJANGO_TAG_EXISTS}}') + == true + forceNoCache: includes('${{CLOUDHARNESS_DJANGO_TAG_FORCE_BUILD}}', '{{CLOUDHARNESS_DJANGO_TAG_FORCE_BUILD}}') + == false + test-api: type: build stage: build - dockerfile: Dockerfile + dockerfile: test/test-api/Dockerfile registry: '${{CODEFRESH_REGISTRY}}' buildkit: true build_arguments: - NOCACHE=${{CF_BUILD_ID}} - CLOUDHARNESS_BASE=${{REGISTRY}}/cloud-harness/cloudharness-base:${{CLOUDHARNESS_BASE_TAG}} - image_name: cloud-harness/samples-secret - title: Samples secret - working_directory: ./applications/samples/tasks/secret + image_name: cloud-harness/test-api + title: Test api + working_directory: ./. tags: - - '${{SAMPLES_SECRET_TAG}}' + - '${{TEST_API_TAG}}' - '${{DEPLOYMENT_PUBLISH_TAG}}-dev' - '${{CF_BRANCH_TAG_NORMALIZED_LOWER_CASE}}' + - latest when: condition: any: - buildDoesNotExist: includes('${{SAMPLES_SECRET_TAG_EXISTS}}', '{{SAMPLES_SECRET_TAG_EXISTS}}') + buildDoesNotExist: includes('${{TEST_API_TAG_EXISTS}}', '{{TEST_API_TAG_EXISTS}}') == true - forceNoCache: includes('${{SAMPLES_SECRET_TAG_FORCE_BUILD}}', '{{SAMPLES_SECRET_TAG_FORCE_BUILD}}') + forceNoCache: includes('${{TEST_API_TAG_FORCE_BUILD}}', '{{TEST_API_TAG_FORCE_BUILD}}') == false - samples-print-file: + jupyterhub: type: build stage: build dockerfile: Dockerfile @@ -246,19 +238,19 @@ steps: build_arguments: - NOCACHE=${{CF_BUILD_ID}} - CLOUDHARNESS_BASE=${{REGISTRY}}/cloud-harness/cloudharness-base:${{CLOUDHARNESS_BASE_TAG}} - image_name: cloud-harness/samples-print-file - title: Samples print file - working_directory: ./applications/samples/tasks/print-file + image_name: cloud-harness/jupyterhub + title: Jupyterhub + working_directory: ./applications/jupyterhub tags: - - '${{SAMPLES_PRINT_FILE_TAG}}' + - '${{JUPYTERHUB_TAG}}' - '${{DEPLOYMENT_PUBLISH_TAG}}-dev' - '${{CF_BRANCH_TAG_NORMALIZED_LOWER_CASE}}' when: condition: any: - buildDoesNotExist: includes('${{SAMPLES_PRINT_FILE_TAG_EXISTS}}', '{{SAMPLES_PRINT_FILE_TAG_EXISTS}}') + buildDoesNotExist: includes('${{JUPYTERHUB_TAG_EXISTS}}', '{{JUPYTERHUB_TAG_EXISTS}}') == true - forceNoCache: includes('${{SAMPLES_PRINT_FILE_TAG_FORCE_BUILD}}', '{{SAMPLES_PRINT_FILE_TAG_FORCE_BUILD}}') + forceNoCache: includes('${{JUPYTERHUB_TAG_FORCE_BUILD}}', '{{JUPYTERHUB_TAG_FORCE_BUILD}}') == false workflows-send-result-event: type: build @@ -283,31 +275,53 @@ steps: '{{WORKFLOWS_SEND_RESULT_EVENT_TAG_EXISTS}}') == true forceNoCache: includes('${{WORKFLOWS_SEND_RESULT_EVENT_TAG_FORCE_BUILD}}', '{{WORKFLOWS_SEND_RESULT_EVENT_TAG_FORCE_BUILD}}') == 
false - test-api: + workflows-extract-download: type: build stage: build - dockerfile: test/test-api/Dockerfile + dockerfile: Dockerfile registry: '${{CODEFRESH_REGISTRY}}' buildkit: true build_arguments: - NOCACHE=${{CF_BUILD_ID}} - CLOUDHARNESS_BASE=${{REGISTRY}}/cloud-harness/cloudharness-base:${{CLOUDHARNESS_BASE_TAG}} - image_name: cloud-harness/test-api - title: Test api - working_directory: ./. + image_name: cloud-harness/workflows-extract-download + title: Workflows extract download + working_directory: ./applications/workflows/tasks/extract-download tags: - - '${{TEST_API_TAG}}' + - '${{WORKFLOWS_EXTRACT_DOWNLOAD_TAG}}' - '${{DEPLOYMENT_PUBLISH_TAG}}-dev' - '${{CF_BRANCH_TAG_NORMALIZED_LOWER_CASE}}' - - latest when: condition: any: - buildDoesNotExist: includes('${{TEST_API_TAG_EXISTS}}', '{{TEST_API_TAG_EXISTS}}') + buildDoesNotExist: includes('${{WORKFLOWS_EXTRACT_DOWNLOAD_TAG_EXISTS}}', + '{{WORKFLOWS_EXTRACT_DOWNLOAD_TAG_EXISTS}}') == true + forceNoCache: includes('${{WORKFLOWS_EXTRACT_DOWNLOAD_TAG_FORCE_BUILD}}', + '{{WORKFLOWS_EXTRACT_DOWNLOAD_TAG_FORCE_BUILD}}') == false + samples-secret: + type: build + stage: build + dockerfile: Dockerfile + registry: '${{CODEFRESH_REGISTRY}}' + buildkit: true + build_arguments: + - NOCACHE=${{CF_BUILD_ID}} + - CLOUDHARNESS_BASE=${{REGISTRY}}/cloud-harness/cloudharness-base:${{CLOUDHARNESS_BASE_TAG}} + image_name: cloud-harness/samples-secret + title: Samples secret + working_directory: ./applications/samples/tasks/secret + tags: + - '${{SAMPLES_SECRET_TAG}}' + - '${{DEPLOYMENT_PUBLISH_TAG}}-dev' + - '${{CF_BRANCH_TAG_NORMALIZED_LOWER_CASE}}' + when: + condition: + any: + buildDoesNotExist: includes('${{SAMPLES_SECRET_TAG_EXISTS}}', '{{SAMPLES_SECRET_TAG_EXISTS}}') == true - forceNoCache: includes('${{TEST_API_TAG_FORCE_BUILD}}', '{{TEST_API_TAG_FORCE_BUILD}}') + forceNoCache: includes('${{SAMPLES_SECRET_TAG_FORCE_BUILD}}', '{{SAMPLES_SECRET_TAG_FORCE_BUILD}}') == false - workflows-notify-queue: + samples-print-file: type: build stage: build dockerfile: Dockerfile @@ -316,26 +330,26 @@ steps: build_arguments: - NOCACHE=${{CF_BUILD_ID}} - CLOUDHARNESS_BASE=${{REGISTRY}}/cloud-harness/cloudharness-base:${{CLOUDHARNESS_BASE_TAG}} - image_name: cloud-harness/workflows-notify-queue - title: Workflows notify queue - working_directory: ./applications/workflows/tasks/notify-queue + image_name: cloud-harness/samples-print-file + title: Samples print file + working_directory: ./applications/samples/tasks/print-file tags: - - '${{WORKFLOWS_NOTIFY_QUEUE_TAG}}' + - '${{SAMPLES_PRINT_FILE_TAG}}' - '${{DEPLOYMENT_PUBLISH_TAG}}-dev' - '${{CF_BRANCH_TAG_NORMALIZED_LOWER_CASE}}' when: condition: any: - buildDoesNotExist: includes('${{WORKFLOWS_NOTIFY_QUEUE_TAG_EXISTS}}', - '{{WORKFLOWS_NOTIFY_QUEUE_TAG_EXISTS}}') == true - forceNoCache: includes('${{WORKFLOWS_NOTIFY_QUEUE_TAG_FORCE_BUILD}}', - '{{WORKFLOWS_NOTIFY_QUEUE_TAG_FORCE_BUILD}}') == false + buildDoesNotExist: includes('${{SAMPLES_PRINT_FILE_TAG_EXISTS}}', '{{SAMPLES_PRINT_FILE_TAG_EXISTS}}') + == true + forceNoCache: includes('${{SAMPLES_PRINT_FILE_TAG_FORCE_BUILD}}', '{{SAMPLES_PRINT_FILE_TAG_FORCE_BUILD}}') + == false title: Build parallel step 2 build_application_images_2: type: parallel stage: build steps: - common: + workflows: type: build stage: build dockerfile: Dockerfile @@ -344,19 +358,19 @@ steps: build_arguments: - NOCACHE=${{CF_BUILD_ID}} - CLOUDHARNESS_FLASK=${{REGISTRY}}/cloud-harness/cloudharness-flask:${{CLOUDHARNESS_FLASK_TAG}} - image_name: cloud-harness/common - title: Common - 
working_directory: ./applications/common/server + image_name: cloud-harness/workflows + title: Workflows + working_directory: ./applications/workflows/server tags: - - '${{COMMON_TAG}}' + - '${{WORKFLOWS_TAG}}' - '${{DEPLOYMENT_PUBLISH_TAG}}-dev' - '${{CF_BRANCH_TAG_NORMALIZED_LOWER_CASE}}' when: condition: any: - buildDoesNotExist: includes('${{COMMON_TAG_EXISTS}}', '{{COMMON_TAG_EXISTS}}') + buildDoesNotExist: includes('${{WORKFLOWS_TAG_EXISTS}}', '{{WORKFLOWS_TAG_EXISTS}}') == true - forceNoCache: includes('${{COMMON_TAG_FORCE_BUILD}}', '{{COMMON_TAG_FORCE_BUILD}}') + forceNoCache: includes('${{WORKFLOWS_TAG_FORCE_BUILD}}', '{{WORKFLOWS_TAG_FORCE_BUILD}}') == false volumemanager: type: build @@ -381,7 +395,7 @@ steps: == true forceNoCache: includes('${{VOLUMEMANAGER_TAG_FORCE_BUILD}}', '{{VOLUMEMANAGER_TAG_FORCE_BUILD}}') == false - workflows: + samples: type: build stage: build dockerfile: Dockerfile @@ -389,22 +403,23 @@ steps: buildkit: true build_arguments: - NOCACHE=${{CF_BUILD_ID}} + - CLOUDHARNESS_FRONTEND_BUILD=${{REGISTRY}}/cloud-harness/cloudharness-frontend-build:${{CLOUDHARNESS_FRONTEND_BUILD_TAG}} - CLOUDHARNESS_FLASK=${{REGISTRY}}/cloud-harness/cloudharness-flask:${{CLOUDHARNESS_FLASK_TAG}} - image_name: cloud-harness/workflows - title: Workflows - working_directory: ./applications/workflows/server + image_name: cloud-harness/samples + title: Samples + working_directory: ./applications/samples tags: - - '${{WORKFLOWS_TAG}}' + - '${{SAMPLES_TAG}}' - '${{DEPLOYMENT_PUBLISH_TAG}}-dev' - '${{CF_BRANCH_TAG_NORMALIZED_LOWER_CASE}}' when: condition: any: - buildDoesNotExist: includes('${{WORKFLOWS_TAG_EXISTS}}', '{{WORKFLOWS_TAG_EXISTS}}') + buildDoesNotExist: includes('${{SAMPLES_TAG_EXISTS}}', '{{SAMPLES_TAG_EXISTS}}') == true - forceNoCache: includes('${{WORKFLOWS_TAG_FORCE_BUILD}}', '{{WORKFLOWS_TAG_FORCE_BUILD}}') + forceNoCache: includes('${{SAMPLES_TAG_FORCE_BUILD}}', '{{SAMPLES_TAG_FORCE_BUILD}}') == false - samples: + common: type: build stage: build dockerfile: Dockerfile @@ -412,21 +427,20 @@ steps: buildkit: true build_arguments: - NOCACHE=${{CF_BUILD_ID}} - - CLOUDHARNESS_FRONTEND_BUILD=${{REGISTRY}}/cloud-harness/cloudharness-frontend-build:${{CLOUDHARNESS_FRONTEND_BUILD_TAG}} - CLOUDHARNESS_FLASK=${{REGISTRY}}/cloud-harness/cloudharness-flask:${{CLOUDHARNESS_FLASK_TAG}} - image_name: cloud-harness/samples - title: Samples - working_directory: ./applications/samples + image_name: cloud-harness/common + title: Common + working_directory: ./applications/common/server tags: - - '${{SAMPLES_TAG}}' + - '${{COMMON_TAG}}' - '${{DEPLOYMENT_PUBLISH_TAG}}-dev' - '${{CF_BRANCH_TAG_NORMALIZED_LOWER_CASE}}' when: condition: any: - buildDoesNotExist: includes('${{SAMPLES_TAG_EXISTS}}', '{{SAMPLES_TAG_EXISTS}}') + buildDoesNotExist: includes('${{COMMON_TAG_EXISTS}}', '{{COMMON_TAG_EXISTS}}') == true - forceNoCache: includes('${{SAMPLES_TAG_FORCE_BUILD}}', '{{SAMPLES_TAG_FORCE_BUILD}}') + forceNoCache: includes('${{COMMON_TAG_FORCE_BUILD}}', '{{COMMON_TAG_FORCE_BUILD}}') == false title: Build parallel step 3 build_application_images_3: @@ -490,13 +504,13 @@ steps: commands: - kubectl config use-context ${{CLUSTER_NAME}} - kubectl config set-context --current --namespace=test-${{NAMESPACE_BASENAME}} - - kubectl rollout status deployment/accounts - - kubectl rollout status deployment/common - - kubectl rollout status deployment/volumemanager - - kubectl rollout status deployment/argo-gk - kubectl rollout status deployment/workflows + - kubectl rollout status deployment/argo-gk + - 
kubectl rollout status deployment/volumemanager - kubectl rollout status deployment/samples - kubectl rollout status deployment/samples-gk + kubectl rollout status deployment/common + kubectl rollout status deployment/accounts - sleep 60 tests_api: stage: qa @@ -582,7 +596,7 @@ steps: approval: type: pending-approval stage: qa - title: Approve anyway + title: Approve anyway and delete deployment description: The pipeline will fail after ${{WAIT_ON_FAIL}} minutes timeout: timeUnit: minutes
diff --git a/docs/model/ApplicationHarnessConfig.md b/docs/model/ApplicationHarnessConfig.md index eb479530..a589c11a 100644 --- a/docs/model/ApplicationHarnessConfig.md +++ b/docs/model/ApplicationHarnessConfig.md @@ -32,6 +32,7 @@ Name | Type | Description | Notes **dockerfile** | [**DockerfileConfig**](DockerfileConfig.md) | | [optional] **sentry** | **bool** | | [optional] **proxy** | [**ProxyConf**](ProxyConf.md) | | [optional] +**image_name** | **str** | Use this name for the image in place of the default directory name | [optional] ## Example
diff --git a/docs/upgrades/2.x_3.x.md b/docs/upgrades/2.x_3.x.md new file mode 100644 index 00000000..009ee8de --- /dev/null +++ b/docs/upgrades/2.x_3.x.md @@ -0,0 +1,73 @@ + +# Upgrade project from 2.x to 3.x + +## Update Python virtual environment + +With conda: + +```sh +conda create --name ENVNAME python=3.12 +conda activate ENVNAME +source install.sh +``` + +## Migrate cloudharness-base images to Debian + +Before 3.x, there were two different base images that applications could use: + +- `cloudharness-base` -- based on Alpine -- `FROM $CLOUDHARNESS_BASE` in Dockerfiles +- `cloudharness-base-debian` -- based on Debian -- `FROM $CLOUDHARNESS_BASE_DEBIAN` in Dockerfiles + +Now `cloudharness-base` is based on Debian, and `cloudharness-base-debian` can no longer be used as a dependency. + +The new `harness-migrate` command helps port your cloudharness-base-dependent images to Debian. + +After that, some apk-based dependencies will likely still need to be adapted in your Dockerfiles. + + +## Update Keycloak + +Keycloak is updated by restoring a Postgres backup after the upgrade and letting Keycloak apply its migrations, taking care not to mix old and new replicas. + +### Prepare for update + +The update is available on Cloud Harness 3.x+ or the develop branch. + +The following files are usually overridden and might need merging or replacement: + +- `applications/accounts/Dockerfile` +- `applications/accounts/deploy/resources/realm.json` +- `deployment-configuration/helm/templates/auto-gatekeepers.yaml` + +In addition, there might be references in the code to old paths, as the new version removed the `/auth` prefix. + +If keycloak-js is used, it is better to remove it, as the updated version has known issues and doesn't work on local deployments. The replacement for keycloak-js is to use a Gatekeeper and get user information from the `kc-access` cookie, as done for example here; a minimal sketch of this approach follows.
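As an illustration of the Gatekeeper approach mentioned above, the sketch below decodes the JWT payload carried by the `kc-access` cookie. This is a minimal sketch assuming the cookie holds a standard `header.payload.signature` JWT; it does not verify the signature, and the cookie and claim names should be checked against your deployment.

```python
import base64
import json


def decode_kc_access_cookie(cookie_value: str) -> dict:
    """Return the (unverified) JWT payload carried by the kc-access cookie."""
    payload_b64 = cookie_value.split(".")[1]       # JWT format: header.payload.signature
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))


# Hypothetical usage inside a request handler:
# user_info = decode_kc_access_cookie(request.cookies["kc-access"])
# username, email = user_info.get("preferred_username"), user_info.get("email")
```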
### Upgrade procedure +#### Before upgrading +```sh +BACKUP_FILE=full_backup_keycloak_postgres.psql +NAMESPACE="${1:-neuroglass-research}" + +kubectl exec -n $NAMESPACE deployment/keycloak-postgres -- pg_dump -d auth_db -U postgres -F c > kc-$BACKUP_FILE + +kubectl scale deployment accounts --replicas=0 -n $NAMESPACE +``` + +#### After upgrading (new version deployed) +```sh +kubectl scale deployment accounts --replicas=0 -n $NAMESPACE +kubectl exec -i -n $NAMESPACE deployment/keycloak-postgres -- psql -U user -d auth_db -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;" +kubectl exec -i -n $NAMESPACE deployment/keycloak-postgres -- pg_restore --if-exists --no-owner --clean -d auth_db -U user < kc-$BACKUP_FILE +kubectl scale deployment accounts --replicas=1 -n $NAMESPACE +``` + +### Manual steps + +There are a few known issues that have to be fixed manually after the upgrade: + +1. Users cannot access their account page (e.g. https://accounts.ch.com/realms/mnp/account/) unless the workaround from https://github.com/keycloak/keycloak/discussions/12894#discussioncomment-3145960 is applied: add the client scopes as optional to the account-console client. +2. `sub` is not present in the token: add the mapper for `sub` to the profile client scope
diff --git a/libraries/cloudharness-common/cloudharness/workflows/operations.py b/libraries/cloudharness-common/cloudharness/workflows/operations.py index 22a71667..d97ea591 100644 --- a/libraries/cloudharness-common/cloudharness/workflows/operations.py +++ b/libraries/cloudharness-common/cloudharness/workflows/operations.py @@ -157,7 +157,7 @@ def spec(self): volumes_mounts = list(self.volumes) or [] # Tasks volumes must be declared at workflow level volumes_mounts += list({v for task in self.task_list() - for v in task.volume_mounts if task.volume_mounts}) + for v in task.volume_mounts if task.volume_mounts}) spec['volumeClaimTemplates'] = [self.spec_volumeclaim(volume) for volume in volumes_mounts if # without PVC prefix (e.g.
/location) diff --git a/libraries/models/api/openapi.yaml b/libraries/models/api/openapi.yaml index a378751c..eb554610 100644 --- a/libraries/models/api/openapi.yaml +++ b/libraries/models/api/openapi.yaml @@ -1020,4 +1020,7 @@ components: proxy: $ref: '#/components/schemas/ProxyConf' description: '' + image_name: + description: Use this name for the image in place of the default directory name + type: string additionalProperties: true diff --git a/libraries/models/cloudharness_model/models/application_harness_config.py b/libraries/models/cloudharness_model/models/application_harness_config.py index 5fd4473f..f4a74d88 100644 --- a/libraries/models/cloudharness_model/models/application_harness_config.py +++ b/libraries/models/cloudharness_model/models/application_harness_config.py @@ -70,8 +70,9 @@ class ApplicationHarnessConfig(CloudHarnessBaseModel): dockerfile: Optional[DockerfileConfig] = None sentry: Optional[StrictBool] = None proxy: Optional[ProxyConf] = None + image_name: Optional[StrictStr] = Field(default=None, description="Use this name for the image in place of the default directory name") additional_properties: Dict[str, Any] = {} - __properties: ClassVar[List[str]] = ["deployment", "service", "subdomain", "aliases", "domain", "dependencies", "secured", "uri_role_mapping", "secrets", "use_services", "database", "resources", "readinessProbe", "startupProbe", "livenessProbe", "sourceRoot", "name", "jupyterhub", "accounts", "test", "quotas", "env", "envmap", "dockerfile", "sentry", "proxy"] + __properties: ClassVar[List[str]] = ["deployment", "service", "subdomain", "aliases", "domain", "dependencies", "secured", "uri_role_mapping", "secrets", "use_services", "database", "resources", "readinessProbe", "startupProbe", "livenessProbe", "sourceRoot", "name", "jupyterhub", "accounts", "test", "quotas", "env", "envmap", "dockerfile", "sentry", "proxy", "image_name"] @field_validator('source_root') def source_root_validate_regular_expression(cls, value): @@ -214,7 +215,8 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]: "envmap": obj.get("envmap"), "dockerfile": DockerfileConfig.from_dict(obj["dockerfile"]) if obj.get("dockerfile") is not None else None, "sentry": obj.get("sentry"), - "proxy": ProxyConf.from_dict(obj["proxy"]) if obj.get("proxy") is not None else None + "proxy": ProxyConf.from_dict(obj["proxy"]) if obj.get("proxy") is not None else None, + "image_name": obj.get("image_name") }) # store additional fields in additional_properties for _key in obj.keys(): diff --git a/libraries/models/docs/ApplicationHarnessConfig.md b/libraries/models/docs/ApplicationHarnessConfig.md index eb479530..a589c11a 100644 --- a/libraries/models/docs/ApplicationHarnessConfig.md +++ b/libraries/models/docs/ApplicationHarnessConfig.md @@ -32,6 +32,7 @@ Name | Type | Description | Notes **dockerfile** | [**DockerfileConfig**](DockerfileConfig.md) | | [optional] **sentry** | **bool** | | [optional] **proxy** | [**ProxyConf**](ProxyConf.md) | | [optional] +**image_name** | **str** | Use this name for the image in place of the default directory name | [optional] ## Example diff --git a/openapitools.json b/openapitools.json deleted file mode 100644 index b11fef7d..00000000 --- a/openapitools.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "$schema": "node_modules/@openapitools/openapi-generator-cli/config.schema.json", - "spaces": 2, - "generator-cli": { - "version": "7.8.0" - } -} diff --git a/tools/deployment-cli-tools/ch_cli_tools/codefresh.py 
b/tools/deployment-cli-tools/ch_cli_tools/codefresh.py index f544da71..87cf8dc9 100644 --- a/tools/deployment-cli-tools/ch_cli_tools/codefresh.py +++ b/tools/deployment-cli-tools/ch_cli_tools/codefresh.py @@ -482,6 +482,9 @@ def codefresh_app_publish_spec(app_name, build_tag, base_name=None): app_name, base_name), build_tag or '${{DEPLOYMENT_TAG}}'), title=title, ) + step_spec["when"] = existing_publish_when_condition( + app_specific_publish_skip_variable(app_name) + ) step_spec['tags'].append('latest') return step_spec @@ -495,6 +498,10 @@ def app_specific_tag_variable(app_name): return "%s_TAG" % app_name.replace('-', '_').upper().strip() +def app_specific_publish_skip_variable(app_name): + return "%s_PUBLISH_SKIP" % app_name.replace('-', '_').upper().strip() + + def existing_build_when_condition(tag): """ See https://codefresh.io/docs/docs/pipelines/conditional-execution-of-steps/#execute-steps-according-to-the-presence-of-a-variable @@ -513,3 +520,15 @@ def existing_build_when_condition(tag): } return when_condition + + +def existing_publish_when_condition(skip_publish_variable): + return { + "condition": { + "all": { + "skipPublish": "includes('${{%s}}', '{{%s}}') == false" % ( + skip_publish_variable, skip_publish_variable + ), + } + } + } diff --git a/tools/deployment-cli-tools/ch_cli_tools/configurationgenerator.py b/tools/deployment-cli-tools/ch_cli_tools/configurationgenerator.py index ed521650..f1f6eff6 100644 --- a/tools/deployment-cli-tools/ch_cli_tools/configurationgenerator.py +++ b/tools/deployment-cli-tools/ch_cli_tools/configurationgenerator.py @@ -328,17 +328,17 @@ def image_tag(self, image_name, build_context_path=None, dependencies=()): logging.info(f"Ignoring {ignore}") tag = generate_tag_from_content(build_context_path, ignore) logging.info(f"Content hash: {tag}") - + # Get dependencies from build context if not provided dependencies = dependencies or guess_build_dependencies_from_dockerfile(build_context_path) - + # Combine with dependency tags dep_tags = "".join(self.all_images.get(n, '') for n in dependencies) if dep_tags: logging.info(f"Dependency tags: {[(n, self.all_images.get(n, '')) for n in dependencies]}") tag = sha1((tag + dep_tags).encode("utf-8")).hexdigest() logging.info(f"Generated tag (with dependencies): {tag}") - + app_name = image_name.split("/")[-1] # the image name can have a prefix self.all_images[app_name] = tag return self.registry + image_name + (f':{tag}' if tag else '') diff --git a/tools/deployment-cli-tools/ch_cli_tools/helm.py b/tools/deployment-cli-tools/ch_cli_tools/helm.py index 8e4f90ba..6a9c3eca 100644 --- a/tools/deployment-cli-tools/ch_cli_tools/helm.py +++ b/tools/deployment-cli-tools/ch_cli_tools/helm.py @@ -9,7 +9,7 @@ import subprocess from cloudharness_utils.constants import VALUES_MANUAL_PATH, HELM_CHART_PATH -from .utils import get_cluster_ip, get_git_commit_hash, image_name_from_dockerfile_path, \ +from .utils import get_cluster_ip, get_git_commit_hash, get_image_name, image_name_from_dockerfile_path, \ get_template, merge_to_yaml_file, dict_merge, app_name_from_path, \ find_dockerfiles_paths @@ -31,16 +31,46 @@ def deploy(namespace, output_path='./deployment'): def create_helm_chart(root_paths, tag: Union[str, int, None] = 'latest', registry='', local=True, domain=None, exclude=(), secured=True, output_path='./deployment', include=None, registry_secret=None, tls=True, env=None, - namespace=None) -> HarnessMainConfig: + namespace=None, name=None, chart_version=None, app_version=None) -> HarnessMainConfig: if (type(env)) == 
str: env = [env] return CloudHarnessHelm(root_paths, tag=tag, registry=registry, local=local, domain=domain, exclude=exclude, secured=secured, output_path=output_path, include=include, registry_secret=registry_secret, tls=tls, env=env, - namespace=namespace).process_values() + namespace=namespace, name=name, chart_version=chart_version, + app_version=app_version).process_values() class CloudHarnessHelm(ConfigurationGenerator): + def __init__(self, root_paths, tag: Union[str, int, None] = 'latest', registry='', local=True, domain=None, exclude=(), secured=True, + output_path='./deployment', include=None, registry_secret=None, tls=True, env=None, + namespace=None, name=None, chart_version=None, app_version=None): + super().__init__(root_paths, tag=tag, registry=registry, local=local, domain=domain, exclude=exclude, secured=secured, + output_path=output_path, include=include, registry_secret=registry_secret, tls=tls, env=env, + namespace=namespace) + self.chart_name = name + self.chart_version = chart_version + self.app_version = app_version + + def _merge_chart_metadata(self, values_name=None): + metadata = {} + + resolved_name = self.chart_name or self.namespace or values_name + if resolved_name: + metadata['name'] = resolved_name + + if self.namespace: + metadata['metadata'] = {'namespace': self.namespace} + + if self.chart_version: + metadata['version'] = self.chart_version + + if self.app_version: + metadata['appVersion'] = self.app_version + + if metadata: + merge_to_yaml_file(metadata, self.helm_chart_path) + def process_values(self) -> HarnessMainConfig: """ Creates values file for the helm chart @@ -77,9 +107,7 @@ def process_values(self) -> HarnessMainConfig: # Save values file for manual helm chart merged_values = merge_to_yaml_file(helm_values, os.path.join( self.dest_deployment_path, VALUES_MANUAL_PATH)) - if self.namespace: - merge_to_yaml_file({'metadata': {'namespace': self.namespace}, - 'name': helm_values['name']}, self.helm_chart_path) + self._merge_chart_metadata(helm_values['name']) validate_helm_values(merged_values) return HarnessMainConfig.from_dict(merged_values) @@ -223,9 +251,11 @@ def create_app_values_spec(self, app_name, app_path, base_image_name=None, helm_ deployment_values = values.get(KEY_HARNESS, {}).get(KEY_DEPLOYMENT, {}) deployment_image = deployment_values.get('image', None) or values.get('image', None) values['build'] = not bool(deployment_image) # Used by skaffold and ci/cd to determine if the image should be built + + image_name = get_image_name(values.get(KEY_HARNESS, {}).get('image_name', ''), base_image_name) if len(image_paths) > 0 and not deployment_image: - image_name = image_name_from_dockerfile_path(os.path.relpath( - image_paths[0], os.path.dirname(app_path)), base_image_name) + + image_name = image_name or image_name_from_dockerfile_path(os.path.relpath(image_paths[0], os.path.dirname(app_path)), base_image_name) values['image'] = self.image_tag( image_name, build_context_path=app_path, dependencies=build_dependencies) elif KEY_HARNESS in values and not deployment_image and values[ @@ -245,9 +275,9 @@ def create_app_values_spec(self, app_name, app_path, base_image_name=None, helm_ for task_path in task_images_paths: task_name = app_name_from_path(os.path.relpath( task_path, os.path.dirname(app_path))) - img_name = image_name_from_dockerfile_path(task_name, base_image_name) + task_img_name = "-".join([image_name, os.path.basename(task_path)]) if image_name else image_name_from_dockerfile_path(task_path, base_image_name) 
values[KEY_TASK_IMAGES][task_name] = self.image_tag( - img_name, build_context_path=task_path, dependencies=values[KEY_TASK_IMAGES].keys()) + task_img_name, build_context_path=task_path, dependencies=values[KEY_TASK_IMAGES].keys()) return values diff --git a/tools/deployment-cli-tools/ch_cli_tools/utils.py b/tools/deployment-cli-tools/ch_cli_tools/utils.py index 4809998e..9697c61d 100644 --- a/tools/deployment-cli-tools/ch_cli_tools/utils.py +++ b/tools/deployment-cli-tools/ch_cli_tools/utils.py @@ -85,6 +85,8 @@ def get_parent_app_name(app_relative_path): def get_image_name(app_name, base_name=None): + if not app_name: + return None return (base_name + '/' + app_name) if base_name else app_name diff --git a/tools/deployment-cli-tools/harness-deployment b/tools/deployment-cli-tools/harness-deployment index 4a587afa..19d714d0 100644 --- a/tools/deployment-cli-tools/harness-deployment +++ b/tools/deployment-cli-tools/harness-deployment @@ -36,6 +36,12 @@ if __name__ == "__main__": parser.add_argument('-n', '--namespace', dest='namespace', action="store", default=None, help='Specify the namespace of the deployment (default taken from values.yaml)') + parser.add_argument('--name', dest='name', action="store", default=None, + help='Specify the helm chart name (default: namespace when provided, otherwise chart default)') + parser.add_argument('--chart-version', dest='chart_version', action="store", default=None, + help='Specify the helm chart version') + parser.add_argument('--app-version', dest='app_version', action="store", default=None, + help='Specify the helm chart appVersion') parser.add_argument('-r', '--registry', dest='registry', action="store", default='', help='Specify image registry prefix') @@ -97,8 +103,8 @@ if __name__ == "__main__": chart_fn = create_helm_chart if not args.docker_compose else create_docker_compose_configuration - helm_values = chart_fn( - root_paths, + chart_args = dict( + root_paths=root_paths, tag=args.tag, registry=args.registry, domain=args.domain, @@ -110,9 +116,18 @@ if __name__ == "__main__": registry_secret=args.registry_secret, tls=not args.no_tls, env=envs, - namespace=args.namespace + namespace=args.namespace, ) + if not args.docker_compose: + chart_args.update(dict( + name=args.name, + chart_version=args.chart_version, + app_version=args.app_version, + )) + + helm_values = chart_fn(**chart_args) + merged_root_paths = preprocess_build_overrides( root_paths=root_paths, helm_values=helm_values) diff --git a/tools/deployment-cli-tools/tests/resources/applications/myapp/deploy/values-imagename.yaml b/tools/deployment-cli-tools/tests/resources/applications/myapp/deploy/values-imagename.yaml new file mode 100644 index 00000000..25be78aa --- /dev/null +++ b/tools/deployment-cli-tools/tests/resources/applications/myapp/deploy/values-imagename.yaml @@ -0,0 +1,2 @@ +harness: + image_name: custom-myapp \ No newline at end of file diff --git a/tools/deployment-cli-tools/tests/test_codefresh.py b/tools/deployment-cli-tools/tests/test_codefresh.py index a558f7cc..823e52e9 100644 --- a/tools/deployment-cli-tools/tests/test_codefresh.py +++ b/tools/deployment-cli-tools/tests/test_codefresh.py @@ -140,6 +140,9 @@ def test_create_codefresh_configuration(): tstep['commands']) == 2, "Unit test commands are not properly loaded from the unit test configuration file" assert tstep['commands'][0] == "tox", "Unit test commands are not properly loaded from the unit test configuration file" assert len(l1_steps[CD_STEP_CLONE_DEPENDENCIES]['steps']) == 3, "3 clone steps should be 
included as we have 2 dependencies from myapp, plus cloudharness" + + publish_base_step = l1_steps[CD_STEP_PUBLISH]['steps']['publish_cloudharness-base'] + assert publish_base_step['when']['condition']['all']['skipPublish'] == "includes('${{CLOUDHARNESS_BASE_PUBLISH_SKIP}}', '{{CLOUDHARNESS_BASE_PUBLISH_SKIP}}') == false" finally: shutil.rmtree(BUILD_MERGE_DIR) @@ -339,3 +342,44 @@ def test_app_depends_on_app(): envs=[], base_image_name=values['name'], helm_values=values, save=False) + + +def test_prepare_deployment_includes_optional_helm_metadata_args(): + values = create_helm_chart( + [CLOUDHARNESS_ROOT, RESOURCES], + output_path=OUT, + include=['myapp'], + domain="my.local", + namespace='test', + env='dev', + local=False, + tag=1, + registry='reg' + ) + try: + root_paths = preprocess_build_overrides( + root_paths=[CLOUDHARNESS_ROOT, RESOURCES], + helm_values=values, + merge_build_path=BUILD_MERGE_DIR + ) + build_included = [app['harness']['name'] + for app in values['apps'].values() if 'harness' in app] + + cf = create_codefresh_deployment_scripts(root_paths, include=build_included, + envs=['dev'], + base_image_name=values['name'], + helm_values=values, save=False) + + commands = cf['steps']['prepare_deployment']['commands'] + export_cmd = next(c for c in commands if c.startswith('export HELM_META_ARGS=')) + run_cmd = next(c for c in commands if 'harness-deployment' in c) + + assert 'HARNESS_CHART_NAME' in export_cmd + assert 'HARNESS_CHART_VERSION' in export_cmd + assert 'HARNESS_APP_VERSION' in export_cmd + assert '--name %s' in export_cmd + assert '--chart-version %s' in export_cmd + assert '--app-version %s' in export_cmd + assert '$HELM_META_ARGS' in run_cmd + finally: + shutil.rmtree(BUILD_MERGE_DIR) diff --git a/tools/deployment-cli-tools/tests/test_helm.py b/tools/deployment-cli-tools/tests/test_helm.py index 2e4b9391..809beffc 100644 --- a/tools/deployment-cli-tools/tests/test_helm.py +++ b/tools/deployment-cli-tools/tests/test_helm.py @@ -94,6 +94,16 @@ def test_collect_nobuild(tmp_path): assert values[KEY_APPS]['myapp']['build'] == False +def test_collect_helm_values_harness_image_name_override(tmp_path): + out_folder = tmp_path / 'test_collect_helm_values_harness_image_name_override' + + values = create_helm_chart([CLOUDHARNESS_ROOT, RESOURCES], output_path=out_folder, include=['myapp'], + domain="my.local", namespace='test', env='imagename', local=False, tag=1, registry='reg') + + assert values[KEY_APPS]['myapp'][KEY_HARNESS]['deployment']['image'] == 'reg/testprojectname/custom-myapp:1' + assert values[KEY_APPS]['myapp'][KEY_TASK_IMAGES]['myapp-mytask'] == 'reg/testprojectname/custom-myapp-mytask:1' + + def test_collect_helm_values_noreg_noinclude(tmp_path): out_path = tmp_path / 'test_collect_helm_values_noreg_noinclude' values = create_helm_chart([CLOUDHARNESS_ROOT, RESOURCES], output_path=out_path, domain="my.local", @@ -375,6 +385,51 @@ def create(): fname.unlink() +def test_chart_metadata_defaults_to_namespace_name(tmp_path): + out_folder = tmp_path / 'test_chart_metadata_defaults_to_namespace_name' + create_helm_chart( + [CLOUDHARNESS_ROOT, RESOURCES], + output_path=out_folder, + include=['myapp'], + domain="my.local", + namespace='custom-ns', + env='dev', + local=False, + tag=1, + registry='reg' + ) + + chart_path = out_folder / HELM_CHART_PATH / 'Chart.yaml' + chart = yaml.safe_load(open(chart_path, 'r')) + assert chart['name'] == 'custom-ns' + assert chart['metadata']['namespace'] == 'custom-ns' + + +def test_chart_metadata_optional_overrides(tmp_path): + 
out_folder = tmp_path / 'test_chart_metadata_optional_overrides' + create_helm_chart( + [CLOUDHARNESS_ROOT, RESOURCES], + output_path=out_folder, + include=['myapp'], + domain="my.local", + namespace='custom-ns', + name='custom-chart', + chart_version='9.8.7', + app_version='4.5.6', + env='dev', + local=False, + tag=1, + registry='reg' + ) + + chart_path = out_folder / HELM_CHART_PATH / 'Chart.yaml' + chart = yaml.safe_load(open(chart_path, 'r')) + assert chart['name'] == 'custom-chart' + assert chart['version'] == '9.8.7' + assert chart['appVersion'] == '4.5.6' + assert chart['metadata']['namespace'] == 'custom-ns' + + def test_exclude_single_task(tmp_path): out_folder = tmp_path / 'test_exclude_single_task' diff --git a/tools/deployment-cli-tools/tests/test_skaffold.py b/tools/deployment-cli-tools/tests/test_skaffold.py index a52de348..5898e37e 100644 --- a/tools/deployment-cli-tools/tests/test_skaffold.py +++ b/tools/deployment-cli-tools/tests/test_skaffold.py @@ -75,8 +75,10 @@ def test_create_skaffold_configuration(tmp_path): assert len(cloudharness_flask_artifact['requires']) == 1 + expected_samples_image = values[KEY_APPS]['samples'][KEY_HARNESS][KEY_DEPLOYMENT]['image'].split(':')[0] + samples_artifact = next( - a for a in sk['build']['artifacts'] if a['image'] == f'reg/testprojectname/samples' + a for a in sk['build']['artifacts'] if a['image'] == expected_samples_image ) assert os.path.samefile(samples_artifact['context'], join(CLOUDHARNESS_ROOT, 'applications/samples')) assert 'TEST_ARGUMENT' in samples_artifact['docker']['buildArgs'] @@ -96,7 +98,7 @@ def test_create_skaffold_configuration(tmp_path): assert len(sk['test']) == 2, 'Unit tests should be included' samples_test = sk['test'][0] - assert samples_test['image'] == f'reg/testprojectname/samples', 'Unit tests for samples should be included' + assert samples_test['image'] == expected_samples_image, 'Unit tests for samples should be included' assert "samples/test" in samples_test['custom'][0]['command'], "The test command must come from values.yaml test/unit/commands" assert len(sk['test'][1]['custom']) == 2