Integrated workflow and test results Dashboard #90
Open
Manigurr wants to merge 1 commit into video.qclinux.main from
Conversation
Force-pushed from 2d7e31c to 098a4cd (Compare)

Integrated workflow and test results Dashboard and also append the post merge cron job.
Signed-off-by: Mani Deepak Gurram <manigurr@qti.qualcomm.com>

Force-pushed from 098a4cd to f3576f7 (Compare)
Comment on lines +44 to +101

```yaml
run: |
  echo "::group::Uploading files to S3"
  case "${{ inputs.mode }}" in
    multi-upload)
      if [ ! -s "${{ inputs.local_file }}" ]; then
        echo "❌ File list is empty. No files to upload."
        exit 1
      fi

      echo "📄 Contents of file list:"
      cat "${{ inputs.local_file }}"

      first_line=true
      manifest="${{ github.workspace }}/presigned_urls.json"
      echo "{" > "${manifest}"

      while IFS= read -r file; do
        resolved_file=$(readlink -f "$file")
        if [ -f "$resolved_file" ]; then
          filename=$(basename "$resolved_file")
          echo "📤 Uploading $filename..."
          aws s3 cp "$resolved_file" "s3://${{ inputs.s3_bucket }}/${{ env.UPLOAD_LOCATION }}/$filename"
          presigned_url=$(aws s3 presign "s3://${{ inputs.s3_bucket }}/${{ env.UPLOAD_LOCATION }}/$filename" --expires-in 259200)

          if [ "$first_line" = true ]; then
            first_line=false
          else
            echo "," >> "${manifest}"
          fi

          # Key = filename, Value = presigned_url
          echo "  \"${filename}\": \"${presigned_url}\"" >> "${manifest}"
          echo "✅ Pre-signed URL for $filename: $presigned_url"
        else
          echo "⚠️ Skipping: $file is not a regular file or not accessible."
        fi
      done < "${{ inputs.local_file }}"

      echo "}" >> "${manifest}"
      ;;
    single-upload)
      resolved_file=$(readlink -f "${{ inputs.local_file }}")
      filename=$(basename "$resolved_file")
      aws s3 cp "$resolved_file" "s3://${{ inputs.s3_bucket }}/${{ env.UPLOAD_LOCATION }}/$filename"
      presigned_url=$(aws s3 presign "s3://${{ inputs.s3_bucket }}/${{ env.UPLOAD_LOCATION }}/$filename" --expires-in 259200)
      echo "presigned_url=${presigned_url}" >> "$GITHUB_OUTPUT"
      ;;
    download)
      download_dir=$(realpath "${{ inputs.download_location }}")
      aws s3 cp "s3://${{ inputs.s3_bucket }}/${{ inputs.download_file }}" "$download_dir"
      ;;
    *)
      echo "Invalid mode. Use 'single-upload', 'multi-upload', or 'download'."
      exit 1
      ;;
  esac
  echo "::endgroup::"
```
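The manifest loop above builds JSON by hand, using a `first_line` flag to decide where commas go. A minimal, standalone sketch of that pattern, with invented keys and URLs in place of the uploaded files:

```shell
# Sketch of the comma-bookkeeping pattern used for presigned_urls.json.
# Keys and URLs below are made up; the workflow derives them from uploads.
manifest="manifest.json"
first_line=true
echo "{" > "$manifest"
for entry in "Image=https://example.com/Image" "vmlinux=https://example.com/vmlinux"; do
  key="${entry%%=*}"
  value="${entry#*=}"
  if [ "$first_line" = true ]; then
    first_line=false
  else
    echo "," >> "$manifest"   # comma before every entry except the first
  fi
  echo "  \"$key\": \"$value\"" >> "$manifest"
done
echo "}" >> "$manifest"
cat "$manifest"
```

Building the object with `jq` instead would sidestep both the comma tracking and the quoting of any special characters in filenames.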
Comment on lines +58 to +68

```yaml
run: |
  echo "Creating metadata.json from job_render templates"
  cd ../job_render
  docker run -i --rm \
    --user "$(id -u):$(id -g)" \
    --workdir="$PWD" \
    -v "$(dirname "$PWD")":"$(dirname "$PWD")" \
    -e dtb_url="${{ steps.process_urls.outputs.dtb_url }}" \
    ${{ inputs.docker_image }} \
    jq '.artifacts["dtbs/qcom/${{ env.MACHINE }}.dtb"] = env.dtb_url' data/metadata.json > temp.json && mv temp.json data/metadata.json
```
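Each `jq` edit here writes to `temp.json` and then moves it over the original; redirecting the filter's output straight back onto its input would truncate the file before it is read. The same pattern, shown with a plain `tr` filter so the sketch needs no `jq` (file names and contents invented):

```shell
# In-place update via a temp file: never redirect a filter back onto its own input.
workdir=$(mktemp -d)
printf 'alpha\nbeta\n' > "$workdir/data.txt"

# Wrong:  tr 'a-z' 'A-Z' < "$workdir/data.txt" > "$workdir/data.txt"  (truncates first)
# Right:  write to a temp file, then replace the original.
tr 'a-z' 'A-Z' < "$workdir/data.txt" > "$workdir/temp.txt" && mv "$workdir/temp.txt" "$workdir/data.txt"

result=$(cat "$workdir/data.txt")
echo "$result"
rm -rf "$workdir"
```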
Comment on lines +79 to +133

```yaml
run: |
  echo "Populating cloudData.json with kernel, vmlinux, modules, metadata, ramdisk"
  metadata_url="${{ steps.upload_metadata.outputs.presigned_url }}"
  image_url="${{ steps.process_urls.outputs.image_url }}"
  vmlinux_url="${{ steps.process_urls.outputs.vmlinux_url }}"
  modules_url="${{ steps.process_urls.outputs.modules_url }}"
  merged_ramdisk_url="${{ steps.process_urls.outputs.merged_ramdisk_url }}"

  cd ../job_render

  # metadata
  docker run -i --rm \
    --user "$(id -u):$(id -g)" \
    --workdir="$PWD" \
    -v "$(dirname "$PWD")":"$(dirname "$PWD")" \
    -e metadata_url="$metadata_url" \
    ${{ inputs.docker_image }} \
    jq '.artifacts.metadata = env.metadata_url' data/cloudData.json > temp.json && mv temp.json data/cloudData.json

  # kernel Image
  docker run -i --rm \
    --user "$(id -u):$(id -g)" \
    --workdir="$PWD" \
    -v "$(dirname "$PWD")":"$(dirname "$PWD")" \
    -e image_url="$image_url" \
    ${{ inputs.docker_image }} \
    jq '.artifacts.kernel = env.image_url' data/cloudData.json > temp.json && mv temp.json data/cloudData.json

  # vmlinux (set only if present)
  docker run -i --rm \
    --user "$(id -u):$(id -g)" \
    --workdir="$PWD" \
    -v "$(dirname "$PWD")":"$(dirname "$PWD")" \
    -e vmlinux_url="$vmlinux_url" \
    ${{ inputs.docker_image }} \
    sh -c 'if [ -n "$vmlinux_url" ]; then jq ".artifacts.vmlinux = env.vmlinux_url" data/cloudData.json > temp.json && mv temp.json data/cloudData.json; fi'

  # modules
  docker run -i --rm \
    --user "$(id -u):$(id -g)" \
    --workdir="$PWD" \
    -v "$(dirname "$PWD")":"$(dirname "$PWD")" \
    -e modules_url="$modules_url" \
    ${{ inputs.docker_image }} \
    jq '.artifacts.modules = env.modules_url' data/cloudData.json > temp.json && mv temp.json data/cloudData.json

  # ramdisk: use merged only here (fallback added in next step if missing)
  docker run -i --rm \
    --user "$(id -u):$(id -g)" \
    --workdir="$PWD" \
    -v "$(dirname "$PWD")":"$(dirname "$PWD")" \
    -e merged_ramdisk_url="$merged_ramdisk_url" \
    ${{ inputs.docker_image }} \
    sh -c 'if [ -n "$merged_ramdisk_url" ]; then jq ".artifacts.ramdisk = env.merged_ramdisk_url" data/cloudData.json > temp.json && mv temp.json data/cloudData.json; fi'
```
Comment on lines +136 to +180

```yaml
run: |
  set -euo pipefail
  cd ../job_render

  # Fallback to stable kerneltest ramdisk only if merged ramdisk is not available
  if [ -z "${{ steps.process_urls.outputs.merged_ramdisk_url }}" ]; then
    echo "Merged ramdisk not found. Using stable kerneltest ramdisk fallback."
    ramdisk_url="$(aws s3 presign s3://qli-prd-video-gh-artifacts/qualcomm-linux/video-driver/artifacts/initramfs/initramfs-kerneltest-full-image-qcom-armv8a.cpio.gz --expires-in 7600)"
    docker run -i --rm \
      --user "$(id -u):$(id -g)" \
      --workdir="$PWD" \
      -v "$(dirname "$PWD")":"$(dirname "$PWD")" \
      -e ramdisk_url="$ramdisk_url" \
      ${{ inputs.docker_image }} \
      jq '.artifacts.ramdisk = env.ramdisk_url' data/cloudData.json > temp.json && mv temp.json data/cloudData.json
  else
    echo "Ramdisk set from merged source; skipping kerneltest fallback."
  fi

  # Optional board-specific firmware initramfs
  if [ -n "${{ env.FIRMWARE }}" ]; then
    case "${{ env.FIRMWARE }}" in
      sm8750-mtp)
        FW_FILE="initramfs-firmware-dragonboard410c-image-sm8750-mtp.cpio.gz"
        ;;
      *)
        FW_FILE="initramfs-firmware-${{ env.FIRMWARE }}-image-qcom-armv8a.cpio.gz"
        ;;
    esac

    echo "Using firmware file: $FW_FILE"

    firmware_url="$(aws s3 presign s3://qli-prd-video-gh-artifacts/qualcomm-linux/video-driver/artifacts/initramfs/${FW_FILE} --expires-in 7600)"

    docker run -i --rm \
      --user "$(id -u):$(id -g)" \
      --workdir="$PWD" \
      -v "$(dirname "$PWD")":"$(dirname "$PWD")" \
      -e firmware_url="$firmware_url" \
      ${{ inputs.docker_image }} \
      jq '.artifacts.firmware = env.firmware_url' data/cloudData.json > temp.json && mv temp.json data/cloudData.json
  else
    echo "No FIRMWARE provided; skipping firmware artifact update."
  fi
```
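The firmware selection maps a board name to an initramfs filename via `case`, special-casing `sm8750-mtp`. The mapping can be isolated as a function (the `fw_file_for` name and the `some-board` target are invented here for illustration):

```shell
# Hypothetical helper mirroring the workflow's case-based firmware mapping.
fw_file_for() {
  case "$1" in
    sm8750-mtp)
      # sm8750-mtp uses a differently named image
      echo "initramfs-firmware-dragonboard410c-image-sm8750-mtp.cpio.gz"
      ;;
    *)
      echo "initramfs-firmware-$1-image-qcom-armv8a.cpio.gz"
      ;;
  esac
}

fw_file_for sm8750-mtp
fw_file_for some-board   # 'some-board' is a placeholder, not a real target
```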
Comment on lines +183 to +196

```yaml
run: |
  cd ../job_render
  mkdir -p renders
  docker run -i --rm \
    --user "$(id -u):$(id -g)" \
    --workdir="$PWD" \
    -v "$(dirname "$PWD")":"$(dirname "$PWD")" \
    -e TARGET="${{ env.LAVA_NAME }}" \
    -e TARGET_DTB="${{ env.MACHINE }}" \
    ${{ inputs.docker_image }} \
    sh -c 'export BOOT_METHOD=fastboot && \
      export TARGET=${TARGET} && \
      export TARGET_DTB=${TARGET_DTB} && \
      python3 lava_Job_definition_generator.py --localjson ./data/cloudData.json --video_pre-merge'
```
Comment on lines +187 to +197

```yaml
run: |
  set -euo pipefail
  machines_json='${{ inputs.build_matrix }}'
  if ! command -v jq >/dev/null 2>&1; then
    echo "❌ jq is not installed on this runner. Please install jq."
    exit 1
  fi
  echo "$machines_json" | jq -e . >/dev/null
  [ "$(echo "$machines_json" | jq length)" -gt 0 ] || { echo "❌ build_matrix is empty"; exit 1; }
  echo "✅ build_matrix is valid JSON"
```
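The jq availability check relies on `command -v`, the portable way to fail fast on a missing tool before any real work starts. A small sketch of that guard (the `require` helper and the deliberately missing tool name are invented):

```shell
# Fail-fast dependency check, same shape as the jq guard in the workflow step.
require() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "✅ $1 found"
  else
    echo "❌ $1 is not installed on this runner."
    return 1
  fi
}

require sh
require surely-missing-tool-123 || echo "guard tripped as expected"
```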
Comment on lines +200 to +260

```yaml
run: |
  set -euo pipefail
  workspace="${{ github.workspace }}"
  file_list="$workspace/artifacts/file_list.txt"
  mkdir -p "$workspace/artifacts"

  # Fresh file_list
  : > "$file_list"

  # Package lib/modules (xz-compressed) — exclude risky symlinks
  mod_root="$workspace/kobj/tar-install/lib/modules"
  [ -d "$mod_root" ] || { echo "❌ Missing directory: $mod_root"; exit 1; }
  tar -C "$workspace/kobj/tar-install" \
    --exclude='lib/modules/*/build' \
    --exclude='lib/modules/*/source' \
    --numeric-owner --owner=0 --group=0 \
    -cJf "$workspace/modules.tar.xz" lib/modules

  # Safety checks on the tar
  if tar -Jtvf "$workspace/modules.tar.xz" | grep -q ' -> '; then
    echo "❌ Symlinks found in modules archive (should be none)"; exit 1
  fi
  if tar -Jtf "$workspace/modules.tar.xz" | grep -Eq '^/|(^|/)\.\.(/|$)'; then
    echo "❌ Unsafe paths found in modules archive"; exit 1
  fi

  echo "$workspace/modules.tar.xz" >> "$file_list"
  echo "✅ Queued for upload: $workspace/modules.tar.xz"

  # Kernel Image + merged video ramdisk (no local ramdisk)
  IMAGE_PATH="$workspace/kobj/arch/arm64/boot/Image"
  VMLINUX_PATH="$workspace/kobj/vmlinux"
  MERGED_PATH="$workspace/combineramdisk/video-merged.cpio.gz"

  [ -f "$IMAGE_PATH" ] || { echo "❌ Missing expected file: $IMAGE_PATH"; exit 1; }
  [ -f "$VMLINUX_PATH" ] || { echo "❌ Missing expected file: $VMLINUX_PATH"; exit 1; }
  [ -f "$MERGED_PATH" ] || { echo "❌ Missing merged cpio: $MERGED_PATH"; exit 1; }

  echo "$IMAGE_PATH" >> "$file_list"
  echo "✅ Queued for upload: $IMAGE_PATH"
  echo "$VMLINUX_PATH" >> "$file_list"
  echo "✅ Queued for upload: $VMLINUX_PATH"
  echo "$MERGED_PATH" >> "$file_list"
  echo "✅ Queued for upload: $MERGED_PATH"

  # Loop through all machines from the build_matrix input and add DTBs
  machines='${{ inputs.build_matrix }}'
  for machine in $(echo "$machines" | jq -r '.[].machine'); do
    dtb="$workspace/kobj/arch/arm64/boot/dts/qcom/${machine}.dtb"
    if [ -f "$dtb" ]; then
      echo "$dtb" >> "$file_list"
      echo "✅ Queued for upload: $dtb"
    else
      echo "❌ Missing DTB: $dtb"
      exit 1
    fi
  done

  echo "----- Files queued for S3 upload -----"
  cat "$file_list"
```
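The two grep checks above guard the modules archive against symlink entries and path-traversal paths. They can be exercised on a throwaway archive; paths below are invented, and the sketch uses an uncompressed tar to stay dependency-free where the workflow uses xz:

```shell
# Recreate the workflow's archive safety checks on a tiny sample tar.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/lib/modules/6.6.0"
echo "fake module" > "$tmpdir/lib/modules/6.6.0/demo.ko"
tar -C "$tmpdir" -cf "$tmpdir/modules.tar" lib/modules

# Check 1: 'tar -tv' prints " -> " for symlink entries; expect none.
if tar -tvf "$tmpdir/modules.tar" | grep -q ' -> '; then sym_check=fail; else sym_check=ok; fi
# Check 2: reject absolute paths and '..' components.
if tar -tf "$tmpdir/modules.tar" | grep -Eq '^/|(^|/)\.\.(/|$)'; then path_check=fail; else path_check=ok; fi

echo "symlink check: $sym_check"
echo "path check: $path_check"
```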
Comment on lines +85 to +93

```yaml
run: |
  cd ../job_render
  job_id=$(docker run -i --rm --workdir="$PWD" -v "$(dirname "$PWD")":"$(dirname "$PWD")" ${{ inputs.docker_image }} sh -c "lavacli identities add --token ${{ secrets.LAVA_OSS_TOKEN }} --uri https://lava-oss.qualcomm.com/RPC2 --username ${{ secrets.LAVA_OSS_USER }} production && lavacli -i production jobs submit ./renders/lava_job_definition.yaml")
  job_url="https://lava-oss.qualcomm.com/scheduler/job/$job_id"
  echo "job_id=$job_id" >> "$GITHUB_OUTPUT"
  echo "job_url=$job_url" >> "$GITHUB_OUTPUT"
  echo "Lava Job: $job_url"
  echo "JOB_ID=$job_id" >> "$GITHUB_ENV"
```
Integrated the workflow and test results Dashboard, and also appended the post-merge cron job.

Expected results: https://github.com/qualcomm-linux/video-driver/actions/runs/24087010375