Add terraform deployment script for import automation #1924

Open
vish-cs wants to merge 1 commit into datacommonsorg:master from vish-cs:terraform

Conversation


@vish-cs vish-cs commented Mar 18, 2026

No description provided.

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive Terraform deployment script to automate the data import process within Data Commons. It establishes the foundational GCP infrastructure, including database, storage, messaging, and serverless compute components, along with orchestration workflows, to enable a robust and scalable import pipeline. The primary goal is to streamline and automate the ingestion, aggregation, and handling of data feeds.

Highlights

  • Infrastructure as Code: Introduced a new Terraform script (main.tf) to fully deploy the Data Commons Import Automation workflow on Google Cloud Platform.
  • GCP Service Provisioning: Configured and enabled essential GCP services including Artifact Registry, Batch, Cloud Build, Cloud Functions, Cloud Scheduler, Compute Engine, Dataflow, IAM, Pub/Sub, Cloud Run, Spanner, Cloud Storage, and Cloud Workflows.
  • Data Storage and Database: Provisioned a GCS bucket for imports and a Spanner instance and database, including schema initialization and an ingestion lock.
  • Serverless Compute and Messaging: Deployed three Python-based Cloud Functions (Gen2) for Spanner ingestion, BigQuery aggregation, and Pub/Sub event handling. A Pub/Sub topic and subscription were set up to trigger imports upon GCS transfer completion.
  • Workflow Orchestration: Created two Cloud Workflows: one to orchestrate import executor Batch jobs and another for Spanner ingestion Dataflow jobs, both utilizing a dedicated service account.
  • IAM Configuration: Established a unified service account (import-automation-sa) and assigned it a comprehensive set of IAM roles necessary for the deployed Workflows, Functions, Pub/Sub, Spanner, BigQuery, and other related services.
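
The Pub/Sub trigger path described in the highlights (GCS transfer completion publishing to a topic that fires a Gen2 Cloud Function) can be sketched roughly as follows. This is an illustrative sketch only; the resource names, runtime, and source-object path are assumptions, not taken from the PR's main.tf:

```hcl
# Topic that receives GCS transfer-completion notifications (names assumed).
resource "google_pubsub_topic" "import_notifications" {
  name = "import-notifications"
}

# Gen2 Cloud Function triggered by messages on that topic.
resource "google_cloudfunctions2_function" "import_handler" {
  name     = "import-pubsub-handler"
  location = var.region

  build_config {
    runtime     = "python312"
    entry_point = "handle_import_event"
    source {
      storage_source {
        bucket = google_storage_bucket.import_bucket.name
        object = "functions/import_handler.zip"
      }
    }
  }

  service_config {
    service_account_email = google_service_account.automation_sa.email
  }

  event_trigger {
    trigger_region = var.region
    event_type     = "google.cloud.pubsub.topic.v1.messagePublished"
    pubsub_topic   = google_pubsub_topic.import_notifications.id
  }
}
```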


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request introduces a comprehensive Terraform deployment script for the Data Commons Import Automation Workflow. The script effectively sets up various GCP services including APIs, GCS buckets, Spanner, Pub/Sub, Cloud Functions, and Cloud Workflows, along with a unified service account. The overall structure and resource definitions are well-organized and follow good practices for infrastructure as code. However, there are a couple of areas related to security and data protection that warrant attention, specifically regarding Spanner database deletion protection and the granularity of IAM roles assigned to the service account.

  instance = google_spanner_instance.import_instance.name
  name     = var.spanner_database_id
  project  = var.project_id
  ddl      = [for s in split(";", file("${path.module}/spanner_schema.sql")) : trimspace(s) if trimspace(s) != ""]

high

The deletion_protection for the Spanner database is set to false. For a critical component like a Spanner database, especially one involved in an import automation workflow, enabling deletion protection is highly recommended to prevent accidental data loss or service disruption. Consider setting this to true for production environments.

  deletion_protection = true
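
Applied to the reviewed resource, the suggestion would look roughly like the sketch below. The ddl expression is copied from the snippet above; the resource label and other attribute values are assumptions for illustration:

```hcl
resource "google_spanner_database" "import_database" {
  instance            = google_spanner_instance.import_instance.name
  name                = var.spanner_database_id
  project             = var.project_id
  # Blocks `terraform destroy` (and resource removal) from dropping the database.
  deletion_protection = true
  ddl = [
    for s in split(";", file("${path.module}/spanner_schema.sql")) :
    trimspace(s) if trimspace(s) != ""
  ]
}
```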

Comment on lines +145 to +147
    "roles/storage.objectAdmin",
    "roles/iam.serviceAccountUser",
    "roles/spanner.databaseAdmin",

high

The service account automation_sa is granted roles/spanner.databaseAdmin, roles/storage.objectAdmin, and roles/bigquery.dataEditor. These roles provide very broad administrative access across all Spanner databases, GCS buckets, and BigQuery datasets within the project. This violates the principle of least privilege.

It is recommended to use more granular roles or custom roles that are scoped to the specific resources the service account needs to interact with (e.g., specific Spanner databases, GCS buckets, or BigQuery datasets/tables) and only grant the necessary permissions (e.g., spanner.dataEditor for data manipulation, storage.objectUser for object read/write).

    "roles/storage.objectUser",
    "roles/spanner.dataEditor",
    "roles/bigquery.dataEditor", # Consider more granular roles if possible
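
Beyond swapping role names, the least-privilege recommendation can also be expressed as resource-level bindings scoped to the specific bucket and database rather than project-wide grants. A sketch, assuming the resource labels `import_bucket`, `import_instance`, `import_database`, and `automation_sa` (these names are assumptions, not from the PR):

```hcl
# Object read/write on the single import bucket only, instead of
# project-wide roles/storage.objectAdmin.
resource "google_storage_bucket_iam_member" "automation_bucket_rw" {
  bucket = google_storage_bucket.import_bucket.name
  role   = "roles/storage.objectUser"
  member = "serviceAccount:${google_service_account.automation_sa.email}"
}

# Data access on the one Spanner database only, instead of
# project-wide roles/spanner.databaseAdmin.
resource "google_spanner_database_iam_member" "automation_db_rw" {
  instance = google_spanner_instance.import_instance.name
  database = google_spanner_database.import_database.name
  role     = "roles/spanner.databaseUser"
  member   = "serviceAccount:${google_service_account.automation_sa.email}"
}
```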

