Add terraform deployment script for import automation #1924
vish-cs wants to merge 1 commit into datacommonsorg:master from
No description provided.
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a comprehensive Terraform deployment script to automate the data import process within Data Commons. It establishes the foundational GCP infrastructure, including database, storage, messaging, and serverless compute components, along with orchestration workflows, to enable a robust and scalable import pipeline. The primary goal is to streamline and automate the ingestion, aggregation, and handling of data feeds.
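For orientation, the sketch below shows the general shape such a Terraform script takes. It is illustrative only: the resource names (`import_staging`, `import_events`, `import_pipeline`) and file paths are assumptions, not the actual contents of the PR's main.tf; only the `automation_sa` service account name is mentioned in the review below.

```hcl
# Illustrative sketch; names and arguments are assumed, not taken from the PR.

resource "google_storage_bucket" "import_staging" {
  name                        = "${var.project_id}-import-staging" # hypothetical bucket name
  location                    = var.region
  uniform_bucket_level_access = true
}

resource "google_pubsub_topic" "import_events" {
  name    = "import-events" # hypothetical topic name
  project = var.project_id
}

resource "google_workflows_workflow" "import_pipeline" {
  name            = "import-pipeline" # hypothetical workflow name
  region          = var.region
  project         = var.project_id
  service_account = google_service_account.automation_sa.email
  source_contents = file("${path.module}/workflow.yaml") # assumed workflow definition file
}
```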
Code Review
The pull request introduces a comprehensive Terraform deployment script for the Data Commons Import Automation Workflow. The script effectively sets up various GCP services including APIs, GCS buckets, Spanner, Pub/Sub, Cloud Functions, and Cloud Workflows, along with a unified service account. The overall structure and resource definitions are well-organized and follow good practices for infrastructure as code. However, there are a couple of areas related to security and data protection that warrant attention, specifically regarding Spanner database deletion protection and the granularity of IAM roles assigned to the service account.
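For reference, enabling the required service APIs is typically handled with `google_project_service` resources. A minimal sketch, assuming a hypothetical API list (the PR's actual list may differ):

```hcl
# Sketch only; the exact set of APIs enabled by the PR may differ.
locals {
  required_apis = [
    "spanner.googleapis.com",
    "pubsub.googleapis.com",
    "cloudfunctions.googleapis.com",
    "workflows.googleapis.com",
    "storage.googleapis.com",
  ]
}

resource "google_project_service" "apis" {
  for_each = toset(local.required_apis)
  project  = var.project_id
  service  = each.value

  # Leave APIs enabled on destroy so other workloads in the
  # project are not disrupted.
  disable_on_destroy = false
}
```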
import-automation/workflow/main.tf (Outdated)

```hcl
instance = google_spanner_instance.import_instance.name
name     = var.spanner_database_id
project  = var.project_id
ddl      = [for s in split(";", file("${path.module}/spanner_schema.sql")) : trimspace(s) if trimspace(s) != ""]
```
The deletion_protection for the Spanner database is set to false. For a critical component like a Spanner database, especially one involved in an import automation workflow, enabling deletion protection is highly recommended to prevent accidental data loss or service disruption. Consider setting this to true for production environments.
```hcl
deletion_protection = true
```
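If hard-coding `true` is undesirable, one variant is to key the flag off an environment variable. This is a sketch only: it assumes a `var.environment` input and a resource named `import_db`, neither of which is confirmed by the diff above.

```hcl
# Assumes var.environment exists; otherwise a plain
# `deletion_protection = true` is the safer default.
resource "google_spanner_database" "import_db" { # resource name is assumed
  instance = google_spanner_instance.import_instance.name
  name     = var.spanner_database_id
  project  = var.project_id
  ddl      = [for s in split(";", file("${path.module}/spanner_schema.sql")) : trimspace(s) if trimspace(s) != ""]

  # Protect production databases from accidental `terraform destroy`.
  deletion_protection = var.environment == "prod"
}
```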
| "roles/storage.objectAdmin", | ||
| "roles/iam.serviceAccountUser", | ||
| "roles/spanner.databaseAdmin", |
The service account automation_sa is granted roles/spanner.databaseAdmin, roles/storage.objectAdmin, and roles/bigquery.dataEditor. These roles provide very broad administrative access across all Spanner databases, GCS buckets, and BigQuery datasets within the project. This violates the principle of least privilege.
It is recommended to use more granular roles or custom roles that are scoped to the specific resources the service account needs to interact with (e.g., specific Spanner databases, GCS buckets, or BigQuery datasets/tables), and to grant only the necessary permissions (e.g., roles/spanner.databaseUser for data manipulation, roles/storage.objectUser for object read/write), for example:
"roles/storage.objectUser",
"roles/spanner.dataEditor",
"roles/bigquery.dataEditor", # Consider more granular roles if possible