diff --git a/calico-cloud/free/connect-cluster-free.mdx b/calico-cloud/free/connect-cluster-free.mdx index 150b389851..06cda13c0c 100644 --- a/calico-cloud/free/connect-cluster-free.mdx +++ b/calico-cloud/free/connect-cluster-free.mdx @@ -180,5 +180,5 @@ To fix this problem: ## Next steps -* To try out the observability tools with demo applications, follow the quickstart guide from [Step 4. Deploy NGINX and BusyBox to generate traffic](quickstart.mdx#step-4-deploy-nginx-and-busybox-to-generate-traffic). +* To try out the observability tools with a demo application, follow the quickstart guide from [Step 4: Deploy the demo app](quickstart.mdx#step-4-deploy-the-demo-app). * [Remove a cluster from Calico Cloud Free Tier](disconnect-cluster-free.mdx) \ No newline at end of file diff --git a/calico-cloud/free/quickstart.mdx b/calico-cloud/free/quickstart.mdx index 521db212ad..7651aea8a3 100644 --- a/calico-cloud/free/quickstart.mdx +++ b/calico-cloud/free/quickstart.mdx @@ -8,32 +8,18 @@ export let figCount = 1; # Calico Cloud Free Tier quickstart guide -This quickstart guide shows you how to connect a Kubernetes cluster to Calico Cloud Free Tier. -You'll learn how to create a cluster with `kind`, connect that cluster to the Calico Cloud web console, and use observability tools to monitor network traffic. +Get Calico Cloud running on a local Kind cluster, deploy a realistic microservices app, and explore your live Service Graph — all in a few minutes. ## Before you begin -* You need to [sign up for a Calico Cloud Free Tier account](https://calicocloud.io). -* You also need to install a few tools to complete this tutorial: - * `kind`. - This is what you'll use to create a cluster on your workstation. - For installation instructions, see the [`kind` documentation](https://kind.sigs.k8s.io/docs/user/quick-start/). - * Docker Engine or Docker Desktop. - This is required to run containers for the `kind` utility. 
- For installation instructions, see the [Docker documentation](https://docs.docker.com/desktop/). - * `kubectl`. - This is the tool you'll use to interact with your cluster. - For installation instructions, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/#kubectl) +* [Sign up for a Calico Cloud Free Tier account](https://calicocloud.io). +* Install [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/), [Docker](https://docs.docker.com/desktop/), and [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl). -## Step 1: Create a cluster +## Step 1: Create a Kind cluster -In this step, you will: -* **Create a cluster:** Use `kind` to create a Kubernetes cluster. -* **Verify the cluster:** Check that the cluster is running and ready. +1. Create a file called `kind-config.yaml`: -1. Create a file called `config.yaml` and give it the following content: - - ```bash + ```yaml kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 nodes: @@ -45,92 +31,48 @@ In this step, you will: podSubnet: 192.168.0.0/16 ``` - This configuration file tells `kind` to create a cluster with one control-plane node and two worker nodes. - It instructs `kind` to create the cluster without a CNI. - The `podSubnet` range defines the IP addresses that Kubernetes will use for pods. + This tells Kind to create a three-node cluster without a default CNI, so you can install Calico in the next step. -2. Start your Kubernetes cluster with the configuration file by running the following command: +1. Create the cluster: ```bash - kind create cluster --name=calico-cluster --config=config.yaml - ``` - - `kind` reads you configuration file and creates a cluster in a few minutes. - - ```bash title='Expected output' - Creating cluster "calico-cluster" ... 
- ✓ Ensuring node image (kindest/node:v1.29.2) đŸ–ŧ - ✓ Preparing nodes đŸ“Ļ đŸ“Ļ đŸ“Ļ - ✓ Writing configuration 📜 - ✓ Starting control-plane đŸ•šī¸ - ✓ Installing StorageClass 💾 - ✓ Joining worker nodes 🚜 - Set kubectl context to "kind-calico-cluster" - You can now use your cluster with: - - kubectl cluster-info --context kind-calico-cluster - - Thanks for using kind! 😊 + kind create cluster --name=calico-cluster --config=kind-config.yaml ``` -3. To verify that your cluster is working, run the following command: +1. Verify the nodes are up (they'll show `NotReady` until Calico is installed): ```bash kubectl get nodes ``` - You should see three nodes with the name you gave the cluster. - ```bash title="Expected output" - NAME STATUS ROLES AGE VERSION - calico-cluster-control-plane NotReady control-plane 5m46s v1.29.2 - calico-cluster-worker NotReady 5m23s v1.29.2 - calico-cluster-worker2 NotReady 5m22s v1.29.2 + NAME STATUS ROLES AGE VERSION + calico-cluster-control-plane NotReady control-plane 60s v1.29.2 + calico-cluster-worker NotReady 40s v1.29.2 + calico-cluster-worker2 NotReady 40s v1.29.2 ``` - Don't wait for the nodes to get a `Ready` status. - They remain in a `NotReady` status until you configure networking in the next step. - -## Step 2. Install Calico +## Step 2: Install Calico -In this step, you will install Calico in your cluster. - -1. Install the Tigera Operator and custom resource definitions. +1. 
Install the Tigera operator and custom resource definitions: ```bash kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` - ```bash title="Expected output" - namespace/tigera-operator created - serviceaccount/tigera-operator created - clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created - clusterrole.rbac.authorization.k8s.io/tigera-operator created - clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created - rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created - deployment.apps/tigera-operator created - ``` - -2. Install $[prodname] by creating the necessary custom resources. +1. Install $[prodname] by creating the necessary custom resources: ```bash kubectl create -f $[manifestsUrl]/manifests/custom-resources.yaml ``` - ```bash title="Expected output" - installation.operator.tigera.io/default created - apiserver.operator.tigera.io/default created - goldmane.operator.tigera.io/default created - whisker.operator.tigera.io/default created - ``` - -3. Monitor the deployment by running the following command: +1. Wait for all components to become available: ```bash watch kubectl get tigerastatus ``` - After a few minutes, all the Calico components display `True` in the `AVAILABLE` column. + After a few minutes, all components should show `True` in the `AVAILABLE` column. ```bash title="Expected output" NAME AVAILABLE PROGRESSING DEGRADED SINCE @@ -141,12 +83,10 @@ In this step, you will install Calico in your cluster. whisker True False False 3m19s ``` -## Step 3. Connect to Calico Cloud Free Tier - -In this step, you will connect your cluster to Calico Cloud Free Tier. +## Step 3: Connect to Calico Cloud Free Tier -1. Sign in to the Calico Cloud web console and click the **Connect a cluster** button on the welcome screen. -1. Follow the prompts to create a name for your cluster (for example, `quickstart-cluster`) and copy a `kubectl` command to run in your cluster. +1. 
In the Calico Cloud web console, click **Connect a cluster**. +1. Follow the prompts to name your cluster (for example, `quickstart-cluster`) and copy the generated `kubectl` command.
What's happening in this command? @@ -158,383 +98,77 @@ In this step, you will connect your cluster to Calico Cloud Free Tier. This resource provides certificates for secure communication between your cluster and the Calico Cloud management cluster. * **A `Secret` resource (`tigera-voltron-linseed-certs-public`)**. This resource provides certificates for secure communications for the specific components that Calico Cloud uses for log data and observability. - - ```bash title='Example of generated kubectl command to connect a cluster to Calico Cloud Free Tier' - kubectl apply -f - < -1. To start the connection process, run the `kubectl` command in your terminal. - - ```bash title='Example output' - managementclusterconnection.operator.tigera.io/tigera-secure created - secret/tigera-managed-cluster-connection created - secret/tigera-voltron-linseed-certs-public created - ``` - -1. Back in your browser, click **I applied the manifest** to close the dialog. - Your new cluster connection appears in the **Managed Clusters** page. - - - *Figure {figCount++}: A screenshot of the Managed Clusters page showing the new cluster connection.* - -1. On the left side of the console, click **Service Graph** to view the Service Graph, which we'll be using to view your network traffic in this tutorial. - What you see is a dynamic diagram of the namespaces in your cluster and the connections between them. - For now, it shows only system namespaces. +1. Run the `kubectl` command in your terminal. +1. Back in the console, click **I applied the manifest**. + When the **Managed Clusters** page shows your cluster as **Connected**, move to the next step. +## Step 4: Deploy the demo app - - *Figure {figCount++}: A screenshot of the Service Graph showing system namespaces.* +This step deploys [Google's Online Boutique](https://github.com/GoogleCloudPlatform/microservices-demo), a microservices demo with 12 services. 
+Each service is deployed in its own namespace so that Service Graph displays a rich cross-namespace topology with realistic traffic patterns. - We'll return to this page to see the traffic after we deploy a sample application in the next step. - -## Step 4. Deploy NGINX and BusyBox to generate traffic - -Now it's time to generate some network traffic. -We'll do this first by deploying an NGINX server and exposing it as a service in the cluster. -Then we'll make HTTP requests from another pod in the cluster to the NGINX server and to an external website. -For this we'll use the BusyBox utility. - -In this step, you will: -* **Create a server:** Deploy an NGINX web server in your Kubernetes cluster. -* **Expose the server:** Make the NGINX server accessible within the cluster. -* **Test connectivity:** Use a BusyBox pod to verify connections to the NGINX server and the public internet. - -1. Create a namespace for your application: - - ```bash - kubectl create namespace quickstart - ``` - ```bash title="Expected output" - namespace/quickstart created - ``` - -1. Deploy an NGINX web server in the `quickstart` namespace: - - ```bash - kubectl create deployment --namespace=quickstart nginx --image=nginx - ``` - ```bash title="Expected output" - deployment.apps/nginx created - ``` - -1. Expose the NGINX deployment to make it accessible within the cluster: +1. Deploy the application: ```bash - kubectl expose --namespace=quickstart deployment nginx --port=80 - ``` - ```bash title="Expected output" - service/nginx exposed + kubectl apply -f $[tutorialFilesURL]/online-boutique-namespaced.yaml ``` -1. Start a BusyBox session to test whether you can access the NGINX server. + :::note - ```bash - kubectl run --namespace=quickstart access --rm -ti --image busybox /bin/sh - ``` + The manifest includes a load generator that automatically drives traffic between all services. + No manual traffic generation is needed. 
- This command creates a BusyBox pod inside the `quickstart` namespace and starts a shell session inside the pod. + ::: - ```bash title="Expected output" - If you don't see a command prompt, try pressing enter. - / # - ``` - -1. In the BusyBox shell, run the following command to test communication with the NGINX server: +1. Wait for all pods to reach `Running` status: ```bash - wget -qO- http://nginx - ``` - - You should see the HTML content of the NGINX welcome page. - - ```html title="Expected output" - - - - Welcome to nginx! - - - -

-    <h1>Welcome to nginx!</h1>
-    <p>If you see this page, the nginx web server is successfully installed and
-    working. Further configuration is required.</p>
-
-    <p>For online documentation and support please refer to
-    <a href="http://nginx.org/">nginx.org</a>.<br/>
-    Commercial support is available at
-    <a href="http://nginx.com/">nginx.com</a>.</p>
-
-    <p><em>Thank you for using nginx.</em></p>
- - + watch "kubectl get pods -A | grep -v Running | grep -v Completed" ``` - This confirms that the BusyBox pod can access the NGINX server. - -1. In the Busybox shell, run the following command test communication with the public internet: - - ```bash - wget -qO- https://docs.tigera.io/pod-connection-test.txt - ``` - - You should see the content of the file `pod-connectivity-test.txt`. - ```html title="Expected output" - You successfully connected to https://docs.tigera.io/pod-connection-test.txt. - ``` + This may take 2–3 minutes. When the only output is the header line, all pods are ready. - This confirms that the BusyBox pod can access the public internet. +
+ What's in the demo app? + The Online Boutique is a cloud-native e-commerce app made up of 12 microservices: + `adservice`, `cartservice`, `checkoutservice`, `currencyservice`, `emailservice`, `frontend`, `loadgenerator`, `paymentservice`, `productcatalogservice`, `recommendationservice`, `redis-cart`, and `shippingservice`. -1. In the web console, go to the **Service Graph** page to view your flow logs. - It may take up to 5 minutes for the flows to appear. - When they appear, click the `quickstart` namespace in the Service Graph to filter the view to show the flows only in that namespace. - In the list of flows, you should see three new connection types: one to `coredns` one to `nginx`, and another to `pub`, meaning "public network". - - - *Figure {figCount++}: Service Graph with `quickstart` namespace selected showing flows to NGINX and public network.* - -## Step 5. Restrict all traffic with a default deny policy - -To effectively secure your cluster, it's best to start by denying all traffic, and then gradually allowing only the necessary traffic. -We'll do this by applying a Global Calico Network Policy that denies all ingress and egress traffic by default. - -In this step, you will: -* **Implement a global default deny policy:** Use a Global Calico Network Policy to deny all ingress and egress traffic by default. -* **Verify access is denied:** Use your BusyBox pod to confirm that the policy is working as expected. - -1. Create a Global Calico Network Policy to deny all traffic except for the necessary system namespaces: - - ```bash - kubectl create -f - < - *Figure {figCount++}: Service Graph showing denied flows to `coredns`.* - - - By following these steps, you have successfully implemented a global default deny policy and verified that it is working as expected. - -## Step 6. 
Create targeted network policy for allowed traffic - -Now that you have a default deny policy in place, you need to create specific policies to allow only the necessary traffic for your applications to function. -The `default-deny` policy blocks all ingress and egress traffic for pods not in system namespaces, including our `access` (BusyBox) and `nginx` pods in the `quickstart` namespace. - -In this step, you will: -* **Allow egress traffic from BusyBox** Create a network policy to allow egress traffic from the BusyBox pod to the public internet. -* **Allow ingress traffic to NGINX** Create a network policy to allow ingress traffic to the NGINX server. - -1. Create a Calico network policy in the `quickstart` namespace that selects the `access` pod and allows all egress traffic from it. - - ```bash - kubectl create -f - < - ```bash - wget -qO- https://docs.tigera.io/pod-connection-test.txt - ``` +## Step 5: Explore Service Graph - ```html title="Expected output" - You successfully connected to https://docs.tigera.io/pod-connection-test.txt. - ``` +In the Calico Cloud web console, go to **Service Graph**. +Within a minute or two you'll see every service and namespace mapped as a live node, with directional traffic flows between them. -1. Test access to the NGINX server again. - Egress *from* the `access` pod is allowed by the new policy, but ingress *to* the `nginx` pod is still blocked by the `default-deny` policy. - This request should fail. +Try clicking on a service — for example, `cartservice`. You'll see: +* Which services send traffic to it (such as `checkoutservice` and `frontend`) +* Which services it connects to (such as `redis-cart`) +* Flow logs with details on allowed and denied connections - ```bash - wget -qO- http://nginx - ``` + +*Figure {figCount++}: Service Graph showing cross-namespace traffic flows between the Online Boutique microservices.* - ```bash title="Expected output" - wget: bad address 'nginx' - ``` +## Step 6: Clean up - -4. 
Create another Calico network policy in the `quickstart` namespace. - This policy selects the `nginx` pods (using the label `app=nginx`) and allows ingress traffic specifically *from* pods with the label `run=access`. - - ```bash - kubectl create -f - < - - - Welcome to nginx! - - - -

-    <h1>Welcome to nginx!</h1>
-    <p>If you see this page, the nginx web server is successfully installed and
-    working. Further configuration is required.</p>
-
-    <p>For online documentation and support please refer to
-    <a href="http://nginx.org/">nginx.org</a>.<br/>
-    Commercial support is available at
-    <a href="http://nginx.com/">nginx.com</a>.</p>
-
-    <p><em>Thank you for using nginx.</em></p>
- - - ``` - -You have now successfully implemented a default deny policy while allowing only the necessary traffic for your applications to function. - - -## Step 7. Clean up - -1. To delete the cluster, run the following command: +1. Delete the Kind cluster: ```bash kind delete cluster --name=calico-cluster ``` - ```bash title="Expected output" - Deleted cluster: calico-cluster - ``` - -1. To remove the cluster from Calico Cloud, go to the **Managed Clusters** page. - Click **Actions > Disconnect**, and in the confirmation dialog, click **I ran the commands**. - (Ordinarily you would run the commands from the dialog, but since you deleted the cluster already, you don't need to do this.) +1. In the Calico Cloud web console, go to **Managed Clusters**, click **Actions > Disconnect**, and then click **I ran the commands**. + (Because you already deleted the cluster, you don't need to run the disconnect commands.) 1. Click **Actions > Remove** to fully remove the cluster from Calico Cloud. - You can now connect another cluster to make use of the observability tools. - -## Additional resources +## Next steps -* To view requirements and connect another cluster, see [Connect a cluster to Calico Cloud Free Tier](connect-cluster-free.mdx). \ No newline at end of file +* [Write a network policy](/calico-cloud/network-policy/beginners/calico-network-policy) to restrict traffic between services — for example, allow only `cartservice` to reach `redis-cart`. +* [Set up alerts](/calico-cloud/observability/alerts) for unexpected traffic patterns. +* [Explore the full feature set](/calico-cloud/free/overview) available on the Free Tier. +* [Connect another cluster](/calico-cloud/free/connect-cluster-free) to Calico Cloud Free Tier. 
diff --git a/calico-cloud_versioned_docs/version-22-2/free/connect-cluster-free.mdx b/calico-cloud_versioned_docs/version-22-2/free/connect-cluster-free.mdx index 150b389851..06cda13c0c 100644 --- a/calico-cloud_versioned_docs/version-22-2/free/connect-cluster-free.mdx +++ b/calico-cloud_versioned_docs/version-22-2/free/connect-cluster-free.mdx @@ -180,5 +180,5 @@ To fix this problem: ## Next steps -* To try out the observability tools with demo applications, follow the quickstart guide from [Step 4. Deploy NGINX and BusyBox to generate traffic](quickstart.mdx#step-4-deploy-nginx-and-busybox-to-generate-traffic). +* To try out the observability tools with a demo application, follow the quickstart guide from [Step 4: Deploy the demo app](quickstart.mdx#step-4-deploy-the-demo-app). * [Remove a cluster from Calico Cloud Free Tier](disconnect-cluster-free.mdx) \ No newline at end of file diff --git a/calico-cloud_versioned_docs/version-22-2/free/quickstart.mdx b/calico-cloud_versioned_docs/version-22-2/free/quickstart.mdx index 521db212ad..7651aea8a3 100644 --- a/calico-cloud_versioned_docs/version-22-2/free/quickstart.mdx +++ b/calico-cloud_versioned_docs/version-22-2/free/quickstart.mdx @@ -8,32 +8,18 @@ export let figCount = 1; # Calico Cloud Free Tier quickstart guide -This quickstart guide shows you how to connect a Kubernetes cluster to Calico Cloud Free Tier. -You'll learn how to create a cluster with `kind`, connect that cluster to the Calico Cloud web console, and use observability tools to monitor network traffic. +Get Calico Cloud running on a local Kind cluster, deploy a realistic microservices app, and explore your live Service Graph — all in a few minutes. ## Before you begin -* You need to [sign up for a Calico Cloud Free Tier account](https://calicocloud.io). -* You also need to install a few tools to complete this tutorial: - * `kind`. - This is what you'll use to create a cluster on your workstation. 
- For installation instructions, see the [`kind` documentation](https://kind.sigs.k8s.io/docs/user/quick-start/). - * Docker Engine or Docker Desktop. - This is required to run containers for the `kind` utility. - For installation instructions, see the [Docker documentation](https://docs.docker.com/desktop/). - * `kubectl`. - This is the tool you'll use to interact with your cluster. - For installation instructions, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/#kubectl) +* [Sign up for a Calico Cloud Free Tier account](https://calicocloud.io). +* Install [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/), [Docker](https://docs.docker.com/desktop/), and [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl). -## Step 1: Create a cluster +## Step 1: Create a Kind cluster -In this step, you will: -* **Create a cluster:** Use `kind` to create a Kubernetes cluster. -* **Verify the cluster:** Check that the cluster is running and ready. +1. Create a file called `kind-config.yaml`: -1. Create a file called `config.yaml` and give it the following content: - - ```bash + ```yaml kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 nodes: @@ -45,92 +31,48 @@ In this step, you will: podSubnet: 192.168.0.0/16 ``` - This configuration file tells `kind` to create a cluster with one control-plane node and two worker nodes. - It instructs `kind` to create the cluster without a CNI. - The `podSubnet` range defines the IP addresses that Kubernetes will use for pods. + This tells Kind to create a three-node cluster without a default CNI, so you can install Calico in the next step. -2. Start your Kubernetes cluster with the configuration file by running the following command: +1. Create the cluster: ```bash - kind create cluster --name=calico-cluster --config=config.yaml - ``` - - `kind` reads you configuration file and creates a cluster in a few minutes. - - ```bash title='Expected output' - Creating cluster "calico-cluster" ... 
- ✓ Ensuring node image (kindest/node:v1.29.2) đŸ–ŧ - ✓ Preparing nodes đŸ“Ļ đŸ“Ļ đŸ“Ļ - ✓ Writing configuration 📜 - ✓ Starting control-plane đŸ•šī¸ - ✓ Installing StorageClass 💾 - ✓ Joining worker nodes 🚜 - Set kubectl context to "kind-calico-cluster" - You can now use your cluster with: - - kubectl cluster-info --context kind-calico-cluster - - Thanks for using kind! 😊 + kind create cluster --name=calico-cluster --config=kind-config.yaml ``` -3. To verify that your cluster is working, run the following command: +1. Verify the nodes are up (they'll show `NotReady` until Calico is installed): ```bash kubectl get nodes ``` - You should see three nodes with the name you gave the cluster. - ```bash title="Expected output" - NAME STATUS ROLES AGE VERSION - calico-cluster-control-plane NotReady control-plane 5m46s v1.29.2 - calico-cluster-worker NotReady 5m23s v1.29.2 - calico-cluster-worker2 NotReady 5m22s v1.29.2 + NAME STATUS ROLES AGE VERSION + calico-cluster-control-plane NotReady control-plane 60s v1.29.2 + calico-cluster-worker NotReady 40s v1.29.2 + calico-cluster-worker2 NotReady 40s v1.29.2 ``` - Don't wait for the nodes to get a `Ready` status. - They remain in a `NotReady` status until you configure networking in the next step. - -## Step 2. Install Calico +## Step 2: Install Calico -In this step, you will install Calico in your cluster. - -1. Install the Tigera Operator and custom resource definitions. +1. 
Install the Tigera operator and custom resource definitions: ```bash kubectl create -f $[manifestsUrl]/manifests/tigera-operator.yaml ``` - ```bash title="Expected output" - namespace/tigera-operator created - serviceaccount/tigera-operator created - clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created - clusterrole.rbac.authorization.k8s.io/tigera-operator created - clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created - rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created - deployment.apps/tigera-operator created - ``` - -2. Install $[prodname] by creating the necessary custom resources. +1. Install $[prodname] by creating the necessary custom resources: ```bash kubectl create -f $[manifestsUrl]/manifests/custom-resources.yaml ``` - ```bash title="Expected output" - installation.operator.tigera.io/default created - apiserver.operator.tigera.io/default created - goldmane.operator.tigera.io/default created - whisker.operator.tigera.io/default created - ``` - -3. Monitor the deployment by running the following command: +1. Wait for all components to become available: ```bash watch kubectl get tigerastatus ``` - After a few minutes, all the Calico components display `True` in the `AVAILABLE` column. + After a few minutes, all components should show `True` in the `AVAILABLE` column. ```bash title="Expected output" NAME AVAILABLE PROGRESSING DEGRADED SINCE @@ -141,12 +83,10 @@ In this step, you will install Calico in your cluster. whisker True False False 3m19s ``` -## Step 3. Connect to Calico Cloud Free Tier - -In this step, you will connect your cluster to Calico Cloud Free Tier. +## Step 3: Connect to Calico Cloud Free Tier -1. Sign in to the Calico Cloud web console and click the **Connect a cluster** button on the welcome screen. -1. Follow the prompts to create a name for your cluster (for example, `quickstart-cluster`) and copy a `kubectl` command to run in your cluster. +1. 
In the Calico Cloud web console, click **Connect a cluster**. +1. Follow the prompts to name your cluster (for example, `quickstart-cluster`) and copy the generated `kubectl` command.
What's happening in this command? @@ -158,383 +98,77 @@ In this step, you will connect your cluster to Calico Cloud Free Tier. This resource provides certificates for secure communication between your cluster and the Calico Cloud management cluster. * **A `Secret` resource (`tigera-voltron-linseed-certs-public`)**. This resource provides certificates for secure communications for the specific components that Calico Cloud uses for log data and observability. - - ```bash title='Example of generated kubectl command to connect a cluster to Calico Cloud Free Tier' - kubectl apply -f - < -1. To start the connection process, run the `kubectl` command in your terminal. - - ```bash title='Example output' - managementclusterconnection.operator.tigera.io/tigera-secure created - secret/tigera-managed-cluster-connection created - secret/tigera-voltron-linseed-certs-public created - ``` - -1. Back in your browser, click **I applied the manifest** to close the dialog. - Your new cluster connection appears in the **Managed Clusters** page. - - - *Figure {figCount++}: A screenshot of the Managed Clusters page showing the new cluster connection.* - -1. On the left side of the console, click **Service Graph** to view the Service Graph, which we'll be using to view your network traffic in this tutorial. - What you see is a dynamic diagram of the namespaces in your cluster and the connections between them. - For now, it shows only system namespaces. +1. Run the `kubectl` command in your terminal. +1. Back in the console, click **I applied the manifest**. + When the **Managed Clusters** page shows your cluster as **Connected**, move to the next step. +## Step 4: Deploy the demo app - - *Figure {figCount++}: A screenshot of the Service Graph showing system namespaces.* +This step deploys [Google's Online Boutique](https://github.com/GoogleCloudPlatform/microservices-demo), a microservices demo with 12 services. 
+Each service is deployed in its own namespace so that Service Graph displays a rich cross-namespace topology with realistic traffic patterns. - We'll return to this page to see the traffic after we deploy a sample application in the next step. - -## Step 4. Deploy NGINX and BusyBox to generate traffic - -Now it's time to generate some network traffic. -We'll do this first by deploying an NGINX server and exposing it as a service in the cluster. -Then we'll make HTTP requests from another pod in the cluster to the NGINX server and to an external website. -For this we'll use the BusyBox utility. - -In this step, you will: -* **Create a server:** Deploy an NGINX web server in your Kubernetes cluster. -* **Expose the server:** Make the NGINX server accessible within the cluster. -* **Test connectivity:** Use a BusyBox pod to verify connections to the NGINX server and the public internet. - -1. Create a namespace for your application: - - ```bash - kubectl create namespace quickstart - ``` - ```bash title="Expected output" - namespace/quickstart created - ``` - -1. Deploy an NGINX web server in the `quickstart` namespace: - - ```bash - kubectl create deployment --namespace=quickstart nginx --image=nginx - ``` - ```bash title="Expected output" - deployment.apps/nginx created - ``` - -1. Expose the NGINX deployment to make it accessible within the cluster: +1. Deploy the application: ```bash - kubectl expose --namespace=quickstart deployment nginx --port=80 - ``` - ```bash title="Expected output" - service/nginx exposed + kubectl apply -f $[tutorialFilesURL]/online-boutique-namespaced.yaml ``` -1. Start a BusyBox session to test whether you can access the NGINX server. + :::note - ```bash - kubectl run --namespace=quickstart access --rm -ti --image busybox /bin/sh - ``` + The manifest includes a load generator that automatically drives traffic between all services. + No manual traffic generation is needed. 
- This command creates a BusyBox pod inside the `quickstart` namespace and starts a shell session inside the pod. + ::: - ```bash title="Expected output" - If you don't see a command prompt, try pressing enter. - / # - ``` - -1. In the BusyBox shell, run the following command to test communication with the NGINX server: +1. Wait for all pods to reach `Running` status: ```bash - wget -qO- http://nginx - ``` - - You should see the HTML content of the NGINX welcome page. - - ```html title="Expected output" - - - - Welcome to nginx! - - - -

-    <h1>Welcome to nginx!</h1>
-    <p>If you see this page, the nginx web server is successfully installed and
-    working. Further configuration is required.</p>
-
-    <p>For online documentation and support please refer to
-    <a href="http://nginx.org/">nginx.org</a>.<br/>
-    Commercial support is available at
-    <a href="http://nginx.com/">nginx.com</a>.</p>
-
-    <p><em>Thank you for using nginx.</em></p>
- - + watch "kubectl get pods -A | grep -v Running | grep -v Completed" ``` - This confirms that the BusyBox pod can access the NGINX server. - -1. In the Busybox shell, run the following command test communication with the public internet: - - ```bash - wget -qO- https://docs.tigera.io/pod-connection-test.txt - ``` - - You should see the content of the file `pod-connectivity-test.txt`. - ```html title="Expected output" - You successfully connected to https://docs.tigera.io/pod-connection-test.txt. - ``` + This may take 2–3 minutes. When the only output is the header line, all pods are ready. - This confirms that the BusyBox pod can access the public internet. +
+ What's in the demo app? + The Online Boutique is a cloud-native e-commerce app made up of 12 microservices: + `adservice`, `cartservice`, `checkoutservice`, `currencyservice`, `emailservice`, `frontend`, `loadgenerator`, `paymentservice`, `productcatalogservice`, `recommendationservice`, `redis-cart`, and `shippingservice`. -1. In the web console, go to the **Service Graph** page to view your flow logs. - It may take up to 5 minutes for the flows to appear. - When they appear, click the `quickstart` namespace in the Service Graph to filter the view to show the flows only in that namespace. - In the list of flows, you should see three new connection types: one to `coredns` one to `nginx`, and another to `pub`, meaning "public network". - - - *Figure {figCount++}: Service Graph with `quickstart` namespace selected showing flows to NGINX and public network.* - -## Step 5. Restrict all traffic with a default deny policy - -To effectively secure your cluster, it's best to start by denying all traffic, and then gradually allowing only the necessary traffic. -We'll do this by applying a Global Calico Network Policy that denies all ingress and egress traffic by default. - -In this step, you will: -* **Implement a global default deny policy:** Use a Global Calico Network Policy to deny all ingress and egress traffic by default. -* **Verify access is denied:** Use your BusyBox pod to confirm that the policy is working as expected. - -1. Create a Global Calico Network Policy to deny all traffic except for the necessary system namespaces: - - ```bash - kubectl create -f - < - *Figure {figCount++}: Service Graph showing denied flows to `coredns`.* - - - By following these steps, you have successfully implemented a global default deny policy and verified that it is working as expected. - -## Step 6. 
Create targeted network policy for allowed traffic - -Now that you have a default deny policy in place, you need to create specific policies to allow only the necessary traffic for your applications to function. -The `default-deny` policy blocks all ingress and egress traffic for pods not in system namespaces, including our `access` (BusyBox) and `nginx` pods in the `quickstart` namespace. - -In this step, you will: -* **Allow egress traffic from BusyBox** Create a network policy to allow egress traffic from the BusyBox pod to the public internet. -* **Allow ingress traffic to NGINX** Create a network policy to allow ingress traffic to the NGINX server. - -1. Create a Calico network policy in the `quickstart` namespace that selects the `access` pod and allows all egress traffic from it. - - ```bash - kubectl create -f - < - ```bash - wget -qO- https://docs.tigera.io/pod-connection-test.txt - ``` +## Step 5: Explore Service Graph - ```html title="Expected output" - You successfully connected to https://docs.tigera.io/pod-connection-test.txt. - ``` +In the Calico Cloud web console, go to **Service Graph**. +Within a minute or two you'll see every service and namespace mapped as a live node, with directional traffic flows between them. -1. Test access to the NGINX server again. - Egress *from* the `access` pod is allowed by the new policy, but ingress *to* the `nginx` pod is still blocked by the `default-deny` policy. - This request should fail. +Try clicking on a service — for example, `cartservice`. You'll see: +* Which services send traffic to it (such as `checkoutservice` and `frontend`) +* Which services it connects to (such as `redis-cart`) +* Flow logs with details on allowed and denied connections - ```bash - wget -qO- http://nginx - ``` + +*Figure {figCount++}: Service Graph showing cross-namespace traffic flows between the Online Boutique microservices.* - ```bash title="Expected output" - wget: bad address 'nginx' - ``` +## Step 6: Clean up - -4. 
Create another Calico network policy in the `quickstart` namespace. - This policy selects the `nginx` pods (using the label `app=nginx`) and allows ingress traffic specifically *from* pods with the label `run=access`. - - ```bash - kubectl create -f - < - - - Welcome to nginx! - - - -

-   <h1>Welcome to nginx!</h1>
-   <p>If you see this page, the nginx web server is successfully installed and
-   working. Further configuration is required.</p>
-
-   <p>For online documentation and support please refer to
-   <a href="http://nginx.org/">nginx.org</a>.<br/>
-   Commercial support is available at
-   <a href="http://nginx.com/">nginx.com</a>.</p>
-
-   <p><em>Thank you for using nginx.</em></p>

- - - ``` - -You have now successfully implemented a default deny policy while allowing only the necessary traffic for your applications to function. - - -## Step 7. Clean up - -1. To delete the cluster, run the following command: +1. Delete the Kind cluster: ```bash kind delete cluster --name=calico-cluster ``` - ```bash title="Expected output" - Deleted cluster: calico-cluster - ``` - -1. To remove the cluster from Calico Cloud, go to the **Managed Clusters** page. - Click **Actions > Disconnect**, and in the confirmation dialog, click **I ran the commands**. - (Ordinarily you would run the commands from the dialog, but since you deleted the cluster already, you don't need to do this.) +1. In the Calico Cloud web console, go to **Managed Clusters**, click **Actions > Disconnect**, and then click **I ran the commands**. + (Because you already deleted the cluster, you don't need to run the disconnect commands.) 1. Click **Actions > Remove** to fully remove the cluster from Calico Cloud. - You can now connect another cluster to make use of the observability tools. - -## Additional resources +## Next steps -* To view requirements and connect another cluster, see [Connect a cluster to Calico Cloud Free Tier](connect-cluster-free.mdx). \ No newline at end of file +* [Write a network policy](/calico-cloud/network-policy/beginners/calico-network-policy) to restrict traffic between services — for example, allow only `cartservice` to reach `redis-cart`. +* [Set up alerts](/calico-cloud/observability/alerts) for unexpected traffic patterns. +* [Explore the full feature set](/calico-cloud/free/overview) available on the Free Tier. +* [Connect another cluster](/calico-cloud/free/connect-cluster-free) to Calico Cloud Free Tier. 
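A note on the readiness check in Step 4: the `watch` command's grep pipeline works by dropping every pod that has already settled into `Running` or `Completed`, so only the header and any not-yet-ready pods remain. A self-contained sketch of what that filter keeps, using a canned, hypothetical pod listing instead of a live cluster:

```shell
# Canned output standing in for `kubectl get pods -A` on a cluster where
# most pods are up but one is still starting (pod names are hypothetical).
pods='NAMESPACE       NAME                READY   STATUS      RESTARTS   AGE
kube-system     coredns-76f75df574  1/1     Running     0          5m
loadgenerator   loadgenerator-abc   0/1     Completed   0          2m
frontend        frontend-xyz        0/1     Pending     0          15s'

# The same filter the quickstart wraps in `watch`: Running and Completed
# pods drop out, leaving the header plus anything that still needs attention.
printf '%s\n' "$pods" | grep -v Running | grep -v Completed
# Output:
# NAMESPACE       NAME                READY   STATUS      RESTARTS   AGE
# frontend        frontend-xyz        0/1     Pending     0          15s
```

When every pod has reached `Running` or `Completed`, only the header line survives, which is the "only output is the header line" condition the step describes.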
diff --git a/static/files/online-boutique-namespaced.yaml b/static/files/online-boutique-namespaced.yaml
new file mode 100644
index 0000000000..bd3c1a543a
--- /dev/null
+++ b/static/files/online-boutique-namespaced.yaml
@@ -0,0 +1,1028 @@
+# Online Boutique microservices demo (namespaced)
+#
+# Adapted from Google's microservices-demo, licensed under Apache 2.0.
+# https://github.com/GoogleCloudPlatform/microservices-demo
+#
+# Each service is placed in its own namespace so that Calico Cloud
+# Service Graph shows a rich cross-namespace topology.
+# All cross-service env vars use the <service>.<namespace> DNS format.
+
+# --------------------------------------------------------------------------
+# adservice
+# --------------------------------------------------------------------------
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: adservice
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: adservice
+  namespace: adservice
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: adservice
+  namespace: adservice
+spec:
+  selector:
+    matchLabels:
+      app: adservice
+  template:
+    metadata:
+      labels:
+        app: adservice
+    spec:
+      serviceAccountName: adservice
+      terminationGracePeriodSeconds: 5
+      securityContext:
+        fsGroup: 1000
+        runAsGroup: 1000
+        runAsNonRoot: true
+        runAsUser: 1000
+      containers:
+        - name: server
+          image: us-central1-docker.pkg.dev/google-samples/microservices-demo/adservice:v0.10.5
+          ports:
+            - containerPort: 9555
+          env:
+            - name: PORT
+              value: "9555"
+          resources:
+            requests:
+              cpu: 200m
+              memory: 180Mi
+            limits:
+              cpu: 300m
+              memory: 300Mi
+          readinessProbe:
+            initialDelaySeconds: 20
+            periodSeconds: 15
+            grpc:
+              port: 9555
+          livenessProbe:
+            initialDelaySeconds: 20
+            periodSeconds: 15
+            grpc:
+              port: 9555
+          securityContext:
+            allowPrivilegeEscalation: false
+            capabilities:
+              drop:
+                - ALL
+            privileged: false
+            readOnlyRootFilesystem: true
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: adservice
+  namespace: adservice
+spec:
+  type: ClusterIP
+  
selector: + app: adservice + ports: + - name: grpc + port: 9555 + targetPort: 9555 +--- +# -------------------------------------------------------------------------- +# cartservice +# -------------------------------------------------------------------------- +apiVersion: v1 +kind: Namespace +metadata: + name: cartservice +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: cartservice + namespace: cartservice +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: cartservice + namespace: cartservice +spec: + selector: + matchLabels: + app: cartservice + template: + metadata: + labels: + app: cartservice + spec: + serviceAccountName: cartservice + terminationGracePeriodSeconds: 5 + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsNonRoot: true + runAsUser: 1000 + containers: + - name: server + image: us-central1-docker.pkg.dev/google-samples/microservices-demo/cartservice:v0.10.5 + ports: + - containerPort: 7070 + env: + - name: REDIS_ADDR + value: "redis-cart.redis-cart:6379" + resources: + requests: + cpu: 200m + memory: 64Mi + limits: + cpu: 300m + memory: 128Mi + readinessProbe: + initialDelaySeconds: 15 + grpc: + port: 7070 + livenessProbe: + initialDelaySeconds: 15 + periodSeconds: 10 + grpc: + port: 7070 + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true +--- +apiVersion: v1 +kind: Service +metadata: + name: cartservice + namespace: cartservice +spec: + type: ClusterIP + selector: + app: cartservice + ports: + - name: grpc + port: 7070 + targetPort: 7070 +--- +# -------------------------------------------------------------------------- +# checkoutservice +# -------------------------------------------------------------------------- +apiVersion: v1 +kind: Namespace +metadata: + name: checkoutservice +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: checkoutservice + namespace: checkoutservice +--- +apiVersion: apps/v1 +kind: Deployment 
+metadata: + name: checkoutservice + namespace: checkoutservice +spec: + selector: + matchLabels: + app: checkoutservice + template: + metadata: + labels: + app: checkoutservice + spec: + serviceAccountName: checkoutservice + terminationGracePeriodSeconds: 5 + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsNonRoot: true + runAsUser: 1000 + containers: + - name: server + image: us-central1-docker.pkg.dev/google-samples/microservices-demo/checkoutservice:v0.10.5 + ports: + - containerPort: 5050 + env: + - name: PORT + value: "5050" + - name: PRODUCT_CATALOG_SERVICE_ADDR + value: "productcatalogservice.productcatalogservice:3550" + - name: SHIPPING_SERVICE_ADDR + value: "shippingservice.shippingservice:50051" + - name: PAYMENT_SERVICE_ADDR + value: "paymentservice.paymentservice:50051" + - name: EMAIL_SERVICE_ADDR + value: "emailservice.emailservice:5000" + - name: CURRENCY_SERVICE_ADDR + value: "currencyservice.currencyservice:7000" + - name: CART_SERVICE_ADDR + value: "cartservice.cartservice:7070" + resources: + requests: + cpu: 100m + memory: 64Mi + limits: + cpu: 200m + memory: 128Mi + readinessProbe: + grpc: + port: 5050 + livenessProbe: + grpc: + port: 5050 + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true +--- +apiVersion: v1 +kind: Service +metadata: + name: checkoutservice + namespace: checkoutservice +spec: + type: ClusterIP + selector: + app: checkoutservice + ports: + - name: grpc + port: 5050 + targetPort: 5050 +--- +# -------------------------------------------------------------------------- +# currencyservice +# -------------------------------------------------------------------------- +apiVersion: v1 +kind: Namespace +metadata: + name: currencyservice +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: currencyservice + namespace: currencyservice +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: currencyservice + namespace: 
currencyservice +spec: + selector: + matchLabels: + app: currencyservice + template: + metadata: + labels: + app: currencyservice + spec: + serviceAccountName: currencyservice + terminationGracePeriodSeconds: 5 + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsNonRoot: true + runAsUser: 1000 + containers: + - name: server + image: us-central1-docker.pkg.dev/google-samples/microservices-demo/currencyservice:v0.10.5 + ports: + - containerPort: 7000 + name: grpc + env: + - name: PORT + value: "7000" + - name: DISABLE_PROFILER + value: "1" + resources: + requests: + cpu: 100m + memory: 64Mi + limits: + cpu: 200m + memory: 128Mi + readinessProbe: + grpc: + port: 7000 + livenessProbe: + grpc: + port: 7000 + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true +--- +apiVersion: v1 +kind: Service +metadata: + name: currencyservice + namespace: currencyservice +spec: + type: ClusterIP + selector: + app: currencyservice + ports: + - name: grpc + port: 7000 + targetPort: 7000 +--- +# -------------------------------------------------------------------------- +# emailservice +# -------------------------------------------------------------------------- +apiVersion: v1 +kind: Namespace +metadata: + name: emailservice +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: emailservice + namespace: emailservice +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: emailservice + namespace: emailservice +spec: + selector: + matchLabels: + app: emailservice + template: + metadata: + labels: + app: emailservice + spec: + serviceAccountName: emailservice + terminationGracePeriodSeconds: 5 + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsNonRoot: true + runAsUser: 1000 + containers: + - name: server + image: us-central1-docker.pkg.dev/google-samples/microservices-demo/emailservice:v0.10.5 + ports: + - containerPort: 8080 + env: + - name: PORT + value: "8080" + - name: 
DISABLE_PROFILER + value: "1" + resources: + requests: + cpu: 100m + memory: 64Mi + limits: + cpu: 200m + memory: 128Mi + readinessProbe: + periodSeconds: 5 + grpc: + port: 8080 + livenessProbe: + periodSeconds: 5 + grpc: + port: 8080 + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true +--- +apiVersion: v1 +kind: Service +metadata: + name: emailservice + namespace: emailservice +spec: + type: ClusterIP + selector: + app: emailservice + ports: + - name: grpc + port: 5000 + targetPort: 8080 +--- +# -------------------------------------------------------------------------- +# frontend +# -------------------------------------------------------------------------- +apiVersion: v1 +kind: Namespace +metadata: + name: frontend +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: frontend + namespace: frontend +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend + namespace: frontend +spec: + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + annotations: + sidecar.istio.io/rewriteAppHTTPProbers: "true" + spec: + serviceAccountName: frontend + terminationGracePeriodSeconds: 5 + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsNonRoot: true + runAsUser: 1000 + containers: + - name: server + image: us-central1-docker.pkg.dev/google-samples/microservices-demo/frontend:v0.10.5 + ports: + - containerPort: 8080 + env: + - name: PORT + value: "8080" + - name: PRODUCT_CATALOG_SERVICE_ADDR + value: "productcatalogservice.productcatalogservice:3550" + - name: CURRENCY_SERVICE_ADDR + value: "currencyservice.currencyservice:7000" + - name: CART_SERVICE_ADDR + value: "cartservice.cartservice:7070" + - name: RECOMMENDATION_SERVICE_ADDR + value: "recommendationservice.recommendationservice:8080" + - name: SHIPPING_SERVICE_ADDR + value: "shippingservice.shippingservice:50051" + - name: CHECKOUT_SERVICE_ADDR + value: 
"checkoutservice.checkoutservice:5050" + - name: AD_SERVICE_ADDR + value: "adservice.adservice:9555" + - name: SHOPPING_ASSISTANT_SERVICE_ADDR + value: "disabled" + - name: ENABLE_PROFILER + value: "0" + resources: + requests: + cpu: 100m + memory: 64Mi + limits: + cpu: 200m + memory: 128Mi + readinessProbe: + initialDelaySeconds: 10 + httpGet: + path: /_healthz + port: 8080 + httpHeaders: + - name: Cookie + value: shop_session-id=x-readiness-probe + livenessProbe: + initialDelaySeconds: 10 + httpGet: + path: /_healthz + port: 8080 + httpHeaders: + - name: Cookie + value: shop_session-id=x-liveness-probe + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true +--- +apiVersion: v1 +kind: Service +metadata: + name: frontend + namespace: frontend +spec: + type: ClusterIP + selector: + app: frontend + ports: + - name: http + port: 80 + targetPort: 8080 +--- +# -------------------------------------------------------------------------- +# loadgenerator +# -------------------------------------------------------------------------- +apiVersion: v1 +kind: Namespace +metadata: + name: loadgenerator +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: loadgenerator + namespace: loadgenerator +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: loadgenerator + namespace: loadgenerator +spec: + selector: + matchLabels: + app: loadgenerator + replicas: 1 + template: + metadata: + labels: + app: loadgenerator + annotations: + sidecar.istio.io/rewriteAppHTTPProbers: "true" + spec: + serviceAccountName: loadgenerator + terminationGracePeriodSeconds: 5 + restartPolicy: Always + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsNonRoot: true + runAsUser: 1000 + initContainers: + - name: frontend-check + image: busybox:latest + command: + - /bin/sh + - -exc + - | + MAX_RETRIES=12 + RETRY_INTERVAL=10 + for i in $(seq 1 $MAX_RETRIES); do + echo "Attempt $i: Pinging frontend: 
${FRONTEND_ADDR}..." + STATUSCODE=$(wget --server-response http://${FRONTEND_ADDR} 2>&1 | awk '/^ HTTP/{print $2}') + if [ $STATUSCODE -eq 200 ]; then + echo "Frontend is reachable." + exit 0 + fi + echo "Error: Could not reach frontend - Status code: ${STATUSCODE}" + sleep $RETRY_INTERVAL + done + echo "Failed to reach frontend after $MAX_RETRIES attempts." + exit 1 + env: + - name: FRONTEND_ADDR + value: "frontend.frontend:80" + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + containers: + - name: main + image: us-central1-docker.pkg.dev/google-samples/microservices-demo/loadgenerator:v0.10.5 + env: + - name: FRONTEND_ADDR + value: "frontend.frontend:80" + - name: USERS + value: "10" + - name: RATE + value: "1" + resources: + requests: + cpu: 300m + memory: 256Mi + limits: + cpu: 500m + memory: 512Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true +--- +# -------------------------------------------------------------------------- +# paymentservice +# -------------------------------------------------------------------------- +apiVersion: v1 +kind: Namespace +metadata: + name: paymentservice +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: paymentservice + namespace: paymentservice +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: paymentservice + namespace: paymentservice +spec: + selector: + matchLabels: + app: paymentservice + template: + metadata: + labels: + app: paymentservice + spec: + serviceAccountName: paymentservice + terminationGracePeriodSeconds: 5 + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsNonRoot: true + runAsUser: 1000 + containers: + - name: server + image: us-central1-docker.pkg.dev/google-samples/microservices-demo/paymentservice:v0.10.5 + ports: + - containerPort: 50051 + env: + - name: PORT + value: "50051" + - name: 
DISABLE_PROFILER + value: "1" + resources: + requests: + cpu: 100m + memory: 64Mi + limits: + cpu: 200m + memory: 128Mi + readinessProbe: + grpc: + port: 50051 + livenessProbe: + grpc: + port: 50051 + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true +--- +apiVersion: v1 +kind: Service +metadata: + name: paymentservice + namespace: paymentservice +spec: + type: ClusterIP + selector: + app: paymentservice + ports: + - name: grpc + port: 50051 + targetPort: 50051 +--- +# -------------------------------------------------------------------------- +# productcatalogservice +# -------------------------------------------------------------------------- +apiVersion: v1 +kind: Namespace +metadata: + name: productcatalogservice +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: productcatalogservice + namespace: productcatalogservice +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: productcatalogservice + namespace: productcatalogservice +spec: + selector: + matchLabels: + app: productcatalogservice + template: + metadata: + labels: + app: productcatalogservice + spec: + serviceAccountName: productcatalogservice + terminationGracePeriodSeconds: 5 + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsNonRoot: true + runAsUser: 1000 + containers: + - name: server + image: us-central1-docker.pkg.dev/google-samples/microservices-demo/productcatalogservice:v0.10.5 + ports: + - containerPort: 3550 + env: + - name: PORT + value: "3550" + - name: DISABLE_PROFILER + value: "1" + resources: + requests: + cpu: 100m + memory: 64Mi + limits: + cpu: 200m + memory: 128Mi + readinessProbe: + grpc: + port: 3550 + livenessProbe: + grpc: + port: 3550 + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true +--- +apiVersion: v1 +kind: Service +metadata: + name: productcatalogservice + namespace: 
productcatalogservice +spec: + type: ClusterIP + selector: + app: productcatalogservice + ports: + - name: grpc + port: 3550 + targetPort: 3550 +--- +# -------------------------------------------------------------------------- +# recommendationservice +# -------------------------------------------------------------------------- +apiVersion: v1 +kind: Namespace +metadata: + name: recommendationservice +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: recommendationservice + namespace: recommendationservice +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: recommendationservice + namespace: recommendationservice +spec: + selector: + matchLabels: + app: recommendationservice + template: + metadata: + labels: + app: recommendationservice + spec: + serviceAccountName: recommendationservice + terminationGracePeriodSeconds: 5 + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsNonRoot: true + runAsUser: 1000 + containers: + - name: server + image: us-central1-docker.pkg.dev/google-samples/microservices-demo/recommendationservice:v0.10.5 + ports: + - containerPort: 8080 + env: + - name: PORT + value: "8080" + - name: PRODUCT_CATALOG_SERVICE_ADDR + value: "productcatalogservice.productcatalogservice:3550" + - name: DISABLE_PROFILER + value: "1" + resources: + requests: + cpu: 100m + memory: 220Mi + limits: + cpu: 200m + memory: 450Mi + readinessProbe: + periodSeconds: 5 + grpc: + port: 8080 + livenessProbe: + periodSeconds: 5 + grpc: + port: 8080 + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true +--- +apiVersion: v1 +kind: Service +metadata: + name: recommendationservice + namespace: recommendationservice +spec: + type: ClusterIP + selector: + app: recommendationservice + ports: + - name: grpc + port: 8080 + targetPort: 8080 +--- +# -------------------------------------------------------------------------- +# redis-cart +# 
-------------------------------------------------------------------------- +apiVersion: v1 +kind: Namespace +metadata: + name: redis-cart +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis-cart + namespace: redis-cart +spec: + selector: + matchLabels: + app: redis-cart + template: + metadata: + labels: + app: redis-cart + spec: + terminationGracePeriodSeconds: 5 + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsNonRoot: true + runAsUser: 1000 + containers: + - name: redis + image: redis:alpine + ports: + - containerPort: 6379 + readinessProbe: + periodSeconds: 5 + tcpSocket: + port: 6379 + livenessProbe: + periodSeconds: 5 + tcpSocket: + port: 6379 + volumeMounts: + - name: redis-data + mountPath: /data + resources: + requests: + cpu: 70m + memory: 200Mi + limits: + cpu: 125m + memory: 300Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + volumes: + - name: redis-data + emptyDir: {} +--- +apiVersion: v1 +kind: Service +metadata: + name: redis-cart + namespace: redis-cart +spec: + type: ClusterIP + selector: + app: redis-cart + ports: + - name: tcp-redis + port: 6379 + targetPort: 6379 +--- +# -------------------------------------------------------------------------- +# shippingservice +# -------------------------------------------------------------------------- +apiVersion: v1 +kind: Namespace +metadata: + name: shippingservice +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: shippingservice + namespace: shippingservice +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: shippingservice + namespace: shippingservice +spec: + selector: + matchLabels: + app: shippingservice + template: + metadata: + labels: + app: shippingservice + spec: + serviceAccountName: shippingservice + terminationGracePeriodSeconds: 5 + securityContext: + fsGroup: 1000 + runAsGroup: 1000 + runAsNonRoot: true + runAsUser: 1000 + containers: + - name: 
server + image: us-central1-docker.pkg.dev/google-samples/microservices-demo/shippingservice:v0.10.5 + ports: + - containerPort: 50051 + env: + - name: PORT + value: "50051" + - name: DISABLE_PROFILER + value: "1" + resources: + requests: + cpu: 100m + memory: 64Mi + limits: + cpu: 200m + memory: 128Mi + readinessProbe: + periodSeconds: 5 + grpc: + port: 50051 + livenessProbe: + grpc: + port: 50051 + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true +--- +apiVersion: v1 +kind: Service +metadata: + name: shippingservice + namespace: shippingservice +spec: + type: ClusterIP + selector: + app: shippingservice + ports: + - name: grpc + port: 50051 + targetPort: 50051 diff --git a/static/img/calico-cloud/cc-free-quickstart-service-graph.png b/static/img/calico-cloud/cc-free-quickstart-service-graph.png new file mode 100644 index 0000000000..0dad0fcf61 Binary files /dev/null and b/static/img/calico-cloud/cc-free-quickstart-service-graph.png differ
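A note on the cross-service addresses in the manifest above (for example `REDIS_ADDR: "redis-cart.redis-cart:6379"`): inside a pod, Kubernetes DNS expands such two-label `service.namespace` names through the resolver search path. A small sketch of the expansion, assuming the default `cluster.local` cluster domain:

```shell
# Split a service address of the form service.namespace:port and print the
# fully qualified DNS name it resolves to inside the cluster. The cluster
# domain is assumed to be the Kubernetes default, cluster.local.
addr='redis-cart.redis-cart:6379'   # REDIS_ADDR from the cartservice Deployment
host=${addr%:*}                     # redis-cart.redis-cart (service.namespace)
port=${addr##*:}                    # 6379
echo "${host}.svc.cluster.local:${port}"
# → redis-cart.redis-cart.svc.cluster.local:6379
```

This is why `cartservice` can reach `redis-cart` across namespace boundaries (and `checkoutservice` can reach `cartservice.cartservice:7070`) with no extra configuration, which in turn is what makes the Service Graph topology cross-namespace.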