AWS Terraform Integration with Kloudfuse
Build the required infrastructure with Terraform to deploy Kloudfuse in AWS.
See the Terraform documentation for Get Started — AWS for further details.
Pre-Requisites
Create these resources to configure S3 as a Terraform backend:
- S3 bucket: Create an S3 bucket with public access turned off, and the remaining defaults enabled.
- DynamoDB table: Create a DynamoDB table with partition key LockID. For details, see Terraform Configuration.
- Install Terraform: Install the latest Terraform release, and add the terraform binary to your executable PATH.
- Terraform user: Create an AWS user, or use a role, with sufficient permissions to access the S3 bucket and to provision resources on AWS. Add the profile to your AWS credentials file, and export the AWS profile before running Terraform.
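The backend resources above can be created with the AWS CLI; a minimal sketch, assuming placeholder bucket and table names (substitute your own) and the us-west-2 region used elsewhere in this guide:

```shell
# Placeholder names; substitute your own bucket and table names.
aws s3api create-bucket --bucket my-tf-state-bucket --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2

# Turn off all public access on the state bucket.
aws s3api put-public-access-block --bucket my-tf-state-bucket \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# State-locking table: Terraform requires a partition key named LockID (type S).
aws dynamodb create-table --table-name my-tf-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST --region us-west-2
```

These commands require AWS credentials with S3 and DynamoDB permissions; run them once per account, before any terraform init.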
Terraform Configuration
You must change the backend.tf file in each component folder to match the S3 bucket and DynamoDB table that you created. Note that the state key is unique for each component; for a VPC, it can be vpc.tfstate.
terraform {
  backend "s3" {
    bucket         = "bucket-name"
    region         = "us-west-2"
    key            = "<resource>.tfstate"
    dynamodb_table = "dynamodb-table-name"
  }
}
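With backend.tf in place, a typical per-component workflow looks like the following sketch; the profile name and component folder are placeholders:

```shell
# Use the AWS profile created for Terraform (profile name is a placeholder).
export AWS_PROFILE=terraform

# Run inside a component folder, for example the VPC component.
terraform init          # configures the S3 backend and downloads providers
terraform plan -out=tfplan
terraform apply tfplan
```

terraform init must succeed before plan or apply; it is also the step that validates the S3 backend and DynamoDB lock table configuration.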
Terraform Variables
You must modify the variables in variables.tf for every deployed AWS component.
- vpc_cidr: The IPv4 CIDR block for the VPC. The CIDR can be set explicitly, or derived from IPAM using ipv4_netmask_length and ipv4_ipam_pool_id.
  Example: "172.159.0.0/16"
- region: Region where the resource is created.
  Example: "us-west-2"
- azs: List of availability zone names or IDs in the region.
  Example: ["us-west-2a", "us-west-2b", "us-west-2c"]
- public_subnets: A list of public subnet CIDR blocks in the region.
  Example: ["172.159.0.0/19", "172.159.32.0/19", "172.159.64.0/19"]
- domain_name: Domain name for the ACM certificate.
  Example: "terraform-dev.kloudfuse.io"
- validation_method: Validation method for the ACM certificate.
  Example: "DNS"
- cluster_name: EKS cluster name.
  Example: "kfuse-devops-eks"
- bucket_deepstore: S3 bucket for the deepstore.
  Example: "kfuse-test-deepstore-s3"
- principal_arn_console: The ARN of the IAM principal (user or role) that requires access to the EKS cluster console to view the nodes attached to the cluster.
  Example: "arn:aws:iam::783739827:user/terraform"
- principal_arn_cli: The ARN of the IAM principal (user or role) that requires access to the EKS cluster resources through the CLI.
  Example: "arn:aws:iam::783739827:user/terraform"
- ami_type: AMI type used for the EKS nodes.
  Example: "AL2_x86_64"
- instance_types: Instance types for the EKS cluster nodes.
  Example: ["r6i.8xlarge"]
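Taken together, the example values above correspond to a variables file like the following sketch; every value is copied from the examples, so adjust each one for your environment:

```hcl
vpc_cidr              = "172.159.0.0/16"
region                = "us-west-2"
azs                   = ["us-west-2a", "us-west-2b", "us-west-2c"]
public_subnets        = ["172.159.0.0/19", "172.159.32.0/19", "172.159.64.0/19"]
domain_name           = "terraform-dev.kloudfuse.io"
validation_method     = "DNS"
cluster_name          = "kfuse-devops-eks"
bucket_deepstore      = "kfuse-test-deepstore-s3"
principal_arn_console = "arn:aws:iam::783739827:user/terraform"
principal_arn_cli     = "arn:aws:iam::783739827:user/terraform"
ami_type              = "AL2_x86_64"
instance_types        = ["r6i.8xlarge"]
```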
Configure Infrastructure for Kloudfuse
Choose the method for using Terraform to configure the infrastructure:
Configuration for AWS Accounts without Infrastructure
For an AWS account without provisioned infrastructure, complete these steps:
- Provision VPC: Create the VPC using the Terraform files for Kloudfuse, from the kloudfuse/terraform/networking/ directory. Be sure to modify the variables in the variables.tf file to reflect the instance requirements for VPC CIDR, public subnet CIDR, VPC name, and so on.
- Provision S3: Ensure that the Kloudfuse datastore Pinot has access to an S3 bucket for long-term storage. See the Terraform resources for creating the appropriate S3 bucket in the terraform/s3/ directory.
- Provision EKS: After creating and provisioning the VPC, use the Terraform resources in the terraform/eks/ directory to provision the EKS cluster. Use the terraform_remote_state block in the main.tf file to retrieve the public subnet IDs from the VPC S3 backend. You must modify the variables in the variables.tf file; refer to Terraform Variables. Be sure to add the proper ARNs for cluster users in the principal_* variables.
- Provision Route53: Create a public hosted zone for the ACM certificate. Use the Terraform files in the terraform/route53/ directory.
- Provision ACM: Create a certificate to enable SSL/TLS. Use the Terraform files in the terraform/acm/ directory.
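The EKS step reads the VPC outputs from the S3 backend; a minimal sketch of the terraform_remote_state data source, assuming the VPC state was written with key vpc.tfstate and that the VPC configuration exposes an output named public_subnets (the output name is an assumption):

```hcl
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "bucket-name"
    key    = "vpc.tfstate"
    region = "us-west-2"
  }
}

# Reference the public subnet IDs elsewhere in the EKS configuration,
# assuming the VPC state exports an output named "public_subnets"
# (hypothetical output name):
#   subnet_ids = data.terraform_remote_state.vpc.outputs.public_subnets
```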
Configuration for AWS Accounts with non-EKS VPC Infrastructure
For AWS accounts with a provisioned VPC that runs a non-EKS cluster, you must add EKS, S3, ACM, and Route53.
Note: Because the VPC was created without the Terraform files supplied by Kloudfuse, you must provide the public subnet IDs in the EKS variables.tf file yourself.
- Provision S3: Ensure that the Kloudfuse datastore Pinot has access to an S3 bucket for long-term storage. See the Terraform resources for creating the appropriate S3 bucket in the terraform/s3/ directory.
- Provision EKS: Use the Terraform resources in the terraform/eks/ directory to provision the EKS cluster. Use the terraform_remote_state block in the main.tf file to retrieve the public subnet IDs from the VPC S3 backend. You must modify the variables in the variables.tf file; refer to Terraform Variables. Be sure to add the proper ARNs for cluster users in the principal_* variables.
- Provision Route53: Create a public hosted zone for the ACM certificate. Use the Terraform files in the terraform/route53/ directory.
- Provision ACM: Create a certificate to enable SSL/TLS. Use the Terraform files in the terraform/acm/ directory.
Configuration for AWS Accounts with EKS Cluster
For AWS accounts with an existing EKS cluster (shared or dedicated), you only need to deploy Kloudfuse. These are the possible deployment options:
Terraform and Helm
- Copy the Kloudfuse token.json and your ~/.kube/config (as kubeconfig at the destination) files into the /prekfuse folder.
- Update the config in the kubeconfig for the EKS cluster where you plan to deploy Kloudfuse.
- Run the Kloudfuse prerequisite Terraform files to set up the cluster, using the files in the terraform/prekfuse/ directory. This also creates the namespace.
- Log in to the Kloudfuse Helm registry.
- Create a secret so the services can pull their Helm charts.
- Run the Helm upgrade command:
  helm upgrade --install -n kfuse kfuse oci://us-east1-docker.pkg.dev/mvp-demo-301906/kfuse-helm/kfuse --version 2.7.4 -f custom_values.yaml
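The registry login and secret steps above can be sketched as the following commands; the registry host is taken from the chart URL, and the secret command matches the one shown in the next section, but the login invocation itself is an assumption:

```shell
# Log in to the OCI Helm registry with the token.json service-account key
# (registry host assumed from the chart URL above).
helm registry login us-east1-docker.pkg.dev \
  --username _json_key --password-stdin < token.json

# Create the image-pull secret used by the Kloudfuse services.
kubectl create secret docker-registry kfuse-image-pull-credentials \
  --namespace='kfuse' --docker-server 'us.gcr.io' --docker-username _json_key \
  --docker-email 'container-registry@mvp-demo-301906.iam.gserviceaccount.com' \
  --docker-password=''"$(cat token.json)"''
```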
Kubectl Commands and Helm
- Log in to the Kloudfuse Helm registry using the token.json that Kloudfuse sent to your email address.
- Create a secret for Helm to pull the required Docker images:
  kubectl create ns kfuse
  kubectl config set-context --current --namespace=kfuse
  kubectl create secret docker-registry kfuse-image-pull-credentials \
    --namespace='kfuse' --docker-server 'us.gcr.io' --docker-username _json_key \
    --docker-email 'container-registry@mvp-demo-301906.iam.gserviceaccount.com' \
    --docker-password=''"$(cat token.json)"''
- Run the following command to install the Kloudfuse chart. You must use Helm version 3.10.0 or higher.
  helm upgrade \
    --install -n kfuse kfuse oci://us-east1-docker.pkg.dev/mvp-demo-301906/kfuse-helm/kfuse \
    --version <VERSION.NUM.BER> -f custom_values.yaml (1)
  1 Use the most current Kloudfuse version number.