Deploy environment

Use this flow when bringing up or updating an environment from this repository.

Prerequisites

  • Terraform >= 1.14.7, < 1.15.0
  • AWS CLI configured for the target account
  • Docker or CI pipelines capable of pushing application images to ECR
  • Remote state already bootstrapped
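As a quick sanity check on the first prerequisite, the installed version can be compared against the required range with `sort -V`. A minimal sketch; the `installed` value below is a placeholder for whatever `terraform version` reports on your machine.

```shell
# Placeholder version; in practice read it from `terraform version -json`.
installed="1.14.7"
min="1.14.7"   # required lower bound (inclusive)
max="1.15.0"   # required upper bound (exclusive)

# sort -V -C exits 0 when its input is already in version order.
if printf '%s\n%s\n' "$min" "$installed" | sort -V -C \
   && [ "$installed" != "$max" ] \
   && printf '%s\n%s\n' "$installed" "$max" | sort -V -C; then
  echo "terraform version ok"
else
  echo "terraform version out of range" >&2
fi
```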

Step 0: bootstrap backend if needed

If the backend does not exist yet, run:

export AWS_REGION=us-east-1
bash scripts/bootstrap-state.sh

Step 1: create the application ECR repositories

Create the ECR repositories first so that the service code repositories can push images before the ECS services attempt to start:

cd terraform/staging
terraform apply \
  -target=module.events_ecr \
  -target=module.dashboard_ecr

Expected images:

  • Events ingestion API image in module.events_ecr
  • Dashboard backend image in module.dashboard_ecr
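For reference, pushing to one of these repositories follows the standard ECR login/tag/push flow. The account ID, region, repository name, and tag below are placeholders (the real repository URLs come from the module outputs), and the commands are echoed rather than executed so the sketch can be dry-run anywhere.

```shell
# Hypothetical values; substitute your account, region, and repository names.
ACCOUNT_ID="123456789012"
AWS_REGION="us-east-1"
REPO="events-ingestion-api"
TAG="latest"

REGISTRY="${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
IMAGE="${REGISTRY}/${REPO}:${TAG}"

# Standard ECR push sequence, printed as a dry run.
echo "aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${REGISTRY}"
echo "docker tag ${REPO}:${TAG} ${IMAGE}"
echo "docker push ${IMAGE}"
```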

Step 2: prepare environment inputs

cp terraform/staging/terraform.tfvars.example terraform/staging/terraform.tfvars
# edit terraform/staging/terraform.tfvars
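If you prefer scripted edits over opening the file by hand, placeholder values can be filled in with `sed`. The variable names below are hypothetical stand-ins for whatever `terraform.tfvars.example` actually contains, and the sketch works on a throwaway copy so it is safe to run anywhere.

```shell
# Work in a temporary directory instead of the real repo checkout.
tmpdir=$(mktemp -d)
cat > "${tmpdir}/terraform.tfvars.example" <<'EOF'
environment = "CHANGE_ME"
aws_region  = "CHANGE_ME"
EOF

cp "${tmpdir}/terraform.tfvars.example" "${tmpdir}/terraform.tfvars"
sed -i 's/environment = "CHANGE_ME"/environment = "staging"/' "${tmpdir}/terraform.tfvars"
sed -i 's/aws_region  = "CHANGE_ME"/aws_region  = "us-east-1"/' "${tmpdir}/terraform.tfvars"
cat "${tmpdir}/terraform.tfvars"
```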

Step 3: plan and apply

cd terraform/staging
terraform plan -out=tfplan
terraform apply tfplan

Step 4: complete post-apply tasks

After apply:

  • update the events ingestion secret with real Kafka values
  • update the dashboard backend secret with real Auth0 values and a real DATABASE_URL
  • confirm the service repositories have pushed deployable images to ECR
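Updating a secret from the CLI is typically done with `aws secretsmanager put-secret-value`. The secret name and payload keys below are illustrative, not this repository's actual identifiers; use the names Terraform created. The command is echoed as a dry run.

```shell
# Hypothetical secret name and payload; substitute the real values.
SECRET_ID="staging/events-ingestion"
PAYLOAD='{"KAFKA_BROKERS":"b-1.example:9092","KAFKA_USERNAME":"svc-events"}'

echo "aws secretsmanager put-secret-value --secret-id ${SECRET_ID} --secret-string '${PAYLOAD}'"
```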

Special case: optional MSK Connect S3 sink

If enable_msk_s3_sink = true, the plugin artifact must exist before the full apply succeeds.

Create the plugin bucket first:

cd terraform/staging
terraform apply -target=module.msk_connect_plugin_bucket
terraform output -raw msk_connect_plugin_bucket_name

Then upload the Confluent ZIP to the printed bucket and set msk_s3_sink_plugin_file_key before running the full plan.
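The upload itself is a single `aws s3 cp`. The bucket name, local file name, and object key below are placeholders (the real bucket name comes from the terraform output above); the command is printed as a dry run.

```shell
# Hypothetical bucket and key; take the bucket from the terraform output.
BUCKET="example-msk-connect-plugins"
KEY="plugins/confluentinc-kafka-connect-s3.zip"

echo "aws s3 cp ./confluentinc-kafka-connect-s3.zip s3://${BUCKET}/${KEY}"
echo "Then set msk_s3_sink_plugin_file_key = \"${KEY}\" in terraform.tfvars."
```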

Warning

When enable_msk_s3_sink = true and msk_s3_sink_plugin_file_key = "", Terraform validation fails before apply.
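The same constraint can be checked in the shell before invoking Terraform at all. A minimal sketch, assuming the two values are exported or parsed out of terraform.tfvars; the hard-coded values below exist only to demonstrate the failing case.

```shell
# Example values; in practice read them from terraform.tfvars.
enable_msk_s3_sink="true"
msk_s3_sink_plugin_file_key=""

if [ "$enable_msk_s3_sink" = "true" ] && [ -z "$msk_s3_sink_plugin_file_key" ]; then
  echo "error: enable_msk_s3_sink is true but msk_s3_sink_plugin_file_key is empty" >&2
  status=1
else
  status=0
fi
echo "preflight status: ${status}"
```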