
CLI Overview

The ThinkWork CLI (thinkwork-cli) is the primary way to deploy and manage ThinkWork. It wraps Terraform with a developer-friendly interface, manages named environment profiles, and provides health-check and diagnostic commands.

Install it globally with npm:

npm install -g thinkwork-cli

Or run one-off without installing globally:

npx thinkwork-cli --help

Verify the installation:

thinkwork --version
# 1.0.0
thinkwork --help
# Usage: thinkwork <command> [options]
# ...
The CLI has the following prerequisites:

  • Node.js 20+
  • Terraform 1.6+ (must be on your $PATH)
  • AWS credentials (via ~/.aws/credentials, environment variables, or EC2/ECS instance role)

The CLI checks for Terraform on startup and prints a clear error if it’s missing:

✗ Terraform not found. Install from https://developer.hashicorp.com/terraform/install

The CLI is a Commander.js application that:

  1. Manages environment profiles — Named environments stored in ~/.thinkwork/environments/ as JSON files. Each profile holds the AWS region, profile name, and a pointer to the deployment directory.

  2. Wraps Terraform — deploy, plan, destroy, and outputs shell out to terraform with the correct working directory, variable files, and backend configuration.

  3. Validates deployments — doctor runs a suite of health checks against the deployed infrastructure (Cognito reachable, Lambda responding, Aurora available, etc.).

  4. Bootstraps state storage — bootstrap creates the S3 bucket and DynamoDB table used for Terraform remote state before the first deploy.

An environment profile is a named configuration that maps to a ThinkWork deployment. Profiles are stored at ~/.thinkwork/environments/<name>.json.

{
  "name": "dev",
  "region": "us-east-1",
  "awsProfile": "default",
  "deploymentDir": "/Users/alice/thinkwork-deploy",
  "createdAt": "2024-04-01T10:00:00Z"
}
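To make the profile layout concrete, here is a minimal sketch of how a tool could read one of these files. The load_profile function is illustrative only, not the CLI's actual internals; it simply resolves <name>.json under the profiles directory and parses it.

```python
import json
from pathlib import Path

# Default location matches the documented path: ~/.thinkwork/environments/
PROFILES_DIR = Path.home() / ".thinkwork" / "environments"

def load_profile(name: str, root: Path = PROFILES_DIR) -> dict:
    """Load a named environment profile from <root>/<name>.json."""
    path = root / f"{name}.json"
    if not path.exists():
        raise FileNotFoundError(
            f"No profile '{name}' at {path}. Run 'thinkwork login' to create one."
        )
    return json.loads(path.read_text())
```

Passing an explicit root makes the helper easy to test without touching the real home directory.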

You can have multiple profiles pointing at different AWS accounts, regions, or directories:

thinkwork login # Creates "dev" profile → us-east-1, account 111111111111
thinkwork login # Creates "prod" profile → us-east-1, account 222222222222
thinkwork login # Creates "eu-staging" profile → eu-west-1, account 111111111111

Then use -s <name> (or --stage <name>) to select the profile for each command:

thinkwork deploy -s dev
thinkwork doctor -s prod
thinkwork outputs -s eu-staging

The CLI uses two related concepts:

  • Stage — The name of your deployment (e.g. dev, prod). This becomes the prefix for all AWS resource names (dev-thinkwork-agentcore, prod-thinkwork-aurora). Set in terraform.tfvars as stage.

  • Environment — The CLI profile name. By convention, the environment name matches the stage name, but they’re independent. You can have an environment named alice-dev pointing at a stage named dev.

Use --stage (or -s) in all CLI commands to select the environment profile.
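The stage-based naming convention described above can be sketched as a one-line helper. This is an illustration of the documented pattern ({stage}-thinkwork-{component}), not code from the CLI itself:

```python
def resource_name(stage: str, component: str) -> str:
    """Build an AWS resource name prefixed by the stage,
    e.g. dev-thinkwork-agentcore or prod-thinkwork-aurora."""
    return f"{stage}-thinkwork-{component}"
```

Because every resource carries the stage prefix, two stages can coexist in one AWS account without name collisions.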

The CLI looks for configuration in this order:

  1. ~/.thinkwork/environments/<name>.json — Profile file (created by thinkwork login)
  2. terraform.tfvars — Terraform variables (in the deployment directory)
  3. .thinkwork/stage.json — Local metadata in the deployment directory
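The lookup order above can be modeled as a first-match-wins search, assuming earlier sources take precedence over later ones (the function name and structure here are hypothetical, for illustration only):

```python
from typing import Any, Optional

def resolve(key: str, sources: list[dict]) -> Optional[Any]:
    """Return the first value found for key, searching sources in
    priority order: profile file, then terraform.tfvars, then
    .thinkwork/stage.json. Returns None if no source defines it."""
    for source in sources:
        if key in source:
            return source[key]
    return None
```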

You can view the resolved configuration for a stage:

thinkwork config list -s dev

ThinkWork uses S3 + DynamoDB for Terraform remote state. The bootstrap command creates these resources before the first deploy:

thinkwork bootstrap -s dev

This creates:

  • S3 bucket: thinkwork-{stage}-tfstate-{account_id} (versioning enabled, encryption at rest)
  • DynamoDB table: thinkwork-{stage}-tfstate-lock (PAY_PER_REQUEST billing)

The terraform.tfvars file is updated with the backend configuration after bootstrap runs.
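The state-resource naming scheme can be captured in a small helper; this sketch only computes the documented names (the CLI would then create the actual bucket and table via AWS APIs):

```python
def state_resource_names(stage: str, account_id: str) -> dict:
    """Compute the Terraform remote-state resource names:
    an S3 bucket for state and a DynamoDB table for state locking."""
    return {
        "bucket": f"thinkwork-{stage}-tfstate-{account_id}",
        "lock_table": f"thinkwork-{stage}-tfstate-lock",
    }
```

Embedding the account ID in the bucket name keeps it globally unique, since S3 bucket names share a global namespace.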

For CI/CD pipelines, use environment variables instead of the interactive login command:

# Set credentials via environment (GitHub Actions secret syntax shown)
export AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }}
export AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }}
export AWS_REGION=us-east-1
# Non-interactive deploy (skips confirmation prompt)
thinkwork deploy -s prod --auto-approve

Or use an IAM role with aws sts assume-role before running deploy.

See Commands Reference for the full syntax and options for every command.