Terraform is designed around the idea of pluggable providers (written in Go) to facilitate the use of the tool across many platforms and systems. A provider is the primary dependency of your project; it creates and manages resources, and you can use more than one provider at a time. You want to make sure you are using the same provider version the code has been tested or run with previously. Once you have executed a Terraform project, it records the provider version in your terraform.tfstate file to make this easier. We’ll explain how remote state helps you stay in sync with your teammates later in this post.
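You can also pin provider versions directly in configuration. A minimal sketch, using the 0.12-era string-constraint syntax (the version numbers here are illustrative, not a recommendation):

```hcl
terraform {
  required_version = ">= 0.12"

  required_providers {
    # pin to whatever your team has actually tested with
    aws = ">= 2.7.0"
  }
}
```

With this in place, terraform init will refuse to run against a provider version outside the constraint.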
When you execute terraform plan or terraform apply, Terraform builds a dependency graph from all files matching *.tf in your current working directory. Keep in mind, only files in your current directory are ingested; there is no recursion into sub-directories. You can use this to group your resource definitions logically, which makes development and troubleshooting easier. For those more familiar with CloudFormation, this is similar to generating a composite template from multiple files before running. It differs from CloudFormation nested stacks in that changes are applied to the whole set, not to sub-templates. If you need further control, use modules, or split resources across multiple projects.
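When the flat-directory model gets unwieldy, a module gives a group of resources its own variables and outputs. A hypothetical example (the path, variable names, and values are made up for illustration):

```hcl
# ./modules/network is a hypothetical local module directory
module "network" {
  source = "./modules/network"

  # values passed into variables declared inside the module
  cidr_block = "10.0.0.0/16"
  azs        = ["us-east-1a", "us-east-1b"]
}
```

Elsewhere in the project you would then reference its outputs, e.g. module.network.private_subnet_ids.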
If you run into a situation where the graph resolves execution order incorrectly, there is a depends_on argument to force ordering. While this improved in 0.12, there may still be edge cases where depends_on has difficulties when referenced between modules.
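A sketch of forcing ordering with depends_on (the resource names here are illustrative):

```hcl
resource "aws_iam_role_policy" "app" {
  # ... policy definition ...
}

resource "aws_instance" "app" {
  # ... instance definition ...

  # force the policy to be created before this instance, even though
  # no attribute of the policy is referenced by the instance
  depends_on = [aws_iam_role_policy.app]
}
```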
When you run $ terraform plan -out=./out.tfplan, you generate an execution plan. This can be used as the input to a subsequent $ terraform apply ./out.tfplan, ensuring Terraform will only execute if the environment is still in sync with what was observed during the last plan. This is important when you have many contributors to a project, or when you want to leverage a Terraform CI/CD pipeline and would like a convenient, safe way to review changes.
To make this work, Terraform needs a mechanism to know which resources in the target AWS account belong to your current project and which do not. Terraform records the list of resources and their attributes when you run $ terraform apply. It then compares your current code (the request) with the target account (the current state) and with its last known state (your terraform.tfstate file).
You should never commit secrets to your version control, and this includes your .tfstate and .tfstate.backup files. I use a tool called gibo to make this painless. It becomes as easy as $ gibo dump terraform >> .gitignore.
Alternatively, you can just add the following to your .gitignore manually.
# Local .terraform directories
**/.terraform/*

# .tfstate files
*.tfstate
*.tfstate.*
It might be tempting to pass your certs, tokens, keys, passwords, and other sensitive data through Terraform out of convenience. Avoid this, as data passed in can be captured in your .tfstate file. AWS offers services you can use to distribute secrets, certs, and the like to your resources. Look at AWS Systems Manager Parameter Store; you can assign roles to your resources so they can fetch these secrets easily and securely!
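For example, rather than passing the secret through Terraform (where it would land in state), you can grant the instance's role permission to read the parameter and fetch it at boot. The role reference, parameter name, and ARN below are all hypothetical:

```hcl
# hypothetical policy allowing an instance role to read one parameter;
# assumes aws_iam_role.app is defined elsewhere in the project
resource "aws_iam_role_policy" "read_db_password" {
  role = aws_iam_role.app.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ssm:GetParameter"]
      Resource = "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/prod/db_password"
    }]
  })
}
```

The application then runs something like aws ssm get-parameter --name /myapp/prod/db_password --with-decryption at startup, so the secret never passes through Terraform or its state file.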
shared — everyone ends up with a shared account; this is mine. I’ll make projects within it for the different tooling that is stood up here. Depending on how you handle domain names, you may put your root zone definition here, though I generally put that in my /apps/ prod workspace.
In more traditional environments I generally see “dev”, “stage”, and “prod” accounts (or something similar), with a shared VPC, with or without many subnet groups, that is home to application instances. I do that here with /apps/ by leveraging workspaces. I’ll have a multi-tenant infrastructure file defining networking resources (VPCs, subnets, route tables, etc.) along with any other shared components I need to make this account “ready” for application deployment. We will want to export a lot of values here (or write them to SSM Parameter Store); this will help us with application deployment later. The applications are then deployed “onto” this infrastructure, and they use output values from the multi-tenant.tf stack via a remote-state data object to ensure they are correctly provisioned.
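The remote-state lookup can be sketched like this; the bucket, key, and output names are assumptions about how the shared stack is stored and what it exports:

```hcl
# read outputs from the shared multi-tenant infrastructure stack
data "terraform_remote_state" "infra" {
  backend = "s3"

  config = {
    bucket = "my-tfstate-bucket"         # hypothetical state bucket
    key    = "apps/multi-tenant.tfstate" # hypothetical state key
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  # place the app onto a subnet exported by the shared stack
  subnet_id = data.terraform_remote_state.infra.outputs.private_subnet_id
  # ... remaining instance configuration ...
}
```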
$ terraform workspace new dev
Created and switched to workspace "dev"!
$ terraform workspace new stage
Created and switched to workspace "stage"!
$ terraform workspace new prod
Created and switched to workspace "prod"!
$ terraform workspace select dev
Switched to workspace "dev".
Many teams find it beneficial to wrap Terraform execution with scripts to save time and add safeguards — for example, enforcing $ terraform plan before $ terraform apply, or selecting the appropriate workspace. I find it helps team members less familiar with Terraform get started quickly, without having to learn when and why they need to call $ terraform init. Here's a simple example:
# creating a workspace that already exists prints an error but is harmless;
# suppress it so the script can be re-run safely
terraform workspace new dev 2>/dev/null || true
terraform workspace select dev
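A slightly fuller sketch, written as a shell function so it can be sourced from other scripts. The function name, the allowed workspace list, and the plan-file convention are all assumptions for illustration, not part of any standard tooling:

```shell
# tf_run <dev|stage|prod> <plan|apply> — hypothetical wrapper sketch
tf_run() {
  local workspace="$1" action="$2"

  # safeguard: only allow known workspaces
  case "$workspace" in
    dev|stage|prod) ;;
    *) echo "unknown workspace: $workspace" >&2; return 1 ;;
  esac

  terraform init -input=false
  # creating an existing workspace only errors harmlessly; ignore it
  terraform workspace new "$workspace" 2>/dev/null || true
  terraform workspace select "$workspace"

  case "$action" in
    plan)  terraform plan -out=out.tfplan ;;
    apply)
      # safeguard: refuse to apply without a previously reviewed plan
      [ -f out.tfplan ] || { echo "run plan first" >&2; return 1; }
      terraform apply out.tfplan
      ;;
    *) echo "unknown action: $action" >&2; return 1 ;;
  esac
}
```

Called as tf_run dev plan, this initializes, selects the workspace, and writes a plan file; tf_run dev apply then only runs against that reviewed plan.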