Creates and manages EC2 Image Builder Container resources. Specifically, this pipeline builds an Amazon Linux 2 baseline container using Docker with RHEL 7 STIG Version 3 Release 7 hardening applied, along with a few other configurations. See `recipes.tf` for more details.
These Terraform modules build an [EC2 Image Builder Pipeline](https://docs.aws.amazon.com/imagebuilder/latest/userguide/start-build-image-pipeline.html) with an [Amazon Linux 2](https://aws.amazon.com/amazon-linux-2/) Baseline Container Recipe, which is used to deploy a [Docker](https://docs.docker.com/) based Amazon Linux 2 Container Image that has been hardened according to RHEL 7 STIG Version 3 Release 7 - Medium. See the "[STIG-Build-Linux-Medium version 2022.2.1](https://docs.aws.amazon.com/imagebuilder/latest/userguide/toe-stig.html#linux-os-stig)" section in Linux STIG Components for details. This is commonly referred to as a "Golden" container image.
The build includes two [Cloudwatch Event Rules](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Rule.html). One triggers the start of the Container Image pipeline based on an [Inspector Finding](https://docs.aws.amazon.com/inspector/latest/user/findings-managing.html) of "High" or "Critical" so that insecure images are replaced, provided that Inspector and [Amazon Elastic Container Registry](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) ["Enhanced Scanning"](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-enhanced.html) are both enabled. The other sends notifications to an SQS Queue after a successful Container Image push to the ECR Repository, to better enable consumption of new container images.
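The Inspector-triggered rule can be sketched in Terraform. This is an illustrative fragment, not the module's actual `trigger-build.tf`; the resource and rule names here are assumptions, while the event pattern fields follow the documented shape of Inspector2 finding events on EventBridge:

```hcl
# Illustrative sketch only; resource and rule names are hypothetical.
# The event pattern matches Amazon Inspector findings of High or
# Critical severity, as delivered to EventBridge by Inspector2.
resource "aws_cloudwatch_event_rule" "inspector_finding" {
  name        = "inspector-high-critical-finding" # illustrative name
  description = "Start the container pipeline on a High/Critical Inspector finding"

  event_pattern = jsonencode({
    source      = ["aws.inspector2"]
    detail-type = ["Inspector2 Finding"]
    detail = {
      severity = ["HIGH", "CRITICAL"]
    }
  })
}
```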
## Prerequisites
* Terraform v0.15+. [Download](https://www.terraform.io/downloads.html) and set up Terraform. Refer to the official Terraform [instructions](https://learn.hashicorp.com/collections/terraform/aws-get-started) to get started.
* [AWS CLI installed](https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html) for setting your AWS credentials for local deployment.
* [An AWS account](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/) to deploy the infrastructure within.
* [Git](https://git-scm.com/) (if provisioning from a local machine).
* A [role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) within the AWS account that you can create AWS resources with.
* Ensure the [.tfvars](https://developer.hashicorp.com/terraform/tutorials/configuration-language/variables) file has all variables defined, or define all variables at `terraform apply` time.
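A quick way to confirm the tooling above is present locally; this simple sketch only reports what is on `PATH` and installs nothing:

```shell
# Report whether each prerequisite tool is available on PATH.
for tool in terraform aws git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```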
## Target technology stack
* Two [S3 Buckets](https://aws.amazon.com/s3/), one for the Pipeline [Component](https://docs.aws.amazon.com/imagebuilder/latest/userguide/create-component-console.html) files and one for Server Access and VPC Flow logs
* An [ECR](https://aws.amazon.com/ecr/) Repository
* A [VPC](https://aws.amazon.com/vpc/), a public and a private [subnet](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html), [route tables](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html), a [NAT Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html), and an [Internet Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html)
* An EC2 Image Builder Pipeline, Recipe, and Components
* A Container Image
* A [KMS Key](https://aws.amazon.com/kms/) for Image Encryption
* An SQS Queue
* Four roles: one for the EC2 Image Builder Pipeline to execute as, one instance profile for EC2 Image Builder, one for EventBridge Rules, and one for VPC Flow Log collection.
* Two Cloudwatch Event Rules, one which triggers the start of the pipeline based on an Inspector Finding of "High" or "Critical", and one which sends notifications to an SQS Queue after a successful image push to the ECR Repository.
* This pattern creates 43 AWS resources in total.
## Limitations
[VPC Endpoints](https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html) cannot be used, and therefore this solution creates VPC Infrastructure that includes a [NAT Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) and an Internet Gateway for internet connectivity from its private subnet. This is due to the bootstrap process by [AWSTOE](https://docs.aws.amazon.com/imagebuilder/latest/userguide/how-image-builder-works.html#ibhow-component-management), which installs AWS CLI v2 from the internet.
## Operating systems
This Pipeline only contains a recipe for Amazon Linux 2.

```
├── main.tf
├── outputs.tf
├── sec-groups.tf
├── trigger-build.tf
└── variables.tf
```
## Module details
1. `hardening-pipeline.tfvars` contains the Terraform variables to be used at apply time.
2. `pipeline.tf` creates and manages an EC2 Image Builder pipeline in Terraform.
3. `image.tf` contains the definitions for the base image OS; this is where you can modify the pipeline to use a different base image.
4. `infr-config.tf` and `dist-config.tf` contain the resources for the minimum AWS infrastructure needed to spin up and distribute the image.
5. `components.tf` contains an S3 upload resource to upload the contents of the `/files` directory; this is also where you can modularly add custom component YAML files.
6. `recipes.tf` is where you can specify different mixtures of components to create a different container recipe.
7. `trigger-build.tf` contains the EventBridge rules and SQS queue resources.
8. `roles.tf` contains the IAM policy definitions for the EC2 instance profile and the pipeline deployment role.
9. `infra-network-config.tf` contains the minimum VPC infrastructure to deploy the container image into.
10. `/files` is intended to contain the `.yml` files which define any custom components used in `components.tf`.
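Custom component documents placed in `/files` follow the AWSTOE YAML component document format. A minimal hypothetical example (the component name and build command are illustrative; the schema fields come from the AWSTOE component format):

```yaml
# Hypothetical component document; name and command are illustrative.
name: example-custom-component
description: Example hardening step packaged into the recipe
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: UpdateOsPackages
        action: ExecuteBash
        inputs:
          commands:
            - yum update -y
```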
* This Terraform module set is intended to be used at scale. Instead of deploying it locally, the modules can be used in a multi-account strategy environment, such as an [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) with [Account Factory for Terraform](https://aws.amazon.com/blogs/aws/new-aws-control-tower-account-factory-for-terraform/) environment. In that case, a [backend state S3 bucket](https://developer.hashicorp.com/terraform/language/settings/backends/s3) should be used for managing Terraform state files, instead of managing the configuration state locally.
* To deploy for scaled use, deploy the solution to one central account, such as "Shared Services/Common Services" in a Control Tower or Landing Zone account model, and grant consumer accounts permission to access the ECR Repo and KMS Key; see [this blog post](https://aws.amazon.com/premiumsupport/knowledge-center/secondary-account-access-ecr/) explaining the setup. For example, in an [Account Vending Machine](https://www.hashicorp.com/resources/terraform-landing-zones-for-self-service-multi-aws-at-eventbrite) or Account Factory for Terraform environment, add permissions to each account baseline or account customization baseline to access that ECR Repo and encryption key.
* This container image pipeline can easily be modified once deployed, using EC2 Image Builder features such as the "Component" feature, which allows easy packaging of more components into the Docker build.
* The KMS Key used to encrypt the container image should be shared with the accounts in which the container image is intended to be used.
* Support for other images can be added by duplicating this entire Terraform module and modifying the `recipes.tf` attributes: change `parent_image = "amazonlinux:latest"` to another parent image type, and modify `repository_name` to point to an existing ECR repository. This creates another pipeline which deploys a different parent image type to your existing ECR repository.
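The swap described above touches only a couple of attributes. As a rough sketch (the resource name, image, and repository name here are illustrative; the attribute names follow the `aws_imagebuilder_container_recipe` resource schema):

```hcl
# Illustrative fragment of a modified recipes.tf; names are hypothetical.
resource "aws_imagebuilder_container_recipe" "other_image" {
  name           = "other-base-image-recipe" # illustrative
  version        = "1.0.0"
  container_type = "DOCKER"

  # Change the parent image from "amazonlinux:latest" to another type.
  parent_image = "ubuntu:latest"

  target_repository {
    service         = "ECR"
    repository_name = "my-existing-ecr-repo" # point at an existing repository
  }

  dockerfile_template_data = file("${path.module}/Dockerfile") # illustrative
}
```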
## Deployment steps
If you instead get `command not found`, then install the AWS CLI.
```
Default region name [us-east-1]: <Your desired region for deployment>
Default output format [None]: <Your desired output format>
```
4. Navigate to the directory containing this solution before running the commands below:
```shell
cd terraform-ec2-image-builder-container-hardening-pipeline
```
5. Update the variables in `hardening-pipeline.tfvars` to match your environment and desired configuration. You must provide your own `account_id`; you should also modify the rest of the variables to fit your desired deployment.
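For illustration, the start of a `hardening-pipeline.tfvars` might look like the following. Only `account_id` is named above; the other key is a placeholder for whatever `variables.tf` actually declares:

```hcl
# Only account_id is confirmed by the docs; other keys are hypothetical.
account_id = "111122223333" # replace with your AWS account ID
aws_region = "us-east-1"    # assumed variable name for the target region
```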
When running Terraform apply or destroy commands from your local machine, you may encounter an error similar to the following:
```
Error: configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 123456a9-fbc1-40ed-b8d8-513d0133ba7f, api error InvalidClientTokenId: The security token included in the request is invalid.
```
This error is due to the expiration of the security token for the credentials used in your local machine’s configuration.
See "[Set and View Configuration Settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods)" from the AWS Command Line Interface Documentation to resolve.