
Commit 672d5de

Author: Mike Saintcross (committed)
Commit message: Escaped git revert blackhole
1 parent 2657575 · commit 672d5de

10 files changed (+218 −62 lines)

LICENSE

+2 −1

```diff
@@ -11,4 +11,5 @@ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
 FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
 COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
 IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
```

README.md

+47 −38
````diff
@@ -1,32 +1,34 @@
 # Terraform EC2 Image Builder Container Hardening Pipeline summary
 
-Creates and manages EC2 Image Builder Container resources. Specifically this pipeline builds an Amazon Linux 2 Baseline Container using Docker with RHEL 7 STIG Version 3 Release 7 hardening applied, along with a few other configurations. See recipes.tf for more details.
+Terraform modules build an [EC2 Image Builder Pipeline](https://docs.aws.amazon.com/imagebuilder/latest/userguide/start-build-image-pipeline.html) with an [Amazon Linux 2](https://aws.amazon.com/amazon-linux-2/) Baseline Container Recipe, which is used to deploy a [Docker](https://docs.docker.com/)-based Amazon Linux 2 Container Image hardened according to RHEL 7 STIG Version 3 Release 7 - Medium. See the "[STIG-Build-Linux-Medium version 2022.2.1](https://docs.aws.amazon.com/imagebuilder/latest/userguide/toe-stig.html#linux-os-stig)" section in Linux STIG Components for details. This is commonly referred to as a "golden" container image.
 
-Test.
+The build includes two [CloudWatch Event Rules](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Rule.html). One triggers the start of the Container Image pipeline on an [Inspector finding](https://docs.aws.amazon.com/inspector/latest/user/findings-managing.html) of "High" or "Critical" severity, so that insecure images are replaced whenever Inspector and [Amazon Elastic Container Registry](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) ["Enhanced Scanning"](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-enhanced.html) are both enabled. The other sends a notification to an SQS queue after each successful Container Image push to the ECR repository, to better enable consumption of new container images.
 
 ## Prerequisites
 
-* Terraform v.15+. Download and setup Terraform. Refer to the official Terraform instructions to get started.
-* AWS CLI installed for setting your AWS Credentials for Local Deployment.
-* An AWS Account to deploy the infrastructure within.
-* Git (if provisioning from a local machine).
-* A role within the AWS account that you are able create AWS resources with
-* Ensure the .tfvars file has all variables defined or define all variables at Terraform Apply time
+* Terraform v0.15+. [Download](https://www.terraform.io/downloads.html) and set up Terraform. Refer to the official Terraform [instructions](https://learn.hashicorp.com/collections/terraform/aws-get-started) to get started.
+* [AWS CLI installed](https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html) for setting your AWS credentials for local deployment.
+* [An AWS account](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/) to deploy the infrastructure within.
+* [Git](https://git-scm.com/) (if provisioning from a local machine).
+* A [role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) within the AWS account with which you are able to create AWS resources.
+* Ensure the [.tfvars](https://developer.hashicorp.com/terraform/tutorials/configuration-language/variables) file has all variables defined, or define all variables at "terraform apply" time.
 
 ## Target technology stack
 
-* S3 Bucket for the Pipeline Component Files
-* ECR
-* 1 VPC, 1 Public and 1 Private subnet, Route tables, a NAT Gateway, and an Internet Gateway
+* Two [S3 buckets](https://aws.amazon.com/s3/), one for the pipeline [Component](https://docs.aws.amazon.com/imagebuilder/latest/userguide/create-component-console.html) files and one for Server Access and VPC Flow logs
+* An ECR [repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Repositories.html)
+* A [VPC](https://aws.amazon.com/vpc/), a public and a private [subnet](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html), [route tables](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html), a [NAT Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html), and an [Internet Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html)
 * An EC2 Image Builder Pipeline, Recipe, and Components
-* 1 Container Image
-* 1 KMS Key for Image Encryption
-* A Cloudwatch Event Rule which triggers the start of the pipeline based on an Inspector2 Finding of "High"
-* This pattern creates 29 AWS Resources total.
+* A Container Image
+* A [KMS key](https://aws.amazon.com/kms/) for image encryption
+* An SQS queue
+* Four roles: one for the EC2 Image Builder Pipeline to execute as, one instance profile for EC2 Image Builder, one for the EventBridge rules, and one for VPC Flow Log collection
+* Two CloudWatch Event Rules: one which triggers the start of the pipeline based on an Inspector finding of "High" or "Critical," and one which sends a notification to an SQS queue after a successful image push to the ECR repository
+* This pattern creates 43 AWS resources in total
 
 ## Limitations
 
-VPC Endpoints cannot be used, and therefore this solution creates VPC Infrastructure that includes a NAT Gateway and an Internet Gateway for internet connectivity from its private subnet. This is due to the bootstrap process by AWSTOE, which installs AWS CLI v2 from the internet.
+[VPC Endpoints](https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html) cannot be used, so this solution creates VPC infrastructure that includes a [NAT Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) and an Internet Gateway for internet connectivity from its private subnet. This is due to the bootstrap process by [AWSTOE](https://docs.aws.amazon.com/imagebuilder/latest/userguide/how-image-builder-works.html#ibhow-component-management), which installs AWS CLI v2 from the internet.
 
 ## Operating systems
 
````
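For readers unfamiliar with this kind of pipeline trigger, the Inspector rule described in the README text above (implemented in `trigger-build.tf`) can be sketched in Terraform as follows. This is an illustrative sketch only: the resource names, the IAM role, and the exact event pattern are assumptions, not this repository's actual code.

```hcl
# Illustrative sketch: start the Image Builder pipeline when Inspector2
# reports a HIGH or CRITICAL finding. All names here are hypothetical.
resource "aws_cloudwatch_event_rule" "inspector_finding" {
  name        = "hardening-pipeline-inspector-trigger"
  description = "Rebuild the hardened container image on a severe finding"

  event_pattern = jsonencode({
    source        = ["aws.inspector2"]
    "detail-type" = ["Inspector2 Finding"]
    detail = {
      severity = ["HIGH", "CRITICAL"]
    }
  })
}

resource "aws_cloudwatch_event_target" "start_pipeline" {
  rule     = aws_cloudwatch_event_rule.inspector_finding.name
  arn      = aws_imagebuilder_image_pipeline.this.arn # assumed pipeline resource
  role_arn = aws_iam_role.eventbridge.arn             # assumed EventBridge role
}
```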
````diff
@@ -54,36 +56,37 @@ This Pipeline only contains a recipe for Amazon Linux 2.
 ├── main.tf
 ├── outputs.tf
 ├── sec-groups.tf
+├── trigger-build.tf
 └── variables.tf
 ```
 
 ## Module details
 
-1. hardening-pipeline.tfvars contains the Terraform variables to be used at apply time
-2. pipeline.tf creates and manages an EC2 Image Builder pipeline in Terraform
-3. image.tf contains the definitions for the Base Image OS, this is where you can modify for a different base image pipeline.
-4. infr-config.tf and dist-config.tf contain the resources for the minimum AWS infrastructure needed to spin up and distribute the image.
-5. components.tf contains an S3 upload resource to upload the contents of the /files directory, and where you can modularly add custom component YAML files as well.
-6. recipes.tf is where you can specific different mixtures of components to create a different container recipe.
-7. trigger-build.tf is an inspector2 finding based pipeline trigger.
-8. roles.tf contains the IAM policy definitions for the EC2 Instance Profile and Pipeline Deployment Role
-9. infra-network-config.tf contains the minimum VPC infrastructure to deploy the container image into
-10. /files contains the .yml files which are used to define the components used in components.tf
+1. `hardening-pipeline.tfvars` contains the Terraform variables to be used at apply time.
+2. `pipeline.tf` creates and manages an EC2 Image Builder pipeline in Terraform.
+3. `image.tf` contains the definitions for the base image OS; this is where you can modify the pipeline for a different base image.
+4. `infr-config.tf` and `dist-config.tf` contain the resources for the minimum AWS infrastructure needed to spin up and distribute the image.
+5. `components.tf` contains an S3 upload resource to upload the contents of the /files directory, and is where you can modularly add custom component YAML files as well.
+6. `recipes.tf` is where you can specify different mixtures of components to create a different container recipe.
+7. `trigger-build.tf` contains the EventBridge rules and SQS queue resources.
+8. `roles.tf` contains the IAM policy definitions for the EC2 instance profile and pipeline deployment role.
+9. `infra-network-config.tf` contains the minimum VPC infrastructure to deploy the container image into.
+10. `/files` is intended to contain the `.yml` files which define any custom components used in components.tf.
 
 ## Target architecture
 ![Deployed Resources Architecture](container-harden.png)
 
 ## Automation and scale
 
-* This terraform module set is intended to be used at scale. Instead of deploying it locally, the Terraform modules can be used in a multi-account strategy environment, such as in an AWS Control Tower with Account Factory for Terraform environment. In that case, a backend state S3 bucket should be used for managing Terraform state files, instead of managing the configuration state locally.
+* This Terraform module set is intended to be used at scale. Instead of deploying it locally, the Terraform modules can be used in a multi-account environment, such as an [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) with [Account Factory for Terraform](https://aws.amazon.com/blogs/aws/new-aws-control-tower-account-factory-for-terraform/) environment. In that case, a [backend state S3 bucket](https://developer.hashicorp.com/terraform/language/settings/backends/s3) should be used for managing Terraform state files instead of managing the configuration state locally.
 
-* To deploy for scaled use, deploy the solution to one central account, such as Shared Services/Common Services from a Control Tower or Landing Zone account model and grant consumer accounts permission to access to the ECR Repo/KMS Key, see this blog post explaining the setup. For example, in an Account Vending Machine or Account Factory for Terraform, add permissions to each account baseline or account customization baseline to have access to that ECR Repo and Encryption key.
+* To deploy for scaled use, deploy the solution to one central account, such as a "Shared Services/Common Services" account from a Control Tower or Landing Zone account model, and grant consumer accounts permission to access the ECR repository and KMS key; see [this blog post](https://aws.amazon.com/premiumsupport/knowledge-center/secondary-account-access-ecr/) explaining the setup. For example, in an [Account Vending Machine](https://www.hashicorp.com/resources/terraform-landing-zones-for-self-service-multi-aws-at-eventbrite) or Account Factory for Terraform, add permissions to each account baseline or account customization baseline to grant access to that ECR repository and encryption key.
 
-* This container image pipeline can be simply modified once deployed, using EC2 Image Builder features, such as the Component feature, which will allow easy packaging of more components into the Docker build.
+* This container image pipeline can easily be modified once deployed, using EC2 Image Builder features such as the "Component" feature, which allows easy packaging of more components into the Docker build.
 
 * The KMS Key used to encrypt the container image should be shared across accounts which the container image is intended to be used in
 
-* Support for other images can be added by simply duplicating this entire Terraform module, and modifying the recipes.tf attributes, parent_image = "amazonlinux:latest" to be another parent image type, and modifying the repository_name to point to an existing ECR repository. This will create another pipeline which deploys a different parent image type, but to your existing ECR repostiory.
+* Support for other images can be added by duplicating this entire Terraform module and modifying the `recipes.tf` attribute `parent_image = "amazonlinux:latest"` to another parent image type, and modifying `repository_name` to point to an existing ECR repository. This creates another pipeline which deploys a different parent image type to your existing ECR repository.
````
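The first "Automation and scale" bullet above recommends remote state for scaled use. A minimal sketch of such a backend, assuming a pre-existing state bucket and DynamoDB lock table (both placeholder names, not resources this module creates):

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state-bucket"       # placeholder bucket
    key            = "hardening-pipeline/terraform.tfstate" # state object path
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "example-terraform-locks"              # placeholder lock table
  }
}
```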
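Likewise, the `parent_image` swap described in the last bullet is a small recipe change. A hedged sketch, with abbreviated placeholder arguments rather than the module's exact `recipes.tf` contents:

```hcl
resource "aws_imagebuilder_container_recipe" "this" {
  name           = "example-hardening-recipe" # placeholder name
  version        = "1.0.0"
  container_type = "DOCKER"

  # Swap this to build from a different base image.
  parent_image = "amazonlinux:latest"

  # Point at an existing ECR repository to reuse it across pipelines.
  target_repository {
    repository_name = "my-existing-ecr-repo" # placeholder
    service         = "ECR"
  }

  component {
    component_arn = aws_imagebuilder_component.example.arn # assumed component
  }

  dockerfile_template_data = file("${path.module}/Dockerfile.template") # placeholder path
}
```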
````diff
@@ -109,18 +112,24 @@ If you instead got command not found then install the AWS CLI
 Default region name: [us-east-1]: <Your desired region for deployment>
 Default output format [None]: <Your desired Output format>
 ```
-3. Clone the repository
+3. Clone the repository with HTTPS or SSH
+
+HTTPS
+``` shell
+git clone https://github.com/aws-samples/terraform-ec2-image-builder-container-hardening-pipeline.git
+```
+SSH
 ``` shell
-git clone https://gitlab.aws.dev/msaintcr/terraform-ec2-image-builder-container-hardening-pipeline.git
+git clone git@github.com:aws-samples/terraform-ec2-image-builder-container-hardening-pipeline.git
 ```
 4. Navigate to the directory containing this solution before running the commands below:
 ``` shell
 cd terraform-ec2-image-builder-container-hardening-pipeline
 ```
 
-5. Update variables in hardening-pipeline.tfvars to match your environment and your desired configuration. You cannot use provided variable values, the solution will not deploy.
+5. Update variables in hardening-pipeline.tfvars to match your environment and your desired configuration. You must provide your own `account_id`; you should also modify the rest of the variables to fit your desired deployment.
 ``` json
-account_id = "012345678900"
+account_id = "<DEPLOYMENT-ACCOUNT-ID>"
 aws_region = "us-east-1"
 vpc_name = "example-hardening-pipeline-vpc"
 kms_key_alias = "image-builder-container-key"
````
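Step 5 edits the `.tfvars` file; alternatively, as the prerequisites note, individual variables can be supplied at apply time with the standard Terraform `-var` flag. A brief example with placeholder values:

``` shell
terraform apply \
  -var 'account_id=012345678900' \
  -var 'aws_region=us-east-1' \
  -var 'vpc_name=example-hardening-pipeline-vpc'
```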
````diff
@@ -140,21 +149,21 @@ terraform init && terraform validate && terraform apply -var-file *.tfvars -auto
 
 7. After successfully completion of your first Terraform apply, if provisioning locally, you should see this snippet in your local machine's terminal:
 ``` shell
-Apply complete! Resources: 29 added, 0 changed, 0 destroyed.
+Apply complete! Resources: 43 added, 0 changed, 0 destroyed.
 ```
 
 ## Troubleshooting
 
-*When running Terraform apply or destroy commands from your local machine, you may encounter an error similar to the following:*
+When running Terraform apply or destroy commands from your local machine, you may encounter an error similar to the following:
 
 ``` json
 Error: configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 123456a9-fbc1-40ed-b8d8-513d0133ba7f, api error InvalidClientTokenId: The security token included in the request is invalid.
 ```
 
 This error is due to the expiration of the security token for the credentials used in your local machine's configuration.
 
-See Set and View Configuration Settings from the AWS Command Line Interface Documentation to resolve.
+See "[Set and View Configuration Settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods)" from the AWS Command Line Interface Documentation to resolve.
 
 ## Author
 
-* Mike Saintcross [msaintcr@](mailto:msaintcr@amazon.com)
+* Mike Saintcross [msaintcr@amazon.com](mailto:msaintcr@amazon.com)
````
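A quick way to confirm the expired-token condition described in the Troubleshooting section is to call STS directly. These are standard AWS CLI commands, not something this commit adds:

``` shell
# Fails with the same InvalidClientTokenId error if the token has expired
aws sts get-caller-identity

# Refresh the stored credentials, then retry terraform apply/destroy
aws configure
```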

components.tf

+6 −5

```diff
@@ -1,15 +1,16 @@
 # Upload files to S3
 resource "aws_s3_bucket_object" "component_files" {
   depends_on = [
-    aws_s3_bucket.s3_pipeline_bucket
+    aws_s3_bucket.s3_pipeline_bucket,
+    aws_kms_key.this
   ]
 
   for_each = fileset(path.module, "files/**/*.yml")
 
-  bucket                 = var.aws_s3_ami_resources_bucket
-  key                    = each.value
-  source                 = "${path.module}/${each.value}"
-  server_side_encryption = "aws:kms"
+  bucket     = var.aws_s3_ami_resources_bucket
+  key        = each.value
+  source     = "${path.module}/${each.value}"
+  kms_key_id = aws_kms_key.this.id
 }
 
 # Add custom component resources below
```
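A side note on the resource above: `aws_s3_bucket_object` is deprecated in AWS provider v4+ in favor of `aws_s3_object`, which accepts the same arguments. A minimal sketch of the equivalent resource:

```hcl
resource "aws_s3_object" "component_files" {
  for_each = fileset(path.module, "files/**/*.yml")

  bucket     = var.aws_s3_ami_resources_bucket
  key        = each.value
  source     = "${path.module}/${each.value}"
  kms_key_id = aws_kms_key.this.id # KMS key used for server-side encryption
}
```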

container-harden.png

32.5 KB (binary image)

dist-config.tf

+1 −1

```diff
@@ -1,6 +1,6 @@
 resource "aws_ecr_repository" "hardening_pipeline_repo" {
   name                 = var.ecr_name
-  image_tag_mutability = "MUTABLE"
+  image_tag_mutability = "IMMUTABLE"
 
   encryption_configuration {
     encryption_type = "KMS"
```
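Immutable tags mean a pushed tag can never be overwritten, so superseded images accumulate in the repository. A common companion (an illustrative sketch, not part of this commit) is a lifecycle policy that expires old untagged images:

```hcl
resource "aws_ecr_lifecycle_policy" "hardening_pipeline_repo" {
  repository = aws_ecr_repository.hardening_pipeline_repo.name

  policy = jsonencode({
    rules = [{
      rulePriority = 1
      description  = "Expire untagged images beyond the most recent 10"
      selection = {
        tagStatus   = "untagged"
        countType   = "imageCountMoreThan"
        countNumber = 10
      }
      action = { type = "expire" }
    }]
  })
}
```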

hardening-pipeline.tfvars

+1 −2

```diff
@@ -1,6 +1,5 @@
 # Enter values for all of the following if you wish to avoid being prompted on each run.
-# You must specify an account_id
-account_id = "537229986333"
+account_id = "012345678900"
 aws_region = "us-east-1"
 vpc_name = "example-hardening-pipeline-vpc"
 kms_key_alias = "image-builder-container-key"
```

infra-network-config.tf

+18 −0

```diff
@@ -9,6 +9,18 @@ resource "aws_vpc" "hardening_pipeline" {
     Name = "${var.vpc_name}"
   }
 }
+
+resource "aws_flow_log" "hardening_pipeline_flow" {
+  depends_on = [
+    aws_s3_bucket.s3_pipeline_logging_bucket_logs
+  ]
+  log_destination      = aws_s3_bucket.s3_pipeline_logging_bucket_logs.arn
+  log_destination_type = "s3"
+  traffic_type         = "ALL"
+  vpc_id               = aws_vpc.hardening_pipeline.id
+}
+
+# Map public IP on launch because we are creating an internet gateway
 resource "aws_subnet" "hardening_pipeline_public" {
   depends_on = [
     aws_vpc.hardening_pipeline
@@ -22,6 +34,7 @@ resource "aws_subnet" "hardening_pipeline_public" {
     Name = "${var.vpc_name}-public"
   }
 }
+
 resource "aws_subnet" "hardening_pipeline_private" {
   depends_on = [
     aws_vpc.hardening_pipeline,
@@ -36,6 +49,11 @@ resource "aws_subnet" "hardening_pipeline_private" {
     Name = "${var.vpc_name}-private"
   }
 }
+
+resource "aws_default_security_group" "hardening_pipeline_vpc_default" {
+  vpc_id = aws_vpc.hardening_pipeline.id
+}
+
 resource "aws_internet_gateway" "hardening_pipeline_igw" {
   depends_on = [
     aws_vpc.hardening_pipeline,
```
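Note on the `aws_default_security_group` addition above: adopting a VPC's default security group with no `ingress` or `egress` blocks causes Terraform to remove all rules from it, the common hardening pattern of denying all traffic on the unused default group.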
