I work at an AWS/Docker/ECS/Terraform shop. Our current ECS clusters use EBS volumes for storage. While that works fine, some services require file sharing between containers. Here is our setup for AWS ECS with EFS.
- Modules
- Environment
- Example Output
- References
Modules: EFS
main.tf
We define the following resources:
- aws_efs_file_system
- aws_efs_mount_target
- aws_security_group
resource "aws_efs_file_system" "efs" {
creation_token = "${var.name}-c${var.cluster_num}"
performance_mode = "${var.performance_mode}"
encrypted = "true"
tags {
Name = "${var.name}-c${var.cluster_num}"
Environment = "${var.environment}"
Cluster = "${var.cluster_num}"
Terraform = "true"
}
}
resource "aws_efs_mount_target" "efs" {
count = "${length(var.subnets)}"
file_system_id = "${aws_efs_file_system.efs.id}"
subnet_id = "${element(var.subnets, count.index)}"
security_groups = ["${aws_security_group.efs.id}"]
}
resource "aws_security_group" "efs" {
name = "${var.name}-efs-c${var.cluster_num}"
description = "Allow NFS traffic."
vpc_id = "${var.vpc_id}"
lifecycle {
create_before_destroy = true
}
ingress {
from_port = "2049"
to_port = "2049"
protocol = "tcp"
cidr_blocks = ["${var.allowed_cidr_blocks}"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags {
Name = "${var.name}-c${var.cluster_num}"
Environment = "${var.environment}"
Cluster = "${var.cluster_num}"
Terraform = "true"
}
}
variables.tf
variable "name" {
description = "(Required) The reference_name of your file system. Also, used in tags."
type = "string"
}
variable "region" {
description = "(Optional) The region of your file system."
type = "string"
default = "us-east-1"
}
variable "environment" {
description = "(Required) The environment of your file system. Also, used in tags."
type = "string"
}
variable "cluster_num" {
description = "(Optional) The cluster number of your file system. Also, used in tags."
type = "string"
default = "01"
}
variable "performance_mode" {
description = "(Optional) The performance mode of your file system."
type = "string"
default = "generalPurpose"
}
variable "vpc_id" {
description = "(Required) The VPC ID where NFS security groups will be."
type = "string"
}
variable "subnets" {
description = "(Required) A comma separated list of subnet ids where mount targets will be."
type = "list"
}
variable "allowed_cidr_blocks" {
description = "(Required) A comma separated list of CIDR blocks allowed to mount target."
type = "list"
}
outputs.tf
output "name" {
value = "${var.name}"
}
output "file_system_id" {
value = "${aws_efs_file_system.efs.id}"
}
output "dns_name" {
value = "${aws_efs_file_system.efs.dns_name}"
}
output "mount_target_ids" {
value = "${join(",", aws_efs_mount_target.efs.*.id)}"
}
output "mount_target_interface_ids" {
value = "${join(",", aws_efs_mount_target.efs.*.network_interface_id)}"
}
output "security_group_id" {
value = "${aws_security_group.efs.id}"
}
Modules: ECS Cluster
You will need the elasticfilesystem:DescribeFileSystems action in your aws_iam_role_policy resource.
resource "aws_iam_role_policy" "container_instance" {
...
"elasticfilesystem:DescribeFileSystems",
...
}
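For context, here is a minimal sketch of what that policy could look like; the policy name, the role reference, and the wildcard resource are placeholders rather than our exact policy.

resource "aws_iam_role_policy" "container_instance" {
  # Hypothetical policy; only the EFS statement is shown.
  name = "container-instance-efs"
  role = "${aws_iam_role.container_instance.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:DescribeFileSystems"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}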
The efs_dns_name variable will need to be passed into the template file so it can be rendered into the user data consumed by the aws_launch_configuration resource.
data "template_file" "user_data" {
...
vars {
efs_dns_name = "${var.efs_dns_name}"
...
}
}
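For completeness, a rough sketch of how the rendered template gets wired into the launch configuration; the AMI, instance type, and instance profile references below are assumed variables and resources, not our exact setup.

resource "aws_launch_configuration" "ecs" {
  name_prefix          = "${var.name}-c${var.cluster_num}-"
  image_id             = "${var.ami_id}"        # ECS-optimized AMI (assumed variable)
  instance_type        = "${var.instance_type}" # assumed variable
  iam_instance_profile = "${aws_iam_instance_profile.container_instance.id}"

  # The rendered cloud-config becomes the instance user data.
  user_data = "${data.template_file.user_data.rendered}"

  lifecycle {
    create_before_destroy = true
  }
}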
In the user_data template file (if you set up your ECS instances that way), add the NFS packages and mount commands. This allows the ECS instance to mount the EFS partition during provisioning. The $${...} sequences are escaped so Terraform renders them as literal ${...} for the shell; only ${efs_dns_name} is interpolated by Terraform.
packages:
  - nfs-utils
...
runcmd:
  # Setup EFS
  - DIR_SRC="${efs_dns_name}"
  - DIR_TGT="/efs"
  - mkdir -p $${DIR_TGT}
  - mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,soft,timeo=600,retrans=2 $${DIR_SRC}:/ $${DIR_TGT}
  - echo "$${DIR_SRC}:/ $${DIR_TGT} nfs nfsvers=4.1,rsize=1048576,wsize=1048576,soft,timeo=600,retrans=2 0 2" | tee -a /etc/fstab
Modules: ALB Service
Define the volume in the aws_ecs_task_definition resource.
resource "aws_ecs_task_definition" "ecs_task_definition" {
...
volume {
name = "efs"
host_path = "/efs/${var.name}-c${var.cluster_num}"
}
...
}
Then add the corresponding mount point in the task_definition template file (if you set up your task definition that way).
"mountPoints": [
{
"containerPath": "/efs",
"sourceVolume": "efs",
"readOnly": null
}
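If the container definitions live in a template file like that, the wiring back into Terraform looks roughly like this; the template path, family name, and image variable are placeholders, not our exact resources.

data "template_file" "task_definition" {
  template = "${file("${path.module}/templates/task_definition.json")}"

  vars {
    image = "${var.image}" # assumed variable for the container image
  }
}

resource "aws_ecs_task_definition" "ecs_task_definition" {
  family                = "${var.name}-c${var.cluster_num}"
  container_definitions = "${data.template_file.task_definition.rendered}"

  volume {
    name      = "efs"
    host_path = "/efs/${var.name}-c${var.cluster_num}"
  }
}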
Environment
efs.tf
Call the aws_efs module as shown below:
module "efs_default_c01" {
source = "../../modules/aws_efs"
name = "${var.env}-default"
vpc_id = "${module.vpc.vpc_id}"
subnets = "${module.vpc.backend_subnets_list}"
allowed_cidr_blocks = ["${var.default_allowed_ips}"]
region = "${var.region}"
environment = "${var.env}"
cluster_num = "01"
}
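The module's dns_name output is what feeds the efs_dns_name variable used by the ECS cluster module earlier. The hand-off looks roughly like this; the cluster module name and its other arguments are placeholders.

module "ecs_cluster_default_c01" {
  # Hypothetical ECS cluster module; only the EFS-related argument is shown.
  source       = "../../modules/aws_ecs_cluster"
  name         = "${var.env}-default"
  environment  = "${var.env}"
  cluster_num  = "01"
  efs_dns_name = "${module.efs_default_c01.dns_name}"
}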
Example Output
Once the ECS instance is up, and if everything went according to plan, you should have the /efs partition mounted.
$ df -h /efs
Filesystem                                 Size  Used Avail Use% Mounted on
fs-********.efs.us-east-1.amazonaws.com:/  8.0E   24M  8.0E   1% /efs
Inside the containers, your /efs partition will look like this:
$ df -h /efs
Filesystem                                                Size  Used Avail Use% Mounted on
fs-********.efs.us-east-1.amazonaws.com:/dev-search-c01  8.0E   24M  8.0E   1% /efs