Terraform and AWS Transfer Family w/ Lambda + Secrets Manager

01 Oct 2022

Reading time ~9 minutes

  • Terraform
    • SFTP Server
    • SFTP User
    • Environment Setup
  • Creating the AWS Resources
    • Lambda Function
    • Secrets Manager
    • CloudWatch Logs
    • IAM Roles and Policies
      • IAM Roles
      • IAM Policies
  • Alternative
    • Amazon S3 Protocol
  • Verify
    • AWS CLI
    • SFTP Client
  • Resources

We used to run an SFTP server on a laptop sitting in the office closet for our clients to upload their product SKUs! 😅

While many companies are probably still running a secure FTP server on an EC2 instance, Amazon offers a managed service called “AWS Transfer Family” that supports SFTP, FTPS, FTP, and more.

There are many documents online, but it still took me a while to piece everything together. This doc sets up an SFTP server with an S3 bucket as the backend, using a Lambda function to authenticate against Secrets Manager.

After the Terraform code is applied, the following resources will be created:

  • Transfer Family SFTP server
  • Lambda function
  • Secrets Manager secrets with user credentials
  • CloudWatch logs for Lambda and SFTP
  • IAM roles and policies

Terraform

SFTP Server

modules/sftp-server/
├── lambda-applications/
│   └── authentication/
│       ├── README.md
│       └── index.py
├── main.tf
└── variables.tf

lambda-applications/authentication/index.py

Working with custom identity providers has all the information we will need, including the Lambda function for authentication. It ships inside a CloudFormation template, but we can strip out the Python code using yq.

$ curl -s -o - https://s3.amazonaws.com/aws-transfer-resources/custom-idp-templates/aws-transfer-custom-idp-secrets-manager-apig.template.yml | \
    sed 's/Fn::Sub/FnSub/' | \
    yq '.Resources.GetUserConfigLambda.Properties.Code.ZipFile.FnSub'
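
For orientation, here is a trimmed sketch of what the extracted handler does, covering only the password flow and assuming the standard template shape (the real index.py also handles SSH public keys and a few more cases). Returning an empty object denies the login:

import json
import os

import boto3

def lambda_handler(event, context):
    # Transfer Family invokes the function with username/password fields.
    username = event["username"]
    password = event.get("password", "")

    client = boto3.client(
        "secretsmanager", region_name=os.environ["SecretsManagerRegion"]
    )
    try:
        # Secret names follow the SFTP/<username> convention used below.
        secret = client.get_secret_value(SecretId=f"SFTP/{username}")
    except client.exceptions.ResourceNotFoundException:
        return {}  # unknown user: deny

    config = json.loads(secret["SecretString"])
    if not password or password != config.get("Password"):
        return {}  # wrong or missing password: deny

    # Transfer Family reads Role/Policy/HomeDirectory* from the response.
    return {
        "Role": config["Role"],
        "Policy": config.get("Policy", ""),
        "HomeDirectoryType": config.get("HomeDirectoryType", "PATH"),
        "HomeDirectoryDetails": config.get("HomeDirectoryDetails", ""),
    }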

main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.56"
    }
  }
}

########################
## Locals
########################

locals {
  sftp_count = var.enable ? 1 : 0
}

########################
## Data
########################

data "aws_caller_identity" "current" {}

########################
## Lambda
########################

module "lambda_auth" {
  source  = "terraform-aws-modules/lambda/aws"
  version = "4.0.2"

  function_name = "${var.stage}-sftp-lambda-authentication"
  description   = "Managed by Terraform"
  handler       = "index.lambda_handler"
  runtime       = "python3.7"
  source_path   = "${path.module}/lambda-applications/authentication"

  environment_variables = {
    SecretsManagerRegion = var.region
  }

  attach_policy = true
  policy        = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"

  attach_policy_statements = true
  policy_statements = {
    transfer = {
      effect    = "Allow",
      actions   = ["secretsmanager:GetSecretValue"]
      resources = ["arn:aws:secretsmanager:${var.region}:${data.aws_caller_identity.current.account_id}:secret:SFTP/*"]
    }
  }

  recreate_missing_package                = false
  create_current_version_allowed_triggers = false
  allowed_triggers = local.sftp_count == 1 ? {
    "${var.stage}-sftp-lambda-authentication" = {
      principal  = "transfer.amazonaws.com"
      source_arn = module.sftp[0].arn
    }
  } : {}

  tags = var.tags
}

########################
## Transfer Family
########################

module "sftp" {
  source = "github.com/terrablocks/aws-sftp-server?ref=v1.2.0"

  count                  = local.sftp_count
  name                   = "${var.stage}-sftp-server"
  sftp_type              = "PUBLIC"
  protocols              = ["SFTP"]
  identity_provider_type = "AWS_LAMBDA"
  function_arn           = module.lambda_auth.lambda_function_arn
  hosted_zone            = var.hosted_zone
  sftp_sub_domain        = var.subdomain
  tags                   = var.tags
}

resource "null_resource" "sftp" {
  provisioner "local-exec" {
    command = "aws --region ${var.region} transfer update-server --server-id ${module.sftp.0.id} --protocol-details SetStatOption=ENABLE_NO_OP"
  }

  depends_on = [module.sftp]
}

variables.tf

variable "stage" {
  type        = string
  description = "Required: Stage/Environment"
}

variable "subdomain" {
  type        = string
  description = "Required: Subdomain of instance"
}

variable "hosted_zone" {
  type        = string
  description = "Required: FQDN"
}

variable "region" {
  type        = string
  default     = "us-east-2"
  description = "Optional: Region"
}

variable "enable" {
  type        = bool
  default     = false
  description = "Optional: Enable SFTP server"
}

variable "tags" {
  type = map(any)
  default = {
    "infra" = "terraform"
  }
  description = "Optional: Tags"
}

SFTP User

modules/sftp-user/
├── main.tf
├── outputs.tf
└── variables.tf

main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.56"
    }
  }
}

########################
## IAM
########################

data "aws_iam_policy_document" "data_sftp_user_policy" {
  statement {
    sid    = "ListHomeDir"
    effect = "Allow"

    actions = [
      "s3:ListBucket",
    ]

    resources = ["arn:aws:s3:::${var.bucket_name}"]
  }

  statement {
    sid    = "AWSTransferRequirements"
    effect = "Allow"

    actions = [
      "s3:ListAllMyBuckets",
      "s3:GetBucketLocation",
    ]

    resources = ["*"]
  }

  statement {
    sid    = "HomeDirObjectAccess"
    effect = "Allow"

    actions = [
      "s3:PutObject",
      "s3:GetObject",
      "s3:DeleteObjectVersion",
      "s3:DeleteObject",
      "s3:GetObjectVersion",
    ]

    resources = ["arn:aws:s3:::${var.bucket_name}/*"]
  }
}

module "iam_role" {
  source  = "cloudposse/iam-role/aws"
  version = "0.16.2"

  enabled            = true
  name               = "sftp-user-role"
  stage              = var.stage
  label_key_case     = "lower"
  policy_description = "Allow S3 HomeDirectory access."
  role_description   = "IAM role assumed by the Transfer Family service to access S3 on behalf of SFTP users."

  principals = {
    Service = ["transfer.amazonaws.com"]
  }

  policy_documents = [
    data.aws_iam_policy_document.data_sftp_user_policy.json
  ]

  tags = {
    name  = "${var.stage}-sftp-user-role"
    stage = var.stage
    infra = "terraform"
  }
}

########################
## Secrets Manager
########################

resource "random_pet" "pet" {
  for_each = var.sftp_users
  length   = 2
}

resource "random_password" "password" {
  for_each    = var.sftp_users
  length      = 16
  special     = true
  min_lower   = 1
  min_numeric = 1
  min_special = 1
  min_upper   = 1
}

# NOTE: Since we aren't specifying a KMS key, this will default to using
# `aws/secretsmanager`.
resource "aws_secretsmanager_secret" "user" {
  for_each    = var.sftp_users
  name        = "SFTP/${each.value}-${random_pet.pet[each.key].id}"
  description = "Managed by Terraform"
  tags        = var.tags
}

resource "aws_secretsmanager_secret_version" "secret_value" {
  for_each  = var.sftp_users
  secret_id = aws_secretsmanager_secret.user[each.key].id

  # NOTE: HomeDirectoryDetails and Policy are deliberately nested jsonencode
  # calls: Transfer Family expects them as JSON strings inside the secret.
  secret_string = jsonencode({
    Password          = random_password.password[each.key].result
    HomeDirectoryType = "LOGICAL"
    HomeDirectoryDetails = jsonencode([{
      Entry  = "/"
      Target = "/${var.bucket_name}/${each.key}/upload"
    }])
    Role = module.iam_role.arn
    Policy = jsonencode({
      Statement = [
        {
          Effect   = "Allow"
          Resource = "*"
          Action = [
            "s3:GetBucketLocation",
            "s3:ListAllMyBuckets"
          ]
        },
        {
          Effect = "Allow"
          Resource = [
            "arn:aws:s3:::${var.bucket_name}"
          ],
          Action = [
            "s3:ListBucket"
          ]
        },
        {
          Effect = "Allow"
          Resource = [
            "arn:aws:s3:::${var.bucket_name}/${each.key}/upload/*"
          ],
          Action = [
            "s3:DeleteObject",
            "s3:DeleteObjectVersion",
            "s3:GetObject",
            "s3:GetObjectACL",
            "s3:GetObjectVersion",
            "s3:PutObject",
            "s3:PutObjectACL",
          ]
        }
      ],
      Version = "2012-10-17"
    })
  })
}

variables.tf

variable "stage" {
  type        = string
  description = "Required: Stage/Environment"
}

variable "sftp_users" {
  type = map(any)
}

variable "bucket_name" {
  type        = string
  description = "Required: S3 bucket name"
}

variable "tags" {
  type = map(any)
  default = {
    "infra" = "terraform"
  }
  description = "Optional: Tags"
}

outputs.tf

output "iam_role_arn" {
  value = module.iam_role.arn
}

output "iam_role_id" {
  value = module.iam_role.id
}

output "iam_role_policy" {
  value = module.iam_role.policy
}

Environment Setup

########################
## SFTP - server
########################

module "sftp_server" {
  source      = "../modules/sftp-server"
  stage       = local.stage
  hosted_zone = "your-company-fqdn.com."
  enable      = true
}

#########################
### SFTP - user
#########################

module "sftp_user" {
  source      = "../modules/sftp-user"
  stage       = local.stage
  bucket_name = "${local.cluster}-sftp-user-home"

  sftp_users = {
    100 = "woody"
  }
}

Creating the AWS Resources

Once we apply the Terraform changes, we can verify the resources and test the login credential.

SFTP Server

$ aws transfer list-servers
{
    "Servers": [
        {
            "Arn": "arn:aws:transfer:us-east-2:**********:server/s-**********42466db",
            "Domain": "S3",
            "IdentityProviderType": "AWS_LAMBDA",
            "EndpointType": "PUBLIC",
            "LoggingRole": "arn:aws:iam::**********:role/preprod-sftp-server-transfer-logging",
            "ServerId": "s-**********42466db",
            "State": "ONLINE"
        }
    ]
}

Lambda Function

$ aws lambda list-functions | jq '.Functions[] | select(.FunctionName == "preprod-sftp-lambda-authentication")'
{
  "FunctionName": "preprod-sftp-lambda-authentication",
  "FunctionArn": "arn:aws:lambda:us-east-2:**********:function:preprod-sftp-lambda-authentication",
  "Runtime": "python3.7",
  "Role": "arn:aws:iam::**********:role/preprod-sftp-lambda-authentication",
  "Handler": "index.lambda_handler",
  "Description": "Managed by Terraform",
  "Timeout": 3,
  "MemorySize": 128,
  "Version": "$LATEST",
  "Environment": {
    "Variables": {
      "SecretsManagerRegion": "us-east-2"
    }
  },
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "PackageType": "Zip",
  "Architectures": [
    "x86_64"
  ],
  "EphemeralStorage": {
    "Size": 512
  }
}

Secrets Manager

This is the tricky part of the setup. The Role is created in Terraform, but the Policy has to be embedded in the secret value itself as an escaped JSON string (the Python check after the output below makes the nesting visible).

We are using the LOGICAL home directory type, which lets us lock each user's S3 access down to their own subfolder.

$ aws secretsmanager get-secret-value --secret-id SFTP/woody-clever-bluebird | jq -r '.SecretString' | jq -r
{
  "Password": "**********",
  "HomeDirectoryType": "LOGICAL",
  "HomeDirectoryDetails": "[ { \"Entry\":\"/\", \"Target\": \"/**********/11/upload\" } ]",
  "Role": "arn:aws:iam::**********:role/preprod-sftp-user-role",
  "Policy": "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetBucketLocation\", \"s3:ListAllMyBuckets\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::**********\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:DeleteObjectVersion\", \"s3:DeleteObject\", \"s3:GetObjectACL\", \"s3:GetObjectVersion\", \"s3:GetObject\", \"s3:PutObjectACL\", \"s3:PutObject\" ], \"Resource\": [ \"arn:aws:s3:::**********/upload/*\" ] } ] }"
}
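
Note how HomeDirectoryDetails and Policy come back as JSON strings inside the secret; this is why the Terraform above nests jsonencode calls. A quick sketch to unwrap them, with the region and secret name assumed from this setup:

import json

import boto3

sm = boto3.client("secretsmanager", region_name="us-east-2")
raw = sm.get_secret_value(SecretId="SFTP/woody-clever-bluebird")["SecretString"]

secret = json.loads(raw)                              # outer jsonencode
details = json.loads(secret["HomeDirectoryDetails"])  # inner jsonencode
policy = json.loads(secret["Policy"])                 # inner jsonencode
print(details[0]["Entry"], "->", details[0]["Target"])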

CloudWatch Logs

$ aws logs describe-log-groups | jq -r '.logGroups[] | select(.logGroupName | test("transfer|sftp")) | .logGroupName'
/aws/lambda/preprod-sftp-lambda-authentication
/aws/transfer/s-**********a34dbab
/aws/transfer/s-**********3446e1b
/aws/transfer/s-**********42466db

IAM Roles and Policies

IAM Roles

$ aws iam list-roles | jq -r '.Roles[] | select (.RoleName | test("sftp")) | .RoleName'
preprod-sftp-lambda-authentication
preprod-sftp-server-transfer-logging
preprod-sftp-user-role

IAM Policies

$ aws iam list-policies | jq -r '.Policies[] | select (.PolicyName | test("sftp")) | .PolicyName'
preprod-sftp-lambda-authentication-inline
preprod-sftp-lambda-authentication-logs
preprod-sftp-user-role

Alternative

AWS Transfer Family is NOT cheap, so an alternative is to skip it and use the Amazon S3 protocol directly: set up each user as an IAM user whose inline policy limits them to their own slice of the bucket.

Amazon S3 Protocol

modules/iam-user-upload/
├── main.tf
└── variables.tf

main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.56"
    }
  }
}

locals {
  buckets = flatten([
    for key in var.buckets : key
  ])
}

data "aws_caller_identity" "this" {}

########################
## IAM
########################

data "aws_iam_policy_document" "this" {
  for_each = var.users

  statement {
    actions = [
      "s3:GetBucketLocation",
    ]

    condition {
      test     = "StringEquals"
      variable = "aws:PrincipalAccount"

      values = [
        data.aws_caller_identity.this.account_id
      ]
    }

    resources = ["*"]
  }

  statement {
    actions = [
      "s3:ListBucket"
    ]

    condition {
      test     = "StringEquals"
      variable = "aws:PrincipalAccount"

      values = [
        data.aws_caller_identity.this.account_id
      ]
    }

    condition {
      test     = "StringLike"
      variable = "s3:prefix"

      values = [
        "assets/practice/${each.key}/${var.upload_folder}/*"
      ]
    }

    resources = [
      for key in var.buckets : "arn:aws:s3:::${key}"
    ]
  }

  statement {
    actions = [
      "s3:List*",
      "s3:DeleteObject",
      "s3:DeleteObjectVersion",
      "s3:GetObject",
      "s3:GetObjectACL",
      "s3:GetObjectVersion",
      "s3:PutObject",
      "s3:PutObjectACL"
    ]

    condition {
      test     = "StringEquals"
      variable = "aws:PrincipalAccount"

      values = [
        data.aws_caller_identity.this.account_id
      ]
    }

    resources = concat(
      formatlist("arn:aws:s3:::%s/assets/office/${each.key}/${var.upload_folder}", local.buckets),
      formatlist("arn:aws:s3:::%s/assets/office/${each.key}/${var.upload_folder}/*", local.buckets)
    )
  }
}

resource "aws_iam_user" "this" {
  for_each = var.users
  name     = "${var.stage}-upload-${each.value}"
  path     = "/"
  tags     = var.tags
}

resource "aws_iam_access_key" "this" {
  for_each = var.users
  user     = aws_iam_user.this[each.key].name
}

resource "aws_iam_user_policy" "this" {
  for_each = var.users
  name     = "${var.stage}-upload-user-${each.value}"
  user     = aws_iam_user.this[each.key].name
  policy   = data.aws_iam_policy_document.this[each.key].json
}

#########################
### Secrets Manager
#########################

# NOTE: Since we aren't specifying a KMS key, this will default to using
# `aws/secretsmanager`.
resource "aws_secretsmanager_secret" "this" {
  for_each    = var.users
  name        = "${var.stage}-upload/${each.value}"
  description = "Managed by Terraform"
  tags        = var.tags
}

resource "aws_secretsmanager_secret_version" "this" {
  for_each  = var.users
  secret_id = aws_secretsmanager_secret.this[each.key].id

  secret_string = jsonencode({
    OFFICE_ID             = each.key
    AWS_ACCESS_KEY_ID     = aws_iam_access_key.this[each.key].id
    AWS_SECRET_ACCESS_KEY = aws_iam_access_key.this[each.key].secret
    S3_BUCKET_PATHS       = join(",", formatlist("%s/assets/office/${each.key}/${var.upload_folder}", local.buckets))
  })
}

#########################
### Local Exec
#########################

resource "null_resource" "this" {
  for_each = var.users

  provisioner "local-exec" {
    command = "aws s3api put-object --bucket ${var.buckets[0]} --key assets/practice/${each.key}/${var.upload_folder}/"
  }
}

variables.tf

variable "stage" {
  type        = string
  description = "Required: Stage/Environment"
}

variable "users" {
  type        = map(any)
  description = "Required: Users"
}

variable "buckets" {
  type        = list(any)
  description = "Required: S3 bucket name"
}

variable "upload_folder" {
  type        = string
  default     = "upload"
  description = "Optional: Upload folder source name"
}

variable "tags" {
  type = map(any)
  default = {
    "infra" = "terraform"
  }
  description = "Optional: Tags"
}


An example environment setup for the upload users:

module "migration_users" {
  source = "../modules/iam-user-upload"

  stage = local.gf_cluster

  buckets = [
    "bucket-name-us-east-2",
    "bucket-name-us-west-2",
  ]

  users = {
    10 = "alan-adkin",
    11 = "brian-burns",
    12 = "christy-carmichael",
  }

  tags = local.tags
}
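
With the secret in hand, a client can upload straight to S3. Below is a minimal, hypothetical sketch using boto3; the secret name, region, and file name are assumptions based on the module above:

import json

import boto3

# Read the per-user secret (name pattern: "<stage>-upload/<user>").
sm = boto3.client("secretsmanager", region_name="us-east-2")
creds = json.loads(
    sm.get_secret_value(SecretId="preprod-upload/alan-adkin")["SecretString"]
)

# Use the embedded access keys; the inline IAM policy limits these
# credentials to the user's own prefix.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=creds["AWS_SECRET_ACCESS_KEY"],
)

# S3_BUCKET_PATHS entries look like "<bucket>/assets/office/<id>/upload".
bucket, _, prefix = creds["S3_BUCKET_PATHS"].split(",")[0].partition("/")
s3.upload_file("product-skus.csv", bucket, f"{prefix}/product-skus.csv")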

Verify

AWS CLI

$ aws transfer test-identity-provider --server-id s-**********42466db --user-name woody-clever-bluebird --user-password '**********' --server-protocol SFTP --source-ip 127.0.0.1
{
    "Response": "{\"HomeDirectoryDetails\":\"[{\\\"Entry\\\":\\\"/\\\",\\\"Target\\\":\\\"/**********/11/upload\\\"}]\",\"HomeDirectoryType\":\"LOGICAL\",\"Role\":\"arn:aws:iam::**********:role/preprod-sftp-user-role\",\"Policy\":\"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetBucketLocation\", \"s3:ListAllMyBuckets\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::**********\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:DeleteObjectVersion\", \"s3:DeleteObject\", \"s3:GetObjectACL\", \"s3:GetObjectVersion\", \"s3:GetObject\", \"s3:PutObjectACL\", \"s3:PutObject\" ], \"Resource\": [ \"arn:aws:s3:::**********/*\" ] } ] }\",\"UserName\":\"woody-clever-bluebird\",\"IdentityProviderType\":\"AWS_LAMBDA\"}",
    "StatusCode": 200,
    "Message": ""
}

SFTP Client

$ sftp woody-clever-bluebird@s-**********42466db.server.transfer.us-east-2.amazonaws.com
woody-clever-bluebird@s-**********42466db.server.transfer.us-east-2.amazonaws.com's password:
Connected to s-**********42466db.server.transfer.us-east-2.amazonaws.com.
sftp> put /Users/cwong/Downloads/still_dre_24.mp3
Uploading /Users/cwong/Downloads/still_dre_24.mp3 to /still_dre_24.mp3
/Users/cwong/Downloads/still_dre_24.mp3                                  100%   76KB 198.0KB/s   00:00
sftp> dir
still_dre_24.mp3
sftp> pwd
Remote working directory: /
sftp> cd ..
sftp> pwd
Remote working directory: /
sftp>

Notice that cd .. leaves us at /: the LOGICAL home directory keeps each user locked inside their upload folder.

Resources

  • AWS Transfer for SFTP – Fully Managed SFTP Service for Amazon S3
  • Create an FTPS-enabled server
  • AWS SFTP Transfer with a Custom Identity Provider
  • Configuring AWS Transfer for SFTP: IAM, Route 53 and CloudWatch
  • Working with custom identity providers
  • How AWS Transfer Family works
  • Avoid setstat errors

