AWS Parameter Store
- Terraform Setup for Engineering and Container Access
- AWS CLI
- SSH authorized_keys
- Dockerfile
- References
AWS Parameter Store allows us to store config key/value credentials in an encrypted manner. They can then be extracted from EC2 instances/ECS containers with IAM policy access. This allows our current applications/services to move away from storing credentials in S3.
Benefits:
- Centralized service
- IAM policy access
- Change history for auditing
- Retrieve during run-time
Our parameter structure is defined as follows:
/ ENVIRONMENT / APPLICATION / KEY
For example, a grafana key in the dev environment is:
/dev/grafana/GF_DATABASE_USER
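As a hypothetical illustration (this helper is not part of our tooling), the naming scheme can be expressed as a small Python function:

```python
def param_name(environment, application, key):
    """Return a Parameter Store name like /dev/grafana/GF_DATABASE_USER."""
    parts = (environment, application, key)
    if not all(parts) or any("/" in p for p in parts):
        raise ValueError("segments must be non-empty and contain no '/'")
    return "/{}/{}/{}".format(environment, application, key)
```

For example, `param_name("dev", "grafana", "GF_DATABASE_USER")` returns `/dev/grafana/GF_DATABASE_USER`.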
Terraform Setup for Engineering and Container Access
In conjunction with our ECS stack for setting up the ALB and service definition in Terraform modules, the following resources allow the ECS instances/containers to communicate with Parameter Store.
KMS Key
// access policy (kms key)
data "aws_iam_policy_document" "key_policy" {
  statement {
    sid    = "Enable IAM User Permissions"
    effect = "Allow"

    actions = [
      "kms:*",
    ]

    principals {
      type = "AWS"

      identifiers = [
        "arn:aws:iam::${var.account_id}:root",
      ]
    }

    resources = [
      "*",
    ]
  }
}
// create kms key ID
resource "aws_kms_key" "key" {
  description         = "Parameter Store Key - ${var.environment} container"
  key_usage           = "ENCRYPT_DECRYPT"
  is_enabled          = true
  enable_key_rotation = false
  policy              = "${data.aws_iam_policy_document.key_policy.json}"

  tags {
    Environment = "${var.environment}"
    Terraform   = "true"
  }
}

// map kms key ID to alias
resource "aws_kms_alias" "alias" {
  name          = "alias/${var.environment}-${var.region}-ecs-key"
  target_key_id = "${aws_kms_key.key.key_id}"
}
Engineering Access
First of all, we want engineers to have access to the lower environments and the AWS console. Simply attach the following policy to the engineers' IAM users or groups …
// access policy (aws console access, engineering rw)
data "aws_iam_policy_document" "param_store_policy_rw" {
  statement {
    actions = [
      "ssm:DescribeParameters",
    ]

    resources = [
      "*",
    ]
  }

  statement {
    actions = [
      "kms:Decrypt",
      "ssm:DescribeParameters",
      "ssm:GetParameter",
      "ssm:GetParameterHistory",
      "ssm:GetParameters",
      "ssm:GetParametersByPath",
      "ssm:PutInventory",
      "ssm:PutParameter",
    ]

    resources = [
      "arn:aws:ssm:${var.region}:${var.account_id}:parameter/${var.environment}/*",
      "${aws_kms_key.key.arn}",
    ]
  }
}
// join access policy document to policy for rw
resource "aws_iam_policy" "param_store_policy_rw" {
  name   = "${var.environment}-${var.region}-parameter-store-policy-rw"
  policy = "${data.aws_iam_policy_document.param_store_policy_rw.json}"
}
Container Instances Access
We can attach the following resource to the ECS cluster's aws_iam_role. Instances will then have access to environment parameters such as authorized_keys.
// access policy (ec2 ro)
data "aws_iam_policy_document" "param_store_policy_ro" {
  statement {
    actions = [
      "kms:Decrypt",
      "ssm:GetParameterHistory",
      "ssm:GetParameter",
      "ssm:GetParameters",
      "ssm:GetParametersByPath",
    ]

    resources = [
      "arn:aws:ssm:${var.region}:${var.account_id}:parameter/${var.environment}/*",
      "arn:aws:ssm:${var.region}:${var.account_id}:parameter/global/*",
      "${aws_kms_key.key.arn}",
    ]
  }
}

// join access policy document to policy for ro
resource "aws_iam_policy" "param_store_policy_ro" {
  name   = "${var.environment}-${var.region}-parameter-store-policy-ro"
  policy = "${data.aws_iam_policy_document.param_store_policy_ro.json}"
}
Policy Attachment
We have a “cluster” module which sets up:
- IAM Role
- IAM Role Policy
- Security Groups
- ECS Cluster
- Launch Configuration
- Autoscaling Group/Policy
- Cloudwatch Metric Alarm
The ${module.cluster_default_c01.container_policy} reference is one of the module's "outputs", exposing the name of its aws_iam_role:
output "container_policy" {
  value = "${aws_iam_role.container_instance.name}"
}
We can attach the ro access policy to the container policy.
resource "aws_iam_role_policy_attachment" "policy_attachment_default_c01" {
  role       = "${module.cluster_default_c01.container_policy}"
  policy_arn = "${aws_iam_policy.param_store_policy_ro.arn}"
}
Container Services Access
Lastly, we will set up a service-based policy to restrict each individual service's access by assigning it its own Task Role.
Notice! Obviously the following resources should live in the aws_ecs_service module, with variables passed in (¬、¬) …
// association to task definition
data "aws_iam_policy_document" "aws_iam_role_document" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type = "Service"

      identifiers = [
        "ecs-tasks.amazonaws.com",
        "ecs.amazonaws.com",
      ]
    }
  }
}
// in-line access policy (ecs ro)
data "aws_iam_policy_document" "aws_iam_role_policy_document" {
  statement {
    actions = [
      "kms:Decrypt",
      "ssm:GetParameterHistory",
      "ssm:GetParameter",
      "ssm:GetParameters",
      "ssm:GetParametersByPath",
    ]

    resources = [
      "arn:aws:ssm:${var.region}:${var.account_id}:parameter/${var.environment}/${var.application}/*",
      "${var.kms_key}",
    ]
  }
}
resource "aws_iam_role" "task_role" {
  name               = "${var.environment}-${var.region}-${var.application}-task-role"
  assume_role_policy = "${data.aws_iam_policy_document.aws_iam_role_document.json}"
}

resource "aws_iam_role_policy" "task_role_policy" {
  name   = "${var.environment}-${var.region}-${var.application}-task-role-policy"
  role   = "${aws_iam_role.task_role.id}"
  policy = "${data.aws_iam_policy_document.aws_iam_role_policy_document.json}"
}

// map the role to the ecs task definition, along with the rest of the arguments
resource "aws_ecs_task_definition" "ecs_task_definition" {
  task_role_arn = "${aws_iam_role.task_role.arn}"
  // ...
}
Example of how to pass variables into our alb web service module:
module "web_service_application_c01" {
  source      = "../../modules/ecs_stack/alb_service"
  account_id  = "****"
  application = "grafana"
  kms_key     = "${aws_kms_key.key.arn}"
  // ...
}
We can attach additional policies to the Task Role after defining the ALB web service above.
resource "aws_iam_role_policy_attachment" "task_role_policy_attachment_application_c01" {
  role       = "${module.web_service_application_c01.service_policy}"
  policy_arn = "${aws_iam_policy.additional_application_policy.arn}"
}
AWS CLI
Create Parameters
Warning! If your value contains http://, the call will error out with received non 200 status code of 404. The workaround is to use --cli-input-json, documented here.
$ aws ssm put-parameter --region us-east-1 --name /environment/application/key_name --type "SecureString" --value "VALUE"
$ aws ssm put-parameter --region us-east-1 --cli-input-json '{"Type": "SecureString", "Name": "/environment/application/key_name", "Value": "https://VALUE"}'
Update Parameters
$ aws ssm put-parameter --region us-east-1 --name /environment/application/key_name --type "SecureString" --value "NEW_VALUE" --overwrite
Delete Parameters
$ aws ssm delete-parameter --region us-east-1 --name /environment/application/key_name
Get Parameters
$ aws ssm get-parameters --region us-east-1 --names /environment/application/key_name --with-decryption
{
    "InvalidParameters": [],
    "Parameters": [
        {
            "Type": "SecureString",
            "Name": "/environment/application/key_name",
            "Value": "VALUE"
        }
    ]
}
Get Parameters by Path
In order to get an entire path, we can either use a recent version of awscli or 3rd-party tools. get-parameters-by-path is available in awscli version 1.14.30 or above.
$ aws --version
aws-cli/1.14.30 Python/3.6.4 Darwin/16.7.0 botocore/1.8.34
Example 1: Search by Path
$ aws ssm get-parameters-by-path --region us-east-1 --with-decryption --path "/environment/application/"
{
    "Parameters": [
        {
            "Name": "/environment/application/key_name",
            "Type": "SecureString",
            "Value": "VALUE",
            "Version": 1
        },
        ...
        {
            "Name": "/environment/application/foo",
            "Type": "SecureString",
            "Value": "BAR",
            "Version": 1
        }
    ]
}
With that, we can set all the key pairs as environment variables. This can be done with a one-liner, but for the purpose of this how-to, I am breaking it down step by step.
- Get all the parameters and output them to a file in key_name="VALUE" format using jq.
- Strip out the "path" using sed.
- Take action on the file. In my case, I just source it for environment variables.
$ aws ssm get-parameters-by-path --region us-east-1 --with-decryption --path "/dev/grafana/" | jq -r '.Parameters[] | ([.Name, "\"" + .Value + "\""]) | join("=")' > /tmp/dev.grafana.ini
$ cat /tmp/dev.grafana.ini
/dev/grafana/GF_EXTERNAL_IMAGE_STORAGE_PROVIDER="s3"
/dev/grafana/GF_EXTERNAL_IMAGE_STORAGE_S3_REGION="us-east-1"
/dev/grafana/GF_INSTALL_PLUGINS="abhisant-druid-datasource,grafana-piechart-panel,grafana-worldmap-panel,grafana-simple-json-datasource"
/dev/grafana/GF_USERS_ALLOW_SIGN_UP="false"
...
$ sed -i~ 's#/dev/grafana/##' /tmp/dev.grafana.ini
$ source /tmp/dev.grafana.ini
$ set | grep "GF_INSTALL_PLUGINS"
GF_INSTALL_PLUGINS=abhisant-druid-datasource,grafana-piechart-panel,grafana-worldmap-panel,grafana-simple-json-datasource
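The jq + sed steps above can also be sketched in Python. This is a hypothetical stand-in: the `raw` string below mimics the JSON that `aws ssm get-parameters-by-path` would return, rather than calling AWS.

```python
import json

# Sample payload standing in for `aws ssm get-parameters-by-path` output.
raw = """{"Parameters": [
  {"Name": "/dev/grafana/GF_USERS_ALLOW_SIGN_UP", "Type": "SecureString",
   "Value": "false", "Version": 1},
  {"Name": "/dev/grafana/GF_EXTERNAL_IMAGE_STORAGE_PROVIDER", "Type": "SecureString",
   "Value": "s3", "Version": 1}
]}"""

path = "/dev/grafana/"
lines = []
for p in json.loads(raw)["Parameters"]:
    # join Name and quoted Value with "=", stripping the path prefix (the sed step)
    name = p["Name"]
    if name.startswith(path):
        name = name[len(path):]
    lines.append('{}="{}"'.format(name, p["Value"]))

print("\n".join(lines))
# GF_USERS_ALLOW_SIGN_UP="false"
# GF_EXTERNAL_IMAGE_STORAGE_PROVIDER="s3"
```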
Example 2: Recursive Search
This example does a recursive search using the --recursive option on the / path, which will dump ALL the parameters. Good for auditing.
$ time aws ssm get-parameters-by-path --region us-east-1 --with-decryption --recursive --path "/" | jq -r '.Parameters[] | ([.Name, "\"" + .Value + "\""]) | join("=")'
/dev/grafana/GF_EXTERNAL_IMAGE_STORAGE_PROVIDER="s3"
/dev/grafana/GF_EXTERNAL_IMAGE_STORAGE_S3_REGION="us-east-1"
/dev/grafana/GF_INSTALL_PLUGINS="abhisant-druid-datasource,grafana-piechart-panel,grafana-worldmap-panel,grafana-simple-json-datasource"
/dev/grafana/GF_USERS_ALLOW_SIGN_UP="false"
...
/prd/grafana/GF_EXTERNAL_IMAGE_STORAGE_PROVIDER="s3"
/prd/grafana/GF_EXTERNAL_IMAGE_STORAGE_S3_REGION="us-east-1"
/prd/grafana/GF_INSTALL_PLUGINS="abhisant-druid-datasource,grafana-piechart-panel,grafana-worldmap-panel,grafana-simple-json-datasource"
/prd/grafana/GF_USERS_ALLOW_SIGN_UP="false"
...
/stg/grafana/GF_EXTERNAL_IMAGE_STORAGE_PROVIDER="s3"
/stg/grafana/GF_EXTERNAL_IMAGE_STORAGE_S3_REGION="us-east-1"
/stg/grafana/GF_INSTALL_PLUGINS="abhisant-druid-datasource,grafana-piechart-panel,grafana-worldmap-panel,grafana-simple-json-datasource"
/stg/grafana/GF_USERS_ALLOW_SIGN_UP="false"
...
real 0m21.341s
user 0m0.707s
sys 0m0.114s
Incorporating Parameter as CLI Variable
Assuming we already have the database hostname and username, and only the password comes from Parameter Store:
$ mysql -h $GF_DATABASE_HOST -u $GF_DATABASE_USER -p$(aws ssm get-parameters --region us-east-1 --names /dev/grafana/GF_DATABASE_PASSWORD --with-decryption --query Parameters[0].Value --output text)
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 427381
Server version: 5.6.10 MySQL Community Server (GPL)
...
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
3rd Party Tools
Here is an example on macOS using a great 3rd-party tool called aws-env.
$ wget https://github.com/Droplr/aws-env/raw/master/bin/aws-env-darwin-amd64 -O /usr/local/bin/aws-env && chmod 755 /usr/local/bin/aws-env
$ AWS_ENV_PATH=/dev/grafana/ AWS_REGION=us-east-1 aws-env
export GF_EXTERNAL_IMAGE_STORAGE_PROVIDER=$'s3'
export GF_EXTERNAL_IMAGE_STORAGE_S3_REGION=$'us-east-1'
export GF_INSTALL_PLUGINS=$'abhisant-druid-datasource,grafana-piechart-panel,grafana-worldmap-panel,grafana-simple-json-datasource'
export GF_USERS_ALLOW_SIGN_UP=$'false'
...
Rendering Template File
We can also use the environment variables from parameter store to generate config files by rendering jinja2 templates using envtpl.
$ pip install envtpl
$ eval $(AWS_ENV_PATH=/dev/grafana/ AWS_REGION=us-east-1 /bin/aws-env)
$ envtpl /etc/grafana/grafana.ini.tpl --keep-template
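The substitution envtpl performs can be sketched in Python. This is a hypothetical stand-in using the stdlib string.Template ($VAR syntax) rather than envtpl's jinja2, with a made-up one-line grafana.ini fragment:

```python
import os
from string import Template

# Pretend the variable was sourced from Parameter Store via aws-env.
os.environ["GF_USERS_ALLOW_SIGN_UP"] = "false"

# Minimal template fragment; envtpl would use {{ GF_USERS_ALLOW_SIGN_UP }} instead.
template = "[users]\nallow_sign_up = $GF_USERS_ALLOW_SIGN_UP\n"
rendered = Template(template).substitute(os.environ)
print(rendered)
# [users]
# allow_sign_up = false
```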
SSH authorized_keys
We also store our engineers' public SSH keys as parameters. All engineers have access to dev and staging, so their keys are in /global/authorized_keys/USERNAME. For production access, we put their keys in /prd/authorized_keys/USERNAME.
$ aws ssm get-parameters --region us-east-1 --names /global/authorized_keys/calvin.wong --with-decryption
{
    "InvalidParameters": [],
    "Parameters": [
        {
            "Type": "SecureString",
            "Name": "/global/authorized_keys/calvin.wong",
            "Value": "ssh-rsa AAA......=== Calvin Wong"
        }
    ]
}
The authorized_keys file is generated during instance provisioning, either in a script or in user_data:
[ "$ENVIRONMENT" == "prd" ] && _env="$ENVIRONMENT" || _env="global"; AWS_ENV_PATH=/${_env}/authorized_keys/ AWS_REGION=${AWS_REGION} aws-env | awk -F'$' '{print $2}' | tr -d "'" >> /home/ec2-user/.ssh/authorized_keys
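The awk/tr extraction in the one-liner above can be sketched in Python. Assuming aws-env emits lines of the form `export NAME=$'value'` (as shown in the earlier aws-env output), we want only the value:

```python
import re

# One sample line as aws-env would print it for an authorized_keys parameter.
sample = "export calvin.wong=$'ssh-rsa AAA......=== Calvin Wong'"

# Capture everything between $' and the trailing quote -- the public key itself.
match = re.match(r"export [^=]+=\$'(.*)'$", sample)
public_key = match.group(1)
print(public_key)
# ssh-rsa AAA......=== Calvin Wong
```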
Dockerfile
FROM grafana/grafana

ARG AWS_ENV_BIN="https://github.com/Droplr/aws-env/raw/master/bin/aws-env-linux-amd64"

RUN curl -L -o /bin/aws-env ${AWS_ENV_BIN} && \
    chmod +x /bin/aws-env

ENTRYPOINT ["/bin/bash", "-c", "eval $(AWS_ENV_PATH=/${NODE_ENV}/${NODE_TYPE}/ AWS_REGION=${AWS_REGION} /bin/aws-env) && /run.sh"]
References
- Parameter Store Instead of Hashicorp Vault
- Secure credentials for ECS tasks using the EC2 Parameter Store
- The Right Way to Manage Secrets with AWS
- Use Parameter Store to Securely Access Secrets and Config Data in AWS CodeDeploy
- Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks