
EKS Group Access using AWS IAM

05 Dec 2019

Reading time ~4 minutes

It took our team a few days to figure out everything we needed to get this working in EKS. The docs online are confusing and long, so I decided to write down the step-by-step setup using Terraform.

For a more detailed but confusing doc, you can check out cloudjourney.io.

This document assumes you already have your Kubernetes control plane and cluster set up. Here is an example of our Terraform module in dev.

Terraform Modules for Kubernetes Server and Cluster

module "default-eks-c01" {
  source                        = "../../modules/aws_eks/cluster"
  cluster-name                  = "${var.env}-default-eks"
  environment                   = "${var.env}"
  cluster_num                   = "01"
  instance_type                 = "m4.2xlarge"
  spot_price                    = "0.40"
  eks_desired_size              = "3"
  eks_min_size                  = "3"
  key_name                      = "${lookup(var.ec2_keys, var.region)}"
  vpc_id                        = "${module.vpc.vpc_id}"
  allowed_cidr_blocks           = ["172.***.***.0/12", "172.***.***.0/16", "10.***.***.0/8", "207.***.***.0/27"]
  eks_subnets                   = ["subnet-ae69e***", "subnet-e8f68***", "subnet-f35dc***"]
  worker_subnet                 = ["subnet-ae69e***", "subnet-e8f68***", "subnet-f35dc***"]
  subnet_ids                    = ["subnet-ae69e***", "subnet-e8f68***", "subnet-f35dc***"]
}
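
Before going further, it helps to confirm the cluster is actually up. A quick sanity check, using the cluster name and region that appear later in this post, should return "ACTIVE" for a healthy cluster:

$ aws eks describe-cluster --region us-east-1 --name dev-default-eks-c01 --query cluster.status
"ACTIVE"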

IAM Roles and Policies

Our next step is to set up the access roles for the admin and user groups. The following IAM resources only show the user group setup; the admin setup is the same.

resource "aws_iam_role" "eks-access-role-user" {
  name        = "${var.env}-${var.region_shorthand}-eks-access-role-user"
  path        = "/"
  description = "Managed by Terraform"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com",
        "AWS": "arn:aws:iam::${var.account_id}:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy" "eks-access-role-policy-user" {
  name = "${var.env}-${var.region_shorthand}-eks-access-role-policy-user"
  role = "${aws_iam_role.eks-access-role-user.id}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "sts:GetCallerIdentity",
      "Resource": "*"
    }
  ]
}
POLICY
}

For eks-auth-assume-role-admin, just change "Action": "eks:DescribeCluster" to "Action": "eks:*". The eks:DescribeCluster permission is what allows you to run aws eks update-kubeconfig.

resource "aws_iam_policy" "eks-auth-assume-role-user" {
  name        = "${var.env}-${var.region_shorthand}-eks-auth-assume-role-user"
  path        = "/"
  description = "Managed by Terraform"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::${var.account_id}:role/${aws_iam_role.eks-access-role-user.id}"
    },
    {
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "*"
    }
  ]
}
POLICY
}
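
So the admin variant is nearly identical. A sketch, assuming an aws_iam_role.eks-access-role-admin resource defined the same way as the user role above:

resource "aws_iam_policy" "eks-auth-assume-role-admin" {
  name        = "${var.env}-${var.region_shorthand}-eks-auth-assume-role-admin"
  path        = "/"
  description = "Managed by Terraform"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::${var.account_id}:role/${aws_iam_role.eks-access-role-admin.id}"
    },
    {
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": "*"
    }
  ]
}
POLICY
}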

Policy Attachment

Attach the assume-role policy to each IAM user who needs Kubernetes access.

$ engineer=eks.user
$ aws iam attach-user-policy --policy-arn arn:aws:iam::************:policy/dev-east-eks-auth-assume-role-user --user-name $engineer

$ aws iam list-attached-user-policies --user-name $engineer
{
    "AttachedPolicies": [
        ...
        {
            "PolicyName": "dev-east-eks-auth-assume-role-user",
            "PolicyArn": "arn:aws:iam::************:policy/dev-east-eks-auth-assume-role-user"
        }
    ]
}
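
If you prefer to keep the attachment in Terraform instead of the CLI, a minimal sketch (eks.user is the same IAM user as $engineer above):

resource "aws_iam_user_policy_attachment" "eks-auth-assume-role-user" {
  # Same effect as the aws iam attach-user-policy call above.
  user       = "eks.user"
  policy_arn = "${aws_iam_policy.eks-auth-assume-role-user.arn}"
}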

ConfigMap and RBAC Setup

Now that all the AWS IAM roles and policies are set up, we need to configure the aws-auth ConfigMap and RBAC in Kubernetes.

aws-auth ConfigMap

The role/dev-eks-worker-nodes-role-c01 role, which comes from module "default-eks-c01", allows worker nodes to join the cluster. I have also set up system:masters access via mapUsers for myself and the devops team.

Our directory tree looks like this; the aws-auth ConfigMap itself follows.

k8s-objects/
├── ConfigMap/
│   └── ConfigMap-aws-auth.yaml
└── RBAC/
    ├── ClusterRole-rfk-cluster-admins.yaml
    ├── ClusterRole-rfk-cluster-users.yaml
    ├── ClusterRoleBinding-rfk-access-admin.yaml
    └── ClusterRoleBinding-rfk-access-user.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::************:role/dev-eks-worker-nodes-role-c01
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::************:role/dev-east-eks-access-role-admin
      username: dev-eks-admin:{{SessionName}}
      groups:
        - rfk-cluster-admins
    - rolearn: arn:aws:iam::************:role/dev-east-eks-access-role-user
      username: dev-eks-user:{{SessionName}}
      groups:
        - rfk-cluster-users
  mapUsers: |
    - userarn: arn:aws:iam::************:user/calvin.wong
      username: calvin.wong
      groups:
        - system:masters
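
Apply the ConfigMap with credentials that already have cluster-admin access (the path matches the tree above):

$ kubectl apply -f k8s-objects/ConfigMap/ConfigMap-aws-auth.yaml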

RBAC

We need to set up ClusterRoles and ClusterRoleBindings.

These ClusterRoles grant access to the following resources and verbs.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rfk-cluster-admins
rules:
  - apiGroups:
      - '*'
    resources:
      - '*'
    verbs:
      - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rfk-cluster-users
rules:
  - apiGroups:
      - '*'
    resources:
      - configmaps
      - cronjobs
      - deployments
      - deployments/scale
      - endpoints
      - ingresses
      - nodes
      - persistentvolumeclaims
      - pods
      - pods/attach
      - pods/exec
      - pods/log
      - pods/portforward
      - pods/proxy
      - pods/status
      - secrets
      - services
      - statefulsets
      - statefulsets/scale
    verbs:
      - create
      - get
      - list
      - patch
      - update
      - watch

These ClusterRoleBindings map each ClusterRole to the matching group we defined in the aws-auth ConfigMap.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rfk-access-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rfk-cluster-admins
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: rfk-cluster-admins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rfk-access-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rfk-cluster-users
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: rfk-cluster-users
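
The RBAC manifests can be applied in one shot (again, paths from the tree above):

$ kubectl apply -f k8s-objects/RBAC/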

User Authentication

Users can now authenticate via the AWS CLI (assuming aws-cli and aws-iam-authenticator are installed).

$ aws eks --region us-east-1 update-kubeconfig --name dev-default-eks-c01

The last step is to edit the kubeconfig file, because we’re using aws-iam-authenticator to assume the access role.

users:
- name: arn:aws:eks:us-east-1:************:cluster/dev-default-eks-c01
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - dev-default-eks-c01
      - -r
      - arn:aws:iam::************:role/dev-east-eks-access-role-user
      command: aws-iam-authenticator
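
Depending on your AWS CLI version, update-kubeconfig can also write the role into the exec section for you via --role-arn, which saves the manual edit:

$ aws eks --region us-east-1 update-kubeconfig --name dev-default-eks-c01 \
    --role-arn arn:aws:iam::************:role/dev-east-eks-access-role-user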

Once authenticated, we can verify our permissions as a member of the user group.

$ kubectl auth can-i get pod
yes

$ kubectl auth can-i delete pod
no

$ kubectl auth can-i get sc
Warning: resource 'storageclasses' is not namespace scoped in group 'storage.k8s.io'
no
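
As a sanity check of the admin binding, an engineer who assumes the admin role (and therefore lands in rfk-cluster-admins) should get the opposite answer for destructive actions:

$ kubectl auth can-i delete pod
yes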

