Saturday, April 22, 2017

Kubernetes Share Config across services

The following blog post shows one way to let one Kubernetes service generate configuration and share it with other services. It may not be the best approach, but it is one approach that makes the sharing work.

Example Scenario

A Kubernetes service (like riak-cs or a MySQL random password generator) that generates a random credential during its first startup. We will refer to it as the datastore service in the article below.

Another Kubernetes service (like a web application) that needs the credential to access the datastore. We will refer to it as the client service in the article below.


There are several approaches:

  • Utilize another key-value service like Vault
    • The downside is the need to manage another service, and it still remains something of a chicken-and-egg problem: if you use an auto-generated credential, that credential needs to be stored somewhere.
  • Store the credential somewhere in the pod, then have an orchestrator use kubectl exec to pull the data from one service and present it as a secret to another
    • This does not seem to be a generic approach (attaching to a process through exec never seems like a good idea for a production system), and the third-party orchestrator's parsing may not be reliable

So the solution I settled on is to utilize the Kubernetes Secrets API and share the secrets across services within the same namespace.

Prerequisite

Kubernetes 1.6+ (due to the requirement of RBAC authorization).
The following example uses minikube (0.18.0).


Setup

The datastore service will be created; its pod will auto-generate an account and push the account credential into a new secret in the namespace.

The client service will be created with the secret and will be able to consume the datastore service.


In a normal setup, a pod in a namespace other than kube-system is not able to access the Kubernetes Secrets API.

So we need to create an additional role to address this requirement. The reason we choose RBAC mode over the current default ABAC (1.6) is that ABAC does not allow dynamic role creation (it requires a restart, and its control over resources is not as fine grained).

More documentation can be found at:
https://kubernetes.io/docs/admin/authorization/

Start up minikube with RBAC mode

minikube start --vm-driver=kvm --extra-config=apiserver.Authorization.Mode=RBAC
Create a namespace, with a role that can access all secrets, in a yaml file

cat > create_default_namespace_role.yaml << END
kind: Namespace
apiVersion: v1
metadata:
  name: dummy
  labels:
    name: dummy
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: dummy
  name: dummy-default-role
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dummy-default-rolebinding
  namespace: dummy
subjects:
  - kind: ServiceAccount
    name: default
    namespace: dummy
roleRef:
  kind: Role
  name: dummy-default-role
  apiGroup: rbac.authorization.k8s.io
END
Create the namespace and default role
kubectl create -f  create_default_namespace_role.yaml
Prepare a docker image for the datastore service; at the point where the auto-generated credential is exposed, do the following:


DUMMY_ACCESS_KEY='keyid'
DUMMY_ACCESS_SECRET='keysecret'  

curl -X POST -H "Content-Type:application/json" -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -k https://${KUBERNETES_PORT_443_TCP_ADDR}/api/v1/namespaces/$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)/secrets -d "{\"apiVersion\":\"v1\",\"data\":{\"DUMMY_ACCESS_KEY\":\"$(echo -n ${DUMMY_ACCESS_KEY} | base64)\",\"DUMMY_ACCESS_SECRET\":\"$(echo -n ${DUMMY_ACCESS_SECRET} | base64)\"},\"kind\":\"Secret\",\"metadata\":{\"name\":\"dummy-admin-credential\"}}"
Note: the data fields must be base64-encoded before they can be passed to the API. Use echo -n so the trailing newline is not encoded into the value.
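The trailing-newline pitfall is easy to demonstrate; a plain echo encodes an extra byte into the secret value:

```shell
# echo appends a newline, which gets encoded into the base64 value;
# echo -n encodes exactly the credential bytes.
echo 'keyid' | base64      # a2V5aWQK  ("keyid\n" - wrong value stored)
echo -n 'keyid' | base64   # a2V5aWQ=  ("keyid"   - the exact value)
```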

If it is possible that the credential changes at a later stage, one can use PUT to update the secret (note that the client service may need to be restarted to pick up the change).



DUMMY_ACCESS_KEY='keyid'
DUMMY_ACCESS_SECRET='keysecret'

curl -X PUT -H "Content-Type:application/json" -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -k https://${KUBERNETES_PORT_443_TCP_ADDR}/api/v1/namespaces/$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)/secrets/dummy-admin-credential -d "{\"apiVersion\":\"v1\",\"data\":{\"DUMMY_ACCESS_KEY\":\"$(echo -n ${DUMMY_ACCESS_KEY} | base64)\",\"DUMMY_ACCESS_SECRET\":\"$(echo -n ${DUMMY_ACCESS_SECRET} | base64)\"},\"kind\":\"Secret\",\"metadata\":{\"name\":\"dummy-admin-credential\"}}"

Once the datastore service is up, from the outside one can use kubectl to check the generated credential

kubectl get secret dummy-admin-credential -o yaml --namespace dummy

and base64-decode the DUMMY_ACCESS_KEY and DUMMY_ACCESS_SECRET values
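For example, with the sample values used above, the decode step looks like this (the kubectl one-liner in the comment assumes the secret and namespace names from this post):

```shell
# Pull a single data field straight from the cluster and decode it:
#   kubectl get secret dummy-admin-credential --namespace dummy \
#     -o jsonpath='{.data.DUMMY_ACCESS_KEY}' | base64 -d
# Offline illustration with the sample values from this post:
echo 'a2V5aWQ=' | base64 -d        # keyid
echo 'a2V5c2VjcmV0' | base64 -d    # keysecret
```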

In the client service, the docker container can either do the following to get the credential (using base64 decoding plus JSON processing)

curl -X GET -H "Content-Type:application/json" -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -k https://${KUBERNETES_PORT_443_TCP_ADDR}/api/v1/namespaces/$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)/secrets/dummy-admin-credential
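A sketch of the JSON processing step, assuming jq is available in the container image (RESPONSE stands in for the body returned by the curl call above; here it is a minimal sample for illustration):

```shell
# Extract one data field from the Secrets API response and decode it.
RESPONSE='{"kind":"Secret","data":{"DUMMY_ACCESS_KEY":"a2V5aWQ=","DUMMY_ACCESS_SECRET":"a2V5c2VjcmV0"}}'
DUMMY_ACCESS_KEY=$(echo "$RESPONSE" | jq -r '.data.DUMMY_ACCESS_KEY' | base64 -d)
echo "$DUMMY_ACCESS_KEY"   # keyid
```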


or do it the proper way by attaching the secret to the pod.
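A sketch of attaching the secret to the client pod as environment variables (the pod name and image are placeholders; the secret and key names match the ones created above):

```
apiVersion: v1
kind: Pod
metadata:
  name: client
  namespace: dummy
spec:
  containers:
    - name: client
      image: client-image   # placeholder image
      env:
        - name: DUMMY_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: dummy-admin-credential
              key: DUMMY_ACCESS_KEY
        - name: DUMMY_ACCESS_SECRET
          valueFrom:
            secretKeyRef:
              name: dummy-admin-credential
              key: DUMMY_ACCESS_SECRET
```

With this approach the client container never talks to the Secrets API directly; kubelet injects the decoded values at startup.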