Dynamic Provisioning in K8S using FlexVolume Plug-in
Unlike the static provisioning example, dynamic provisioning is useful when you do not want the MapR and Kubernetes administrators to create storage manually for every Pod that needs to persist its state/data. Instead, the PersistentVolume is created automatically based on the parameters specified in the referenced StorageClass.
This blog uses a PersistentVolumeClaim that references a StorageClass. In this example, a Kubernetes administrator has created a storage class called secure-maprfs for Pod creators to use when they want to create persistent storage for their Pods. The reclaim policy is set to Retain, so the created Pod storage survives the deletion of the Pod.
1) Below is a sample yaml file to dynamically provision a MapR Flex volume to K8S.
[root@tssperf09 abizerwork]# cat DynamicProvisioner.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: secure-maprfs
  namespace: mapr-system
provisioner: mapr.com/maprfs
parameters:
  restServers: "10.10.70.113:8443"
  cldbHosts: "10.10.70.113 10.10.70.114 10.10.70.115"
  cluster: "ObjectPool"
  securityType: "unsecure"
  ticketSecretNamespace: "mapr-system"
  maprSecretName: "mapr-provisioner-secrets"
  maprSecretNamespace: "mapr-system"
  namePrefix: "pv"
  mountPrefix: "/pv"
  readOnly: "true"
  reclaimPolicy: "Retain"
  advisoryquota: "100M"
---
kind: Pod
apiVersion: v1
metadata:
  name: test-secure-provisioner
  namespace: mapr-system
spec:
  containers:
  - name: busybox
    image: docker.io/maprtech/kdf-plugin:1.0.0_029_centos7
    args:
    - sleep
    - "1000000"
    imagePullPolicy: Always
    volumeMounts:
    - name: maprfs-pvc
      mountPath: "/dynvolume"
  restartPolicy: "Never"
  volumes:
  - name: maprfs-pvc
    persistentVolumeClaim:
      claimName: maprfs-secure-pvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: maprfs-secure-pvc
  namespace: mapr-system
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: secure-maprfs
  resources:
    requests:
      storage: 300M
---
apiVersion: v1
kind: Secret
metadata:
  name: mapr-provisioner-secrets
  namespace: mapr-system
type: Opaque
data:
  MAPR_CLUSTER_USER: cm9vdA==        # username in base64 format
  MAPR_CLUSTER_PASSWORD: YWJpemVy    # password in base64 format
[root@tssperf09 abizerwork]#
Note: Convert the username and password to base64 format before placing them in the Secret, for example at https://www.base64encode.org/.
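Alternatively, encode them from a shell. A quick sketch (cm9vdA== is the base64 form of the user "root" used in this example; substitute your own credentials for the placeholder password):

echo -n 'root' | base64                # prints cm9vdA== ; -n avoids encoding a trailing newline
echo -n 'MySecretPassword' | base64    # placeholder; use your real password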
2) Use the kubectl create command with the -f option to create the StorageClass, Pod, PersistentVolumeClaim, and Secret on the Kubernetes cluster.
[root@tssperf09 abizerwork]# kubectl create -f DynamicProvisioner.yaml
storageclass "secure-maprfs" created
pod "test-secure-provisioner" created
persistentvolumeclaim "maprfs-secure-pvc" created
secret "mapr-provisioner-secrets" created
[root@tssperf09 abizerwork]#
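Before checking the Pod, you can optionally confirm the objects were created as expected (a quick sanity check with standard kubectl commands; output omitted here):

kubectl get storageclass secure-maprfs
kubectl get secret mapr-provisioner-secrets -n mapr-system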
3) Now we can see the "test-secure-provisioner" pod is up and running:
[root@tssperf09 abizerwork]# kubectl get pods -n mapr-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP                NODE
mapr-kdfplugin-6l5n7                   1/1     Running   0          23d   192.168.61.65     tssperf10.lab
mapr-kdfplugin-crzhk                   1/1     Running   0          23d   192.168.217.129   tssperf11.lab
mapr-kdfplugin-srcln                   1/1     Running   0          23d   192.168.196.223   tssperf09.lab
mapr-kdfprovisioner-79b86f459d-hjkcn   1/1     Running   0          23d   192.168.217.130   tssperf11.lab
test-secure                            1/1     Running   0          2d    192.168.61.67     tssperf10.lab
test-secure-provisioner                1/1     Running   0          1m    192.168.217.135   tssperf11.lab
[root@tssperf09 abizerwork]#
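You can also verify that the claim bound to a dynamically created PersistentVolume (illustrative commands; the generated PV name is random and will differ on every run):

kubectl get pvc maprfs-secure-pvc -n mapr-system
kubectl get pv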
4) To review the status of the Pod being spun up, check the "Events" section of the kubectl describe output.
[root@tssperf09 abizerwork]# kubectl describe pod test-secure-provisioner -n mapr-system
Name:         test-secure-provisioner
Namespace:    mapr-system
Node:         tssperf11.lab/10.10.72.251
Start Time:   Fri, 20 Apr 2018 14:57:41 -0600
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           192.168.217.135
Containers:
  busybox:
    Container ID:   docker://dce62da9f665d8362fa62af528fb2c2b005a3b23c505da3764e33d8cf7f2fa37
    Image:          docker.io/maprtech/kdf-plugin:1.0.0_029_centos7
    Image ID:       docker-pullable://docker.io/maprtech/kdf-plugin@sha256:eecb2d64ede9b9232b6eebf5d0cc59fe769d16aeb56467d0a00489ce7224278d
    Port:           <none>
    Args:
      sleep
      1000000
    State:          Running
      Started:      Fri, 20 Apr 2018 14:57:51 -0600
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /dynvolume from maprfs-pvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9g8tq (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  maprfs-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  maprfs-secure-pvc
    ReadOnly:   false
  default-token-9g8tq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9g8tq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                    Message
  ----    ------                 ----  ----                    -------
  Normal  Scheduled              30s   default-scheduler       Successfully assigned test-secure-provisioner to tssperf11.lab
  Normal  SuccessfulMountVolume  30s   kubelet, tssperf11.lab  MountVolume.SetUp succeeded for volume "default-token-9g8tq"
  Normal  SuccessfulMountVolume  22s   kubelet, tssperf11.lab  MountVolume.SetUp succeeded for volume "pv-xplttvibrl"
  Normal  Pulling                21s   kubelet, tssperf11.lab  pulling image "docker.io/maprtech/kdf-plugin:1.0.0_029_centos7"
  Normal  Pulled                 20s   kubelet, tssperf11.lab  Successfully pulled image "docker.io/maprtech/kdf-plugin:1.0.0_029_centos7"
  Normal  Created                20s   kubelet, tssperf11.lab  Created container
  Normal  Started                20s   kubelet, tssperf11.lab  Started container
[root@tssperf09 abizerwork]#
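If you only want the events, the same information can be pulled without the full describe output (an optional convenience; field-selector support for events depends on your kubectl version):

kubectl get events -n mapr-system --field-selector involvedObject.name=test-secure-provisioner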
As seen above, the Pod was scheduled on node tssperf11. If the provisioner runs into any issues while provisioning the volume for the Pod, go to the node where the provisioner pod is running (in this case tssperf11) and tail the provisioner logs to review any errors, figure out the root cause, and fix the issue accordingly.
[root@tssperf11 logs]# tail -12 provisioner-k8s.log
2018/04/19 19:18:33 main.go:443: INFO === Starting volume provisioning ===
2018/04/19 19:18:33 main.go:444: INFO options={Delete pvc-be766816-4438-11e8-b9b1-84b80208e1f2 &PersistentVolumeClaim{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:maprfs-secure-pvc,GenerateName:,Namespace:mapr-system,SelfLink:/api/v1/namespaces/mapr-system/persistentvolumeclaims/maprfs-secure-pvc,UID:be766816-4438-11e8-b9b1-84b80208e1f2,ResourceVersion:3036847,Generation:0,CreationTimestamp:2018-04-19 19:18:33 -0600 MDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{300 6} {<nil>} 300M DecimalSI},},},VolumeName:,Selector:nil,StorageClassName:*secure-maprfs,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},},} map[maprSecretName:mapr-provisioner-secrets mountPrefix:/pv namePrefix:pv readOnly:“true” restServers:10.10.70.113:8443 advisoryquota:100M cldbHosts:10.10.70.113 10.10.70.114 10.10.70.115 cluster:ObjectPool maprSecretNamespace:mapr-system reclaimPolicy:“Retain” securityType:unsecure ticketSecretNamespace:mapr-system]}
2018/04/19 19:18:33 main.go:445: INFO Cleaning parameters...
2018/04/19 19:18:33 main.go:449: INFO Parsing parameters...
2018/04/19 19:18:33 main.go:135: INFO Getting admin secret: mapr-provisioner-secrets from namespace: mapr-system
2018/04/19 19:18:33 main.go:165: INFO Got admin secret
2018/04/19 19:18:33 main.go:452: INFO Constructed server info: (rest: 10.10.70.113:8443, cldb: 10.10.70.113 10.10.70.114 10.10.70.115, cluster: ObjectPool, securitytype: unsecure)
2018/04/19 19:18:33 main.go:105: INFO Convert Kubernetes capacity: %!s(int64=300000000)
2018/04/19 19:18:33 main.go:108: INFO Converted MapR capacity: 300M
2018/04/19 19:18:33 main.go:458: INFO Generated Mapr volumename: pv.uclcmsgkxq mountpoint: /pv/pv-uclcmsgkxq
2018/04/19 19:18:33 main.go:319: INFO Creating MapR query...
2018/04/19 19:18:33 main.go:326: INFO Calling executeQuery with query string: /rest/volume/create?createparent=1&name=pv.uclcmsgkxq&advisoryquota=100M&path=%2Fpv%2Fpv-uclcmsgkxq&quota=300M&mount=1
2018/04/19 19:18:36 main.go:346: INFO Response: {"timestamp":1524186116264,"timeofday":"2018-04-19 06:01:56.264 GMT-0700","status":"OK","total":0,"data":[],"messages":["Successfully created volume: 'pv.uclcmsgkxq'"]}
2018/04/19 19:18:36 main.go:467: INFO Creating Kubernetes PersistentVolume: pv-uclcmsgkxq
2018/04/19 19:18:36 main.go:472: INFO Reclaim Policy: Delete
2018/04/19 19:18:36 main.go:511: INFO === Finished volume provisioning ===
[root@tssperf11 logs]#
As seen in the provisioner log above, the volume was created with the prefix pv followed by a random string (the MapR volume pv.uclcmsgkxq and the Kubernetes PersistentVolume pv-uclcmsgkxq in this run).
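To confirm on the MapR side, the backing volume can be inspected from any cluster node (a hedged example; assumes maprcli is available on the node and uses the volume name from the provisioner log above):

maprcli volume info -name pv.uclcmsgkxq -json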