Static Provisioning in K8S Using the FlexVolume Plug-In (Non-Secure Cluster)
You can designate a pre-created MapR volume for use with Kubernetes by specifying the MapR FlexVolume parameters directly inside the Pod spec. In the Pod spec, you define a Kubernetes volume and add the MapR FlexVolume information to it. You can supply path information by using the volumePath parameter. For static provisioning, configuring a PersistentVolume has some advantages over a Kubernetes volume in a Pod (see the sketch after this list):
- The configuration file can be shared for use by multiple Pod specs.
- The configuration file enables the PersistentVolume to be mounted and available even when the Pod spec that references it is removed.
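As a minimal sketch of the PersistentVolume approach (not used in the walkthrough below; it reuses the same FlexVolume driver and options as the Pod spec in step 1, and the name pv-maprvolume and the 5Gi capacity are hypothetical):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-maprvolume                # hypothetical name
spec:
  capacity:
    storage: 5Gi                     # hypothetical; space is actually governed by MapR volume quotas
  accessModes:
    - ReadOnlyMany
  flexVolume:
    driver: "mapr.com/maprfs"
    readOnly: true
    options:
      volumePath: "/maprvolume"
      cluster: "ObjectPool"
      cldbHosts: "10.10.70.113 10.10.70.114 10.10.70.115"
      securityType: "unsecure"
A Pod would then reference this PersistentVolume through a PersistentVolumeClaim instead of embedding the flexVolume section inline, which is what lets multiple Pod specs share one configuration and keeps the volume definition available after any single Pod spec is removed.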
1) Below is a sample YAML file to statically provision a MapR FlexVolume in a Kubernetes Pod.
[root@tssperf09 abizerwork]# cat staticProvisioning.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-secure
  namespace: mapr-system
spec:
  containers:
  - name: mycontainer
    imagePullPolicy: Always
    image: docker.artifactory/maprtech/base:5.2.2_3.0.1_centos7
    args:
    - sleep
    - "1000000"
    resources:
      requests:
        memory: "2Gi"
        cpu: "500m"
    volumeMounts:
    - mountPath: /maprvolume1        # Mount path inside the K8S Pod
      name: maprvolume               # Name of the Kubernetes volume defined below
  volumes:
  - name: maprvolume
    flexVolume:
      driver: "mapr.com/maprfs"
      readOnly: true
      options:
        volumePath: "/maprvolume"    # Mount point of the pre-created MapR volume
        cluster: "ObjectPool"
        cldbHosts: "10.10.70.113 10.10.70.114 10.10.70.115"
        securityType: "unsecure"
[root@tssperf09 abizerwork]#
Note: On the cluster side, it is assumed the MapR volume "maprvolume" is already created and mounted at /maprvolume.
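If the volume does not exist yet, it can be created and mounted from any cluster node with maprcli (a minimal sketch; replication, quota, and accountable-entity settings are left at their defaults):
maprcli volume create -name maprvolume -path /maprvolume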
[root@node113rhel67 ~]# maprcli volume info -name maprvolume -json
{
  "timestamp":1524095640124,
  "timeofday":"2018-04-18 04:54:00.124 GMT-0700",
  "status":"OK",
  "total":1,
  "data":[
    {
      "acl":{
        "Principal":"User root",
        "Allowed actions":[
          "dump",
          "restore",
          "m",
          "d",
          "fc"
        ]
      },
      "creator":"root",
      "aename":"root",
      "aetype":0,
      "numreplicas":"3",
      "minreplicas":"2",
      "nsNumReplicas":"3",
      "nsMinReplicas":"2",
      "allowGrant":"false",
      "reReplTimeOutSec":"0",
      "replicationtype":"high_throughput",
      "rackpath":"/data",
      "mirrorthrottle":"1",
      "accesstime":"April 18, 2018",
      "readonly":"0",
      "mountdir":"/maprvolume",
      "volumename":"maprvolume",
      "mounted":1,
      "quota":"0",
      "advisoryquota":"0",
      "snapshotcount":"0",
      "logicalUsed":"0",
      "used":"0",
      "snapshotused":"0",
      "totalused":"0",
      "scheduleid":0,
      "schedulename":"",
      "mirrorscheduleid":0,
      "volumetype":0,
      "mirrortype":3,
      "creatorcontainerid":2181,
      "creatorvolumeuuid":"-9169018513486905817:7826340903402007859",
      "volumeid":45178265,
      "actualreplication":[0,0,0,100,0,0,0,0,0,0,0],
      "nameContainerSizeMB":0,
      "nameContainerId":2181,
      "needsGfsck":false,
      "maxinodesalarmthreshold":"0",
      "dbrepllagsecalarmthresh":"0",
      "limitspread":"true",
      "partlyOutOfTopology":0,
      "auditVolume":0,
      "audited":0,
      "coalesceInterval":60,
      "enableddataauditoperations":"getattr,setattr,chown,chperm,chgrp,getxattr,listxattr,setxattr,removexattr,read,write,create,delete,mkdir,readdir,rmdir,createsym,lookup,rename,createdev,truncate,tablecfcreate,tablecfdelete,tablecfmodify,tablecfScan,tableget,tableput,tablescan,tablecreate,tableinfo,tablemodify,getperm,getpathforfid,hardlink",
      "disableddataauditoperations":"",
      "volumeAces":{
        "readAce":"p",
        "writeAce":"p"
      },
      "fixCreatorId":"false",
      "ReplTypeConversionInProgress":"0",
      "tier":{
        "enable":"false"
      }
    }
  ]
}
[root@node113rhel67 ~]#
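You can also sanity-check the mount point directly from a cluster node (assuming the hadoop client is available there, as it normally is on MapR nodes):
hadoop fs -ls /maprvolume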
2) Use the kubectl create command with the -f option to create the statically provisioned Pod on the Kubernetes cluster:
kubectl create -f staticProvisioning.yaml
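If you need to tear the Pod down later, the same file can be passed to kubectl delete:
kubectl delete -f staticProvisioning.yaml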
3) Once the Pod starts and is in the Running state, you should see a new Pod "test-secure".
[root@tssperf09 abizerwork]# kubectl get pods --all-namespaces -o wide --sort-by=.status.hostIP
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system kube-scheduler-tssperf09.lab 1/1 Running 0 25d 10.10.72.249 tssperf09.lab
kube-system calico-kube-controllers-d554689d5-mv6lz 1/1 Running 0 25d 10.10.72.249 tssperf09.lab
kube-system kube-dns-6f4fd4bdf-x2g82 3/3 Running 0 25d 192.168.196.222 tssperf09.lab
mapr-system mapr-kdfplugin-srcln 1/1 Running 0 21d 192.168.196.223 tssperf09.lab
kube-system calico-node-z6tqw 2/2 Running 0 25d 10.10.72.249 tssperf09.lab
kube-system etcd-tssperf09.lab 1/1 Running 0 25d 10.10.72.249 tssperf09.lab
kube-system kube-apiserver-tssperf09.lab 1/1 Running 0 25d 10.10.72.249 tssperf09.lab
kube-system kube-controller-manager-tssperf09.lab 1/1 Running 0 25d 10.10.72.249 tssperf09.lab
kube-system calico-etcd-8s8mf 1/1 Running 0 25d 10.10.72.249 tssperf09.lab
kube-system kube-proxy-kln5q 1/1 Running 0 25d 10.10.72.249 tssperf09.lab
kube-system calico-node-rcxzm 2/2 Running 0 25d 10.10.72.250 tssperf10.lab
mapr-system mapr-kdfplugin-6l5n7 1/1 Running 0 21d 192.168.61.65 tssperf10.lab
kube-system kube-proxy-68tv4 1/1 Running 0 25d 10.10.72.250 tssperf10.lab
mapr-system test-secure 1/1 Running 0 3m 192.168.61.67 tssperf10.lab
kube-system kube-proxy-cknz7 1/1 Running 0 25d 10.10.72.251 tssperf11.lab
mapr-system mapr-kdfplugin-crzhk 1/1 Running 0 21d 192.168.217.129 tssperf11.lab
kube-system calico-node-x5qds 2/2 Running 0 25d 10.10.72.251 tssperf11.lab
mapr-system mapr-kdfprovisioner-79b86f459d-hjkcn 1/1 Running 0 21d 192.168.217.130 tssperf11.lab
[root@tssperf09 abizerwork]#
To check the status and the step the Pod is currently executing while coming up, run the command below.
[root@tssperf09 abizerwork]# kubectl describe pod test-secure -n mapr-system
Name: test-secure
Namespace: mapr-system
Node: tssperf10.lab/10.10.72.250
Start Time: Tue, 17 Apr 2018 19:32:57 -0600
Labels: <none>
Annotations: <none>
Status: Running
IP: 192.168.61.67
Containers:
mycontainer:
Container ID: docker://588093ea68361532a56b82bedeb78a4c22c1b501b83df50f76664438ffad236f
Image: docker.io/maprtech/kdf-plugin:1.0.0_029_centos7
Image ID: docker-pullable://docker.io/maprtech/kdf-plugin@sha256:eecb2d64ede9b9232b6eebf5d0cc59fe769d16aeb56467d0a00489ce7224278d
Port: <none>
Args:
sleep
1000000
State: Running
Started: Tue, 17 Apr 2018 19:33:05 -0600
Ready: True
Restart Count: 0
Requests:
cpu: 500m
memory: 2Gi
Environment: <none>
Mounts:
/maprvolume1 from maprvolume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9g8tq (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
maprvolume:
Type: FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
Driver: Options: %v
FSType: mapr.com/maprfs
SecretRef:
ReadOnly: <nil>
%!(EXTRA bool=true, map[string]string=map[cluster:ObjectPool securityType:unsecure volumePath:/maprvolume cldbHosts:10.10.70.113 10.10.70.114 10.10.70.115]) default-token-9g8tq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9g8tq
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m default-scheduler Successfully assigned test-secure to tssperf10.lab
Normal SuccessfulMountVolume 3m kubelet, tssperf10.lab MountVolume.SetUp succeeded for volume "default-token-9g8tq"
Normal SuccessfulMountVolume 3m kubelet, tssperf10.lab MountVolume.SetUp succeeded for volume "maprvolume"
Normal Pulling 3m kubelet, tssperf10.lab pulling image "docker.io/maprtech/kdf-plugin:1.0.0_029_centos7"
Normal Pulled 3m kubelet, tssperf10.lab Successfully pulled image "docker.io/maprtech/kdf-plugin:1.0.0_029_centos7"
Normal Created 3m kubelet, tssperf10.lab Created container
Normal Started 3m kubelet, tssperf10.lab Started container
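(Note: the Driver/Options lines above print as %v and %!(EXTRA ...); this appears to be a formatting quirk in this kubectl version's describe output for FlexVolume sources. The options embedded in that line match the flexVolume options from the Pod spec.)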
4) Once the Pod is up, you can log in to the Pod with the command below.
[root@tssperf09 abizerwork]# kubectl exec -it test-secure -n mapr-system -- bash
bash-4.4# df -hP
Filesystem Size Used Available Capacity Mounted on
/dev/mapper/docker-253:0-393780-e7ea6408663cf79cf844e6f1f44915099168752e33c258a51ff64bb54dcc8149 10.0G 306.1M 9.7G 3% /
tmpfs 62.8G 0 62.8G 0% /dev
tmpfs 62.8G 0 62.8G 0% /sys/fs/cgroup
posix-client-basic 415.1G 1.8G 413.3G 0% /maprvolume1 <--- This is the MapR volume mounted via the FUSE-based POSIX client
/dev/mapper/VolGroup-lv_root 49.1G 15.1G 31.4G 32% /dev/termination-log
/dev/mapper/VolGroup-lv_root 49.1G 15.1G 31.4G 32% /etc/resolv.conf
/dev/mapper/VolGroup-lv_root 49.1G 15.1G 31.4G 32% /etc/hostname
/dev/mapper/VolGroup-lv_root 49.1G 15.1G 31.4G 32% /etc/hosts
shm 64.0M 0 64.0M 0% /dev/shm
/dev/mapper/VolGroup-lv_root 49.1G 15.1G 31.4G 32% /run/secrets
tmpfs 62.8G 12.0K 62.8G 0% /run/secrets/kubernetes.io/serviceaccount
tmpfs 62.8G 0 62.8G 0% /proc/kcore
tmpfs 62.8G 0 62.8G 0% /proc/timer_list
tmpfs 62.8G 0 62.8G 0% /proc/timer_stats
tmpfs 62.8G 0 62.8G 0% /proc/sched_debug
tmpfs 62.8G 0 62.8G 0% /proc/scsi
bash-4.4#
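To verify the Pod can actually read the MapR volume, list the mount from inside the container (a simple read check; since the spec set readOnly: true, a write here would be expected to fail):
bash-4.4# ls -l /maprvolume1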
Note: To get more details from the logs journald has collected for the kubelet service, run the command below.
journalctl -u kubelet
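To narrow that output to FlexVolume plug-in activity, it can be filtered for the driver name (a simple grep; the exact log strings vary by kubelet version):
journalctl -u kubelet | grep -i maprfs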