Extend Kubernetes with NFS and Dynamic provisioning

D. Heinrich
3 min read · Dec 23, 2019

I’m working on a new dev solution where I came across a requirement for persistent volumes backed by NFS. This is how I solved it.

Requirements:

  • nfs-server
  • Kubernetes-SingleNode or Cluster

NFS-Server


Installing NFS (if not yet installed):

apt install nfs-kernel-server nfs-common

Edit /etc/default/nfs-common to enable the Kubernetes provisioner to remote-lock files:

vim /etc/default/nfs-common
# find NEED_STATD= and replace it with:
NEED_STATD=yes
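If you prefer a non-interactive edit (for example in a provisioning script), the same change can be made with sed. A minimal sketch, demonstrated on a scratch copy rather than the real /etc/default/nfs-common:

```shell
# Non-interactive version of the edit, shown on a scratch copy;
# point sed at /etc/default/nfs-common on the real server.
conf=$(mktemp)
printf 'NEED_STATD=\n' > "$conf"   # sample line as shipped by the package
sed -i 's/^NEED_STATD=.*/NEED_STATD=yes/' "$conf"
grep '^NEED_STATD=' "$conf"
```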

Starting/Restarting the NFS (if not yet started):

systemctl status nfs-server
systemctl restart nfs-server

Create the NFS-Share:

mkdir -p /opt/nfs/k8s_test
# not recommended, but fine for this test
chmod -R 777 /opt/nfs

Enable the NFS-Share by adding one of the following lines to /etc/exports:

# to open it for every host:
/opt/nfs/k8s_test *(rw,sync,no_subtree_check,no_root_squash,insecure)
# to open it only for a specific subnet:
/opt/nfs/k8s_test 192.168.150.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash,insecure)
# apply and validate the configuration
$ exportfs -rv
exporting 192.168.150.0/255.255.255.0:/opt/nfs/k8s_test
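Before moving on to Kubernetes, it is worth confirming that the share is actually visible. On the NFS server itself (or from any host with nfs-common installed, substituting the server's address) you can list the active exports:

```shell
# List the exports the server currently offers;
# replace localhost with the server's IP/hostname when checking from a client.
showmount -e localhost
```

The output should contain /opt/nfs/k8s_test with the host pattern you configured above.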

This finishes the NFS setup.

Kubernetes-Setup


NOTE: It is assumed that you want to deploy your provisioner into the kube-system namespace (ns).

If we want to use NFS dynamically, we need to add a few RBAC objects:

The ServiceAccount, which will later be used by the provisioner Deployment:

---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-pod-provisioner-sa

The ClusterRole will later be bound to the ServiceAccount and describes the permissions we need on persistent volumes (PV) and persistent volume claims (PVC).

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1 # auth API
metadata:
  name: nfs-provisioner-clusterRole
rules:
  - apiGroups: [""] # rules on persistentvolumes
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]

The ClusterRoleBinding stitches the ClusterRole and the ServiceAccount together:

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-rolebinding
subjects:
  - kind: ServiceAccount
    name: nfs-pod-provisioner-sa # defined above
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-clusterRole # name defined in the ClusterRole
  apiGroup: rbac.authorization.k8s.io

Now we need a Role which allows the ServiceAccount (SA) to manage endpoints:

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-pod-provisioner-otherRoles
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

This Role is now connected to the ServiceAccount using a RoleBinding:

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-pod-provisioner-otherRoles
subjects:
  - kind: ServiceAccount
    name: nfs-pod-provisioner-sa
    namespace: kube-system
roleRef:
  kind: Role
  name: nfs-pod-provisioner-otherRoles
  apiGroup: rbac.authorization.k8s.io

Now we add the StorageClass to Kubernetes, which we later reference in the provisioner Deployment.

NOTE: The PVC needs to reference the nfs-storageclass!

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # NOTE: PVCs need to reference this name
  name: nfs-storageclass
provisioner: nfs-test
parameters:
  archiveOnDelete: "false"
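Assuming you saved the RBAC objects and the StorageClass above into a single manifest file (the filename nfs-provisioner-rbac.yaml is just an example), they can be applied like this:

```shell
# -n kube-system makes sure the namespaced objects
# (ServiceAccount, Role, RoleBinding) land in kube-system
kubectl apply -n kube-system -f nfs-provisioner-rbac.yaml

# confirm the StorageClass is registered
kubectl get storageclass nfs-storageclass
```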

Deployment of the NFS provisioner:
NOTE: replace the placeholder <NFS> with either the IP or the hostname of your NFS server

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-pod-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-pod-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-pod-provisioner
    spec:
      serviceAccountName: nfs-pod-provisioner-sa
      containers:
        - name: nfs-pod-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-provisioner-v
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME # must match the StorageClass provisioner
              value: nfs-test
            - name: NFS_SERVER # change only the value
              value: <NFS>
            - name: NFS_PATH # must match the exported share
              value: /opt/nfs/k8s_test
      volumes:
        - name: nfs-provisioner-v # same as the volumeMounts name
          nfs:
            server: <NFS>
            path: /opt/nfs/k8s_test
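Once the Deployment is applied, check that the provisioner pod comes up. If it stays in ContainerCreating, the NFS mount itself is usually the problem (check the export and any firewall between the nodes and the NFS server):

```shell
# the provisioner pod should reach Running
kubectl get pods -n kube-system -l app=nfs-pod-provisioner

# the logs show incoming provisioning requests later on
kubectl logs -n kube-system -l app=nfs-pod-provisioner
```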

Now you can deploy a test setup with nginx. It looks like this:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-test
spec:
  storageClassName: nfs-storageclass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nfs-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nfs-test
          persistentVolumeClaim:
            claimName: nfs-pvc-test
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: nfs-test
              mountPath: /mydata2 # mountPath must be an absolute path
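To confirm that dynamic provisioning works end to end, check that the PVC binds and write a file through the nginx pod; assuming the volume is mounted at /mydata2 (mountPath must be an absolute path), the file should then appear in an automatically created subdirectory under /opt/nfs/k8s_test on the NFS server:

```shell
# the PVC should switch from Pending to Bound once the provisioner created the PV
kubectl get pvc nfs-pvc-test

# write a file through the pod; it should show up under /opt/nfs/k8s_test/ on the NFS server
kubectl exec deployment/nfs-nginx -- sh -c 'echo hello > /mydata2/hello.txt'
```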


D. Heinrich

Working as a Head of Infrastructure at Flower Labs.