Setup NFS Server In Kubernetes

Kubernetes (K8s) Intro

Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

Note: Kubernetes is not a containerization platform. It is a multi-container management solution.

Wikipedia

Initial release: 7 June 2014
Developed by: Cloud Native Computing Foundation
Stable release: 1.18 / March 25, 2020
Written in: Go
License: Apache License 2.0
Original author: Google


Why Use Kubernetes?

Companies may use Docker, rkt (Rocket), or plain Linux containers to containerize their applications, but whichever they choose, they use it at massive scale. Production deployments rarely stop at one or two containers; they run tens or hundreds of containers to load-balance traffic and ensure high availability.

Keep in mind that as traffic grows, they have to scale up the number of containers to serve the requests arriving every second, and scale them back down when demand drops. Doing all of this by hand is impractical, which is why container management tools are essential. Docker Swarm and Kubernetes are both popular tools for container management and orchestration.

Deploying Dynamic NFS Provisioning in Kubernetes

Kubernetes containerized storage lets teams provision data volumes for containers on demand. Automatic provisioning also helps keep the contained data isolated and confidential.

Demand for this style of storage is growing rapidly across many verticals, and numerous applications now rely on it for convenient data access.

One of the ways Kubernetes allows applications to access storage is the standard Network File System (NFS) protocol.


Given the fast-growing pace of Kubernetes storage, this post elaborates on establishing an NFS server for Kubernetes.

The details to set up dynamic NFS provisioning for Kubernetes are furnished below. Let’s dive in!


Kubernetes Volumes and NFS

Kubernetes Volumes are storage units that can outlive individual containers and be attached to different nodes within a cluster. They allow us to write, read, and conveniently share data.

Kubernetes supports many storage plugins that provide access to storage services and platforms. The NFS plugin is one of the most significant of these.

The NFS (Network File System) protocol is a standard protocol that is widely used to share files in enterprise environments, allowing many users to access the same files at the same time.

NFS lets you mount a storage device as if it were a local drive, and Kubernetes allows a pod to mount an NFS volume as a local drive inside its containers.

NFS integration is a practical way of migrating legacy applications to Kubernetes, since legacy code often accesses data via NFS.

Ways to Access Data via NFS in Kubernetes

NFS offers concurrent access to all the hosts. With NFS, users don’t need to format the storage volume with an operating system’s file system; the storage can simply be mounted and used straight away. There are two ways to access data via NFS in Kubernetes:

Ephemeral NFS Volume – mounts an existing NFS share directly in the pod definition; its lifecycle is tied to the pod.

Persistent Volume with NFS – a Persistent Volume resource, managed within the cluster, whose storage is accessed via NFS (a minimal sketch follows below).
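For illustration, a Persistent Volume backed by NFS can be declared like this. This is a minimal sketch; the server address and export path are placeholder values:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany   # NFS allows many nodes to mount the volume at once
  nfs:
    server: nfs-server.yourdomain.com   # placeholder NFS server address
    path: /path/to/shared-folder        # placeholder export path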

Advantages of Using NFS With Kubernetes

NFS persistent volumes make it possible to create separate storage classes for different mount parameters and to dynamically resize NFS persistent volumes.

Here are a few reasons to consider setting up an NFS server for Kubernetes:

Use existing storage – a standard interface for mounting existing data volumes, whether on-premises or in the cloud.

Persistence – Ordinary Kubernetes volumes are ephemeral, which means a volume is torn down when its parent pod shuts down. An NFS volume defined within a pod definition gives you persistence without having to define a Persistent Volume.

Data saved in an NFS volume is stored on the connected storage device even after the pod shuts down. There is also the option of defining a Kubernetes Persistent Volume that exposes its data via an NFS interface.

Share data – Another benefit of NFS volumes is sharing data between Kubernetes containers, whether in the same pod or in different pods. This makes the shared data available to more consumers.

Simultaneous mounting – NFS volumes can be attached to multiple nodes at the same time, and multiple nodes can write to the same NFS volume simultaneously.

Persistent Storage Using NFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using NFS as persistent storage.

Kubernetes Persistent Volumes

Persistent volumes in Kubernetes provide consistent storage for the long term. They exist beyond nodes, containers, and pods.

Here is an example where a pod uses an NFS persistent volume claim to get read and write access to the persistent volume.

Two simple steps to use a volume:

First, the pod defines the volume.

Second, the container uses volumeMounts to add that volume at a specific path (mountPath) in its filesystem.


Then, the container reads and writes to the volume just like a normal directory.

kind: Pod
apiVersion: v1
metadata:
  name: simple-volume-pod
spec:
  # Volumes are declared by the pod. They share its lifecycle
  # and are communal across containers.
  volumes:
    # Volumes have a name and configuration based on the type of volume.
    # In this example, we use the emptyDir volume type
    - name: simple-vol
      emptyDir: {} # No extra configuration
  # Now, one of our containers can mount this volume and use it like
  # any other directory.
  containers:
    - name: my-container
      volumeMounts:
        - name: simple-vol # This is the name of the volume we set at the pod level
          mountPath: /var/simple # Where to mount this directory in our container
      # Now that we have a directory mounted at /var/simple, let's
      # write to a file inside it!
      image: alpine
      command: ["/bin/sh"]
      args: ["-c", "while true; do date >> /var/simple/file.txt; sleep 5; done"]

Source: GitHub

NFS (Network File System)

Network File System is one of the most useful volume types in Kubernetes; it allows you to share file systems that can be accessed over the network. Kubernetes itself does not run the NFS service; pods simply access data from an NFS share. NFS has two main advantages:

  1. If a Pod is destroyed, the data can be recovered
  2. NFS can be accessed by many Pods at the same time, which allows shared access. You can even use NFS to run WordPress on Kubernetes!

One imperative caveat is that for an NFS Volume to work, you must set up a server. This server exposes storage via NFS.

Mounting an Ephemeral NFS Share on a Container

Below, we are discussing the process to add an NFS Volume to your pod definition, so that containers can mount a share from an existing NFS server:

1. Pod Definition

In your pod YAML file, incorporate the following directive under the container definition (substitute the placeholder values with your own data):

volumeMounts:
  - name: your-nfs-volume
    mountPath: /var/your-destination

Define the volume as follows (again substituting the placeholder values):

volumes:
  - name: your-nfs-volume
    nfs:
      server: nfs-server.yourdomain.com
      path: /path/to/shared-folder

2. Deploy The Pod

Create the pod and check that it deployed correctly:

$ kubectl create -f your-pod-definition.yaml
$ kubectl get pods

3. Verify NFS Share Is Working

Check that the relevant container has mounted the NFS share correctly:

$ kubectl exec -it your-pod-name sh
/ #
/ # mount | grep nfs-server.yourdomain.com
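If the share is mounted, the output should include a line similar to the following (hypothetical output; the exact mount options depend on your NFS server and kernel):

nfs-server.yourdomain.com:/path/to/shared-folder on /var/your-destination type nfs4 (rw,relatime,vers=4.1,...)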

Another way of incorporating Kubernetes NFS integration is to set up an NFS Persistent Volume managed within Kubernetes.

Kubernetes NFS volume example (NFS server in Kubernetes)

Here we elaborate on working with an NFS server in Kubernetes, with the help of an example that shows how to set up a complete application saving data to an NFS Persistent Volume.

(Source: the official Kubernetes examples repository)

This step will look a bit different depending on which underlying storage you want to use for your NFS service. On GCE (the file referenced below is the GCE persistent-disk definition), use this command:

1. Define the NFS service

$ kubectl create -f examples/staging/volumes/nfs/provisioner/nfs-server-gce-pv.yaml

2. Create an NFS server and service

Run these commands to create the NFS server from the service definition and expose it as a service. Finally, check that pods have deployed correctly.

$ kubectl create -f examples/staging/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/staging/volumes/nfs/nfs-server-service.yaml
$ kubectl get pods -l role=nfs-server

3. Create the Persistent Volume Claim

Find the cluster IP of your server using this command:

$ kubectl describe services nfs-server

Now, edit the NFS Persistent Volume definition and replace the server IP with the correct one. You need to hard-code the IP for now because service names cannot be resolved for NFS volumes yet.
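For reference, the field being edited sits under the volume's nfs key in nfs-pv.yaml. Roughly (a sketch of the relevant stanza, not the full file):

spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.96.0.100   # placeholder - use the cluster IP reported by kubectl describe
    path: "/"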

4. Create the Persistent Volume

Use these commands to set up the persistent volume that uses the NFS service.

$ kubectl create -f examples/staging/volumes/nfs/nfs-pv.yaml
$ kubectl create -f examples/staging/volumes/nfs/nfs-pvc.yaml

Tip: You can replace the NFS server pod in this example with any valid NFS backend; all that is needed is a working NFS export backing the persistent volume.

For more clarity on setting up the NFS server, below is one more example that explains the process. In this example we are using an existing example file from Kubernetes; you can also use a GCP data store or Firebase storage as your NFS backend.

Create the resources with kubectl apply -f nfs-server.yaml.

# Note - an NFS server isn't really a Kubernetes
# concept. We're just creating it in Kubernetes
# for illustration and convenience. In practice,
# it might be run in some other system.

# Create a service to expose the NFS server
# to pods inside the cluster.
kind: Service
apiVersion: v1
metadata:
  name: nfs-service
spec:
  selector:
    role: nfs
  ports:
    # Open the ports required by the NFS server
    # Port 2049 for TCP
    - name: tcp-2049
      port: 2049
      protocol: TCP
    # Port 111 for UDP
    - name: udp-111
      port: 111
      protocol: UDP
---
# Run the NFS server image in a pod that is
# exposed by the service.
kind: Pod
apiVersion: v1
metadata:
  name: nfs-server-pod
  labels:
    role: nfs
spec:
  containers:
    - name: nfs-server-container
      image: cpuguy83/nfs-server
      securityContext:
        privileged: true
      args:
        # Pass the paths to share to the Docker image
        - /exports

Source: GitHub

Note: This example will not work for macOS users.

Using the NFS Volume in Pods

Once the NFS server is set up, the next step is adding the NFS volume to the pod:

Steps to follow:

1. Add the NFS volume to the pod by setting the server and path values to refer to the NFS server

2. Mount the NFS volume in the container. In our example, we write the date to a file in the network file system every five seconds.

Note: Change the IP address in the YAML to the cluster IP of the service we set up above. Kubernetes' internal DNS resolution cannot be used here.
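One way to look up that cluster IP, assuming the service name nfs-service from the manifest above:

$ kubectl get service nfs-service -o jsonpath='{.spec.clusterIP}'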

# Create a pod that reads and writes to the
# NFS server via an NFS volume.
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  # Add the server as an NFS volume for the pod
  volumes:
    - name: nfs-volume
      nfs:
        # URL for the NFS server
        server: 10.108.211.244 # Change this!
        path: /
  # In this container, we'll mount the NFS volume
  # and write the date to a file inside it.
  containers:
    - name: app
      image: alpine
      # Mount the NFS volume in the container
      volumeMounts:
        - name: nfs-volume
          mountPath: /var/nfs
      # Write to a file inside our NFS
      command: ["/bin/sh"]
      args: ["-c", "while true; do date >> /var/nfs/dates.txt; sleep 5; done"]

Create the pod with kubectl apply -f pod.yaml.

Check that it works. The final step is verifying the setup; to do that, run the following commands:

nfs ⟩ kubectl exec -it pod-using-nfs sh

/ # cat /var/nfs/dates.txt

Mon Oct 22 00:47:36 UTC 2018

Mon Oct 22 00:47:41 UTC 2018

Mon Oct 22 00:47:46 UTC 2018

nfs ⟩ kubectl exec -it nfs-server-pod sh

# cat /exports/dates.txt

Mon Oct 22 00:47:36 UTC 2018

Mon Oct 22 00:47:41 UTC 2018

Mon Oct 22 00:47:46 UTC 2018

Source: GitHub


From the two examples shared above, you can use NFS volumes to share data between pods in your cluster.

For each container, you add the volume to the pod and add a volume mount pointing at the NFS volume.

Prerequisites for Dynamic NFS Provisioning in Kubernetes

When you have to create storage volumes on-demand, Dynamic NFS provisioning is one of the most efficient methods.

It eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users.


For dynamic provisioning, the following prerequisites are required:

  • Linux workstation
  • K8s cluster with no other load balancer installed
  • Kubernetes CLI (kubectl)
  • Kubernetes version v1.15.1 (any recent version should work)
  • Routable IP network with DHCP configured
  • Helm package manager installed
  • Tiller service running (see the quick check below)
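A quick sanity check of the tooling might look like this (Helm 2 era, matching the Tiller prerequisite; adjust for newer versions):

$ kubectl version --short   # client and server versions
$ helm version              # Helm 2 reports both the client and the Tiller server version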

Let’s walk through the step-wise procedure:

Step 1) Installing the NFS Server

This example allocates a local file system for the Persistent Volume Claims.

The first step is to create the directory /srv/nfs/kubedata:

[vagrant@kmaster ~]$ sudo mkdir /srv/nfs/kubedata -p

Change the ownership to “nfsnobody”

[vagrant@kmaster ~]$ sudo chown nfsnobody: /srv/nfs/kubedata/

Next, install nfs-utils. This example is for CentOS 7:

[vagrant@kmaster ~]$ sudo yum install -y nfs-utils

Next, enable and start the NFS server using systemctl.

[vagrant@kmaster ~]$ sudo systemctl enable nfs-server

Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.

[vagrant@kmaster ~]$ sudo systemctl start nfs-server

[vagrant@kmaster ~]$ sudo systemctl status nfs-server

nfs-server.service – NFS server and services

Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)

Active: active (exited) since Sat 2020-06-06 22:06:49 UTC; 12s ago

Next, we need to edit the exports file to add the file system we created to be exported to remote hosts.

[vagrant@kmaster ~]$ sudo vi /etc/exports

/srv/nfs/kubedata *(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
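For reference, the export options used here mean roughly the following:

# rw               - clients may read and write
# sync             - writes are committed to stable storage before the server replies
# no_subtree_check - disable subtree checking for more reliable file handles
# no_root_squash   - do not map root on clients to an anonymous user
# no_all_squash    - do not map non-root users to the anonymous user
# insecure         - accept client connections from ports above 1024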

Next, run the exportfs command to make the local directory we configured available to remote hosts.

[vagrant@kmaster ~]$ sudo exportfs -rav

exporting *:/srv/nfs/kubedata

If you want to see more details about the exported file system, you can run “exportfs -v”.

[vagrant@kmaster ~]$ sudo exportfs -v

/srv/nfs/kubedata

<world>(sync,wdelay,hide,no_subtree_check,sec=sys,rw,insecure,no_root_squash,no_all_squash)

Next, let’s test the NFS configuration. Log onto one of the worker nodes, mount the NFS filesystem, and verify:

[vagrant@kworker1 ~]$ sudo mount -t nfs 172.42.42.100:/srv/nfs/kubedata /mnt

[vagrant@kworker1 ~]$ mount | grep kubedata

172.42.42.100:/srv/nfs/kubedata on /mnt type nfs4

 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.42.42.101,local_lock=none,addr=172.42.42.100)

After verifying that NFS is configured correctly and working, we can unmount the filesystem.

[vagrant@kworker1 ~]$ sudo umount /mnt

Step 2) Deploying Service Account and Role Bindings

Next, we’ll configure a service account and role bindings, using role-based access control. The first step is to clone the nfs-provisioning repo and change into the nfs-provisioning directory.

git clone https://redblink@bitbucket.org/exxsyseng/nfs-provisioning.git

cd nfs-provisioning

In this directory we have four files (class.yaml, default-sc.yaml, deployment.yaml, rbac.yaml). We will use the rbac.yaml file to create the service account for NFS along with the cluster roles and bindings.
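For orientation, the rbac.yaml in the repo declares resources along these lines (an abbreviated sketch matching the resource names shown below; the Role/RoleBinding used for leader election are omitted):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io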

[vagrant@kmaster nfs-provisioning]$ kubectl create -f rbac.yaml

We can verify that the service account, cluster role, and binding were created.

[vagrant@kmaster nfs-provisioning]$ kubectl get clusterrole,clusterrolebinding,role,rolebinding | grep nfs

clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner 20m

clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner 20m
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner 20m
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner 20m

Step 3) Deploying Storage Class

Next, let’s run the “class.yaml” to set up the storage class. A storage class provides a way for administrators to describe the “classes” of storage they offer.

Let’s edit the class.yaml file and set both the storage class name and the provisioner name.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: example.com/nfs
parameters:
  archiveOnDelete: "false"

Once we’ve updated the class.yaml file, we can execute it using kubectl create:

[vagrant@kmaster nfs-provisioning]$ kubectl create -f class.yaml

storageclass.storage.k8s.io/managed-nfs-storage created

Next, check that the storage class was created.

[vagrant@kmaster nfs-provisioning]$ kubectl get storageclass

NAME PROVISIONER AGE

managed-nfs-storage example.com/nfs 48s
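Optionally, you can make this storage class the cluster default (the repo's default-sc.yaml serves a similar purpose). One way is to patch in the standard annotation:

$ kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'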

Step 4) Deploying NFS Provisioner

Now let’s deploy the NFS provisioner. But first, we’ll need to edit the deployment.yaml file to specify the IP address of our NFS server (kmaster, 172.42.42.100).

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: 172.42.42.100
            - name: NFS_PATH
              value: /srv/nfs/kubedata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.42.42.100
            path: /srv/nfs/kubedata

Once we’ve made the changes, save the file and apply the changes by running “kubectl create”.

[vagrant@kmaster nfs-provisioning]$ kubectl create -f deployment.yaml

deployment.apps/nfs-client-provisioner created

After applying the changes, we should see a pod was created for nfs-client provisioner.

[vagrant@kmaster nfs-provisioning]$ kubectl get all

NAME READY STATUS RESTARTS AGE

pod/nfs-client-provisioner-5b4f5775c7-9j2dw 1/1 Running 0 4m2s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d22h

NAME READY UP-TO-DATE AVAILABLE AGE

deployment.apps/nfs-client-provisioner 1/1 1 1 4m2s

NAME DESIRED CURRENT READY AGE

replicaset.apps/nfs-client-provisioner-5b4f5775c7 1 1 1 4m2s

We can run “kubectl describe” to see more details about the pod.

[vagrant@kmaster ~]$ kubectl describe pod nfs-client-provisioner-5b4f5775c7-9j2dw

Name: nfs-client-provisioner-5b4f5775c7-9j2dw

Namespace: default

Priority: 0

Node: kworker2.example.com/172.42.42.102

Start Time: Sun, 03 Nov 2019 20:11:51 +0000

Labels: app=nfs-client-provisioner

pod-template-hash=5b4f5775c7

Annotations: cni.projectcalico.org/podIP: 192.168.136.65/32

Status: Running

IP: 192.168.136.65

IPs:

IP: 192.168.136.65

Controlled By: ReplicaSet/nfs-client-provisioner-5b4f5775c7

Containers:

nfs-client-provisioner:

Container ID: docker://95432ef4c256b48746b61f44a0292557b73abaced78342acafeae3c36681343b

Image: quay.io/external_storage/nfs-client-provisioner:latest

Image ID: docker-pullable://quay.io/external_storage/nfs-client-provisioner@sha256:022ea0b0d69834b652a4c53655d78642ae23f0324309097be874fb58d09d2919

Port: <none>

Host Port: <none>

State: Running

Started: Sun, 03 Nov 2019 20:11:56 +0000

Ready: True

Restart Count: 0

Environment:

PROVISIONER_NAME: example.com/nfs

NFS_SERVER: 172.42.42.100

NFS_PATH: /srv/nfs/kubedata

Mounts:

/persistentvolumes from nfs-client-root (rw)

/var/run/secrets/kubernetes.io/serviceaccount from nfs-client-provisioner-token-wgwct (ro)

Conditions:

Type Status

Initialized True

Ready True

ContainersReady True

PodScheduled True

Volumes:

nfs-client-root:

Type: NFS (an NFS mount that lasts the lifetime of a pod)

Server: 172.42.42.100

Path: /srv/nfs/kubedata

ReadOnly: false

nfs-client-provisioner-token-wgwct:

Type: Secret (a volume populated by a Secret)

SecretName: nfs-client-provisioner-token-wgwct

Optional: false

QoS Class: BestEffort

Node-Selectors: <none>

Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s

node.kubernetes.io/unreachable:NoExecute for 300s

Events: <none>

Step 5) Creating Persistent Volume and Persistent Volume Claims

Persistent Volume Claims are objects that request storage resources from your cluster. They’re similar to a voucher that your deployment can redeem for storage access.


Persistent Volume is a resource that can be used by a pod to store data that will persist beyond the lifetime of the pod. It is a storage volume that in this case is an NFS volume.

If we check our cluster we’ll see that there are currently no Persistent Volumes or Persistent Volume Claims.

[vagrant@kmaster ~]$ kubectl get pv,pvc

No resources found in the default namespace.

Also, we can look in the directory we allocated for persistent volumes and see that it is currently empty.

[vagrant@kmaster ~]$ ls /srv/nfs/kubedata/

Let’s create a PVC. Inside the nfs-provisioning repo there is a file “4-pvc-nfs.yaml”. In this example, we will allocate 500 megabytes.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi

We can create the PVC by running “kubectl create” against the “4-pvc-nfs.yaml” file.

[vagrant@kmaster nfs-provisioning]$ kubectl create -f 4-pvc-nfs.yaml

persistentvolumeclaim/pvc1 created

We can now view the PVC and PV that were allocated. As we can see below, a PVC was created (“persistentvolumeclaim/pvc1”) and it is bound to a PV (“pvc-eca295aa-bc2c-420c-b60e-9a6894fc9daf”). The PV was created automatically by the nfs-provisioner.

[vagrant@kmaster nfs-provisioning]$ kubectl get pvc,pv

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGE CLASS AGE

persistentvolumeclaim/pvc1 Bound pvc-eca295aa-bc2c-420c-b60e-9a6894fc9daf 500Mi RWX managed-nfs-storage 2m30s

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE

persistentvolume/pvc-eca295aa-bc2c-420c-b60e-9a6894fc9daf 500Mi RWX Delete Bound default/pvc1 managed-nfs-storage 2m30s

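The provisioner also creates a backing directory for each volume on the NFS export, named after the namespace, the PVC, and the PV. A quick look on the NFS server should show something like this (the directory name matches the PV above):

[vagrant@kmaster nfs-provisioning]$ ls /srv/nfs/kubedata/
default-pvc1-pvc-eca295aa-bc2c-420c-b60e-9a6894fc9daf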

Step 6) Creating a Pod to use Persistent Volume Claims

Now that we have our nfs-provisioner working and both a PVC and PV in place, let’s create a pod to use our PVC. A quick look at the existing pods shows that only the nfs-client-provisioner pod is running.

[vagrant@kmaster nfs-provisioning]$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nfs-client-provisioner-5b4f5775c7-9j2dw 1/1 Running 0 4h36m

Next, we’ll create a pod using the “4-busybox-pv-nfs.yaml” file. But first, let’s take a look at the file’s contents.

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  volumes:
    - name: host-volume
      persistentVolumeClaim:
        claimName: pvc1
  containers:
    - image: busybox
      name: busybox
      command: ["/bin/sh"]
      args: ["-c", "sleep 600"]
      volumeMounts:
        - name: host-volume
          mountPath: /mydata

We’ll execute the 4-busybox-pv-nfs.yaml file using “kubectl create”.

[vagrant@kmaster nfs-provisioning]$ kubectl create -f 4-busybox-pv-nfs.yaml

pod/busybox created

We can now see that the pod is up and running.

[vagrant@kmaster nfs-provisioning]$ kubectl get pods
NAME READY STATUS RESTARTS AGE

busybox 1/1 Running 0 69s

nfs-client-provisioner-5b4f5775c7-9j2dw 1/1 Running 0 7h33m

We can describe the pod to see more details.

[vagrant@kmaster nfs-provisioning]$ kubectl describe pod busybox

Name: busybox

Namespace: default

Priority: 0

Node: kworker1.example.com/172.42.42.101

Start Time: Mon, 04 Nov 2019 03:44:30 +0000

Labels: <none>

Annotations: cni.projectcalico.org/podIP: 192.168.33.194/32

Status: Running

IP: 192.168.33.194

IPs:

IP: 192.168.33.194

Containers:

busybox:

Container ID: docker://f27b38404abbfd3ab77fe81b23e148e0a15f4779420ddfcb17eebcbe699767f3

Image: busybox

Image ID: docker-pullable://busybox@sha256:1303dbf110c57f3edf68d9f5a16c082ec06c4cf7604831669faf2c712260b5a0

Port: <none>

Host Port: <none>

Command:

/bin/sh

Args:

-c

sleep 600

State: Running

Started: Mon, 04 Nov 2019 03:44:34 +0000

Ready: True

Restart Count: 0

Environment: <none>

Mounts:

/mydata from host-volume (rw)

/var/run/secrets/kubernetes.io/serviceaccount from default-token-p2ctq (ro)

Conditions:

Type Status

Initialized True

Ready True

ContainersReady True

PodScheduled True

Volumes:

host-volume:

Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)

ClaimName: pvc1

ReadOnly: false

default-token-p2ctq:

Type: Secret (a volume populated by a Secret)

SecretName: default-token-p2ctq

Optional: false

QoS Class: BestEffort

Node-Selectors: <none>

Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s

node.kubernetes.io/unreachable:NoExecute for 300s

We can log into the container to view the mount point and create a file for testing.

[vagrant@kmaster nfs-provisioning]$ kubectl exec -it busybox -- /bin/sh

/ #

/ # ls /mydata/

/ # > /mydata/myfile

/ # ls /mydata/

myfile

Now that we’ve created a file named myfile, we can log into the master node and verify the file by looking in the PV directory that was allocated for this pod.

[vagrant@kmaster nfs-provisioning]$ ls /srv/nfs/kubedata/default-pvc1-pvc-eca295aa-bc2c-420c-b60e-9a6894fc9daf/

myfile

Step 7) Deleting Pods with Persistent Volume Claims

To delete the pod just use “kubectl delete pod [pod name]”

[vagrant@kmaster nfs-provisioning]$ kubectl delete pod busybox

pod "busybox" deleted

Deleting the pod removes the pod itself but not the PV and PVC; those have to be deleted separately.

[vagrant@kmaster nfs-provisioning]$ kubectl get pvc,pv

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGE CLASS AGE

persistentvolumeclaim/pvc1 Bound pvc-eca295aa-bc2c-420c-b60e-9a6894fc9daf 500Mi RWX managed-nfs-storage 3h26m

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGE CLASS REASON AGE
persistentvolume/pvc-eca295aa-bc2c-420c-b60e-9a6894fc9daf 500Mi RWX Delete Bound default/pvc1 managed-nfs-storage 3h26m

To delete the PV and PVC, use “kubectl delete”:

[vagrant@kmaster nfs-provisioning]$ kubectl delete pvc --all

persistentvolumeclaim "pvc1" deleted

Because the storage class’s reclaim policy is Delete, removing the PVC also removes the dynamically provisioned PV; both resources are now gone.

Kubernetes NFS with Cloud Volumes ONTAP

NFS storage can be set up on Cloud Volumes ONTAP to achieve high-performance data access, built-in backup, and high availability. Some added benefits of using Kubernetes NFS with Cloud Volumes ONTAP are:

  • Kubernetes NFS is a robust and scalable storage solution on clouds such as AWS or Azure.
  • Cost-efficient storage provisioning with high storage efficiencies and data protection.
  • Instant creation of data copies for replication, available to all NFS users.
  • Multi-node data availability with no data loss (RPO = 0) and minimal downtime (RTO < 60 seconds).

Precaution: When a node starts experiencing failures, you may need to shut it down for maintenance.

Forward Path

From the above, we can summarize that dynamic NFS provisioning is about creating storage volumes on demand.

Dynamic NFS provisioning helps manage clusters by creating new storage volumes and representing them as Persistent Volume objects in Kubernetes.

It abstracts storage management so that administrators don’t have to manually pre-provision storage for the cluster.

On top of that, the Kubernetes NFS provisioner offers many advantages, such as the ability to dynamically resize NFS persistent volumes, multi-node access, and more.

You can also combine it with Cloud Volumes ONTAP, a powerful data storage solution for future applications.
