== Persistent volumes ==

A persistent volume (PV) in Kubernetes is a cluster resource which can be requested by a container. To use one, you claim a persistent volume via a persistent volume claim (PVC), which you apply in your namespace. The persistent volume claim can then be mounted to directories within a container. The important point is that the PVC survives the end of the container, i.e. the data in the PV remains available until the PVC is released: if the PVC is mounted again by a new container, the data will still be present. A persistent volume which is bound to a claim cannot be assigned to any other claim. '''If the PVC is released, the PV is also released and immediately and automatically wiped clean of all data.''' If you want to keep your data, copy it to some other permanent storage first.

There are currently two types of persistent volumes configured on the cluster:

* Local persistent volumes
* Global persistent volumes

Note: the cluster will soon get large, fast global storage; at that point, local persistent volumes will be phased out and will probably no longer be available. TensorBoard monitoring should be done using service exports, as explained below, and should not make use of local PVs.

=== Local persistent volumes ===

These are persistent volumes which are mapped to special folders of the host filesystem of a node. Each node exposes several persistent volumes which can be claimed. The user cannot control exactly which volume is bound to a claim, but can request a minimum size. Code examples can be found in the subdirectory "kubernetes/example_2" of the tutorial sample code, [[File:Kubernetes_samples.zip|Kubernetes samples]].

'''WARNING: Once a local persistent volume has been bound to a specific node, all pods which make use of this volume are forced to also run on this node. This means you have to rely on resources (e.g. GPUs) being available on exactly that particular node.'''

'''NOTE: The storage class "local-ssd" which was previously used for local persistent volumes is now obsolete, since a better driver with automatic provisioning has been installed. From now on, please use "local-path" instead, which will give you a PV on the fastest local device (usually SSD/NVMe RAID). No new volumes of class "local-ssd" can be claimed.''' Please copy over your data from old PVCs if you have the opportunity (see the copy-pod sketch at the end of this section), or delete old PVCs which are no longer in use. As soon as there are no more PVCs of the old class in use, the class will be deleted from the cluster. Also, check out "global-datasets" below, which gives you a new opportunity to store large, static datasets on a very fast device.

A persistent volume claim for a local PV is configured like this:

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # the name of the PVC, we refer to this in the container configuration
  name: tf-mnist-pvc
spec:
  resources:
    requests:
      # storage resource request. This PVC can only be bound to volumes
      # which have at least 8 GiB of storage available.
      storage: 8Gi
  # the requested storage class, see tutorial
  storageClassName: local-path
  # leave these unchanged, they must match the PV type,
  # otherwise binding will fail
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
</syntaxhighlight>

The following storage classes are configured in the cluster:

* "local-path": local volumes, automatically provisioned on the fastest local device of a node (usually SSD/NVMe RAID); replaces the obsolete "local-ssd"
* "ceph-ssd": global volumes, cluster-wide, managed by rook-ceph

When the claim is defined to your satisfaction, apply it like this:

<syntaxhighlight lang="console">
> kubectl apply -f pvc.yaml
</syntaxhighlight>

You can check on the status of this (and every other) claim:

<syntaxhighlight lang="console">
> kubectl get pvc
NAME           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
tf-mnist-pvc   Pending                                      local-path     11s
</syntaxhighlight>

Since the claim has not been used by a container yet, it is not yet bound to a persistent volume (PV).
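A claim is only bound to a volume once a pod actually mounts it. As a minimal sketch, the following pod mounts the "tf-mnist-pvc" claim from above; the pod name, image, and mount path are placeholders and not part of the tutorial sample code:

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  # hypothetical pod name
  name: pvc-demo
spec:
  containers:
    - name: main
      # placeholder image, use whatever your job needs
      image: ubuntu:22.04
      command: ["sleep", "infinity"]
      volumeMounts:
        # mount the claimed volume into the container filesystem;
        # the mount path is an arbitrary example
        - name: data
          mountPath: /data
  volumes:
    # refer to the PVC by the name given in its metadata
    - name: data
      persistentVolumeClaim:
        claimName: tf-mnist-pvc
</syntaxhighlight>

Once this pod has been scheduled, "kubectl get pvc" should show the claim as Bound. Remember that for a local volume, this pod and every later pod using the same claim is then pinned to the node which holds the volume.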
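To migrate data from an obsolete "local-ssd" claim to a new "local-path" claim, a throwaway pod can mount both claims side by side and copy the data across. A minimal sketch, assuming the old and new claims are named "old-pvc" and "new-pvc" (hypothetical names, as are the pod name and image):

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  # hypothetical helper pod, delete it once the copy has finished
  name: pvc-copy
spec:
  restartPolicy: Never
  containers:
    - name: copy
      # placeholder image
      image: ubuntu:22.04
      # copy everything, preserving permissions and timestamps
      command: ["sh", "-c", "cp -a /old/. /new/"]
      volumeMounts:
        - name: old
          mountPath: /old
        - name: new
          mountPath: /new
  volumes:
    - name: old
      persistentVolumeClaim:
        # hypothetical name of the obsolete local-ssd claim
        claimName: old-pvc
    - name: new
      persistentVolumeClaim:
        # hypothetical name of the replacement local-path claim
        claimName: new-pvc
</syntaxhighlight>

Once the pod has completed, verify the data on the new claim, then delete the helper pod and the old PVC.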
=== Global persistent volumes ===

In contrast, global persistent volumes are provided cluster-wide and are accessible from any node (managed internally with rook-ceph). They reside on SSDs and should therefore be reasonably fast; however, depending on where the volume ends up, data will probably be transferred across the network to/from the node. Thus, they are slower than local volumes, but leave you considerably more flexible, as they do not require pods to run on specific nodes. Also, there is no constraint on maximum size except for physical limitations. Currently, there is a total of 20 TB of cluster-wide SSD storage, which we plan to increase considerably in the near future.

Compared to creating local persistent volumes, the only thing which needs to be changed is the storage class, which becomes "ceph-ssd":

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # the name of the PVC, we refer to this in the container configuration
  name: tf-mnist-global-pvc
spec:
  resources:
    requests:
      # storage resource request. This PVC can only be bound to volumes
      # which have at least 8 GiB of storage available.
      storage: 8Gi
  # the requested storage class, see tutorial
  storageClassName: ceph-ssd
  # access mode is mandatory
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  # note: in practice, binding worked only with this additional line
  volumeMode: Filesystem
</syntaxhighlight>

Since anyone in the same namespace can mount global persistent volumes, they can and should be used to share datasets. The name of a PVC which contains a useful dataset should start with "dataset-" and be descriptive, so that it can easily be found by other users. Also, the root of the PVC should contain a README with information about the dataset (at least its source and what exactly it contains).

A note on mounting: currently (this will change in the near future), ceph volumes can either be mounted read-write by a single pod only, or read-only by multiple pods. Thus, the workflow for a static dataset is to create the PVC, create a pod which writes all the data to it, delete this pod, and from then on mount the volume read-only, so it can be used by multiple pods.
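For the read-only phase of this workflow, each consuming pod marks the claim as read-only, so that several pods can mount it at the same time. A minimal sketch, assuming a shared dataset claim named "dataset-mnist" (a hypothetical name, as are the pod name, image, and mount path):

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  # hypothetical pod name
  name: dataset-consumer
spec:
  containers:
    - name: train
      # placeholder image
      image: ubuntu:22.04
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: dataset
          # arbitrary example mount path
          mountPath: /dataset
          # mount read-only inside the container
          readOnly: true
  volumes:
    - name: dataset
      persistentVolumeClaim:
        # hypothetical shared dataset claim
        claimName: dataset-mnist
        # request the volume read-only, so other pods
        # can mount it at the same time
        readOnly: true
</syntaxhighlight>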