Tutorials:Monitoring with Tensorboard on the GPU cluster

== Tensorboard support on the GPU cluster ==


Tensorboard is a monitoring tool for machine learning training, which provides a web browser interface on a port of the server (6116 in our cluster). Each compute node has its own instance of Tensorboard running, which is exposed on node-domain:6116. Tensorboard parses the content of a particular directory of the node. Subdirectories can be mounted as the persistent volume storage class "local-tensorboard" and used to write logs.




=== Local persistent volumes for Tensorboard logging ===


The following obtains a persistent volume claim for a local PV for data storage, as well as a PV for Tensorboard logging. Note that both can be done with a single config file. Code examples can be found in the subdirectory "example_2" of the tutorial sample code, [[File:Kubernetes_samples.zip|Kubernetes samples]]. As a first step, run docker-compose and push the resulting container to the CCU registry:


<syntaxhighlight lang="bash">
> docker-compose up --build
[Wait a bit until the program has started, then ^C]
> docker push ccu.uni-konstanz.de:5000/your.username/tf-mnist-tb:0.1
</syntaxhighlight>
 
Also, create the Kubernetes scripts as before in the kubernetes subdirectory. Then, check out "pvc.yaml":


<syntaxhighlight lang="yaml">
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # the name of the PVC, we refer to this in the container configuration
  name: your-username-tf-mnist-pvc


spec:
  resources:
    requests:
      # storage resource request. This PVC can only be bound to volumes which
      # have at least 8 GiB of storage available.
      storage: 8Gi


  # the requested storage class here is fast data storage.
  storageClassName: local-ssd


  # leave these unchanged, they must match the PV type, otherwise binding will fail
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # the second claim is for Tensorboard logging, it needs its own ID.
  name: your-username-tf-mnist-tb-pvc

spec:
  resources:
    requests:
      # Tensorboard logging typically does not require much storage.
      storage: 2Gi

  # this storage class is parsed by the local Tensorboard instance
  # exposed to the network at port 6116.
  storageClassName: local-tensorboard

  # leave these unchanged, they must match the PV type, otherwise binding will fail
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
</syntaxhighlight>


Note that all names were prefixed with your username to make them unique. When the claim is defined to your satisfaction, apply it like this:


<syntaxhighlight lang="bash">
> kubectl apply -f pvc.yaml
</syntaxhighlight>


You can again check on the status of this (and every other) claim:


<syntaxhighlight lang="bash">
> kubectl get pvc
NAME                            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS        AGE
your-username-tf-mnist-pvc      Pending                                      local-ssd           11s
your-username-tf-mnist-tb-pvc   Pending                                      local-tensorboard   11s
</syntaxhighlight>


Since the claim has not been used by a container yet, it is not yet bound to a persistent volume (PV). The contents of the PV can be accessed like any other PV; see the previous tutorial.
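
For example, once any pod has the claim mounted, you can list the log directory from the outside with <code>kubectl exec</code> (a minimal sketch; the pod name is a placeholder):

<syntaxhighlight lang="bash">
# "your-username-some-pod" is a hypothetical pod that mounts the PVC at /mnt/tensorboard
> kubectl exec -it your-username-some-pod -- ls -la /mnt/tensorboard
</syntaxhighlight>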


== Logging to Tensorboard from within your container ==


In your job file, make sure both PVCs are mounted to the container. We use "/mnt/tensorboard" as the mount point for the Tensorboard log directory.


<syntaxhighlight lang="yaml">
...
      containers:
      - name: your-username-tf-mnist-tb
        volumeMounts:
        - mountPath: "/tmp/data"
          name: pvc-mnist
        - mountPath: "/mnt/tensorboard"
          name: pvc-mnist-tb
...
      volumes:
        - name: pvc-mnist
          persistentVolumeClaim:
            claimName: your-username-tf-mnist-pvc
        - name: pvc-mnist-tb
          persistentVolumeClaim:
            claimName: your-username-tf-mnist-tb-pvc
...
</syntaxhighlight>


We will not cover the details of Tensorboard logging here; see the example code in "application/src/tf_mnist.py" for some initial ideas. Make sure to provide the correct log directory when creating the writer instance for the logs. We suggest creating a new subdirectory for each run of the program and keeping the PVC between runs, so that you can compare different runs, like this:


<syntaxhighlight lang="python">
from datetime import datetime

import tensorflow as tf  # TF 1.x API

# create a new, timestamped subdirectory for this run
tb_base_directory = "/mnt/tensorboard/"
now = datetime.now()
subdir = now.strftime("%Y%m%d-%H%M%S")
tb_out_directory = tb_base_directory + subdir

# `sess` is the tf.Session of your training script
writer = tf.summary.FileWriter(tb_out_directory, sess.graph)
</syntaxhighlight>
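
For reference, here is a minimal self-contained sketch (TF 1.x API, matching the FileWriter above; the run directory and tensor names are placeholders) of how scalar summaries end up in the log directory:

<syntaxhighlight lang="python">
import tensorflow as tf

# hypothetical toy graph: one scalar value tracked per step
value_ph = tf.placeholder(tf.float32, shape=(), name="tracked_value")
summary_op = tf.summary.scalar("example/value", value_ph)

with tf.Session() as sess:
    writer = tf.summary.FileWriter("/mnt/tensorboard/example-run", sess.graph)
    for step in range(10):
        summary = sess.run(summary_op, feed_dict={value_ph: float(step) ** 0.5})
        writer.add_summary(summary, global_step=step)  # tag each point with its step
    writer.close()
</syntaxhighlight>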


Otherwise, please refer to one of the excellent online tutorials on Tensorboard, e.g. [https://itnext.io/how-to-use-tensorboard-5d82f8654496 this one].


== Viewing the Tensorboard of the job ==


Note: Option 1 will be phased out, as local volumes will probably not be supported anymore in the future (they are not flexible enough during scheduling). We suggest a variant of Option 2, with Tensorboard running directly in your compute container, so that the PVC needs to be mounted only once (relevant if your storage class can only be mounted a single time, which is likely, since you need write support).


=== Option 1: Using the compute node's global Tensorboard instance ===


As stated earlier, each compute node has its own instance of Tensorboard running. This instance will automatically display all Tensorboard summary files contained in persistent volumes with <code>storageClassName: local-tensorboard</code>.


First, find out which compute node your pod was allocated to.


<syntaxhighlight lang="bash">
> kubectl get pods | grep your-username
NAME                                  READY   STATUS      RESTARTS   AGE
your-username-tf-mnist-tb-pvc-mqt9m   1/1     Running     0          3m4s

> kubectl describe pod your-username-tf-mnist-tb-pvc-mqt9m | grep Node
Node:               glasya/134.34.226.30
</syntaxhighlight>


Your pod is running on [[Cluster:Compute nodes|Glasya]], IP 134.34.226.30. You can now point your browser to 134.34.226.30:6116 to access the Tensorboard instance for the node. Note that it lists the logs for all currently mounted PVs. To find out which directory your PV corresponds to, you need to check which PV your PVC was bound to, and inspect its data:


<syntaxhighlight lang="bash">
> kubectl get pvc | grep your-username
your-username-tf-mnist-tb-pvc   Bound    local-pv-d07aa16c   25Gi       RWO            local-tensorboard   19m

> kubectl describe pv local-pv-d07aa16c | grep Path
Path:  /mnt/tensorboard/glasya-pv-tb-25gb-2
</syntaxhighlight>


This means that your logs will be the ones prefixed by "glasya-pv-tb-25gb-2" in the Tensorboard instance.
 
=== Option 2: Run your own Tensorboard instance ===


Another option is to create a pod running your own Tensorboard instance which is exposed via a Kubernetes service.


First, create a pod running Tensorboard that watches your summary directory. For this, we can simply use the [https://hub.docker.com/r/tensorflow/tensorflow/ latest Tensorflow container] from Docker Hub: <code>tensorflow/tensorflow:latest-py3</code>. The corresponding pod should look like this:


<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: your-username-tb-pod
  labels:
    run: your-username-tb-0
spec:
  containers:
    - name: your-username-tb-container
      image: tensorflow/tensorflow:latest-py3
      # Execute Tensorboard in your mounted summaries folder. This will make the
      # pod run indefinitely if no errors occur. Make sure to delete the pod if
      # you do not use it anymore.
      command: ["/bin/bash"]
      args: ["-c", "cd /mnt/tensorboard/; tensorboard --logdir ."]

      # Mount the persistent volume where you log Tensorboard summaries to
      volumeMounts:
        - mountPath: "/mnt/tensorboard"
          name: your-username-tb

      # Expose Tensorboard port, which is 6006 by default.
      ports:
      - containerPort: 6006
        protocol: TCP

  restartPolicy: Never

  volumes:
    - name: your-username-tb
      persistentVolumeClaim:
        claimName: your-username-tf-mnist-tb-pvc
</syntaxhighlight>
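
Assuming you saved this pod definition as, say, "tb-pod.yaml" (the filename is a placeholder), run the pod as usual:

<syntaxhighlight lang="bash">
> kubectl apply -f tb-pod.yaml
</syntaxhighlight>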


Next, we need to create a Kubernetes service mapping the Tensorboard pod IP and port to some fixed service IP and expose it publicly. This can be done using the <code>kubectl expose</code> command:
<syntaxhighlight lang="bash">
kubectl expose pod *pod-name* --type=NodePort --name=*your-username-service-name*
</syntaxhighlight>


Replace *pod-name* with the name of the Tensorboard pod you just started and give the service a name. You can check all running services with <code>kubectl get svc</code>; your service should be in this list.
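
For example (a sketch; the ClusterIP and NodePort shown are placeholders, yours will differ):

<syntaxhighlight lang="bash">
kubectl get svc | grep your-username
your-username-service-name   NodePort   10.98.123.45   <none>   6006:30123/TCP   12s
</syntaxhighlight>
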
Kubernetes will automatically choose a port (NodePort) to expose, which we need to access Tensorboard. Get the NodePort with:
<syntaxhighlight lang="bash">
kubectl describe svc *your-username-service-name* | grep NodePort
Type:                   NodePort
NodePort:               <unset>  *NodePort*/TCP
</syntaxhighlight>


Finally, find out the IP of the cluster node the Tensorboard pod is running on, as described in Option 1. Your Tensorboard instance can then be accessed via <code>*cluster-node-ip*:*service-node-port*</code>. For more general information on how to expose an application running in a Kubernetes pod, see [https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/ the Kubernetes documentation].


[[Category:Tutorials]]
