Tutorials:Monitoring with Tensorboard on the GPU cluster
== Tensorboard support on the GPU cluster ==

Tensorboard is a monitoring tool for machine learning training which provides a web browser interface on a port of the server (6116 in our cluster). Each compute node runs its own Tensorboard instance, exposed at node-domain:6116. Tensorboard parses the contents of a particular directory on that node. Subdirectories of it can be mounted via the persistent volume storage class "local-tensorboard" and used to write logs.

=== Local persistent volumes for Tensorboard logging ===

The following obtains two persistent volume claims (PVCs): one for a local PV for data storage, and one for a PV for Tensorboard logging. Note that both can be requested with a single config file. Code examples can be found in the subdirectory "example_2" of the tutorial sample code, [[File:Kubernetes_samples.zip|Kubernetes samples]].

As a first step, run docker-compose and push the resulting container to the CCU registry:

<syntaxhighlight lang="bash">
> docker-compose up --build
[Wait a bit until the program has started, then ^C]
> docker push ccu.uni-konstanz.de:5000/your.username/tf-mnist-tb:0.1
</syntaxhighlight>

Also, create the Kubernetes scripts as before in the kubernetes subdirectory. Then, check out "pvc.yaml":

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # the name of the PVC, we refer to this in the container configuration
  name: your-username-tf-mnist-pvc
spec:
  resources:
    requests:
      # storage resource request. This PVC can only be bound to volumes
      # which have at least 8 GiB of storage available.
      storage: 8Gi
  # the requested storage class here is fast data storage.
  storageClassName: local-ssd
  # leave these unchanged, they must match the PV type, otherwise binding will fail
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # the second claim is for Tensorboard logging, it needs its own name.
  name: your-username-tf-mnist-tb-pvc
spec:
  resources:
    requests:
      # Tensorboard logging typically does not require much storage.
      storage: 2Gi
  # this storage class is parsed by the local Tensorboard instance
  # exposed to the network at port 6116.
  storageClassName: local-tensorboard
  # leave these unchanged, they must match the PV type, otherwise binding will fail
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
</syntaxhighlight>

Note that all names are prepended with your username to make them unique. When the claims are defined to your satisfaction, apply them like this:

<syntaxhighlight lang="bash">
> kubectl apply -f pvc.yaml
</syntaxhighlight>

You can again check on the status of these (and all other) claims:

<syntaxhighlight lang="bash">
> kubectl get pvc
NAME                            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS        AGE
your-username-tf-mnist-pvc      Pending                                      local-ssd           11s
your-username-tf-mnist-tb-pvc   Pending                                      local-tensorboard   11s
</syntaxhighlight>

Since the claims have not been used by a container yet, they are not yet bound to a persistent volume (PV). The contents of the PV can be accessed like any other PV; see the previous tutorial.

== Logging to Tensorboard from within your container ==

In your job file, make sure both PVCs are mounted into the container. We use "/mnt/tensorboard" as the mount point for the Tensorboard log directory:

<syntaxhighlight lang="yaml">
...
  containers:
    - name: your-username-tf-mnist-tb
      volumeMounts:
        - mountPath: "/tmp/data"
          name: pvc-mnist
        - mountPath: "/mnt/tensorboard"
          name: pvc-mnist-tb
...
  volumes:
    - name: pvc-mnist
      persistentVolumeClaim:
        claimName: your-username-tf-mnist-pvc
    - name: pvc-mnist-tb
      persistentVolumeClaim:
        claimName: your-username-tf-mnist-tb-pvc
...
</syntaxhighlight>

We will not cover the details of Tensorboard logging here; see the example code in "application/src/tf_mnist.py" for some initial ideas. Make sure to provide the correct log directory when creating the writer instance for the logs. We suggest creating a new subdirectory for each run of the program and keeping the PVC around, so that you can compare different runs, like this:

<syntaxhighlight lang="python">
from datetime import datetime

tb_base_directory = "/mnt/tensorboard/"
now = datetime.now()
subdir = now.strftime("%Y%m%d-%H%M%S")
tb_out_directory = tb_base_directory + subdir

writer = tf.summary.FileWriter(tb_out_directory, sess.graph)
</syntaxhighlight>
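To make the writer usage concrete, here is a minimal, self-contained logging sketch. It is not part of the sample code: it assumes the TF 1.x API used above, reuses <code>tb_out_directory</code> from the previous snippet, and logs a synthetic loss value as a stand-in for your real training metric:

<syntaxhighlight lang="python">
import tensorflow as tf  # TF 1.x API, as in the sample code

# Scalar placeholder fed with the current loss value at each step.
loss_value = tf.placeholder(tf.float32, shape=(), name="loss_value")
loss_summary = tf.summary.scalar("loss", loss_value)

with tf.Session() as sess:
    writer = tf.summary.FileWriter(tb_out_directory, sess.graph)
    for step in range(100):
        fake_loss = 1.0 / (step + 1)  # synthetic, decreasing "loss"
        summary = sess.run(loss_summary, feed_dict={loss_value: fake_loss})
        writer.add_summary(summary, global_step=step)
    writer.close()
</syntaxhighlight>

After a few steps, the "loss" curve should appear under the run's subdirectory in Tensorboard.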
Otherwise, please refer to one of the excellent online tutorials on Tensorboard, e.g. [https://itnext.io/how-to-use-tensorboard-5d82f8654496 this one].

== Viewing the Tensorboard of the job ==

Note: Option 1 will be phased out, as local volumes will probably not be supported anymore in the future (they are not flexible enough during scheduling). We suggest a variant of Option 2, with Tensorboard running directly in your compute container, so that the PVC needs to be mounted only once (relevant if your storage class can only be mounted a single time, which is likely since you need write support).

=== Option 1: Using the compute node's global Tensorboard instance ===

As stated earlier, each compute node runs its own instance of Tensorboard. This instance automatically displays all Tensorboard summary files contained in persistent volumes with <code>storageClassName: local-tensorboard</code>. First, find out which compute node your pod was allocated to:

<syntaxhighlight lang="bash">
> kubectl get pods | grep your-username
NAME                                  READY   STATUS    RESTARTS   AGE
your-username-tf-mnist-tb-pvc-mqt9m   1/1     Running   0          3m4s
> kubectl describe pod your-username-tf-mnist-tb-pvc-mqt9m | grep Node
Node:   glasya/134.34.226.30
</syntaxhighlight>

Your pod is running on [[Cluster:Compute nodes|Glasya]], IP 134.34.226.30. You can now point your browser to 134.34.226.30:6116 to access the Tensorboard instance of that node. Note that it lists the logs of all currently mounted PVs. To find out which directory your PV corresponds to, check which PV your PVC was bound to and inspect its data:

<syntaxhighlight lang="bash">
> kubectl get pvc | grep your-username
your-username-tf-mnist-tb-pvc   Bound   local-pv-d07aa16c   25Gi   RWO   local-tensorboard   19m
> kubectl describe pv local-pv-d07aa16c | grep Path
Path:   /mnt/tensorboard/glasya-pv-tb-25gb-2
</syntaxhighlight>

This means that your logs will be the ones prefixed with "glasya-pv-tb-25gb-2" in the Tensorboard instance.
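As a shortcut (standard kubectl behavior, not specific to this cluster), the wide output format prints the node of each pod directly in a NODE column, which saves the extra <code>describe</code> step:

<syntaxhighlight lang="bash">
> kubectl get pods -o wide
</syntaxhighlight>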
=== Option 2: Run your own Tensorboard instance ===

Another option is to create a pod running your own Tensorboard instance and expose it via a Kubernetes service. First, create a pod running Tensorboard against your summary directory. For this we can simply use the [https://hub.docker.com/r/tensorflow/tensorflow/ latest Tensorflow container] from Docker Hub: <code>tensorflow/tensorflow:latest-py3</code>. The corresponding pod should look like this:

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: your-username-tb-pod
  labels:
    run: your-username-tb-0
spec:
  containers:
    - name: your-username-tb-container
      image: tensorflow/tensorflow:latest-py3
      # Run Tensorboard on your mounted summaries folder. This makes the pod
      # run indefinitely if no errors occur. Make sure to delete the pod once
      # you no longer use it.
      command: ["/bin/bash"]
      args: ["-c", "cd /mnt/tensorboard/; tensorboard --logdir ."]
      # Mount the persistent volume where you log Tensorboard summaries to.
      volumeMounts:
        - mountPath: "/mnt/tensorboard"
          name: your-username-tb
      # Expose the Tensorboard port, which is 6006 by default.
      ports:
        - containerPort: 6006
          protocol: TCP
  restartPolicy: Never
  volumes:
    - name: your-username-tb
      persistentVolumeClaim:
        claimName: your-username-tb-pvc
</syntaxhighlight>

Run the pod as usual. Next, we need to create a Kubernetes service that maps the Tensorboard pod's IP and port to a fixed service IP and exposes it publicly. This can be done using the <code>kubectl expose</code> command:

<syntaxhighlight lang="bash">
kubectl expose pod *pod-name* --type=NodePort --name=*your-username-service-name*
</syntaxhighlight>

Replace *pod-name* with the name of the Tensorboard pod you just started and give the service some name. You can list all running services with <code>kubectl get svc</code>; your service should be in this list. Kubernetes automatically chooses a port (the NodePort) to expose, which we need in order to access Tensorboard. Get the NodePort with:

<syntaxhighlight lang="bash">
> kubectl describe svc *your-username-service-name* | grep NodePort
Type:       NodePort
NodePort:   <unset>  *NodePort*/TCP
</syntaxhighlight>

Finally, find out the IP of the cluster node the Tensorboard pod is running on, as described in Option 1. Your Tensorboard instance can then be accessed via <code>*cluster-node-ip*:*service-node-port*</code>. For more general information on how to expose an application running in a Kubernetes pod, see [https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/ the Kubernetes documentation].
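If you prefer a declarative setup over <code>kubectl expose</code>, an equivalent service can also be written as a manifest. The following is a sketch, not part of the sample code; the service name is a placeholder, and the selector must match the pod label <code>run: your-username-tb-0</code> defined above:

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Service
metadata:
  name: your-username-tb-service
spec:
  type: NodePort
  # Must match the label of the Tensorboard pod defined above.
  selector:
    run: your-username-tb-0
  ports:
    - port: 6006        # service port
      targetPort: 6006  # Tensorboard's containerPort
      protocol: TCP
</syntaxhighlight>

Apply it with <code>kubectl apply -f</code> on the file, then read off the assigned NodePort with <code>kubectl get svc</code> as above.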
[[Category:Tutorials]]