Tutorials:Run the example container on the cluster
== Requirements ==

* A working connection and login to the Kubernetes cluster.
* A valid namespace selected with authorization to run pods.
* A test container pushed to the CCU docker registry.

== Set up a Kubernetes job script ==

Download the [[File:Kubernetes_samples.zip|Kubernetes samples]] and look at the kubernetes subdirectory in example_1. Check out "make_configs.sh" and run it after you have set the bash environment variable "KUBERNETES_USER" to your cluster username:

<syntaxhighlight lang="bash">
> export KUBERNETES_USER=your.username
> ./make_configs.sh
</syntaxhighlight>

This will create a number of yaml files (Kubernetes configuration files) from the templates in the "template" subdirectory. Check out the first example, "job-script.yaml":

<syntaxhighlight lang="yaml">
apiVersion: batch/v1
kind: Job
metadata:
  # name of the job
  name: your-username-tf-mnist
spec:
  template:
    spec:
      # list of containers belonging to the job starts here
      containers:
      # container name used for pod creation
      - name: your-username-tf-mnist-container
        # container image from the registry
        image: ccu.uni-konstanz.de:5000/your.username/tf_mnist:0.1
        # container resources requested from the node
        resources:
          # requests are minimum resource requirements
          requests:
            # this gives us a minimum of 2 GiB of main memory to work with.
            memory: "2Gi"
            # you should allocate at least 1 CPU for machine learning jobs,
            # usually more if you, for example, have separate threads for reading data.
            # 1 CPU unit is 1 CPU core or hyperthread, depending on CPU architecture.
            # Note that these are typically not a scarce resource on our GPU servers,
            # so you can be a bit generous.
            cpu: 1
          # limits are maximum resource allocations
          limits:
            # this gives an absolute limit of 3 GiB of main memory.
            # exceeding it means the container exits immediately with an error.
            memory: "3Gi"
            # CPU limit, but the pod will usually not be killed for excessive CPU use
            cpu: 1
            # this requests a number of GPUs. GPUs will be allocated to the container
            # exclusively. No fractional GPUs can be requested.
            # When executing nvidia-smi in the container, it should show exactly this
            # number of GPUs.
            #
            # PLEASE DO NOT SET THE NUMBER TO ZERO, EVER, AND ALWAYS INCLUDE THIS LINE.
            # ALWAYS PUT IT IN THE SECTION "limits", NOT "requests".
            #
            # It is a known limitation of NVIDIA's runtime that if zero GPUs are requested,
            # then actually *all* GPUs are exposed in the container.
            # We are looking for a fix for this.
            #
            nvidia.com/gpu: "1"
        # the command which is executed after container creation
        command: ["/application/run.sh"]
      # login credentials for the docker registry.
      # for convenience, a read-only credential is provided as a secret in each namespace.
      imagePullSecrets:
      - name: registry-ro-login
      # containers will never restart
      restartPolicy: Never
  # number of retries after failure.
  # since we typically have to fix something in this case, it is set to zero by default.
  backoffLimit: 0
</syntaxhighlight>

When we start this job, it will create a single container, based on the image we previously uploaded to the registry, on a suitable node which serves the selected namespace of the cluster.

<syntaxhighlight lang="bash">
> kubectl apply -f job-script.yaml
</syntaxhighlight>
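If the cluster accepts the job, kubectl should acknowledge the apply with a confirmation line along the lines of "job.batch/your-username-tf-mnist created". Beyond the pod-level checks in the next section, you can also query the job resource directly; a quick sketch (the exact output columns depend on your kubectl version):

<syntaxhighlight lang="bash">
# list the job resources in your namespace. A completions/successful column
# (naming depends on the kubectl version) shows whether the pod has finished.
> kubectl get jobs

# or watch the pod list update live until you stop it with Ctrl-C
> kubectl get pods --watch
</syntaxhighlight>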
<syntaxhighlight lang="bash"> > kubectl get pods # somewhere in the output you should see a line like this: NAME READY STATUS RESTARTS AGE your-username-tf-mnist-xxxx 1/1 Running 0 7s </syntaxhighlight> Now that you now the name of the pod, you can check in on the logs: <syntaxhighlight lang="bash"> # replace xxxx with the code from get pods. > kubectl logs your-username-tf-mnist-xxxx # this should show the console output of your python program </syntaxhighlight> or get some more information about the job, the node the pod was placed on etc. <syntaxhighlight lang="bash"> > kubectl describe job your-username-tf-mnist # replace xxxx with the code from get pods. > kubectl describe pod your-username-tf-mnist-xxxx </syntaxhighlight> You can also open a shell in the running container, just as with docker: <syntaxhighlight lang="bash"> > kubectl exec -it your-username-tf-mnist-xxxx /bin/bash root@tf-mnist-xxxxx:/workspace# nvidia-smi Tue Jun 18 14:25:00 2019 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla V100-SXM3... On | 00000000:E7:00.0 Off | 0 | | N/A 39C P0 68W / 350W | 30924MiB / 32480MiB | 6% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| +-----------------------------------------------------------------------------+ root@tf-mnist-xxxxx:/workspace# ls /application/ nn.py run.sh tf-mnist.py root@tf-mnist-xxxxx:/workspace# </syntaxhighlight> == Shutting down the job early == If while inspecting the job you notice that it does not run correctly, you can shut it down prematurely with <syntaxhighlight lang="bash"> > kubectl delete -f job-script.yaml </syntaxhighlight> Note that this also deletes all data your container might have written to its filesystem layer. If you want to save your trained models, you have to mount persistent volumes from the Kubernetes cluster into the container. This is covered in the [[Tutorials:Persistent volumes on the Kubernetes cluster|persistent volume tutorial]]. [[Category:Tutorials]]
[[Category:Tutorials]]