Tutorials:Install the nVidia docker system
== Access the containers on the nVidia GPU cloud ==

nVidia provides many optimized container images for their GPU infrastructure, covering a variety of tasks (deep learning, high-performance computing, etc.). You should choose these images as the source container images for your applications. To be able to do so, you first need an account at the [https://ngc.nvidia.com/catalog/landing nVidia GPU cloud]. Once you are signed in, click on "Configuration" in the left-hand menu and follow the steps to obtain an API key for the registry. It will be a long string of characters which looks like this:

<syntaxhighlight lang="text">
QWEzamZyNWhhaWZuN2J2aW5hNjBzdmk5N206NzMwMTU5MWMtNzE0My00N2FmLTk4ZTktY2EzZmQyYzgzZDUz
</syntaxhighlight>

Copy it and store it in a file somewhere safe so that you do not lose it. You can generate a new key at any time, but doing so invalidates the old one.

You can now tell docker to log in to the nVidia GPU cloud container registry so that you can pull container images from there. For this, use the shell command

<syntaxhighlight lang="bash">
docker login -u '$oauthtoken' --password-stdin nvcr.io <<< 'your API key here between the quotes'
</syntaxhighlight>

That's it. I suggest you put the above command in a script in $HOME/bin, so you can quickly rerun it after a reboot and you do not forget your key. Remember to protect your folders from being read by other users if they contain this kind of sensitive information.

You can test whether everything worked by pulling a container from the nVidia cloud and then firing up Python inside the container:

<syntaxhighlight lang="bash">
docker run -it --runtime=nvidia nvcr.io/nvidia/tensorflow:18.06-py3 python
</syntaxhighlight>

The first run will be slow, as docker needs to download all the image layers; after that they are cached in your local storage and the container starts up much faster. You will enter an interactive Python interpreter which runs inside the container.
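The login script suggested above could be sketched as follows. This is only a sketch: the script name, the key file location, and the function name are illustrative assumptions, not part of any nVidia tooling. Keeping the key in a separate file (readable only by you, e.g. via chmod 600) avoids embedding it in the script itself.

<syntaxhighlight lang="bash">
#!/usr/bin/env bash
# Illustrative login helper, e.g. saved as $HOME/bin/ngc-login.sh.
# Assumes the API key was stored in the file below (assumed path).
NGC_KEY_FILE="$HOME/.ngc_api_key"

ngc_login() {
    # The user name for nvcr.io is the literal string $oauthtoken,
    # hence the single quotes; the key itself is fed in via stdin.
    docker login -u '$oauthtoken' --password-stdin nvcr.io < "$NGC_KEY_FILE"
}
</syntaxhighlight>

After sourcing the script, running <code>ngc_login</code> repeats the registry login without retyping the key.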
To test whether GPU acceleration works in Tensorflow, you can issue for example the following commands in the interpreter (enter an empty line after each "with" block and take care to copy the right number of spaces in front of the lines as well):

<syntaxhighlight lang="python">
import tensorflow as tf

with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(c))
</syntaxhighlight>

If you don't get any errors and the final output is a matrix like this:

<syntaxhighlight lang="text">
[[ 22.  28.]
 [ 49.  64.]]
</syntaxhighlight>

then everything is fine. When you are done testing, just enter the line "quit()" to exit the Python interpreter, which will also terminate the container.
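As a sanity check independent of TensorFlow and the GPU, the expected result above can be reproduced with a plain-Python matrix multiply using the same values: a 2×3 matrix times a 3×2 matrix. The helper function here is purely illustrative.

<syntaxhighlight lang="python">
# Reproduce the expected output of the TensorFlow test above with a
# plain-Python matrix multiplication (no libraries required).

def matmul(a, b):
    """Multiply matrix a (list of rows) by matrix b (list of rows)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

a = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]   # same values as tf.constant 'a', shape [2, 3]
b = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]        # same values as tf.constant 'b', shape [3, 2]

print(matmul(a, b))     # -> [[22.0, 28.0], [49.0, 64.0]]
</syntaxhighlight>

If the container prints a different matrix, the problem is in the TensorFlow setup, not in the arithmetic.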