= Tutorials: container which trains MNIST using Tensorflow =
=== application/Dockerfile ===

This is the equivalent of a makefile for our application container. It should be largely self-explanatory thanks to the comments.

<syntaxhighlight lang="docker">
# First, we define the base image of the container.
#
# Our example container is derived from nVidia's image
# 'tensorflow:18.06-py3'. The tag 18.06 is nVidia's internal
# container version number; the image contains TensorFlow
# set up for Python 3.
#
FROM nvcr.io/nvidia/tensorflow:18.06-py3
#
# This is the maintainer of the container, referenced
# by e-mail address.
#
MAINTAINER ccu@uni-konstanz.de
#
# This is the first line which tells us how this container
# differs from the base image.
#
# In this case, we copy the subdirectory "src" from
# the directory containing the Dockerfile into the
# directory "/application" of the container image.
#
COPY src /application
#
# Many COPY commands can be issued, as well as RUN
# commands, which execute commands inside the container,
# e.g. to install packages.
#
# The following is just an example; it is not necessary for
# the application to run. The final container image will
# now contain the "nano" editor, just in case you need it
# when logging into the container (yes, you can do this while
# it is running). You should squeeze as many package
# installations as possible into one RUN command, as each one
# generates a new intermediate container image.
#
# Note that COPY as well as RUN are executed with root
# privileges inside the container. This also means you can
# access anything on the host, which is why being a Docker
# user is basically equivalent to being a sudo user on the
# system. When running on the Kubernetes cluster, container
# privileges are of course much more limited.
#
RUN apt-get update && apt-get install -y nano
#
# This is what will be executed by default when the container is run.
#
ENTRYPOINT [ "/application/run.sh" ]
</syntaxhighlight>

That is essentially all that distinguishes a container deployment from executing the application directly on your system. There are, of course, more details to learn, in particular how to mount external filesystems of the cluster into the container so that you can read and write persistent data. More on this later.

Let's now run the container defined in the above configuration files.
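As a quick preview of what that looks like locally, the following is a minimal sketch, not the exact commands used later in this tutorial; the image tag <code>mnist-tf</code> is just a placeholder name.

<syntaxhighlight lang="bash">
# Build the image, using the "application" directory (which
# contains the Dockerfile and the "src" subdirectory) as the
# build context. The tag "mnist-tf" is an arbitrary local name.
docker build -t mnist-tf application

# Run the container. --runtime=nvidia (or --gpus all on newer
# Docker versions) makes the host GPUs visible inside it, and
# --rm removes the container once it exits. The ENTRYPOINT
# /application/run.sh is executed automatically.
docker run --rm --runtime=nvidia mnist-tf
</syntaxhighlight>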