= Tutorials: container which trains MNIST using TensorFlow =
=== docker-compose.yml ===

Together with the comments, this file should be largely self-explanatory. In summary, this docker-compose file builds the application container, tags it with a specific name, and then runs it once on our system, using a pre-configured entrypoint (i.e. a command which is executed after the container has been created). Please edit this file now and set your own username in the tag of the image.

<syntaxhighlight lang="yaml">
#
# This defines the version of the docker-compose.yml
# file format we are using.
#
version: '2.3'

#
# In this section, all the services we are going to
# start are defined. Each service corresponds to one
# container.
#
services:

  # Our application container is the only one we start.
  application:

    # This tells docker-compose that we intend to
    # build the application container from scratch; it
    # is not just a pre-existing image. The build configuration
    # (comparable to a makefile) resides in the subdirectory
    # "application" in the file "Dockerfile".
    build:
      context: ./application
      dockerfile: Dockerfile

    # This gives the image which has been built a "tag".
    # A tag is a unique name which you can use to refer to this image.
    # It should be of the form "<registry>/<username>/<application>:<version>".
    # If <version> is not specified, it defaults to "latest".
    #
    # The registry should be the one of the CCU, likewise your
    # username. You can also use a temporary image name here and
    # later use the "docker tag" command to rename it to the final name
    # you want to push to the registry.
    #
    image: ccu.uni-konstanz.de:5000/<your.username>/tf_mnist:0.1

    # The container needs the NVIDIA container runtime.
    # The following is equivalent to specifying "docker run --runtime=nvidia".
    # It is not necessary if nvidia is already configured as the
    # default runtime (as on the Kubernetes cluster).
    runtime: nvidia

    # Environment variables set when running the image,
    # which can, for example, be used to configure the NVIDIA base
    # container or your application. You can use these to
    # configure your own code as well.
    environment:
      - NVIDIA_VISIBLE_DEVICES=all

    # This container should only be started once: if it fails,
    # we have to fix errors anyway; if it exits successfully,
    # we are happy.
    restart: "no"

    # The entry point of the container, i.e. the script or executable
    # which is started after it has been created.
    entrypoint: "/application/run.sh"
</syntaxhighlight>
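The build, tag, and push workflow described in the comments above could look like the following shell session. This is a sketch, not part of the tutorial itself: it assumes <code>docker</code> and <code>docker-compose</code> are installed, that you are logged in to the CCU registry, and that the temporary name <code>tf_mnist:0.1</code> is only an example.

<syntaxhighlight lang="shell">
# Build the image defined in docker-compose.yml; it is
# tagged with the name given under "image:".
docker-compose build

# If you built under a temporary name instead, rename it
# to the final registry name before pushing:
docker tag tf_mnist:0.1 ccu.uni-konstanz.de:5000/<your.username>/tf_mnist:0.1

# Push the tagged image to the CCU registry (log in first
# with "docker login ccu.uni-konstanz.de:5000" if needed) ...
docker push ccu.uni-konstanz.de:5000/<your.username>/tf_mnist:0.1

# ... and run the service once on the local machine.
docker-compose up
</syntaxhighlight>

Because <code>restart</code> is set to <code>"no"</code>, <code>docker-compose up</code> runs the container exactly once and does not restart it, whether it fails or exits successfully.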
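The entrypoint <code>/application/run.sh</code> itself is not shown in this section. As a purely hypothetical sketch of what such a wrapper might contain (the training script name <code>train_mnist.py</code> is an assumption, not part of the tutorial):

<syntaxhighlight lang="shell">
#!/bin/sh
# Hypothetical entrypoint sketch: abort on the first error ...
set -e

# ... then replace the shell with the (assumed) training script,
# so the container exits with the script's exit code.
exec python /application/train_mnist.py
</syntaxhighlight>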