Initializing the Kubernetes cluster
== Kubernetes and prerequisites (every node) ==

Install Kubernetes on Ubuntu 18.04. The install script assumes version 1.14.3 is pulled; check how to pin (hold) the package version so that later upgrades do not change it. On new systems, copy the install script over from the master node.

<syntaxhighlight lang="bash">
> cd init
> ./install_kubernetes.sh
</syntaxhighlight>

Reconfigure the Docker runtime. Edit /etc/docker/daemon.json as follows:

<syntaxhighlight lang="json">
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
</syntaxhighlight>

On nodes with an nVidia GPU, additionally add the following entries:

<syntaxhighlight lang="json">
"default-runtime": "nvidia",
"default-shm-size": "1g",
"runtimes": {
    "nvidia": {
        "path": "nvidia-container-runtime",
        "runtimeArgs": []
    }
}
</syntaxhighlight>

Restart the Docker daemon:

<syntaxhighlight lang="bash">
> mkdir -p /etc/systemd/system/docker.service.d
> systemctl daemon-reload
> systemctl restart docker
</syntaxhighlight>

Make sure swap is off:

<syntaxhighlight lang="bash">
> sudo swapoff -a
</syntaxhighlight>

Check whether swap is still configured in /etc/fstab and delete the entry if it is; otherwise swap will be re-enabled on the next boot.

== Spin up the master node ==

Use kubeadm with vanilla defaults to initialize the control plane.

<syntaxhighlight lang="bash">
> sudo systemctl enable docker.service
> sudo kubeadm init
</syntaxhighlight>

If this fails at any point, run kubeadm reset after the problems have been fixed, before trying to re-initialize.

* Post-init steps to set up the admin user on this account:

<syntaxhighlight lang="bash">
> cd init
> ./finalize_master.sh
</syntaxhighlight>

== Update kubelet configuration for master node ==

Edit /etc/kubernetes/manifests/kube-controller-manager.yaml:

<syntaxhighlight lang="yaml">
spec:
  containers:
  - command:
    # add these two
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
</syntaxhighlight>

Copy certs/ca.crt (the certificate for ccu.uni-konstanz.de) to /usr/share/ca-certificates/ca-dex.pem, then edit /etc/kubernetes/manifests/kube-apiserver.yaml:

<syntaxhighlight lang="yaml">
spec:
  containers:
  - command:
    # add these five
    - --oidc-issuer-url=https://ccu.uni-konstanz.de:32000/dex
    - --oidc-client-id=loginapp
    - --oidc-ca-file=/usr/share/ca-certificates/ca-dex.pem
    - --oidc-username-claim=name
    - --oidc-groups-claim=groups
</syntaxhighlight>

== Daemonsets on master node ==

=== Flannel daemonset (pod network for communication) ===

<syntaxhighlight lang="bash">
> cd init
> ./start_pod_network.sh
</syntaxhighlight>

=== nVidia daemonset ===

<syntaxhighlight lang="bash">
> cd init
> ./deploy_nvidia_device_plugin.sh
</syntaxhighlight>

The daemonset should be active on any node with an nVidia GPU.

== Authentication systems ==

The master node should now be able to log in to the Docker registry of the cluster:

<syntaxhighlight lang="bash">
> docker login https://ccu.uni-konstanz.de:5000
Username: bastian.goldluecke
Password:
</syntaxhighlight>

We also need to provide the read-only secret for the Docker registry in every namespace. TODO: howto; a possible approach is sketched below.
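One way to do this, sketched here as an assumption rather than the cluster's documented procedure, is to create a docker-registry secret per namespace and attach it to the default service account. The user name registry-ro, the secret name registry-readonly, and the placeholders in angle brackets are hypothetical:

<syntaxhighlight lang="bash">
# Create an image pull secret in one namespace; user and secret
# names are hypothetical placeholders for the read-only account.
> kubectl create secret docker-registry registry-readonly \
      --docker-server=ccu.uni-konstanz.de:5000 \
      --docker-username=registry-ro \
      --docker-password=<read-only-password> \
      --namespace=<namespace>

# Attach the secret to the namespace's default service account so
# pods pull images without listing imagePullSecrets themselves.
> kubectl patch serviceaccount default --namespace=<namespace> \
      --patch '{"imagePullSecrets": [{"name": "registry-readonly"}]}'
</syntaxhighlight>

Repeating this for every namespace (e.g. in a loop over kubectl get namespaces) would give all pods read-only pull access to the registry.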
Finally, we need to set up all the rules for RBAC.

<syntaxhighlight lang="bash">
> cd rbac

# generate namespaces for user groups
> ./generate_namespaces.sh

# label all compute nodes with the namespace they serve
# (after they are up; needs to be redone when new nodes are added)
> ./label_nodes.sh

# set up access rights for namespaces
> kubectl apply -f rbac.yaml

# set up rights for which namespaces can access which compute node
> kubectl apply -f node_to_groups.yaml
</syntaxhighlight>

== Persistent volumes ==

=== Local persistent volumes ===

Check the directory local_storage:

* Clone the git repository for the provisioner using clone_provisioner.sh (delete it first if it is already present).
* Install helm via install_helm.sh and get_helm.sh. Do NOT run helm init (unsafe and soon obsolete).
* Set up and run the provisioner:

<syntaxhighlight lang="bash">
> cd install
> ./generate_config.sh
> kubectl apply -f install_storageclass.yaml
> kubectl apply -f install_service.yaml
> kubectl apply -f provisioner_generated.yaml
</syntaxhighlight>

After the local persistent volumes on the nodes have been generated in /mnt/kubernetes, they should show up under

<syntaxhighlight lang="bash">
> kubectl get pv
</syntaxhighlight>
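To verify the setup end to end, a claim can be bound against one of the local volumes. This is a minimal sketch, assuming the storage class installed by install_storageclass.yaml is named local-storage; check the generated YAML for the actual name:

<syntaxhighlight lang="bash">
# Create a test claim against the local storage class.
# "local-storage" is an assumed class name; verify it first with
# kubectl get storageclass.
> kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
EOF
</syntaxhighlight>

Local volumes are typically bound with volumeBindingMode WaitForFirstConsumer, so the claim may stay Pending until a pod that mounts it is scheduled; after that, kubectl get pv should show one of the volumes as Bound. Remove the test claim with kubectl delete pvc test-claim when done.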