LIMA 0.2.1 Documentation SUSE

Contents

- What is Kubernetes?
- What is KubeOps?
- Requirements
  - Hardware
- Installation LIMA
  - Prerequisites
    - On every node
  - User requirements
  - Configuration
    - On every node
    - On admin node
  - Install sshpass: Optional
  - Install rpm
  - Install KubeOps rpm on AirGap environment
- YAML file syntax
  - Configure cluster using YAML
  - Configure node using YAML
- How to use Certificates
- How to use LIMA
  - Set up a single node cluster for testing
  - Set up a single master cluster
  - Add node to a single master cluster
  - Add multiple nodes to a single master cluster at once
  - Delete nodes from the kubernetes cluster
  - Show the version of LIMA
- Attachments
  - Changes by KubeOps
    - On master node when setting up a cluster
    - On clustermaster node when node is added to the cluster
    - On master nodes which will be joined to the cluster
    - On Kubernetes workers
  - Installed packages and versions
  - Product environment considerations
    - Recommended architecture
    - Installation scenarios
    - Kubernetes Networking
    - Persistent Storage
    - Cluster Storage
    - Taints and tolerations
  - Further explanation of YAML file syntax
    - clusterconfig API Objects
    - nodeconfig API Objects
  - Linkpage
  - Known Issues
Last edited by Martin Hafner 36 minutes ago

What is Kubernetes?

Kubernetes is an open source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.

In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters.

A Kubernetes cluster typically consists of two types of nodes, each responsible for different aspects of functionality:

- Master nodes – These nodes host the control plane aspects of the cluster and are responsible for, among other things, the API endpoint which the users interact with, and provide scheduling for pods across resources. Typically, these nodes are not used to schedule application workloads.
- Worker nodes – Nodes which are responsible for executing workloads for the cluster users.
What is KubeOps?

KubeOps is a software suite and includes:

- LIMA: a CLI application for creating a single control-plane Kubernetes cluster or a high-availability Kubernetes cluster.
- SINA: a tool that allows the user to install packages with all their dependencies via Helm charts and to list all packages from the index.yaml.

The main goal behind KubeOps is to make Kubernetes secure and easy to use for everyone. KubeOps gives you the possibility to automate thousands of Kubernetes clusters. Our focus is to provide a secure, easy-to-manage and AirGap-compatible Kubernetes environment immediately after installation.

Requirements

Hardware

Minimal required memory: Admin node: 2GB, Master node: 2GB, Worker node: 2GB

Minimal required CPU cores: Admin node: 2 cores, Master node: 2 cores, Worker node: 2 cores

- A machine running openSUSE 15.1 as your admin node.
- A machine running openSUSE 15.1 for every other node, either a master node or a worker node.

Installation LIMA

Prerequisites

On every node

Zypper repository: You need an available connection to a default Zypper repository when installing LIMA.
Registry: You either need an internet connection to use the Kubernative registry registry.kubernative.net, or a local registry for an AirGap installation to some extent.
Persistent storage (Optional): You need persistent storage using NFS.
Note: Please see under Attachments for more information.
Default route
You need to set an IP as a default route, within the IP range of your Kubernetes cluster, on every machine. Do not set your default route on 'localhost' or any private address. We recommend setting your default route to a valid unicast address.
ip route add default via <valid_IP_address> dev <network_adapter>
Example
ip route add default via 10.1.1.1 dev eth0
Check your default route:

ip route

If you have set your default route correctly, you should see it in the output.
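The check above can also be scripted. The following is only a sketch: the route output is simulated here so the snippet is self-contained; on a real node, replace the literal string with the output of `ip route`.

```shell
#!/bin/sh
# Sketch: detect whether a default route is configured.
# The $routes content is a simulated example of `ip route` output;
# on a real machine use: routes=$(ip route)
routes="default via 10.1.1.1 dev eth0
10.1.1.0/24 dev eth0 proto kernel scope link src 10.1.1.5"

if printf '%s\n' "$routes" | grep -q '^default via'; then
  echo "default route present"
else
  echo "default route missing"
fi
```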
User requirements

All users specified in the YAML config files need sudo privileges or must be 'root'.

Configuration

On every node

DNS server (Optional): In order to use FQDNs you have to set up a DNS server.

Hostname: You have to assign a specific lowercase hostname to every machine you are using. It is recommended to use self-explanatory hostnames.

Command:

hostnamectl set-hostname <name of node>

Examples:

Admin:

hostnamectl set-hostname admin

Master:

hostnamectl set-hostname master

Node1:

hostnamectl set-hostname node1

Node2:

hostnamectl set-hostname node2
On admin node

The environment variable KUBEOPSROOT must be set to an absolute path to a folder that is readable and writable on your admin node.

Set the environment variable temporarily:

export KUBEOPSROOT="<your path>"

Or set the environment variable permanently:

echo 'export KUBEOPSROOT="<your path>"' >> $HOME/.bashrc

Check if the environment variable is set:

echo $KUBEOPSROOT

Install sshpass: Optional

If you want to use ssh with a password instead of an ssh key, you need to install sshpass.

zypper install -y sshpass

Install rpm

1. Download the rpm from our website http://www.kubernative.net/:

curl -O http://www.kubernative.net/images/KubeOps/kubeops-<version>.x86_64.rpm

2. Install the rpm:

zypper install <path to rpm>/kubeops-<version>.x86_64.rpm

3. Check if LIMA is running:

lima

Install KubeOps rpm on AirGap environment

1. Set up a local docker registry in your AirGap environment. Manual: https://docs.docker.com/registry/deploying/

2. Pull the following images with a machine that has docker installed and internet access:

docker pull registry.kubernative.net/lima:v0.2.1
docker pull gcr.io/google-containers/kube-apiserver:v1.17.9
docker pull gcr.io/google-containers/kube-controller-manager:v1.17.9
docker pull gcr.io/google-containers/kube-proxy:v1.17.9
docker pull gcr.io/google-containers/kube-scheduler:v1.17.9
docker pull gcr.io/google-containers/pause:3.2
docker pull gcr.io/google-containers/coredns:1.6.5
docker pull gcr.io/google-containers/etcd:3.4.3-0
docker pull docker.io/weaveworks/weave-kube:2.6.2
docker pull docker.io/weaveworks/weave-npc:2.6.2

3. Tag them:

docker tag registry.kubernative.net/lima:v0.2.1 <your registry>/registry.kubernative.net/lima:v0.2.1
docker tag gcr.io/google-containers/kube-apiserver:v1.17.9 <your registry>/gcr.io/google-containers/kube-apiserver:v1.17.9
docker tag gcr.io/google-containers/kube-controller-manager:v1.17.9 <your registry>/gcr.io/google-containers/kube-controller-manager:v1.17.9
docker tag gcr.io/google-containers/kube-proxy:v1.17.9 <your registry>/gcr.io/google-containers/kube-proxy:v1.17.9
docker tag gcr.io/google-containers/kube-scheduler:v1.17.9 <your registry>/gcr.io/google-containers/kube-scheduler:v1.17.9
docker tag gcr.io/google-containers/pause:3.2 <your registry>/gcr.io/google-containers/pause:3.2
docker tag gcr.io/google-containers/coredns:1.6.5 <your registry>/gcr.io/google-containers/coredns:1.6.5
docker tag gcr.io/google-containers/etcd:3.4.3-0 <your registry>/gcr.io/google-containers/etcd:3.4.3-0
docker tag docker.io/weaveworks/weave-kube:2.6.2 <your registry>/docker.io/weaveworks/weave-kube:2.6.2
docker tag docker.io/weaveworks/weave-npc:2.6.2 <your registry>/docker.io/weaveworks/weave-npc:2.6.2

4. Export your images as tar files:

docker save -o ./lima.tar <your registry>/registry.kubernative.net/lima:v0.2.1
docker save -o ./kube-apiserver.tar <your registry>/gcr.io/google-containers/kube-apiserver:v1.17.9
docker save -o ./kube-controller-manager.tar <your registry>/gcr.io/google-containers/kube-controller-manager:v1.17.9
docker save -o ./kube-proxy.tar <your registry>/gcr.io/google-containers/kube-proxy:v1.17.9
docker save -o ./kube-scheduler.tar <your registry>/gcr.io/google-containers/kube-scheduler:v1.17.9
docker save -o ./pause.tar <your registry>/gcr.io/google-containers/pause:3.2
docker save -o ./coredns.tar <your registry>/gcr.io/google-containers/coredns:1.6.5
docker save -o ./etcd.tar <your registry>/gcr.io/google-containers/etcd:3.4.3-0
docker save -o ./weave-kube.tar <your registry>/docker.io/weaveworks/weave-kube:2.6.2
docker save -o ./weave-npc.tar <your registry>/docker.io/weaveworks/weave-npc:2.6.2

5. Move all tar files to a machine that has docker installed and access to your registry.

6. Load all image tar files:

docker load -i ./lima.tar
docker load -i ./kube-apiserver.tar
docker load -i ./kube-controller-manager.tar
docker load -i ./kube-proxy.tar
docker load -i ./kube-scheduler.tar
docker load -i ./pause.tar
docker load -i ./coredns.tar
docker load -i ./etcd.tar
docker load -i ./weave-kube.tar
docker load -i ./weave-npc.tar

7. Push all images into your local registry:

docker push <your registry>/registry.kubernative.net/lima:v0.2.1
docker push <your registry>/gcr.io/google-containers/kube-apiserver:v1.17.9
docker push <your registry>/gcr.io/google-containers/kube-controller-manager:v1.17.9
docker push <your registry>/gcr.io/google-containers/kube-proxy:v1.17.9
docker push <your registry>/gcr.io/google-containers/kube-scheduler:v1.17.9
docker push <your registry>/gcr.io/google-containers/pause:3.2
docker push <your registry>/gcr.io/google-containers/coredns:1.6.5
docker push <your registry>/gcr.io/google-containers/etcd:3.4.3-0
docker push <your registry>/docker.io/weaveworks/weave-kube:2.6.2
docker push <your registry>/docker.io/weaveworks/weave-npc:2.6.2

8. Download the rpm from our website http://www.kubernative.net/:

curl -O http://www.kubernative.net/images/KubeOps/kubeops-<version>.x86_64.rpm

9. Move the rpm to your admin node.

10. Install the LIMA rpm:

zypper install <path to rpm>/kubeops-<version>.x86_64.rpm

11. Check if LIMA is running:

lima

12. Set 'registry' to your local registry name when setting up a new cluster:

apiVersion: lima/clusterconfig/v1alpha1
spec:
  ...
  registry: <your registry address>
  ...
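The pull/tag/push steps above can be wrapped in one small script. This is only a sketch: the registry address is a placeholder you must adjust, and by default it prints the docker commands instead of executing them (set DRY_RUN=0 on a machine with docker to really mirror the images).

```shell
#!/bin/sh
# Sketch: mirror the images listed above into a local registry.
# DRY_RUN=1 (default) prints the commands; DRY_RUN=0 executes them.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi
}

mirror_images() {
  registry="$1"
  for img in \
    registry.kubernative.net/lima:v0.2.1 \
    gcr.io/google-containers/kube-apiserver:v1.17.9 \
    gcr.io/google-containers/kube-controller-manager:v1.17.9 \
    gcr.io/google-containers/kube-proxy:v1.17.9 \
    gcr.io/google-containers/kube-scheduler:v1.17.9 \
    gcr.io/google-containers/pause:3.2 \
    gcr.io/google-containers/coredns:1.6.5 \
    gcr.io/google-containers/etcd:3.4.3-0 \
    docker.io/weaveworks/weave-kube:2.6.2 \
    docker.io/weaveworks/weave-npc:2.6.2
  do
    run docker pull "$img"
    run docker tag "$img" "$registry/$img"
    run docker push "$registry/$img"
  done
}

# The registry address below is a placeholder (assumption); adjust it.
mirror_images myregistry.local:5000
```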
YAML file syntax
Note: You can reuse a YAML file when changing its content; the name of the file can stay the same, but you must change the specifications in the file.

Configure cluster using YAML

In order to set up a cluster you have to create a YAML file first. The YAML file contains all the configuration specifications for your cluster. This YAML file creates a single master cluster.

Note: This kind of YAML file is only used for the initial cluster setup. It must always have the apiVersion: lima/clusterconfig/<version>. To add master nodes or worker nodes use the addNode.yaml file shown below.

Below is an example showing the structure of the file createCluster.yaml. Please use only alphanumeric characters for the weave password.

Note: You can name the YAML files as you want, but it is recommended to use self-explanatory names. In this documentation, the file createCluster.yaml is only used for an initial cluster setup. It is important to know that you cannot set up another cluster with the exact same YAML file.

apiVersion: lima/clusterconfig/v1alpha1
spec:
  clusterName: example
  kubernetesVersion: 1.17.9
  registry: registry.kubernative.net
  useInsecureRegistry: true
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/24
  podSubnet: 192.168.129.0/24
  masterHost: worker1.kubernative.net OR 10.2.1.11
  systemCpu: 1000m
  systemMemory: 2Gi

To learn more about the YAML syntax and the specification of the API Objects, please see the dedicated section under 'Attachments'.

Configure node using YAML

To add a node to your cluster, you have to create a YAML file to configure your node. The YAML file contains all the configuration specifications for your node. This specific YAML file adds only one master node.

Note: This kind of YAML file is only used for adding nodes to your cluster. It must always have the apiVersion: lima/nodeconfig/<version>. To set up a cluster use the createCluster.yaml file shown above.

Below is an example showing the structure of the file addNode.yaml.

Note: It is possible to add multiple nodes at once to your cluster. This is shown in an example under the section 'Add multiple nodes to a single master cluster at once'.

apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec:
  masters:
  - host: 10.2.1.11
    user: root
    password: password
  workers: {}

The YAML file contains the specification for one master node.

To learn more about the YAML syntax and the specification of the API Objects, please see the dedicated section under 'Attachments'.
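Before running lima, the config file can be sanity-checked for its mandatory fields. The following is only a sketch: the file is written here from the createCluster.yaml example so the snippet is self-contained, and the key list follows the clusterconfig API Objects description under 'Attachments'.

```shell
#!/bin/sh
# Sketch: write the example createCluster.yaml and check its mandatory keys.
cat > createCluster.yaml <<'EOF'
apiVersion: lima/clusterconfig/v1alpha1
spec:
  clusterName: example
  kubernetesVersion: 1.17.9
  registry: registry.kubernative.net
  useInsecureRegistry: true
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/24
  podSubnet: 192.168.129.0/24
  masterHost: 10.2.1.11
  systemCpu: 1000m
  systemMemory: 2Gi
EOF

# Warn about any mandatory field that is missing.
for key in clusterName kubernetesVersion useInsecureRegistry apiEndpoint \
           serviceSubnet podSubnet masterHost systemCpu systemMemory; do
  grep -q "$key:" createCluster.yaml || echo "missing: $key"
done
```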
How to use Certificates

Instead of using a password you can use certificates. https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys-on-centos7

How to use LIMA

Note: The create command is used both for creating a cluster and for adding nodes to the cluster.

Set up a single node cluster for testing

Important: This node is only suitable as an example installation or for testing. To use this node for production workloads, first remove the taint from the master node.

First create a cluster config YAML file. We are now using the createCluster.yaml file from above.

Run the create command on the admin node to create a cluster with one node.

lima create -f createCluster.yaml

Note: Now you have set up a regular single master cluster. To use this master node also as a worker node for testing production workloads, you have to remove the taint of the master node.

Now remove the taint:

kubectl taint nodes --all node-role.kubernetes.io/master-

Use case diagram

To learn more about taints, please see the Attachments section under taints and tolerations.

Set up a single master cluster

Note: This node is not suitable for production workloads. Please add another worker node as shown below for production workloads.

First create a cluster config YAML file. We are now using the createCluster.yaml file from above.

Run the create command on the admin node to create a cluster with one cluster master.

lima create -f createCluster.yaml

Use case diagram

Add node to a single master cluster

Note: Only worker nodes that are added with addNode.yaml are suitable for production workloads.

Now create a config YAML file. We are now using the addNode.yaml file with the specification for a master node.

Run the create command on the admin node to add the new master node to your cluster.

lima create -f addNode.yaml

Use case diagram

Add multiple nodes to a single master cluster at once

It is possible to add multiple nodes to your cluster at once. You do that by listing your desired nodes in the spec section of the addNode.yaml file.

We are now creating a new file addNode1.yaml with the desired nodes. Keep in mind that there are two types of nodes: master nodes and worker nodes. Put each desired node in its specific spec category.

Note: You can reuse the previous YAML file addNode.yaml when changing the content of the file. For this example we are using a new file. The file name can be the same every time, but the content must contain different nodes in order to work.

apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec:
  masters:
  - host: 10.2.1.7
    user: root
    password: password
  - host: 10.2.1.13
    user: root
    password: password
  - host: master1.kubernative.net
  workers:
  - host: 10.2.1.12
    user: root
    password: password
  - host: 10.2.1.9
    user: root
    password: password

The YAML file now contains the configuration for 3 new master and 2 new worker nodes.

Note: For more information about the YAML syntax and the specification of the API Objects, please see the dedicated section under 'Attachments'.

Now run the create command on the admin node and add the nodes to your cluster.

lima create -f addNode1.yaml

Use case diagram

Your cluster now has a total of:

- 5 master nodes (1 cluster master from the initial setup, 1 single added master, 3 newly added masters)
- 2 worker nodes (the 2 newly added ones)
Delete nodes from the kubernetes cluster

If you want to remove a node from your cluster you can run the delete command on the admin node.

lima delete -n <node which should be deleted> <name of your cluster>

So now we delete worker node 2 from our existing kubernetes cluster named example with the following command:

lima delete -n 10.2.1.9 example

This is how our cluster looks now:

Use case diagram

Show the version of LIMA

Run the version command on the admin node to check the current version of LIMA.

lima version

Attachments

Changes by KubeOps

Below you can see the KubeOps process and its changes to the system. The processes and changes are listed separately for each node.

Note: Only the changes from using the YAML file createCluster.yaml are shown.

On master node when setting up a cluster

1. sshkeyscan on all hosts listed in the production.yaml file.
2. firewallCheck Collect installed services and packages. No changed files.
3. firewallInstall Installs iptables rpm if necessary and starts firewalld/iptables or none.
4. openPortsCluster (Skipped if ignoreFirewallError==true and no firewall installed/recognised/mentioned) 4.1 Opens master_ports: 2379-2380/tcp, 6443/tcp, 6784/tcp, 9153/tcp, 10250/tcp, 10251/tcp, 10252/tcp and weave_net_ports: 6783/tcp, 6783-6784/udp. 4.2 Reloads the firewall if firewalld is used / persists iptables firewall changes through reboot
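Step 4 above, written out as explicit firewall-cmd calls, looks roughly like the following sketch. It is a dry run by default (the commands are printed, not executed); set RUN= on a node running firewalld to apply them. The port list is copied from step 4.

```shell
#!/bin/sh
# Sketch: open the master and weave-net ports from step 4 via firewalld.
# RUN=echo (default) only prints the commands; set RUN= to execute them.
RUN="${RUN:-echo}"
for port in 2379-2380/tcp 6443/tcp 6784/tcp 9153/tcp 10250/tcp 10251/tcp 10252/tcp \
            6783/tcp 6783-6784/udp; do
  $RUN firewall-cmd --permanent --add-port="$port"
done
# Reload so the permanent rules take effect.
$RUN firewall-cmd --reload
```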
5. systemSwap 5.1 Disable swap. System reboots after disabling swap.
6. docker 6.1 Install docker and dependencies. 6.2 Enable and start docker service.
7. dockerInsecureRegistry 7.1 Check if /etc/docker/daemon.json exists. 7.2 If not: create it. 7.2.1 Write an empty json ({}) into it. 7.3 Append "insecure-registries": ["<registry>:<registryPort>"] to daemon.json. 7.4 Restart docker.
8. systemNetBridge 8.1 modprobe br_netfilter && systemctl daemon-reload. 8.2 Enable ipv4 forwarding. 8.3 Enable netfilter on bridge.
9. kubeadmKubeletKubectl 9.1 Install all required rpms. 9.2 Enable and start service kubelet.
10. kubernetesCluster 10.1 Create kubeadmconfig from template. 10.2 Create k8s user config directory. 10.3 Copy admin config to /root/.kube/config .
Note: Only the changes when the YAML files addNode.yaml and addNode1.yaml are used for adding nodes are shown.

On clustermaster node when node is added to the cluster

1. kubernetesCreateToken The clustermaster creates a join token with the following command:

kubeadm token create --print-join-command

2. getCert 2.1 Upload certs and get certificate key. 2.2 Writes the output of kubeadm token create --print-join-command --control-plane --certificate-key <cert_key> into a variable.

On master nodes which will be joined to the cluster

1. sshkeyscan on all hosts listed in the production.yaml file.
2. firewallCheck Collect installed services and packages. No changed files.
3. firewallInstall Installs iptables rpm if necessary and starts firewalld/iptables or none.
4. openPortsMaster (Skipped if ignoreFirewallError==true and no firewall installed/recognised/mentioned) 4.1 Opens master_ports: 2379-2380/tcp, 6443/tcp, 6784/tcp, 9153/tcp, 10250/tcp, 10251/tcp, 10252/tcp and weave_net_ports: 6783/tcp, 6783-6784/udp. 4.2 Reloads the firewall if firewalld is used / persists iptables firewall changes through reboot.
5. systemSwap 5.1 Disable swap. System reboots after disabling swap.
6. docker 6.1 Install docker and dependencies. 6.2 Enable and start docker service.
7. dockerInsecureRegistry 7.1 Check if /etc/docker/daemon.json exists. 7.2 If not: create it. 7.2.1 Write an empty json ({}) into it. 7.3 Append "insecure-registries": ["<registry>:<registryPort>"] to daemon.json. 7.4 Restart docker.
8. systemNetBridge 8.1 modprobe br_netfilter && systemctl daemon-reload. 8.2 Enable ipv4 forwarding. 8.3 Enable netfilter on bridge.
9. kubeadmKubeletKubectl 9.1 Install all required rpms. 9.2 Enable and start service kubelet.
10. kubernetesJoinNode Master node joins with token generated by cluster.
On Kubernetes workers

1. sshkeyscan on all hosts listed in the production.yaml file.
2. firewallCheck Collect installed services and packages. No changed files.
3. firewallInstall Installs iptables rpm if necessary and starts firewalld/iptables or none.
4. openPortsWorker (Skipped if ignoreFirewallError==true and no firewall installed/recognised/mentioned) 4.1 Opens worker_ports: 10250/tcp, 30000-32767/tcp and weave_net_ports: 6783/tcp, 6783-6784/udp. 4.2 Reloads the firewall if firewalld is used / persists iptables firewall changes through reboot.
5. systemSwap 5.1 Disable swap. System reboots after disabling swap.
6. docker 6.1 Install docker and dependencies. 6.2 Enable and start docker service.
7. dockerInsecureRegistry 7.1 Check if /etc/docker/daemon.json exists. 7.2 If not: create it. 7.2.1 Write an empty json ({}) into it. 7.3 Append "insecure-registries": ["<registry>:<registryPort>"] to daemon.json. 7.4 Restart docker.
8. systemNetBridge 8.1 modprobe br_netfilter && systemctl daemon-reload. 8.2 Enable ipv4 forwarding. 8.3 Enable netfilter on bridge.
9. kubeadmKubeletKubectl 9.1 Install all required rpms. 9.2 Enable and start service kubelet.
10. kubernetesJoinNode Worker node joins with token generated by cluster.
Installed packages and versions

Kubernetes version 1.16.13

conntrack-tools 1.4.4
cri-tools 1.18.0
kubeadm 1.16.13
kubectl 1.16.13
kubelet 1.16.13
kubernetes-cni 0.8.6
libnetfilter_cthelper 1.0.0
libnetfilter_cttimeout 1.0.0
libnetfilter_queue 1.0.2
socat 1.7.3.2

Kubernetes version 1.17.9

conntrack-tools 1.4.4
cri-tools 1.18.0
kubeadm 1.17.9
kubectl 1.17.9
kubelet 1.17.9
kubernetes-cni 0.8.6
libnetfilter_conntrack 1.0.6
libnetfilter_cthelper 1.0.0
libnetfilter_cttimeout 1.0.0
libnetfilter_queue 1.0.2
socat 1.7.3.2

Kubernetes version 1.18.6

conntrack-tools 1.4.4
cri-tools 1.18.0
kubeadm 1.18.6
kubectl 1.18.6
kubelet 1.18.6
kubernetes-cni 0.8.6
libnetfilter_cthelper 1.0.0
libnetfilter_cttimeout 1.0.0
libnetfilter_queue 1.0.2
socat 1.7.3.2

Firewall

iptables 1.4.21

Docker Dependencies SUSE

audit-libs-python 2.8.5
checkpolicy 2.5
container-selinux 2.119.1
containerd.io 1.2.13
docker-ce 18.09.1
docker-ce-cli 18.09.1
libcgroup 0.41
libseccomp 2.3.1
libsemanage-python 2.5
libtool-ltdl 2.4.2
policycoreutils 2.5
policycoreutils-python 2.5
python-IPy 0.75
setools-libs 3.3.8

Docker Dependencies SUSE

catatonit 0.1.3
containerd 1.2.10
criu 3.8.1
docker 19.03.5
docker-bash-completion 19.03.5
docker-compose 1.17.0
docker-libnetwork 0.7.0.1
docker-runc 1.0.0
git-core 2.26.2
libnet9 1.2
libprotobuf-c1 1.3.0
libpython2_7-1_0 2.7.17
libsha1detectcoll1 1.0.3
perl-Error 0.17025
python 2.7.17
python-base 2.7.17
python-dockerpty 0.4.1
python-enum34 1.1.6
python-functools32 3.2.3.2
python-ipaddress 1.0.18
python-xml 2.7.17
python2-appdirs 1.4.3
python2-asn1crypto 0.24.0
python2-backports 4.0.0
python2-backports.ssl_match_hostname 3.5.0.1
python2-cached-property 1.3.0
python2-certifi 2018.1.18
python2-cffi 1.11.2
python2-chardet 3.0.4
python2-cryptography 2.1.4
python2-docker 2.6.1
python2-docker-pycreds 0.2.1
python2-docopt 0.6.2
python2-idna 2.6
python2-ipaddr 2.1.11
python2-jsonschema 2.6.0
python2-ndg-httpsclient 0.4.0
python2-packaging 16.8
python2-protobuf 3.5.0
python2-py 1.8.1
python2-pyasn1 0.4.2
python2-pycparser 2.17
python2-pyOpenSSL 17.5.0
python2-pyparsing 2.2.0
python2-PySocks 1.6.8
python2-PyYAML 5.1.2
python2-requests 2.20.1
python2-setuptools 40.5.0
python2-six 1.11.0
python2-texttable 1.1.1
python2-urllib3 1.24
python2-websocket-client 0.44.0

Docker

docker-ce 18.09.9

Product environment considerations

You need:

- DNS server (optional)
- Persistent storage (NFS) (optional)
- Internet access for the Kubernative registry
- Your own local registry on an AirGap environment

Recommended architecture
Failure tolerance

Having multiple master nodes ensures that services remain available should master nodes fail. In order to guarantee availability of master nodes, they should be deployed in odd numbers (e.g. 3, 5, 7, 9 etc.). An odd-sized cluster tolerates the same number of failures as an even-sized cluster but with fewer nodes. The difference can be seen by comparing even- and odd-sized clusters:

Cluster Size   Majority   Failure Tolerance
1              1          0
2              2          0
3              2          1
4              3          1
5              3          2
6              4          2
7              4          3
8              5          3
9              5          4

Adding a member to bring the size of the cluster up to an even number doesn't buy additional fault tolerance. Likewise, during a network partition, an odd number of members guarantees that there will always be a majority partition that can continue to operate and be the source of truth when the partition ends.
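The majority column follows a simple rule: a cluster of n members needs floor(n/2)+1 members for a majority, and tolerates losing the rest. A quick arithmetic sketch:

```shell
#!/bin/sh
# Sketch: compute majority and failure tolerance for a cluster of n members.
n=5
majority=$(( n / 2 + 1 ))        # floor(n/2) + 1
tolerance=$(( n - majority ))    # members that may fail while keeping quorum
echo "size=$n majority=$majority tolerance=$tolerance"
```

For n=5 this prints size=5 majority=3 tolerance=2, matching the table above.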
Installation scenarios

1. Install from registry
2. Install on AirGap environment

Note: Please see under How to install KubeOps for the instructions.
Kubernetes Networking

Pod and service subnet

In Kubernetes, every pod has its own routable IP address. Kubernetes networking, through the network plug-in that is required to be installed (e.g. Weave), takes care of routing all requests internally between hosts to the appropriate pod. External access is provided through a service or load balancer which Kubernetes routes to the appropriate pod.

The pod subnet is set by the user in the createCluster.yaml file. Every pod has its own IP address. The pod subnet range has to be big enough to fit all of the pods. Due to the design of Kubernetes, where pods will change or nodes reboot, services were built into Kubernetes to address the problem of changing IP addresses.

In Kubernetes, a Service is an abstraction which defines a logical set of pods and a policy by which to access them (sometimes this pattern is called a micro-service). A Kubernetes service manages the state of a set of pods. The service is an abstraction of pods which assigns a virtual IP address over a set of pod IP addresses.

Note: The pod subnet and the service subnet must have different IP ranges.

Persistent Storage

Please see the link below to learn more about persistent storage.

https://docs.openshift.com/container-platform/4.4/storage/understanding-persistent-storage.html#understanding-persistent-storage

Cluster Storage

The state of each cluster is saved under the path in the environment variable KUBEOPSROOT and represents which nodes are joined with the cluster master. In KUBEOPSROOT the clusterName is used as a folder name which includes a clusterStorage.yaml. The clusterStorage.yaml holds all the information about the cluster.

The structure of the clusterStorage.yaml file looks like this:

apiVersion: lima/clusterconfig/v1alpha1
config:
  clusterName: example
  kubernetesVersion: 1.17.9
  registry: registry.kubernative.net
  useInsecureRegistry: true
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/24
  podSubnet: 192.168.129.0/24
  clusterMaster: master1.kubernative.net OR 10.2.1.11
  systemCpu: 1000m
  systemMemory: 2Gi
  nodes:
    masters:
      master1.kubernative.net: {}
      10.2.1.50: {}
      10.2.1.51: {}
    workers:
      worker1.kubernative.net: {}
      worker2.kubernative.net: {}
      10.2.1.54: {}
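Given that layout, the clusters known to the admin node can be listed by scanning KUBEOPSROOT for folders containing a clusterStorage.yaml. This is only a sketch: the folder and an empty file are created here to make the example self-contained, and the fallback path is an assumption.

```shell
#!/bin/sh
# Sketch: list clusters stored under $KUBEOPSROOT.
# Fallback path and the demo folder/file are assumptions for illustration.
KUBEOPSROOT="${KUBEOPSROOT:-$PWD/kubeopsroot}"
mkdir -p "$KUBEOPSROOT/example"
: > "$KUBEOPSROOT/example/clusterStorage.yaml"   # placeholder state file

for dir in "$KUBEOPSROOT"/*/; do
  name=$(basename "$dir")
  # Only folders with a clusterStorage.yaml are cluster state folders.
  if [ -f "$dir/clusterStorage.yaml" ]; then
    echo "cluster: $name"
  fi
done
```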
Taints and tolerations

Taints allow a node to reject a set of pods.

Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.

Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.

Example: The master node has a taint that prevents it from being used for production workloads. So first remove the taint in order to use the master for production workloads.

Important: Removing the taint from a master node is only recommended for testing!

Further explanation of YAML file syntax

clusterconfig API Objects

Structure of the clusterconfig API Object with version lima/clusterconfig/v1alpha1.

An example showing the structure of the file createCluster.yaml:

apiVersion: lima/clusterconfig/v1alpha1
spec:
  clusterName: example
  kubernetesVersion: 1.17.9
  registry: registry.kubernative.net
  useInsecureRegistry: true
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/24
  podSubnet: 192.168.129.0/24
  masterHost: worker1.kubernative.net OR 10.2.1.11
  systemCpu: 1000m
  systemMemory: 2Gi

apiVersion: String which defines the format of the API Object. The only currently supported version is:

apiVersion: lima/clusterconfig/v1alpha1

clusterName: Mandatory Name used to address the cluster. Should consist of only uppercase/lowercase letters, numbers and underscores. Example:

clusterName: example

kubernetesVersion: Mandatory Defines the Kubernetes version to be installed. This value must follow the Kubernetes version convention. The valid format is '#.#.#' and accepts numbers between 0 and 9 for '#'. Currently the versions 1.16.13, 1.17.9 and 1.18.6 are supported. Example:

kubernetesVersion: 1.17.9
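The kubernetesVersion rules can be checked with a tiny helper before writing the file; a sketch assuming exactly the three supported releases named in this section:

```shell
#!/bin/sh
# Sketch: accept only the currently supported Kubernetes releases.
is_supported_version() {
  case "$1" in
    1.16.13|1.17.9|1.18.6) return 0 ;;  # supported releases per this doc
    *) return 1 ;;
  esac
}

is_supported_version "1.17.9" && echo "supported"
is_supported_version "1.19.0" || echo "unsupported"
```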
registry: Optional Address of the registry where the kubernetes images are stored. This value has to be a valid IP address or valid DNS name. Example:
registry: 10.2.1.12
Default:
registry: registry.kubernative.net
useInsecureRegistry: Mandatory Value defines if the used registry is a secure or insecure registry. This value can be either true or false. Only lowercase letters are allowed for 'true' and 'false'. Example:
useInsecureRegistry: true
debug: Optional Value defines if the user wants to see output from the pipe or not. This value can be either true or false. Only lowercase letters are allowed for 'true' and 'false'. Example:
debug: true
Default:
debug: false
systemCpu: Mandatory The CPU value to be used in a cluster setup. If this field remains empty, 1000m is used. The value range is from 0.001 to 0.9 or 1m to 50000m. Example:
systemCpu: 500m
Default:
systemCpu: 1000m
systemMemory: Mandatory The memory value to be used in a cluster setup. If this field remains empty, 2Gi is used. Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. Example:
systemMemory: 1Gi
Default:
systemMemory: 2Gi
ignoreFirewallError: Optional Value defines if firewall errors are ignored or not. This value can be either true or false. Only lowercase letters are allowed for 'true' and 'false'. Example:
ignoreFirewallError: true
Default:
ignoreFirewallError: false
firewall: Optional Value defines which firewall is used. This value can be either iptables or firewalld. Only lowercase letters are allowed for 'iptables' and 'firewalld'. Example:
firewall: iptables
apiEndpoint: Mandatory The IP address and port where the apiserver can be reached. This value consists of an IP address followed by a colon and a port. Usually the IP address of a clustermaster or load balancer is used. Example:
apiEndpoint: 10.2.1.11:6443
serviceSubnet: Mandatory Defines the subnet to be used for the services within kubernetes. This subnet has to be given in CIDR format, but is not checked for validity. Example:
serviceSubnet: 192.168.128.0/24
podSubnet: Mandatory Defines the subnet used by the pods within kubernetes. This subnet has to be given in CIDR format. Example:
podSubnet: 192.168.129.0/24
Note: The podsubnet and the servicesubnet must have different IP ranges.
weavePassword: Optional Password used to encrypt the weave overlay network. Requirements for the password:

- Password length must be between 9 and 128 characters
- Only case-sensitive alphanumeric characters and underscores are allowed
Example:
weavePassword: password1_
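The password rules above can be checked before writing the file. This is only a sketch; it assumes the 9 and 128 character limits are inclusive.

```shell
#!/bin/sh
# Sketch: validate a weave password (9-128 chars, alphanumerics and underscores only).
valid_weave_password() {
  pw="$1"
  len=${#pw}
  [ "$len" -ge 9 ] && [ "$len" -le 128 ] || return 1
  case "$pw" in
    *[!A-Za-z0-9_]*) return 1 ;;   # reject any other character
  esac
  return 0
}

valid_weave_password "password1_" && echo "ok"
valid_weave_password "short" || echo "rejected"
```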
weaveIpRange: Optional Defines the subnet used by the weave plugin. This subnet has to be given in CIDR format. Example:
weaveIpRange: 172.30.0.0/16
Default:
weaveIpRange: 10.32.0.0/12
masterHost: Mandatory Name of the node to be installed as the first master. This value can be either a correct FQDN or a specific IP Address. Example:
masterHost: 10.2.1.11
masterUser: Optional Specification of the user to be used to connect to the node. If this field is left empty, 'root' will be used to run the ansible scripts on the cluster master. Should consist of only uppercase/lowercase letters, numbers and underscores. Example:
masterUser: root
Default:
masterUser: root
masterPassword: Optional Specification of the user password to be used to connect to the node. Requirements for the password:

- Password length must be between 1 and 128 characters
- Allowed: alphanumeric characters and the following symbols: '_!?-^@#$%*&():.,;<>'
Example:
masterPassword: password
Note: If you are not using 'masterPassword' you need to use certificates.
nodeconfig API Objects

An example showing the structure of the file addNode.yaml:

apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec:
  masters:
  - host: 10.2.1.11
    user: root
    password: admin123456
  - host: master1.kubernative.net
  workers:
  - host: 10.2.1.12
    user: root
    password: admin123456
  - host: worker1.kubernative.net

masters: Optional A list of all master nodes in the nodelist. Each node must have a hostname. The user and password are optional.

An example showing the structure of the file addNode.yaml. The cutout shows the structure of the masters spec Object:

apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec:
  masters:
  - host: 10.2.1.11
    user: root
    password: admin123456
  - host: master.kubernative.net

host: Mandatory Each host has a unique identifier. The hostname can be either a specific IP address or a correct FQDN. Example:

host: 10.2.1.11

or

host: master.kubernative.net
user: Optional Specification of the user to be used to connect to the node. Example:
user: root
Default:
user: root
password: Optional Specification of the password to be used to connect to the node. Example:
password: admin123456
workers Optional A list of all worker nodes in the nodelist. Each node must have a hostname. The user and password are optional.
An example showing the structure of the file addNode.yaml. The cutout shows the structure of the workers spec Object:

apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec:
  workers:
  - host: 10.2.1.12
    user: root
    password: admin123456
  - host: worker.kubernative.net
host: Mandatory Each host has a unique identifier. The hostname can be either a specific IP Address OR a correct FQDN. Example:
host: 10.2.1.12
or
host: worker.kubernative.net
user: Optional Specification of the user to be used to connect to the node. Example:
user: root
Default:
user: root
password: Optional Specification of the password to be used to connect to the node. Example:
password: admin123456
Linkpage

Kubernative Homepage http://www.kubernative.net/de/

Kubernetes API reference https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/

CNCF Landscape https://landscape.cncf.io

Known Issues

- Make sure that the Kubernative registry is available when using LIMA.
- If you are not using the parameter masterPassword, you have to set up your cluster over SSH. In this scenario you have to authenticate your machines with your admin node over public keys (ssh-keygen).
- If you are setting up a cluster as a user (not root), you have to give the user wheel group permissions to execute commands without sudo.
- If you remove a node from your cluster, be aware that the node is not reset:
  - kubectl, kubeadm and kubelet are still installed
  - the weave directories are not deleted

Furthermore, pay attention to the following rules:

- Only remove a master node if you have at least one worker node in your cluster.
- Be sure to have an uneven number of master nodes.