Installation

Kubernetes, as of April 2019, can be installed in more than 40 different ways[1]. In particular, it can be installed from your Linux distribution's packages or from the Kubernetes upstream releases. It is also possible to use one of the managed Kubernetes solutions offered by cloud computing providers, such as EKS on AWS, Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP), or GKE On-Prem[2], as well as CI/CD tools like Jenkins X and GitLab[3] that support integration with different Kubernetes cloud providers.
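For example, on AWS a managed EKS cluster can be created with a single eksctl command (the cluster name, region and node count below are placeholders for illustration only):
eksctl create cluster --name my-cluster --region us-east-1 --nodes 2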

Install Kubernetes on Debian/Ubuntu using upstream[4]

  • The first step is to download and add the signing keys for the Docker and Kubernetes packages. Back at the terminal, issue the following commands.
  • Add the Docker repository on all your servers:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
 $(lsb_release -cs) \
 stable"
  • Add the Kubernetes repository to your apt sources.list on all your servers:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.12.2-00 kubeadm=1.12.2-00 kubectl=1.12.2-00
sudo apt-mark hold docker-ce kubelet kubeadm kubectl
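  • Optionally, confirm the versions that were installed (a quick sanity check; the output will vary with the versions you pinned):
kubeadm version
kubectl version --client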

Initialize your master node

  • Enable net.bridge.bridge-nf-call-iptables on all your nodes.
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
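  • On some systems the br_netfilter kernel module must be loaded before this sysctl key exists; if sysctl reports a missing key, load the module first (an additional step, not part of the original instructions):
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf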
  • On the Kube Master server only, initialize the cluster and configure kubectl.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

When this completes, you will be presented with the exact command you need to join the worker nodes to the master. If you make a mistake and want to undo your changes, you can use the kubeadm reset[6] command.
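If you lose the join command, it can be regenerated on the master (the --print-join-command flag is available in recent kubeadm releases, including the version used here):
sudo kubeadm token create --print-join-command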

  • Before you join a node, configure kubectl on the Kube Master server by issuing the following commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
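  • At this point kubectl on the master should be able to reach the API server; an optional quick check:
kubectl cluster-info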
  • Install the flannel networking plugin in the cluster by running this command on the Kube Master server.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
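  • Once applied, the flannel and other system pods should come up in the kube-system namespace (pod names in your cluster will differ):
kubectl get pods -n kube-system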
  • The kubeadm init command that you ran on the master should have produced a kubeadm join command containing a token and a hash. Copy that command from the master and run it on both worker nodes with sudo.
sudo kubeadm join $controller_private_ip:6443 --token $token --discovery-token-ca-cert-hash $hash
  • Now you are ready to verify that the cluster is up and running. On the Kube Master server, check the list of nodes.
kubectl get nodes
NAME                      STATUS   ROLES    AGE   VERSION
wboyd1c.mylabserver.com   Ready    master   54m   v1.12.2
wboyd2c.mylabserver.com   Ready    <none>   49m   v1.12.2
wboyd3c.mylabserver.com   Ready    <none>   49m   v1.12.2

Clustering and Nodes

Kubernetes implements a clustered architecture. In a typical production environment, you will have multiple servers that are able to run your workloads (containers). These servers, which actually run the containers, are called nodes. A Kubernetes cluster has one or more control servers which manage and control the cluster and host the Kubernetes API. These control servers are usually separate from the worker nodes, which run applications within the cluster.

  • Get a list of nodes:
kubectl get nodes
  • Get more information about a specific node:
kubectl describe node $node_name
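  • To list only the control plane node(s), you can filter by the role label (the label name depends on the Kubernetes version; clusters of this vintage use node-role.kubernetes.io/master, newer ones use node-role.kubernetes.io/control-plane):
kubectl get nodes -l node-role.kubernetes.io/master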

Networking in Kubernetes

Activities

  • CKA v1.18: Install Kubernetes master and nodes

Related terms

  • eksctl create cluster
  • Deploy EKS cluster using Terraform
  • Deploy GKE cluster using Terraform
  • kubeadm
  • kubeadm init
  • kubectl version --short

See also
