== Installation ==
 
[[Kubernetes]] as of April 2019 can be installed in more than 40 different ways<ref>https://linuxacademy.com/blog/linux-academy/top-ten-ways-not-to-sink-the-kubernetes-ship/?utm_source=intercom&utm_medium=email&utm_campaign=AprilNewsletter2019</ref>; in particular, it can be installed using your Linux distribution packages or using the Kubernetes [[upstream]] version.
 
It is also possible to use any of the managed Kubernetes solutions offered by cloud computing providers, like [[EKS]] from AWS, [[Google Kubernetes Engine]] (GKE) in [[Google Cloud Platform]] (GCP) or GKE on-prem<ref>https://cloud.google.com/gke-on-prem/</ref>, or CI/CD tools like [[Jenkins X]] and [[GitLab]]<ref>https://about.gitlab.com/solutions/kubernetes/</ref> that support integration with different Kubernetes cloud providers.
  
=== Install Kubernetes on Debian/Ubuntu using upstream<ref>https://www.techrepublic.com/article/how-to-quickly-install-kubernetes-on-ubuntu/</ref> ===
  
 
* Our first step is to download and add the key for the '''Kubernetes and docker''' install. Back at the terminal, issue the following command:
 
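A minimal sketch of this step, assuming the standard Docker apt repository and the upstream <code>kubernetes-xenial</code> channel (check the referenced guide for the exact commands used there):
<pre># Add the Docker GPG key and apt repository (assumed; adjust for your Ubuntu release)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Add the Kubernetes GPG key and apt repository (assumed; standard upstream instructions of that time)
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF</pre>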
  
* And now, '''install Docker, [[kubeadm]], [[kubelet]]<ref>https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/</ref>, and [[kubectl]]''' on all your servers.
<pre>sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.12.2-00 kubeadm=1.12.2-00 kubectl=1.12.2-00
sudo apt-mark hold docker-ce kubelet kubeadm kubectl</pre>
  
===Initialize your [[master node]]===
  
 
* Enable '''net.bridge.bridge-nf-call-iptables''' on all your nodes.
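A minimal sketch of one way to do this on each node, assuming you want the setting to persist in <code>/etc/sysctl.conf</code>:
<pre># Persist the bridge-nf-call-iptables setting and apply it immediately
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p</pre>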
  
 
* On only '''the Kube Master server''', initialize the cluster and configure '''kubectl'''.
<code>sudo [[kubeadm init]] --pod-network-cidr=10.244.0.0/16</code>
  
 
When this completes, you'll be presented with the exact command you need to join the nodes to the master.
If you make a mistake and want to undo your changes, you can use the <code>[[kubeadm reset]]</code><ref>https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/</ref> command.
  
 
* Before you join a node, you need to issue the following commands:
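These are typically the kubeconfig setup commands that <code>kubeadm init</code> prints when it finishes (shown here as a sketch; use the exact commands from your own output):
<pre># Copy the admin kubeconfig so your regular user can run kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config</pre>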
  
 
*Install '''the flannel networking''' plugin in the cluster by running this command '''on the Kube Master''' server.
<pre>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml</pre>
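Optionally, as a quick sanity check (not part of the referenced guide), verify that the flannel and other system pods reach the Running state:
<pre>kubectl get pods -n kube-system</pre>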
  
* The <code>[[kubeadm init]]</code> command that you ran on the master should output a <code>kubeadm join</code> command containing a '''token and hash'''. You will need to copy that command '''from the master''' and run it on both worker nodes with '''sudo'''.
 
<pre>sudo kubeadm join $controller_private_ip:6443 --token $token --discovery-token-ca-cert-hash $hash</pre>
  
* Verify on the master that the nodes have joined the cluster and reached the '''Ready''' state:
<pre>kubectl get nodes

NAME                     STATUS   ROLES    AGE   VERSION
...
wboyd3c.mylabserver.com  Ready    <none>   49m   v1.12.2
</pre>
===Containers and Pods===

'''Pods'''<ref>https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/</ref> are the smallest and most basic building blocks of the Kubernetes model.
A pod consists of one or more containers, storage resources, and a unique IP address in the Kubernetes cluster network.

In order to run containers, Kubernetes '''schedules''' pods to run on servers in the cluster. When a pod is scheduled, the server runs the containers that are part of that pod.

Create a simple pod running an nginx container; for more configuration options, check the official Kubernetes Pod documentation<ref>https://kubernetes.io/docs/tasks/configure-pod-container/</ref>:
* Create a basic Pod definition file with your container image: <code>mypod.yml</code>
<pre>
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
</pre>
* Create the Pod: <code>kubectl create -f mypod.yml</code>

*Get a list of pods and verify that your new nginx pod is in the Running state:
<pre>kubectl get pods</pre>

*Get more information about your nginx pod:
<pre>kubectl describe pod nginx</pre>

*Delete the pod:
<pre>kubectl delete pod nginx</pre>

See also the ReplicaSet<ref>https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/</ref> concept.
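As an illustration only (the name and replica count below are assumptions, not taken from the referenced documentation), a minimal ReplicaSet that keeps three copies of the nginx pod running could look like this:
<pre>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset      # illustrative name
spec:
  replicas: 3                 # desired number of identical pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx            # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx
</pre>
You could create it with <code>kubectl create -f replicaset.yml</code> and list it with <code>kubectl get replicaset</code>.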
  
 
===Clustering and Nodes===
 
Kubernetes implements a clustered architecture. In a typical production environment, you will have multiple servers that are able to run your workloads (containers). These servers, which actually run the containers, are called '''nodes'''.
A Kubernetes cluster has one or more '''control servers''', which manage and control the cluster and host the '''[[Kubernetes API]]'''. These control servers are usually separate from the worker nodes, which run applications within the cluster.
  
*Get a list of nodes:
<pre>kubectl get nodes</pre>
 
  
 
*Get more information about a specific node:
<pre>kubectl describe node $node_name</pre>
===Networking in Kubernetes===

The Kubernetes networking model involves creating a '''virtual network''' across the whole cluster. This means that every pod on the cluster has a unique IP address and can communicate with any other pod in the cluster, even if that other pod is running on a different node.

Kubernetes supports a variety of networking plugins that implement this model in various ways. One of the most popular and easy-to-use<ref>https://linuxacademy.com/blog/linux-academy/top-ten-ways-not-to-sink-the-kubernetes-ship/?utm_source=intercom&utm_medium=email&utm_campaign=AprilNewsletter2019</ref> is '''Flannel''', although as of April 2019 it does not support network policies.

*Create a deployment with two nginx pods:
<pre>cat << EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
EOF</pre>
  
*Create a busybox pod to use for testing:
<pre>cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: radial/busyboxplus:curl
    args:
    - sleep
    - "1000"
EOF</pre>
  
*Get the IP addresses of your pods:
<pre>kubectl get pods -o wide</pre>

*Get the IP address of one of the nginx pods, then contact that nginx pod from the busybox pod using the nginx pod's IP address:
<pre>kubectl exec busybox -- curl $nginx_pod_ip</pre>

== Activities ==
* [[CKA v1.18]]: [[Install Kubernetes master and nodes]]

== Related terms ==
* [[Kubernetes (snap install)]]: <code>[[juju deploy charmed-kubernetes]]</code>
* <code>[[eksctl create cluster]]</code>
* [[Deploy EKS cluster using Terraform]]
* [[Deploy GKE cluster using Terraform]]
* <code>[[kubeadm init]]</code>
* <code>[[kubectl version --short]]</code>
  
 
== See also ==
* {{kubectl}}
* {{K8s}}
* {{K8s installation}}
 
  
  
