 
{{lc}}
TOMERGE: [[Kubernetes node events]]

* <code>[[kubectl get]] events</code>
* <code>[[kubectl get events --help]]</code>
* <code>[[kubectl get events -A]]</code>
* <code>[[kubectl get events -A]] | grep [[Warning]]</code>
* <code>[[kubectl get events -A -o wide]] | grep [[Warning]]</code>
* <code>[[kubectl get events -o wide]]</code>
* <code>kubectl get events -o yaml</code>
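The listings above are one-shot snapshots; with kubectl's standard watch flags the same command can stream events as they occur (assumes a configured cluster and context):

```shell
# Print current events, then keep streaming new ones (Ctrl-C to stop)
kubectl get events -A --watch

# Stream only events created after the command starts
kubectl get events -A --watch-only
```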
 
  
 
* <code>[[kubectl get]] events --sort-by=.metadata.creationTimestamp</code>
* <code>[[kubectl get]] events --sort-by=.metadata.creationTimestamp -A</code>
* <code>[[kubectl get]] events --sort-by='.lastTimestamp'</code>
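Sorted output pairs naturally with <code>tail</code> to surface only the most recent activity (a sketch; assumes a working kubectl context):

```shell
# 10 most recently created events across all namespaces, newest at the bottom
kubectl get events -A --sort-by=.metadata.creationTimestamp | tail -n 10
```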
 
* <code>[[kubectl -n gitlab-runner get events --field-selector type!=Normal]]</code>

 kubectl get events -A | grep [[Warning]]
 kubectl get events -A | grep [[Warning]] | egrep "[[FailedMount]]|[[FailedAttachVolume]]|[[Unhealthy]]|[[ClusterUnhealthy]]|[[FailedScheduling]]"
 [[kubectl get events -A]] | grep [[Normal]]
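<code>--field-selector</code> filters server-side and can replace most of the grep pipelines above; selectors can be combined with commas (the namespace and pod name below are placeholders):

```shell
# Warning events only, cluster-wide
kubectl get events -A --field-selector type=Warning

# Events for a single object, e.g. a pod called my-pod in namespace my-ns
kubectl get events -n my-ns \
  --field-selector involvedObject.kind=Pod,involvedObject.name=my-pod
```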
 
  
 
== Events examples ==
* [[Normal]], [[Warning]], [[Critical]]
 
 
 
=== Warning ===
[[BackoffLimitExceeded]]

[[CalculateExpectedPodCountFailed]]

[[ClusterUnhealthy]]

[[FailedMount]]

[[FailedScheduling]]

[[InvalidDiskCapacity]]

[[Unhealthy]]

.../...

 your_namespace        28s        Warning  [[FailedScheduling]]                 pod/kibana-kibana-654ccb45bd-pbp4r            0/2 nodes are available: 2 [[Insufficient cpu]].
 
 your_namespace        4m53s      Warning  [[ProbeWarning]]              [[pod]]/[[metabase]]-prod-f8f4b765b-h4pgs                                  Readiness probe warning:

 your_namespace        26m        Warning  [[Unhealthy]]                pod/elasticsearch-master-1                                          [[Readiness probe failed]]: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )...

 your_namespace        99s        Warning  [[BackOff]]                pod/elasticsearch-master-0                Back-off restarting failed container

 default      107s        Warning  [[ProvisioningFailed]]    persistentvolumeclaim/myprometheus-server        (combined from similar events): failed to provision volume with StorageClass "[[gp2]]": rpc error: code = Internal desc = [[Could not create volume]] "pvc-4e14416c-c9c2-4d39-b749-9ce0fa98d597": could not create volume in EC2: [[UnauthorizedOperation]]: [[You are not authorized to perform this operation]]. Encoded authorization failure message: Goz6E3qExxxxx.../...

 [[kube-system]]     9m44s      Warning  [[FailedMount]]               pod/[[kube-dns]]-85df8994db-v8qdg                          [[MountVolume]].SetUp failed for volume "kube-dns-config" : failed to sync [[configmap cache]]: [[timed out waiting for the condition]]
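A compact version of the listings above can be produced with <code>-o custom-columns</code>, keeping only the fields of interest (a sketch; assumes a configured cluster):

```shell
# Namespace, reason and message for every Warning event
kubectl get events -A --field-selector type=Warning \
  -o custom-columns=NAMESPACE:.metadata.namespace,REASON:.reason,MESSAGE:.message
```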
+
 [[kube-system]]     43m        Warning   [[ClusterUnhealthy]]         [[configmap/]][[cluster-autoscaler-status]]        [[Cluster has no ready nodes]].

 LAST SEEN  TYPE      REASON              OBJECT                                MESSAGE
 28s        Warning  [[FailedScheduling]]    pod/deployment-123       0/3 nodes are available: 3 [[persistentvolumeclaim]] "your" bound to non-existent persistentvolume "".
 19m        Warning  FailedScheduling    pod/deployment-123       0/3 nodes are available: 3 persistentvolumeclaim "your" not found.
 10m        Warning  FailedScheduling    pod/deployment-91234     0/3 nodes are available: 3 persistentvolumeclaim "your" bound to non-existent persistentvolume "".
 17m        Warning  [[ClaimLost]]           persistentvolumeclaim/yourclaim  [[Bound claim has lost reference to PersistentVolume]]. Data on the volume is lost!

=== Normal ===
[[Started]]

[[Created]]

[[Pulled]]

[[Pulling]]

[[Requested]]

[[SawCompletedJob]]

[[Scheduled]]

[[Killing]]

[[Evict]]

[[SandboxChanged]]

[[SuccessfulCreate]] - [[ReplicaSet]]

[[SuccessfulDelete]]

[[NodeNotSchedulable]]

[[RemovingNode]]

[[TaintManagerEviction]]

[[WaitForFirstConsumer]]

[[ExternalProvisioning]]

[[TaintManagerEviction: Cancelling deletion of pod]]

 default     4s          Normal    [[Provisioning]]           persistentvolumeclaim/myprometheus-alertmanager  External provisioner is provisioning volume for claim "default/myprometheus-alertmanager"

Related: <code>[[kubectl get pvc]]</code>

 ingress-nginx  53m        Normal    [[UpdatedLoadBalancer]]      service/nginx-ingress-controller                        Updated load balancer with new hosts
 ingress-nginx  54m        Warning  [[UnAvailableLoadBalancer]]  service/nginx-ingress-controller                        There are no available nodes for LoadBalancer
  
== Events ==
* <code>[[BackOff]]</code>
* <code>[[Completed]]</code>
* <code>[[Created]]</code>
* <code>[[DeadlineExceeded]]</code>
* <code>[[Failed]]</code>
* <code>[[FailedAttachVolume]]</code>
* <code>[[FailedCreatePodSandBox]]</code>
* <code>[[FailedMount]]</code>
* <code>[[FailedKillPod]]</code>
* <code>[[FailedScheduling]]</code>
* <code>[[FailedToUpdateEndpoint]]</code>
* <code>[[FailedToUpdateEndpointSlices]]</code>
* <code>[[Generated]]</code>
* <code>[[PresentError]]</code>
* <code>[[Pulled]]</code>
* <code>[[Pulling]]</code>
* <code>[[Requested]]</code>
* <code>[[SawCompletedJob]]</code>
* <code>[[Scheduled]]</code>
* <code>[[Started]]</code>
* <code>[[SuccessfulCreate]]</code>
* <code>[[SuccessfulDelete]]</code>
* <code>[[NetworkNotReady]]</code>
* <code>[[NodeNotReady]]</code>
* <code>[[NodeAllocatableEnforced]]</code>
* <code>[[NoPods]]</code>
* <code>[[NodeHasNoDiskPressure]]</code>
* <code>[[UnAvailableLoadBalancer]]</code>
* <code>[[Unhealthy]]</code>
* <code>[[VolumeFailedDelete]]</code>
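To see which of these reasons are actually occurring in a given cluster, the REASON column can be aggregated with standard shell tools (a sketch; with <code>-A --no-headers</code> the reason is the fourth column):

```shell
# Count occurrences of each event reason, most frequent first
kubectl get events -A --no-headers | awk '{print $4}' | sort | uniq -c | sort -rn
```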
 
== Related ==
* <code>[[kubectl events]]</code>
* <code>[[kubectl top]]</code>
* <code>[[kubectl logs]]</code>
* <code>[[gcloud logging read resource.labels.cluster_name]]</code>
* <code>[[FailedScheduling]]</code>
* <code>[[kubectl get events --help]]</code>
* [[job-controller]]
* [[kubelet]]
* [[GCP Node logs]]
* [[Events]]
* <code>[[gcloud logging read]] projects/yourproject/logs/[[kubelet]]</code>
* <code>[[kubectl describe nodes (conditions:)]]</code>
* <code>[[kubectl describe nodes]] | grep [[KubeletReady]]</code>
* <code>[[--event-ttl]]</code> defines the amount of time to retain events, default 1h.
 
== See also ==
* {{kubectl events}}
* {{kubectl info}}
* {{K8s troubleshooting}}
* {{K8s monitoring}}

[[Category:K8s]]
