Kubectl get events

From wikieduonline

Revision as of 12:01, 12 January 2024

TOMERGE: Kubernetes node events


kubectl get events -A | grep Warning | egrep "FailedMount|FailedAttachVolume|Unhealthy|ClusterUnhealthy|FailedScheduling"
kubectl get events -A | grep Normal
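These pipelines filter client-side; kubectl can also filter server-side with --field-selector (for example, kubectl get events -A --field-selector type=Warning) or order output with --sort-by=.lastTimestamp. A minimal offline sketch of the same reason filter, run against a captured sample instead of a live cluster (the two sample lines are made-up stand-ins for real output):

```shell
# Stand-in for `kubectl get events -A` output (assumed sample data).
sample='kube-system   9m   Warning   FailedMount    pod/kube-dns-85df8994db-v8qdg    MountVolume.SetUp failed
default       4s   Normal    Provisioning   persistentvolumeclaim/pvc-demo   External provisioner is provisioning'

# Same filter as the commands above: keep Warning events with selected reasons.
printf '%s\n' "$sample" \
  | grep Warning \
  | grep -E "FailedMount|FailedAttachVolume|Unhealthy|ClusterUnhealthy|FailedScheduling"
```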

Events examples

Warning

BackoffLimitExceeded
CalculateExpectedPodCountFailed
ClusterUnhealthy 
FailedMount
FailedScheduling
InvalidDiskCapacity 
Unhealthy
.../...


your_namespace        28s         Warning   FailedScheduling                  pod/kibana-kibana-654ccb45bd-pbp4r             0/2 nodes are available: 2 Insufficient cpu.
your_namespace         4m53s       Warning   ProbeWarning              pod/metabase-prod-f8f4b765b-h4pgs                                   Readiness probe warning:
your_namespace         30m         Warning   BackoffLimitExceeded      job/your-job27740460                           Job has reached the specified backoff limit
your_namespace         26m         Warning   Unhealthy                 pod/elasticsearch-master-1                                          Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )...
your_namespace        99s         Warning   BackOff                pod/elasticsearch-master-0                Back-off restarting failed container
your_namespace        108s        Warning   BackOff                pod/elasticsearch-master-1                Back-off restarting failed container
your_namespace        12m         Warning   PresentError           challenge/prod-admin-tls-cert-dzmbt-2545              Error presenting challenge: error getting clouddns service account: secret "clouddns-dns01-solver-svc-acct" not found
your_namespace 27m         Warning   OOMKilling         node/gke-you-pool4   Memory cgroup out of memory: Killed process 2768158 (python) total-vm:5613088kB, anon-rss:3051580kB, file-rss:65400kB, shmem-rss:0kB, UID:0 pgtables:7028kB oom_score_adj:997
your_namespace 8m51s       Warning   FailedScheduling       pod/myprometheus-alertmanager-5967d4ff85-5glkh    running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition
default     4m58s       Normal    ExternalProvisioning   persistentvolumeclaim/myprometheus-alertmanager   waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator

Solution: Install aws-ebs-csi-driver
default       107s        Warning   ProvisioningFailed     persistentvolumeclaim/myprometheus-server         (combined from similar events): failed to provision volume with StorageClass "gp2": rpc error: code = Internal desc = Could not create volume "pvc-4e14416c-c9c2-4d39-b749-9ce0fa98d597": could not create volume in EC2: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: Goz6E3qExxxxx.../...
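One way to install the driver is via its official Helm chart; a sketch, assuming Helm and working cluster credentials are available (chart name and repository URL as published by the aws-ebs-csi-driver project):

```shell
# Sketch: install the AWS EBS CSI driver with Helm (assumes helm + kubeconfig).
# Guard so the script skips gracefully where helm is not installed.
command -v helm >/dev/null 2>&1 || { echo "helm not found, skipping"; exit 0; }
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update
helm upgrade --install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver \
  --namespace kube-system
```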
kube-system     9m44s       Warning   FailedMount               pod/kube-dns-85df8994db-v8qdg                           MountVolume.SetUp failed for volume "kube-dns-config" : failed to sync configmap cache: timed out waiting for the condition


kube-system   43m   Warning   ClusterUnhealthy   configmap/cluster-autoscaler-status   Cluster has no ready nodes.

LAST SEEN   TYPE      REASON              OBJECT                                MESSAGE
28s         Warning   FailedScheduling    pod/deployment-123                    0/3 nodes are available: 3 persistentvolumeclaim "your" bound to non-existent persistentvolume "".
19m         Warning   FailedScheduling    pod/deployment-123                    0/3 nodes are available: 3 persistentvolumeclaim "your" not found.
10m         Warning   FailedScheduling    pod/deployment-91234                  0/3 nodes are available: 3 persistentvolumeclaim "your" bound to non-existent persistentvolume "".
6m          Normal    SuccessfulDelete    replicaset/deployment-123             Deleted pod: deployment-61234
5m47s       Normal    SuccessfulCreate    replicaset/deployment-123             Created pod: deployment-123
6m          Normal    ScalingReplicaSet   deployment/deployment                 Scaled down replica set deployment-1234 to 0
5m47s       Normal    ScalingReplicaSet   deployment/deployment                 Scaled up replica set deployment-6123 to 1
17m         Warning   ClaimLost           persistentvolumeclaim/storagedbtemp   Bound claim has lost reference to PersistentVolume. Data on the volume is lost!
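Captured kubectl get events output like the table above is easy to post-process with standard shell tools; a minimal offline sketch counting events per TYPE column (the here-doc is a trimmed, made-up stand-in for real output):

```shell
# Trimmed stand-in for `kubectl get events` output (assumed sample data).
events() {
cat <<'EOF'
28s    Warning  FailedScheduling   pod/deployment-123                    0/3 nodes are available
6m     Normal   SuccessfulDelete   replicaset/deployment-123             Deleted pod: deployment-61234
5m47s  Normal   SuccessfulCreate   replicaset/deployment-123             Created pod: deployment-123
17m    Warning  ClaimLost          persistentvolumeclaim/storagedbtemp   Bound claim has lost reference
EOF
}

# Tally the TYPE column (field 2): how many Normal vs Warning events.
events | awk '{count[$2]++} END {for (t in count) print t, count[t]}' | sort
```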

Normal

Started
Created
Pulled
Pulling
Scheduled
Killing
Evict
SandboxChanged
SuccessfulCreate - ReplicaSet
SuccessfulDelete
NodeNotSchedulable
RemovingNode
TaintManagerEviction
WaitForFirstConsumer 
ExternalProvisioning
TaintManagerEviction: Cancelling deletion of pod
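Events for a single object can also be selected server-side, e.g. kubectl get events --field-selector involvedObject.name=deployment-123 (a real kubectl flag; the object name is a placeholder). Offline, the same narrowing is just a match on the OBJECT column:

```shell
# Stand-in sample for `kubectl get events` output (assumed data).
sample='6m     Normal   SuccessfulDelete   replicaset/deployment-123             Deleted pod: deployment-61234
5m47s  Normal   SuccessfulCreate   replicaset/deployment-123             Created pod: deployment-123
17m    Warning  ClaimLost          persistentvolumeclaim/storagedbtemp   Bound claim has lost reference'

# Keep only events whose OBJECT column (field 4) mentions deployment-123.
printf '%s\n' "$sample" | awk '$4 ~ /deployment-123/'
```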


default     4s          Normal    Provisioning           persistentvolumeclaim/myprometheus-alertmanager   External provisioner is provisioning volume for claim "default/myprometheus-alertmanager"

Related: kubectl get pvc


ingress-nginx   53m         Normal    UpdatedLoadBalancer       service/nginx-ingress-controller                        Updated load balancer with new hosts
ingress-nginx   54m         Warning   UnAvailableLoadBalancer   service/nginx-ingress-controller                        There are no available nodes for LoadBalancer
