id | title | sidebar_label |
---|---|---|
pod-cpu-hog | Pod CPU Hog Details | Pod CPU Hog |
Type | Description | Tested K8s Platform |
---|---|---|
Generic | Consume CPU resources on the application container | GKE, Packet(Kubeadm), Minikube, EKS, AKS |
- Ensure that Kubernetes Version > 1.16
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from here
- Ensure that the `pod-cpu-hog` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from here
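A minimal sketch of these checks, assuming the operator runs in the `litmus` namespace and the experiment CR is installed in `default` (adjust both to your setup):

```bash
# Verify the Litmus Chaos Operator is running (assumes the 'litmus' namespace)
kubectl get pods -n litmus

# Verify the pod-cpu-hog ChaosExperiment CR exists (assumes the 'default' namespace)
kubectl get chaosexperiment pod-cpu-hog -n default
```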
- Application pods are healthy on the respective nodes before chaos injection
- Application pods are healthy on the respective nodes post chaos injection
- This experiment consumes CPU resources of the application container for the specified number of cores
- It simulates conditions where app pods experience CPU spikes due to expected or undesired processes, thereby testing how the overall application stack behaves when this occurs
- It causes CPU resource consumption on the specified application containers using cgroups and the litmus nsutil helper, which stress the CPU of the given target containers
- It can test the application's resilience to potential slowness/unavailability of some replicas due to high CPU load (a quick check of the target's CPU requests/limits is sketched after this list)
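Whether a CPU hog throttles the container or starves its neighbours depends on the target's resource settings, so it can help to inspect them before injecting chaos. A minimal sketch, assuming an `nginx` deployment in the `default` namespace (both are placeholders for your target):

```bash
# Inspect CPU requests/limits of the target container
# (deployment name and namespace are assumptions - substitute your own target)
kubectl get deployment nginx -n default \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'
```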
- Pod CPU Hog can be injected using the chaos library: `litmus`
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer Getting Started
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment.
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-cpu-hog-sa
  namespace: default
  labels:
    name: pod-cpu-hog-sa
    app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-cpu-hog-sa
  namespace: default
  labels:
    name: pod-cpu-hog-sa
    app.kubernetes.io/part-of: litmus
rules:
  - apiGroups: [""]
    resources: ["pods","events"]
    verbs: ["create","list","get","patch","update","delete","deletecollection"]
  - apiGroups: [""]
    resources: ["pods/exec","pods/log","replicationcontrollers"]
    verbs: ["create","list","get"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create","list","get","delete","deletecollection"]
  - apiGroups: ["apps"]
    resources: ["deployments","statefulsets","daemonsets","replicasets"]
    verbs: ["list","get"]
  - apiGroups: ["apps.openshift.io"]
    resources: ["deploymentconfigs"]
    verbs: ["list","get"]
  - apiGroups: ["argoproj.io"]
    resources: ["rollouts"]
    verbs: ["list","get"]
  - apiGroups: ["litmuschaos.io"]
    resources: ["chaosengines","chaosexperiments","chaosresults"]
    verbs: ["create","list","get","patch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-cpu-hog-sa
  namespace: default
  labels:
    name: pod-cpu-hog-sa
    app.kubernetes.io/part-of: litmus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-cpu-hog-sa
subjects:
  - kind: ServiceAccount
    name: pod-cpu-hog-sa
    namespace: default
```
Note: In case of restricted systems/setups, create a PodSecurityPolicy (psp) with the required permissions. The chaosServiceAccount can subscribe to it to work around the respective limitations. An example of a standard psp that can be used for litmus chaos experiments can be found here.
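A quick way to apply and sanity-check the RBAC above, assuming the manifest is saved as `pod-cpu-hog-rbac.yaml` (the filename is an assumption):

```bash
# Apply the ServiceAccount, Role and RoleBinding shown above
# (the filename is an assumption - use whatever you saved the manifest as)
kubectl apply -f pod-cpu-hog-rbac.yaml

# Confirm the objects exist in the app namespace
kubectl get sa,role,rolebinding -n default | grep pod-cpu-hog-sa

# Spot-check a permission granted by the Role via impersonation
kubectl auth can-i create pods -n default \
  --as=system:serviceaccount:default:pod-cpu-hog-sa
```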
- Provide the application info in `spec.appinfo`
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to be provided in a ChaosEngine specification, refer ChaosEngine Concepts
Variables | Description | Type | Notes |
---|---|---|---|
CPU_CORES | Number of CPU cores subjected to CPU stress | Optional | Defaults to 1 |
TOTAL_CHAOS_DURATION | The time duration for chaos insertion (seconds) | Optional | Defaults to 60s |
LIB | The chaos lib used to inject the chaos. Available libs are `litmus` and `pumba` | Optional | Defaults to `litmus` |
LIB_IMAGE | Image used to run the helper pod | Optional | Defaults to `litmuschaos/go-runner:1.13.8` |
STRESS_IMAGE | Container run on the node at runtime by the `pumba` lib to inject stressors. Only used with the `pumba` LIB | Optional | Defaults to `alexeiled/stress-ng:latest-ubuntu` |
TARGET_PODS | Comma-separated list of application pod names subjected to pod CPU hog chaos | Optional | If not provided, target pods are selected randomly based on the provided appLabels |
TARGET_CONTAINER | Name of the target container under chaos | Optional | If not provided, the first container of the target pod is selected |
PODS_AFFECTED_PERC | The percentage of total pods to target | Optional | Defaults to 0 (corresponds to 1 replica), provide numeric value only |
CONTAINER_RUNTIME | Container runtime interface for the cluster | Optional | Defaults to `docker`; supported values are `docker`, `containerd` and `crio` for the `litmus` LIB, and only `docker` for the `pumba` LIB |
SOCKET_PATH | Path of the containerd/crio/docker socket file | Optional | Defaults to `/var/run/docker.sock` |
RAMP_TIME | Period to wait before and after injection of chaos (in seconds) | Optional | |
SEQUENCE | Defines the sequence of chaos execution for multiple target pods | Optional | Default value: `parallel`. Supported: `serial`, `parallel` |
INSTANCE_ID | A user-defined string that holds metadata/info about the current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as a suffix to the chaosresult CR name | Optional | Ensure that the overall length of the chaosresult CR name is still < 64 characters |
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  # It can be active/stop
  engineState: 'active'
  appinfo:
    appns: 'default'
    applabel: 'app=nginx'
    appkind: 'deployment'
  chaosServiceAccount: pod-cpu-hog-sa
  experiments:
    - name: pod-cpu-hog
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: '60' # in seconds
            - name: CPU_CORES
              value: '1'
            ## Percentage of total pods to target
            - name: PODS_AFFECTED_PERC
              value: ''
            ## provide the cluster runtime
            - name: CONTAINER_RUNTIME
              value: 'docker'
            # provide the socket file path
            - name: SOCKET_PATH
              value: '/var/run/docker.sock'
```
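The sample above assumes a docker-based cluster. If you are unsure which runtime your nodes use, the node status reports it; the socket paths listed in the comment are the common defaults and may differ on your distribution:

```bash
# Report the container runtime of each node,
# e.g. docker://19.3.x, containerd://1.4.x or cri-o://1.20.x
kubectl get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'

# Typical CONTAINER_RUNTIME / SOCKET_PATH pairings (defaults may vary):
#   docker     -> /var/run/docker.sock
#   containerd -> /run/containerd/containerd.sock
#   crio       -> /var/run/crio/crio.sock
```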
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos: `kubectl apply -f chaosengine.yml`
- If the chaos experiment is not executed, refer to the troubleshooting section to identify the root cause and fix the issues.
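Once the engine is applied, the operator spins up a chaos runner and an experiment pod in the engine's namespace; following them is a quick way to confirm the experiment actually started. The names below are illustrative (the runner is typically named `<chaosengine-name>-runner`) and may vary by Litmus version:

```bash
# Follow the chaos pods created by the operator (namespace is an assumption)
kubectl get pods -n default -w

# Inspect the engine status and events for errors
kubectl describe chaosengine nginx-chaos -n default
```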
- Set up a watch on the applications interacting with/dependent on the affected pods and verify whether they are running: `watch kubectl get pods -n <application-namespace>`
- Verify the resource consumption: `kubectl top pod <target-pod-name> -n <application-namespace>`
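`kubectl top` gives a single snapshot; to see the spike across the whole chaos window it can help to sample repeatedly. A small sketch, assuming metrics-server is installed (the interval and count are arbitrary, and the placeholders must be substituted):

```bash
# Sample the target pod's CPU every 5s for roughly the 60s chaos window
# (placeholders must be replaced; requires metrics-server)
for i in $(seq 1 12); do
  kubectl top pod <target-pod-name> -n <application-namespace>
  sleep 5
done
```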
- To stop the pod-cpu-hog experiment immediately, either delete the ChaosEngine resource or execute the following command: `kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"stop"}}'`
- To restart the experiment, either re-apply the ChaosEngine YAML or execute the following command: `kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"active"}}'`
- Check whether the application stack is resilient to CPU spikes on the app replica once the experiment (job) is completed. The ChaosResult resource name is derived as `<ChaosEngine-Name>-<ChaosExperiment-Name>`: `kubectl describe chaosresult nginx-chaos-pod-cpu-hog -n <application-namespace>`
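For scripting, the verdict can also be read directly from the ChaosResult; the field path below is what Litmus 1.x reports and may differ in other versions:

```bash
# Print the experiment verdict (Pass/Fail/Awaited); field path per Litmus 1.x
kubectl get chaosresult nginx-chaos-pod-cpu-hog -n <application-namespace> \
  -o jsonpath='{.status.experimentStatus.verdict}'
```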
- A sample recording of this experiment execution is provided here.