id | title | sidebar_label |
---|---|---|
gcp-vm-disk-loss | GCP VM Disk Loss Experiment Details | GCP VM Disk Loss |
Type | Description | Tested K8s Platform |
---|---|---|
GCP | Causes loss of a non-boot storage persistent disk from a GCP VM instance for a specified duration of time | GKE, Minikube |
- Ensure that Kubernetes Version > 1.16
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in the operator namespace (typically, `litmus`). If not, install from here
- Ensure that the `gcp-vm-disk-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from here
- Ensure that your service account has editor or owner access for the GCP project.
- Ensure that the target disk volume to be detached is not the root volume of its instance.
- Ensure that you create a Kubernetes secret containing the GCP service account credentials in the default namespace (a short apply/verify sketch follows the sample secret below). A sample secret file looks like:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloud-secret
type: Opaque
stringData:
  type:
  project_id:
  private_key_id:
  private_key:
  client_email:
  client_id:
  auth_uri:
  token_uri:
  auth_provider_x509_cert_url:
  client_x509_cert_url:
```
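The commands below are a minimal sketch for checking the prerequisites above and applying the secret. The `litmus` namespace and the `cloud-secret.yml` filename are assumptions; adjust them to your setup.

```bash
# Verify the Litmus Chaos Operator is running (operator namespace assumed to be "litmus")
kubectl get pods -n litmus

# Verify the gcp-vm-disk-loss ChaosExperiment CR is installed in the target namespace
kubectl get chaosexperiments gcp-vm-disk-loss -n default

# Apply the sample secret above, assuming it was saved locally as cloud-secret.yml
# with the fields filled in from your GCP service account key
kubectl apply -f cloud-secret.yml
kubectl get secret cloud-secret -n default
```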
- Disk volumes are attached to their respective instances (both before and after chaos injection)
- Causes chaos to disrupt the state of a GCP persistent disk volume by detaching it from its VM instance for a specified chaos duration, using the disk name.
- This Chaos Experiment can be triggered by creating a ChaosEngine resource on the cluster. To understand the values to provide in a ChaosEngine specification, refer to Getting Started
- Follow the steps in the sections below to create the chaosServiceAccount, prepare the ChaosEngine & execute the experiment.
- Use this sample RBAC manifest to create a chaosServiceAccount in the desired (app) namespace. This example consists of the minimum necessary role permissions to execute the experiment; an apply/verify sketch follows the manifest.
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gcp-vm-disk-loss-sa
  namespace: default
  labels:
    name: gcp-vm-disk-loss-sa
    app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gcp-vm-disk-loss-sa
  labels:
    name: gcp-vm-disk-loss-sa
    app.kubernetes.io/part-of: litmus
rules:
  - apiGroups: [""]
    resources: ["pods","events","secrets"]
    verbs: ["create","list","get","patch","update","delete","deletecollection"]
  - apiGroups: [""]
    resources: ["pods/exec","pods/log"]
    verbs: ["create","list","get"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create","list","get","delete","deletecollection"]
  - apiGroups: ["litmuschaos.io"]
    resources: ["chaosengines","chaosexperiments","chaosresults"]
    verbs: ["create","list","get","patch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gcp-vm-disk-loss-sa
  labels:
    name: gcp-vm-disk-loss-sa
    app.kubernetes.io/part-of: litmus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gcp-vm-disk-loss-sa
subjects:
  - kind: ServiceAccount
    name: gcp-vm-disk-loss-sa
    namespace: default
```
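Once the manifest above is saved, it can be applied and verified as follows. This is a small sketch; the filename `rbac.yaml` is only an assumption.

```bash
# Apply the RBAC manifest (filename is an assumption)
kubectl apply -f rbac.yaml

# Confirm the service account, cluster role, and binding exist
kubectl get serviceaccount gcp-vm-disk-loss-sa -n default
kubectl get clusterrole gcp-vm-disk-loss-sa
kubectl get clusterrolebinding gcp-vm-disk-loss-sa
```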
- Provide the application info in `spec.appinfo`. It is an optional parameter for infra-level experiments.
- Provide the auxiliary applications info (ns & labels) in `spec.auxiliaryAppInfo`
- Override the experiment tunables if desired in `experiments.spec.components.env`
- To understand the values to provide in a ChaosEngine specification, refer to ChaosEngine Concepts
Variables | Description | Specify In ChaosEngine | Notes |
---|---|---|---|
GCP_PROJECT_ID | The ID of the GCP project of which the disk volumes are a part | Mandatory | All the target disk volumes should belong to a single GCP project |
DISK_VOLUME_NAMES | Target non-boot persistent disk volume names | Mandatory | Multiple disk volume names can be provided as disk1,disk2,... |
DISK_ZONES | The zones of the respective target disk volumes | Mandatory | Provide the zone for every target disk name as zone1,zone2,... in the respective order of DISK_VOLUME_NAMES |
DEVICE_NAMES | The device names of the respective target disk volumes | Mandatory | Provide the device name for every target disk name as deviceName1,deviceName2,... in the respective order of DISK_VOLUME_NAMES |
Variables | Description | Specify In ChaosEngine | Notes |
---|---|---|---|
TOTAL_CHAOS_DURATION | The time duration for chaos insertion (sec) | Optional | Defaults to 30s |
CHAOS_INTERVAL | The time interval between the successive chaos iterations (sec) | Optional | Defaults to 30s |
RAMP_TIME | Period to wait before injection of chaos in sec | Optional | Default is 0 sec |
SEQUENCE | It defines the sequence of chaos execution for multiple instances | Optional | Default value: parallel. Supported: serial, parallel |
INSTANCE_ID | A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name. | Optional | Ensure that the overall length of the chaosresult CR is still < 64 characters |
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: gcp-disk-chaos
  namespace: default
spec:
  # It can be active/stop
  engineState: 'active'
  chaosServiceAccount: gcp-vm-disk-loss-sa
  experiments:
    - name: gcp-vm-disk-loss
      spec:
        components:
          env:
            # set chaos duration (in sec) as desired
            - name: TOTAL_CHAOS_DURATION
              value: '30'

            # set chaos interval (in sec) as desired
            - name: CHAOS_INTERVAL
              value: '30'

            # set the GCP project id
            - name: GCP_PROJECT_ID
              value: ''

            # set the disk volume name(s) as comma separated values
            # eg. volume1,volume2,...
            - name: DISK_VOLUME_NAMES
              value: ''

            # set the disk zone(s) as comma separated values in the corresponding
            # order of DISK_VOLUME_NAMES
            # eg. zone1,zone2,...
            - name: DISK_ZONES
              value: ''

            # set the device name(s) as comma separated values in the corresponding
            # order of DISK_VOLUME_NAMES
            # eg. device1,device2,...
            - name: DEVICE_NAMES
              value: ''
```
- Create the ChaosEngine manifest prepared in the previous step to trigger the Chaos.

  `kubectl apply -f chaosengine.yml`
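  A minimal sketch for watching the run after the engine is applied, assuming the ChaosEngine name `gcp-disk-chaos` and the `default` namespace from the sample above:

  ```bash
  # Watch the chaos-runner and experiment pods come up (Ctrl+C to stop watching)
  kubectl get pods -n default -w

  # Inspect the engine status and the events emitted during the run
  kubectl describe chaosengine gcp-disk-chaos -n default
  ```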
- If the chaos experiment is not executed, refer to the troubleshooting section to identify the root cause and fix the issues.
- Monitor the attachment status of the target disk volume using the GCP CLI.

  `gcloud compute disks describe DISK_NAME --zone=DISK_ZONE`
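  As an optional refinement, `gcloud` can print just the attachment information. The sketch below assumes the disk resource's `users` field (the list of instances currently using the disk) is the value you want to watch:

  ```bash
  # Show only the instances the disk is currently attached to
  # (an empty result means the disk is detached)
  gcloud compute disks describe DISK_NAME --zone=DISK_ZONE --format="value(users)"
  ```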
- The GCP console can also be used to monitor the disk volume attachment status.
- To stop the gcp-vm-disk-loss experiment immediately, either delete the ChaosEngine resource or execute the following command:

  `kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"stop"}}'`
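  To confirm the patch took effect, you can read back the engine state (a small sketch, using the same placeholders):

  ```bash
  # Should print "stop" once the patch has been applied
  kubectl get chaosengine <chaosengine-name> -n <namespace> -o jsonpath='{.spec.engineState}'
  ```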
- To restart the experiment, either re-apply the ChaosEngine YAML or execute the following command:

  `kubectl patch chaosengine <chaosengine-name> -n <namespace> --type merge --patch '{"spec":{"engineState":"active"}}'`
- Check whether the application is resilient to the GCP disk loss once the experiment (job) is completed. The ChaosResult resource name is derived as `<ChaosEngine-Name>-<ChaosExperiment-Name>`.

  `kubectl describe chaosresult gcp-disk-chaos-gcp-vm-disk-loss`
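  To check just the verdict programmatically, the sketch below assumes the ChaosResult exposes it under `.status.experimentStatus.verdict` (as in recent Litmus releases) and uses the names from the sample engine:

  ```bash
  # Prints the experiment verdict (e.g. Pass/Fail) once the job completes
  kubectl get chaosresult gcp-disk-chaos-gcp-vm-disk-loss -n default \
    -o jsonpath='{.status.experimentStatus.verdict}'
  ```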
- A sample recording of this experiment execution will be added soon.