To start off on the right foot, let's define a set of tasks that are nice to do before you go any further in your week.
By performing these tasks we will keep the broken window effect under control, preventing future pain and mess.
Here is a suggested checklist of things to do at the start of an on-call shift:
- Change Slack Icon: Click your name, click `Set status`, click the grey smile face, type `:pagerduty:`, set `Clear after` to the end of your on-call shift, and click `Save` (a scripted alternative is sketched after this checklist).
- Join alert channels: If not already a member, `/join` `#alerts`, `#alerts-general`, `#alerts-prod-abuse`, `#tenable-notifications`, and `#marquee_account_alrts`.
- Turn on Slack channel notifications: Open Notification Preferences for `#production` and `#incident-management` (and optionally `#infrastructure-lounge`), and set Desktop and Mobile to `All new messages`.
- Turn on Slack alert notifications: Open Notification Preferences for `#alerts` and `#alerts-general`, and set Desktop only to `All new messages`.
- At the start of each on-call day, read the on-call handover issue that has been assigned to you by the previous EOC, and familiarize yourself with any ongoing incidents.
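If you prefer to script the status change instead of clicking through the Slack UI, the `users.profile.set` Web API method can set the same status and expiry. This is a minimal sketch, assuming a user token with the `users.profile:write` scope exported as `SLACK_USER_TOKEN` (an illustrative variable name) and a seven-day shift; adjust the expiry to match your actual shift end.

```python
import os
import time

from slack_sdk import WebClient  # pip install slack_sdk

# Assumed: a user token with the users.profile:write scope in SLACK_USER_TOKEN.
client = WebClient(token=os.environ["SLACK_USER_TOKEN"])

# Clear the status automatically at the end of the shift (illustrative: 7 days out).
shift_end = int(time.time()) + 7 * 24 * 60 * 60

client.users_profile_set(
    profile={
        "status_text": "On call",
        "status_emoji": ":pagerduty:",
        "status_expiration": shift_end,
    }
)
```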
At the end of a shift:
- Turn off Slack channel notifications: Open Notification Preferences in the monitored Slack channels from the previous checklist and return them to your desired values.
- Leave noisy alert channels: `/leave` the alert channels (it's good to stay in `#alerts` and `#alerts-general`).
- Comment on any open S1 incidents at: https://gitlab.com/gitlab-com/gl-infra/production/issues?scope=all&utf8=✓&state=opened&label_name%5B%5D=incident&label_name%5B%5D=S1 (a scripted query is sketched after this list).
- At the end of each on-call day, post a quick update in Slack so the next person is aware of anything ongoing, any false alerts, or anything that needs to be handed over.
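For the S1 incident check, the same filter as the search URL above can be pulled from the GitLab issues API instead of the web UI. This is a minimal sketch, assuming a personal access token with the `read_api` scope exported as `GITLAB_TOKEN` (an illustrative variable name).

```python
import os

import requests

# gitlab-com/gl-infra/production, URL-encoded as a project path.
PROJECT = "gitlab-com%2Fgl-infra%2Fproduction"
API = f"https://gitlab.com/api/v4/projects/{PROJECT}/issues"

headers = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}
# Same filter as the search URL above: open issues labeled incident + S1.
params = {"state": "opened", "labels": "incident,S1", "per_page": 100}

resp = requests.get(API, headers=headers, params=params, timeout=10)
resp.raise_for_status()
for issue in resp.json():
    print(issue["iid"], issue["title"], issue["web_url"])
```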
First check the on-call issues to familiarize yourself with what has been happening lately. Also, keep an eye on the #production and #incident-management channels for discussion around any ongoing issues.
Start by checking how many alerts are in flight right now:
- Go to the fleet overview dashboard and check the number of Active Alerts; it should be 0. If it is not 0:
  - Go to the alerts dashboard and check what is being triggered.
  - Watch the #alerts, #alerts-general, and #alerts-gstg channels for alert notifications; each alert here should point you to the right runbook to fix it.
  - If they don't, you have more work to do.
  - Be sure to create an issue, particularly to declare toil so we can work on it and suppress it.
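If you'd rather check from a terminal than the dashboard, the Prometheus HTTP API exposes the alerts currently in flight at `/api/v1/alerts`. The sketch below uses a placeholder Prometheus address; substitute your environment's actual endpoint.

```python
import requests

# Placeholder address; point this at your environment's Prometheus instance.
PROMETHEUS = "http://prometheus.example.internal:9090"

resp = requests.get(f"{PROMETHEUS}/api/v1/alerts", timeout=10)
resp.raise_for_status()
alerts = resp.json()["data"]["alerts"]

# Only alerts in the "firing" state are active; "pending" ones have not
# crossed their `for` duration yet and are not routed to Slack or PagerDuty.
firing = [a for a in alerts if a["state"] == "firing"]
print(f"{len(firing)} active alert(s)")
for alert in firing:
    print(alert["labels"].get("alertname"), alert["labels"].get("severity"))
```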
Check how many targets are not being scraped at the moment. To do this:
- Go to the fleet overview dashboard and check the number of Targets down; it should be 0. If it is not 0:
  - Go to the [targets down list] and check which targets are down.
  - Try to figure out why there are scraping problems and try to fix them. Note that sometimes there can be temporary scraping problems because of exporter errors.
  - Be sure to create an issue, particularly to declare toil so we can work on it and suppress it.
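The "Targets down" count can be reproduced with an instant query for `up == 0` against the Prometheus HTTP API; again, the address below is a placeholder for your environment's endpoint.

```python
import requests

# Placeholder address; point this at your environment's Prometheus instance.
PROMETHEUS = "http://prometheus.example.internal:9090"

# `up == 0` returns one series per scrape target that failed its last scrape.
resp = requests.get(
    f"{PROMETHEUS}/api/v1/query", params={"query": "up == 0"}, timeout=10
)
resp.raise_for_status()

down = resp.json()["data"]["result"]
print(f"{len(down)} target(s) down")
for series in down:
    labels = series["metric"]
    print(labels.get("job"), labels.get("instance"))
```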
We use PagerDuty to manage our on-call rotation schedule and alerting for emergency issues. We currently have a split schedule between EMEA and AMER for on-call rotations in each geographical region; we will also incorporate a rotation for team members in the APAC region as we continue to grow over time.
The EMEA and AMER schedule each have a shadow schedule which we use for on-boarding new engineers to the on-call rotations.
When a new engineer joins the team and is ready to start shadowing for an on-call rotation, overrides should be enabled for the relevant on-call hours during that rotation. Once they have completed shadowing and are comfortable/ready to be inserted into the primary rotations, update the membership list for the appropriate schedule to add the new team member.
This PagerDuty forum post was referenced when setting up the blank shadow schedule and initial overrides for on-boarding new team members.
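For reference, shadow overrides can also be created programmatically through PagerDuty's schedule overrides REST endpoint rather than the web UI. The sketch below is illustrative only: the schedule ID, user ID, time window, and `PAGERDUTY_TOKEN` variable are placeholders, and the request body shape should be checked against the current API documentation before use.

```python
import os

import requests

# Placeholders: substitute the shadow schedule ID and the new engineer's user ID.
SCHEDULE_ID = "PXXXXXX"
SHADOW_USER_ID = "PYYYYYY"

headers = {
    "Authorization": f"Token token={os.environ['PAGERDUTY_TOKEN']}",
    "Content-Type": "application/json",
    "Accept": "application/vnd.pagerduty+json;version=2",
}

# One override covering a single day of the shadowee's relevant on-call hours.
payload = {
    "override": {
        "start": "2024-01-15T08:00:00Z",
        "end": "2024-01-16T08:00:00Z",
        "user": {"id": SHADOW_USER_ID, "type": "user_reference"},
    }
}

resp = requests.post(
    f"https://api.pagerduty.com/schedules/{SCHEDULE_ID}/overrides",
    headers=headers,
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```

Repeat the override for each on-call day of the shadow period, or loop over the dates in the same script.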