Minor grammar fixes and tweaking use of CRD #1

You can also check out recordings from the following KubeCon EU 2019 sessions:

If your organization has been operating Kubernetes, you have probably been looking for ways to control what end users can do on the cluster and to ensure that clusters are in compliance with company policies. These policies may exist to meet governance and legal requirements, or to enforce best practices and organizational conventions. With Kubernetes, how do we ensure compliance without sacrificing development agility and operational independence?

For example, you may want to enforce policies like:

* All images must be from approved repositories
* All ingress hostnames must be globally unique
Kubernetes allows decoupling policy decisions from the API server by means of [admission controller webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), which intercept admission requests before they are persisted as objects in Kubernetes.
Before we dive into the current state of Gatekeeper, let’s take a look at how the Gatekeeper project has evolved.

* Gatekeeper v1.0 - Uses OPA as the admission controller with the kube-mgmt sidecar enforcing configmap-based policies. It provides validating and mutating admission control. Donated by Styra.
* Gatekeeper v2.0 - Uses Kubernetes Policy Controller as the admission controller with OPA and kube-mgmt sidecars enforcing configmap-based policies. It provides validating and mutating admission control and audit functionality. Donated by Microsoft.
* Gatekeeper v3.0 - The admission controller is integrated with the [OPA Constraint Framework](https://github.com/open-policy-agent/frameworks/tree/master/constraint) to enforce CRD-based policies and allow declaratively configured policies to be reliably shareable. Built with kubebuilder, it provides validating and, eventually, mutating (to be implemented) admission control and audit functionality. This enables the creation of policy templates backed by [Rego](https://www.openpolicyagent.org/docs/v0.10.7/how-do-i-write-policies/), creation of policies as custom resources, and storage of audit results on policy resources. This project is a collaboration between Google, Microsoft, Red Hat, and Styra.

![](/images/blog/2019-05-22-opa-gatekeeper/v3.png)

Now let’s take a closer look at the current state of Gatekeeper and how you can leverage all the latest features.

### Validating Admission Control

Once all the Gatekeeper components have been [installed](https://github.com/open-policy-agent/gatekeeper) in your cluster, the API server will trigger the Gatekeeper admission webhook to process the admission request whenever a resource in the cluster is created or updated.

During the validation process, Gatekeeper acts as a bridge between the API server and OPA. The API server will enforce all policies executed by OPA.
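
Concretely, Gatekeeper registers itself with the API server as a validating admission webhook. A minimal sketch of such a registration follows; the resource name, service reference, and path here are illustrative, not Gatekeeper's exact manifest:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: gatekeeper-validating-webhook   # illustrative name
webhooks:
  - name: validation.gatekeeper.sh
    clientConfig:
      service:
        name: gatekeeper-webhook-service   # illustrative service reference
        namespace: gatekeeper-system
        path: "/v1/admit"
    rules:
      - apiGroups: ["*"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["*"]
    failurePolicy: Ignore   # fail open so a webhook outage does not block the cluster
```

With a registration like this in place, every matching create or update request is sent to the webhook service for evaluation before the object is persisted.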

### Policies and Constraints

With the integration of the OPA Constraint Framework, a constraint is a declaration that its author wants a system to meet a given set of requirements. Each constraint is written with [Rego](https://www.openpolicyagent.org/docs/v0.10.7/how-do-i-write-policies/), the declarative query language OPA uses to enumerate instances of data that violate the expected state of the system. All constraints are evaluated as a logical AND; if any single constraint is not satisfied, the whole request is rejected.

Before defining a constraint, you need to create a constraint template that allows people to declare new constraints. Each template describes both the [Rego](https://www.openpolicyagent.org/docs/v0.10.7/how-do-i-write-policies/) logic that enforces the constraint and the schema for the constraint, which includes the schema of the CRD and the parameters that can be passed into a constraint, much like arguments to a function.

For example, here is a constraint template that requires certain labels to be present on an arbitrary object.

```yaml
apiVersion: templates.gatekeeper.sh/v1alpha1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
        listKind: K8sRequiredLabelsList
        plural: k8srequiredlabels
        singular: k8srequiredlabels
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        deny[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
```

Once a constraint template has been deployed in the cluster, an admin can now create individual constraint resources as defined by the constraint template. For example, here is a constraint resource that requires the label `hr` to be present on all namespaces.

```yaml
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-hr
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["hr"]
```

Similarly, another constraint resource that requires the label `finance` to be present on all namespaces can easily be created from the same constraint template.

```yaml
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-finance
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["finance"]
```

As you can see, with the Constraint Framework, we can reliably share Rego via the constraint templates, define the scope of enforcement with the match field, and provide user-defined parameters to the constraints to create customized behavior for each constraint.
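
For instance, the match field can narrow enforcement to particular kinds, and a different parameter value changes what the constraint requires. The sketch below scopes a hypothetical constraint to Deployments; the resource name and the `owner` label are illustrative, not from the examples above:

```yaml
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: K8sRequiredLabels
metadata:
  name: deployments-must-have-owner   # illustrative name
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]   # enforce only on Deployments, not all objects
  parameters:
    labels: ["owner"]   # illustrative parameter value
```

The same shared template thus yields differently scoped, differently parameterized constraints without any new Rego being written.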

### Audit

The audit functionality enables periodic evaluations of replicated resources against the constraints enforced in the cluster to detect pre-existing misconfigurations. Audit results are stored as violations listed in the status field of the constraint resource.

```yaml
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-hr
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["hr"]
status:
  auditTimestamp: "2019-05-11T01:46:13Z"
  enforced: true
  violations:
  - enforcementAction: deny
    kind: Namespace
    message: 'you must provide labels: {"hr"}'
    name: default
```

Post KubeCon EU, the community behind the Gatekeeper project will be focusing on providing mutating admission control to support mutation scenarios (for example, automatically annotating objects with departmental information when a new resource is created), supporting external data to inject context from outside the cluster into admission decisions, supporting dry runs to see the impact of a policy on existing resources in the cluster before enforcing it, and adding more audit functionality.

If you are interested in learning more about the project, check out the [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) repo. If you are interested in helping define the direction of Gatekeeper, join the [#kubernetes-policy](https://openpolicyagent.slack.com/messages/CDTN970AX) channel on OPA Slack, and join our [weekly meetings](https://docs.google.com/document/d/1A1-Q-1OMw3QODs1wT6eqfLTagcGmgzAJAjJihiO3T48/edit) to discuss development, issues, use cases, and more.