Commit 62795fb — "adding app" — Bryan Dollery, Sep 15, 2020 (parent fc18847)

Showing 5 changed files with 426 additions and 2 deletions.
**`README.md`** (120 additions, 2 deletions)

Use Grafana to observe Consul-Connect service-mesh metrics collected by Prometheus.
### Prerequisites
Most people can run this lab on their laptops, and if you can, that is the recommended approach. If your laptop runs out of steam, try it on Sandbox. You'll need docker, helm, and kubectl installed. These already exist in the Sandbox, but you may have to install them yourself if you are running the lab locally.

## Getting started

You will progress faster if you use a Makefile for your commands. Start with the following, and we'll add more to it as we progress:
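The starter Makefile's contents are collapsed in this diff view, so here is a minimal sketch of the kind of starter the text describes. Only the `cluster` and `list` targets are mentioned in this README; the cluster name `lab`, the `delete-cluster` target, and the exact flags are assumptions:

```makefile
# Hypothetical starter sketch — the cluster name and exact flags are assumptions
cluster:
	k3d cluster create lab | tee -a output.log

delete-cluster:
	k3d cluster delete lab

list:
	helm list --all-namespaces
```

Because `cluster` is the first target, plain `make` and `make cluster` do the same thing, which matches the description below.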

Running `make` or `make cluster` will create a k3d cluster capable of running the lab.

The `list` target exists so we can examine, and if necessary debug, our work via Helm.

### Installing Consul
We will install Consul from the official Helm chart, with the following values:

**`consul-values.yaml`**
Before you run `make install` you'll have to run `make init` to create the required resources.

> This is a lab-quality Consul installation. For production hardening, please review [Secure Consul on K8S](https://learn.hashicorp.com/tutorials/consul/kubernetes-secure-agents).

### Installing Prometheus & Grafana

We need values files for both of these components:

**`prometheus-values.yaml`**
```yaml
server:
  persistentVolume:
    enabled: false

alertmanager:
  enabled: false
```

We are disabling the Alertmanager because we're not using it for this lab. In a production environment you would enable alerts and configure them to notify you via email, Slack, or other formal, continuously monitored channels (ServiceNow, for example) of any systemic outage that needs your attention.
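Purely as an illustration of that point (none of this is used in the lab, and the exact key layout depends on your chart version), a production-style values fragment with a Slack receiver might look roughly like this — the webhook URL and channel are placeholders:

```yaml
# Hypothetical production sketch — receiver name, URL, and channel are placeholders
alertmanager:
  enabled: true
  config:
    route:
      receiver: ops-slack
    receivers:
      - name: ops-slack
        slack_configs:
          - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ
            channel: "#ops-alerts"
```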

Also, because this is a lab environment, we don't need to persist Prometheus's data for later, so we're disabling the persistent volume.

**`grafana-values.yaml`**
```yaml
adminUser: admin
adminPassword: password

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server
        access: proxy
        isDefault: true

service:
  type: NodePort
  targetPort: 3000

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  labels: {}
  hosts: [""]
  path: /grafana
```

We have exposed a NodePort to make using the service a little easier.
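Because the NodePort is assigned randomly from the node-port range by default, a small helper target can save some squinting at `kubectl get svc` output. This target is an assumption, not part of the commit; the service name `grafana` and namespace `prograf` match the install commands below:

```makefile
# Hypothetical helper — prints the NodePort assigned to the Grafana service
grafana-port:
	kubectl get svc grafana -n prograf -o jsonpath='{.spec.ports[0].nodePort}'
```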

**`Makefile`**
```makefile
install: install-consul install-ingress-nginx install-prometheus install-grafana

install-prometheus:
	helm install -f prometheus-values.yaml prometheus prometheus-community/prometheus -n prograf | tee -a output.log

delete-prometheus:
	helm delete -n prograf prometheus
	helm delete -n prograf prometheus-consul-exporter

install-grafana:
	helm install -f grafana-values.yaml grafana grafana/grafana -n prograf | tee -a output.log

delete-grafana:
	helm delete -n prograf grafana
```

> *Please change the existing `install` target rather than creating a new one.*






---


## Extra Credit: Making Ingress Work
### Installing ingress-nginx
We will need the following values:

**`ingress-nginx-values.yaml`**
```yaml
controller:
  resources:
    limits:
      cpu: 100m
      memory: 90Mi
    requests:
      cpu: 100m
      memory: 90Mi
  metrics:
    port: 10254
    enabled: true
    service:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "10254"
    serviceMonitor:
      enabled: true
      namespaceSelector:
        any: true
      scrapeInterval: 30s
```

And we should add the following to the `Makefile`:

**`Makefile`**
```makefile
install: install-consul install-ingress-nginx

install-ingress-nginx:
	helm install ingress-nginx ingress-nginx/ingress-nginx -f ingress-nginx-values.yaml | tee -a output.log

delete-ingress-nginx:
	helm delete ingress-nginx
```
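A quick way to check the ingress end-to-end is to curl Grafana's `/grafana` path through it. This target is an assumption, not part of the commit, and it assumes your k3d cluster was created with the cluster's port 80 mapped to a local port (8081 here); adjust to your own mapping:

```makefile
# Hypothetical smoke test — the localhost port mapping is an assumption
test-ingress:
	curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8081/grafana/
```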
**`demo-app/frontend.yaml`** (97 additions)

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: frontend
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend
automountServiceAccountToken: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap
data:
  config: |
    # /etc/nginx/conf.d/default.conf
    server {
        listen      80;
        server_name localhost;
        #charset koi8-r;
        #access_log /var/log/nginx/host.access.log main;
        location / {
            root  /usr/share/nginx/html;
            index index.html index.htm;
        }
        # Proxy the api location to avoid CORS issues;
        # uses the local port exposed by Consul Connect
        location /api {
            proxy_pass http://127.0.0.1:8080;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      service: frontend
      app: frontend
  template:
    metadata:
      labels:
        service: frontend
        app: frontend
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9102"
        consul.hashicorp.com/connect-inject: "true"
        consul.hashicorp.com/connect-service-upstreams: "public-api:8080"
    spec:
      serviceAccountName: frontend
      volumes:
        - name: config
          configMap:
            name: nginx-configmap
            items:
              - key: config
                path: default.conf
      containers:
        - name: frontend
          image: hashicorpdemoapp/frontend:v0.0.3
          ports:
            - containerPort: 80
          volumeMounts:
            - name: config
              mountPath: /etc/nginx/conf.d
              readOnly: true
```
**`demo-app/products-api.yaml`** (90 additions)

```yaml
---
# Service to expose the products API
apiVersion: v1
kind: Service
metadata:
  name: products-api-service
spec:
  selector:
    app: products-api
  ports:
    - name: http
      protocol: TCP
      port: 9090
      targetPort: 9090
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: products-api
automountServiceAccountToken: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-configmap
data:
  config: |
    {
      "db_connection": "host=postgres port=5432 user=postgres password=password dbname=products sslmode=disable",
      "bind_address": ":9090",
      "metrics_address": ":9103"
    }
---
# Products API deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: products-api
  labels:
    app: products-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: products-api
  template:
    metadata:
      labels:
        app: products-api
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9102"
        consul.hashicorp.com/connect-inject: "true"
        consul.hashicorp.com/connect-service-upstreams: "postgres:5432"
    spec:
      serviceAccountName: products-api
      volumes:
        - name: config
          configMap:
            name: db-configmap
            items:
              - key: config
                path: conf.json
      containers:
        - name: products-api
          image: hashicorpdemoapp/product-api:v0.0.11
          ports:
            - containerPort: 9090
            - containerPort: 9103
          env:
            - name: CONFIG_FILE
              value: /config/conf.json
          livenessProbe:
            httpGet:
              path: /health
              port: 9090
            initialDelaySeconds: 15
            timeoutSeconds: 1
            periodSeconds: 10
            failureThreshold: 30
          volumeMounts:
            - name: config
              mountPath: /config
              readOnly: true
```
**`demo-app/products-db.yaml`** (63 additions)

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    app: postgres
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: postgres
automountServiceAccountToken: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      service: postgres
      app: postgres
  template:
    metadata:
      labels:
        service: postgres
        app: postgres
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9102"
        consul.hashicorp.com/connect-inject: "true"
    spec:
      serviceAccountName: postgres
      containers:
        - name: postgres
          image: hashicorpdemoapp/product-api-db:v0.0.11
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: products
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: password
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: pgdata
      volumes:
        - name: pgdata
          emptyDir: {}
```
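This commit doesn't wire the three `demo-app` manifests into the Makefile. A sketch of targets you might add — these are assumptions, and the `prograf` namespace simply mirrors the other targets:

```makefile
# Hypothetical targets — not part of this commit
install-app:
	kubectl apply -n prograf -f demo-app/ | tee -a output.log

delete-app:
	kubectl delete -n prograf -f demo-app/
```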