added K8s-Deployment and edited README

Carl Sander 2021-08-18 08:38:05 +00:00
parent f1e5e53e8f
commit 60b0ee2959
8 changed files with 142 additions and 0 deletions

README.md

@@ -31,6 +31,10 @@ docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data --name upti
Browse to http://localhost:3001 after it has started.
### ☸️ Kubernetes
See more [here](kubernetes/README.md)
If you want to change the **port** or **volume**, or need to browse via a reverse proxy, please read the <a href="https://github.com/louislam/uptime-kuma/wiki/Installation#docker">wiki</a>.
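For example, a sketch of a changed mapping (host port ```8080``` and the host path ```/opt/uptime-kuma``` are placeholders; ```1.2.0``` matches the image version used in the Kubernetes Deployment below):

```sh
# Expose Uptime Kuma on host port 8080 and keep its data in a host directory.
docker run -d --restart=always -p 8080:3001 -v /opt/uptime-kuma:/app/data --name uptime-kuma louislam/uptime-kuma:1.2.0
```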

kubernetes/README.md Normal file

@@ -0,0 +1,27 @@
# Uptime-Kuma K8s Deployment
## How does it work?
Kustomize is a tool that builds a complete deployment file out of all the config elements.
You can edit the files in the ```uptime-kuma``` folder, but leave the ```kustomization.yml``` alone unless you know what you're doing.
The rendered deployment creates a certificate with the specified Issuer and an Ingress for the Uptime-Kuma ClusterIP service.
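As a minimal sketch (assuming the top-level ```kustomization.yml``` sits directly in the ```kubernetes``` folder added by this commit), you can inspect the rendered manifests before touching the cluster:

```sh
# Render all manifests with the namespace, name prefix and common labels applied,
# without changing anything in the cluster.
kustomize build kubernetes/
```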
## What do I have to edit?
You have to adjust the ```ingressroute.yml``` to your needs.
This ```ingressroute.yml``` is written for the [nginx-ingress-controller](https://kubernetes.github.io/ingress-nginx/) in combination with [cert-manager](https://cert-manager.io/).
- host
- secrets and secret names
- (Cluster)Issuer (optional)
- the image version in the Deployment file
- update:
  - change the image to a newer version and run the commands below (a sketch follows this list); the rolling update will replace the pods one after another
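A minimal sketch of such an update, assuming the manifests were applied with the ```uptime-kuma``` namespace and the ```uptime-kuma-``` name prefix from the ```kustomization.yml```:

```sh
# Re-render and apply after bumping the image tag in deployment.yml ...
kustomize build kubernetes/ > apply.yml
kubectl apply -f apply.yml

# ... then watch the rolling update replace the pods one by one.
kubectl -n uptime-kuma rollout status deployment/uptime-kuma-uptime-kuma
```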
## How To use:
- Install [kustomize](https://kubectl.docs.kubernetes.io/installation/kustomize/)
- Edit the files mentioned above to your needs
- Run ```kustomize build > apply.yml``` from inside the ```kubernetes``` folder
- Run ```kubectl apply -f apply.yml```
Now you should see some k8s magic and Uptime-Kuma should be available at the specified address.
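If your ```kubectl``` bundles Kustomize (v1.14 or newer), the build and apply steps can also be collapsed into a single command; a sketch, assuming the same ```kubernetes``` folder layout:

```sh
# Build the kustomization in kubernetes/ and apply it in one step.
kubectl apply -k kubernetes/
```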

kubernetes/kustomization.yml Normal file

@@ -0,0 +1,10 @@
namespace: uptime-kuma
namePrefix: uptime-kuma-
commonLabels:
  app: uptime-kuma
bases:
- uptime-kuma

kubernetes/uptime-kuma/deployment.yml Normal file

@@ -0,0 +1,34 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: uptime-kuma
  name: uptime-kuma
spec:
  selector:
    matchLabels:
      component: uptime-kuma
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        component: uptime-kuma
    spec:
      containers:
      - name: uptime-kuma
        image: louislam/uptime-kuma:1.2.0
        ports:
        - containerPort: 3001
        volumeMounts:
        - mountPath: /app/data
          name: uptime-kuma-storage
      volumes:
      - name: uptime-kuma-storage
        persistentVolumeClaim:
          claimName: uptime-kuma-pvc

kubernetes/uptime-kuma/ingressroute.yml Normal file

@@ -0,0 +1,39 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/server-snippets: |
      location / {
        proxy_set_header Upgrade $http_upgrade;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;
      }
  name: uptime-kuma-ingress
spec:
  tls:
  - hosts:
    - monitor.cxde.link
    secretName: monitor-cxde-link-tls
  rules:
  - host: monitor.cxde.link
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: uptime-kuma-uptime-kuma
            port:
              number: 3001

kubernetes/uptime-kuma/kustomization.yml Normal file

@@ -0,0 +1,5 @@
resources:
- deployment.yml
- service.yml
- ingressroute.yml
- pvc.yml

kubernetes/uptime-kuma/pvc.yml Normal file

@@ -0,0 +1,10 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

kubernetes/uptime-kuma/service.yml Normal file

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma
spec:
  selector:
    component: uptime-kuma
  type: ClusterIP
  ports:
  - name: http
    port: 3001
    targetPort: 3001
    protocol: TCP