Goal
Install on all nodes
- Install Ubuntu Server on all nodes
- if you plan to use Ceph… create two partitions (see the LVM sketch below)
  - one for your normal root partition
  - the other for Ceph, to use as raw storage for the OSD
- to have HA you need at least 3 nodes
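- a minimal sketch of that partitioning with the stock Ubuntu LVM layout (the volume group name ubuntu-vg and the size split are assumptions… adjust to your disks)
# leave free space in the volume group during the Ubuntu install,
# then carve a dedicated LV for the Ceph OSD out of it
sudo lvcreate -n ceph-osd -l 100%FREE ubuntu-vg
sudo lvs   # should list ubuntu-lv (root) plus ceph-osd
# the new LV appears as /dev/mapper/ubuntu--vg-ceph--osd, the device
# name used later in the Rook storage config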
Enable Ingress + https
- Ingress is the resource that routes outside traffic into the cluster
- K3s already comes with Traefik enabled as an ingress controller
- to get Let’s Encrypt https certificates, I install cert-manager…
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.17.0/cert-manager.yaml
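- to verify the install, check that the cert-manager pods come up
kubectl get pods -n cert-manager
# cert-manager, cert-manager-cainjector and cert-manager-webhook should be Running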
- Now we need to create a ClusterIssuer and apply it with kubectl apply -f letsencrypt.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  annotations:
    cert-manager.io/default-issuer: "true"
spec:
  acme:
    email: <YOUR_EMAIL_HERE>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: traefik
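- a quick sanity check that the issuer registered with the ACME server (the READY column should show True)
kubectl get clusterissuer letsencrypt-prod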
- with that in place we can expose a service on a subdomain… here is a small example that deploys a demo nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: nginxdemos/hello
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: default-redirect-https@kubernetescrd
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - hello.domain.net
      secretName: hello-domain-net-tls
  rules:
    - host: hello.domain.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world-service
                port:
                  number: 80
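- note that the ingress references a redirect-https Middleware in the default namespace that is not defined anywhere above… a minimal sketch of it could look like this (on older K3s/Traefik versions the apiVersion is traefik.containo.us/v1alpha1)
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: redirect-https
  namespace: default
spec:
  redirectScheme:
    scheme: https
    permanent: true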
Enable NFS
- I have a small Synology NAS in my homelab
- it is handy for some workloads to mount folders from the NAS into the cluster
- you need to install nfs-common on every node
apt-get install nfs-common
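- before wiring it into Kubernetes, it’s worth mounting the export by hand once (NAS IP and export path are placeholders matching the chart values below)
sudo mount -t nfs YOUR_NAS_IP:/volume1/kubernetes /mnt
ls /mnt          # should list the NAS folder
sudo umount /mnt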
- now we need to create a new storage class… save the following as nfs.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nfs
  namespace: default
spec:
  chart: nfs-subdir-external-provisioner
  repo: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
  targetNamespace: default
  set:
    nfs.server: YOUR_NAS_IP
    nfs.path: /volume1/kubernetes
    storageClass.name: nfs
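- this is a K3s HelmChart manifest, so you can apply it directly or drop it into the K3s auto-deploy directory
kubectl apply -f nfs.yaml
# or: sudo cp nfs.yaml /var/lib/rancher/k3s/server/manifests/
kubectl get storageclass   # "nfs" should appear once the chart installs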
- now you can use nfs as the storageClassName in your PVCs… here is an example
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-pvc
  namespace: media
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 500G
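- and a hypothetical pod mounting that claim, just to show the wiring (the pod name and mount path are made up)
apiVersion: v1
kind: Pod
metadata:
  name: media-test
  namespace: media
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: media
          mountPath: /media
  volumes:
    - name: media
      persistentVolumeClaim:
        claimName: media-pvc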
Enable Ceph
- Ceph allows you to have distributed block storage
- if you don’t have this, and you need persistence in a PVC… you could use a NAS… but you have to be careful with databases (PostgreSQL or SQLite), since they don’t play well with NFS file locking
Create Rook operator
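- a sketch of the install following the Rook quickstart (the release branch is an assumption… check the Rook docs for the current one)
git clone --single-branch --branch v1.16.0 https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# edit cluster.yaml with the storage section below, then:
kubectl create -f cluster.yaml
- in cluster.yaml, the storage section tells Rook which devices to turn into OSDs… here it points at the Ceph LV created during the Ubuntu install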
storage: # cluster level storage configuration and selection
  useAllNodes: true
  devices:
    - name: "/dev/mapper/ubuntu--vg-ceph--osd"
- check the OSD pod logs… to verify that the OSD was detected
- check the dashboard that everything went well
- forward port 8443 from a mgr pod to localhost, as shown below
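- something along these lines should work (the dashboard service name is the Rook default)
kubectl -n rook-ceph port-forward svc/rook-ceph-mgr-dashboard 8443:8443
# then open https://localhost:8443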
- to get the password
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
- the username is admin
- create a storage class
- to use a block PVC, you will need to create a CephBlockPool first
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph # namespace:cluster
spec:
  failureDomain: host
  replicated:
    size: 3
    requireSafeReplicaSize: true
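- the pool alone is not consumable by PVCs… the storage class mentioned above ties it to the Ceph CSI driver; this is adapted from the Rook examples (the rook-ceph-block name and rook-ceph namespace are the defaults there)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
allowVolumeExpansion: true
reclaimPolicy: Delete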
Enable artifacts
Enable CI/CD (Flux CD)