Virtualisation, Storage and various other ramblings.


Kubernetes on RK1 / Turing Pi 2: Automation with Ansible, Cilium and Cert-Manager

TLDR: Take me to the Playbook

Note – This is just a high-level overview; I’ll likely follow up with a post dedicated to the Cilium/BGP configuration.

I’ve had my Turing Pi 2 board for a while now, and during that time I’ve struggled to decide which automation tooling to use to bootstrap K3s onto it. I eventually settled on Ansible. It’s not something I’m overly familiar with, but this provided a good opportunity to learn by doing.

The idea is pretty straightforward:

Each RK1 module is fairly well equipped:

  • 32GB RAM
  • 8-core CPU
  • 512GB NVMe SSD
  • Pre-installed with Ubuntu

Mikrotik Config

Prior to standing up the cluster, the Mikrotik router can be pre-configured to peer with each of the RK1 Nodes, as I did:

routing/bgp/connection/ add address-families=ip as=64512 disabled=no local.role=ibgp name=srv-rk1-01 output.default-originate=always remote.address=172.16.10.221 routing-table=main

routing/bgp/connection/ add address-families=ip as=64512 disabled=no local.role=ibgp name=srv-rk1-02 output.default-originate=always remote.address=172.16.10.222 routing-table=main

routing/bgp/connection/ add address-families=ip as=64512 disabled=no local.role=ibgp name=srv-rk1-03 output.default-originate=always remote.address=172.16.10.223 routing-table=main

routing/bgp/connection/ add address-families=ip as=64512 disabled=no local.role=ibgp name=srv-rk1-04 output.default-originate=always remote.address=172.16.10.224 routing-table=main

Workflow

The following represents an overview of the steps in the code repo.

To summarise each step:

Create Partition

Each of my RK1 modules has a dedicated 512GB NVMe drive – this will be used for primary Kubernetes storage as well as container storage. The drive is presented as a raw block device and therefore needs partitioning before mounting.
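In Ansible this step could be expressed with the `community.general` disk modules – a minimal sketch, assuming the drive appears as /dev/nvme0n1 (the device name and filesystem choice are my assumptions, not taken from the playbook):

```yaml
# Sketch only – device path and fstype are assumptions
- name: Create a GPT partition spanning the NVMe drive
  community.general.parted:
    device: /dev/nvme0n1
    label: gpt
    number: 1
    state: present

- name: Create an ext4 filesystem on the new partition
  community.general.filesystem:
    fstype: ext4
    dev: /dev/nvme0n1p1
```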

Mount Partition

The created partition is mounted to /mnt/data and checked.
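A task along these lines would mount the partition persistently (again a sketch – it assumes the partition naming from the previous step):

```yaml
# Sketch only – src assumes the partition created earlier
- name: Mount the NVMe partition at /mnt/data
  ansible.posix.mount:
    path: /mnt/data
    src: /dev/nvme0n1p1
    fstype: ext4
    state: mounted   # also persists the entry to /etc/fstab
```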

Create Symlinks

Three directories are primarily used by K3s/Containerd to store data. Symlinks are created so their contents effectively reside on the NVME drive. These are:

/run/k3s -> /mnt/data/k3s
/var/lib/kubelet -> /mnt/data/k3s-kubelet
/var/lib/rancher -> /mnt/data/k3s-rancher
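These three links could be created with a single looped task – the directory pairs are from above; the exact task shape is my own sketch:

```yaml
- name: Create symlinks onto the NVMe-backed mount
  ansible.builtin.file:
    src: "{{ item.target }}"
    dest: "{{ item.link }}"
    state: link
    force: true
  loop:
    - { link: /run/k3s,         target: /mnt/data/k3s }
    - { link: /var/lib/kubelet, target: /mnt/data/k3s-kubelet }
    - { link: /var/lib/rancher, target: /mnt/data/k3s-rancher }
```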

Install K3s

To facilitate replacing both kube-proxy and the default CNI with Cilium’s equivalents, a number of flags are passed to the server install script:

--flannel-backend=none
--disable-network-policy
--write-kubeconfig-mode 644
--disable servicelb
--token {{ k3s_token }}
--disable-cloud-controller
--disable local-storage
--disable-kube-proxy
--disable traefik
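Put together, the server install invocation might look something like the task below (a sketch only – the flags are from above, {{ k3s_token }} is templated by Ansible, and the playbook’s actual task may differ):

```yaml
- name: Install K3s server with Cilium-friendly flags
  ansible.builtin.shell: |
    curl -sfL https://get.k3s.io | sh -s - server \
      --flannel-backend=none \
      --disable-network-policy \
      --write-kubeconfig-mode 644 \
      --disable servicelb \
      --token {{ k3s_token }} \
      --disable-cloud-controller \
      --disable local-storage \
      --disable-kube-proxy \
      --disable traefik
```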

In addition, the Gateway API CRDs are installed:

- name: Apply gateway API CRDs
  kubernetes.core.k8s:
    state: present
    src: https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/experimental-install.yaml

Install Cilium

Cilium is customised with the following options enabled:

  • BGP Control Plane
  • Hubble (Relay and UI)
  • GatewayAPI
  • BGP configuration to Peer with my Mikrotik router
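In Helm values form, those options roughly correspond to the snippet below. This is a sketch against the upstream Cilium chart, not the playbook’s actual values – in particular the k8sServiceHost address is an assumption, and it is required because kube-proxy is disabled:

```yaml
# Sketch of Cilium Helm values – k8sServiceHost is an assumption
kubeProxyReplacement: true
k8sServiceHost: 172.16.10.221   # API server address when kube-proxy is absent
k8sServicePort: 6443
bgpControlPlane:
  enabled: true
gatewayAPI:
  enabled: true
hubble:
  relay:
    enabled: true
  ui:
    enabled: true
```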

Install Cert-Manager

Cert-Manager facilitates certificate generation for exposed services. In my environment, the API Gateway is annotated so that Cert-Manager automatically generates a TLS certificate for it, using DNS challenges against Azure DNS.

This step also includes the required ClusterIssuer resource, which provides configuration and authentication details.
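Cert-Manager’s Gateway API support keys off an annotation on the Gateway itself. A minimal sketch of what such an annotated Gateway could look like – the resource names, hostname and issuer name here are assumptions, not taken from the repo:

```yaml
# Sketch – names and hostname are assumptions
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
  annotations:
    # Tells cert-manager which issuer to use for listener hostnames
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  gatewayClassName: cilium
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    hostname: "*.virtualthoughts.co.uk"
    tls:
      certificateRefs:
      - name: wildcard-tls   # cert-manager stores the issued cert here
```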

Expose Hubble-UI

Gateway and HTTPRoute resources are created to expose the Hubble UI:
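An HTTPRoute for this might look like the sketch below – the Gateway name, namespace and hostname are assumptions; the hubble-ui Service listening on port 80 matches the default Hubble deployment:

```yaml
# Sketch – parentRef and hostname are assumptions
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: hubble-ui
  namespace: kube-system
spec:
  parentRefs:
  - name: main-gateway
  hostnames:
  - hubble.virtualthoughts.co.uk
  rules:
  - backendRefs:
    - name: hubble-ui
      port: 80
```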

Mikrotik BGP Peering Check

Using Winbox, the BGP peering and route propagation can be checked:
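Alternatively, the same can be verified from the RouterOS CLI (a sketch – exact command paths can vary between RouterOS versions):

/routing/bgp/session print
/ip/route print where bgp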

In my instance, 10.200.200.1 resolves to my API Gateway IP, with each node advertising this address.

Changing the default apps wildcard certificate in OCP4

In a standard OCP4 installation, several route objects are created by default and secured with an internally signed wildcard certificate.

These routes are configured as <app-name>.apps.<domain>. In my example, I have a cluster with the assigned domain ocp-acm.virtualthoughts.co.uk, which results in the routes below:

oauth-openshift.apps.ocp-acm.virtualthoughts.co.uk
console-openshift-console.apps.ocp-acm.virtualthoughts.co.uk
grafana-openshift-monitoring.apps.ocp-acm.virtualthoughts.co.uk
thanos-querier-openshift-monitoring.apps.ocp-acm.virtualthoughts.co.uk
prometheus-k8s-openshift-monitoring.apps.ocp-acm.virtualthoughts.co.uk
alertmanager-main-openshift-monitoring.apps.ocp-acm.virtualthoughts.co.uk

Inspecting console-openshift-console.apps.ocp-acm.virtualthoughts.co.uk shows us the default wildcard TLS certificate used by the Ingress Operator:

Because it’s internally signed, it’s not trusted by default by external clients. However, this can be changed.

Installing Cert-Manager

OperatorHub includes the upstream cert-manager operator, as well as one maintained by Red Hat. Either can be installed to manage the lifecycle of our new certificate. Navigate to Operators -> OperatorHub -> cert-manager and install.

Create Secret, Issuer and Certificate resources

With Cert-Manager installed, we need to provide configuration so it knows how to issue challenges and generate certificates. In this example:

  • Secret – A client secret created in my cloud provider, used to authenticate when satisfying the challenge. In this example that’s Azure DNS, as I’m using the DNS challenge type to prove ownership of the domain.
  • ClusterIssuer – A cluster-wide configuration that, when referenced, determines how certificates are issued. You can have multiple issuers in a cluster – namespace- or cluster-scoped – pointing to different providers and configurations.
  • Certificate – TLS certificates can be generated automatically from Ingress annotations; however, in this example a Certificate resource is used to request and store the certificate with its own lifecycle, not tied to a specific Ingress object.

Let’s Encrypt provides wildcard certificates, but only through the DNS-01 challenge. The HTTP-01 challenge cannot be used to issue wildcard certificates. This is reflected in the config:

apiVersion: v1
kind: Secret
metadata:
  name: azuredns-config
  namespace: cert-manager
type: Opaque
data:
  client-secret: <Base64 Encoded Secret from Azure>
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <email>
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - dns01:
        azureDNS:
          clientID: <clientID>
          clientSecretSecretRef:
            name: azuredns-config
            key: client-secret
          subscriptionID: <subscriptionID>
          tenantID: <tenantID>
          resourceGroupName: <resourceGroupName>
          hostedZoneName: virtualthoughts.co.uk
          # Azure Cloud Environment, defaults to AzurePublicCloud
          environment: AzurePublicCloud
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-apps-certificate
  namespace: openshift-ingress
spec:
  secretName: apps-wildcard-tls
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
  commonName: "*.apps.ocp-acm.virtualthoughts.co.uk"
  dnsNames:
  - "*.apps.ocp-acm.virtualthoughts.co.uk"

Applying the above will create the respective objects required for us to request, receive and store a wildcard certificate from Let’s Encrypt, using the DNS-01 challenge with Azure DNS.

The certificate may take ~2 minutes to become Ready, as the DNS-01 challenge has to wait for DNS propagation to be verified.

oc get cert -A

NAMESPACE           NAME                        READY   SECRET              AGE
openshift-ingress   wildcard-apps-certificate   True    apps-wildcard-tls   33m

Patch the Ingress Operator

With the Certificate object created, the Ingress Operator needs reconfiguring to reference the secret containing our new certificate:

oc patch ingresscontroller.operator default \
--type=merge -p \
'{"spec":{"defaultCertificate":{"name":"apps-wildcard-tls"}}}' \
--namespace=openshift-ingress-operator

Validate

After applying, navigating back to the cluster’s console will present the new wildcard certificate:

Improving the CI/build process for the community Rancher Exporter

One of my side projects is developing and maintaining an unofficial Prometheus exporter for Rancher. It exposes metrics pertaining to Rancher-specific resources including, but not limited to, managed clusters, Kubernetes versions, and more. Below is an example dashboard based on these metrics.

overview-dashboard.png

Incidentally, if you are using Rancher, I’d love to hear your thoughts/feedback.

Previous CI workflow

The flowchart below outlines the existing process. Whilst automated, pushing directly to latest is bad practice.

To improve this, several additional steps were added. The first acquires the latest versioned image of the exporter and saves it to the $GITHUB_OUTPUT environment file:

      - name: Retrieve latest Docker image version
        id: get_version
        run: |
          echo "image_version=$(curl -s "https://registry.hub.docker.com/v2/repositories/virtualthoughts/prometheus-rancher-exporter/tags/" | jq -r '.results[].name' | grep -v latest | sort -V | tail -n 1)" >> $GITHUB_OUTPUT


Referencing this, the next version can be generated based on MAJOR.MINOR.PATCH, incrementing the PATCH version. In the future, this will be modified to add more flexibility to change MAJOR and MINOR versions.

      - name: Increment version
        id: increment_version
        run: |
          # Increment the retrieved version
          echo "updated_version=$(echo "${{ steps.get_version.outputs.image_version }}" | awk -F. -v OFS=. '{$NF++;print}')" >> $GITHUB_OUTPUT

With the version generated, the subsequent step can tag and push both the incremented version, and latest.

      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: |
            virtualthoughts/prometheus-rancher-exporter:${{ steps.increment_version.outputs.updated_version }}
            virtualthoughts/prometheus-rancher-exporter:latest

Lastly, the GitHub Action will also modify the YAML manifest file to reference the most recent versioned image:

      - name: Update Kubernetes YAML manifest
        run: |
          # Install yq
          curl -sL https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -o yq
          chmod +x yq
          sudo mv yq /usr/local/bin/
          
          # Find and update the image tag in the YAML file
          IMAGE_NAME="virtualthoughts/prometheus-rancher-exporter"
          NEW_TAG="${{ steps.increment_version.outputs.updated_version }}"
          OLD_TAG=$(yq eval '.spec.template.spec.containers[] | select(.name == "rancher-exporter").image' manifests/exporter.yaml | cut -d":" -f2)
          NEW_IMAGE="${IMAGE_NAME}:${NEW_TAG}"
          sed -i "s|${IMAGE_NAME}:${OLD_TAG}|${NEW_IMAGE}|" manifests/exporter.yaml

Which results in:


© 2024 Virtual Thoughts
