Virtualisation, Storage and various other ramblings.

Author: David

Gen AI tools are making me lose my fondness for programming

For clarity – I’m not a software engineer but I like writing stuff in Go and have a few side projects. I originally decided to learn Go so I could more effectively read and eventually contribute to open source projects in the K8s space, where there’s a lot of Go.

Almost every day I see posts/articles about how AI’s going to take over software engineering jobs and I find it exhausting because deep down, I know it’s bullshit, but it’s everywhere.

Yet, I feel compelled to use tools like Copilot and ChatGPT in my programming projects. I feel guilty if I don’t – like I’m not keeping up with the latest tools.

However, if I do, it’s so tempting to just keep copy-pasting generated code until something “Just Works”, rather than going down rabbit holes myself – diving into docs, experimenting, failing, and repeating until I get it working exactly how I want.

Perhaps it’s just a lack of discipline on my side – I should just not use the tools. I’m actively hoping for Gen AI to plateau – which I think is already happening – so people can temper their expectations.

For those who code for a living – I entirely sympathise with you all for the nonsense the industry is going through at the moment.

ArgoCD – Ordering with ApplicationSets

In a previous post, I alluded to the use of ApplicationSets for my homelab deployments. I continue to leverage them, to the point where a single ApplicationSet now manages quite a number of applications:

  • ArgoCD (Itself)
  • Cert-Manager
  • Cilium
  • External-snapshotter
  • Gateway API CRDs
  • Gateway API gateways
  • Homepage
  • Kanboard
  • Kubevirt
  • Longhorn
  • OpenTelemetry Operator
  • Sealed Secrets
  • System Upgrade Controller

The problem was that there was no ordering, dependency management, or concurrency limiting – applications would simply update as and when changes were pushed.

This caused a number of issues, namely:

  1. What if ArgoCD updates while another application is updating, or vice versa?
  2. What if Cilium updates while another application is updating, or vice versa?

This could cause some negative outcomes. To mitigate this, I had a think about how I could group certain applications together, automatically.

Re-arranging the Git Repo

I decided to group applications by directory, reflecting levels of importance:

├── argocd-apps
│   ├── 00-core-infrastructure
│   ├── 01-core-services
│   ├── 02-platform-services
│   └── 03-user-workloads

The directory prefix represents the order, which I reflect in the ApplicationSet:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: addon-applications
  namespace: argocd
spec:
  strategy:
    type: RollingSync
    rollingSync:
      steps:
        - matchExpressions:
            - key: orderLabel
              operator: In
              values: ["00"]
          maxUpdate: 1
        - matchExpressions:
            - key: orderLabel
              operator: In
              values: ["01"]
          maxUpdate: 1
        - matchExpressions:
            - key: orderLabel
              operator: In
              values: ["02"]
        - matchExpressions:
            - key: orderLabel
              operator: In
              values: ["03"]
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - git:
      repoURL: 'https://github.com/David-VTUK/turing-pi-automation.git'
      revision: HEAD
      directories:
        - path: 'argocd-apps/00-core-infrastructure/*'
        - path: 'argocd-apps/01-core-services/*'
        - path: 'argocd-apps/02-platform-services/*'
        - path: 'argocd-apps/03-user-workloads/*'
  template:
    metadata:
      name: '{{ .path.basename }}'
      labels:
        orderLabel: '{{ index (splitList "-" (index (splitList "/" .path.path) 1)) 0 }}'

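The orderLabel template expression is fairly dense. As a rough illustration in plain Go (with hypothetical application directory names – the real paths come from the Git generator), it splits the generated path on "/", takes the second segment (the numbered directory), splits that on "-", and keeps the first piece:

```go
package main

import (
	"fmt"
	"strings"
)

// orderLabel mimics the Sprig expression
//   index (splitList "-" (index (splitList "/" .path.path) 1)) 0
// i.e. take the directory segment after "argocd-apps/" and keep its numeric prefix.
func orderLabel(path string) string {
	segments := strings.Split(path, "/")     // e.g. ["argocd-apps", "00-core-infrastructure", "cilium"]
	parts := strings.Split(segments[1], "-") // ["00", "core", "infrastructure"]
	return parts[0]                          // "00"
}

func main() {
	fmt.Println(orderLabel("argocd-apps/00-core-infrastructure/cilium")) // 00
	fmt.Println(orderLabel("argocd-apps/03-user-workloads/homepage"))    // 03
}
```

Each generated Application therefore carries a two-digit orderLabel derived purely from its parent directory name, which is what the RollingSync steps match on.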
The end result being:

  • Applications with the orderLabel 00 (folder prefix) will apply first
    • Only 1 application can update at a time, preventing conflicting apps from trying to update simultaneously (for example, Cilium and ArgoCD)
  • Applications with the orderLabel 01 (folder prefix) will then apply
    • Only 1 application can update at a time
  • Applications with the orderLabel 02 (folder prefix) will then apply
    • No concurrency limit
  • Applications with the orderLabel 03 (folder prefix) will then apply
    • No concurrency limit

Whenever I add a new application, I have to decide where it sits in my hierarchy:

  1. Core Infrastructure (00)
    • Examples: Cilium, Longhorn
  2. Core Services (01)
    • Examples: ArgoCD, Cert-Manager
  3. Platform Services (02)
    • Examples: OpenTelemetry Operator, Sealed Secrets
  4. User Workloads (03)
    • Examples: Homepage, Kanboard

This also helps on the occasion I need to tear down and rebuild my homelab – dependencies are installed in the correct order.

KubeVirt on ARM64 – CDI Workaround

According to the KubeVirt documentation, CDI (Containerized Data Importer) is not currently supported on ARM64, which is the architecture my Turing RK1 nodes use.

As a workaround, I experimented with writing an image directly to a PVC, which can then be cloned/mounted to a KubeVirt VM. This example dd's a raw Fedora disk image to a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora-workstation-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  volumeMode: Block
---
apiVersion: batch/v1
kind: Job
metadata:
  name: upload-fedora-workstation-job
spec:
  template:
    spec:
      containers:
      - name: writer
        image: fedora:latest
        command: ["/bin/bash", "-c"]
        args:
          - |
            set -e
            echo "[1/3] Installing tools..."
            dnf install -y curl xz
            echo "[2/3] Downloading and decompressing Fedora Workstation image..."
            curl -L https://download.fedoraproject.org/pub/fedora/linux/releases/41/Workstation/aarch64/images/Fedora-Workstation-41-1.4.aarch64.raw.xz | xz -d > /tmp/disk.raw
            echo "[3/3] Writing image to PVC block device..."
            dd if=/tmp/disk.raw of=/dev/vda bs=4M status=progress conv=fsync
            echo "Done writing Fedora Workstation image to PVC!"
        volumeDevices:
        - name: disk
          devicePath: /dev/vda
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        securityContext:
          runAsUser: 0
      restartPolicy: Never
      volumes:
      - name: disk
        persistentVolumeClaim:
          claimName: fedora-workstation-pvc
      - name: tmp
        emptyDir: {}

The PVC can then be mounted to a VM:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-arm-vm
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: my-arm-vm
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: disk0
              disk:
                bus: virtio
      volumes:
        - name: disk0
          persistentVolumeClaim:
            claimName: fedora-workstation-pvc
© 2025 Virtual Thoughts