Virtual Thoughts

Virtualisation, Storage and various other ramblings.


Taking a Modular Approach to my Homelab with Pulumi

Architecture

After reviewing the key components of my lab environment, I translated these into the Pulumi stacks illustrated in the diagram below. Pulumi has a blog post about the benefits of adopting multiple stacks, and I found that organising my homelab this way gives me greater flexibility and structure. I can also use stacks as a “template” to further build out my lab environment, for example repeating the “Tools-Cluster” stack to add additional clusters.

The main objectives are:

  • Create a 3-node K3s cluster utilising vSphere VMs
  • Install MetalLB, Rancher and cert-manager into this cluster
  • Using Rancher, create an RKE2 cluster to accommodate shared tooling services, e.g.:
    • Rancher Monitoring Stack (Prometheus, Grafana, Alertmanager, etc.)
    • HashiCorp Vault
    • etc.

Building

Each stack contains the main Pulumi code and a YAML file that holds variables to influence parameters such as VM names, networking config, etc.

├── rancher-application
│   ├── Assets
│   │   └── metallb
│   │       └── metallb-values.yaml
│   ├── go.mod
│   ├── go.sum
│   ├── main.go
│   ├── Pulumi.dev.yaml
│   └── Pulumi.yaml
├── rancher-management-cluster
│   ├── Assets
│   │   ├── metadata.yaml
│   │   └── userdata.yaml
│   ├── go.mod
│   ├── go.sum
│   ├── main.go
│   ├── Pulumi.dev.yaml
│   └── Pulumi.yaml
└── rancher-tools-cluster
    ├── Assets
    │   └── userdata.yaml
    ├── go.mod
    ├── go.sum
    ├── main.go
    ├── Pulumi.dev.yaml
    └── Pulumi.yaml

Each stack has a corresponding assets directory which contains supporting content for a number of components:

  • Rancher Application – a values.yaml to influence the MetalLB L2 VIP addresses
  • Rancher Management Cluster – Userdata and metadata to send to the created VMs, including bootstrapping K3s
  • Rancher Tools Cluster – Userdata to configure the local registry mirror
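
Before diving into each stack, here is a rough sketch of how the values held in each stack's Pulumi.dev.yaml can be consumed from main.go using the Pulumi config package (the key names below are hypothetical, not the exact ones from my stacks):

import (
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)

// Inside the pulumi.Run(func(ctx *pulumi.Context) error { ... }) body:
conf := config.New(ctx, "")
vmPrefixName := conf.Require("vmPrefixName") // e.g. "k3s-node-"
nodeCount := conf.RequireInt("nodeCount")    // e.g. 3
vmNetwork := conf.Require("vmNetwork")       // e.g. "VM Network"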

Rancher Management Cluster Stack

This is the first stack that needs to be created and is relatively simple in terms of its purpose. The metadata.yaml contains a template for defining cloud-init metadata for the nodes:

network:
  version: 2
  ethernets:
    ens192:
      dhcp4: false
      addresses:
        - $node_ip
      gateway4: $node_gateway
      nameservers:
        addresses:
          - $node_dns
local-hostname: $node_hostname
instance-id: $node_instance
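
A minimal sketch of how these placeholders can be substituted per node and base64-encoded before being handed to the VM (the values shown are illustrative):

metaData, err := os.ReadFile("Assets/metadata.yaml")
if err != nil {
	return err
}

// Replace the per-node placeholders, then base64-encode the result for the
// guestinfo.metadata ExtraConfig key.
replacer := strings.NewReplacer(
	"$node_ip", "172.16.10.21/24",
	"$node_gateway", "172.16.10.1",
	"$node_dns", "172.16.10.1",
	"$node_hostname", "k3s-node-1",
	"$node_instance", "k3s-node-1",
)
metaDataEncoded := base64.StdEncoding.EncodeToString([]byte(replacer.Replace(string(metaData))))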

userdata.yaml contains K3s-specific configuration pertaining to my local registry mirror, as well as a placeholder for the K3s bootstrapping process, $runcmd.

#cloud-config
write_files:
  - path: /etc/rancher/k3s/registries.yaml
    content: |
      mirrors:
        docker.io:
          endpoint:
            - "http://172.16.10.208:5050"
runcmd:
  - $runcmd

Creating the VMs leverages the existing vSphere Pulumi provider, seeding the nodes with cloud-init userdata/metadata, which also bootstraps K3s.

userDataEncoded := base64.StdEncoding.EncodeToString([]byte(strings.Replace(string(userData), "$runcmd", k3sRunCmdBootstrapNode, -1)))

vm, err := vsphere.NewVirtualMachine(ctx, vmPrefixName+strconv.Itoa(i+1), &vsphere.VirtualMachineArgs{
	Memory:         pulumi.Int(6144),
	NumCpus:        pulumi.Int(4),
	DatastoreId:    pulumi.String(datastore.Id),
	Name:           pulumi.String(vmPrefixName + strconv.Itoa(i+1)),
	ResourcePoolId: pulumi.String(resourcePool.Id),
	GuestId:        pulumi.String(template.GuestId),
	Clone: vsphere.VirtualMachineCloneArgs{
		TemplateUuid: pulumi.String(template.Id),
	},
	Disks: vsphere.VirtualMachineDiskArray{vsphere.VirtualMachineDiskArgs{
		Label: pulumi.String("Disk0"),
		Size:  pulumi.Int(50),
	}},
	NetworkInterfaces: vsphere.VirtualMachineNetworkInterfaceArray{vsphere.VirtualMachineNetworkInterfaceArgs{
		NetworkId: pulumi.String(network.Id),
	}},
	ExtraConfig: pulumi.StringMap{
		"guestinfo.metadata.encoding": pulumi.String("base64"),
		"guestinfo.metadata":          pulumi.String(metaDataEncoded),
		"guestinfo.userdata.encoding": pulumi.String("base64"),
		"guestinfo.userdata":          pulumi.String(userDataEncoded),
	},
})
if err != nil {
	return err
}

The first node initiates the K3s cluster creation process. Subsequent nodes have their $runcmd manipulated by identifying the first node’s IP address and using it to join the cluster:

userDataEncoded := vms[0].DefaultIpAddress.ApplyT(func(ipaddress string) string {
	runcmd := fmt.Sprintf(k3sRunCmdSubsequentNodes, ipaddress)
	return base64.StdEncoding.EncodeToString([]byte(strings.Replace(string(userData), "$runcmd", runcmd, -1)))
}).(pulumi.StringOutput)

vm, err := vsphere.NewVirtualMachine(ctx, vmPrefixName+strconv.Itoa(i+1), &vsphere.VirtualMachineArgs{
	Memory:         pulumi.Int(6144),

Rancher Application Stack

This stack makes extensive use of the (currently experimental) Helm Release resource, as well as the cert-manager package from the Pulumi Registry.

For example, creating the MetalLB ConfigMap based on the aforementioned asset file:

		metallbConfigmap, err := corev1.NewConfigMap(ctx, "metallb-config", &corev1.ConfigMapArgs{
			Metadata: &metav1.ObjectMetaArgs{
				Namespace: metallbNamespace.Metadata.Name(),
			},
			Data: pulumi.StringMap{
				"config": pulumi.String(metallbConfig),
			},
		})

And the Helm release:

		_, err = helm.NewRelease(ctx, "metallb", &helm.ReleaseArgs{
			Chart:     pulumi.String("metallb"),
			Name:      pulumi.String("metallb"),
			Namespace: metallbNamespace.Metadata.Name(),
			RepositoryOpts: helm.RepositoryOptsArgs{
				Repo: pulumi.String("https://charts.bitnami.com/bitnami"),
			},
			Values: pulumi.Map{"existingConfigMap": metallbConfigmap.Metadata.Name()},
		})

And for Rancher:

		_, err = helm.NewRelease(ctx, "rancher", &helm.ReleaseArgs{
			Chart:     pulumi.String("rancher"),
			Name:      pulumi.String("rancher"),
			Namespace: rancherNamespace.Metadata.Name(),
			RepositoryOpts: helm.RepositoryOptsArgs{
				Repo: pulumi.String("https://releases.rancher.com/server-charts/latest"),
			},
			Values: pulumi.Map{
				"hostname":           pulumi.String(rancherUrl),
				"ingress.tls.source": pulumi.String("secret"),
			},
			Version: pulumi.String(rancherVersion),
		}, pulumi.DependsOn([]pulumi.Resource{certmanagerChart, rancherCertificate}))

As I used an existing secret for my TLS certificate, I had to create a cert-manager Certificate object, for which there are a number of options that I experimented with:

1. Read a file

Similar to the MetalLB config, a file containing the YAML for the custom resource could be read in. Although this was a feasible approach, I wanted something less error-prone.
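
A sketch of that approach using the Pulumi Kubernetes provider's YAML support (the file path is hypothetical):

import (
	"github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/yaml"
)

// Apply a pre-written Certificate manifest straight from disk.
rancherCertificate, err := yaml.NewConfigFile(ctx, "rancher-cert", &yaml.ConfigFileArgs{
	File: "Assets/rancher-certificate.yaml",
})
if err != nil {
	return err
}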

2. Use the API extension type

The Pulumi Kubernetes provider enables provisioning of the NewCustomResource type. For my requirements this is an improvement over simply reading a YAML file; however, anything beyond the resource’s metadata isn’t strongly typed.

rancherCertificate, err := apiextensions.NewCustomResource(ctx, "rancher-cert", &apiextensions.CustomResourceArgs{
	ApiVersion: pulumi.String("cert-manager.io/v1"),
	Kind:       pulumi.String("Certificate"),
	Metadata: &metav1.ObjectMetaArgs{
		Name:      pulumi.String("tls-rancher-ingress"),
		Namespace: pulumi.String(rancherNamespaceName),
	},
	OtherFields: kubernetes.UntypedArgs{
		"spec": map[string]interface{}{
			"secretName": "tls-rancher-ingress",
			"commonName": "rancher.virtualthoughts.co.uk",
			"dnsNames":   []string{"rancher.virtualthoughts.co.uk"},
			"issuerRef": map[string]string{
				"name": "letsencrypt-staging",
				"kind": "ClusterIssuer",
			},
		},
	},
}, pulumi.DependsOn([]pulumi.Resource{certmanagerChart, certmanagerIssuers}))

3. Use crd2pulumi

crd2pulumi generates typed CustomResources from Kubernetes CustomResourceDefinitions. I took the cert-manager CRDs, ran them through this tool, uploaded the output to a repo and repeated the above process:

import (
	certmanagerresource "github.com/david-vtuk/cert-manager-crd-types/types/certmanager/certmanager/v1"
	...
	...
)

rancherCertificate, err := certmanagerresource.NewCertificate(ctx, "tls-rancher-ingress", &certmanagerresource.CertificateArgs{
	ApiVersion: pulumi.String("cert-manager.io/v1"),
	Kind:       pulumi.String("Certificate"),
	Metadata: &metav1.ObjectMetaArgs{
		Name:      pulumi.String("tls-rancher-ingress"),
		Namespace: pulumi.String(rancherNamespaceName),
	},
	Spec: &certmanagerresource.CertificateSpecArgs{
		CommonName: pulumi.String(rancherUrl),
		DnsNames:   pulumi.StringArray{pulumi.String(rancherUrl)},
		IssuerRef: certmanagerresource.CertificateSpecIssuerRefArgs{
			Kind: leProductionIssuer.Kind,
			Name: leProductionIssuer.Metadata.Name().Elem(),
		},
		SecretName: pulumi.String("tls-rancher-ingress"),
	},
})

Much better!

Tools Cluster Stack

Comparatively, this is the simplest of all the stacks. Using the Rancher2 Pulumi package makes it pretty trivial to build out new clusters and install apps:

_, err = rancher2.NewClusterV2(ctx, "tools-cluster", &rancher2.ClusterV2Args{
	CloudCredentialSecretName: cloudcredential.ID(),
	KubernetesVersion:         pulumi.String("v1.21.6+rke2r1"),
	Name:                      pulumi.String("tools-cluster"),
	//DefaultClusterRoleForProjectMembers: pulumi.String("user"),
	RkeConfig: &rancher2.ClusterV2RkeConfigArgs{
		.........
	},
})

monitoring, err := rancher2.NewAppV2(ctx, "monitoring", &rancher2.AppV2Args{
	ChartName: pulumi.String("rancher-monitoring"),
	ClusterId: cluster.ClusterV1Id,
	Namespace: pulumi.String("cattle-monitoring-system"),
	RepoName:  pulumi.String("rancher-charts"),
}, pulumi.DependsOn([]pulumi.Resource{clusterSync}))

Creating Kubernetes Clusters with Rancher and Pulumi

tldr; Here is the code repo

Intro

My job at SUSE (via Rancher) involves hosting a lot of demos, product walk-throughs and various other activities that require spinning up tailored environments on demand. To facilitate this, I previously leaned on Terraform and ended up curating a list of individual scripts that I have to manage separately, as each addresses a specific use case.

This approach reached a point where it became difficult to manage. Ideally, I wanted an IaC environment that catered for:

  • Easy, in-code looping (i.e. for and range)
  • “Proper” condition handling, i.e. if monitoring == true, install monitoring, versus the slightly awkward HCL equivalent of repurposing count as a pseudo-replacement for condition handling
  • Influencing what’s installed via config options/vars
  • Complete end-to-end creation of cluster objects; in my example, create:
    • AWS EC2 VPC
    • AWS Subnets
    • AWS AZs
    • AWS IGW
    • AWS Security Group
    • 1x Rancher-provisioned EC2 cluster
    • 3x single-node K3s clusters used for Fleet

Architectural Overview

Pulumi addresses these requirements pretty comprehensively. Additionally, I can re-use existing logic from my Terraform code, as the Rancher2 Pulumi provider is based on the Terraform implementation, while also leveraging Go’s tools and features to build my environment.

Code Tour – Core

The core objects are created directly, using types from the Pulumi packages:

VPC:

// Create AWS VPC
vpc, err := ec2.NewVpc(ctx, "david-pulumi-vpc", &ec2.VpcArgs{
	CidrBlock:          pulumi.String("10.0.0.0/16"),
	Tags:               pulumi.StringMap{"Name": pulumi.String("david-pulumi-vpc")},
	EnableDnsHostnames: pulumi.Bool(true),
	EnableDnsSupport:   pulumi.Bool(true),
})

You will notice some interesting types in the above, such as pulumi.Bool and pulumi.String. The reason for this is that we need to treat cloud deployments as asynchronous operations. Some values we know ahead of time (expose port 80); others will only be known at runtime (the ID of a VPC, as below). These Pulumi types facilitate this asynchronous paradigm.
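
For example, anything derived from one of those runtime-only values has to be computed inside an Apply callback once the output resolves; a small illustrative sketch:

// The VPC ID only exists after AWS has created the resource, so it is
// consumed via ApplyT rather than read directly.
vpcInfo := vpc.ID().ApplyT(func(id pulumi.ID) string {
	return fmt.Sprintf("Provisioned VPC with ID: %s", id)
}).(pulumi.StringOutput)

ctx.Export("vpcInfo", vpcInfo)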

IGW:

// Create IGW
igw, err := ec2.NewInternetGateway(ctx, "david-pulumi-gw", &ec2.InternetGatewayArgs{
	VpcId: vpc.ID(),
})

Moving on to something slightly more complex, such as looping over the availability zones in a region and assigning a subnet to each:

// Get the list of AZ's for the defined region
azState := "available"
zoneList, err := aws.GetAvailabilityZones(ctx, &aws.GetAvailabilityZonesArgs{
	State: &azState,
})

if err != nil {
	return err
}

//How many AZ's to spread nodes across. Default to 3.
zoneNumber := 3
zones := []string{"a", "b", "c"}

var subnets []*ec2.Subnet

// Iterate through the AZ's for the VPC and create a subnet in each
for i := 0; i < zoneNumber; i++ {
	subnet, err := ec2.NewSubnet(ctx, "david-pulumi-subnet-"+strconv.Itoa(i), &ec2.SubnetArgs{
		AvailabilityZone:    pulumi.String(zoneList.Names[i]),
		Tags:                pulumi.StringMap{"Name": pulumi.String("david-pulumi-subnet-" + strconv.Itoa(i))},
		VpcId:               vpc.ID(),
		CidrBlock:           pulumi.String("10.0." + strconv.Itoa(i) + ".0/24"),
		MapPublicIpOnLaunch: pulumi.Bool(true),
	})

	if err != nil {
		return err
	}

	// Keep track of the subnets so instances can be placed in them later
	subnets = append(subnets, subnet)
}

This is repeated for each resource type.
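
For example, the security group referenced later is created in much the same way; a rough sketch (the rules shown are illustrative, not my exact rule set):

sg, err := ec2.NewSecurityGroup(ctx, "david-pulumi-sg", &ec2.SecurityGroupArgs{
	VpcId: vpc.ID(),
	Tags:  pulumi.StringMap{"Name": pulumi.String("david-pulumi-sg")},
	// Illustrative rules only: SSH in, everything out.
	Ingress: ec2.SecurityGroupIngressArray{
		ec2.SecurityGroupIngressArgs{
			Protocol:   pulumi.String("tcp"),
			FromPort:   pulumi.Int(22),
			ToPort:     pulumi.Int(22),
			CidrBlocks: pulumi.StringArray{pulumi.String("0.0.0.0/0")},
		},
	},
	Egress: ec2.SecurityGroupEgressArray{
		ec2.SecurityGroupEgressArgs{
			Protocol:   pulumi.String("-1"),
			FromPort:   pulumi.Int(0),
			ToPort:     pulumi.Int(0),
			CidrBlocks: pulumi.StringArray{pulumi.String("0.0.0.0/0")},
		},
	},
})
if err != nil {
	return err
}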

Code Tour – Config

The config file allows us to store information required by providers (unless using environment variables or something external) and values that influence which resources are created. In particular, I added the following boolean values:

config:
  Rancher-Demo-Env:installCIS: false
  Rancher-Demo-Env:installIstio: false
  Rancher-Demo-Env:installLogging: false
  Rancher-Demo-Env:installLonghorn: false
  Rancher-Demo-Env:installMonitoring: false
  Rancher-Demo-Env:installOPA: false
  Rancher-Demo-Env:installFleetClusters: false

These values directly influence what will be created in my main demo cluster, as well as the individual “Fleet” clusters. Within the main Pulumi code, these values are extracted:

conf := config.New(ctx, "")
installIstio := conf.GetBool("installIstio")
installOPA := conf.GetBool("installOPA")
installCIS := conf.GetBool("installCIS")
installLogging := conf.GetBool("installLogging")
installLonghorn := conf.GetBool("installLonghorn")
installMonitoring := conf.GetBool("installMonitoring")
installFleetClusters := conf.GetBool("installFleetClusters")

Because of this, native condition handling can be leveraged to influence what’s created:

if installIstio {
	_, err := rancher2.NewAppV2(ctx, "istio", &rancher2.AppV2Args{
		ChartName:    pulumi.String("rancher-istio"),
		ClusterId:    cluster.ID(),
		Namespace:    pulumi.String("istio-system"),
		RepoName:     pulumi.String("rancher-charts"),
		ChartVersion: pulumi.String("1.8.300"),
	}, pulumi.DependsOn([]pulumi.Resource{clusterSync}))

	if err != nil {
		return err
	}
}

As this project is much more dynamic in nature, I have a single template which I can tailor to a number of use cases with a huge amount of customisation. One could argue the same could be done in Terraform using count, but I find this method cleaner. My next step is to implement some testing using Go’s native features to further enhance this project.
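
As a rough illustration of what that might look like (assuming the resource-building logic is refactored into a function the test can call), Pulumi's Go SDK supports running a program against mocks:

package main

import (
	"testing"

	"github.com/pulumi/pulumi/sdk/v3/go/common/resource"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

type mocks int

// NewResource returns a fake ID and echoes the inputs back as outputs.
func (mocks) NewResource(args pulumi.MockResourceArgs) (string, resource.PropertyMap, error) {
	return args.Name + "_id", args.Inputs, nil
}

// Call handles provider function invocations (e.g. data source lookups).
func (mocks) Call(args pulumi.MockCallArgs) (resource.PropertyMap, error) {
	return args.Args, nil
}

func TestDemoEnvironment(t *testing.T) {
	err := pulumi.RunErr(func(ctx *pulumi.Context) error {
		// buildEnvironment is a hypothetical refactor of the main program's logic.
		return buildEnvironment(ctx)
	}, pulumi.WithMocks("Rancher-Demo-Env", "dev", mocks(0)))
	if err != nil {
		t.Fatal(err)
	}
}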

Bootstrapping K3s

One challenge I encountered was creating and importing K3s clusters. Currently, only RKE clusters can be created directly from Rancher. To address this, I create the cluster object in Rancher, extract the join command, and pass it together with the K3s install script, so that once K3s has stood up it runs the join command:

if installFleetClusters {
	// create some EC2 instances to install K3s on:
	for i := 0; i < 3; i++ {
		cluster, _ := rancher2.NewCluster(ctx, "david-pulumi-fleet-"+strconv.Itoa(i), &rancher2.ClusterArgs{
			Name: pulumi.String("david-pulumi-fleet-" + strconv.Itoa(i)),
		})

		joincommand := cluster.ClusterRegistrationToken.Command().ApplyString(func(command *string) string {
			getPublicIP := "IP=$(curl -H \"X-aws-ec2-metadata-token: $TOKEN\" -v http://169.254.169.254/latest/meta-data/public-ipv4)"
			installK3s := "curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.19.5+k3s2 INSTALL_K3S_EXEC=\"--node-external-ip $IP\" sh -"
			nodecommand := fmt.Sprintf("#!/bin/bash\n%s\n%s\n%s", getPublicIP, installK3s, *command)
			return nodecommand
		})

		_, err = ec2.NewInstance(ctx, "david-pulumi-fleet-node-"+strconv.Itoa(i), &ec2.InstanceArgs{
			Ami:                 pulumi.String("ami-0ff4c8fb495a5a50d"),
			InstanceType:        pulumi.String("t2.medium"),
			KeyName:             pulumi.String("davidh-keypair"),
			VpcSecurityGroupIds: pulumi.StringArray{sg.ID()},
			UserData:            joincommand,
			SubnetId:            subnets[i].ID(),
		})

		if err != nil {
			return err
		}
	}

}

End result:

     Type                               Name                                  Status       
 +   pulumi:pulumi:Stack                Rancher-Demo-Env-dev                  created      
 +   ├─ rancher2:index:Cluster          david-pulumi-fleet-1                  created      
 +   ├─ rancher2:index:Cluster          david-pulumi-fleet-2                  created      
 +   ├─ rancher2:index:CloudCredential  david-pulumi-cloudcredential          created      
 +   ├─ aws:ec2:Subnet                  david-pulumi-subnet-1                 created      
 +   ├─ aws:ec2:Subnet                  david-pulumi-subnet-0                 created      
 +   ├─ aws:ec2:InternetGateway         david-pulumi-gw                       created     
 +   ├─ aws:ec2:Subnet                  david-pulumi-subnet-2                 created     
 +   ├─ aws:ec2:SecurityGroup           david-pulumi-sg                       created     
 +   ├─ aws:ec2:DefaultRouteTable       david-pulumi-routetable               created     
 +   ├─ rancher2:index:NodeTemplate     david-pulumi-nodetemplate-eu-west-2b  created     
 +   ├─ rancher2:index:NodeTemplate     david-pulumi-nodetemplate-eu-west-2a  created     
 +   ├─ rancher2:index:NodeTemplate     david-pulumi-nodetemplate-eu-west-2c  created     
 +   ├─ aws:ec2:Instance                david-pulumi-fleet-node-0             created     
 +   ├─ aws:ec2:Instance                david-pulumi-fleet-node-2             created     
 +   ├─ aws:ec2:Instance                david-pulumi-fleet-node-1             created     
 +   ├─ rancher2:index:Cluster          david-pulumi-cluster                  created     
 +   ├─ rancher2:index:NodePool         david-pulumi-nodepool-2               created     
 +   ├─ rancher2:index:NodePool         david-pulumi-nodepool-1               created     
 +   ├─ rancher2:index:NodePool         david-pulumi-nodepool-0               created     
 +   ├─ rancher2:index:ClusterSync      david-clustersync                     created     
 +   ├─ rancher2:index:AppV2            opa                                   created     
 +   ├─ rancher2:index:AppV2            monitoring                            created     
 +   ├─ rancher2:index:AppV2            istio                                 created     
 +   ├─ rancher2:index:AppV2            cis                                   created     
 +   ├─ rancher2:index:AppV2            logging                               created     
 +   └─ rancher2:index:AppV2            longhorn                              created     
 
Resources:
    + 29 created

Duration: 19m18s

Roughly 20 minutes to create all of these resources, fully automated, is pretty handy. This example also includes all the addons – opa, monitoring, istio, cis, logging and longhorn.

Writing my first Prometheus exporter and scraping with Rancher

TLDR; Code repo can be found here. Huge thanks to Spencer for the original blog post that helped me, answering some questions over email, and providing a really helpful Prometheus exporter template (in Go) that I used for this example.

The monitoring framework in Rancher 2.5 changed significantly – including, but not limited to, giving us the ability to scrape our own application metrics and create custom Grafana dashboards. The monitoring stack in Rancher is a combination of a number of open-source technologies:

  • Prometheus – For collecting and storing metrics in a time-series database.
  • Grafana – Analytics and visualisation of metrics (e.g. from Prometheus).
  • AlertManager – An extension of Prometheus that enables configuring alerts and routing them to notification, paging, and automation systems.

Prometheus Exporters

Prometheus-native applications expose their own metrics, which can be scraped from an HTTP endpoint. If we want to capture Prometheus metrics from a system that doesn’t natively expose them in this format, we need exporters.

Exporters act as an interpretation layer, taking non-Prometheus formatted metrics and exposing them as such.

Test System – VDSL Modem

My VDSL modem (an EchoLife HG612 with unlocked firmware) does expose some metrics about my connection, but not in a format understood by Prometheus. Accessing /html/status/xdslStatus.asp on my modem returns:

var DSLCfg = new Array(new stDsl("InternetGatewayDevice.WANDevice.1.WANDSLInterfaceConfig","Up","VDSL2","","8852","42044","0","0","8852","40780","0","222","62","134","62","134","Unknown Mode"),null); var DSLStats = new Array(new stStats("InternetGatewayDevice.WANDevice.1.WANDSLInterfaceConfig.Stats.Showtime","90","4294967290","238","127","0","0","32","0","18","0","0","0"),null); var DslUpTime = "0"; var time = 0;

This gives me some information about my connection. After some quick Googling, I found a reference table that defines what each of these fields relates to.

Writing the Exporter

As the aforementioned metrics aren’t in a format Prometheus can understand, I need to write an exporter. Prometheus expects to scrape HTTP endpoints, so writing one in Go is quite trivial. Prometheus provides packages you can leverage to help write your own, which I’ve used as part of the HTTP handler.

func main() {
	//Kick off collector in background
	go collector.Collect()

	//This section will start the HTTP server and expose
	//any metrics on the /metrics endpoint.
	http.Handle("/metrics", promhttp.Handler())
	log.Info("Beginning to serve on port :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}

I wrote my own collector package, which also imports the Prometheus packages:

package collector

import (
	"github.com/prometheus/client_golang/prometheus"
...
...
)

// Stats get reset after disconnect, hence the use of Gauge Type
type myMetrics struct {
	UpstreamCurrRate      prometheus.Gauge
	DownstreamCurrRate    prometheus.Gauge
	UpstreamCurrRate2     prometheus.Gauge
	DownstreamCurrRate2   prometheus.Gauge
	UpstreamMaxRate       prometheus.Gauge
...
...

For tidiness, I decided to encapsulate all my metrics in a single instance of a myMetrics struct. Alternatively, you could simply store these in individual variables, but with the number of metrics this collects, a struct seemed the more appropriate approach.

The Prometheus package also exposes several metric types; Gauge is heavily used here because, over time, these values can fluctuate or even reset to 0.
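
A brief sketch of how those gauges can be created and registered with the default registry so the promhttp handler can expose them (the metric names here are illustrative):

func newMetrics() *myMetrics {
	m := &myMetrics{
		UpstreamCurrRate: prometheus.NewGauge(prometheus.GaugeOpts{
			Name: "vdsl_upstream_curr_rate_kbps",
			Help: "Current upstream sync rate in kbps.",
		}),
		DownstreamCurrRate: prometheus.NewGauge(prometheus.GaugeOpts{
			Name: "vdsl_downstream_curr_rate_kbps",
			Help: "Current downstream sync rate in kbps.",
		}),
		// ...remaining gauges follow the same pattern
	}

	// Registering the gauges makes them visible on the /metrics endpoint.
	prometheus.MustRegister(m.UpstreamCurrRate, m.DownstreamCurrRate)
	return m
}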

The bulk of the collector package performs the following:

  • Makes an HTTP call to the VDSL modem’s stats page
  • Uses a regex to extract the quoted values
  • Calls a helper function to convert these into float64 – which is what the prometheus.Gauge type expects

// Form a regex to extract all quoted strings
re := regexp.MustCompile("\"(.*?)\"")
extractedValues := re.FindAll(bodyBytes, -1)

modemMetrics.UpstreamCurrRate.Set(convertToFloat(extractedValues[4]))
modemMetrics.DownstreamCurrRate.Set(convertToFloat(extractedValues[5]))
modemMetrics.UpstreamCurrRate2.Set(convertToFloat(extractedValues[6]))
modemMetrics.DownstreamCurrRate2.Set(convertToFloat(extractedValues[7]))
modemMetrics.UpstreamMaxRate.Set(convertToFloat(extractedValues[8]))
modemMetrics.DownstreamMaxRate.Set(convertToFloat(extractedValues[9]))
modemMetrics.UpstreamNoiseMargin.Set(convertToFloat(extractedValues[10]))
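
The convertToFloat helper isn't shown above; a minimal sketch of what it could look like (the actual implementation in the repo may differ):

// convertToFloat strips the quotes captured by the regex and parses the
// remainder as a float64, defaulting to 0 if parsing fails.
func convertToFloat(value []byte) float64 {
	trimmed := strings.Trim(string(value), "\"")
	parsed, err := strconv.ParseFloat(trimmed, 64)
	if err != nil {
		return 0
	}
	return parsed
}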

Running this code and navigating to http://localhost:8080/metrics shows the metrics:

Packaging in a Container

In the aforementioned repo, a GitHub CI job kicks off on every push to package this application inside a Docker container, which makes it easy to deploy to Kubernetes.

Scraping with Rancher

To scrape additional metrics within Rancher (after installing the Monitoring chart), we can define additional ServiceMonitor objects to specify what needs to be scraped. In this example, I created a simple Deployment object for my exporter container, running inside a Pod and exposed by a Service of type ClusterIP.

We can then inspect the metrics in Prometheus:

And visualise in Grafana:
