Virtualisation, Storage and various other ramblings.

Author: David

Writing my first Prometheus exporter and scraping with Rancher

TL;DR: The code repo can be found here. Huge thanks to Spencer for the original blog post that helped me, for answering some questions over email, and for providing a really helpful Prometheus exporter template (in Go) that I used for this example.

The monitoring framework in Rancher 2.5 changed significantly – including, but not limited to, giving us the ability to scrape our own application metrics and create custom Grafana dashboards. The monitoring stack in Rancher combines a number of open-source technologies:

  • Prometheus – For collecting and storing metrics in a time-series database.
  • Grafana – For analytics and visualisation of metrics (i.e. from Prometheus).
  • AlertManager – An extension of Prometheus that enables configuring alerts and routing them to notification, paging, and automation systems.

Prometheus Exporters

Prometheus-native applications expose their own metrics, which can be scraped from an HTTP endpoint. If we want to capture Prometheus metrics from a system that doesn’t natively export them in this format, we need Exporters.

Exporters act as an interpretation layer, taking non-Prometheus formatted metrics and exposing them as such.
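
For illustration, this is roughly what the text-based exposition format Prometheus scrapes looks like (the metric name below is hypothetical, loosely based on the modem stats covered later):

# HELP vdsl_upstream_curr_rate Current upstream sync rate in kbps
# TYPE vdsl_upstream_curr_rate gauge
vdsl_upstream_curr_rate 8852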

Test System – VDSL Modem

My VDSL modem (an EchoLife HG612 with unlocked firmware) does expose some metrics about my connection, but not in a format understood by Prometheus. Accessing /html/status/xdslStatus.asp on my modem returns:

var DSLCfg = new Array(new stDsl("InternetGatewayDevice.WANDevice.1.WANDSLInterfaceConfig","Up","VDSL2","","8852","42044","0","0","8852","40780","0","222","62","134","62","134","Unknown Mode"),null);
var DSLStats = new Array(new stStats("InternetGatewayDevice.WANDevice.1.WANDSLInterfaceConfig.Stats.Showtime","90","4294967290","238","127","0","0","32","0","18","0","0","0"),null);
var DslUpTime = "0";
var time = 0;

This response gives me some information about my connection. After some quick Googling, I found a reference table that defines what each of these fields relates to.

Writing the Exporter

As the aforementioned metrics aren’t in a format Prometheus can understand, I need to write an Exporter. Prometheus expects to scrape HTTP-based endpoints, so writing one in Go is quite trivial. Prometheus provides client packages you can leverage to help write your own – which I’ve used as part of the HTTP handler.

package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
	log "github.com/sirupsen/logrus" // provides log.Info / log.Fatal
	// The local collector package's import path depends on the module name.
)

func main() {
	// Kick off the collector in the background.
	go collector.Collect()

	// Start the HTTP server and expose any registered metrics
	// on the /metrics endpoint.
	http.Handle("/metrics", promhttp.Handler())
	log.Info("Beginning to serve on port :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}

I wrote my own collector package, which also imports the Prometheus client packages:

package collector

import (
	"github.com/prometheus/client_golang/prometheus"
...
...
)

// Stats get reset after a disconnect, hence the use of the Gauge type
type myMetrics struct {
	UpstreamCurrRate      prometheus.Gauge
	DownstreamCurrRate    prometheus.Gauge
	UpstreamCurrRate2     prometheus.Gauge
	DownstreamCurrRate2   prometheus.Gauge
	UpstreamMaxRate       prometheus.Gauge
...
...

For tidiness, I decided to encapsulate all my metrics in a single instance of a myMetrics struct. Alternatively, you could store these in individual variables, but given the number of metrics this exporter collects, a single struct seemed more appropriate.

The Prometheus package also exposes several metric types. Gauge is used heavily here because, over time, these values can fluctuate or even reset to 0.
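
As a minimal sketch of how this works (the metric name and help text here are my own, not necessarily those used in the repo), creating and registering a single Gauge looks like this:

upstreamCurrRate := prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "vdsl_upstream_curr_rate", // hypothetical metric name
	Help: "Current upstream sync rate reported by the modem.",
})

// Register the metric so promhttp.Handler() will expose it, then
// update it whenever the modem is polled.
prometheus.MustRegister(upstreamCurrRate)
upstreamCurrRate.Set(8852)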

The bulk of the Collector package performs the following:

  • Makes an HTTP call to the VDSL modem's stats page.
  • Uses a regex to extract the quoted values.
  • Calls a helper function to convert these into floats – which is what the prometheus.Gauge type expects.

// Form a regex to extract all quoted strings
re := regexp.MustCompile("\"(.*?)\"")
extractedValues := re.FindAll(bodyBytes, -1)

modemMetrics.UpstreamCurrRate.Set(convertToFloat(extractedValues[4]))
modemMetrics.DownstreamCurrRate.Set(convertToFloat(extractedValues[5]))
modemMetrics.UpstreamCurrRate2.Set(convertToFloat(extractedValues[6]))
modemMetrics.DownstreamCurrRate2.Set(convertToFloat(extractedValues[7]))
modemMetrics.UpstreamMaxRate.Set(convertToFloat(extractedValues[8]))
modemMetrics.DownstreamMaxRate.Set(convertToFloat(extractedValues[9]))
modemMetrics.UpstreamNoiseMargin.Set(convertToFloat(extractedValues[10]))
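
The convertToFloat helper isn't shown above. A minimal sketch of what it might look like (the real implementation in the repo may differ; this assumes the strings and strconv packages are imported):

// convertToFloat strips the quotes captured by the regex and parses
// the remainder as a float64, which prometheus.Gauge expects.
// It returns 0 on a parse failure, as these stats reset anyway.
func convertToFloat(b []byte) float64 {
	s := strings.Trim(string(b), "\"")
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return 0
	}
	return f
}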

Running this code and navigating to http://localhost:8080/metrics shows the metrics.

Packaging in a Container

In the aforementioned repo, a GitHub CI job kicks off on every push to package this application inside a Docker container, which makes it easy to deploy to Kubernetes.

Scraping with Rancher

To scrape additional metrics within Rancher (after installing the Monitoring chart), we can define additional ServiceMonitor objects to specify what needs to be scraped. In this example, I created a simple Deployment object for my Exporter container (running in a Pod), exposed by a Service of type ClusterIP.
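
As a rough sketch (the names, labels and port here are hypothetical – they need to match your Service), such a ServiceMonitor might look like:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vdsl-exporter
  namespace: default
spec:
  selector:
    matchLabels:
      app: vdsl-exporter
  endpoints:
    - port: metrics   # the named port on the Service
      path: /metrics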

We can then inspect the metrics in Prometheus, and visualise them in Grafana.

K3s, Rancher and Pulumi

TL;DR: The repo can be found here. (Be warned: I'm, at best, a hobbyist programmer and certainly not a software engineer in my day job.)

I’ve recently been getting acquainted with Pulumi as an alternative to Terraform for managing my infrastructure. I decided to create a repo that stands up Rancher on a new K3s cluster in my vSphere homelab, all managed by Pulumi, consisting of the following activities:

  • Provision three nodes from a VM template.
  • Use cloud-init as a bootstrapping utility:
    • Install K3s on the first node, which is elected to initialise the cluster.
    • Leverage K3s's Auto-Deploying Manifests feature to install Cert-Manager, Rancher and MetalLB.
  • Join two additional nodes to the cluster to form an HA, embedded-etcd cluster.

The Ingress controller is exposed via a Service of type LoadBalancer, leveraging MetalLB.

After completion, Pulumi will output the IP address (needed to create a DNS record) and the specified URL:

Outputs:
    Rancher IP (Set DNS): "172.16.10.167"
    Rancher url         : "rancher.virtualthoughts.co.uk"

Why cloud-init?

For this example, I wanted a zero-touch deployment model for the VMs themselves – i.e. no SSHing directly to the nodes to remotely execute commands. cloud-init addresses these requirements by providing a way to seed an instance with configuration data. This Pulumi script leverages it in three ways:

  1. To set the instance (and therefore host) name as part of metadata.yaml (which is subject to string replacement).
  2. To execute a command on boot that initialises the K3s cluster (or joins an existing cluster, for subsequent nodes) as part of userdata.yaml (a rough sketch follows this list).
  3. To install cert-manager, Rancher and MetalLB, also as part of userdata.yaml.
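
As a rough illustration of item 2 (the exact commands, token and flags in the repo may differ), the user data for the first node might contain something like:

#cloud-config
runcmd:
  # Initialise the first K3s server with the embedded datastore.
  - curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server --cluster-init
  # Subsequent nodes would instead join the existing cluster:
  # - curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server --server https://<first-node>:6443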

Reflecting on Using Pulumi

Some of my observations thus far:

  • I really, really like having “proper” conditional handling and looping. I never liked repurposing count in Terraform for awkward conditional handling.
  • Being able to leverage standard libraries from your everyday programming language makes it hugely flexible. An example of this was taking the cloud-init user data and metadata and encoding them in base64 using the encoding/base64 package (see the sketch below).
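
A minimal sketch of that base64 step (the file name and guestinfo keys are assumptions based on the VMware cloud-init datasource, not taken verbatim from the repo):

package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

func main() {
	// Read the rendered cloud-init user data from disk.
	userData, err := os.ReadFile("userdata.yaml")
	if err != nil {
		panic(err)
	}

	// vSphere guestinfo payloads are passed base64-encoded, with a
	// companion *.encoding key declaring the encoding in use.
	encoded := base64.StdEncoding.EncodeToString(userData)

	// In Pulumi, these would be set as extraConfig on the VM resource.
	fmt.Println("guestinfo.userdata =", encoded)
	fmt.Println("guestinfo.userdata.encoding = base64")
}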

Automated deployment of K3s and Rancher on vSphere with Terraform

Previously, my local Rancher installs were based on RKE. However, since K3s is now a supported distribution, I decided to rebuild my environment leveraging it. Additionally, it was a good opportunity to automate the process with Terraform.

TL;DR

https://github.com/David-VTUK/Rancher-K3s-vSphere contains the Terraform code required to do this.

Quick note on K3s with an embedded DB

This installation method is currently experimental. Do not leverage it in production (yet). Towards the end of August 2020, we (Rancher) plan to replace it with embedded etcd, as per the roadmap. I'm a fan of simplicity, so when v1.19 comes out I plan to simply tear down and rebuild my cluster using this Terraform code. However, one could equally modify it to leverage an external DB for a more production-ready setup.

Resources Created

The aforementioned Terraform code will create:

  • A single VM with NGINX installed, acting as a load balancer forwarding TCP 80/443/6443 to the K3s nodes.
  • Three VMs which form the K3s cluster with an embedded DB, the first of which is used to initialise the cluster.
  • Once the cluster is created, Cert-Manager and Rancher are installed, and both are probed for readiness.

Prerequisites

  • Terraform version 0.13.
  • Prior to running this script, a DNS record needs to be created to point at the Loadbalancer IP address, defined in the variable lb_address.
  • The VM template used must have the Cloud-Init Datasource for VMware GuestInfo project installed, which facilitates pulling meta, user, and vendor data from VMware vSphere’s GuestInfo interface. This can be achieved with:
curl -sSL https://raw.githubusercontent.com/vmware/cloud-init-vmware-guestinfo/master/install.sh | sh -

Or use the following Packer Template:

https://github.com/David-VTUK/Rancher-Packer/tree/master/vSphere/ubuntu_2004_cloud_init_guestinfo

Acquire Kubeconfig

  • SSH to one of the K3s nodes.
  • Grab /etc/rancher/k3s/k3s.yaml.
  • Replace server: https://127.0.0.1:6443 with the IP address defined in lb_address.
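
For example (the node address, user, and load balancer IP are placeholders – substitute your own values):

scp david@<k3s-node>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/<lb_address>/' ~/.kube/config
kubectl get nodes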