Recently, I’ve started to embrace Fedora Silverblue (an immutable desktop OS) as my daily driver. One of the issues I encountered was getting the VSCode Flatpak to leverage remote containers via Podman for development.
The following steps are fairly well documented to get started:
flatpak install com.visualstudio.code
flatpak install com.visualstudio.code.tool.podman
Set "dev.containers.dockerPath": "podman-remote" in VSCode settings/json:
However, the following (mandatory) steps took a bit more digging around. In a repo’s .devcontainer.json file, add:
// Extra args to enable compatibility between Flatpak vscode and podman
"runArgs": ["--userns=keep-id"],
"containerUser": "vscode",
"workspaceMount": "source=${localWorkspaceFolder},target=/workspace,type=bind,Z",
"workspaceFolder": "/workspace"
Without doing so, I found my dev container attempting to mount the workspace incorrectly, resulting in an empty workspace view.
Red Hat Openshift Virtualisation provides a platform for running and managing Virtual Machines alongside Containers using a consistent API. It also provides a mechanism for migrating VMs from platforms such as vSphere.
As I have both environments, I wanted to deploy an Openshift Virtualisation environment that mimics my current vSphere setup, so I could migrate Virtual Machines to it.
Existing vSphere Design
Below is a diagram depicting my current vSphere setup. My ESXi hosts are dual-homed with a separation of management (vmkernel) and virtual machine traffic.
vmnic1 is connected to a trunk port accommodating several different VLANs. These are configured as corresponding port groups in the Distributed Switch.
Integrating an Openshift Virtualisation host
Given an Openshift host with the same number of NICs, we can design a similar solution including a test use case:
By default, Openshift creates a bridge (ovs-system) to facilitate cluster networking. To achieve the same level of isolation configured in the vSphere environment, an additional bridge is required. This will be called vlan-trunk and, as the name implies, it will act as a trunk interface for a range of VLAN networks.
Once configured, a Virtual Machine Instance can be created and connected to one of these VLAN networks, residing on the same L2 network as its vSphere-managed VM counterparts.
Configuring the Openshift Node
There are several ways to accomplish this; however, for ease, the NMState Operator can be used to configure host networking in a declarative way.
Once installed, a default NMState object needs to be created.
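A minimal example, matching the operator's documented default (the instance is conventionally named nmstate):
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate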
We can then define an instance of the NodeNetworkConfigurationPolicy object that creates our additional bridge interface and includes a specific NIC as a port:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan-trunk-ens34-policy
spec:
  desiredState:
    interfaces:
      - name: vlan-trunk
        description: Linux bridge with ens34 as a port
        type: linux-bridge
        state: up
        ipv4:
          enabled: false
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: ens34
To validate, run ip addr show on the host:
2: ens33: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
link/ether 00:50:56:bb:e3:c3 brd ff:ff:ff:ff:ff:ff
altname enp2s1
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vlan-trunk state UP group default qlen 1000
link/ether 00:50:56:bb:97:0d brd ff:ff:ff:ff:ff:ff
altname enp2s2
...
653: vlan-trunk: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:50:56:bb:97:0d brd ff:ff:ff:ff:ff:ff
In a similar way to how Distributed Port Groups are created in vSphere, we can create NetworkAttachmentDefinition objects that represent our physical network(s) in software.
The example below is comparable to a Distributed Port Group in vSphere that’s configured to tag traffic with VLAN ID 40. If required, we could repeat this process for each VLAN/Distributed Port Group so we have a 1:1 mapping between the vSphere and Openshift Virtualisation environments.
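A sketch of such an object is shown below. The object name (vlan-40) is illustrative; the cnv-bridge CNI type connects VM interfaces to the vlan-trunk bridge created earlier and tags their traffic with VLAN 40:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-40
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/vlan-trunk
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan-40",
      "type": "cnv-bridge",
      "bridge": "vlan-trunk",
      "vlan": 40
    }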
Once a VM is created and attached to this network, its IP address will be reported to the console after a short period. In my example, I have a DHCP server running on that VLAN, which is how this VM acquired its IP address.
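For reference, the VM attaches to this network by referencing the NetworkAttachmentDefinition by name in its spec. A minimal sketch of the relevant VirtualMachineInstance fragment (the interface and network names are illustrative):
spec:
  domain:
    devices:
      interfaces:
        - name: vlan-40
          bridge: {}
  networks:
    - name: vlan-40
      multus:
        networkName: vlan-40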
We can then test connectivity with ping from another machine, such as a VM running on an ESXi host:
sh-5.1# ping 172.16.40.4
PING 172.16.40.4 (172.16.40.4) 56(84) bytes of data.
64 bytes from 172.16.40.4: icmp_seq=1 ttl=63 time=1.42 ms
64 bytes from 172.16.40.4: icmp_seq=2 ttl=63 time=0.960 ms
64 bytes from 172.16.40.4: icmp_seq=3 ttl=63 time=0.842 ms
64 bytes from 172.16.40.4: icmp_seq=4 ttl=63 time=0.967 ms
64 bytes from 172.16.40.4: icmp_seq=5 ttl=63 time=0.977 ms
By taking this approach, we can gradually start migrating VMs from vSphere to Openshift Virtualisation with minimal disruption, which I will cover in a subsequent post.
With one example explored, I wanted to put a spin on it, and an idea came to mind:
“Can I use eBPF to identify and store the contents of the protocol header for IP packets on a specific interface?”
It’s more of a rhetorical question: of course we can! The code can be found here.
To summarise, the eBPF C program is a little more complicated. It still leverages XDP, but instead of counting the number of packets, it inspects each IP packet, extracts the protocol number, and stores a per-protocol count in a map.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <linux/if_ether.h>
#include <linux/ip.h>

// Per-protocol packet counters, keyed by IP protocol number.
struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __type(key, __u32);
    __type(value, __u64);
    __uint(max_entries, 256); // one entry per possible IP protocol number (0-255)
} protocol_count SEC(".maps");

SEC("xdp")
int get_packet_protocol(struct xdp_md *ctx) {
    void *data_end = (void *)(long)ctx->data_end;
    void *data = (void *)(long)ctx->data;

    // Parse Ethernet header
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end) {
        return XDP_PASS;
    }

    // Check if the packet is an IP packet
    if (eth->h_proto != __constant_htons(ETH_P_IP)) {
        return XDP_PASS;
    }

    // Parse IP header
    struct iphdr *ip = data + sizeof(struct ethhdr);
    if ((void *)(ip + 1) > data_end) {
        return XDP_PASS;
    }

    __u32 key = ip->protocol; // Using IP protocol as the key
    __u64 *count = bpf_map_lookup_elem(&protocol_count, &key);
    if (count) {
        __sync_fetch_and_add(count, 1);
    }

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
The Go application is over 100 lines, so for brevity it can be viewed here.
The eBPF map used to store this can be visualised as a table in which the key represents the IP protocol number and the value counts the number of packets observed for that protocol.
The Go application leverages a helper function to map each protocol number to a name when writing to stdout.
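A minimal sketch of what such a helper might look like (the function name and the set of protocols covered are illustrative, not the application's actual implementation):
package main

import "fmt"

// protocolName converts an IP protocol number into a human-readable name.
func protocolName(proto uint8) string {
	switch proto {
	case 1:
		return "ICMP"
	case 2:
		return "IGMP"
	case 6:
		return "TCP"
	case 17:
		return "UDP"
	default:
		return fmt.Sprintf("Unknown (%d)", proto)
	}
}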
Running the application probes the map and outputs non-zero values and their corresponding keys. It can easily be tested by running the app and generating traffic; note how, after executing ping, the map updates with ICMP traffic.