Virtual Thoughts

Virtualisation, Storage and various other ramblings.


Container Packet Inspection with NSX-T

For troubleshooting (or just being a bit nosey) we have a number of tools that allow us to inspect the traffic between two endpoints. When it comes to containers however, our approach has to be adjusted slightly. When using NSX-T as a CNI, we have some of these tools available to us out of the box.

App Overview

For this example, I’ve deployed a standard WordPress deployment consisting of a frontend (web) pod and a backend (DB) pod. The objective is to identify and capture the traffic between these pods.
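
It’s also worth noting which worker node the pods landed on and what their IP addresses are, as we’ll need both later (for selecting the source vNIC and for building the capture filter). The namespace below is hypothetical – adjust it for wherever WordPress is deployed:

kubectl get pods -n wordpress -o wide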

Configure NSX-T Port Mirroring

Navigate to the “Port Mirroring Session” section in NSX-T via “Advanced Networking & Security” and click “Add”

Define a Local SPAN session:

Define the transport node (source ESXi host)

Select the physical NIC(s) that participate in the N-VDS configuration we want to capture packets from.

Important: Enable “Encapsulated Packet” (disabled by default) so we can inspect the underlying Geneve overlay packets. However, the NIC we mirror to must support the increased overall packet size, so adjust the MTU of the NIC on the Wireshark host accordingly.
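
On a Linux capture host this is a quick tweak with ip link – a sketch, assuming the capture interface is ens192 (hypothetical) and allowing headroom for ~1600-byte mirrored frames:

sudo ip link set dev ens192 mtu 1700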

Select the source vNIC to capture from – this is the NIC of the Kubernetes worker node VM.

Finally, select the destination for the mirrored traffic. For this example, it’s a NIC on a VM with Wireshark installed.

Traffic Inspection

With all the hard work out of the way, we can simply provide a filter in Wireshark to show only packets originating from my WordPress web pod and terminating at the WordPress DB pod:
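
For example, assuming the web pod is 172.16.1.2 and the DB pod is 172.16.1.3 (hypothetical addresses – substitute the values from kubectl get pods -o wide), a display filter along these lines does the job, with 3306 being the default MySQL port:

ip.src == 172.16.1.2 && ip.dst == 172.16.1.3 && tcp.port == 3306

Because Wireshark dissects the Geneve encapsulation, the inner pod addresses are matched even though the mirrored frames themselves are addressed between VTEPs.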

Notes:

  • The overall frame size of 1600 bytes further validates the requirement for the traffic-mirroring destination to have its interface MTU configured accordingly.
  • Because we previously enabled the mirroring of encapsulated packets, we can see the overlay information.
  • We can see the entire packet structure consisting of:
    • Outer Ethernet (NSX-T Transport Node MAC)
    • Outer IP (NSX-T VTEP IP)
    • Geneve UDP
    • Geneve Data
    • Inner Ethernet (Pod Ethernet addresses)
    • Inner IP (Pod IP addresses)
    • Inner TCP (MySQL connection)
    • Inner Payload (MySQL query) – Highlighted

Step by Step – Istio up and running

Service Mesh is a pretty hot topic in the Kubernetes ecosystem currently, and I wanted to get it up and running in my own lab environment. Istio’s documentation has a pre-baked solution to demonstrate some of its capabilities (a book app, if memory serves me correctly), but I wanted to deploy my own app to get more “hands-on” experience with the tech, even if it’s only very basic to start with.

Install Istio

There are a number of prerequisite steps that need to be satisfied prior to installing Istio. These are specific to my environment; yours may differ.

Install the Helm client

sudo snap install helm --classic

Grab Istio (1.2.0 in this example)

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.2.0 sh -
cd istio-1.2.0/

Create the Helm service account (named “tiller”)

kubectl apply -f install/kubernetes/helm/helm-service-account.yaml

Initialise Helm using the service account specified in the previous step

helm init --service-account tiller

Create a namespace to accommodate the Istio components

kubectl create ns istio-system

Initialise Istio into the aforementioned namespace:

helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system

Monitor the state of the pods – it will take some time for them to complete, as these jobs create the CRDs required for Istio

kubectl get pods -n istio-system
NAME                      READY   STATUS              RESTARTS   AGE
istio-init-crd-10-t82m6   0/1     ContainerCreating   0          95s
istio-init-crd-11-42622   0/1     ContainerCreating   0          95s
istio-init-crd-12-65m5v   0/1     ContainerCreating   0          95s

Install Istio into the aforementioned namespace

helm install install/kubernetes/helm/istio --name istio --namespace istio-system
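
As with istio-init, give the control-plane pods a few minutes to reach the Running state before continuing (the -w flag watches the output until interrupted):

kubectl get pods -n istio-system -w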

Configure a namespace for automatic sidecar injection

By this point, we have the internal foundations of Istio in place, but we’re not yet leveraging them. One of the fundamental building blocks of Istio is the pod sidecar. Sidecars act as the data plane, facilitating a lot of the features we want to leverage from Istio.

The overall architecture of an Istio-based application.

Istio doesn’t do this automatically, out of the box, for every pod deployed into an environment; instead, it will inject sidecars into pods deployed into namespaces that have the istio-injection=enabled label set.

kubectl create ns app-with-injection
namespace/app-with-injection created
kubectl label namespace app-with-injection istio-injection=enabled
namespace/app-with-injection labeled
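
The label can be confirmed with the -L flag, which displays it as an extra column:

kubectl get namespace app-with-injection -L istio-injection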

We can validate this by creating a pod in this namespace:

kubectl run nginx -n app-with-injection --image nginx

And checking the pod contents (notice how this pod has two containers):

kubectl get pods -n app-with-injection
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7cdbd8cdc9-96mbz   2/2     Running   0          52s

The proxy sidecar:
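
Listing the container names in the pod spec shows the injected proxy running alongside nginx (pod name taken from the output above):

kubectl get pod nginx-7cdbd8cdc9-96mbz -n app-with-injection -o jsonpath='{.spec.containers[*].name}'

This returns something along the lines of: nginx istio-proxy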

Environment Anatomy

The diagram below shows how my test environment is set up.

Key considerations:

  • The Istio gateway will reside on the edge
  • 80% of all traffic will be routed to v1 of my web application
  • 20% of all traffic will be routed to v2 of my web application

The application manifest can be found at https://raw.githubusercontent.com/David-VTUK/istioexample/master/webapp.yaml
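
Deploying it into the injection-enabled namespace is a standard kubectl apply (assuming the manifest doesn’t pin its own namespace):

kubectl apply -n app-with-injection -f https://raw.githubusercontent.com/David-VTUK/istioexample/master/webapp.yaml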

To accomplish this we need to implement two key objects:

Gateway

This is our entry point into our application. By default, Istio deploys an ingress gateway service, and we must note its external IP:

kubectl get svc -n istio-system
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP                
istio-citadel            ClusterIP      10.100.200.47                        
istio-galley             ClusterIP      10.100.200.149                       
istio-ingressgateway     LoadBalancer   10.100.200.244   10.10.20.150,100.64.80.1   
istio-pilot              ClusterIP      10.100.200.170                       
istio-policy             ClusterIP      10.100.200.3                        
istio-sidecar-injector   ClusterIP      10.100.200.169                   
istio-telemetry          ClusterIP      10.100.200.141                      
prometheus               ClusterIP      10.100.200.238                     

We configure the Gateway by deploying a gateway manifest file:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"              

  • Kind: The type of object. Gateway is a CRD (Custom Resource Definition) that Istio implements
  • Selector: What this applies to, in this case the default ingress gateway
  • Ports: Which ports we want to listen on at the external IP address, together with a name and protocol
  • Hosts: We can implement layer 7 load balancing on the edge, but as I’ll be testing this out via IP address, “*” will suffice. In production, this would likely be the FQDN of an external-facing website

VirtualService

The Gateway object defines the entry point into the cluster, but we have yet to tell the gateway where to route traffic. This is where the VirtualService object type comes in – it’s where we define the routing intelligence.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - route:
    - destination:
        port:
          number: 80
        host: vt-webapp-v1.app-with-injection.svc.cluster.local
      weight: 80
    - destination:
        port:
          number: 80
        host: vt-webapp-v2.app-with-injection.svc.cluster.local
      weight: 20      

What the above effectively does is listen for all HTTP requests (hence the “*” under “hosts”) and route 80% of traffic to v1 of the web app and 20% to v2, by directing traffic at the respective services.
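
Both objects are applied like any other Kubernetes manifest (the filenames here are hypothetical), and we can confirm the custom resources exist afterwards:

kubectl apply -f http-gateway.yaml
kubectl apply -f demoapp-virtualservice.yaml

kubectl get gateway,virtualservice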

Testing

The “WebApp” is pretty simple – it displays one of the following, depending on the version:

What we should now see from accessing the external IP is traffic being split across both services via an 80/20 split:

Out of 10 curl commands, 8 were routed to v1 of my app and 2 were routed to v2.
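
A loop like the following reproduces that test, assuming the app returns a version identifier in its response body (10.10.20.150 being the external IP from the earlier service output):

for i in {1..10}; do curl -s http://10.10.20.150/; echo; done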

Conclusion

Admittedly, this is an extremely simple example of one of Istio’s simpler use cases, but as I’m still learning, I think it’s a decent start, and I hope others find it useful.

Removing Unused PKS LoadBalancer IP allocations in NSX-T

I’m constantly building, destroying, breaking and tinkering in my lab, which unfortunately can leave behind a bit of a technical mess. Most recently I’ve been doing some messing around in PKS and consequently had a load of load balancer VIPs no longer in use due to clusters being destroyed and/or recreated.

Unfortunately, there’s no way to remove these in the UI (for now, at least), but you can remove them using the API:


curl -k -u <NSX-USERNAME>:<NSX-PASSWORD> -X POST 'https://<NSX-IP-OR-FQDN>/api/v1/pools/ip-pools/<POOL-ID>?action=RELEASE' -H "X-Allow-Overwrite: true" -d '{"allocation_id":"10.10.10.2"}' -H "Content-Type: application/json"

This is great for the odd IP, but I had tens of IP addresses that needed releasing. A quick and easy Bash script to the rescue!


for i in {2..254}
do
curl -k -u <NSX-USERNAME>:<NSX-PASSWORD> -X POST 'https://<NSX-IP-OR-FQDN>/api/v1/pools/ip-pools/<POOL-ID>?action=RELEASE' -H "X-Allow-Overwrite: true" -d '{"allocation_id":"10.10.10.'"$i"'"}' -H "Content-Type: application/json"
done

Modify the for loop for your environment. I typically have .1 as my default gateway, hence starting the loop at 2.

Note: After releasing an IP, it seems to take a while before it’s reflected in the UI, so give it a few minutes.
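
To see what’s still allocated before (or after) running the script, the pool’s allocations can also be listed via the API – the path below is from memory, so verify it against the NSX-T API guide for your version:

curl -k -u <NSX-USERNAME>:<NSX-PASSWORD> 'https://<NSX-IP-OR-FQDN>/api/v1/pools/ip-pools/<POOL-ID>/allocations'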
