Virtualisation, Storage and various other ramblings.

Author: David

vRealize Log Insight + PKS Integration

Introduction

In this blog post, we take a look into the integration between PKS and vRealize Log Insight and how this integration benefits the enterprise. As a bit of a recap:

PKS – PKS is a purpose-built, enterprise-level container solution leveraging the capabilities of Kubernetes, BOSH, VMware NSX-T, Harbor and more to deliver a highly available, highly flexible container runtime that operates on a number of cloud platforms, both private and public, including vSphere, AWS, Azure and GCP.

VMware also released VMware Cloud PKS, a fully managed service that combines the technical capabilities of AWS, PKS and Kubernetes, and which can be consumed in a similar fashion to other cloud services.

vRealize Log Insight – vRealize Log Insight is a log management system designed to operate within heterogeneous environments; however, it’s much more than a simple aggregator of logging information. vRealize Log Insight has analytical and trend-identification capabilities which allow operators to gain invaluable insight into the state, health, and events transpiring in the environment. vRealize Log Insight works across physical, virtual and cloud environments.

Containers and Coexistence with VMs

VMs have existed for a long time now. Consequently, there are very mature, battle-hardened tools and software which can be used to monitor a plethora of operating systems, software, components and more. Containers, on the other hand, are relatively new in the enterprise. Although there is an overlap, there are significant differences in the way we monitor and collect logs from VMs and containers. How can this be addressed?

There are a number of ways to monitor a container-based environment. Prometheus and Wavefront come to mind, but for environments that already leverage vRealize Log Insight, we can integrate PKS with it to facilitate a single pane of glass view of logging information from VMs and their underlying infrastructure, as well as containers and their underlying infrastructure.

 

What can we expect PKS to send to Log Insight?

At a high level, the integration between PKS and vRLI facilitates the propagation of the following logs:

  • BOSH jobs
  • Core Kubernetes processes & nodes
  • Core BOSH processes
  • Kubernetes event logs
  • Individual Pod stdout and stderr

The last item is the one I can see real value in. Imagine centralising all stdout and stderr from pods and combining that with the analytics and trend-identification capabilities of vRLI? Pretty interesting. Of course, we’re not that interested in what any individual pod is logging, but if, for example, some new code has been pushed out and tens, hundreds or thousands of pods start logging errors, we can identify, categorise and analyse these pretty easily with vRLI.

 

PKS and vRealize Log Insight in action

Talk is cheap, so let’s crack on.

Log into Ops Manager and select the PKS tile:

 

Select “Logging” from the left and select “yes” under vRLI integration:

Enter the host and SSL settings where applicable in your environment:

Apply the changes:

If you keep an eye on the logs, references to the vRLI configuration will be shown:

- fluentd_vrli_ca_cert: "<redacted>"
- fluentd_vrli_host: "<redacted>"
+ fluentd_vrli_host: "<redacted>"
- fluentd_vrli_rate_limit_msec: "<redacted>"
+ fluentd_vrli_rate_limit_msec: "<redacted>"
- fluentd_vrli_skip_cert_verify: "<redacted>"
+ fluentd_vrli_skip_cert_verify: "<redacted>"
- fluentd_vrli_use_ssl: "<redacted>"
+ fluentd_vrli_use_ssl: "<redacted>"

Next, deploy a cluster in PKS:
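As a minimal sketch, deploying a cluster with the PKS CLI looks something like the following. The API endpoint, credentials, cluster name, hostname and plan here are all placeholders for illustration, not values from the original environment:

```shell
# Authenticate against the PKS API (endpoint and credentials are hypothetical)
pks login -a pks.lab.local -u admin -p 'password' -k

# Create a cluster; the external hostname is what clients use to reach the Kubernetes API
pks create-cluster my-cluster --external-hostname my-cluster.lab.local --plan small

# Check provisioning progress
pks cluster my-cluster
```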

After which, the following “hosts” can be observed, which are, in essence, a reflection of the services within our Kubernetes cluster:

 

I also created an individual pod, named nginx-sleep. Below are the logs that were ingested for this event:
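For reference, a minimal manifest for such a pod might look like the following. This is a hypothetical reconstruction; the actual spec isn’t shown in the original, and the image tag is an assumption:

```yaml
# Hypothetical reconstruction of the nginx-sleep pod used in this test
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sleep
spec:
  containers:
  - name: nginx-sleep
    image: nginx:latest
```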

To validate the stdout capturing, create a pod that writes to stdout:
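As an illustrative sketch (the original manifest isn’t shown; the pod name and image are assumptions), a simple pod that continually writes to stdout could look like:

```yaml
# Hypothetical pod that emits a log line to stdout every few seconds
apiVersion: v1
kind: Pod
metadata:
  name: stdout-logger
spec:
  containers:
  - name: logger
    image: busybox:latest
    command: ["/bin/sh", "-c", "while true; do echo hello from stdout; sleep 5; done"]
```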

 

And check the logs from the pod:
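The pod’s stdout can be read back with kubectl (the pod name here is a placeholder):

```shell
# Stream the pod's stdout/stderr; -f follows the log as new lines arrive
kubectl logs -f <pod-name>
```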

 

And also from Log Insight:

 

Conclusion

vRealize Log Insight provides a compelling platform for log ingestion, and its flexibility to ingest, analyse and interpret logs from physical, virtual and container-based solutions makes it an extremely versatile tool in any admin’s repertoire.

My VCAP6-NV Experience

Preamble (pun intended)

I’ve been eyeing up the VCAP6-NV exam for quite some time now, but due to work and personal projects I’ve not been able to focus on it. Having some time between jobs, I decided to start revising and push myself into taking the exam. At the time of writing, there is still no NV Design exam; therefore, anyone who sits and passes the VCAP6-NV Deploy exam is automatically granted the VCIX6-NV certification.

VMware Certified Implementation Expert 6 – Network Virtualization

I’m not a full-on networking person. Back in the day (~10 years ago) I wanted to be, but the opportunities didn’t exist for me, and I consequently went down a generic sysadmin path until ending up where I am today, primarily focusing on SDDC and cloud technologies. A lot of people who dive into NSX do come from traditional networking backgrounds. I found this quite intimidating; however, the point is you do not have to be some kind of Cisco God to appreciate NSX, or to pass this exam.

Preparation

Read any blog post, forum or reddit post about any VMware-based exam and it won’t take long until someone says something like:

“Read, study, understand and master the contents of the blueprint.”

And it’s absolutely correct. The version of the blueprint for the exam I sat can be found at https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/certification/vmw-vcap6-nv-deploy-3v0-643-guide.pdf

This, as well as vZealand.com’s excellent guide, were my primary study resources, and I prepared by doing the following:

  1. Go over each section of the blueprint via vZealand’s guide in my own lab, following the instructions on the site.
  2. Go over each section of the blueprint without any initial external assistance, assess accuracy after each objective by checking the guide.
  3. Go over each section of the blueprint in a way where I was confident in going over the objectives from practice.

Also, bear in mind the exam is based on NSX 6.2; therefore, it’s a good idea to have a lab running on this version, as there have been a number of significant changes since then.

After I accomplished all three, I felt confident enough to sit the exam.

The exam itself

You’re looking at 205 minutes in total for 23 questions covering the blueprint in its entirety. Without breaking NDA, my observations are:

  • Time management – This is the third VCAP exam I’ve taken, and with each one the time has flown by. It really doesn’t feel like a long time when you’ve finished. I personally found the NSX VCAP exam much more demanding on time than the VCAP-DCV Deploy exam: I didn’t complete all the questions in the NV exam, whereas I had about 30-40 minutes left when I took the DCV exam.
  • Content – I feel the exam was a pretty good reflection of the blueprint, which was fairly well represented.
  • HOL interface – The exam simulation feels very similar to the VMware HOL, including (unfortunately) the latency. The performance for my exam wasn’t great, but wasn’t terrible either.
  • Skip questions you’re not sure on – Time is an expensive commodity in this exam; if you’re struggling with a question, skip it and move on. You may have time to come back to it later. I skipped a couple of questions.

 

Result

I passed, but not by much. But a pass is a pass and I was pretty chuffed. It’s definitely the hardest VCAP exam I’ve taken to date.

 

NSX-T, Kubernetes and Microsegmentation

For the uninitiated, VMware NSX comes in two “flavours”: NSX-V, which is heavily integrated with vSphere, and NSX-T, which is more IaaS agnostic. NSX-T also has more emphasis on facilitating container-based applications, providing a number of features to our container ecosystem. In this blog post, we discuss the microsegmentation capabilities provided by NSX-T in combination with container technology.

What is Microsegmentation?

Prior to software-defined networking, firewall functions were largely centralised, typically manifested as edge devices which were, and still are, good for controlling traffic to and from the datacenter, otherwise known as north-south traffic:

The problem with this model, however, is the lack of control for resources that reside within the datacenter, aka east-west traffic. Thankfully, VMware NSX (-V or -T) can facilitate this, manifested by the distributed firewall.

Because of the distributed firewall, we have complete control over lateral movement within our datacenter. In the example above we can define firewall rules between logical tiers of our application which enforce permitted traffic.

 

But what about containers?

Containers are fundamentally different from virtual machines in both how they’re instantiated and how they’re managed. Containers run on hosts that are usually VMs themselves, so how can we achieve the same level of lateral network security we have with virtual machines, but with containers?

 

Introducing the NSX-T Container Plugin

The NSX-T Container Plugin facilitates the exposure of container pods as NSX-T logical switch ports, and because of this, we can implement microsegmentation rules as well as expose pods to the wider NSX ecosystem, using the same approach we have with virtual machines.

Additionally, we can leverage other NSX-T constructs with our YAML files. For example, we can request load balancers from NSX-T to facilitate our application, which I will demonstrate further on. For this example, I’ve leveraged PKS to facilitate the Kubernetes infrastructure.

Microsegmentation in action

Talk is cheap, so here’s a demonstration of the concepts previously discussed. First, we need a multitier app. For my example, I’m simply using a bunch of nginx images, but with some imagination you can think of more relevant use cases:

Declaring Load balancers

To begin with, I declare two load balancers, one for each tier of my application. Inclusion into these load balancers is determined by labels.


apiVersion: v1
kind: Service
metadata:
  name: web-loadbalancer
  namespace: vt-web
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web-frontend
    tier: frontend
---
apiVersion: v1
kind: Service
metadata:
  name: app-loadbalancer
  namespace: vt-web
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: web-midtier
    tier: midtier
---

Declaring Containers

Next, I define the containers I want to run for this application.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-frontend
  namespace: vt-web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web-frontend
        tier: frontend
    spec:
      containers:
      - name: web-frontend
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-midtier
  namespace: vt-web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web-midtier
        tier: midtier
    spec:
      containers:
      - name: web-midtier
        image: nginx:latest
        ports:
        - containerPort: 80

Logically, this app looks like this:

 

 

Deploying app

david@ubuntu_1804:~/vt-webapp$ kubectl create namespace vt-web
namespace "vt-web" created
david@ubuntu_1804:~/vt-webapp$ kubectl apply -f webappv2.yaml
service "web-loadbalancer" created
service "app-loadbalancer" created
deployment "web-frontend" created
deployment "web-midtier" created

 

Testing Microsegmentation

At this stage, we’re not leveraging the microsegmentation capabilities of NSX-T. To validate this, we can simply do a traceflow between two web-frontend containers over port 80:

 

As expected, traffic between these two containers is permitted. So, let’s change that. In the NSX-T web interface, go to Inventory -> Groups and click “Add”. Give it a meaningful name.

As for membership criteria, we can select the labels we’ve previously defined, namely tier and app.

Click “add”. After which we can validate:

We can then create a firewall rule to block TCP 80 between members of this group:

Consequently, if we run the same traceflow exercise, the traffic is now dropped:

Conclusion

NSX-T provides an extremely comprehensive framework for containerised applications. Given the nature of containers in general, I think containers and microsegmentation are a winning combination for securing these workloads. Dynamic inclusion adheres to the automated mentality of containers, and with very little effort we can implement microsegmentation using a framework that is agnostic – the principles are the same between VMs and containers.


© 2025 Virtual Thoughts
