Virtual Thoughts

Virtualisation, Storage and various other ramblings.


Facilitating multi-SAN Environments

The what, where and why

At the moment I’m involved in a fair amount of project work involving data (VM) migrations between different storage platforms, sometimes even from the same vendor. What seems like quite a simple process can get complicated depending on the design considerations for both storage platforms and the migration criteria – particularly if both platforms need to co-exist for either the short or long term and VM migrations must be performed live.

 

But we have shared nothing vMotion for this, right?

Yes and no. A lot of administrators don’t like to work with multiple SAN environments – and with good reason. Conflicting design considerations (which I will touch on later) can cause incompatibilities depending on your host configuration (think iSCSI port bindings as an example). For some migrations it’s perfectly acceptable to (for example) logically segment ESXi hosts that belong to different SAN environments and simply vMotion across. Once you’ve migrated everything across, reconfigure all hosts to see only the new, shiny storage platform.

However, requirements I’ve received recently from various customers stipulate that both SAN environments must co-exist on all hosts in a supported fashion – some for short-term migrations, others for the mid term (perhaps keeping the old storage environment for test/dev). These clients have a small number of densely populated hosts, and therefore do not want to segment hosts (i.e. cluster A = SAN A, cluster B = SAN B).

So how can we facilitate this?

Assess design considerations for the existing SAN

All storage vendors will (or should) have existing documentation pertaining to best practice for implementing their flavor of storage array. As an example I’ll pick the Dell EqualLogic as I’m quite familiar with it.

The EqualLogic is an active/passive storage device, commonly implemented in VMware environments by leveraging the software iSCSI initiator. We then create either one vSwitch with two VMkernel port groups for iSCSI (each with its own dedicated active NIC and the other unused), or two vSwitches with one VMkernel port group each, again with a dedicated NIC. IP addresses for all initiators and the target reside on the same VLAN/subnet range, so iSCSI port binding is used.
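For reference, a typical port-binding configuration of this kind can be applied from the ESXi shell along the lines of the sketch below. The adapter name (vmhba33), VMkernel interfaces (vmk1/vmk2) and the discovery address are placeholders and will differ per environment:

  # Enable the software iSCSI initiator
  esxcli iscsi software set --enabled=true
  # Bind both dedicated iSCSI VMkernel ports to the software adapter
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
  # Point dynamic discovery at the array's group IP
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.10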

Assess design considerations for the new SAN

Commonplace at the moment are new 10Gb iSCSI SANs being implemented, including new NICs/HBAs in the hosts, new switches and obviously the storage device itself. As an example, I was asked to co-exist an EqualLogic (1Gb) with another storage device (10Gb) that had different design considerations:

The new storage device is an active/active device, with best practice dictating that each storage controller reside on its own VLAN/Subnet range.

Assessing how to configure hosts to support both SAN environments

In the example previously described we have a particular issue:

  • Existing configuration involves iSCSI port binding with all participating initiators and targets in the same VLAN.

This is perfectly common and acceptable with the EqualLogic, but it cannot support the new storage device. Although we technically *could* modify the existing binding to accommodate the new SAN, this will cause issues and is by and large not supported: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2038869. Even if we put the new storage device on the same VLAN as the EqualLogic, it still isn’t a good idea – port binding should really only be used when all targets sit on a single subnet, and including mixed-speed NICs in an iSCSI port binding is a very bad idea.

Simply put, we cannot and should not modify the existing iSCSI port bindings. We can only have one software iSCSI initiator, and when we use port binding it will not use VMkernel port groups outside of that binding.
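If you want to confirm what the current configuration looks like before making any decisions, the existing adapters and the VMkernel ports bound to the software initiator can be listed from the ESXi shell (the adapter name below is an assumption):

  # Show all iSCSI adapters on the host (software, dependent and independent)
  esxcli iscsi adapter list
  # Show which VMkernel ports are bound to the software iSCSI adapter
  esxcli iscsi networkportal list --adapter=vmhba33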

This leaves us with two options:

Option #1 – iSCSI-Independent HBAs

Independent HBAs do not rely upon ESXi for configuration parameters; they’re configured outside of the operating system. This is probably the better way of achieving segmentation between multiple SAN environments: the independent HBAs facilitate traffic to the new storage array, while the existing NICs in the existing iSCSI port binding facilitate traffic to the existing storage array.

Option #2 – iSCSI-Dependent HBAs

I’m honestly not sure if it’s just a demand thing, or if people are finding acceptable levels of performance from software iSCSI implementations (I know I am), but I don’t really come across dedicated independent iSCSI HBAs any more. I’d imagine they represent a bit of a false economy given the limited number of PCI-Express slots in small servers these days.

Converged network adapters, on the other hand, I find very common. For those that are not aware, CNAs are like multi-personality NICs: they’re presented in ESXi as both regular NICs and iSCSI (and/or FCoE) storage adapters sharing the same MAC. They are usually considered iSCSI-dependent adapters, though.
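A quick way to see both personalities of a CNA on a host is to compare the storage adapter and NIC listings – the same card typically appears in both:

  # Storage adapters - a CNA's iSCSI/FCoE function shows up here as a vmhba
  esxcli storage core adapter list
  # Physical NICs - the same card's network function shows up here as a vmnic
  esxcli network nic list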


What we’re able to do here is create VMkernel ports for the new storage array (one for each uplink, since it’s an active/active array) on their own VLANs, IP address ranges, etc., bind one to each of the CNA’s iSCSI adapters, and off we go. We still achieve separation of traffic because the software iSCSI initiator remains untouched, and traffic to the new SAN is carried over different NICs, VMkernel ports, etc.
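As a rough sketch, creating one of these VMkernel ports and binding it to the CNA’s dependent iSCSI adapter might look like this from the ESXi shell – the vSwitch, port group name, VLAN, addressing and adapter/interface names are all assumptions:

  # Port group for the new SAN's first controller, tagged with its own VLAN
  esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-NewSAN-A --vswitch-name=vSwitch2
  esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-NewSAN-A --vlan-id=20
  # VMkernel interface on the new subnet
  esxcli network ip interface add --interface-name=vmk3 --portgroup-name=iSCSI-NewSAN-A
  esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=10.20.0.11 --netmask=255.255.255.0 --type=static
  # Bind the new VMkernel port to the CNA's dependent iSCSI adapter (e.g. vmhba34), leaving the software initiator alone
  # (the port group's active uplink must be the vmnic that corresponds to vmhba34)
  esxcli iscsi networkportal add --adapter=vmhba34 --nic=vmk3

Repeat for the second controller/VLAN, then add the new array’s discovery address to the CNA adapters only.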

It’s important to note here that VMware does support mixing software and hardware iSCSI initiators, but not to the same target.

Option #3 – Swing Host

I’ve heard of people using these before. A swing host is essentially a designated host that’s configured in a way that may not be supported or best practice, but is homed to both SAN environments and is used purely for migration purposes. You vMotion a VM onto this host, do what you need to do with it, and vMotion it off again to a supported host.

It often works, but it’s considered a “quick and dirty” solution to this problem, and is unlikely to receive any official support from either VMware or your storage vendor.

Conclusion

Hopefully this might help others who may be tasked with a similar requirement. It’s not ideal, and as I mentioned before we try not to implement multi-SAN environments where possible. But sometimes we have to. What’s important is to implement something that is supported and stable.

 

In-guest iSCSI to native VMDK

Do we really need in-guest iSCSI volumes?

Well, yes and no.

I’ll admit, the need for VMs with their own iSCSI initiator has decreased with the various improvements made to vSphere and ESXi. However, I would imagine there are still a number of implementations that (justifiably) need this arrangement, and others that don’t. I was recently tasked with eliminating a number of guest-initiated iSCSI disks in favor of using native VMDKs.

I’m sure a lot of VMware admins have either gone through this process, or will find themselves with this task at some point. This post serves as a rough guide to my approach – which doesn’t necessarily mean that it’s the only way to do this, but it worked for me.

Idea #1 – VMware Converter

VMware Converter is an easy piece of software to use: pick a source, pick a destination, modify the properties of the associated disks. Et voilà! However, one of the main considerations when using this is the maintenance window involved. If you’re converting a number of virtual disks, particularly to the same storage array, you’ll need a sizeable disk space overhead, as you may have to essentially mirror all the data before you can delete the source. This also takes time.

Idea #2 – OS Native File Copy to a VMDK

The principle behind this is quite simple. As an example, a file server VM could have an in-guest iSCSI volume holding all of its share data. A VMDK could be created and added to the VM, then we can robocopy/rsync the data across and re-configure sharing, etc., as in the sketch below. As with Idea #1, there are space considerations to factor in, as you’re duplicating data for a short period.
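A minimal robocopy sketch, assuming E: is the in-guest iSCSI volume and F: is the new VMDK-backed volume (drive letters and log path are placeholders):

  rem Mirror the share data, preserving security/timestamps, with a log for verification
  robocopy E:\ F:\ /MIR /COPYALL /ZB /R:1 /W:1 /LOG:C:\Temp\share-migration.log

Be careful with /MIR – it deletes files on the destination that don’t exist on the source, so point it at an empty target volume.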

Idea #3 – Convert the disk to a VMDK

This idea differs from the previous two by converting the drive that currently holds the data into a native VMDK. There’s no need to mirror/duplicate the data, but there’s still a maintenance window involved.

Idea #3 seemed most suitable for me. Duplicating data would take up too much space and put extra strain on my SAN, and should anything go amiss I always have decent backups to restore from. So let’s go a bit more in depth on how we convert an in-guest iSCSI volume into a native VMDK.

Overview – Idea #3 fleshed out

There’s no single-step process to convert an in-guest iSCSI volume into a native VMDK; we have to follow a conversion process.

We must (at the time of writing) convert the in-guest iSCSI volume to a virtual mode RDM, at which point we can then Storage vMotion (sVMotion) it to a native VMDK. Below is my approach to doing so:

 

Step #1 – Find out what services are touching the drive we want to convert

Some VMs will be easier than others when it comes to finding this out. Some drives are dedicated to specific services, such as SQL Server. We need to know this because we want to be careful about data consistency. If unsure, we can use tools such as handle.exe from Microsoft Sysinternals, which will give us an idea as to which files are currently in use:

 


In this example E:\ was my mapped iSCSI volume. Executing handle.exe |findstr /i e:\ revealed which files on E:\ had active file handles. This can also be accomplished with Process Explorer. Next, we shut down the services that have handles to this drive – so in this example I shut down SQL Server.
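For reference, the two commands look roughly like this – MSSQLSERVER is the default SQL Server instance’s service name, which is an assumption and will differ for named instances:

  rem Show open handles on the iSCSI-mapped drive
  handle.exe | findstr /i e:\
  rem Stop the service holding those handles (here, the default SQL Server instance)
  net stop MSSQLSERVER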

 

Step #2 – Disconnect all iSCSI-based volumes, disable iSCSI vNICs and shut down the VM

  1. Log into the Virtual Machine.
  2. Open the “Disk Management” MMC snap-in.
  3. Right-click the disk representing the in-guest iSCSI volume and select “Offline”.
  4. The disk should no longer be mounted.
  5. Launch the iSCSI initiator and select the “Targets” tab.
  6. Select the target that’s currently connected and click “Disconnect”.
  7. The volume should be listed as Inactive and no longer visible from “Disk Management”.
  8. In Network Connections disable the iSCSI NIC.
  9. Shut down the VM (a command-line sketch of these steps follows below).
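The disk, NIC and shutdown steps can largely be scripted from an elevated command prompt if you’re doing this across several VMs – a rough sketch, where the disk number and NIC name are assumptions (check them with “list disk” in diskpart and “netsh interface show interface” first). The target disconnect itself (steps 5–7 above) is left to the iSCSI Initiator GUI in this sketch:

  diskpart
    rem At the DISKPART> prompt: take the iSCSI-backed disk offline
    select disk 2
    offline disk
    exit
  rem Disable the dedicated iSCSI vNIC
  netsh interface set interface name="iSCSI1" admin=disabled
  rem Shut the guest down
  shutdown /s /t 0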

 

Step #3 – Present previously used in-guest volume to ESXi hosts

We need to do this so we can add the volume as a Virtual Mode RDM to the VM. How we accomplish this depends on your storage vendor, but as a top-level overview:

  1. Log in to the SAN management application.
  2. Modify the existing volume access policies so the volume is visible to all ESXi hosts, using mechanisms such as access policies / CHAP / initiator name / IP address / etc.

 

Step #4 – Add volume as a Virtual Mode RDM to VM

  1. Perform a rescan of the ESXi host HBA’s so the newly presented volume is visible.
  2. Right click VM > Edit Settings.
  3. Add new Device > Hard Disk > Click Next.
  4. Select Raw Device Mapping as the Disk Type.
  5. Select the volume from the list.
  6. Select a datastore on which to store the RDM mapping file. Click Next.
  7. Select “Virtual” as the compatibility mode. Click Next.
  8. Leave advanced options as-is, unless required. Click Next.
  9. Click Finish.
  10. Click OK to commit the VM configuration changes. (An ESXi shell alternative to these steps is sketched below.)
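If you prefer the ESXi shell, the rescan and the RDM pointer creation can also be done there – a sketch under assumed names (the naa identifier, datastore and VM folder are placeholders), after which the resulting .vmdk is attached to the VM as an existing disk via Edit Settings:

  # Rescan all HBAs so the newly presented LUN appears
  esxcli storage core adapter rescan --all
  # Identify the naa ID of the presented volume
  esxcli storage core device list
  # Create a virtual compatibility mode RDM pointer on an existing datastore (-r = virtual, -z = physical)
  vmkfstools -r /vmfs/devices/disks/naa.60060160a1b2c3d4 /vmfs/volumes/Datastore01/MyVM/MyVM_rdm.vmdk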

 

Step #5 – Power on VM and check data integrity

  1. Power on the VM.
  2. Open “Disk Management”.
  3. Right click the added volume and select the “Online” option.
  4. Check drive contents (The volume should be mapped with the previous volume label/drive letter).

 

Step #6 – Re-enable services that require access

This is the opposite of Step #1 – bring back the services we stopped earlier (in my case, SQL Server).

 

Step #7 – Storage vMotion disk and change disk type

  1. Right click the VM in vSphere and select “Migrate”.
  2. Select “Change datastore” and click “next”.
  3. Click the “Advanced” button.
  4. Select the appropriate datastore for the RDM disk and change the disk format from “Same format as source” to “Thin/Thick Provision”. Other drives (i.e. the OS drive) remain unchanged.
  5. Click Next.
  6. Click Finish.
  7. Wait until the Storage vMotion has completed.
  8. Validate the migration by viewing the settings of the VM and checking that the aforementioned drive is listed as a standard thin/thick provisioned VMDK and not an RDM – see the quick check below.
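One quick way to double-check from the ESXi shell is to query the disk descriptor with vmkfstools – the path below is a placeholder. Before the Storage vMotion the disk reports as a raw device mapping; afterwards it should no longer reference a mapped device:

  # Query RDM attributes of the virtual disk descriptor
  vmkfstools -q /vmfs/volumes/Datastore01/MyVM/MyVM_1.vmdk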

Step #8 – Cleanup

At this point we have finished our conversion process and can clean up by removing any integration tools from the VM, removing the iSCSI vNIC and deleting the volume originally used from the SAN.

VCAP5-DCD Experience

Motivation

Towards the end of 2015 I decided to make passing the VCAP5-DCD exam a target. I had already refreshed my VCP certification to version 6 and wanted to move further up VMware’s certification chain. It may have seemed unusual timing given the impending release of version 6 of the VCAP exams, but as I was in between jobs at the time it seemed like a good idea to not only demonstrate some additional skills, but also enhance my CV for potential employers.

For those unfamiliar with VMware certifications, the VCAP requires an existing and valid VCP, and forms part of the VCIX certification, as shown by the image below. It’s worth mentioning, however, that you can achieve the VCIX6-DCV certification with a VCAP5 plus the corresponding VCAP6. So in my case I can sit the VCAP6-DCV Deploy exam and be awarded the VCIX6-DCV certification.

 

[Image: VMware certification path]

Preparation

It goes without saying – the VCAP exam is a bit of a beast. Over a relatively short amount of time the format has changed somewhat. When I sat the exam the specification was:

  • 22 Questions.
  • Mix of Design and Multiple choice questions.
  • One design question designated as the “Master Design”.
  • 180 Minutes total exam time.

This is taken from the official VMware VCAP5-DCD Blueprint and consequently the first resource I consulted. The blueprint lists exactly the areas you will expect to be tested on in the exam.

You really don’t need a lab to study for this exam. It’s predominantly theory and process. However you do have to read. A lot.

My bookshelf for this exam mainly consisted of:

  • Official Blueprint.
  • vBrownBag VCAP-DCD videos.
  • The official VCAP5-DCD study guide from VMware Press.
  • The vSphere Design Best Practices book from Packt Publishing.
  • The VCAP study pack from virtualtiers.com.
  • The VCAP tool demo video from the VMware website.
  • As many VMware best practice PDFs as Google will return, mainly around storage and networking.
  • Mastering vSphere 6.

I also posted and read a lot of content on the Google Plus VCAP5-DCD study group. Seriously check it out, the guys/gals there are great.

Before the exam

As with any exam I’ve ever taken, I follow a similar approach:

  • Early night the day before.
  • Chill out the night before. Go watch a film/TV show.
  • Relax.
  • Try and schedule the exam after lunch (I don’t like morning exams)
  • Have a good breakfast and/or lunch.
  • Arrive at the test center early. Allocate an appropriate amount of time for travel, compensating for traffic.
  • Try and have some confidence (difficult for me).

During the exam

During my research for this exam I looked at various blog sites and social community comments. There was an overwhelming trend in posts relating to the VCAP5-DCD exam – it’s all about time management. To an extent this is true, but I think the change to bring it down to 22 questions has made this a lot easier.

I spent a good 30-45mins alone on my “master” design question, about 15-20mins on the other design questions and the remaining time on the multiple choice questions.

I will admit though, after the design questions I was absolutely exhausted. I don’t even remember a lot of the multiple choice questions and for quite a few I pretty much clicked random answers. I got the impression that you’re heavily marked on the design questions. At the end I had about 20-30mins left to spare.

After the exam

I was really nervous about finishing the exam. I was so mentally exhausted that I’d convinced myself I’d failed and was already planning when to sit my resit. However, I was absolutely delighted when I found out I passed. All the hard work and effort paid off.

For those that are thinking of sitting a VCAP exam – good luck!
