
VMware Cloud on AWS

Perhaps one of VMware’s most significant announcements in recent times is its partnership with Amazon Web Services (AWS), including the ability to leverage AWS’s infrastructure to provision vSphere-managed resources. What exactly does this mean, and what benefits could it bring to the enterprise?

 

Collaboration of Two Giants

To understand and appreciate the significance of this partnership, we must acknowledge the position and perspective of each party.

VMware:

  • Market leader in private cloud offerings
  • Deep roots and history in virtualisation
  • Expanding portfolio

AWS:

  • Market leader in public cloud offerings
  • Broad and expanding range of services
  • Global scale

VMware has a significant presence in the on-premise datacentre, in contrast to AWS, which focuses entirely on the public cloud space. VMware Cloud on AWS sits in the middle as a true hybrid cloud solution, leveraging the established, industry-leading technologies and software developed by VMware together with the infrastructure capabilities provided by AWS.

 

How it Works

In a typical setup, an established vSphere private cloud already exists. Customers can then provision an AWS-backed vSphere environment using a modern HTML5-based client. The environment created in AWS leverages the following technologies:

  • ESXi on bare metal servers
  • vSphere management
  • vSAN
  • NSX

 

The connection between the on-premise and AWS-hosted vSphere environments is facilitated by Hybrid Linked Mode. This allows customers to manage both environments through a single management interface and, for example, to migrate and manage workloads between the two.
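As a rough illustration of what this looks like in practice, the sketch below uses PowerCLI to relocate a VM from an on-premise vCenter to a VMC-hosted vCenter while connected to both in the same session. All of the names used here (vCenter addresses, VM, datastore and port group) are placeholder assumptions for a lab rather than anything prescribed by the service:

# Minimal sketch - connect to both the on-premise and VMC vCenter Servers (placeholder names)
Connect-VIServer -Server "vcenter.onprem.local"
Connect-VIServer -Server "vcenter.sddc.vmwarevmc.com"

# Source VM on-premise; destination host, datastore and port group in the VMC SDDC
$vm       = Get-VM -Name "App01" -Server "vcenter.onprem.local"
$destHost = Get-VMHost -Server "vcenter.sddc.vmwarevmc.com" | Select-Object -First 1
$destDS   = Get-Datastore -Name "WorkloadDatastore" -Server "vcenter.sddc.vmwarevmc.com"
$destPG   = Get-VDPortgroup -Name "App-Network" -Server "vcenter.sddc.vmwarevmc.com"

# Cross-vCenter vMotion: remap compute, storage and networking in a single operation
Move-VM -VM $vm -Destination $destHost -Datastore $destDS `
    -NetworkAdapter (Get-NetworkAdapter -VM $vm) -PortGroup $destPG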

Advantages

Existing vSphere customers may already be leveraging AWS resources in a different way; however, there are significant advantages associated with implementing VMware Cloud on AWS, such as:

Delivered as a service from VMware – The entire ecosystem of this hybrid cloud solution is sold, delivered and supported by VMware. This simplifies support, management and billing, as well as activities such as patching and updates.

Consistent operational model – Existing private cloud customers can use the same tools, processes and technologies to manage the solution, including integration with other VMware products in the vRealize suite.

Enterprise-grade capabilities – This solution leverages the extensive AWS hardware capabilities, including the latest in low-latency, SSD-based storage and high-performance networking.

Access to native AWS resources – This solution can be further expanded to access and consume native AWS technologies pertaining to databases, AI, analytics and more.

Use Cases

VMware Cloud on AWS has several applications, including (but not limited to) the following:

 

Datacenter Extension

 

Because of how rapidly an AWS-backed software-defined datacenter can be provisioned, expanding an on-premise environment becomes a trivial task. Once provisioned, these additional resources can be consumed to meet various business and technical demands.

Dev / Test

 

Adding further capacity alongside an existing private cloud environment enables a division of duties and responsibilities, allowing organisations to separate out specific environments for the purposes of security, delegation and management.

Application Migration

VMware Cloud on AWS enables us to migrate N-tier applications to an AWS-backed vSphere environment without the need to re-architect or convert our virtual machine, compute and storage constructs. This is because we’re using the same software-defined data centre technologies across the entire estate (vSphere, NSX and vSAN).

Conclusion

There are a number of viable applications for VMware Cloud on AWS, and it’s a very strong offering given the pedigree of both VMware and AWS. Combining the strengths of each creates a very compelling option for anyone considering a hybrid cloud adoption strategy.

To learn more about VMware Cloud on AWS please review the following:

https://aws.amazon.com/vmware/

https://cloud.vmware.com/vmc-aws

 

Homelab – Nested ESXi with NSX and vSAN

The Rebuild

I decided to trash and rebuild my nested homelab to include both NSX and vSAN. When I attempted to prepare the hosts for NSX, host preparation failed with an error message (screenshot omitted).

I’ve not had this issue before, so I did some research and found a lot of blog posts, comments and KB articles linking this issue to VUM, for example: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2053782

However, after following the instructions I couldn’t set the “bypassVumEnabled” setting. Nor could I manually install the NSX VIBs; attempting to do so produced the following:

 

[root@ESXi4:~] esxcli software vib install -v /vmfs/volumes/vsanDatastore/VIB/vib20/esx-nsxv/VMware_bootbank_esx-nsxv_6.5.0-0.0.6244264.vib --force
[LiveInstallationError]
Error in running ['/etc/init.d/vShield-Stateful-Firewall', 'start', 'install']:
Return code: 1
Output: vShield-Stateful-Firewall is not running
watchdog-dfwpktlogs: PID file /var/run/vmware/watchdog-dfwpktlogs.PID does not exist
watchdog-dfwpktlogs: Unable to terminate watchdog: No running watchdog process for dfwpktlogs
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to release memory reservation for vsfwd
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Resource pool 'host/vim/vmvisor/vsfwd' release failed. retrying..
Set memory minlimit for vsfwd to 256MB
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to set memory reservation for vsfwd to 256MB
ERROR: ld.so: object '/lib/libMallocArenaFix.so' from LD_PRELOAD cannot be preloaded: ignored.
Failed to release memory reservation for vsfwd
Resource pool 'host/vim/vmvisor/vsfwd' released.
Resource pool creation failed. Not starting vShield-Stateful-Firewall

It is not safe to continue. Please reboot the host immediately to discard the unfinished update.
Please refer to the log file for more details.
[root@ESXi4:~]

In particular, I was intrigued by the “Failed to release memory reservation for vsfwd” message. I decided to increase the memory configuration of my nested ESXi VMs from 6GB to 8GB, after which I was able to prepare the hosts from the UI.
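If you’d rather make that change with PowerCLI against the outer (physical) environment than through the UI, a minimal sketch looks something like the following, assuming the nested ESXi hosts are VMs named ESXi1 to ESXi4 and can safely be powered off:

# Minimal sketch - nested ESXi host VM names are assumptions for my lab
$nestedHosts = Get-VM -Name "ESXi1","ESXi2","ESXi3","ESXi4"

# Power the nested hosts off first; memory can't be increased while running unless memory hot-add is enabled
$nestedHosts | Stop-VM -Confirm:$false

# Increase each nested host to 8GB and power back on
$nestedHosts | Set-VM -MemoryGB 8 -Confirm:$false
$nestedHosts | Start-VM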

TL;DR: If you’re running ESXi 6.5, NSX 6.3.3 and vSAN 6.6.1 and experiencing issues preparing hosts for NSX, increase the nested ESXi VM memory configuration to at least 8GB.

vDS to vSS and back again

Overview

I was recently tasked with migrating a selection of ESXi 5.5 hosts into a new vSphere 6.5 environment. These hosts leveraged Fibre Channel HBAs for block storage and 2x 10GbE interfaces for all other traffic types. I suspected that doing a vDS detach and re-sync was not the correct approach, even though some people reported success doing it that way. The /r/vmware Reddit community agreed, and I later found a VMware KB article backing the more widely accepted solution of moving everything to a vSphere Standard Switch first.

Automating the process

There are already several resources on how to do vDS -> vSS migrations but I fancied trying it myself. I used Virtually Ghetto’s script as a foundation for my own but wanted to add a few changes that were applicable to my specific environment. These included:

  • Populating a vSS dynamically by probing the vDS the host was attached to, including VLAN ID tags
    • Additionally, add a prefix to differentiate between the vSS and vDS portgroups
  • Automating the migration of VM port groups from the vDS to a vSS in a way that would result in no downtime.

Script process

This script performs the migration on a specific host, defined in $vmhost.

  1. Connect to vCenter Server
  2. Create a vSS on the host called “vSwitch_Migration”
  3. Iterate through the vDS portgroups and recreate them on the vSS like-for-like, including VLAN ID tagging (where appropriate)
  4. Acquire a list of VMkernel adapters
  5. Move vmnic0 from the vDS to the vSS, migrating the VMkernel interfaces at the same time
  6. Iterate through all the VMs on the host, reconfiguring each port group so it resides on the vSS
  7. Once all the VMs have migrated, add the second (and final, in my environment) vmnic to the vSS
  8. At this point nothing specific to this host resides on the vDS, so remove the vDS from this host

If you plan to run these scripts, test them first in a non-production environment.


Write-Host "Connecting to vCenter Server" -foregroundcolor Green
Connect-VIServer -Server "vCenterServer" -User administrator@vsphere.local -Pass "somepassword" | Out-Null

# Individual ESXi host to migrate from vDS to VSS
$vmhost = "192.168.1.20"
Write-Host "Host selected: " $vmhost -foregroundcolor Green

# Create a new vSS on the host
$vss_name = New-VirtualSwitch -VMHost $vmhost -Name vSwitch_Migration
Write-Host "Created new vSS on host" $vmhost "named" "vSwitch_Migration" -foregroundcolor Green

#VDS to migrate from
$vds_name = "MyvDS"
$vds = Get-VDSwitch -Name $vds_name

#Probe the VDS, get port groups and re-create on VSS
$vds_portgroups = Get-VDPortGroup -VDSwitch $vds_name
foreach ($vds_portgroup in $vds_portgroups)
{
    $PortgroupName = $vds_portgroup.Name
    if ([string]::IsNullOrEmpty($vds_portgroup.vlanconfiguration.vlanid))
    {
        Write-Host "No VLAN Config for" $vds_portgroup.name "found" -foregroundcolor Green
        New-VirtualPortGroup -virtualSwitch $vss_name -name "VSS_$PortgroupName" | Out-Null
    }
    else
    {
        Write-Host "VLAN config present for" $vds_portgroup.name -foregroundcolor Green
        New-VirtualPortGroup -virtualSwitch $vss_name -name "VSS_$PortgroupName" -VLanId $vds_portgroup.vlanconfiguration.vlanid | Out-Null
    }
}

#Create a list of VMKernel adapters
$management_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0"
$vmotion1_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1"
$vmotion2_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk2"
$vmkernel_list = @($management_vmkernel,$vmotion1_vmkernel,$vmotion2_vmkernel)

#Create mapping for VMKernel -> vss Port Group
$management_vmkernel_portgroup = Get-VirtualPortGroup -name "VSS_Mgmt" -Host $vmhost
$vmotion1_vmkernel_portgroup = Get-VirtualPortGroup -name "VSS_vMotion1" -Host $vmhost
$vmotion2_vmkernel_portgroup = Get-VirtualPortGroup -name "VSS_vMotion2" -Host $vmhost
$pg_array = @($management_vmkernel_portgroup,$vmotion1_vmkernel_portgroup,$vmotion2_vmkernel_portgroup)

#Move 1 uplink to the vss, also move over vmkernel interfaces
Write-Host "Moving vmnic0 from the vDS to VSS including vmkernel interfaces" -foregroundcolor Green
Add-VirtualSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic0" -VMHost $vmhost) -VirtualSwitch $vss_name -VMHostVirtualNic $vmkernel_list -VirtualNicPortgroup $pg_array -Confirm:$false

#Moving VMs from the vDS to the vSS
$vmlist = Get-VM | Where-Object {$_.VMHost.name -eq $vmhost}

foreach ($vm in $vmlist)
{
    #VMs may have more than one NIC
    $vmniclist = Get-NetworkAdapter -vm $vm
    foreach ($vmnic in $vmniclist)
    {
        #Prefix matches the port group naming used when the vSS was populated
        $newportgroup = "VSS_" + $vmnic.NetworkName
        Write-Host "Changing port group for" $vm.name "from" $vmnic.NetworkName "to" $newportgroup -foregroundcolor Green
        Set-NetworkAdapter -NetworkAdapter $vmnic -NetworkName $newportgroup -Confirm:$false | Out-Null
    }
}

#Moving additional vmnic to vSS
Write-Host "All VMs migrated, adding second vmnic to vSS" -foregroundcolor Green
Add-VirtualSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic1" -VMHost $vmhost) -VirtualSwitch $vss_name -Confirm:$false

#Tidyup - Remove DVS from this host
Write-Host "Removing host from vDS" -foregroundcolor Green
$vds | Remove-VDSwitchVMHost -VMHost $vmhost -Confirm:$false


The reverse

Although vSphere has some handy built-in tools for migrating hosts, port groups and networking onto a vDS, scripting the reverse migration (vSS back to vDS) didn’t require many changes to the original script:


Write-Host "Connecting to vCenter Server" -foregroundcolor Green
Connect-VIServer -Server "vCenterServer" -User administrator@vsphere.local -Pass "somepassword" | Out-Null

# Individual ESXi host to migrate from the vSS back to the vDS
$vmhost = "192.168.1.20"
Write-Host "Host selected: " $vmhost -foregroundcolor Green

#VDS to migrate to
$vds_name = "MyvDS"
$vds = Get-VDSwitch -Name $vds_name

#Vss to migrate from
$vss_name = "vSwitch_Migration"
$vss = Get-VirtualSwitch -Name $vss_name -VMHost $vmhost

#Add host to vDS but don't add uplinks yet
Write-Host "Adding host to vDS without uplinks" -foregroundcolor Green
Add-VDSwitchVMHost -VMHost $vmhost -VDSwitch $vds

#Create a list of VMKernel adaptors
$management_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0"
$vmotion1_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1"
$vmotion2_vmkernel = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk2"
$vmkernel_list = @($management_vmkernel,$vmotion1_vmkernel,$vmotion2_vmkernel)

#Create mapping for VMKernel -> vds Port Group
$management_vmkernel_portgroup = Get-VDPortgroup -name "Mgmt" -VDSwitch $vds_name
$vmotion1_vmkernel_portgroup = Get-VDPortgroup -name "vMotion0" -VDSwitch $vds_name
$vmotion2_vmkernel_portgroup = Get-VDPortgroup -name "vMotion1" -VDSwitch $vds_name
$vmkernel_portgroup_list = @($management_vmkernel_portgroup,$vmotion1_vmkernel_portgroup,$vmotion2_vmkernel_portgroup)

#Move 1 uplink to the vDS, also move over vmkernel interfaces
Write-Host "Moving vmnic0 from the vSS to vDS including vmkernel interfaces" -foregroundcolor Green
Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic0" -VMHost $vmhost) -DistributedSwitch $vds_name -VMHostVirtualNic $vmkernel_list -VirtualNicPortgroup $vmkernel_portgroup_list -Confirm:$false

#Moving VMs from the vSS to the vDS
$vmlist = Get-VM | Where-Object {$_.VMHost.name -eq $vmhost}

foreach ($vm in $vmlist)
{
    #VMs may have more than one NIC
    $vmniclist = Get-NetworkAdapter -vm $vm
    foreach ($vmnic in $vmniclist)
    {
        #Strip the "VSS_" prefix and resolve the corresponding vDS port group object explicitly
        $newportgroupname = $vmnic.NetworkName.Replace("VSS_","")
        $newportgroup = Get-VDPortgroup -Name $newportgroupname -VDSwitch $vds_name
        Write-Host "Changing port group for" $vm.name "from" $vmnic.NetworkName "to" $newportgroupname -foregroundcolor Green
        Set-NetworkAdapter -NetworkAdapter $vmnic -Portgroup $newportgroup -Confirm:$false | Out-Null
    }
}

#Moving additional vmnic to vDS
Write-Host "All VMs migrated, adding second vmnic to vDS" -foregroundcolor Green
Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic (Get-VMHostNetworkAdapter -Physical -Name "vmnic1" -VMHost $vmhost) -DistributedSwitch $vds_name -Confirm:$false

#Tidyup - Remove vSS from this host
Write-Host "Removing VSS from host" -foregroundcolor Green
Remove-VirtualSwitch -VirtualSwitch $vss -Confirm:$false