Monday, December 30, 2013

VMware: Shrink VMDKs by removing zeroed blocks

VMDK size is one of the main aspects administrators must consider when deploying new VMs, since oversized disks can create several hazards in storage space usage. With thin provisioning VMware mitigated space exhaustion issues: this provisioning mechanism allocates only the blocks that are actually used by the operating system residing on the VMDK itself, preventing storage from being pre-emptively filled by overzealous admins planning for big virtual machine disks.



But what about shrinking those VMDKs that were already provisioned using thick provisioning? They usually consume a lot of storage space, and their thick allocation is not always justified. Thick eager zeroed is usually the allocation mechanism chosen for I/O-intensive virtual machines: since blocks are zeroed during VMDK creation, rather than when the operating system issues the first write to a block, it provides a small performance gain (on first write only, though).



As most of you certainly know, thick lazy zeroed is another allocation mechanism that pre-emptively reserves the storage space required by the virtual machine's VMDK. The difference between thick lazy and thick eager is that the former reserves the whole VMDK size on storage without zeroing the blocks, which are zeroed at first write.



Another crucial aspect to consider is that ESXi is not aware of space freed by the guest operating system when files are deleted. This space cannot be reclaimed by the underlying hypervisor, so the VMDK cannot be "downsized automatically".

Let's clarify this with an example: suppose we have a virtual machine whose VMDK is thin provisioned.
The operating system is 5GB in size. We install 2GB of additional software: the OS now reports a space consumption of 5+2=7GB, and the VMDK grows to the same size (7GB).
We later decide that the additional software is useless, so we delete/uninstall it. The OS now reports a space consumption of 7-2=5GB, but the VMDK is unchanged at 7GB.
This is because ESXi is not aware that the space is no longer used by the guest OS and cannot reclaim it.

By shrinking VMDKs this space can be reclaimed. An important precondition for unused-space reclamation is that the guest operating system zeroes the currently unused blocks: for the shrink to be fully effective, the guest OS should fill all currently unused space with zeroes. This can be done in Windows guests using software like Hard Disk Scrubber or SDelete, and in Linux guests by filling the free space with a zeroed placeholder file created with dd, as shown below.
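A minimal sketch of both variants (<destination_path> is a placeholder to adapt to your guest; SDelete's -z switch zeroes free space):

 # Windows guest: zero free space on drive C: using SDelete
 sdelete.exe -z C:

 # Linux guest: fill free space with zeroes, then delete the placeholder file.
 # dd exits non-zero once the disk is full, so chain with ';' rather than '&&'
 dd if=/dev/zero of=/<destination_path>/placeholderfile bs=8192 ; rm -f /<destination_path>/placeholderfile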

Since unused space is reclaimed during a shrink operation, the resulting VMDK will be thin provisioned, whether it started as thin, eager zeroed or lazy zeroed.

Let's now delve into how shrinking is performed. A common method is Storage vMotion: when storage vMotioning a virtual machine, the resulting disk format can be chosen. Another method is VMware Converter, converting the guest OS like a classic P2V conversion. The advantage of these two methods is that they can be performed while the VM is powered on.

A third method to shrink a VMDK is using vmkfstools.

This method requires the VM to be powered off during the shrinking process, and requires you to connect to the ESXi host via SSH to run some commands.

For this article's purpose I created a VM with a thick provisioned VMDK. The following image shows the space usage reported at the Linux guest level:



The guest OS reports a total space of 60GB, of which 2.3GB is currently used.

Since it's thick provisioned, the VMDK is also 60GB in size.



To shrink the VMDK using vmkfstools the following command is used:

vmkfstools --punchzero <path_to_VMDK_to_shrink>.vmdk



The --punchzero option is quite self-explanatory: it removes all zeroed blocks in a VMDK, freeing up the space they occupy.

As you can expect from the previous explanation, the resulting VMDK will be shrunk down to the size of the effective guest OS space usage.
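For example, with the VM powered off and an SSH session open on the host, the full command looks like this (the datastore and VM names below are hypothetical placeholders):

 vmkfstools --punchzero /vmfs/volumes/datastore1/MyVM/MyVM.vmdk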



That's all!!

Monday, December 23, 2013

VMware: Automating vFRC deployment with PowerCLI

After Increasing VM read performances using vFRC, let's discuss automating vFRC deployment in the datacenter using PowerCLI.

A few weeks ago VMware released some brand new cmdlets for automating VSAN and vFRC with PowerCLI, so after some testing I decided to write a blog post about vFRC automation. What about the vFRC cmdlets? They are packed in an additional PowerCLI module that needs to be imported before use. Once imported we are ready to go and, as they say, "the sky is the limit".

First you need to download the cmdlets: Download VSAN & vFRC PowerCLI cmdlets.

As explained in the installation instructions, place the downloaded module in your modules folder; if you don't already have one, create a new folder wherever you like. I used "C:\Users\paolop\Documents\WindowsPowerShell\Modules".

Then we need to add this new folder to the PSModulePath environment variable, so modules can be imported from it.

Here's the script that will do this for you. Copy these lines into a text file, edit them according to your environment, save it with a .ps1 extension, then run it from the PowerCLI shell.

 $p = [Environment]::GetEnvironmentVariable("PSModulePath")
 echo $p # Show your current module path
 $p += ";C:\Users\<YOUR_USER>\Documents\WindowsPowerShell\Modules" # Add your custom location for modules
 [Environment]::SetEnvironmentVariable("PSModulePath",$p)
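Note that the two-argument SetEnvironmentVariable call only changes the variable for the current process, so it must be re-run in each new shell. To make the change persistent you can write it at user level instead (a sketch, reusing $p from the script above):

 [Environment]::SetEnvironmentVariable("PSModulePath", $p, "User") # Persists across shell sessions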

Once the environment variable is set we need to import the VSAN & vFRC module into the PowerCLI shell:

Import-Module VMware.VimAutomation.Extensions
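If the import fails, you can verify that PowerShell actually sees the module in the new location:

 Get-Module -ListAvailable VMware.VimAutomation.Extensions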

New cmdlets are now available:

Get-HardDiskVFlashConfiguration
Get-VMHostVFlashConfiguration
Get-VsanDisk
Get-VsanDiskGroup
New-VsanDisk
New-VsanDiskGroup
Remove-VsanDisk
Remove-VsanDiskGroup
Set-HardDiskVFlashConfiguration
Set-VMHostVFlashConfiguration

For this article's purpose we only consider the vFRC-related cmdlets:

Get-Command -Module VMware.VimAutomation.Extensions | Format-List


 Name       : Get-HardDiskVFlashConfiguration
 Definition : Get-HardDiskVFlashConfiguration [[-HardDisk] <HardDisk[]>] [-Server <VIServer[]>]
              [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <ActionPreference>]
              [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <String>]
              [-OutBuffer <Int32>]

 Name       : Get-VMHostVFlashConfiguration
 Definition : Get-VMHostVFlashConfiguration [[-VMHost] <VMHost[]>] [-Server <VIServer[]>]
              [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <ActionPreference>]
              [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <String>]
              [-OutBuffer <Int32>]

 Name       : Set-HardDiskVFlashConfiguration
 Definition : Set-HardDiskVFlashConfiguration [-VFlashConfiguration] <HardDiskVFlashConfiguration[]>
              [-CacheSizeGB <Decimal>] [-CacheBlockSizeKB <Int64>] [-Verbose] [-Debug]
              [-ErrorAction <ActionPreference>] [-WarningAction <ActionPreference>]
              [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <String>]
              [-OutBuffer <Int32>] [-WhatIf] [-Confirm]

 Name       : Set-VMHostVFlashConfiguration
 Definition : Set-VMHostVFlashConfiguration [-RemoveVFlashResource]
              [-VFlashConfiguration <VMHostVFlashConfiguration[]>] [-SwapCacheReservationGB <Int64>]
              [-AddDevice <VMHostDisk[]>] [-AttachExistingVffs <String>] [-Server <VIServer[]>]
              [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <ActionPreference>]
              [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <String>]
              [-OutBuffer <Int32>] [-WhatIf] [-Confirm]

Let's now delve more specifically into PowerCLI with a sample script that creates a new datacenter, enables the flash pool resource on the datacenter's hosts and creates new virtual machines with vFRC enabled on their disks.

To do this I've created 3 different scripts:

CreateDC.ps1: creates a datacenter and a cluster with HA and DRS, then dynamically adds hosts to the datacenter by reading them from an input file containing each host's IP or FQDN.

vFRC.ps1: adds each server's local flash drives to the vFRC pool.

CreateVMs.ps1: deploys VMs from an existing template and reserves 1GB of their disk on the vFRC pool.

First we need to create the Settings.xml file the PowerCLI scripts read their variables from. As you will notice from my code, the variables are kept in an XML file and read by the PowerCLI scripts.

Edit this according to your environment.

 <?xml version="1.0" encoding="utf-8"?>
 <Settings>
   <vCenterIPorFQDN>10.0.0.127</vCenterIPorFQDN>
   <vCenterUsername>Administrator@vsphere.local</vCenterUsername>
   <vCenterPassword>vmware</vCenterPassword>
   <DatacenterName>My Datacenter</DatacenterName>
   <DatacenterFolder>Datacenter Folder</DatacenterFolder>
   <ClusterName>My Cluster</ClusterName>
   <HostUsername>root</HostUsername>
   <HostPassword>mypassword</HostPassword>
   <VmNumber>1</VmNumber>
   <VmBaseName>vFRC_VM</VmBaseName>
   <VmTemplate>Windows2k8Template</VmTemplate>
   <VmDatastore>Datastore</VmDatastore>
   <CacheSizeGB>1</CacheSizeGB>
 </Settings>

The variable names are quite self-explanatory, although it may be useful to point out that:

HostUsername: username for the hosts that will be added to the cluster. For simplicity's sake I assume all hosts share the same username/password.
HostPassword: password for the hosts that will be added to the cluster.
VmNumber: the number of VMs to deploy from the base template (Windows2k8Template in this example).
VmDatastore: name of the datastore where the virtual machines will reside.
CacheSizeGB: how many GB of vFRC will be assigned to each virtual machine disk.

Another configuration file required by the scripts is hosts.txt, which contains the ESXi hosts' IP addresses or FQDNs.

My hosts.txt contains a single host IP address:

 10.0.0.126  

The first PowerCLI script is CreateDC.ps1:

 [xml] $xmlconfigurations = Get-Content Settings.xml
 Write-Host "Connecting to" $xmlconfigurations.Settings.vCenterIPorFQDN "vCenter" -ForegroundColor "magenta"
 Connect-VIServer -Server $xmlconfigurations.Settings.vCenterIPorFQDN -User $xmlconfigurations.Settings.vCenterUsername -Password $xmlconfigurations.Settings.vCenterPassword
 Write-Host "Creating" $xmlconfigurations.Settings.DatacenterFolder "Folder" -ForegroundColor "magenta"
 Get-Folder -NoRecursion | New-Folder -Name $xmlconfigurations.Settings.DatacenterFolder
 Write-Host "Creating" $xmlconfigurations.Settings.DatacenterName "Datacenter and" $xmlconfigurations.Settings.ClusterName "Cluster" -ForegroundColor "magenta"
 New-Cluster -Location (
     New-Datacenter -Location $xmlconfigurations.Settings.DatacenterFolder -Name $xmlconfigurations.Settings.DatacenterName
 ) -Name $xmlconfigurations.Settings.ClusterName -HAEnabled -HAAdmissionControlEnabled:$false -DRSEnabled -DrsAutomationLevel FullyAutomated
 Get-Content hosts.txt | Foreach-Object { # Read hosts from hosts.txt
     Write-Host "Adding" $_ "to" $xmlconfigurations.Settings.ClusterName "Cluster" -ForegroundColor "magenta"
     Add-VMHost $_ -Location $xmlconfigurations.Settings.ClusterName -User $xmlconfigurations.Settings.HostUsername -Password $xmlconfigurations.Settings.HostPassword -RunAsync -Force:$true
 }

As stated above, it creates a new datacenter and a cluster, and adds hosts to the cluster. Nothing special here, no "new" cmdlets are used; I provide this code as an example of reading variables from XML and TXT files. Since most of you already have a datacenter properly set up, you can safely skip this script and move on to the next one, which brings in the vFRC cmdlets.







vFRC.ps1 will create a new vFRC pool using the hosts' unused local SSD disks.

 Write-Host "Importing PowerCLI vFRC cmdlets" -foregroundcolor "magenta"  
 Import-Module VMware.VimAutomation.Extensions #Import module  
 Get-Content hosts.txt | Foreach-Object { #Read hosts in hosts.txt   
 Write-Host "Getting current vFRC configuration for" $_ "host" -foregroundcolor "magenta"  
 $vFlashConfig = Get-VMHostVFlashConfiguration -VMHost $_  
 echo $vFlashConfig  
 Write-Host "Getting" $_ "host SSDs to be used by vFRC" -foregroundcolor "magenta"  
 $vFlashDisk = Get-VMHostDisk -VMHost $_  
 echo $vFlashDisk  
 Set-VMHostVFlashConfiguration -VFlashConfiguration $vFlashConfig -AddDevice $vFlashDisk #Enable vFRC on selected host   
 }  

This is done using the Set-VMHostVFlashConfiguration cmdlet:

Set-VMHostVFlashConfiguration -VFlashConfiguration $vFlashConfig -AddDevice $vFlashDisk
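One caveat: Get-VMHostDisk returns all of the host's local disks, not only SSDs. If your hosts mix flash and spinning disks you may want to filter before adding devices to the pool; a possible sketch, assuming your PowerCLI version exposes the IsSsd flag on the disk's ScsiLun ($esxHost stands for the host being processed, i.e. $_ inside the loop above):

 # Keep only the disks whose backing LUN reports itself as SSD (IsSsd assumed available)
 $vFlashDisk = Get-VMHostDisk -VMHost $esxHost | Where-Object { $_.ScsiLun.IsSsd }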





The last script is CreateVMs.ps1, which deploys VMs from a base template, reserves a portion of their disk space in the vFRC pool and then places them on a host (DRS will move each VM to the right host according to resource availability). The host IP address is statically defined in the following script but can easily be read from an external file, as explained in the scripts above.

 [xml] $xmlconfigurations = Get-Content Settings.xml
 Write-Host "Creating" $xmlconfigurations.Settings.VmNumber "VMs" -ForegroundColor "magenta"
 $vmname = $xmlconfigurations.Settings.VmBaseName
 1..$xmlconfigurations.Settings.VmNumber | Foreach { # Creates VMs on 10.0.0.126 host
     New-VM -VMHost 10.0.0.126 -Name $vmname$_ -Template $xmlconfigurations.Settings.VmTemplate -Datastore $xmlconfigurations.Settings.VmDatastore
 }
 Import-Module VMware.VimAutomation.Extensions # Import module
 Write-Host "Enabling vFRC on" $xmlconfigurations.Settings.VmNumber "VMs" -ForegroundColor "magenta"
 1..$xmlconfigurations.Settings.VmNumber | Foreach {
     Set-HardDiskVFlashConfiguration -VFlashConfiguration (Get-HardDiskVFlashConfiguration -HardDisk (Get-HardDisk -VM $vmname$_)) -CacheSizeGB $xmlconfigurations.Settings.CacheSizeGB -Confirm:$false
 }

vFRC cmdlet used here is:

Set-HardDiskVFlashConfiguration

which enables vFRC on a given virtual machine disk.
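To confirm the reservation took effect you can read the configuration back with the corresponding Get cmdlet; for instance, for the first VM created by the script (the name vFRC_VM1 follows from the VmBaseName in Settings.xml):

 Get-HardDiskVFlashConfiguration -HardDisk (Get-HardDisk -VM "vFRC_VM1")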







That's all!!

Friday, December 20, 2013

VMware: ESXi Unattended Scripted Installation

ESXi installation is an easy job for one or two hosts, but imagine repeating such an installation for 40-50 hosts: it would take all day. To avoid such a time-consuming situation VMware allows administrators to perform unattended ESXi installations.

The unattended installation is driven by a kickstart script provided at boot time. The kickstart script contains all the parameters ESXi needs to complete the installation process automatically, without further human intervention.

First, I suggest you have a look at the official documentation regarding ESXi 5.5 scripted installation:

Deploying ESXi 5.x using the Scripted Install feature (2004582)
About Installation and Upgrade Scripts

Here's my kickstart file. I named it ks.cfg. You can use it as a base template and edit it according to your requirements.

 #  
 # Sample scripted installation file  
 #  
 # Accept EULA  
 vmaccepteula  
 # Set root password  
 rootpw mypassword  
 #Install on local disk overwriting any existing VMFS datastore  
 install --firstdisk --overwritevmfs  
 # Network configuration  
 network --bootproto=static --device=vmnic0 --ip=192.168.116.228 --netmask=255.255.255.0 --gateway=192.168.116.2 --nameserver=192.168.116.2 --hostname=esx1.testdomain.local --vlanid=100 --addvmportgroup=1  
 #Reboot after installation completed  
 reboot  

As you can see the code is already commented, but let me spend a few words on:

install --firstdisk --overwritevmfs

This installs ESXi on the first available local disk, overwriting any existing VMFS partition.

While:

network --bootproto=static --device=vmnic0 --ip=192.168.116.228 --netmask=255.255.255.0 --gateway=192.168.116.2 --nameserver=192.168.116.2 --hostname=esx1.testdomain.local --vlanid=100 --addvmportgroup=1

This line specifies that vmnic0 will be used for management and assigns it an IP address, netmask, gateway, name server and VLAN ID.

--addvmportgroup=1 creates the VM Network port group, to which virtual machines will be connected by default.
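The file above covers only the basic directives. Kickstart also supports a %firstboot section with commands executed on the first boot after installation; a minimal sketch (the services enabled here are just an example, adapt to your needs):

 # Run these commands on first boot
 %firstboot --interpreter=busybox
 # Enable and start SSH and the ESXi shell
 vim-cmd hostsvc/enable_ssh
 vim-cmd hostsvc/start_ssh
 vim-cmd hostsvc/enable_esx_shell
 vim-cmd hostsvc/start_esx_shell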

Let me now explain how to use this kickstart file during installation.

Boot your host with the ESXi installation media attached (I use a CD-ROM). During boot press SHIFT + O to edit the boot options. The Weasel prompt will appear:

> runweasel

The basic command to use a network-accessible (HTTP, HTTPS, NFS, FTP) kickstart file is:

> runweasel ks=<kickstart_file_location> ip=<ip_address_to_use_to_retrieve_ks> netmask=<netmask_to_use_to_retrieve_ks> gateway=<gateway_to_use_to_retrieve_ks> vlanid=<vlan_to_use_to_retrieve_ks>

The kickstart script's location doesn't have to be an HTTP(S) server: FTP, NFS, CD-ROM and USB locations are also accepted, in the form:

ks=protocol://<serverpath>
ks=cdrom:/<path>
ks=file://<path>
ks=usb:</path>


In this example I retrieve the kickstart file from a web server (an HTTP location) and assign 192.168.116.222 as the host's IP address during the installation process.

> runweasel ks=http://192.168.116.1:8080/ks.cfg ip=192.168.116.222 netmask=255.255.255.0 gateway=192.168.116.2




The unattended installation will begin by parsing the kickstart file.



When the installation is complete the host will reboot and ESXi will be ready to be used.



That's all!!

Monday, December 16, 2013

VMware: vSphere Replication Part 7 - Provision additional Replication Servers

As stated in the previous article, the vSphere Replication architecture comprises two key elements: the vSphere Replication Manager and one or more vSphere Replication Servers. Although there is only one vSphere Replication Manager in a vSphere Replication deployment, we can add more than one vSphere Replication Server to our environment to achieve better load balancing when replicating VMs to the target site.

Provisioning more than one vSphere Replication Server helps offload replication tasks across multiple servers, leading to a potentially faster replication process.

How can we deploy more than the standard vSphere Replication Server included in the vSphere Replication Appliance?

The process is really simple: just log in to the vSphere Web Client, select vSphere Replication -> Home -> Manage -> vSphere Replication -> Replication Servers and click Deploy Additional RS Server.



The usual OVF deployment form will pop up; make sure you select the assembly file vSphere_Replication_AddOn_OVF10.ovf that comes with the vSphere Replication appliance ISO downloaded from the VMware website.

You will need to provide an IP address (or keep DHCP assignment if you prefer) and an administrative password to manage the server. Please bear in mind that the vSphere Replication Server, unlike the vSphere Replication Management Server, does not have a VAMI, so you cannot log in to a web page to configure it; the vSphere Replication Management Server addresses this for you and gives you the opportunity to configure additional vSphere Replication Servers from its VAMI.



Once it is deployed and powered on we can add it as an additional replication server.

Click Register vSphere Replication Server and select the newly deployed standalone server.



The newly deployed vSphere Replication Server will be added and listed in the Replication Servers page.



The additional vSphere Replication Server can now be selected when configuring a replication. The auto-assign mechanism will always choose the least busy vSphere Replication Server (i.e. the one with the fewest replications assigned), while manual selection gives you the opportunity to choose which server will replicate your VMs to the target.


Other blog posts in vSphere Replication Series:

vSphere Replication Part 1 - Introduction
vSphere Replication Part 2 - Installation
vSphere Replication Part 3 - Roles & Permissions
vSphere Replication Part 4 - Configuration
vSphere Replication Part 5 - Enable Replication
vSphere Replication Part 6 - Perform Recovery
vSphere Replication Part 7 - Provision additional Replication Servers

Friday, December 13, 2013

VMware: vSphere Replication Part 6 - Perform Recovery

When everything goes berserk it is time to perform a recovery!!
vSphere Replication's main goal is to allow administrators to recover corrupted or lost VM disks. The recovery process is quite simple yet at the same time really ingenious. How does it work?

On a global level the process comprises the following steps:

1) Select the VM to recover
2) Recover the VM at the replication site, power it on, verify it works as expected
3) Configure a reverse replication process to move the VM back from the destination site to the source site

Let's now delve deeper into these steps to see how recovery is performed.

First, it is interesting to note that if the virtual machine is still available at the source site, vSphere Replication allows you to synchronize disk data back from the target site to the source site. This can be useful when something goes wrong with the VM at the source site but the VM itself is still available and running.

For this article's purpose I will consider a proper failure scenario in which the VM at the source site became corrupted.

As pointed out in previous articles, my lab comprises two sites, SiteA and SiteB, each managed by a different vCenter.

To perform a VM recovery connect to the vCenter at the remote site (SiteB) and select Monitor -> vSphere Replication -> Manage -> Incoming Replications. Select the VM, right click on it and then click the Recover button.



The Recover procedure will pop up and guide you through the process.
The first selection concerns the recovery options; as said before, I will walk through a complete failure of source SiteA.




Select where to recover the VM. In this case I created a dedicated folder in my datacenter in which to place recovered VMs.



Select the host that will run your VM.



Then, when ready, click Finish. You might also want to select Power on the virtual machine after recovery. Don't worry about potential VM IP address conflicts; I will explain why later.



Once the recovery has been performed a new VM will be created at the recovery site (SiteB), and the Summary tab of the VM will inform you that the VM was recovered.



If, when enabling replication, you selected to keep PIT copies of your VM, all these points in time are associated with the VM in the form of snapshots. To recover the VM state at a certain PIT, right click the VM, select Snapshot Manager, and all the snapshots (PITs) will be listed there.



Once the VM has been recovered to the desired PIT and powered on, if you open the VM console you will notice that the VM has no network attached. This is because recovering a VM and powering it on at the remote site (SiteB) could introduce IP address conflicts with other VMs already running at the remote site.
To prevent such conflicts, recovered VMs have no network connected by default.



To re-attach the network adapter of a recovered VM just right click it, select Edit Settings and check the Connected box next to the network adapter.



The VM is now fully functional at the remote site (SiteB), but how about moving it back to the local site (SiteA)? To move the VM back to the source site a reverse replication must be configured: enabling vSphere Replication on the recovered VM, from the target site back to the source site, will bring the VM back to its original place.

This can be performed as explained in the vSphere Replication Part 5 - Enable Replication article, reversing the sites, i.e. SiteB will be the source site and SiteA the destination site.



Other blog posts in vSphere Replication Series:

vSphere Replication Part 1 - Introduction
vSphere Replication Part 2 - Installation
vSphere Replication Part 3 - Roles & Permissions
vSphere Replication Part 4 - Configuration
vSphere Replication Part 5 - Enable Replication
vSphere Replication Part 6 - Perform Recovery
vSphere Replication Part 7 - Provision additional Replication Servers

Monday, December 9, 2013

VMware: vSphere Replication Part 5 - Enable Replication

We have already discussed vSphere Replication, how to install it and how to configure it; now it is finally time to replicate VMs.
Replication is set on individual VMs through a step-by-step guided configuration in which you will be prompted for the destination datastore, the RPO, the replication server to use, whether or not to keep PIT copies, etc.

As discussed in vSphere Replication Part 2 - Installation, my environment comprises two sites (local SiteA and remote SiteB) managed by two different vCenters.

vSphere Replication can only be managed from the vSphere Web Client, so log in to it and click vSphere Replication.



Select the Home tab, then your local (SiteA) vCenter Server, and click Manage.



To verify everything is working properly we need to check that our local Replication Server is connected. Click vSphere Replication -> Replication Servers.
The vSphere Replication Appliance will be listed since, for now, it is the only vSphere Replication Server deployed in our environment.



Let's add a target site by establishing a connection to remote SiteB's vCenter. Click Target Sites.



Enter the remote vCenter Server FQDN or IP address and credentials. Since this is a test deployment and I'm the only user administering the whole infrastructure, I will use the Administrator role to connect to the remote vCenter. A best practice is to restrict user permissions if your infrastructure comprises different users. For further information have a look at the previous article: vSphere Replication Part 3 - Roles & Permissions.



The target site (SiteB) will be listed as connected.



We have completed the initial replication setup: we have linked our local vSphere Replication Server to the remote vSphere Replication Server, using the vCenters as replication proxies.

Let's enable replication on individual VMs.

Select the VM(s) you need to replicate, right click then select All vSphere Replication Actions -> Configure Replication.



Select the vSphere Replication Server location to use for the replication. Here you can select both the local vSphere Replication Server and the remote vSphere Replication Server; I will be using SiteB's remote vSphere Replication Server.



Since every location could have more than one vSphere Replication Server, you can use Auto-assign vSphere Replication Server or select one manually. Auto-assign will pick the least busy one (i.e. the one with the fewest replications).



Select the Target Datastore; this will be the replication destination datastore. If you selected a remote vSphere Replication Server, the remote datastores managed by the remote vCenter will be available.



Next, select the quiescing method for the VM. VSS is only enabled if you are replicating a Windows virtual machine. Using VSS to quiesce the VM while creating the PIT-consistent copy to be sent to the target datastore provides not only VM-level consistency but application-level consistency as well. If VSS quiescing occurs correctly, data integrity is retained in applications like Exchange, Active Directory, etc.



The next selection is the RPO. The Recovery Point Objective (RPO) metric indicates how much data you are willing to lose in case of a disaster. vSphere Replication allows an RPO from a maximum of 24 hours down to a minimum of 15 minutes. As explained in the introduction post, more aggressive RPOs cannot be achieved because quiescing a VM too frequently could degrade the performance of the VM itself.

RPO selection is a key setting for replication and must be planned carefully. Low RPOs cannot always be achieved due to technical constraints. A key element to consider is the bandwidth available between the source site and the target site, since it affects the time needed to copy changes from SiteA to SiteB and therefore the achievable RPO.

Let's discuss this in a more detailed and practical way to clarify the concept. Consider a VM with the most aggressive RPO allowed by vSphere Replication (15 minutes). A common misconception is that an RPO of 15 minutes means the virtual machine at the source site is replicated to the destination site every 15 minutes. This is incorrect: an RPO of 15 minutes means that the VM state at the remote site can be at most 15 minutes older than the virtual machine state at the source site. The vSphere Replication documentation has a nice example of this concept; let's quote it:

You set the RPO during replication configuration to 15 minutes. If the replication starts at 12:00 and it takes five minutes to transfer to the target site, the instance becomes available on the target site at 12:05, but it reflects the state of the virtual machine at 12:00. The next replication can start no later than 12:10. This replication instance is then available at 12:15 when the first replication instance that started at 12:00 expires.

If you set the RPO to 15 minutes and the replication takes 7.5 minutes to transfer an instance, vSphere Replication transfers an instance all the time. If the replication takes more than 7.5 minutes, the replication encounters periodic RPO violations. For example, if the replication starts at 12:00 and takes 10 minutes to transfer an instance, the replication finishes at 12:10. You can start another replication immediately, but it finishes at 12:20. During the time interval 12:15-12:20, an RPO violation occurs because the latest available instance started at 12:00 and is too old.

Another element to consider when planning the RPO is the average amount of data modified in the VM. Since vSphere Replication transfers deltas only, you should consider how the delta size varies between two replications. Can the available connection bandwidth transfer this delta to the destination site within the selected RPO? Is the delta fairly constant in size, or do the VM's workload patterns introduce spikes in data committed to the VM disks, making the deltas grow large and unpredictable and therefore potentially preventing the connection from transferring the entire delta within a replication cycle?

In conclusion, when choosing the right RPO for your VMs you must do some math based on the available bandwidth between the source and target sites, as well as on delta size growth driven by virtual machine workload patterns.
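A back-of-the-envelope check based on the quoted example (the link speed here is an assumption for illustration): with a 15-minute RPO each delta must transfer in at most 7.5 minutes (450 seconds) to avoid violations. On a dedicated 10 Mbps link that is roughly 1.25 MB/s x 450 s ≈ 560 MB per replication cycle: if a VM routinely dirties more data than that between two replications, a 15-minute RPO will produce periodic violations, and you should either relax the RPO or provision more bandwidth.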

Another feature of vSphere Replication is Point In Time (PIT) instances. These are used to "freeze" the state of a virtual machine at a certain moment. By enabling PIT copies you can select how many instances to keep and for how many days.



Once the RPO is selected we are ready to complete the wizard: click Finish.



Click on the replicated VM: under the Summary tab a new dashboard called VM Replication will appear.



To monitor the replications currently configured for a vCenter, both incoming and outgoing, go to vCenter -> Monitor -> vSphere Replication.



Other blog posts in vSphere Replication Series:

vSphere Replication Part 1 - Introduction
vSphere Replication Part 2 - Installation
vSphere Replication Part 3 - Roles & Permissions
vSphere Replication Part 4 - Configuration
vSphere Replication Part 5 - Enable Replication
vSphere Replication Part 6 - Perform Recovery 
vSphere Replication Part 7 - Provision additional Replication Servers 

Friday, December 6, 2013

VMware: vSphere Replication Part 4 - Configuration

vSphere Replication is ready out of the box: once the appliance has been deployed it is ready to be used. But what if you need to change some settings?

vSphere Replication Appliance has a VAMI (VMware Appliance Management Interface) that can be accessed from:

https://<vSphere_Replication_IP_or_FQDN>:5480

All configuration changes are performed through the VAMI.

The most important tabs are under VR -> Configuration.



Here you can configure the vSphere Replication database. If the embedded database is not sufficient for your deployment you can use an external SQL Server or Oracle database, as well as import an existing DB from another vSphere Replication Manager Server.



The VRM Host can be edited in case you need to switch to another vSphere Replication Manager Server in your environment.
The VRM Site Name, vCenter Server Address, vCenter Server Port and vCenter Server Admin Mail can also be customized.



The SSL section allows you to accept only signed certificates, as well as generate a self-signed certificate and/or upload an existing one.



The Service Status section allows you to monitor the state of the vSphere Replication Manager Server and restart it.



The VR -> Security tab allows you to change the vSphere Replication password, and VR -> Support generates the support bundle, which is useful in case of a service request.

Network -> Address is used to change vSphere Replication Appliance network settings like hostname, IP address, default gateway, etc.



Other blog posts in vSphere Replication Series:

vSphere Replication Part 1 - Introduction
vSphere Replication Part 2 - Installation
vSphere Replication Part 3 - Roles & Permissions
vSphere Replication Part 4 - Configuration
vSphere Replication Part 5 - Enable Replication 
vSphere Replication Part 6 - Perform Recovery  
vSphere Replication Part 7 - Provision additional Replication Servers