giovedì 27 febbraio 2014

VMware: WebPowerCLI

WebPowerCLI is a web page that allows you to run PowerCLI commands. A major advantage of running PowerCLI through a web page is that you can write and run scripts from virtually any platform with a web browser.

Download WebPowerCLI from GitHub




How to use it:

The first cmdlet of your scripts must always be Connect-VIServer (...) in order to establish a connection to the vCenter Server or to an ESXi host.

For example:

Connect-VIServer -Server <vcenter_ip_address> -User <username> -Password <password>
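
A complete script run through WebPowerCLI could then look like the following minimal sketch, which lists the powered-on VMs and closes the session (the address, credentials and property selection are placeholders, not values taken from WebPowerCLI itself):

Connect-VIServer -Server 192.168.1.10 -User administrator@vsphere.local -Password mypassword
Get-VM | Where-Object {$_.PowerState -eq "PoweredOn"} | Select-Object Name,NumCpu,MemoryGB
Disconnect-VIServer -Confirm:$false

The example deliberately avoids "#" comments, since comments are a known issue in the current version (see below).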

Features:


-Run PowerCLI from the web.
-Autocomplete for PowerCLI cmdlets.
-Click to add a cmdlet. Full syntax for any PowerCLI cmdlet.
-Every cmdlet links to the official VMware documentation.
-Place your own PowerCLI scripts in the "ps" folder in order to use them from WebPowerCLI.
-esxcli support.

Installation steps:

1) Install Windows Server 2012 and configure networking according to your vSphere environment (the Windows server must be able to communicate with the vCenter Server/ESXi hosts).
2) Install the IIS web server.
3) Install PHP5.
4) Open PowerShell and run Set-ExecutionPolicy Unrestricted.
5) Install VMware PowerCLI.
6) Copy all files from the webpowercli.zip archive to your www directory. You can either create a new website in IIS or replace the default one.

Known Issues:

There are some issues in the current version of WebPowerCLI. For example, the "#" comment character does not work, the "Save to File" feature works only in Chrome, and sometimes cmdlets require more than a single click to be inserted into the Input text area.

Disclaimer: I'm NOT a developer and I coded WebPowerCLI in my spare time. I'm totally aware the code is far from elegant and far from efficient but please don't take this software too seriously.

THIS SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. THE AUTHOR IS NOT RESPONSIBLE FOR ANY DAMAGE RESULTING FROM THE USE OF THIS SOFTWARE.

Current WebPowerCLI version: 1.0 - This page will be updated whenever new versions become available.

Download WebPowerCLI from GitHub


Source: WebPowerCLI repository on GitHub

lunedì 24 febbraio 2014

VMware: VSAN Part4 - Automate VSAN using PowerCLI

VSAN deployment can be automated using PowerCLI. PowerCLI Extensions must be installed in order to add VSAN & vFRC cmdlets to PowerCLI.

As explained in the Automating vFRC deployment with PowerCLI post, a few steps are required in order to register the VMware.VimAutomation.Extensions module.

After that several new cmdlets become available:

Get-VsanDisk  
Get-VsanDiskGroup  
New-VsanDisk  
New-VsanDiskGroup  
Remove-VsanDisk  
Remove-VsanDiskGroup  
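
As a quick check that the module has been registered correctly, a minimal sketch like the following (run after connecting to vCenter Server) lists a host's disk groups and their member disks. The host address matches the vLab used below, and the parameter names follow later PowerCLI releases, so they may differ slightly in the PowerCLI Extensions fling:

Import-Module VMware.VimAutomation.Extensions
$vsanHost = Get-VMHost -Name "192.168.243.137"
Get-VsanDiskGroup -VMHost $vsanHost
Get-VsanDisk -VsanDiskGroup (Get-VsanDiskGroup -VMHost $vsanHost)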

The following script allows you to automate the creation of a VSAN enabled cluster in just one click.

Here are the steps performed by the script:

-Import the VSAN cmdlets into the PowerCLI session in order to use the new cmdlets.
-Connect to vCenter Server.
-Create a Datacenter.
-Create a VSAN enabled Cluster.
-Insert and assign a license to the VSAN cluster (this is optional, but in my vLab I was not able to claim VSAN disks without first licensing the VSAN solution).
-Add all hosts participating in the VSAN cluster.
-Add a VSAN vmkernel to each host's vSwitch.



Before launching the PowerCLI script, make sure you correctly set the required variables.

Here is the script; I've also added it to my GitHub repository:

Download Automating VSAN.ps1 from GitHub

 #Registering VSAN PowerCLI module  
 $p = [Environment]::GetEnvironmentVariable("PSModulePath")   
 echo $p #Show your current path to modules   
 $p += ";C:\Users\Paolo\WindowsPowerShell\Modules" #Add your custom location for modules   
 [Environment]::SetEnvironmentVariable("PSModulePath",$p)  
 #Variable declaration  
 $vCenterIPorFQDN="192.168.243.40"  
 $vCenterUsername="Administrator@vsphere.local"  
 $vCenterPassword="vmware"  
 $DatacenterFolder="DCFolder"  
 $DatacenterName="VSANDC"  
 $ClusterName="NewCluster"  
 $VSANHosts= @("192.168.243.137","192.168.243.142","192.168.243.141") #IP or FQDN of hosts participating in VSAN cluster  
 $HostUsername="root"  
 $HostPassword="mypassword"  
 $vSwitchName="vSwitch0" #vSwitch on which create VSAN enabled vmkernel  
 $VSANvmkernelIP= @("10.24.45.1","10.24.45.2","10.24.45.3") #IP for VSAN enabled vmkernel  
 $VSANvmkernelSubnetMask="255.255.255.0" #Subnet Mask for VSAN enabled vmkernel  
 $vsanLicense="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX" #VSAN License code  
 Write-Host "Importing PowerCLI VSAN cmdlets" -foregroundcolor "magenta"   
 Import-Module VMware.VimAutomation.Extensions  
 Write-Host "Connecting to vCenter" -foregroundcolor "magenta"   
 Connect-VIServer -Server $vCenterIPorFQDN -User $vCenterUsername -Password $vCenterPassword  
 Write-Host "Creating Folder" -foregroundcolor "magenta"   
 Get-Folder -NoRecursion | New-Folder -Name $DatacenterFolder  
 Write-Host "Creating Datacenter and Cluster" -foregroundcolor "magenta"   
 New-Cluster -Location (   
 New-Datacenter -Location $DatacenterFolder -Name $DatacenterName   
 ) -Name $ClusterName -VsanEnabled:$true -VsanDiskClaimMode Automatic  
 $i = 0 #Initialize loop variable  
 Write-Host "Licensing VSAN cluster" -foregroundcolor "magenta"  
 #Credits to Mike Laverick - http://www.mikelaverick.com/2013/11/back-to-basics-post-configuration-of-vcenter-5-5-install-powercli/  
 $datacenterMoRef = (Get-Cluster -Name $ClusterName | Get-View).MoRef  
 $serviceinstance = Get-View ServiceInstance  
 $LicManRef=$serviceinstance.Content.LicenseManager  
 $LicManView=Get-View $LicManRef  
 $licenseassetmanager = Get-View $LicManView.LicenseAssignmentManager  
 $licenseassetmanager.UpdateAssignedLicense($datacenterMoRef.value,$vsanLicense,"Virtual SAN 5.5 Advanced")  
 foreach ($element in $VSANHosts) {  
      Write-Host "Adding" $element "to Cluster" -foregroundcolor "magenta"   
      Add-VMHost $element -Location $ClusterName -User $HostUsername -Password $HostPassword -RunAsync -force:$true   
      Write-Host "One minute sleep in order to register" $element "into the cluster" -foregroundcolor "magenta"  
      Start-Sleep -s 60  
      Write-Host "Enabling VSAN vmkernel on" $element "host" -foregroundcolor "magenta"  
      if ($i -lt $VSANvmkernelIP.Length) {  
           New-VMHostNetworkAdapter -VMHost (Get-VMHost -Name $element) -PortGroup VSAN -VirtualSwitch $vSwitchName -IP $VSANvmkernelIP[$i] -SubnetMask $VSANvmkernelSubnetMask -VsanTrafficEnabled:$true  
      }       
      $i++  
 }  

Other blog posts in VSAN Series:

VSAN Part1 - Introduction
VSAN Part2 - Initial Setup
VSAN Part3 - Storage Policies
VSAN Part4 - Automate VSAN using PowerCLI

lunedì 17 febbraio 2014

VMware: VSAN Part3 - Storage Policies

Being a Software Defined Storage solution, VSAN uses Storage Policies to provide storage resources to virtual machines. These policy-driven objects specify how many storage resources can be consumed by, or granted to, certain VMs.

Policies can be created in vCenter Server under VM Storage Policies.



Select the VSAN enabled cluster to which policies will be granted.



The next step is to create a new VM Storage Policy. A VM Storage Policy is a group of rules that define how storage resources will be consumed by the virtual machines to which the policy is assigned.
Since different virtual machines will likely have different storage requirements, several VM Storage Policies will usually be created.



The rule set deserves a few words. First, select VSAN under Rules based on vendor-specific capabilities.
A whole set of rules will then be available (a PowerCLI sketch of the same capabilities follows the list):

Number of disk stripes per object: The number of HDDs across which each replica of a storage object is striped. A value higher than 1 may result in better performance (e.g. when flash read cache misses need to be serviced from HDD), but also results in higher use of system resources. Default value is 1, maximum value is 12.

Flash read cache reservation (%): Flash capacity reserved as flash read cache for the storage object. Specified as a percentage of the logical size of the object. To be used only for addressing read performance issues. Reserved flash capacity cannot be used by other objects. Unreserved flash is shared fairly among all objects. Default value is 0%, maximum value is 100%.

Number of failures to tolerate: Defines the number of host, disk or network failures an object can tolerate. For n failures tolerated, "n+1" copies of the object are created and "2n+1" hosts contributing storage are required: with the default value of 1, for example, two copies are kept and at least three hosts are needed, which is why a VSAN cluster requires a minimum of three hosts. Default value is 1, maximum value is 3.

Force provisioning: If this option is set to "yes" the object will be provisioned even if the rules specified in the storage policy cannot be satisfied with the resources currently available in the cluster. VSAN will try to bring the object into compliance if and when resources become available. Default value is No.

Object space reservation (%): Percentage of the logical size of the storage object that will be reserved (thick provisioned) upon VM provisioning. The rest of the storage object is thin provisioned. Default value is 0%, maximum value is 100%.
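
These same capabilities can also be driven from PowerCLI through the SPBM cmdlets. Note that those cmdlets only appeared in PowerCLI 6.0, after this article was written, so the following is just a hedged sketch: the policy name, rule values and capability identifiers (such as VSAN.hostFailuresToTolerate) are examples and should be verified against your environment:

#Rule: tolerate one failure
$ftt = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 1
#Rule: stripe each object across two disks
$stripes = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.stripeWidth") -Value 2
#Combine the rules into a rule set and create the policy
$ruleSet = New-SpbmRuleSet -AllOfRules $ftt,$stripes
New-SpbmStoragePolicy -Name "VSAN-Gold" -Description "FTT=1, two stripes" -AnyOfRuleSets $ruleSet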



Select the datastore to which these rules will be applied: vsanDatastore will be our choice.



VM Storage Policies can be applied to already created VMs by going to All vCenter Actions -> VM Storage Policies -> Manage VM Storage Policies.



Select desired storage policy then click Apply to disks.



VM Storage Policies can also be assigned to specific virtual machines during the VM creation process.



You are now ready for the VSAN experience!!

Other blog posts in VSAN Series:

VSAN Part1 - Introduction
VSAN Part2 - Initial Setup
VSAN Part3 - Storage Policies
VSAN Part4 - Automate VSAN using PowerCLI

giovedì 13 febbraio 2014

VMware: VSAN Part2 - Initial Setup

Since VSAN is built into the ESXi 5.5 hypervisor, it does not require an installation, only an enablement. A VSAN capable cluster must be created and the appropriate disks must be claimed by hosts in order to provide capacity and performance to the cluster.

For VSAN testing we need at least three ESXi hosts, each with an unused, unformatted SSD and HDD. VSAN supports a maximum of 1 SSD and 7 HDDs for each host. If you installed ESXi locally on an HDD, that disk cannot be used for VSAN since it has been formatted with VMFS.
VSAN, at this moment, allows up to eight ESXi hosts, whether "active" or "passive", within the same cluster. As explained in the previous article, not every host participating in a VSAN cluster must have a local HDD and SSD (hosts that do are referred to by me, for the sake of simplicity, as "active hosts"), but we need at least three of them for VSAN to work properly since, with the default policy, every VM has its vmdks backed by two hosts with a third host acting as witness.

VSAN can also be tested in a virtual lab, with no hardware requirements except a hypervisor (ESXi or Workstation) and enough disk space.

For the purpose of this article I created a vLab environment for VSAN testing, so let's start by creating three ESXi 5.5 hosts.

Each VM on which ESXi will be installed has been configured with:

-VMware ESXi 5.5
-4GB of RAM
-A 2GB HDD for installing ESXi
-A 4GB *fake* SSD for VSAN
-An 8GB HDD for VSAN

Of course you can tune these values according to your needs.

SSDs in nested virtualization are simply virtual disks presented in a way that makes ESXi recognize them as SSDs. This can be done by following this great article by William Lam: Emulating an SSD Virtual Disk in a VMware Environment.
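
One commonly used trick (not necessarily the exact method described in the linked article) is to flag the nested ESXi VM's data disk as an SSD through the virtualSSD advanced setting; a hedged PowerCLI sketch, where the VM name and SCSI node are only examples and must match your own nested host and disk layout:

#The nested ESXi VM should be powered off when the setting is added
$nestedEsxi = Get-VM -Name "Nested-ESXi-01"
New-AdvancedSetting -Entity $nestedEsxi -Name "scsi0:1.virtualSSD" -Value 1 -Confirm:$false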

Another great resource provided by William Lam is a deployment template for a VSAN host. Basically it creates a VM with the aforementioned specifications, so if you don't want to manually configure a VM just download William's.

After the ESXi hosts have been installed, open the vSphere Web Client and add them to a datacenter.



In order to work, VSAN requires a dedicated network for VSAN traffic. A vmkernel port is required; when you create or modify it, make sure to tick the Virtual SAN Traffic checkbox.
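
The same vmkernel port can be created with PowerCLI, exactly as done in the script from Part4 of this series; the host, vSwitch, IP and subnet mask below are just examples:

New-VMHostNetworkAdapter -VMHost (Get-VMHost -Name "192.168.243.137") -PortGroup VSAN -VirtualSwitch vSwitch0 -IP 10.24.45.1 -SubnetMask 255.255.255.0 -VsanTrafficEnabled:$true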



The resulting vSwitch will be similar to this one:



Now let's create a VSAN enabled cluster. Cluster creation is the same as for any cluster you have already created, but in this case we need to tick the Virtual SAN checkbox. You can leave Automatic under Add disks to storage so that suitable VSAN disks are automatically claimed from each host.
DRS & HA can be enabled as well, since they are fully supported by VSAN.
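
The equivalent PowerCLI call, lifted from the Part4 script of this series (datacenter and cluster names are just examples):

New-Cluster -Location (Get-Datacenter -Name "VSANDC") -Name "NewCluster" -VsanEnabled:$true -VsanDiskClaimMode Automatic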



Add your hosts to the newly created cluster.



VSAN can be managed under the cluster's Manage -> Settings -> Virtual SAN. The General tab reports VSAN status, such as used and usable capacity.



Assign VSAN license under Configuration -> Virtual SAN Licensing.



Now let's assign disks to a disk group. A disk group can be seen as a logical container of SSD and HDD resources, created by aggregating the local SSDs and HDDs of each host. SSDs provide performance and are not counted as usable space, because all writes, after being acknowledged, are destaged from SSDs to HDDs; HDDs, conversely, provide capacity.
Click on the Claim Disks button.



The Claim disks popup window will appear, listing all unused HDDs and SSDs claimable by VSAN for each server.

Select them by clicking Select all eligible disks button.



The disk group will be created.



The changes will be reflected under the General tab. As said before, only the space provided by HDDs is reported as the Total capacity of the VSAN datastore.



At this point the VSAN cluster is correctly set up; we now need to create a custom storage policy and assign it to our VMs residing on VSAN. This will be explained in Part3.

Other blog posts in VSAN Series:


VSAN Part1 - Introduction
VSAN Part2 - Initial Setup
VSAN Part3 - Storage Policies 
VSAN Part4 - Automate VSAN using PowerCLI

venerdì 7 febbraio 2014

VMware: VLAN Tagging - EST, VST & VGT

A few days ago a customer asked me about VLAN tagging in VMware virtual switches and how configurations are reflected at the physical switch level. I already discussed VLANs and vSwitches in a previous article, but in this post I would like to take a brief look at the various VLAN tagging methods that can be implemented in a VMware infrastructure.

There are three different ways in which Ethernet frames can be VLAN tagged: External Switch VLAN Tagging (EST), Virtual Switch VLAN Tagging (VST) and Virtual Guest VLAN Tagging (VGT).

In EST, Ethernet frames are VLAN tagged only at the physical switch level: virtual switches are unaware of VLANs and every physical NIC can be statically configured to carry only one VLAN at a time.
Due to this 1:1 vmnic-to-VLAN association, this tagging method is not the most flexible one and can only be used in environments where the number of VLANs is quite small and no, or only minor, changes are expected in networking.
Even small modifications at the physical switch level (e.g. a switch port that should carry more than one VLAN) will introduce a major amount of work, potentially leading to a complete redesign of the entire virtual networking.

External Switch VLAN Tagging is completely transparent to virtual switches: no VLAN ID has to be set. VLANs must be configured at the physical switch level only.



VST is the most common way to perform VLAN tagging. Its working principles have already been explained in the aforementioned article, but I will summarize them here again: the VLAN tag is added to frames just before they leave the virtual switch.
Every port group can tag frames with a specific VLAN ID between 1 and 4094, allowing a physical NIC to carry more than one VLAN and therefore reducing hardware requirements: no need for a 1:1 mapping like in EST.
The physical switch must be properly configured in order for a particular port to carry more than one VLAN at the same time. These ports are known in Cisco terminology as trunk ports.



In VGT the VLAN tagging is performed at the guest OS level using a specific 802.1Q VLAN trunking driver. The guest OS adds the VLAN tag to frames before they leave the virtual machine's virtual NIC.
VM port groups in virtual switches must be configured with VLAN ID 4095 (i.e. trunk) so they can carry frames from any VLAN; this configuration must be reflected in the respective ports on the physical switch.
Like in VST, a single physical NIC can carry more than one VLAN, reducing hardware requirements.
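
On standard vSwitches both the VST and VGT port group settings can be applied quickly with PowerCLI; a minimal sketch, where the host and port group names are just examples:

#VST: tag a port group with a specific VLAN ID
Get-VirtualPortGroup -VMHost (Get-VMHost -Name "esxi01.lab.local") -Name "VM Network" | Set-VirtualPortGroup -VLanId 20
#VGT: VLAN ID 4095 lets the port group carry frames from any VLAN
Get-VirtualPortGroup -VMHost (Get-VMHost -Name "esxi01.lab.local") -Name "Trunk-PG" | Set-VirtualPortGroup -VLanId 4095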

domenica 2 febbraio 2014

VMware: HP StoreVirtual VSA Part4 - Multi VSA Cluster

In the fourth post of the HP StoreVirtual VSA series we are going to discuss storage clusters composed of two or more virtual storage appliances.
As said in previous articles, HP StoreVirtual is a scale-out storage solution that benefits from the addition of nodes: data reliability increases and throughput improves by load balancing requests across the different nodes.

To create a proper cluster with load balancing features we need at least two VSAs.



When dealing with a cluster comprising two or more HP StoreVirtual VSAs we need to introduce a new component named the Failover Manager (simply referred to as FOM). As with the physical HP StoreVirtual counterpart, the VSA also uses a Failover Manager. Basically the FOM is a virtual appliance deployed in the VMware environment and, to quote the official HP documentation:

A specialized manager running as a VMware guest operating system that can act as a quorum tie-breaker system when installed into a third location in the network to provide for automated failover/failback of the Multi-Site SAN clusters. It also added to two-system management groups to provide the optimum quorum configuration.

The Failover Manager is a specialized version of the SAN/iQ software designed to run as a virtual appliance in either a VMware or Microsoft Hyper-V Server environment. The Failover Manager participates in the management group as a real manager in the system; however, it performs quorum operations only, not data movement operations. It is especially useful in a Multi-Site SAN configuration to manage quorum for the multi-site configuration without requiring additional storage systems to act as managers in the sites.



Although HP StoreVirtual could also use the embedded Virtual Manager to manage quorum ownership in a single-site configuration, it is common practice to deploy a FOM because the Virtual Manager has some known limitations.

The FOM is deployed as a virtual appliance, either installable from the HP StoreVirtual VSA for VMware Installer or downloadable from the HP website as an OVA assembly. In this scenario I used the former option.



Since the FOM needs to communicate with all VSAs in the cluster, a proper IP address must be assigned to it.



Once the FOM has been installed it has to be discovered using the CMC, just like a regular VSA, by entering its IP address.



Regardless of whether you are creating a new cluster or adding a FOM to an existing one, the optimal configuration is one FOM per cluster plus two or more VSAs.

A major benefit of clusters with two VSAs is the ability to use Network RAID-10, in which data is replicated on both VSAs to provide a greater degree of availability.



In case of an outage of the physical ESXi host running one VSA, data is still available and online from the other VSA running on the other powered-on host.



When the VSA on the failed host is powered back on by HA it will automatically be recognized as online and will actively participate in the cluster again.



When three or more VSAs are added to a cluster the Data Protection Level options for volumes become pretty interesting: Network RAID-5 and Network RAID-10+1 can also be selected.

Network RAID-5 is the least space consuming protection level, distributing parity rather than full copies across the VSAs, while Network RAID-10+1 is the most secure, yet the most space consuming, level of data protection available: it replicates volume data on all three VSAs, protecting data against two simultaneous appliance failures.



In a two-VSA cluster, the cluster itself is not impacted in case of a FOM failure.



In case of a simultaneous failure of the FOM and one of the two VSAs, the cluster becomes unusable until the FOM and/or the VSA are brought back online.



To prevent this situation, a good practice is to place each appliance (the FOM plus both VSAs) on a separate physical host, decreasing the probability that a host outage could affect the entire storage cluster.

Other blog posts in HP StoreVirtual VSA Series:

HP StoreVirtual VSA Part1 - Installation
HP StoreVirtual VSA Part2 - Initial Configuration 
HP StoreVirtual VSA Part3 - Management Groups Clusters and Volume
HP StoreVirtual VSA Part4 - Multi VSA Cluster