
Convert a VM snapshot to Memory Dump



Imagine we come across a very critical Virtual Machine hosted on the VMware platform that is hung: the machine has either frozen at a screen or is stuck at the Blue Screen of Death. What option do we have other than hard rebooting the machine to bring its primary functionality back online? Yet we are often asked why the machine got into that state and how we can avoid it happening again.

Yes, we all know that if Crash Dump or Minidump settings are configured on the guest OS, we can analyze the dump to understand the state of the Virtual Machine at that point. However, if crash dump collection is not enabled on the machine, if the pagefile is not sized large enough to capture a crash dump, or if there is not enough free space where the dump would be written, we will not get a usable crash dump for analysis. In that case, before rebooting the server, we can take a Virtual Machine snapshot.

This snapshot can then be converted into a memory dump, which can be analyzed with debugger tools such as WinDbg.

1. Download the vmss2core.exe tool
2. Copy it to a Windows server with sufficient free space
3. Copy the snapshot file [.vmss] from the datastore where the VM is located to the same location as vmss2core.exe
4. Run the utility to convert the snapshot to a dump as shown below

vmss2core -W VM_Snapshot_Filename.vmss

5. This converts the snapshot file into a memory dump that we can use to analyse the cause of the server hang
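The conversion step can be wrapped in a small script. This is only a sketch: the snapshot file name is hypothetical, and the flag choice (-W for older Windows guests, -W8 for Windows 8/Server 2012 and later) is based on vmss2core's usage text, so verify it against your copy of the tool with `vmss2core -h`.

```shell
#!/bin/sh
# Sketch: choose the vmss2core flag for a Windows guest and print the
# conversion command we would run (flag names assumed from the tool's help).
pick_flag() {
  case "$1" in
    win8|win2012|win10|win2016) echo "-W8" ;;   # Windows 8 / Server 2012+
    *)                          echo "-W"  ;;   # older Windows guests
  esac
}

snapshot="VM_Snapshot_Filename.vmss"             # hypothetical snapshot file
echo "vmss2core $(pick_flag win2012) $snapshot"  # produces memory.dmp
```

The resulting memory.dmp can then be opened in WinDbg like any kernel dump.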


What if vCenter Server\Appliance is Down?



The best thing about the VMware platform is the centralized management of all resources through vCenter Server. Using the vSphere Client or Web Client, we connect to vCenter Server to administer the virtual datacenter. Imagine if there were no vCenter Server and we had to connect to each ESXi host individually to manage the VMs; it would be a tedious task.
vCenter Server also provides features such as DRS, Storage DRS, vMotion, HA and FT. Today we are looking at a scenario where the vCenter Server or Appliance goes down and what the impact is on each of these functionalities. Let's check the impact of each one below.

Management :
Managing the environment is not greatly impacted, as we can still connect to each ESXi host via SSH or the vSphere Client and manage the servers. It is less convenient, but there is no impact to the environment.

Virtual Machines & ESXi Hosts:
Virtual Machines and ESXi hosts do not depend on vCenter Server for their functionality or uptime. The hosts can still be reached via SSH or the vSphere Client, and all Virtual Machines keep running.

Distributed Resource Scheduling:
DRS works through vCenter Server to balance resources and Virtual Machines across the ESXi hosts in a DRS cluster, so DRS functionality fails while vCenter Server is down.

vMotion\svMotion:
vMotion and svMotion span hosts and are initiated through vCenter Server, so both vMotion and svMotion will fail while vCenter Server is down.

High Availability:
HA has a medium impact: hosts/clusters with HA enabled keep HA running even when vCenter Server is down, since the HA agents run on the hosts themselves. However, we cannot change any settings, such as Admission Control policies, while vCenter Server is down.

Fault Tolerance:
FT also keeps protecting all Virtual Machines that were configured before vCenter Server went down, but no FT changes can be made while vCenter Server is down.

Distributed Switch:
A Distributed Switch continues to work even while the vCenter service is down; VMs stay connected to the networks it is configured with. However, no configuration changes can be made to the switch until vCenter is back.

VM Snapshots:
There is no issue taking snapshots of a Virtual Machine; we simply connect to the ESXi host directly and take the VM snapshot.

vSphere Update Manager:
Since vSphere Update Manager (VUM) is a vCenter plugin, VUM functionality fails while vCenter is down.

Note:
The features and impacts mentioned above were tested only on vCenter Server 5.x.

Kill VM Stuck Due To Running Tasks


Virtual Machines sometimes hang due to active tasks such as snapshot creation, snapshot removal or disk consolidation. In these cases we usually wait until the task completes on its own, but sometimes the task makes no visible progress.

In order to kill the ongoing task, we need to kill the VM process. Please note that this also powers off the Virtual Machine, and it is not generally recommended because it abruptly terminates in-flight I/O. But if there are no other options and we cannot wait any longer, we can use the following steps to kill the VM:

1. Check which ESXi host the VM is running on
2. Enable SSH on the host from its Security Profile
3. Connect to the ESXi host via SSH [you can use PuTTY] using root credentials
4. Run the command below to list the running Virtual Machines and their World IDs

          esxcli vm process list

5. Locate the VM that you need to kill and note down the World ID
6. Kill the VM and its tasks by running the following command

          esxcli vm process kill --type=[soft, hard, force] --world-id=[world-id]

where type is soft, hard or force (try soft first and escalate only if needed)
and world-id is the World ID of the VM that we noted down in step 4
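The listing from step 4 can be parsed automatically to pick out a VM's World ID. This is a sketch that assumes the `World ID:` / `Display Name:` field layout printed by `esxcli vm process list` on 5.x hosts; the sample output below is made up for illustration.

```shell
#!/bin/sh
# find_world_id: read `esxcli vm process list` output on stdin and print
# the World ID of the VM whose Display Name matches $1.
find_world_id() {
  awk -v name="$1" '
    /World ID:/     { wid = $NF }
    /Display Name:/ { if ($NF == name) print wid }
  '
}

# Made-up sample of the list output (real output has more fields per VM):
sample='app01
   World ID: 35791
   Display Name: app01
db01
   World ID: 35802
   Display Name: db01'

wid=$(printf '%s\n' "$sample" | find_world_id db01)
echo "esxcli vm process kill --type=soft --world-id=$wid"
```

On a real host you would pipe the live output instead: `esxcli vm process list | find_world_id db01`.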

Location of vCenter Server Logs on Windows Server and Appliance



The main focus of this post is the location of the vCenter Server logs. These logs are essential for troubleshooting issues related to vCenter Server. As we know, the vCenter Server service can be installed on a Windows Server as well as on an appliance, and the location of the logs varies across operating systems. There are more detailed articles on the VMware blogs covering in-depth logs and locations for various VMware products; here I am focusing only on the vCenter Server logs. In upcoming posts I will also provide the locations for other VMware products.

There are 2 methods to fetch the VMware vCenter logs:
1. By connecting with the VMware vSphere Client or Web Client and logging on to the vCenter Server
2. By taking an RDP session to the vCenter Server [hosted on Windows Server] and accessing the paths given below, OR by connecting to the vCenter Server via SSH [hosted on the appliance]

1. By connecting with the VMware vSphere Client or Web Client and logging on to the vCenter Server
a. Connect to VMware vCenter via the vSphere or Web Client
b. Go to the Home screen -> click on System Logs
c. At the top, click on Export System Logs
d. On the next prompt, select the vCenter tree and enter the destination to which the logs should be exported.

2. By taking an RDP session to the vCenter Server [hosted on Windows Server] and accessing the paths given below, OR by connecting to the vCenter Server via SSH [hosted on the appliance]

A) On Windows Server having vCenter Server Service Installed
> vCenter Server 5.x and earlier versions [installed on Windows XP, 2000 or 2003] -> %ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\Logs

> vCenter Server 5.x and earlier versions [installed on Windows Vista, 7 or 2008] -> C:\ProgramData\VMware\VMware VirtualCenter\Logs\

> vCenter Server 6.0 -> %ALLUSERSPROFILE%\VMware\vCenterServer\logs

Note: If the vCenter service is running under a different account, the logs will be under that account's profile instead of %ALLUSERSPROFILE%

B) On Appliances having vCenter Server Service Installed
> vCenter Server Appliance 5.X -> /var/log/vmware/vpx

> vCenter Server Appliance 5.X UI -> /var/log/vmware/vami


> vCenter Server Appliance 6.0 -> /var/log/vmware/
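As a quick reference, the paths above can be captured in a small lookup helper. This is only a sketch: the platform key names are our own, and the paths should be checked against the KB articles for your exact version.

```shell
#!/bin/sh
# vclog_path: map a vCenter platform/version (key names are our own) to
# the default log directory listed in this post.
vclog_path() {
  case "$1" in
    win-5x-legacy) echo '%ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\Logs' ;;
    win-5x)        echo 'C:\ProgramData\VMware\VMware VirtualCenter\Logs' ;;
    win-60)        echo '%ALLUSERSPROFILE%\VMware\vCenterServer\logs' ;;
    vcsa-5x)       echo '/var/log/vmware/vpx' ;;
    vcsa-5x-ui)    echo '/var/log/vmware/vami' ;;
    vcsa-60)       echo '/var/log/vmware/' ;;
    *)             echo 'unknown' ;;
  esac
}

vclog_path vcsa-5x   # prints /var/log/vmware/vpx
```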

>> References:
https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1021804
https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2110014

Force Mount Snapshot LUNs on ESXi




We all know that the recommended way to implement a virtual server environment is a cluster of ESXi hosts with shared datastores presented to every host in the cluster. We typically add the storage to one host in the cluster, and it is then automatically presented to all the other hosts. However, there are cases where some hosts do not see the storage disk/share automatically.

In that case, an administrator usually rescans the storage and HBAs, or tries to add the LUN from the Add Storage wizard by identifying the NAA or WWN ID shared by the storage team. This resolves the issue in most cases.

As mentioned, the above steps help in most cases, but there are times when the disk still does not appear on the host. This is because the host considers the LUN a snapshot of an existing disk and will not mount it automatically. To mount the LUN manually, follow the steps below:

1. Connect to the host using an SSH (PuTTY) session
2. Run the following command to list the visible volumes: esxcfg-volume -l
3. In the output, make a note of the volume UUID, as we need it to mount the volume manually
4. Run the following command to mount the volume: esxcfg-volume -m <UUID>
Here <UUID> is replaced with the UUID we obtained in step 2

Note: In the output of esxcfg-volume -l, the storage disk UUID and label are separated by a "/" character. When mounting the disk, use only the UUID portion.

After running the above command, the LUN is mounted on the host successfully, and the change appears in the vSphere Client within a few minutes.
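The UUID extraction in step 3 can be scripted as well. This is a sketch assuming the `VMFS UUID/label:` line format that `esxcfg-volume -l` prints; the sample output below is made up for illustration.

```shell
#!/bin/sh
# extract_uuid: from `esxcfg-volume -l` output on stdin, print only the
# UUID part of the "VMFS UUID/label:" line (the text before the "/").
extract_uuid() {
  awk -F'[ /]' '/VMFS UUID\/label:/ { print $4 }'
}

# Made-up sample of the listing output:
sample='VMFS UUID/label: 4e26f26a-9fe2664c-c9c7-000c2988e4dd/datastore1
Can mount: Yes'

uuid=$(printf '%s\n' "$sample" | extract_uuid)
echo "esxcfg-volume -m $uuid"   # the mount command from step 4
```

On a real host you would pipe the live output: `esxcfg-volume -l | extract_uuid`.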

VMware Configuration Maximums



There have been a lot of improvements and research in each product released by VMware. We all know VMware specializes in virtualizing server infrastructure and has been making life easier for server administrators around the world. With every new release, VMware pushes hard to enable more features and extend support for larger configurations.
In this post, I am collating the configuration maximums supported by the vSphere products and putting them all together.


Virtual Machine Maximums
  ESX 6.0 ESX 5.5 ESX 5.1 ESX 5 ESX 4.1 ESX 4.0 ESX 3.5 ESX 3 ESX 2 ESX 1
Virtual CPUs per virtual machine 128 64 64 32 8 8 4 4 2 1
RAM per virtual machine 4TB 1TB 1TB 1TB 255GB 255GB 65GB 16GB 3.6GB 2GB
Virtual SCSI adapters per virtual machine 4 4 4 4 4 4 4 4 4 4
Virtual SCSI targets per virtual SCSI adapter 15 15                
Virtual SCSI targets per virtual machine 60 60                
Virtual SATA adapters per virtual machine 4 4                
Virtual SATA devices per virtual SATA adapter 30 30                
Virtual disk size 62TB 62TB 2TB 2TB 2TB 2TB 2TB 2TB    
IDE controllers per virtual machine 1 1 1 1 1 1 1      
IDE devices per virtual machine 4 4 4 4 4 4 4      
Floppy controllers per virtual machine 1 1 1 1 1 1 1 1 1  
Floppy devices per virtual machine 2 2 2 2 2 2 2 2 2  
Virtual NICs per virtual machine 10 10 10 10 10 10 4 4 4 4
USB controllers per virtual machine 1 1 1 1 1          
USB devices connected to a virtual machine 20 20 20 20 20          
Parallel ports per virtual machine 3 3 3 3 3 3 3 3 1  
USB 3.0 devices per virtual machine     1 1            
Concurrent remote console connections 40 40 40 40 40 40 10 10    
Video memory per virtual machine 512MB 512MB                


Host Maximums
  ESX 6.0 ESX 5.5 ESX 5.1 ESX 5 ESX 4.1 ESX 4.0 ESX 3.5 ESX 3 ESX 2 ESX 1
Logical CPUs per host 480 320 160 160 160 64 32 32 16 8
Virtual machines per host 2048 512 512 512 320 320 128 128 80 64
Virtual CPUs per host 4096 4096 2048 2048 512 512 128 128 80 64
Virtual CPUs per core 32 32 25 25 25 20 8 8 8 8
RAM per host 12TB 4TB 2TB 2TB 1TB 1TB 256GB 256GB 64GB 64GB
LUNs per server 256 256 256 256 256 256        

vCenter Server Maximums
vCenter Server Maximums ESX 5.5 ESX 5.1 ESX 5 ESX 4.1 ESX 4.0 ESX 3.5 ESX 3
Hosts per vCenter Server 1000 1000 1000 1000 300 200 200
Powered on virtual machines 10000 10000 10000 10000 3000 2000  
Registered virtual machines 15000 15000 15000 15000 4500 2000  
Linked vCenter Servers 10 10 10 10 10    
Hosts in linked vCenter Servers 3000 3000 3000 3000 1000    
Powered on virtual machines in linked vCenter 30000 30000 30000 30000 10000    
Registered virtual machines in linked vCenter 50000 50000 50000 50000 15000    
Concurrent vSphere Clients 100 100 100 100 30    
Number of host per datacenter 500 500 500 400 100    
MAC addresses per vCenter Server 65536 65536 65536        
USB devices connected at vSphere Client 20 20 20        


Cluster and Resource Pool Maximums
  ESX 6.0 ESX 5.5 ESX 5.1 ESX 5 ESX 4.1 ESX 4.0 ESX 3.5
Hosts per cluster 64 32 32 32 32 32 32
Virtual machines per cluster 4000 4000 4000 3000 3000 1280  
Virtual machines per host 2048 512 512 512 320 100  
Maximum concurrent host HA failover     32 32 4 4  
Failover as percentage of cluster     100% 100% 50% 50%  
Resource pools per cluster   1600 1600 1600 512 512 128
Resource pools per host   1600 1600 1600   4096  
Children per resource pool   1024 1024 1024 1024    
Resource pool tree depth   8 8 8 8 12 12

Network Maximums
  ESX 5.5 ESX 5.1 ESX 5 ESX 4.1 ESX 4.0 ESX 3.5 ESX 3
Total virtual network switch ports per host 4096 4096 4096 4096 4096 127  
Maximum active ports per host 1016 1050 1016 1016 1016 1016  
Virtual network switch creation ports 4088 4088 4088 4088 4088    
Port groups 512 256 256 512 512 512  
Distributed virtual network switch ports per vCenter 60000 60000 30000 20000 6000    
Static port groups per vCenter 10000 10000 5000 5000 512    
Ephemeral port groups per vCenter 1016 256 256 1016      
Hosts per VDS 500 500 350 350 64    
Distributed switches per vCenter 128 128 32 32 16    
e1000 1Gb Ethernet ports (Intel PCI-X)   32 32 32 32 32 32
e1000e 1Gb Ethernet ports (Intel PCIe) 24 24 24 24 32 32 32
igb 1Gb Ethernet ports (Intel) 16 16 16 16 16    
tg3 1Gb Ethernet ports (Broadcom) 32 32 32 32 32    
bnx2 1Gb Ethernet ports (Broadcom) 16 16 16 16 16    
forcedeth 1Gb Ethernet ports (NVIDIA)   2 2 2 2    
nx_nic 10Gb Ethernet ports (NetXen) 8 8 8 4 4    
ixgbe 10Gb Ethernet ports (Intel) 8 8 8 4 4    
bnx2x 10Gb Ethernet ports (Broadcom) 8 8 8 4 4    
be2net 10Gb Ethernet ports (Emulex) 8 8 8 4 4    
VMDirectPath PCI/PCIe devices per host 8 8 8 8 8    
VMDirectPath PCI/PCIe devices per virtual machine 4 4 4 4      
Concurrent vMotion operations per host (1Gb/s network) 4 4 4 2 2    
Concurrent vMotion operations per host (10Gb/s network) 8 8 8        

Storage Maximums
  ESX 5.5 ESX 5.1 ESX 5 ESX 4.1 ESX 4.0 ESX 3.5 ESX 3
Qlogic 1Gb iSCSI HBA initiator ports per server 4 4 4 4      
Broadcom 1Gb iSCSI HBA initiator ports per server 4 4 4 4      
Broadcom 10Gb iSCSI HBA initiator ports per server 4 4 4 4      
Software iSCSI NICs per server 8 8 8 8      
Number of total paths on a server 1024 1024 1024 1024      
Number of paths to an iSCSI LUN 8 8 8 8      
Qlogic iSCSI: dynamic targets per adapter port 64 64 64 64      
Qlogic iSCSI: static targets per adapter port 62 62 62 62      
Broadcom 1Gb iSCSI HBA targets per adapter port 64 64 64 64      
Broadcom 10Gb iSCSI HBA targets per adapter port 128 128 128 64      
Software iSCSI targets 256 256 256 256 256    
NFS mounts per host 256 256 256 64 64    
FC LUNs per host 256 256 256 256 256 256 256
FC LUN ID 255 255 255 255 255 255 255
FC Number of paths to a LUN 32 32 32 32 32 32 32
Number of total paths on a server 1024 1024 1024 1024 1024 1024 1024
Number of HBAs of any type 8 8 8 8 8    
HBA ports 16 16 16 16 16 16  
Targets per HBA 256 256 256 256 256 15  
Software FCoE adapters 4 4 4        
Volumes per host 256 256 256 256 256 256  
Hosts per volume 64 64 64 64 64 32  
Powered on virtual machines per VMFS volume 2048 2048 2048   256    
Concurrent vMotion operations per datastore 128 128 128        
Concurrent Storage vMotion operations per datastore 8 8 8        
Concurrent Storage vMotion operations per host 2 2 2        
Concurrent non vMotion provisioning operations per host 8 8 8        
VMFS Volume size 64TB 64TB 64TB 64TB 64TB 64TB  
Virtual disks per datastore cluster 9000            
Datastores per datastore cluster 64            
Datastore clusters per vCenter 256            

Fault Tolerance Maximums
  ESX 6.0 ESX 5.5 ESX 5.1 ESX 5.0
Virtual disks 16 16 16  
Virtual CPUs per virtual machine 4 1 1 1
RAM per FT VM 64GB 64GB 64GB 64GB
Virtual machines per host 4 4 4 4

Virtual SAN Maximums
  ESX 5.5
Virtual SAN disk groups per host 5
Magnetic disks per disk group 7
SSD disks per disk group 1
Spinning disks in all diskgroups per host 35
Components per Virtual SAN host 3000
Number of Virtual SAN nodes in a cluster 32
Number of datastores per cluster 1
Virtual machines per host 100
Virtual machines per cluster 3200
Virtual machine virtual disk size 2032GB
Disk stripes per object 12
Percentage of flash read cache reservation 100
Failure to tolerate 3
Percentage of object space reservation 100
Virtual SAN networks/physical network fabrics 2

All the information above was collected from VMware blogs and articles:
https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf