Kill VM Stuck Due To Running Tasks


Virtual Machines sometimes hang because of active operations such as snapshot creation, snapshot removal, disk consolidation or other tasks. In these cases we usually wait until the task completes on its own. However, in some cases the task shows no progress at all.

In order to kill the stuck task, we need to kill the VM process. Please note that this also powers off the Virtual Machine and is not always recommended, as it abruptly terminates in-flight I/O. But if there are no other options and we cannot wait any longer, we can use the following steps to kill the VM:

1. Check which ESXi host the VM is running on
2. Enable SSH on the host from its Security Profile
3. Connect to the ESXi host via SSH [you can use PuTTY] using root credentials
4. Run the command below to list the running Virtual Machines and their World IDs

          esxcli vm process list

5. Locate the VM that you need to kill and note down its World ID
6. Kill the VM and its tasks by running the following command

          esxcli vm process kill --type=[soft, hard, force] --world-id=[world-id]

where type is either a soft, hard or force kill,
and world-id is the World ID of the VM that we noted down in Step 5
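
For illustration, a run might look like this (the VM name and World ID below are made up for the example; always try a soft kill first and escalate only if the VM is still running):

    # List running VMs and their World IDs
    esxcli vm process list

    # Example output (illustrative values only):
    # TestVM01
    #    World ID: 123456
    #    Process ID: 0
    #    VMX Cartel ID: 123455
    #    Config File: /vmfs/volumes/datastore1/TestVM01/TestVM01.vmx

    # Escalate gradually: soft, then hard, then force
    esxcli vm process kill --type=soft --world-id=123456
    esxcli vm process kill --type=hard --world-id=123456
    esxcli vm process kill --type=force --world-id=123456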

ESXi Server Build Checklist


Below is a checklist that can be followed for an ESXi server build; a few quick verification commands are included after the list. There could be some points I have missed, so please let me know in the comments if anything should be added.

1. Racking and stacking of the Cisco UCS server
2. CIMC configuration (IP, CIMC name, DNS & standard CIMC password)
3. DNS record creation for the CIMC & ESXi servers
4. Verification of CIMC connectivity using the CIMC hostname
5. ESXi installed on local storage or SAN
6. ESXi version
7. Addition of the ESXi host to the required vCenter
8. NTP configuration
9. DNS & routing configuration
10. License configuration for ESXi
11. Boot LUN rename (esxiname_boot_lun)
12. Addition of the ESXi servers to the Distributed Switch
13. DvSwitch creation [if not created already]
14. Management PortGroup creation
15. Creation of vMotion & Virtual Machine traffic PortGroups in the DvSwitch
16. Configuration of Port Channel with LACP on the switch ports
17. Configuration of the Teaming and Failover policy as Route Based on IP Hash [if the above step is done]
18. Creation of the NFS PortGroup and VMkernel IP configuration
19. Allocation of existing datastores to the new ESXi server
20. Configuration of the scratch partition on shared storage
21. Addition of the ESXi host to the HA/DRS cluster
22. Creation and configuration of the HA/DRS cluster [if not already done]
23. DRS set to Fully Automated
24. Configuration of EVC [Enhanced vMotion Compatibility] for the ESXi cluster
25. Fixing of errors/warnings on the ESXi host [if any]
26. Verification of vMotion functionality
27. Addition of the ESXi server to monitoring
28. Addition of the ESXi server to Cisco IMC Supervisor [if it is a Cisco server]
29. Addition of the cluster and datacenter to the CMDB together with the ESXi host
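
Once the build is done, some of these checks (version, hostname/DNS, routing, NTP) can be verified quickly from an SSH session on the host. This is only a small sketch using standard ESXi CLI commands that print the current configuration:

    esxcli system version get            # ESXi version and build number
    esxcli system hostname get           # hostname and FQDN
    esxcli network ip dns server list    # configured DNS servers
    esxcli network ip route ipv4 list    # routing table
    /etc/init.d/ntpd status              # NTP daemon status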

Default Passwords for Remote Management Boards for Physical Servers



Every Server Administrator has to deal with physical servers from different manufacturers and of different models. The market leaders are HP, Dell, IBM and Cisco, all of which offer a large range of physical rack and blade servers. Every now and then a server goes out of reach, making it difficult to troubleshoot the connectivity issue remotely, so we have a Remote Management Board attached to these servers as an alternate way to connect to the physical box when it is off the network or down.

Each of these vendors has its own name for the Remote Management Board. HP calls it Integrated Lights-Out [ILO], Dell calls it the Dell Remote Access Card [DRAC], IBM calls it the Integrated Management Module [IMM] and Cisco calls it the Cisco Integrated Management Controller [CIMC]. They all serve the same purpose.

I am writing this post to provide the default passwords for these Remote Management Boards. When the RMBs are initially configured, we need these credentials to access them.
Please find the credentials below:

Make  | RMB  | Username      | Password           | Remarks
HP    | ILO  | Administrator | Printed on the tag | 8-character password printed on the tag of the hardware
Dell  | DRAC | root          | calvin             |
IBM   | IMM  | USERID        | PASSW0RD           | The password contains the number "zero" (0), not the letter "O"
Cisco | CIMC | admin         | password           |
What are VSS Writers and How to Troubleshoot Error States



Every Windows Administrator comes across backup issues related to file-level backup. We often see these issues fixed by a reboot [as we all know, a reboot fixes most issues], but it is hard to get the required application/server downtime to fix them. Requesting a reboot from the server/application owners every now and then also causes a lot of friction, especially when it keeps happening on the same server.

Most of these backups fail because one of the VSS writers is in an Error/Failed or Waiting for Completion state. A reboot does reset the VSS writers and hence fixes the backup failures.

What are these VSS Writers?
VSS writers are application-specific components of Microsoft's Volume Shadow Copy Service [VSS]. Each writer works with its application so that a complete, consistent snapshot of the data can be taken even while input/output transactions are ongoing, ensuring that no incomplete data is captured. If a transaction interferes with the snapshot process, the VSS writer may go into an Error state, which in turn causes backup failures. Most administrators recommend rebooting the server to fix this, but there is a better way to reduce downtime: bring the writers back to a Stable state by restarting the associated Windows service.

I have listed below a few VSS writers and their associated Windows services; restarting the service terminates the stuck snapshot operation and brings the writer back to a Stable state. Simply restart the listed service if its VSS writer is in an Error, Failed or Waiting for Completion state; a small PowerShell sketch follows the table.

VSS Writer Name | Service Name | Service Display Name
ASR Writer | VSS | Volume Shadow Copy
BITS Writer | BITS | Background Intelligent Transfer Service
Certificate Authority | CertSvc | Active Directory Certificate Services
COM+ REGDB Writer | VSS | Volume Shadow Copy
DFS Replication service writer | DFSR | DFS Replication
DHCP Jet Writer | DHCPServer | DHCP Server
FRS Writer | NtFrs | File Replication
FSRM writer | srmsvc | File Server Resource Manager
IIS Config Writer | AppHostSvc | Application Host Helper Service
IIS Metabase Writer | IISADMIN | IIS Admin Service
Microsoft Exchange Replica Writer | MSExchangeRepl | Microsoft Exchange Replication Service
Microsoft Exchange Writer | MSExchangeIS | Microsoft Exchange Information Store
Microsoft Hyper-V VSS Writer | vmms | Hyper-V Virtual Machine Management
MSMQ Writer | MSMQ | Message Queuing
MSSearch Service Writer | WSearch | Windows Search
NPS VSS Writer | EventSystem | COM+ Event System
NTDS | NTDS | Active Directory Domain Services
Registry Writer | VSS | Volume Shadow Copy
Shadow Copy Optimization Writer | VSS | Volume Shadow Copy
SMS Writer | SMS_SITE_VSS_WRITER | SMS_SITE_VSS_WRITER
SqlServerWriter | SQLWriter | SQL Server VSS Writer
System Writer | CryptSvc | Cryptographic Services
TermServLicensing | TermServLicensing | Remote Desktop Licensing
WMI Writer | Winmgmt | Windows Management Instrumentation
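
As a minimal sketch (using the SQL writer as an example; substitute the service mapped to whichever writer is failing), the check-and-restart sequence from an elevated PowerShell prompt looks like this:

    # List all VSS writers and their current state
    vssadmin list writers

    # If, say, "SqlServerWriter" reports a Failed state, restart its service
    Restart-Service -Name SQLWriter

    # Confirm the writer has returned to a Stable state
    vssadmin list writers | Select-String -Context 0,4 "SqlServerWriter"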


Location of vCenter Server Logs on Windows Server and Appliance


The main focus of this post is the location of the vCenter Server logs. These logs are essential for technical troubleshooting related to vCenter Server. As we know, the vCenter Server service can be installed on a Windows Server or deployed as an appliance, and the log location varies between the operating systems. There are more detailed articles on the VMware blogs covering in-depth logs and locations for the various VMware products; here I am only focusing on the vCenter Server logs. In upcoming posts I will also provide the locations for other VMware products.

There are two methods to fetch the vCenter logs:
1. By connecting with the VMware vSphere Client or Web Client and logging on to the vCenter Server
2. By taking an RDP session to the vCenter Server [hosted on Windows Server] and accessing the paths given below, OR by connecting to the vCenter Server via SSH [hosted on the appliance]

1. By connecting with the VMware vSphere Client or Web Client and logging on to the vCenter Server
a. Connect to the vCenter Server via the vSphere Client or Web Client
b. Go to the Home screen -> click System Logs
c. At the top, click Export System Logs
d. On the next prompt, select the vCenter tree and enter the destination where the logs should be exported.
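
As an alternative to the client UI (not part of the original steps, just a sketch assuming VMware PowerCLI is installed and the vCenter hostname below is a placeholder), the same diagnostic bundle can be generated from PowerCLI:

    # Connect to the vCenter Server
    Connect-VIServer -Server vcenter01.example.local

    # Generate a vCenter Server diagnostic log bundle and save it locally
    Get-Log -Bundle -DestinationPath "C:\Temp\vc-logs"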

2. By taking an RDP session to the vCenter Server [hosted on Windows Server] and accessing the paths given below, OR by connecting to the vCenter Server via SSH [hosted on the appliance]

A) On Windows Server having vCenter Server Service Installed
> vCenter Server 5.x and earlier versions [if installed on Windows XP, 2000 or 2003] -> %ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\Logs

> vCenter Server 5.x and earlier versions [if installed on Windows Vista, 7 or 2008] -> C:\ProgramData\VMware\VMware VirtualCenter\Logs\

> vCenter Server 6.0 -> %ALLUSERSPROFILE%\VMWare\vCenterServer\logs

Note: If the vCenter service is running under a different account, the logs will be located under that account's profile instead of %ALLUSERSPROFILE%

B) On Appliances having vCenter Server Service Installed
> vCenter Server Appliance 5.X -> /var/log/vmware/vpx

> vCenter Server Appliance 5.X UI -> /var/log/vmware/vami


> vCenter Server Appliance 6.0 -> /var/log/vmware/
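
On the appliance, for a quick look without exporting a full bundle, you can follow the main vpxd log over SSH (path shown for a 5.x appliance; adjust the directory for your version):

    # Follow the vCenter Server daemon log live
    tail -f /var/log/vmware/vpx/vpxd.log

    # Or list the last 20 error entries
    grep -i error /var/log/vmware/vpx/vpxd.log | tail -n 20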

>> References:
https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1021804


https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2110014

Force Mount Snapshot LUNs on ESXi


Force Mounting Snapshot LUNs on VMware ESXi Hosts


We all know that the recommended way to implement a virtual server environment is to have clustered ESXi hosts together with shared datastores presented to all the hosts in the same ESXi cluster. We typically add the storage to one host in the cluster, and it is then automatically presented/reflected on all the other hosts in the cluster, but there are cases where some hosts do not see the storage disk/share automatically.

In that case, an Administrator usually rescans the storage and HBAs, or tries to add the LUN from the Add Storage wizard by identifying the disk using the NAA or WWN ID shared by the Storage team, and in most cases this is enough to add the presented disk to the host.

As mentioned, the above steps help in most cases, but there are times when the disk still does not appear on the host. This happens because the host considers the LUN to be a snapshot disk and will not mount it automatically. To mount the LUN manually, follow the steps below:

1. Connect to the host using an SSH session [e.g. PuTTY]
2. Run the following command to list the volumes detected as snapshots: esxcfg-volume -l
3. From the listed output, make a note of the volume UUID, as we need it to mount the volume manually.
4. Run the following command to mount the volume: esxcfg-volume -m <UUID>
Here <UUID> is replaced with the UUID we noted down in step 3.

Note: In the output of esxcfg-volume -l, the storage disk UUID and the datastore label are shown separated by a "/" character. When mounting, use only the UUID portion (the part before the "/").
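
For illustration, the exchange might look like this (the UUID, label and NAA ID below are made-up placeholders; note that -m mounts the volume for the current boot only, while -M mounts it persistently):

    # List volumes detected as snapshots / unresolved VMFS copies
    esxcfg-volume -l
    # VMFS UUID/label: 4f0e2c56-52d14e10-9bfa-001b21857cbc/Datastore01
    # Can mount: Yes
    # Can resignature: Yes
    # Extent name: naa.60060160a0b0260001234567890abcde:1

    # Mount the volume using only the UUID portion
    esxcfg-volume -m 4f0e2c56-52d14e10-9bfa-001b21857cbc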

By running the above command we can mount the LUN on the host successfully, and the change will be visible in the vSphere Client within a few minutes.

Various Ways to Login to MS Azure



Recently I attended a training on Microsoft's cloud computing solution, Microsoft Azure. MS Azure was announced to the world in 2008 and finally went live on 1st February 2010.

I will be covering many aspects of Azure in upcoming posts. In this post, I am only covering the different ways to connect to your subscription on the Azure cloud.

Classic Portal [Old Portal] 
This portal will not be used much in the future, as Microsoft plans to decommission it completely and move to the newer, more consolidated portal.
This Portal can be accessed using the Link https://manage.windowsazure.com 
The screenshot below gives a glimpse of the Classic Portal.

On the left-hand side we see a set of options to select from. This pane is called the Hub Menu, and when you click on any option a new window is displayed, which is called a Blade in Azure terminology. The same holds true for the new portal.

New Azure Portal
This is the new portal from Microsoft, which is more streamlined and is designed for a better overview and better usability. It can be accessed at https://portal.azure.com. The portal is easier to navigate and has a tile-like view which can easily be customized, as the various components can be dragged and dropped onto the dashboard tiles.
This portal is laid out much like the Classic Portal, with the Hub Menu on the left-hand side and blades displayed overlapping the dashboard window. The white boxes are the various Azure components such as Resources, Resource Groups, VNets and Storage Accounts. In upcoming posts I will cover the major differences between the two portals; it is important to understand both until all the functionality of the Classic Portal has been carried over to the newer one.

Connect using Powershell
Every cloud or infrastructure admin knows that almost all mass administration and automation is carried out using PowerShell scripts. Believe me, you will hardly need the two portals above once you get comfortable with PowerShell scripting, as it offers some very cool and swift ways to manage things in the Azure cloud.
Once the Windows SDK for VS 2013 and the Microsoft Azure PowerShell module are installed on your machine, you are good to go with scripting. Connect to the Azure cloud using the cmdlet Login-AzureRmAccount; a window will prompt for your single sign-on credentials, as shown below.
Once the credentials are verified and accepted, you can manage the cloud environment from PowerShell. You do not need to be connected to a corporate network, Citrix or any VDI network: Microsoft Azure can be managed from anywhere with Internet access, unless this is restricted by the Administrators.
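
As a minimal sketch of a first session with the AzureRM module (the subscription name below is a placeholder):

    # Sign in with your single sign-on credentials (opens the login prompt)
    Login-AzureRmAccount

    # List the subscriptions visible to the account
    Get-AzureRmSubscription

    # Select the subscription to work in
    Select-AzureRmSubscription -SubscriptionName "My-Test-Subscription"

    # Quick sanity check: list the existing resource groups
    Get-AzureRmResourceGroup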


I will be sharing more posts on Azure and related technologies. Please share your views. Thanks!

Active Directory - Flexible Single Master Operations Role



Active Directory relies on a set of single-master operations roles, known as Flexible Single Master Operations [FSMO] roles. These roles can be hosted on different Windows domain controllers, or a single domain controller can hold all of them.

There are 5 FSMO roles in total: 2 forest-wide roles and 3 domain-wide roles.
That is, 2 roles are held by domain controllers on behalf of the entire forest, whereas the domain-wide roles are held by domain controllers within each domain.
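
Before going through each role, here is a quick way to check which domain controllers currently hold them; this is only a sketch using the built-in netdom tool and the ActiveDirectory PowerShell module (part of RSAT):

    # Lists all five role holders in one shot
    netdom query fsmo

    # PowerShell equivalent
    Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster
    Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster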

Forest Wide Roles:
Schema Master Role
Active Directory uses attributes to define objects such as users and computers. These attributes are also used by applications that depend on Active Directory, such as Exchange, Lync and many others. The Schema Master is the role responsible for writing changes to the schema of the forest. Schema changes are irreversible and, once made, are replicated to the other domain controllers in the forest.

Domain Naming Master Role
This role validates the domain name space in the Partitions container and is used when adding or removing a domain from the forest. It verifies that a domain name is unique before the domain is added to the forest, and it is the role responsible for writing to the Partitions container.


Domain Wide Roles:
PDC Emulator Operation Role
All the FSMO roles are equally important, but the PDC Emulator is considered the one that most needs to be online and available, as it helps with password handling and time synchronization within the domain. Password changes are sent to the PDC Emulator and then replicated to the other domain controllers.
The PDC Emulator also plays a vital role in time sync where the domain hierarchy configuration is in place: it syncs with the external time source, and the other domain members sync with the PDC Emulator.
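
As an illustration of that time hierarchy (the NTP server name below is a placeholder, and the commands are run on the DC holding the PDC Emulator role), the external time source is configured with w32tm:

    # Point the PDC Emulator at an external NTP source and mark it as reliable
    w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /reliable:YES /update

    # Apply the change and force a resync
    Restart-Service w32time
    w32tm /resync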

RID Master Role
This role is responsible for handing out RID pools to the domain controllers of the domain, which are needed when creating AD objects such as users and computers. An object's SID consists of the domain SID [which is common to all objects in the domain] plus a RID [Relative ID] assigned to the individual object.

Infrastructure Master Role
The Infrastructure FSMO role owner is the DC responsible for updating a cross-domain object reference in the event that the referenced object is moved, renamed, or deleted. In this case, the Infrastructure Master role should be held by a domain controller that is not a GC server. If the Infrastructure Master runs on a GC server, it will not update object information, because it does not contain any references to objects that it does not hold. This is because a GC server holds a partial replica of every object in the forest. When an object in one domain is referenced by another object in another domain, it represents the reference as a dsname. If all the domain controllers in a domain also host the GC, then all the domain controllers have the current data, and it is not important which domain controller owns the Infrastructure Master (IM) role




VMware Configuration Maximums



There has been a lot of improvement and research in each product released by VMware. We all know that VMware specializes in virtualizing server infrastructure and has been making life easier for server administrators around the world. With every new product release, VMware pushes hard to enable more features and extend the supported configuration limits.
In this post I am collating the configuration maximums supported by the vSphere products and putting them all together.


Virtual Machine Maximums
  ESX 6.0 ESX 5.5 ESX 5.1 ESX 5 ESX 4.1 ESX 4.0 ESX 3.5 ESX 3 ESX 2 ESX 1
Virtual CPUs per virtual machine 128 64 64 32 8 8 4 4 2 1
RAM per virtual machine 4TB 1TB 1TB 1TB 255GB 255GB 65GB 16GB 3.6GB 2GB
Virtual SCSI adapters per virtual machine 4 4 4 4 4 4 4 4 4 4
Virtual SCSI targets per virtual SCSI adapter 15 15                
Virtual SCSI targets per virtual machine 60 60                
Virtual SATA adapters per virtual machine 4 4                
Virtual SATA devices per virtual SATA adapter 30 30                
Virtual disk size 62TB 62TB 2TB 2TB 2TB 2TB 2TB 2TB    
IDE controllers per virtual machine 1 1 1 1 1 1 1      
IDE devices per virtual machine 4 4 4 4 4 4 4      
Floppy controllers per virtual machine 1 1 1 1 1 1 1 1 1  
Floppy devices per virtual machine 2 2 2 2 2 2 2 2 2  
Virtual NICs per virtual machine 10 10 10 10 10 10 4 4 4 4
USB controllers per virtual machine 1 1 1 1 1          
USB devices connected to a virtual machine 20 20 20 20 20          
Parallel ports per virtual machine 3 3 3 3 3 3 3 3 1  
USB 3.0 devices per virtual machine     1 1            
Concurrent remote console connections 40 40 40 40 40 40 10 10    
Video memory per virtual machine 512MB 512MB                


Host Maximums
  ESX 6.0 ESX 5.5 ESX 5.1 ESX 5 ESX 4.1 ESX 4.0 ESX 3.5 ESX 3 ESX 2 ESX 1
Logical CPUs per host 480 320 160 160 160 64 32 32 16 8
Virtual machines per host 2048 512 512 512 320 320 128 128 80 64
Virtual CPUs per host 4096 4096 2048 2048 512 512 128 128 80 64
Virtual CPUs per core 32 32 25 25 25 20 8 8 8 8
RAM per host 12TB 4TB 2TB 2TB 1TB 1TB 256GB 256GB 64GB 64GB
LUNs per server 256 256 256 256 256 256        

vCenter Server Maximums
vCenter Server Maximums ESX 5.5 ESX 5.1 ESX 5 ESX 4.1 ESX 4.0 ESX 3.5 ESX 3
Hosts per vCenter Server 1000 1000 1000 1000 300 200 200
Powered on virtual machines 10000 10000 10000 10000 3000 2000  
Registered virtual machines 15000 15000 15000 15000 4500 2000  
Linked vCenter Servers 10 10 10 10 10    
Hosts in linked vCenter Servers 3000 3000 3000 3000 1000    
Powered on virtual machines in linked vCenter 30000 30000 30000 30000 10000    
Registered virtual machines in linked vCenter 50000 50000 50000 50000 15000    
Concurrent vSphere Clients 100 100 100 100 30    
Number of host per datacenter 500 500 500 400 100    
MAC addresses per vCenter Server 65536 65536 65536        
USB devices connected at vSphere Client 20 20 20        


Cluster and Resource Pool Maximums
  ESX 6.0 ESX 5.5 ESX 5.1 ESX 5 ESX 4.1 ESX 4.0 ESX 3.5
Hosts per cluster 64 32 32 32 32 32 32
Virtual machines per cluster 4000 4000 4000 3000 3000 1280  
Virtual machines per host 2048 512 512 512 320 100  
Maximum concurrent host HA failover     32 32 4 4  
Failover as percentage of cluster     100% 100% 50% 50%  
Resource pools per cluster   1600 1600 1600 512 512 128
Resource pools per host   1600 1600 1600   4096  
Children per resource pool   1024 1024 1024 1024    
Resource pool tree depth   8 8 8 8 12 12

Network Maximums
  ESX 5.5 ESX 5.1 ESX 5 ESX 4.1 ESX 4.0 ESX 3.5 ESX 3
Total virtual network switch ports per host 4096 4096 4096 4096 4096 127  
Maximum active ports per host 1016 1050 1016 1016 1016 1016  
Virtual network switch creation ports 4088 4088 4088 4088 4088    
Port groups 512 256 256 512 512 512  
Distributed virtual network switch ports per vCenter 60000 60000 30000 20000 6000    
Static port groups per vCenter 10000 10000 5000 5000 512    
Ephemeral port groups per vCenter 1016 256 256 1016      
Hosts per VDS 500 500 350 350 64    
Distributed switches per vCenter 128 128 32 32 16    
e1000 1Gb Ethernet ports (Intel PCI-X)   32 32 32 32 32 32
e1000e 1Gb Ethernet ports (Intel PCIe) 24 24 24 24 32 32 32
igb 1Gb Ethernet ports (Intel) 16 16 16 16 16    
tg3 1Gb Ethernet ports (Broadcom) 32 32 32 32 32    
bnx2 1Gb Ethernet ports (Broadcom) 16 16 16 16 16    
forcedeth 1Gb Ethernet ports (NVIDIA)   2 2 2 2    
nx_nic 10Gb Ethernet ports (NetXen) 8 8 8 4 4    
ixgbe 10Gb Ethernet ports (Intel) 8 8 8 4 4    
bnx2x 10Gb Ethernet ports (Broadcom) 8 8 8 4 4    
be2net 10Gb Ethernet ports (Emulex) 8 8 8 4 4    
VMDirectPath PCI/PCIe devices per host 8 8 8 8 8    
VMDirectPath PCI/PCIe devices per virtual machine 4 4 4 4      
Concurrent vMotion operations per host (1Gb/s network) 4 4 4 2 2    
Concurrent vMotion operations per host (10Gb/s network) 8 8 8        

Storage Maximums
  ESX 5.5 ESX 5.1 ESX 5 ESX 4.1 ESX 4.0 ESX 3.5 ESX 3
Qlogic 1Gb iSCSI HBA initiator ports per server 4 4 4 4      
Broadcom 1Gb iSCSI HBA initiator ports per server 4 4 4 4      
Broadcom 10Gb iSCSI HBA initiator ports per server 4 4 4 4      
Software iSCSI NICs per server 8 8 8 8      
Number of total paths on a server 1024 1024 1024 1024      
Number of paths to a iSCSI LUN 8 8 8 8      
Qlogic iSCSI: dynamic targets per adapter port 64 64 64 64      
Qlogic iSCSI: static targets per adapter port 62 62 62 62      
Broadcom 1Gb iSCSI HBA targets per adapter port 64 64 64 64      
Broadcom 10Gb iSCSI HBA targets per adapter port 128 128 128 64      
Software iSCSI targets 256 256 256 256 256    
NFS mounts per host 256 256 256 64 64    
FC LUNs per host 256 256 256 256 256 256 256
FC LUN ID 255 255 255 255 255 255 255
FC Number of paths to a LUN 32 32 32 32 32 32 32
Number of total paths on a server 1024 1024 1024 1024 1024 1024 1024
Number of HBAs of any type 8 8 8 8 8    
HBA ports 16 16 16 16 16 16  
Targets per HBA 256 256 256 256 256 15  
Software FCoE adapters 4 4 4        
Volumes per host 256 256 256 256 256 256  
Hosts per volume 64 64 64 64 64 32  
Powered on virtual machines per VMFS volume 2048 2048 2048   256    
Concurrent vMotion operations per datastore 128 128 128        
Concurrent Storage vMotion operations per datastore 8 8 8        
Concurrent Storage vMotion operations per host 2 2 2        
Concurrent non vMotion provisioning operations per host 8 8 8        
VMFS Volume size 64TB 64TB 64TB 64TB 64TB 64TB  
Virtual disks per datastore cluster 9000            
Datastores per datastore cluster 64            
Datastore clusters per vCenter 256            

Fault Tolerance Maximums
  ESX 6.0 ESX 5.5 ESX 5.1 ESX 5.0
Virtual disks 16 16 16  
Virtual CPUs per virtual machine 4 1 1 1
RAM per FT VM 64GB 64GB 64GB 64GB
Virtual machines per host 4 4 4 4

Virtual SAN Maximums
  ESX 5.5
Virtual SAN disk groups per host 5
Magnetic disks per disk group 7
SSD disks per disk group 1
Spinning disks in all diskgroups per host 35
Components per Virtual SAN host 3000
Number of Virtual SAN nodes in a cluster 32
Number of datastores per cluster 1
Virtual machines per host 100
Virtual machines per cluster 3200
Virtual machine virtual disk size 2032GB
Disk stripes per object 12
Percentage of flash read cache reservation 100
Failure to tolerate 3
Percentage of object space reservation 100
Virtual SAN networks/physical network fabrics 2

All the information is collected from VMware blogs and articles.
https://www.vmware.com/pdf/vsphere6/r60/vsphere-60-configuration-maximums.pdf