- Tune Virtual Machine memory configurations
- Tune Virtual Machine networking configurations
- Tune Virtual Machine CPU configurations
- Tune Virtual Machine storage configurations
Tune Virtual Machine memory configurations
· Use shares, limits and reservations to apply prioritization policies to your VMware environment that reflect your requirements for what should happen if memory contention occurs on an ESXi host
· Ensure VMware Tools is installed and up to date on all guest VMs; this helps ensure that VMware is able to properly manage memory usage from within the guest VM.
· Use an appropriate location for the Host cache (SSD is the canonical example)
· Changing the configuration can be done with the vSphere Client (except for Virtual Machine Hardware version vmx-10) or the vSphere Web Client (all versions).
· The maximum amount of Virtual Machine Memory depends on the Virtual Machine Hardware Version
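As a rough sketch, the shares/limit/reservation settings above correspond to entries like the following in a virtual machine's .vmx file. The values are illustrative only, and in practice you would normally set them through the vSphere Client rather than editing the file by hand:

```
memSize = "4096"
sched.mem.min = "1024"
sched.mem.max = "2048"
sched.mem.shares = "high"
```

Here memSize is the configured memory in MB, sched.mem.min is the reservation guaranteed to the VM even under contention, sched.mem.max caps the host RAM the VM may consume, and sched.mem.shares sets its relative priority when contention occurs.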
Tune Virtual Machine networking configurations
· Use a VMXNET3 adapter by default; choosing an alternative virtual adapter should be an unusual decision that requires justification.
· If you need to replace an existing vNIC with a VMXNET3 version, consider capturing the MAC address of the old NIC first so that it can be re-applied to the replacement NIC.
Here’s an easy one – use the paravirtualized network adapter, also known as the VMXNET3 adapter:
– Requires VMware tools to be installed
– Requires virtual machine hardware version 7 or later
– Ensure the guest operating system is supported
– Enable jumbo frames for the VM if the rest of the infrastructure is using jumbo frames
– The jumbo frame MTU is set in the guest OS driver
Enabling jumbo frame support on a virtual machine requires an Enhanced VMXNET (or VMXNET3) adapter for that virtual machine.
Check that the adapter is connected to a standard switch or distributed switch with jumbo frames enabled.
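A minimal .vmx sketch of a VMXNET3 vNIC (the network name is a placeholder for your own port group):

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VM Network"
```

Note that this only selects the adapter type; the jumbo frame MTU (e.g. 9000) is still configured inside the guest OS driver, and only helps if the switches and physical NICs along the path also carry jumbo frames.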
Network I/O Control
A technology that allows a vDS to allocate network bandwidth according to traffic type by automatically generating network resource
pools that correspond to each type of network traffic recognized by vSphere. This includes VM, management, vMotion, Fault Tolerance, vSphere Replication, iSCSI, and NAS traffic types. It also allows the use of user-defined network resource pools.
There are several pre-defined network resource pools:
· Fault Tolerance Traffic
· iSCSI Traffic
· vMotion Traffic
· Management Traffic
· vSphere Replication Traffic
· NFS Traffic
· Virtual Machine Traffic.
A primer on Network I/O Control
SplitRx mode allows a host to use multiple physical CPUs to process network packets received in a single network queue. This feature can improve network performance for certain types of workloads, such as when multiple virtual machines on the same host are receiving multicast traffic from the same source.
SplitRx mode can only be configured on VMXNET3 virtual network adapters and is disabled by default. It can be enabled on a per-NIC basis by using the ethernetX.emuRxMode variable in the virtual machine's .vmx file, where X is the ID of the virtual network adapter. Setting ethernetX.emuRxMode = "0" disables SplitRx on an adapter, while setting ethernetX.emuRxMode = "1" enables it.
To change this setting using the vSphere Client, select the virtual machine and click Edit Settings. On the Options tab, click Configuration Parameters (found under the General section of the Advanced options) and add or edit the ethernetX.emuRxMode entry.
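Putting the parameters from the text together, enabling SplitRx on a VM's first vNIC would look like this in the .vmx file (the adapter must already be a VMXNET3 device):

```
ethernet0.virtualDev = "vmxnet3"
ethernet0.emuRxMode = "1"
```

Setting the value back to "0" disables SplitRx for that adapter again.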
Tune Virtual Machine CPU configurations
· Don’t over-allocate vCPUs; this can lead to poor VM performance as the CPU scheduler waits while trying to find multiple physical CPUs ready to run the guest (%RDY wait times can increase)
· Use shares, limits and reservations to apply prioritization policies to your VMware environment that reflect your requirements for what happens if CPU contention occurs on an ESXi host
-If hyperthreading is enabled for the host, ensure that the Hyperthreaded Core Sharing Mode for your virtual machines is set to Any
-If you need to disable hyperthreading for a particular virtual machine, set the Hyperthreaded Core Sharing Mode to None
-Select the proper hardware abstraction layer (HAL) for the guest operating system you are using
– This only applies to guest operating systems that have different kernels for uniprocessor (UP) and multiprocessor (SMP) configurations. A single-vCPU virtual machine would use the UP HAL; all others use SMP
-If your application or guest OS can’t leverage multiple processors, configure the virtual machine with only 1 vCPU
-If your physical hosts are using NUMA, ensure the virtual machines are hardware version 8 or later, as this exposes the NUMA architecture to the guest operating system, allowing NUMA-aware applications to take advantage of it. This is known as Virtual NUMA
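As with memory, CPU shares, limits and reservations end up as scheduler entries in the .vmx file. A hedged sketch with illustrative values (normally you would configure these through the vSphere Client):

```
numvcpus = "2"
sched.cpu.units = "mhz"
sched.cpu.min = "500"
sched.cpu.shares = "high"
```

Here sched.cpu.min is a 500 MHz reservation guaranteed to the VM, while sched.cpu.shares only influences scheduling when CPU contention actually occurs on the host.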
Tune Virtual Machine storage configurations
· Depending on the underlying storage hardware the choice of Virtual Disk provisioning method (thin, thick lazy zeroed, thick eager zeroed) can have an impact on performance
· VMware provides the paravirtualized SCSI adapter (PVSCSI) as an alternative to the BusLogic or LSI Logic storage adapters. Similar to the VMXNET3 network adapter, the PVSCSI adapter can improve performance for some workloads
· The PVSCSI adapter can provide higher throughput and lower CPU utilization
· Requires virtual machine hardware version 7 or later
· The choice of VMDK vs RDM has a negligible impact on storage performance for most workloads
Virtual disk modes:
· Dependent – affected by vSphere snapshots
· Independent-Persistent – not affected by vSphere snapshots; changes are immediately and permanently written to disk
· Independent-Nonpersistent – not affected by vSphere snapshots; changes to disk are discarded when you power off or revert to a snapshot
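For illustration, a PVSCSI controller with an Independent-Persistent disk might appear in the .vmx file as follows (the disk file name is a placeholder):

```
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "disk0.vmdk"
scsi0:0.mode = "independent-persistent"
```

The scsi0.virtualDev entry selects the adapter type for the whole controller, while the per-disk mode entry controls how that disk interacts with snapshots.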
Use Disk Shares, see also topic on SIOC.
Configure Flash Read Cache for a VM (only available for Virtual Machine Hardware version 10 or higher)