Some Best Practices for VMware

  • Storage Best Practices.
  • Best Practices for Virtual Machine and Host Security.
  • vSphere Security Best Practices.
  • vSphere Availability Best Practices
  • Best Practices for Upgrades and Migrations.
  • Best Practices and Recommendations for Update Manager Environment
  • Auto Deploy Best Practices
  • Troubleshooting Best Practices

VMware Best Practices

Networking Best Practices:

Consider these best practices when you configure your network:

  • Isolate from one another the networks for host management, vSphere vMotion, vSphere FT, and so on, to improve security and performance.
  • Assign a group of virtual machines to a separate physical NIC. This separation allows a portion of the total networking workload to be shared evenly across multiple CPUs. The isolated virtual machines can then better handle application traffic, for example, from a Web client.
  • To physically separate network services and to dedicate a particular set of NICs to a specific network service, create a vSphere Standard Switch or vSphere Distributed Switch for each service. If this is not possible, separate network services on a single switch by attaching them to port groups with different VLAN IDs. In either case, verify with your network administrator that the networks or VLANs you choose are isolated from the rest of your environment and that no routers connect them.
  • Keep the vSphere vMotion connection on a separate network. When migration with vMotion occurs, the contents of the guest operating system’s memory are transmitted over the network. You can do this either by using VLANs to segment a single physical network or by using separate physical networks (the latter is preferable).
  • When using pass-through devices with a Linux kernel version 2.6.20 or earlier, avoid MSI and MSI-X modes because these modes have a significant performance impact.
  • You can add and remove network adapters from a standard or distributed switch without affecting the virtual machines or the network service that is running behind that switch. If you remove all the physical network adapters, the virtual machines can still communicate among themselves. If you leave one network adapter intact, all the virtual machines can still connect with the physical network.
  • To protect your most sensitive virtual machines, deploy firewalls in virtual machines that route between virtual networks with uplinks to physical networks and pure virtual networks with no uplinks.
  • For best performance, use VMXNET 3 virtual machine NICs.
  • Physical network adapters connected to the same vSphere Standard Switch or vSphere Distributed Switch should also be connected to the same physical network.
  • Configure all VMkernel network adapters in a vSphere Distributed Switch with the same MTU. When several VMkernel network adapters, configured with different MTUs, are connected to vSphere distributed switches, you might experience network connectivity problems.
  • When creating a distributed port group, do not use dynamic port binding. Dynamic port binding has been deprecated since ESXi 5.0.
  • When making changes to the networks that your clustered ESXi hosts are on, suspend the Host Monitoring feature. Changing your network hardware or networking settings can interrupt the heartbeats that vSphere HA uses to detect host failures, and this might result in unwanted attempts to fail over virtual machines.

  • When you change the networking configuration on the ESXi hosts themselves, for example, adding port groups, or removing vSwitches, suspend Host Monitoring. After you have made the networking configuration changes, you must reconfigure vSphere HA on all hosts in the cluster, which causes the network information to be reinspected. Then re-enable Host Monitoring.
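The MTU-alignment point in the list above can be inspected and fixed from the ESXi Shell. A minimal sketch, assuming vmk1 and vmk2 are the VMkernel adapters attached to the distributed switch and 9000 is the MTU you have standardized on (adapter names and MTU value are placeholders for your environment):

```shell
# Review the current MTU of every VMkernel adapter on this host
esxcli network ip interface list

# Align both VMkernel adapters to the same MTU so the distributed
# switch does not mix MTU values across its VMkernel ports
esxcli network ip interface set -i vmk1 -m 9000
esxcli network ip interface set -i vmk2 -m 9000
```

Repeat on every host attached to the distributed switch; a single mismatched adapter is enough to cause the connectivity problems described above.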

Networks Used for vSphere HA Communications:

To identify which network operations might disrupt the functioning of vSphere HA, you should know which management networks are being used for heartbeating and other vSphere HA communications.

  • On legacy ESX hosts in the cluster, vSphere HA communications travel over all networks that are designated as service console networks. VMkernel networks are not used by these hosts for vSphere HA communications.
  • On ESXi hosts in the cluster, vSphere HA communications, by default, travel over VMkernel networks, except those marked for use with vMotion. If there is only one VMkernel network, vSphere HA shares it with vMotion, if necessary. With ESXi 4.x and later, you must also explicitly enable the Management traffic checkbox for vSphere HA to use this network.

Network Isolation Addresses

A network isolation address is an IP address that is pinged to determine whether a host is isolated from the network. This address is pinged only when a host has stopped receiving heartbeats from all other hosts in the cluster. If a host can ping its network isolation address, the host is not network isolated, and the other hosts in the cluster have either failed or are network partitioned. However, if the host cannot ping its isolation address, it is likely that the host has become isolated from the network, and the host’s configured isolation response is applied.

By default, the network isolation address is the default gateway for the host. Only one default gateway is specified, regardless of how many management networks have been defined. You should use the das.isolationaddress[…] advanced attribute to add isolation addresses for additional networks.
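For illustration, the cluster’s advanced options might contain entries like the following fragment. The addresses shown are placeholders; substitute reliable, pingable addresses (such as gateways) on each of your management networks:

```
das.isolationaddress1 = 192.168.10.254
das.isolationaddress2 = 192.168.20.254
das.usedefaultisolationaddress = false
```

Setting das.usedefaultisolationaddress to false tells vSphere HA not to use the default gateway as an isolation address once explicit addresses are supplied.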

Network Adapter Best Practices

If you plan to enable software FCoE adapters to work with network adapters, specific considerations apply.

  • Make sure that the latest microcode is installed on the FCoE network adapter.
  • If the network adapter has multiple ports, when configuring networking, add each port to a separate vSwitch. This practice helps you to avoid an APD condition when a disruptive event, such as an MTU change, occurs.
  • Do not move a network adapter port from one vSwitch to another when FCoE traffic is active. If you need to make this change, reboot your host afterwards.
  • If you changed the vSwitch for a network adapter port and caused a failure, moving the port back to the original vSwitch resolves the problem.

Best Practices for vMotion Networking:

Recommended networking best practices are as follows:

*Provide the required bandwidth in one of the following ways:

  • Dedicate at least one GigE adapter for vMotion. Use at least one 10 GigE adapter if you migrate workloads that have many memory operations. If only two Ethernet adapters are available:

*For best security, dedicate the GigE adapter to vMotion, and use VLANs to divide the virtual machine and management traffic on the other adapter.

*For best availability, combine both adapters into a bond, and use VLANs to divide traffic into networks: one or more for virtual machine traffic and one for vMotion.

  • Alternatively, direct vMotion traffic to one or more physical NICs that are shared between other types of traffic as well.

*To distribute and allocate more bandwidth to vMotion traffic across several physical NICs, use multiple-NIC vMotion.

*On a vSphere Distributed Switch 5.1 and later, use Network I/O Control shares to guarantee bandwidth to outgoing vMotion traffic. Defining shares also prevents contention resulting from excessive vMotion or other traffic.

*Use traffic shaping in egress direction on the vMotion port group on the destination host to avoid saturation of the physical NIC link as a result of intense incoming vMotion traffic. By using traffic shaping, you can limit the average and peak bandwidth available to vMotion traffic, and reserve resources for other traffic types.

*Provision at least one additional physical NIC as a failover NIC.

*Use jumbo frames for best vMotion performance. Ensure that jumbo frames are enabled on all network devices that are on the vMotion path including physical NICs, physical switches and virtual switches.
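The jumbo-frames recommendation above covers the virtual side as well as the physical side. A sketch of the virtual-side configuration from the ESXi Shell, assuming vSwitch1 carries vMotion and vmk1 is the vMotion VMkernel adapter (both names are placeholders):

```shell
# Raise the MTU on the virtual switch that carries vMotion traffic
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Raise the MTU on the vMotion VMkernel adapter to match
esxcli network ip interface set -i vmk1 -m 9000
```

The physical switches and NICs on the vMotion path must be configured for jumbo frames separately; an MTU mismatch anywhere on the path defeats the benefit.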

Storage Best Practices:

Best Practices for Software FCoE Boot:

VMware recommends several best practices when you boot your system from a software FCoE LUN.

  • Make sure that the host has access to the entire boot LUN. The boot LUN cannot be shared with other hosts even on shared storage.
  • If you use an Intel 10 Gigabit Ethernet Controller (Niantic) with a Cisco switch, configure the switch port in the following way:
    • Enable the Spanning Tree Protocol (STP).
    • Turn off switchport trunk native vlan for the VLAN used for FCoE.

Best Practices for Fibre Channel Storage:

Disable Automatic Host Registration in the vSphere Web Client:

When you use EMC CLARiiON or Invista arrays for storage, it is required that the hosts register with the arrays. ESXi performs automatic host registration by sending the host’s name and IP address to the array. If you prefer to perform manual registration using storage management software, disable the ESXi auto registration feature.


1 Browse to the host in the vSphere Web Client navigator.

2 Click the Manage tab, and click Settings.

3 Under System, click Advanced System Settings.

4 Under Advanced System Settings, select the Disk.EnableNaviReg parameter and click the Edit icon.

5 Change the value to 0. This disables the automatic host registration enabled by default.
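The same parameter can also be set from the ESXi Shell with esxcli; a sketch:

```shell
# Disable automatic host registration (Disk.EnableNaviReg) on this host
esxcli system settings advanced set -o /Disk/EnableNaviReg -i 0

# Verify that the new value took effect
esxcli system settings advanced list -o /Disk/EnableNaviReg
```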

 Optimizing Fibre Channel SAN Storage Performance

Make sure that the paths through the switch fabric are not saturated, that is, that the switch fabric is running at the highest throughput.

  • As a best practice, configure virtual IP load balancing in SAN/iQ for all ESXi authentication groups.

Storage Array Performance:

  • When assigning LUNs, remember that each LUN is accessed by a number of hosts, and that a number of virtual machines can run on each host. One LUN used by a host can service I/O from many different applications running on different operating systems. Because of this diverse workload, the RAID group containing the ESXi LUNs should not include LUNs used by other servers that are not running ESXi.
  • Make sure read/write caching is enabled.
  • SAN storage arrays require continual redesign and tuning to ensure that I/O is load balanced across all storage array paths. To meet this requirement, distribute the paths to the LUNs among all the SPs to provide optimal load balancing. Close monitoring indicates when it is necessary to rebalance the LUN distribution.

Server Performance with Fibre Channel:

Each server application must have access to its designated storage with the following characteristics:

  • High I/O rate (number of I/O operations per second)
  • High throughput (megabytes per second)
  • Minimal latency (response times)

Best Practices for iSCSI Storage:

When using ESXi with the iSCSI SAN, follow best practices that VMware offers to avoid problems.

Check with your storage representative if your storage system supports Storage API – Array Integration hardware acceleration features. If it does, refer to your vendor documentation for information on how to enable hardware acceleration support on the storage system side.

Best Practices for SSD Devices:

Follow these best practices when you use SSD devices in a vSphere environment.

  • Make sure to use the latest firmware with SSD devices. Frequently check with your storage vendors for any updates.
  • Carefully monitor how intensively you use the SSD device and calculate its estimated lifetime. The lifetime expectancy depends on how actively you continue to use the SSD device.

Virtual SAN Networking Requirements and Best Practices:

Virtual SAN requires correctly configured network interfaces.

The hosts in your Virtual SAN cluster must be part of a Virtual SAN network and must be on the same subnet. On each host, configure at least one Virtual SAN interface. You must configure this interface on all hosts in the cluster, whether or not the hosts contribute storage.

NOTE   Virtual SAN does not support IPv6.

  • Virtual SAN requires a private 1Gb network. As a best practice, use a 10Gb network.
  • On each host, dedicate at minimum a single physical 1Gb Ethernet NIC to Virtual SAN. You can also provision an additional physical NIC as a failover NIC.
  • For each network that you use for Virtual SAN, configure a VMkernel port group with the Virtual SAN port property activated.
  • Use the same Virtual SAN Network label for each port group and ensure that the labels are consistent across all hosts.
  • Use Jumbo Frames for best performance.
  • Virtual SAN supports IP-hash load balancing, but cannot guarantee improvement in performance for all configurations. You can benefit from IP-hash when Virtual SAN is among its many consumers; in this case, IP-hash performs the load balancing. However, if Virtual SAN is the only consumer, you might not notice changes. This specifically applies to 1G environments. For example, if you use four 1G physical adapters with IP-hash for Virtual SAN, you might not be able to use more than 1G. This also applies to all currently supported NIC teaming policies. For more information about NIC teaming, see the Networking Policies section of the vSphere Networking documentation.
  • Virtual SAN does not support multiple VMkernel adapters on the same subnet for load balancing. Multiple VMkernel adapters on different networks, such as VLAN or separate physical fabric, are supported.
  • You should connect all hosts participating in Virtual SAN to a single L2 network, which has multicast (IGMP snooping) enabled. If the hosts participating in Virtual SAN span across multiple switches or even across L3 boundaries, you must ensure that your network is configured correctly to enable multicast connectivity. You can change multicast addresses from the defaults if your network environment requires, or if you are running multiple Virtual SAN clusters on the same L2 network. You can disable IGMP snooping on the specific Virtual LAN for Virtual SAN to allow multicast traffic to flow smoothly.
  • As a best practice, do not include datastores that have hardware acceleration enabled in the same datastore cluster as datastores that do not have hardware acceleration enabled. Datastores in a datastore cluster must be homogeneous to guarantee hardware acceleration-supported behavior.

  • As a best practice, use a 1:10 ratio of SSD capacity to raw HDD capacity. For example, if the raw HDD capacity of the disk group is 4TB, the recommended SSD capacity is 400GB.
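The 1:10 sizing rule above is simple arithmetic; a minimal sketch (the 4TB figure is the example from the text):

```shell
# SSD capacity = 10% of raw HDD capacity in the disk group
RAW_HDD_GB=4000                 # 4TB of raw HDD capacity
SSD_GB=$((RAW_HDD_GB / 10))     # apply the 1:10 ratio
echo "Recommended SSD capacity: ${SSD_GB}GB"
```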

Best Practices for Virtual Machine and Host Security:

Virtual Machine Recommendations:

  • Installing Antivirus Software
  • Limiting Exposure of Sensitive Data Copied to the Clipboard: copy and paste operations for the guest operating system are disabled by default.

To copy and paste between the guest operating system and a remote console, you must enable copy and paste operations using the vSphere Client.


1 Log into a vCenter Server system using the vSphere Client and select the virtual machine.
2 On the Summary tab, click Edit Settings.
3 Select Options > Advanced > General and click Configuration Parameters.
4 Click Add Row and type the following values in the Name and Value columns.

Name                             Value
isolation.tools.copy.disable     false
isolation.tools.paste.disable    false


These options override any settings made in the guest operating system’s VMware Tools control panel.

5 Click OK to close the Configuration Parameters dialog box, and click OK again to close the Virtual Machine Properties dialog box.
6 Restart the virtual machine.
  • Removing Unnecessary Hardware Devices.
  • Prevent a Virtual Machine User or Process from Disconnecting Devices:
    Use a text editor to add the following line to the .vmx file, where device_name is the name of the device you want to protect (for example, ethernet1).

device_name.allowGuestConnectionControl = "false"

Alternatively, turn off the virtual machine and use the vSphere Client:

1 Log in to a vCenter Server system using the vSphere Client and select the virtual machine.
2 On the Summary tab, click Edit Settings.
3 Select Options > Advanced > General and click Configuration Parameters.
4 Add or edit the following parameters.

Name Value
isolation.device.connectable.disable true
isolation.device.edit.disable true

These options override any settings made in the guest operating system’s VMware Tools control panel.

vSphere Security Best Practices:

Best Practices for vSphere Users:

Use best practices for creating and managing users to increase the security and manageability of your vSphere environment.

VMware recommends several best practices for creating users in your vSphere environment:

  • Do not create a user named ALL. Privileges associated with the name ALL might not be available to all users in some situations. For example, if a user named ALL has Administrator privileges, a user with ReadOnly privileges might be able to log in to the host remotely. This is not the intended behavior.
  • Use a directory service or vCenter Server to centralize access control, rather than defining users on individual hosts.
  • Choose a local Windows user or group to have the Administrator role in vCenter Server.
  • Because of the confusion that duplicate naming can cause, check the vCenter Server user list before you create ESXi host users to avoid duplicating names. To check for vCenter Server users, review the Windows domain list.

Best Practices for vSphere Groups:

VMware recommends several best practices for creating groups in your vSphere environment:

  • Use a directory service or vCenter Server to centralize access control, rather than defining groups on individual hosts.
  • Choose a local Windows user or group to have the Administrator role in vCenter Server.
  • Create new groups for vCenter Server users. Avoid using Windows built-in groups or other existing groups.
  • If you use Active Directory groups, make sure that they are security groups and not distribution groups. Permissions assigned to distribution groups are not enforced by vCenter Server. For more information about security groups and distribution groups, see the Microsoft Active Directory documentation.

Best Practices for Roles and Permissions:

VMware recommends the following best practices when configuring roles and permissions in your vCenter Server environment:

  • Where possible, grant permissions to groups rather than individual users.
  • Grant permissions only where needed. Using the minimum number of permissions makes it easier to understand and manage your permissions structure.
  • If you assign a restrictive role to a group, check that the group does not contain the Administrator user or other users with administrative privileges. Otherwise, you could unintentionally restrict administrators’ privileges in parts of the inventory hierarchy where you have assigned that group the restrictive role.
  • Use folders to group objects to correspond to the differing permissions you want to grant for them.
  • Use caution when granting a permission at the root vCenter Server level. Users with permissions at the root level have access to global data on vCenter Server, such as roles, custom attributes, vCenter Server settings, and licenses. Changes to licenses and roles propagate to all vCenter Server systems in a Linked Mode group, even if the user does not have permissions on all of the vCenter Server systems in the group.
  • In most cases, enable propagation on permissions. This ensures that when new objects are inserted into the inventory hierarchy, they inherit permissions and are accessible to users.
  • Use the No Access role to mask specific areas of the hierarchy that you do not want particular users to have access to.

Best Practices for vCenter Server Privileges:

Strictly control vCenter Server administrator privileges to increase security for the system.

  • Full administrative rights to vCenter Server should be removed from the local Windows administrator account and granted to a special-purpose local vCenter Server administrator account. Grant full vSphere administrative rights only to those administrators who are required to have it. Do not grant this privilege to any group whose membership is not strictly controlled.
  • Avoid allowing users to log in directly to the vCenter Server system. Allow only those users who have legitimate tasks to perform to log into the system and ensure that these events are audited.
  • Install vCenter Server using a service account instead of a Windows account. You can use a service account or a Windows account to run vCenter Server. Using a service account allows you to enable Windows authentication for SQL Server, which provides more security. The service account must be an administrator on the local machine.
  • Check for privilege reassignment when you restart vCenter Server. If the user or user group that is assigned the Administrator role on the root folder of the server cannot be verified as a valid user or group, the Administrator privileges are removed and assigned to the local Windows Administrators group.
  • Grant minimal privileges to the vCenter Server database user. The database user requires only certain privileges specific to database access. In addition, some privileges are required only for installation and upgrade. These can be removed after the product is installed or upgraded.

vSphere Availability Best Practices

To ensure optimal vSphere HA cluster performance, you should follow certain best practices. This topic highlights some of the key best practices for a vSphere HA cluster. You can also refer to the vSphere High Availability Deployment Best Practices publication for further discussion.

Setting Alarms to Monitor Cluster Changes:

When vSphere HA or Fault Tolerance takes action to maintain availability, for example, a virtual machine failover, you can be notified about such changes. Configure alarms in vCenter Server to be triggered when these actions occur, and have alerts, such as emails, sent to a specified set of administrators.

Several default vSphere HA alarms are available.

  • Insufficient failover resources (a cluster alarm)
  • Cannot find master (a cluster alarm)
  • Failover in progress (a cluster alarm)
  • Host HA status (a host alarm)
  • VM monitoring error (a virtual machine alarm)
  • VM monitoring action (a virtual machine alarm)
  • Failover failed (a virtual machine alarm)

Monitoring Cluster Validity

A valid cluster is one in which the admission control policy has not been violated.

A cluster enabled for vSphere HA becomes invalid when the number of virtual machines powered on exceeds the failover requirements, that is, when the current failover capacity is smaller than the configured failover capacity. If admission control is disabled, clusters do not become invalid.

In the vSphere Web Client, select vSphere HA from the cluster’s Monitor tab and then select Configuration Issues. A list of current vSphere HA issues appears.

DRS behavior is not affected if a cluster is red because of a vSphere HA issue.

vSphere HA and Storage vMotion Interoperability in a Mixed Cluster

In clusters where ESXi 5.x hosts and ESX/ESXi 4.1 or prior hosts are present and where Storage vMotion is used extensively or Storage DRS is enabled, do not deploy vSphere HA. vSphere HA might respond to a host failure by restarting a virtual machine on a host with an ESXi version different from the one on which the virtual machine was running before the failure. A problem can occur if, at the time of failure, the virtual machine was involved in a Storage vMotion action on an ESXi 5.x host, and vSphere HA restarts the virtual machine on a host with a version prior to ESXi 5.0. While the virtual machine might power on, any subsequent attempts at snapshot operations could corrupt the vdisk state and leave the virtual machine unusable.

Using Auto Deploy with vSphere HA:

You can use vSphere HA and Auto Deploy together to improve the availability of your virtual machines. Auto Deploy provisions hosts when they power up and you can also configure it to install the vSphere HA agent on such hosts during the boot process. See the Auto Deploy documentation included in vSphere Installation and Setup for details.

Upgrading Hosts in a Cluster Using Virtual SAN:

If you are upgrading the ESXi hosts in your vSphere HA cluster to version 5.5 or higher, and you also plan to use Virtual SAN, follow this process.

  1. Upgrade all of the hosts.
  2. Disable vSphere HA.
  3. Enable Virtual SAN.
  4. Re-enable vSphere HA.

Admission Control Best Practices:

The following recommendations are best practices for vSphere HA admission control:

  • Select the Percentage of Cluster Resources Reserved admission control policy. This policy offers the most flexibility in terms of host and virtual machine sizing. When configuring this policy, choose a percentage for CPU and memory that reflects the number of host failures you want to support. For example, if you want vSphere HA to set aside resources for two host failures and have ten hosts of equal capacity in the cluster, then specify 20% (2/10).
  • Ensure that you size all cluster hosts equally. For the Host Failures Cluster Tolerates policy, an unbalanced cluster results in excess capacity being reserved to handle failures because vSphere HA reserves capacity for the largest hosts. For the Percentage of Cluster Resources Policy, an unbalanced cluster requires that you specify larger percentages than would otherwise be necessary to reserve enough capacity for the anticipated number of host failures.
  • If you plan to use the Host Failures Cluster Tolerates policy, try to keep virtual machine sizing requirements similar across all configured virtual machines. This policy uses slot sizes to calculate the amount of capacity needed to reserve for each virtual machine. The slot size is based on the largest reserved memory and CPU needed for any virtual machine. When you mix virtual machines of different CPU and memory requirements, the slot size calculation defaults to the largest possible, which limits consolidation.
  • If you plan to use the Specify Failover Hosts policy, decide how many host failures to support and then specify this number of hosts as failover hosts. If the cluster is unbalanced, the designated failover hosts should be at least the same size as the non-failover hosts in your cluster. This ensures that there is adequate capacity in case of failure.
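The percentage calculation in the first bullet above can be sketched as follows (the 2-failures-out-of-10-hosts figures are the example from the text, and assume hosts of equal capacity):

```shell
# Percentage of cluster CPU and memory to reserve so that the cluster
# can tolerate HOST_FAILURES host failures
HOST_FAILURES=2
HOSTS_IN_CLUSTER=10
RESERVE_PCT=$((HOST_FAILURES * 100 / HOSTS_IN_CLUSTER))
echo "Reserve ${RESERVE_PCT}% of cluster CPU and memory"
```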

Best Practices for Fault Tolerance

To ensure optimal Fault Tolerance results, you should follow certain best practices.

Host Configuration Best Practices:

Consider the following best practices when configuring your hosts:

  • Hosts running the Primary and Secondary VMs should operate at approximately the same processor frequencies, otherwise the Secondary VM might be restarted more frequently. Platform power management features that do not adjust based on workload (for example, power capping and enforced low frequency modes to save power) can cause processor frequencies to vary greatly. If Secondary VMs are being restarted on a regular basis, disable all power management modes on the hosts running fault tolerant virtual machines or ensure that all hosts are running in the same power management modes.
  • Apply the same instruction set extension configuration (enabled or disabled) to all hosts. The process for enabling or disabling instruction sets varies among BIOSes. See the documentation for your hosts’ BIOSes about how to configure instruction sets.

Homogeneous Clusters:

vSphere Fault Tolerance can function in clusters with nonuniform hosts, but it works best in clusters with compatible nodes. When constructing your cluster, all hosts should have the following configuration:

  • Processors from the same compatible processor group.
  • Common access to datastores used by the virtual machines.
  • The same virtual machine network configuration.
  • The same ESXi version.
  • The same Fault Tolerance version number (or host build number for hosts prior to ESX/ESXi 4.1).
  • The same BIOS settings (power management and hyperthreading) for all hosts. Run Check Compliance to identify incompatibilities and to correct them.

To increase the bandwidth available for the logging traffic between Primary and Secondary VMs, use a 10Gbit NIC, and enable the use of jumbo frames.

Store ISOs on Shared Storage for Continuous Access:

Store ISOs that are accessed by virtual machines with Fault Tolerance enabled on shared storage that is accessible to both instances of the fault tolerant virtual machine. If you use this configuration, the CD-ROM in the virtual machine continues operating normally, even when a failover occurs.

For virtual machines with Fault Tolerance enabled, you might use ISO images that are accessible only to the Primary VM. In such a case, the Primary VM can access the ISO, but if a failover occurs, the CD-ROM reports errors as if there is no media. This situation might be acceptable if the CD-ROM is being used for a temporary, noncritical operation such as an installation.

Avoid Network Partitions:

A network partition occurs when a vSphere HA cluster has a management network failure that isolates some of the hosts from vCenter Server and from one another. See Network Partitions. When a partition occurs, Fault Tolerance protection might be degraded.

In a partitioned vSphere HA cluster using Fault Tolerance, the Primary VM (or its Secondary VM) could end up in a partition managed by a master host that is not responsible for the virtual machine. When a failover is needed, a Secondary VM is restarted only if the Primary VM was in a partition managed by the master host responsible for it.

To ensure that your management network is less likely to have a failure that leads to a network partition, follow the recommendations in Best Practices for Networking.

Best Practices for Upgrades and Migrations:

Best Practices for ESXi Upgrades and Migrations

When you upgrade or migrate hosts, you must understand and follow the best practices process for a successful upgrade or migration.

  1. Make sure that you understand the ESXi upgrade process, the effect of that process on your existing deployment, and the preparation required for the upgrade.
    • If your vSphere system includes VMware solutions or plug-ins, make sure they are compatible with the vCenter Server version that you are upgrading to.
    • Understand the changes in configuration and partitioning between ESX/ESXi 4.x and ESXi 5.x, the upgrade and migration scenarios that are supported, and the options and tools available to perform the upgrade or migration.
    • Read the VMware vSphere Release Notes for known installation issues.
    • If your vSphere installation is in a VMware View environment, see Upgrading vSphere Components Separately in a Horizon View Environment.
  2. Prepare your system for the upgrade.
    • Make sure your current ESX or ESXi version is supported for migration or upgrade
    • Make sure your system hardware complies with ESXi requirements.
    • Make sure that sufficient disk space is available on the host for the upgrade or migration. Migrating from ESX 4.x to ESXi 5.x requires 50MB of free space on your VMFS datastore.
    • If a SAN is connected to the host, detach the Fibre Channel system before continuing with the upgrade or migration. Do not disable HBA cards in the BIOS.
  3. Back up your host before performing an upgrade or migration, so that, if the upgrade fails, you can restore your host.
  4.  Depending on the upgrade or migration method you choose, you might need to migrate or power off all virtual machines on the host. See the instructions for your upgrade or migration method.
  5.  After the upgrade or migration, test the system to ensure that the upgrade or migration completed successfully.
  6.  Reapply your host licenses. See “Reapplying Licenses After Upgrading to ESXi 5.5,” on page 216.
  7.  Consider setting up a syslog server for remote logging, to ensure sufficient disk storage for log files.  Setting up logging on a remote host is especially important for hosts with limited local storage. Optionally, you can install the vSphere Syslog Collector to collect logs from all hosts.
  8.  If the upgrade or migration was unsuccessful, and you backed up your host, you can restore your host.
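The preparation and verification steps above can be condensed into a simple pre-flight check. A minimal sketch: the 50 MB threshold and the supported-source-version requirement come from the checklist, while the function, its inputs, and the version list itself are illustrative assumptions.

```python
# Illustrative pre-upgrade readiness check; names and version list are hypothetical.

SUPPORTED_SOURCE_VERSIONS = {"4.0", "4.1"}  # example ESX/ESXi source versions
MIN_FREE_VMFS_MB = 50                       # free space needed on the VMFS datastore

def upgrade_blockers(source_version, free_vmfs_mb, hardware_ok, backup_taken):
    """Return a list of blocking issues; an empty list means ready to upgrade."""
    issues = []
    if source_version not in SUPPORTED_SOURCE_VERSIONS:
        issues.append(f"version {source_version} is not supported for upgrade")
    if free_vmfs_mb < MIN_FREE_VMFS_MB:
        issues.append(f"only {free_vmfs_mb} MB free on VMFS, need {MIN_FREE_VMFS_MB} MB")
    if not hardware_ok:
        issues.append("hardware does not meet ESXi requirements")
    if not backup_taken:
        issues.append("back up the host before upgrading")
    return issues

print(upgrade_blockers("4.1", 200, True, True))  # → []
```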

Best Practices for vCenter Server Upgrades

When you upgrade vCenter Server, you must understand and follow the best practices process for a successful upgrade.

To ensure that each upgrade is successful, follow these best practices:

  1. Make sure that you understand the vCenter Server upgrade process, the effect of that process on your existing deployment, and the preparation required for the upgrade.
    1.  If your vSphere system includes VMware solutions or plug-ins, make sure they are compatible with the vCenter Server version that you are upgrading to.
    2. Read all the subtopics in Preparing for the Upgrade to vCenter Server.
    3. Read the VMware vSphere Release Notes for known installation issues.
    4. If your vSphere installation is in a VMware View environment, see Upgrading vSphere Components Separately in a Horizon View Environment.
  2. Prepare your system for the upgrade.
    1. Make sure your system meets requirements for the vCenter Server version that you are upgrading to.
    2. Verify that your existing database is supported for the vCenter Server version that you are upgrading to.
    3. Make sure that your vCenter Server database is prepared and permissions are correctly set. See the information about preparing vCenter Server databases in the vSphere Installation and Setup documentation.
    4. Review the prerequisites for the upgrade. See Prerequisites for the vCenter Server Upgrade.
  3. Back up your vCenter Server databases and SSL certificates
    1. Make a full backup of the vCenter Server database and the vCenter Inventory Service database. For the vCenter Server database, see the vendor documentation for your vCenter Server database type. For the Inventory Service database, see the topics “Back Up the Inventory Service Database on Windows” and “Back Up the Inventory Service Database on Linux” in the vSphere Installation and Setup documentation.
    2. Back up the SSL certificates that are on the vCenter Server system before you upgrade to vCenter Server 5.5. The default location of the SSL certificates is %allusersprofile%\Application Data\VMware\VMware VirtualCenter.
  4. Stop the VMware VirtualCenter Server service.
  5. Run the vCenter Host Agent Pre-Upgrade Checker, and resolve any issues. See Run the vCenter Host Agent Pre-Upgrade Checker.
  6. Make sure that no processes are running that conflict with the ports that vCenter Server uses. See Required Ports for vCenter Server.
  7. Upgrade vCenter Server and required components. See the appropriate procedure for your existing vCenter Server deployment:
    • Use Simple Install to Upgrade vCenter Server and Required Components
    • Use Custom Install to Upgrade a Basic vCenter Single Sign-On Deployment of Version 5.1.x vCenter Server and Required Components
    • Use Custom Install to Upgrade vCenter Server from a Version 5.1.x High Availability vCenter Single Sign-On Deployment
    • Use Custom Install to Upgrade vCenter Server from a Version 5.1.x Multisite vCenter Single Sign-On Deployment
  8. Configure new vSphere 5.5 licenses.
  9. Review the topics in After You Upgrade vCenter Server for post-upgrade requirements and options.

Best Practices and Recommendations for Update Manager Environment

You can install Update Manager on the server on which vCenter Server runs or on a different server.

The Update Manager server and client plug-ins must be the same version. Update Manager, vCenter Server, and the vSphere Client must be of a compatible version. For more information about compatibility, see Update Manager Compatibility with vCenter Server, vSphere Client and vSphere Web Client.
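The version rules above can be expressed as two small predicates. This is only a sketch: real compatibility is defined by VMware's interoperability documentation, and the "same major.minor" rule here is a simplified stand-in.

```python
# Hypothetical version checks modeling the rules described above.

def plugin_compatible(server_version: str, plugin_version: str) -> bool:
    """The Update Manager server and client plug-in must be the same version."""
    return server_version == plugin_version

def same_release_line(a: str, b: str) -> bool:
    """Simplified stand-in for 'compatible version': major.minor must match."""
    return a.split(".")[:2] == b.split(".")[:2]

print(plugin_compatible("5.5.0", "5.5.0"))   # → True
print(same_release_line("5.5.0", "5.1.0"))   # → False
```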

Update Manager has two deployment models:

Internet-connected model

The Update Manager server is connected to the VMware patch repository and to third-party patch repositories (for ESX/ESXi 4.x and ESXi 5.x hosts, as well as for virtual appliances). Update Manager works with vCenter Server to scan and remediate the virtual machines, appliances, hosts, and templates.

Air-gap model

Update Manager has no connection to the Internet and cannot download patch metadata. In this model, you can use UMDS to download and store patch metadata and patch binaries in a shared repository. To scan and remediate inventory objects, you must configure the Update Manager server to use a shared repository of UMDS data as a patch datastore. For more information about using UMDS, see Installing, Setting Up, and Using Update Manager Download Service.

Outside of DRS clusters, you might not be able to remediate the host running the Update Manager or vCenter Server virtual machines by using the same vCenter Server instance, because the virtual machines cannot be suspended or shut down during remediation. You can remediate such a host by using separate vCenter Server and Update Manager instances on another host. Inside DRS clusters, if you start a remediation task on the host running the vCenter Server or Update Manager virtual machines, DRS attempts to migrate the virtual machines to another host, so that the remediation succeeds. If DRS cannot migrate the virtual machine running Update Manager or vCenter Server, the remediation fails. Remediation also fails if you have selected the option to power off or suspend the virtual machines before remediation.
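The remediation constraints in the paragraph above reduce to a short decision procedure. A sketch with hypothetical parameter names; the actual checks are performed by Update Manager itself.

```python
# Models the constraints above: a host running the vCenter Server or Update
# Manager VM can be remediated only if DRS can first migrate that VM away,
# and never if the VMs are to be powered off or suspended for remediation.

def can_remediate(host_runs_vc_or_um_vm: bool, in_drs_cluster: bool,
                  drs_can_migrate: bool, power_off_before: bool) -> bool:
    if not host_runs_vc_or_um_vm:
        return True            # no special constraint applies to this host
    if not in_drs_cluster:
        return False           # the VM cannot be suspended or shut down
    if power_off_before:
        return False           # powering off vCenter/Update Manager fails the task
    return drs_can_migrate     # DRS must evacuate the VM first

print(can_remediate(True, True, True, False))   # → True
print(can_remediate(True, False, True, False))  # → False
```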

Auto Deploy Best Practices

This section discusses several Auto Deploy best practices and helps you understand how to set up networking, configure vSphere HA, and otherwise optimize your environment for Auto Deploy. See the VMware Knowledge Base for additional best practice information.

Auto Deploy and vSphere HA Best Practices

You can improve the availability of the virtual machines running on hosts provisioned with Auto Deploy by following best practices.

Some environments configure the hosts provisioned with Auto Deploy with a distributed switch or configure virtual machines running on the hosts with Auto Start Manager. In those environments, deploy the vCenter Server system so that its availability matches the availability of the Auto Deploy server. Several approaches are possible.

In a proof of concept environment, deploy the vCenter Server system and the Auto Deploy server on the same system. In all other situations, install the two servers on separate systems.
Deploy vCenter Server Heartbeat.

VMware vCenter Server Heartbeat delivers high availability for vCenter Server, protecting the virtual and cloud infrastructure from application, configuration, operating system, or hardware related outages.

Deploy the vCenter Server system in a virtual machine. Run the vCenter Server virtual machine in a vSphere HA enabled cluster and configure the virtual machine with a vSphere HA restart priority of high. Include two or more hosts in the cluster that are not managed by Auto Deploy and pin the vCenter Server virtual machine to these hosts by using a rule (vSphere HA DRS required VM-to-host rule). You can set up the rule and then disable DRS if you do not wish to use DRS in the cluster. The greater the number of hosts that are not managed by Auto Deploy, the greater your resilience to host failures.


This approach is not suitable if you use Auto Start Manager because Auto Start Manager is not supported in a cluster enabled for vSphere HA.

Auto Deploy Networking Best Practices

Prevent networking problems by following Auto Deploy networking best practices.

Auto Deploy and IPv6: Because Auto Deploy takes advantage of the iPXE infrastructure, it requires that each host has an IPv4 address. You can use those hosts in a mixed-mode deployment where each host has both an IPv4 address and an IPv6 address.
IP Address Allocation: Using DHCP reservations is recommended for address allocation. Fixed IP addresses are supported by the host customization mechanism, but providing input for each host is not recommended.
VLAN Considerations: Using Auto Deploy in environments that do not use VLANs is recommended.

If you intend to use Auto Deploy in an environment that uses VLANs, you must make sure that the hosts you want to provision can reach the DHCP server. How hosts are assigned to a VLAN depends on the setup at your site. The VLAN ID might be assigned by the switch or by the router, or you might be able to set the VLAN ID in the host’s BIOS or through the host profile. Contact your network administrator to determine the steps for allowing hosts to reach the DHCP server.

Auto Deploy and VMware Tools Best Practices

When you provision hosts with Auto Deploy, you can select an image profile that includes VMware Tools, or select the smaller image associated with the image profile that does not contain VMware Tools.

You can download two image profiles from the VMware download site.

xxxxx-standard: An image profile that includes the VMware Tools binaries required by the guest operating system running inside a virtual machine. The image is usually named esxi-version-xxxxx-standard.
xxxxx-no-tools: An image profile that does not include the VMware Tools binaries. This image profile is usually smaller, has less memory overhead, and boots faster in a PXE-boot environment. This image is usually named esxi-version-xxxxx-no-tools.

Starting with vSphere 5.0 Update 1, you can deploy ESXi using either image.

If the network boot time is of no concern and your environment has sufficient extra memory and storage overhead, choose the image that includes VMware Tools.
If you find the network boot time too slow when using the standard image, or if you want to save some space on the hosts, you can use the xxxxx-no-tools image profile and place the tools binaries on shared storage.

Follow these steps if you decide to use the xxxxx-no-tools image profile.

1. Boot an ESXi host that was not provisioned with Auto Deploy.
2. Copy the /productLocker directory from the ESXi host to shared storage.
3. Change the UserVars.ProductLockerLocation variable to point to the copied /productLocker directory on the shared storage.

  a. In the vSphere Web Client, select the reference host and click the Manage tab.
  b. Select Settings and click Advanced System Settings.
  c. Filter for uservars, and select UserVars.ProductLockerLocation.
  d. Click the pen icon and edit the location so it points to the shared storage.
4. Create a host profile from the reference host.
5. Create an Auto Deploy rule that assigns the xxxxx-no-tools image profile and the host profile from the reference host to all other hosts.
6. Boot your target hosts with the rule so they pick up the product locker location from the reference host.
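Conceptually, the rule in step 5 pairs a matching pattern with an image profile and a host profile. Below is a hypothetical model of that pairing; the real rules are created with the Auto Deploy PowerCLI cmdlets, not this code.

```python
from dataclasses import dataclass

@dataclass
class DeployRule:
    """Hypothetical model of an Auto Deploy rule: hosts whose attributes
    match the pattern get the given image profile and host profile."""
    pattern: dict        # attribute -> required value; empty pattern matches all hosts
    image_profile: str
    host_profile: str

    def matches(self, host_attrs: dict) -> bool:
        return all(host_attrs.get(k) == v for k, v in self.pattern.items())

# A rule assigning the no-tools image and the reference host's profile to all hosts.
rule = DeployRule(pattern={},
                  image_profile="xxxxx-no-tools",
                  host_profile="reference-host-profile")

print(rule.matches({"vendor": "ExampleVendor"}))  # → True (empty pattern matches all)
```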

Auto Deploy Load Management Best Practice

Simultaneously booting large numbers of hosts places a significant load on the Auto Deploy server. Because Auto Deploy is a web server at its core, you can use existing web server scaling technologies to help distribute the load. For example, one or more caching reverse proxy servers can be used with Auto Deploy. The reverse proxies serve up the static files that make up the majority of an ESXi boot image. Configure the reverse proxy to cache static content and pass all requests through to the Auto Deploy server. See the VMware Technical Publications Video Using Reverse Web Proxy Servers for Auto Deploy.

Configure the hosts to boot off the reverse proxy by using multiple TFTP servers, one for each reverse proxy server. Finally, set up the DHCP server to send different hosts to different TFTP servers.

When you boot the hosts, the DHCP server sends them to different TFTP servers. Each TFTP server sends hosts to a different server, either the Auto Deploy server or a reverse proxy server, significantly reducing the load on the Auto Deploy server.
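The DHCP-to-TFTP fan-out described above can be pictured as a round-robin assignment. The server names below are made up; the policy of spreading booting hosts across proxies and the Auto Deploy server is the point.

```python
from itertools import cycle

# Hypothetical TFTP endpoints: two caching reverse proxies plus the
# Auto Deploy server itself.
TFTP_SERVERS = ["tftp-proxy-1.example.com",
                "tftp-proxy-2.example.com",
                "tftp-autodeploy.example.com"]

def assign_tftp_servers(hosts):
    """Map each booting host to a TFTP server, round-robin, the way a DHCP
    server could hand out different next-server values."""
    return dict(zip(hosts, cycle(TFTP_SERVERS)))

plan = assign_tftp_servers([f"esxi-{i:02d}" for i in range(6)])
print(plan["esxi-00"])  # → tftp-proxy-1.example.com
```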

After a massive power outage, VMware recommends that you bring up the hosts on a per-cluster basis. If you bring up multiple clusters simultaneously, the Auto Deploy server might experience CPU bottlenecks. All hosts come up after a potential delay. The bottleneck is less severe if you set up the reverse proxy.

vSphere Auto Deploy Logging and Troubleshooting Best Practices

To resolve problems you encounter with vSphere Auto Deploy, use the Auto Deploy logging information from the vSphere Web Client and set up your environment to send logging information and core dumps to remote hosts.

Auto Deploy Logs
1. In a vSphere Web Client connected to the vCenter Server system that Auto Deploy is registered with, go to the inventory list and select the vCenter Server system.
2. Click the Manage tab, select Settings, and click Auto Deploy.
3. Click Download Log to download the log file.
Setting Up Syslog: Set up a remote syslog server. See the vCenter Server and Host Management documentation for syslog server configuration information. Configure the first host you boot to use the remote syslog server and apply that host’s host profile to all other target hosts. Optionally, install and use the vSphere Syslog Collector, a vCenter Server support tool that provides a unified architecture for system logging and enables network logging and combining of logs from multiple hosts.
Setting Up ESXi Dump Collector: Hosts provisioned with Auto Deploy do not have a local disk to store core dumps on. Install ESXi Dump Collector, set up your first host so that all core dumps are directed to ESXi Dump Collector, and apply the host profile from that host to all other hosts. See Configure ESXi Dump Collector with ESXCLI.
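On the wire, the remote syslog setup above amounts to small UDP datagrams sent to the collector, conventionally on port 514. A simplified sketch of the classic BSD syslog (RFC 3164) format, omitting the timestamp field for brevity; the host and tag names are made up.

```python
def rfc3164_message(hostname: str, tag: str, msg: str,
                    facility: int = 1, severity: int = 6) -> bytes:
    """Build a simplified BSD-syslog datagram (timestamp omitted).
    PRI is facility * 8 + severity; user.info gives <14>."""
    pri = facility * 8 + severity
    return f"<{pri}>{hostname} {tag}: {msg}".encode()

packet = rfc3164_message("esxi-01", "vmkernel", "boot completed")
print(packet)  # → b'<14>esxi-01 vmkernel: boot completed'

# Actually sending it is one UDP call, e.g.:
#   import socket
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("syslog.example.com", 514))
```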

Using Auto Deploy in a Production Environment

When you move from a proof of concept setup to a production environment, take care to make the environment resilient.

Protect the Auto Deploy server. Auto Deploy and vSphere HA Best Practices gives an overview of the options you have.
Protect all other servers in your environment including the DHCP server and the TFTP server.
Follow VMware security guidelines, including those outlined in Auto Deploy Security Considerations.

Troubleshooting Best Practices

Approach troubleshooting and problem-solving systematically, and take notes so you can trace your steps. Follow these guidelines to resolve issues with your client application.

  • Do not change more than one thing at a time, and document each change and its result. Try to isolate the problem: Does it seem to be local to the client? Is it an error message generated by the server? Is it a network problem between client and server?
  • Use the logging facilities for your programming language to capture runtime information for the client application. See the Log.cs sample application as an example.
  • C# client logging example: \SDK\vsphere-ws\dotnet\cs\samples\AppUtil\Log.cs
  • Use the following VMware tools for analysis and to facilitate debugging.
  • vSphere Web Services API. The DiagnosticManager service interface allows you to obtain information from the server log files, and to create a diagnostic bundle that contains all system log files and all server configuration information. The vSphere Client and the MOB provide graphical and Web based access to the DiagnosticManager. PerformanceManager supports exploration of bottlenecks. See vSphere Performance.
  • Managed Object Browser (MOB). The MOB provides direct access to live runtime server-side objects. You can use the MOB to explore the object hierarchy, obtain property values, and invoke methods. See Managed Object Browser.
  • VMware vSphere Client GUI. The vSphere Client allows you to examine log files for ESX/ESXi, vCenter Server, and virtual machines, and to change log level settings. Use vSphere Client menu commands to create reports that summarize configuration information, performance, and other details, and to export diagnostic bundles. The vSphere Client maintains its own local log files.
  • When an ESX/ESXi host is managed by vCenter Server, vSphere API calls cannot contact the host directly: they must go through vCenter. If necessary, especially during disaster recovery, the administrator must disassociate the ESXi host from vCenter Server before the host can be contacted directly.
  • Setting host acceptance levels is a best practice that allows you to specify which VIBs can be installed on a host and used with an image profile, and the level of support you can expect for a VIB. For example, you would probably set a more restrictive acceptance level for hosts in a production environment than for hosts in a testing environment.
  • Update Manager uses a SQL Server or Oracle database. You should use a dedicated database for Update Manager, not a database shared with vCenter Server, and should back up the database periodically. Best practice is to have the database on the same computer as Update Manager or on a computer in the local network.
  • NOTE   vCenter Server 5.5 supports connection between vCenter Server and vCenter Server components by IP address only if the IP address is IPv4-compliant. To connect to a vCenter Server system in an IPv6 environment, you must use the fully qualified domain name (FQDN) or host name of the vCenter Server. The best practice is to use the FQDN, which works in all cases, instead of the IP address, which can change if assigned by DHCP.
  • (Optional) The maximum size for the vSphere Auto Deploy repository. Best practice is to allocate 2GB to have enough room for four image profiles and some extra space. Each image profile requires approximately 350MB. Determine how much space to reserve for the vSphere Auto Deploy repository by considering how many image profiles you expect to use. The specified disk must have at least that much free space.
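The sizing guidance in the last bullet is simple arithmetic: roughly 350 MB per image profile plus headroom. A sketch with an assumed headroom value:

```python
PROFILE_MB = 350   # approximate size of one image profile (from the text)

def repo_size_mb(n_profiles: int, headroom_mb: int = 600) -> int:
    """Rough Auto Deploy repository sizing: profiles plus assumed headroom."""
    return n_profiles * PROFILE_MB + headroom_mb

print(repo_size_mb(4))  # → 2000, comfortably within the recommended 2 GB (2048 MB)
```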

About Ahmad Sabry ElGendi
This entry was posted in VCAP5-DCA, Vmware. Bookmark the permalink.
