Should I assign multiple cores per socket to a VM's vCPUs?
Since vSphere 4.1 you have had the option of setting the number of cores per virtual socket in a virtual machine. This allows you to present a VM's logical processors to the guest in specific socket and core configurations. This feature is commonly known as corespersocket.
So the question arises… is it better to assign multiple sockets or multiple cores per socket?
Below we look into the best practices and when to use which option…
This feature was originally introduced to address licensing issues, where some operating systems or applications limited the number of sockets that could be used per license but did not limit the number of cores. An example of this is Microsoft SQL Server licensing.
Because of this, you now have two options when assigning vCPUs to a virtual machine:
- Number of Sockets
- Number of cores per socket
It has been a common belief that the processor presentation does not affect the performance of the VM. The truth, however, is that it can impact performance, depending on the size and shape of the virtual NUMA topology presented to the guest operating system.
Example: Your VM needs 12 vCPUs. Do you assign 12 sockets with 1 core per socket, 1 socket with 12 cores, or a variation in between?
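To make the sizing question concrete, here is a small Python sketch (the function name `cpu_layouts` is my own, purely for illustration) that enumerates every valid socket/cores-per-socket combination for a given vCPU count. The vCPU total is always sockets multiplied by cores per socket, so the valid layouts are simply the divisor pairs:

```python
# Enumerate the possible socket/core layouts for a given vCPU count.
# Total vCPUs = sockets * cores_per_socket, so each divisor of the
# total gives one valid layout.

def cpu_layouts(total_vcpus):
    """Return all (sockets, cores_per_socket) pairs for a vCPU count."""
    return [(total_vcpus // cores, cores)
            for cores in range(1, total_vcpus + 1)
            if total_vcpus % cores == 0]

for sockets, cores in cpu_layouts(12):
    print(f"{sockets} socket(s) x {cores} core(s) per socket")
```

For 12 vCPUs this yields six layouts, from the default "wide and flat" 12 x 1 down to a single 12-core socket. Which of these performs best depends on the host's NUMA topology, as discussed below.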
How to assign multiple cores to a vCPU
To assign multiple cores to a vCPU on a VMware virtual machine, follow VMware KB 1010184.
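For reference, the setting made through the vSphere Client ultimately lands in the VM's .vmx configuration file. A sketch of the two relevant keys, here showing 12 vCPUs presented as 2 sockets with 6 cores each (the values are illustrative, check the KB article for your vSphere version):

```
numvcpus = "12"
cpuid.coresPerSocket = "6"
```

The number of virtual sockets is not set directly; it is derived as numvcpus divided by cpuid.coresPerSocket.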
Recommended Best Practices
The following information has been taken from the VMware vSphere Blog article "Does corespersocket Affect Performance?":
#1 When creating a virtual machine, by default, vSphere will create as many virtual sockets as you’ve requested vCPUs and the cores per socket is equal to one. I think of this configuration as “wide” and “flat.” This will enable vNUMA to select and present the best virtual NUMA topology to the guest operating system, which will be optimal on the underlying physical topology.
#2 When you must change the cores per socket, commonly due to licensing constraints, ensure you mirror the physical server's NUMA topology. This is because when a virtual machine is no longer configured by default as “wide” and “flat,” vNUMA will not automatically pick the best NUMA configuration based on the physical server, but will instead honor your configuration – right or wrong – potentially leading to a topology mismatch that does affect performance.
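The "mirror the physical topology" advice can be sanity-checked with a short sketch. This assumes a hypothetical host whose NUMA nodes each hold 8 cores (substitute your own hardware's node size); the function name `numa_friendly` is mine, for illustration only:

```python
# Sanity-check a cores-per-socket choice against the host's NUMA node size.
# Assumption: each physical NUMA node on this hypothetical host has 8 cores.

PHYSICAL_NUMA_NODE_CORES = 8  # cores per physical NUMA node (assumption)

def numa_friendly(cores_per_socket, node_cores=PHYSICAL_NUMA_NODE_CORES):
    """A virtual socket should not span physical NUMA node boundaries,
    i.e. the node's core count should divide evenly by cores_per_socket."""
    return node_cores % cores_per_socket == 0

print(numa_friendly(8))   # a virtual socket matches one node exactly
print(numa_friendly(12))  # a 12-core virtual socket would span two nodes
```

The idea is simply that a virtual socket larger than, or not evenly dividing, a physical NUMA node forces memory accesses across node boundaries, which is the topology mismatch the blog article warns about.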
The same VMware vSphere Blog article, "Does corespersocket Affect Performance?", tests the performance of a VM in both scenarios – sockets vs cores per socket CPU presentation.
The results are pretty astonishing – up to a 31% increase in execution time when assigning 1 socket with 24 cores per socket versus 24 sockets with 1 core per socket.
Things to consider
The critical component in the test case above was NUMA presentation to the VM. Therefore there are a few things to take away from this:
- You need to understand your VM and the application/guest OS that is running on it
- You need to understand your physical host NUMA topology and understand how that can potentially affect your VM when assigning vCPUs to that guest
- Remember, virtual NUMA is only enabled by default when a virtual machine has more than 8 vCPUs
Rule of Thumb: What option should I go with most of the time?
So based on the test results above, the rule of thumb when assigning multiple vCPUs to a virtual machine is:
Assign multiple sockets (with 1 core per socket) to a virtual machine, unless there is a specific reason why the guest requires multiple cores per socket (e.g. licensing requirements)
If you want more information have a look at the following links:
- What is NUMA
- Using virtual NUMA
- Setting the number of cores per CPU in a virtual machine
- Does corespersocket affect performance?
I am interested to hear your comments and past experiences with either of these settings in the comments below….