Windows Server Standard Core License Consumption inaccuracy for dynamically provisioned cluster
Has anyone come across the issue below while trying to optimize Windows Server Standard Core License Consumption for a cluster that is dynamically provisioned?
- Windows Server Standard Core License Consumption (for a dynamically provisioned cluster environment): In the case of Windows Server Standard license allocations for a dynamically provisioned cluster, the tool does not calculate consumption correctly. Let me illustrate this with a sample scenario:
- (Actual licensing to be applied): Consider a cluster with 2 hosts (Host A with a 4 processors × 4 cores configuration, Host B with 6 processors × 8 cores), each running 3 VMs with Windows Server Standard 2016. Because the cluster is dynamically provisioned, any VM can run on any host, so at any given point in time each host must be licensed as if it were hosting all 6 VMs (3 from Host A plus 3 from Host B). Applying the 8-cores-per-processor minimum, one full license set for Host A is 32 cores, and 6 VMs require 3 sets (32 × 3 = 96 cores); for Host B one set is 48 cores, and 6 VMs again require 3 sets (48 × 3 = 144 cores). The cluster total is therefore 96 + 144 = 240 cores of Windows Server Standard 2016 licenses.
- Tool consumption logic: However, after creating the license in the tool (including setting up the Product Use Rights and tagging the appropriate application) and allocating it to Host A and Host B, the tool assumes the VMs on each host are static to that host at that instant and calculates consumption accordingly. The tool's allocation is therefore 32 × 2 = 64 cores for Host A (3 VMs, so 2 license sets) and 48 × 2 = 96 cores for Host B (3 VMs, so 2 sets), for a cluster total of 64 + 96 = 160 cores of Windows Server Standard licenses. The attached Excel file contains a live example from one customer tenant's data, to make the delta between the actual licensing and the tool's consumption logic easy to see.
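The two calculations above can be sketched in a few lines of Python. The 8-cores-per-processor / 16-cores-per-server minimums and the 2-VMs-per-license-set rule are Microsoft's Windows Server 2016 Standard core licensing rules; the function names and data layout are purely illustrative:

```python
# Contrast the cluster-aware ("dynamic") licensing from the scenario
# with the tool's per-host ("static") consumption. Illustrative sketch.
import math

def licensable_cores(procs: int, cores_per_proc: int) -> int:
    # Windows Server 2016+ minimums: 8 cores per processor, 16 per server
    return max(procs * max(cores_per_proc, 8), 16)

def standard_cores(host_cores: int, vms: int) -> int:
    # One full set of core licenses covers 2 VMs; more VMs "stack" sets
    return host_cores * math.ceil(vms / 2)

hosts = {"Host A": {"hw": (4, 4), "vms": 3},
         "Host B": {"hw": (6, 8), "vms": 3}}
cluster_vms = sum(h["vms"] for h in hosts.values())  # 6

# Tool: each host licensed only for its own 3 VMs -> 64 + 96 = 160
tool_total = sum(standard_cores(licensable_cores(*h["hw"]), h["vms"])
                 for h in hosts.values())

# Actual: every host licensed for all 6 cluster VMs -> 96 + 144 = 240
actual_total = sum(standard_cores(licensable_cores(*h["hw"]), cluster_vms)
                   for h in hosts.values())

print(tool_total, actual_total)  # 160 240
```

The 80-core gap per cluster is exactly the delta described above, and it grows with the number of hosts and VMs.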
This is definitely happening exactly as you describe, and it is just wrong. I believe I posted this as an enhancement request, but never got feedback. This really should be voted up.
Thank you for the reply!!
I second your opinion on voting this issue up, as it has quite an impact on the license position, and the product involved here is from the server pool.
I am currently writing a script to allocate Windows Server licenses optimally (Standard vs. Datacenter). I wanted to leave the actual consumption calculation to FNMS, but it looks like proper Windows Server Standard cluster calculation is still not happening. So I'm giving this thread a push, and also: https://flexerasfdc.ideas.aha.io/ideas/ITAM-I-239
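For the Standard-vs-Datacenter decision such a script has to make, the core count cancels out (both editions must license every core on the host), so what remains is comparing the number of Standard license sets the VM count forces against the Datacenter price. This is a minimal sketch; the price ratio of 5.5 is a placeholder assumption, since actual pricing varies by agreement:

```python
# Hypothetical per-host edition chooser. The dc_to_std_price_ratio
# (cost of one Datacenter core vs. one Standard core) is an assumed
# placeholder, not a real list price.
import math

def standard_sets(vms: int) -> int:
    # One full set of Standard core licenses covers 2 VMs (OSEs)
    return math.ceil(vms / 2)

def cheaper_edition(vms: int, dc_to_std_price_ratio: float = 5.5) -> str:
    # Datacenter licenses all cores once for unlimited VMs; Standard
    # re-licenses all cores per 2 VMs, so compare sets vs. the ratio.
    return ("Datacenter" if dc_to_std_price_ratio < standard_sets(vms)
            else "Standard")

print(cheaper_edition(6))   # 3 Standard sets -> Standard
print(cheaper_edition(12))  # 6 Standard sets -> Datacenter
```

Note that in a dynamically provisioned cluster, `vms` here must be the number of VMs the host could be required to cover, not just the VMs currently placed on it.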
Hello Markward and Ravi,
Indeed, Windows Server licensing is complex and was not well addressed by FNMS.
A custom report for Windows Server Optimization has just been updated on: https://community.flexera.com/t5/FlexNet-Manager-Services/FNMS-Windows-Server-and-SQL-Server-Optimization-reports-on/tac-p/206558#M52
The report has been enhanced: the average number of VMs per host turned out not to be an ideal metric (after intensive testing, it produced even lower consumption than the current-snapshot optimization).
The v3 just uploaded contains the worst case, which is so unfair to customers, but also two new metrics: the actual 90-day peak and the "highest number of VMs on a host in the cluster", which falls between the actual peak and the worst case. On an environment with 2,500 ESX hosts and 30,000 Windows Servers, the difference between the 90-day peak and the worst case is €15 million. The 90-day peak is well worth considering. More details: https://www.youtube.com/watch?v=nMs3VWJ_snY
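The three consumption bases mentioned above can be sketched from inventory snapshots. Everything here is illustrative (the snapshot layout, host names, and counts are made up); it only shows how the metrics relate, with the worst case ≥ highest-VMs-on-a-host ≥ each host's own 90-day peak:

```python
# Hypothetical 90-day history: each snapshot maps host -> VM count
# observed at that point in time. Data is invented for illustration.
from collections import defaultdict

snapshots = [
    {"esx1": 4, "esx2": 2},
    {"esx1": 3, "esx2": 3},
    {"esx1": 5, "esx2": 1},
]

# Per-host 90-day peak: the most VMs each host was ever seen running
peak_per_host = defaultdict(int)
for snap in snapshots:
    for host, n in snap.items():
        peak_per_host[host] = max(peak_per_host[host], n)

# Worst case: every host must cover all VMs in the cluster at once
worst_case = max(sum(s.values()) for s in snapshots)

# Middle ground: the highest VM count observed on any single host
highest_single_host = max(peak_per_host.values())

print(dict(peak_per_host), highest_single_host, worst_case)
# e.g. esx1 peaks at 5, esx2 at 3; 5 <= worst case of 6
```

Licensing every host for `highest_single_host` (or its own 90-day peak, where the license terms allow it) instead of `worst_case` is what produces the kind of saving mentioned above.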