Revenera Community > FlexNet Embedded > FlexNet Embedded Knowledge Base
Tuning XT Kit Performance
Summary
Ways to tune an XT kit to optimize performance
Question
Please elaborate on how to go about optimizing the performance of XT kits.
Answer
The XT kits now include the Identity Update Utility (identityupdateutil), which we provide to give publishers a degree of control over the behavior of various low-level mechanisms used by the client. It takes client identity data as an argument and produces an updated version containing extra configuration data. When that identity data is loaded by the XT kits, those settings are used to configure the corresponding behavior. We will probably continue to add new options over time, so it's always worth picking up the latest docs.
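As a rough sketch, a basic invocation might look like the following. The input and output file names here are hypothetical, and the exact argument order may differ between kit versions, so check the utility's documentation for your release:

```shell
# Read the existing client identity data and write an updated copy
# with extra configuration data embedded.
# File names are examples only; substitute your own identity files.
identityupdateutil IdentityClient.bin IdentityClientUpdated.bin
```

The tuning switches described below are passed on this same command line.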
Observations
In general, the big items that impact performance of the XT kits are either:
- Operating system mechanisms to discover host-ids.
- Operating system mechanisms to detect virtualization.
- Tamper resistance checks on image integrity (e.g. checksums or signature validation).
Tamper resistance protection of FlxCore is an up-front check which we do in one of the early API calls (which one depends on whether you use the C, Java, or C# kit, and on the platform). Ideally we'd like to offer producers the choice to control that, dial it back, disable it, etc. - but the reality is that we can't think of a way to disable it that doesn't weaken the security (or cost us significantly in QA/support). This check has an impact, but it only happens at the start of the process, so it is mainly a problem in situations where startup time matters.
With the passive changes outlined below we hope that virtualization detection performance will cease to be a problem, and we may look to add the ability to cache detection going forward if it should prove to be a problem again.
Host-id detection performance is a moving target for us. Hopefully the caching mechanism described below deals with the concerns that come up most often, and we also provide further tweaks to control which mechanisms are used if startup performance remains a problem.
Passive Changes
Before we go into using the tool, it's worth noting that we've also made a number of other performance tuning improvements (2017.11 onward) which take effect automatically - you don't need to do anything to benefit from them:
- At least on Windows, it turns out that detecting the particular type of virtual machine is much more costly than simply detecting that we are running virtualized. Since nothing appeared to actually use the VM type (aside from idle curiosity), we no longer look it up by default, leading to some nice performance improvements. If you still want it, the publisher needs to enable it using the Capability Request addVmInfo API.
- We optimized some code paths that were unnecessarily iterating across host-ids repeatedly. The effect was (I think) cumulative across the number of features, so not everyone will see a benefit from this optimization.
Active Changes: Host-ids
Explicitly, if host-id lookup performance is a problem, then I suggest producers experiment with the following changes, in this order:
| Change | Switch | Notes |
|---|---|---|
| Restrict host-id detection to expected types | `-restrict-device-id <type>` | This might not make much difference, but it's a good idea anyway. Note that the switch should be repeated for each type you need (some combination of `vmuuid` along with `mac_ecmc` and `mac` would be typical). This is a risk-free operation. |
| Cache host-ids | `-enable-device-id-caching all` | If you want the cache to be refreshed periodically, also provide `-caching-duration <seconds>`. If the host-id is removable and no caching duration is specified, then once you have cached that host-id you will have it for good (see our docs for details). |
| Limit detection to a faster but not always correct mechanism (Windows only) | `-restrict-device-id mac` (and don't also use `-restrict-device-id mac_ecmc`) | The situations in which this is not correct are pretty complicated. We don't see them often, and they typically involve unusual Windows networking configurations (teamed NICs, for example). Try this option last, only if you still see performance problems after testing the previous approaches. |
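Putting the first two changes together, an example invocation might look like this. The switches are those described above; the input and output file names are placeholders, and the exact argument layout may vary by kit version:

```shell
# Restrict detection to the expected host-id types and cache the
# results, refreshing the cache once a day (86400 seconds).
# Identity file names are examples only.
identityupdateutil \
    -restrict-device-id vmuuid \
    -restrict-device-id mac_ecmc \
    -restrict-device-id mac \
    -enable-device-id-caching all \
    -caching-duration 86400 \
    IdentityClient.bin IdentityClientTuned.bin
```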
Measurements
With these changes, we've also formalized the process of measuring XT kit performance. It's somewhat subjective, but we tend to find that - ignoring the startup cost of TRA and the first host-id lookup (assuming host-ids are cached), and with VM detection only (not VM type) - the Java XT kit process of creating a capability request and processing the response takes less than 40 ms on both Linux and Windows, including communications.
TL;DR
So the final result could be one of:

| Goal | Switches |
|---|---|
| Best reliable performance | `-restrict-device-id mac_ecmc -restrict-device-id mac -restrict-device-id vmuuid -enable-device-id-caching all` |
| Best performance, but may not obtain correct host-ids on Windows in every situation | `-restrict-device-id mac -restrict-device-id vmuuid -enable-device-id-caching all` |
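For instance, the faster Windows variant could be produced along these lines; as before, the identity file names are placeholders and the argument layout may differ in your kit version:

```shell
# Fastest configuration: mac (not mac_ecmc) plus vmuuid, with caching.
# May not yield correct host-ids in unusual Windows networking setups
# such as teamed NICs.
identityupdateutil \
    -restrict-device-id mac \
    -restrict-device-id vmuuid \
    -enable-device-id-caching all \
    IdentityClient.bin IdentityClientFast.bin
```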