Level 6

Certificate error on installed FNMS Kubernetes agents

We are using FNMS 2021 R1 and want to retrieve inventory information from AKS (Azure Kubernetes Service). We are using an external DigiCert certificate on the inventory beacon and have extracted a cert.pem with the license key on the agent.

We get the messages below each time the pod restarts, or when we manually trigger the mgspolicy and ndtrack commands. However, if we set CheckCertificateRevocation to False, we do not see this error.

[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0000002: No such file or directory
[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0500454: Failed to write local file /var/opt/managesoft/etc/ssl/ocsp/d170e61a.ocsp
[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0000002: No such file or directory

[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0500454: Failed to write local file /var/opt/managesoft/etc/ssl/crls/e83d98dd.r0
[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0000002: No such file or directory
[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0500454: Failed to write local file /var/opt/managesoft/etc/ssl/crls/e83d98dd.r0

[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE050057C: HTTPS certificate revocation status could not be determined

[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE050044D: Failed to create remote directory /ManageSoftRL

[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0690099: Specified remote directory is invalid, or could not be created
[Mon Feb  7 13:27:36 2022 (G, 0)] {366} ERROR: Remote directory is invalid




Community Manager

I found one other report, from 2019, of similar errors appearing in logs. In that case they worked around the issue by manually creating the /var/opt/managesoft/etc/ssl/{ocsp,crls} directories. However, it doesn't appear a root cause was ever identified to explain why the directories had to be created manually in the first place.

If nobody else here has insight on a root cause, you may need to contact Flexera and work with them to dig deeper.
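For reference, the manual workaround from that 2019 report amounts to pre-creating the two directories the agent writes OCSP responses and CRLs into (the paths come from the log output above). A minimal sketch, assuming the pod's filesystem is writable at that path:

```shell
# Pre-create the ocsp and crls directories the agent expects.
# SSL_DIR defaults to the agent's standard location; it can be
# overridden for testing in a non-agent environment.
SSL_DIR="${SSL_DIR:-/var/opt/managesoft/etc/ssl}"
mkdir -p "$SSL_DIR/ocsp" "$SSL_DIR/crls"
```

Note this only helps if the volume mounted at /var/opt/managesoft/etc/ssl is writable; as discussed further down the thread, Secret and ConfigMap volumes are not.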


I haven't tried creating the directories you mention, but I did see this in our cloud environment with our USG customer. The cloud instances couldn't reach the public URL of the CRL, so the check would fail. I just configured the install of the Unix agents to have CHECKCERTIFICATEREVOCATION set to false in the bootstrap. I made sure the server certificate check was still true, so the certificate itself was validated; the agent just didn't do the revocation lookup.


This was with the Unix agent, not Kubernetes, but I imagine it is similar.
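The bootstrap settings described above would look something like the following (a sketch assuming the plain key=value response-file format used for Unix agent installs; the file name and exact casing may differ in your environment):

```
CHECKCERTIFICATEREVOCATION=false
CHECKSERVERCERTIFICATE=true
```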

Level 3

I have the same issue on my monitor pod, but the filesystem is read-only at /var/opt/managesoft/etc/ssl, so I can't create the crls and ocsp folders manually. The cert.pem file is there, but it fails because the crls/ocsp directories don't exist.

Any ideas?

0 Kudos

A temporary solution is to disable the check:
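In config.ini terms, that check can be disabled with something like this (the section name is an assumption; the setting name is the one used earlier in the thread):

```
[ManageSoft\Common]
CheckCertificateRevocation=False
```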


After that it works.

0 Kudos

@guitms CheckCertificateRevocation=False works with the normal FNMS agent on Unix-like platforms, but when we use the same setting in Kubernetes, it vanishes every time the instance pod restarts and fetches its configuration from the monitor pod.

0 Kudos

Hey Raghuvaran,

Yeah, I have the same issue with pod restarts; I then have to run mgsconfig again every time. It's annoying, because a pod restart should not cause the deletion of extra parameters. It should be possible to specify them in the krm.yaml, but the default installation doesn't ask for extra settings.

Luckily I'm not the only one.. 😉


0 Kudos
Level 4 Flexeran

Are you using the spec.monitor.configPatches setting? This ensures that config.ini changes are applied whenever the monitor pod starts.
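As a sketch, a patch like this would go in the krm.yaml (the exact configPatches schema isn't shown in this thread, so the nested field names here are assumptions for illustration; check the Flexera Kubernetes agent documentation for the authoritative layout):

```yaml
# Hypothetical krm.yaml fragment; only spec.monitor.configPatches is
# named above, the nested fields are assumed for illustration.
spec:
  monitor:
    configPatches:
      - section: ManageSoft\Common
        properties:
          CheckCertificateRevocation: "False"
```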


Level 4 Flexeran

I'm assuming you are using the spec.monitor.tlsFiles attribute to provide the custom cert.pem. That attribute takes a VolumeSource type, which can be any of the storage types provided by Kubernetes. Using a Secret or a ConfigMap is convenient, but those volumes are read-only when mounted into the pod. You could set up some other type of volume that is read/write, for instance using a PersistentVolumeClaim, which would allow for the directories to be created and the files to be stored in them.

```yaml
tlsFiles:
  persistentVolumeClaim:
    claimName: example
```

The volume referenced in spec.monitor.tlsFiles is mounted to /var/opt/managesoft/etc/ssl, so the directory structure within it should be handled accordingly.
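For completeness, a minimal sketch of a PersistentVolumeClaim that could back such a volume (the name, storage class, and size are placeholders to adapt to your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example
spec:
  accessModes:
    - ReadWriteOnce   # a single monitor pod mounts it read/write
  resources:
    requests:
      storage: 10Mi   # the ssl directory only holds certs, CRLs, OCSP files
```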

@Colvin We are using a valid DigiCert certificate and we don't want to set CheckCertificateRevocation or CheckServerCertificate to False; we want the agent to use trusted, encrypted communication at all times, and that's not happening.


We have chosen the above approach just as a workaround. Are there any suggestions for using port 443 and configuring it the correct way, please?

0 Kudos

@raghuvaran_ram If the beacon URL uses the https scheme and the certificate served by the beacon was issued by a globally trusted authority, then it should all work without further configuration.

If the beacon's certificate was issued by a custom authority, then using the spec.monitor.tlsFiles feature and supplying a valid chain for the authority will allow you to avoid setting CheckServerCertificate=False, but may require you to set CheckCertificateRevocation=False when you're using a read-only volume type.

Using a writable volume type for spec.monitor.tlsFiles and placing your cert.pem file within it should have everything working as you expect.

If it is still not working, then you likely have some other issue such as an invalid certificate chain or one that does not contain the correct certificates, or some issue with end-to-end communication between the pod and the beacon.

@Colvin  Thanks for your reply.

Yes, I have tested with a valid global certificate and also a trusted internal certificate with a valid chain. The non-Kubernetes FNMS agents are still able to communicate successfully using port 443, but the Kubernetes agent is not. How do I find what is missing? Are there any logs within the containers that can help?


0 Kudos

@raghuvaran_ram The logs generated by the standard agent component can be found within the monitor pod at their standard location on Linux, /var/opt/managesoft/log. The Kubernetes agent binary, krm, writes its logs to standard output, so they can be viewed using the kubectl logs command, although it generally won't log anything useful for this particular issue. You can access a shell inside of the monitor pod and use some test commands, for instance the openssl s_client command, to test the certificate and communication with the beacon.
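Putting those suggestions together, a sketch of the troubleshooting commands (the namespace, pod name, and beacon hostname are placeholders, not values from this thread):

```
# View the krm binary's own logs on standard output:
kubectl -n flexera logs <monitor-pod>

# Inspect the standard agent logs inside the monitor pod:
kubectl -n flexera exec <monitor-pod> -- \
  tail -n 50 /var/opt/managesoft/log/policy.log

# From a shell inside the pod, test the TLS handshake with the beacon
# and print the certificate chain it serves:
openssl s_client -connect beacon.example.com:443 \
  -servername beacon.example.com -showcerts </dev/null
```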