Certificate error on installed FNMS Kubernetes agents

We are using FNMS 2021 R1 and want to retrieve inventory information from AKS (Azure Kubernetes Service). We are using an external DigiCert certificate on the inventory beacon, and extracted a cert.pem that is deployed along with the license key on the agent.

We are getting the errors below each time the pod restarts, or when we manually trigger the mgspolicy and ndtrack commands. However, if we set CheckCertificateRevocation to False, we do not see these errors.

[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0000002: No such file or directory
[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0500454: Failed to write local file /var/opt/managesoft/etc/ssl/ocsp/d170e61a.ocsp
[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0000002: No such file or directory
[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0500454: Failed to write local file /var/opt/managesoft/etc/ssl/crls/e83d98dd.r0
[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0000002: No such file or directory
[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0500454: Failed to write local file /var/opt/managesoft/etc/ssl/crls/e83d98dd.r0
[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE050057C: HTTPS certificate revocation status could not be determined
[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE050044D: Failed to create remote directory /ManageSoftRL
[Mon Feb  7 13:27:36 2022 (N, 0)] {366} Error 0xE0690099: Specified remote directory is invalid, or could not be created
[Mon Feb  7 13:27:36 2022 (G, 0)] {366} ERROR: Remote directory is invalid

(14) Replies
ChrisG (Community Manager)

I found one other report from 2019 of similar errors appearing in logs. In that case they worked around the issue by manually creating the /var/opt/managesoft/etc/ssl/{ocsp,crls} directories. However, it doesn't appear a root cause was identified to explain why the directories had to be created manually in the first place.
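
For reference, a minimal sketch of that workaround, using the paths from the log output above:

mkdir -p /var/opt/managesoft/etc/ssl/ocsp /var/opt/managesoft/etc/ssl/crls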

If nobody else here has insight on a root cause, you may need to contact Flexera and work with them to dig deeper.


I haven't tried creating the directories you mention, but I did see this in our cloud environment with our USG customer. The cloud instances couldn't reach the public URL of the CRL, so the check would fail. I just configured the install of the Unix agents to have CHECKCERTIFICATEREVOCATION set to false in the bootstrap, and ensured the server certificate check stayed true, so the certificate was still validated; it just didn't do the revocation lookup.

 

This was with the Unix agent, not Kubernetes, but I imagine it is similar.
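
In case it's useful, a sketch of what that looks like in the bootstrap file (mgssetup.ini). The MGSFT_ variable names here follow the usual Unix agent convention, but they are my assumption; verify them against the agent install docs for your version:

# variable names assumed from the MGSFT_ convention; check the install docs
MGSFT_CHECKSERVERCERTIFICATE=true
MGSFT_CHECKCERTIFICATEREVOCATION=false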

I have the same issue on my monitor pod, but the filesystem at /var/opt/managesoft/etc/ssl is read-only, so I can't create the crls and ocsp folders manually. The cert.pem file is there, but it fails because the crls/ocsp directories don't exist.

Any ideas?

Temp solution is to disable the check:

CheckCertificateRevocation=False

After that it works.
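
For reference, on a standard Linux agent this preference lives in /var/opt/managesoft/etc/config.ini. A minimal sketch follows; the section name is my assumption based on other agent preferences, so verify it against your own config.ini:

[ManageSoft\Common]
CheckCertificateRevocation=False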

@guitms CheckCertificateRevocation=False works in the normal FNMS agent on Unix-like platforms, but when we use the same setting in Kubernetes, it vanishes every time the instance pod starts and fetches its configuration from the monitor pod.

Hey Raghuvaran,

Yeah, I have the same issue with pod restarts; I then have to run mgsconfig again every time. It's annoying, because a pod restart should not cause the deletion of extra parameters. They should be specified in the krm.yaml, but the default installation doesn't ask for extra settings.

Luckily I'm not the only one.. 😉

Thanks

Are you using the spec.monitor.configPatches setting? This ensures that config.ini changes are applied whenever the monitor pod starts.

https://docs.flexera.com/fnms/EN/WebHelp/index.html#tasks/InvSet-KubPatchConfigIni.html
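
I don't have the exact schema in front of me (the linked page defines it), but conceptually each patch entry names the config.ini section, key, and value to apply when the monitor pod starts. As an illustrative sketch only -- the field names below are assumptions, not the documented schema:

spec:
  monitor:
    configPatches:
      # field names assumed for illustration; see the linked doc for the real schema
      - section: ManageSoft\Common
        key: CheckCertificateRevocation
        value: "False"

Something along these lines would let the setting survive pod restarts instead of having to re-run mgsconfig by hand.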

I'm assuming you are using the spec.monitor.tlsFiles attribute to provide the custom cert.pem. That attribute takes a VolumeSource type, which can be any of the storage types provided by Kubernetes. Using a Secret or a ConfigMap is convenient, but those volumes are read-only when mounted into the pod. You could set up some other type of volume that is read/write, for instance using a PersistentVolumeClaim, which would allow for the directories to be created and the files to be stored in them.

spec:
  monitor:
    tlsFiles:
      persistentVolumeClaim:
        claimName: example

The volume referenced in spec.monitor.tlsFiles is mounted to /var/opt/managesoft/etc/ssl, so the directory structure within it should be handled accordingly.
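
For completeness, a minimal PersistentVolumeClaim that the claimName above could refer to; the name, access mode, and size are placeholders to adapt to your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi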

@Colvin We are using a valid DigiCert certificate, and we don't want to set CheckCertificateRevocation or CheckServerCertificate to false; we want the agent to use trusted, encrypted communication all the time, and it's not happening.

We have chosen the above approach just as a workaround. Are there any suggestions for using port 443 and configuring this the correct way, please?

@raghuvaran_ram If the beacon URL uses the https scheme and the certificate served by the beacon was issued by a globally trusted authority, then it should all work without further configuration.

If the beacon's certificate was issued by a custom authority, then using the spec.monitor.tlsFiles feature and supplying a valid chain for the authority will allow you to avoid setting CheckServerCertificate=False, but may require you to set CheckCertificateRevocation=False when you're using a read-only volume type.

Using a writable volume type for spec.monitor.tlsFiles and placing your cert.pem file within it should have everything working as you expect.

If it is still not working, then you likely have some other issue such as an invalid certificate chain or one that does not contain the correct certificates, or some issue with end-to-end communication between the pod and the beacon.

@Colvin  Thanks for your reply.

Yes, I have tested with a valid global certificate and also with a trusted internal certificate with a valid chain. The non-Kubernetes FNMS agents are still able to communicate successfully over port 443, but the Kubernetes agent is not. How do I find what is missing? Are there any logs within the containers that can help?

@raghuvaran_ram The logs generated by the standard agent component can be found within the monitor pod at their standard location on Linux, /var/opt/managesoft/log. The Kubernetes agent binary, krm, writes its logs to standard output, so they can be viewed using the kubectl logs command, although it generally won't log anything useful for this particular issue. You can access a shell inside of the monitor pod and use some test commands, for instance the openssl s_client command, to test the certificate and communication with the beacon.
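
For example (the pod name, namespace, and beacon hostname below are placeholders):

# open a shell in the monitor pod
kubectl exec -it <monitor-pod> -n <namespace> -- /bin/sh

# from inside the pod, inspect the TLS handshake and the chain the beacon serves
openssl s_client -connect beacon.example.com:443 -showcerts

The s_client output shows the certificate chain the beacon actually presents and any verification errors, which usually narrows down whether the problem is the chain or the connectivity.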

@Colvin When I use the cert.pem file to secure the HTTPS communication between the beacon and the client, what should be placed in cert.pem, and from where on the beacon should I extract the chain? Please help for both an internal certificate and a DigiCert certificate.

@raghuvaran_ram cert.pem in the Kubernetes agent is identical to a standard agent installation on Linux -- only the means for setting it up is different -- and one could assume that if you need it for Kubernetes then you also need it for standard installations. You could reference one of the standard installations to see how it is set up, if you have one available, and you can probably use the same cert.pem.
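
If you don't have an existing installation to copy from, one common way to capture the chain a server presents is with openssl. The hostname is a placeholder, and note that servers often don't send the root CA certificate, so you may still need to append it to the file yourself:

openssl s_client -connect beacon.example.com:443 -showcerts </dev/null \
  | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > cert.pem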

Docs for setting it up on the Kubernetes agent are here; docs for the standard agent on Linux are here.