
k8s full agent

I'm trying to deploy the k8s full inventory agent. The monitor pod is always Pending, and I got the error below. Does the ./krm.yaml generated by ./generate.sh include all of these requirements?

============

Decide how you will provide persistent storage to the Flexera Kubernetes inventory agent.

The StatefulSet used by the Flexera Kubernetes inventory agent requires a PersistentVolumeClaim (PVC) that defines its storage configuration. The storage requirements are:
  • The volume is durable/reliable across restarts and upgrades of the Kubernetes pods containing the Flexera Kubernetes inventory agent
  • The volume is not shared with any other resources in the cluster
  • A minimum of 2GB of storage is available
  • The access mode must be ReadWriteOnce (this is the default for controllers)
  • The volume mode must be Filesystem (also the default for controllers)

============

I've also attached the output of 'kubectl describe pod' for the monitor pod.


(4) Replies

Also, has anyone successfully installed the k8s full agent? Could you share your deployment.yaml and krm.yaml?

After installation, how can I verify that it works, and how will the data be shown on the portal?

Colvin (Flexera Alumni):

The monitor pod is owned by a StatefulSet, a workload type that, roughly speaking, combines a Deployment with a PersistentVolumeClaim. The pod will stay in the Pending state until the PVC is bound to a volume, because it cannot proceed until the volume has been mounted in the container. You can see in the "describe pod" screenshot an indication that the PVC has not been bound.

To determine why the PVC is not bound, you can describe the PVC in question. You can see in the "describe pod" screenshot that the PVC is named "flexera-krm-data-krm-instance-monitor-0".

    kubectl describe pvc -n flexera flexera-krm-data-krm-instance-monitor-0

There are many reasons the volume could be unbound, most of which have to do with the details of the storage class you've specified. Perhaps the storage class is not valid, or it doesn't support dynamic provisioning. There are too many possibilities to give more specific guidance here.
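
As a starting point on those first two possibilities, listing the storage classes will show whether the class named in your storage spec actually exists and which provisioner backs it; as a rule of thumb, a provisioner of 'kubernetes.io/no-provisioner' means the class does not support dynamic provisioning. The second command is a sketch; replace <storage-class-name> with the class from your spec:

    kubectl get storageclass
    kubectl describe storageclass <storage-class-name>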

Once you've identified the problem, correct it by modifying the storage spec in the KRM resource (in your krm.yaml file) and applying that to the cluster. You may have to delete the existing PVC and StatefulSet to force everything to reset.

    kubectl delete pvc -n flexera flexera-krm-data-krm-instance-monitor-0
    kubectl delete statefulset -n flexera krm-instance-monitor
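
After deleting them, re-applying your corrected KRM resource will recreate the StatefulSet and its PVC (assuming the file is named krm.yaml, as elsewhere in this thread):

    kubectl apply -f krm.yaml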

From your other screenshot we can see that there are three nodes in the cluster. Two of the three node pods are in the ImagePullBackOff state, indicating that they were unable to pull the image, yet the third was successful. You will want to determine why one node has access to the image but the other two do not.
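
A sketch of how you might narrow that down, assuming the 'flexera' namespace used above: '-o wide' shows which node each pod was scheduled to, so you can see which node pulled the image successfully, and describing one of the failing pods will show the exact pull error in its events. <failing-node-pod> is a placeholder for one of the pod names from your screenshot:

    kubectl get pods -n flexera -o wide
    kubectl describe pod -n flexera <failing-node-pod>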

Hello,

Does that mean I have to create a storage class, deploy it, and then add it to the krm.yaml file?

The "...monitor" one is the PVC, isn't it? Does it satisfy all of the requirements (especially ReadWriteOnce)? I ask because I found a related error in the 'describe pvc' output.

About dynamic provisioning, I found this link https://jhooq.com/persistent-volume-no-volume-plugin-matched/ and it says that it "can not be done on the Local Kubernetes cluster if you are using Virtual Machine", which is my environment now.

Have you tried installing the k8s agent? Is it complicated, or do you just need to run generate.sh > install.sh > kubectl apply -f krm.yaml?

You will likely want to use one of the storage classes already configured in your cluster. Running 'kubectl get storageclass' will list those that are available. You provide a complete PersistentVolumeClaim spec in the KRM resource under the 'spec.monitor.storage' attribute, which commonly includes one of the storage classes provided by your cluster. You can read more about PVCs here: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims. The section on storage classes is particularly relevant. Once you've determined what storage classes are available and chosen one, you can generally search for some documentation on that storage class to find out how to troubleshoot issues with it.

Whether or not dynamic provisioning is available isn't directly related to whether your cluster is running on virtual machines; it is a property of a storage class. Each storage class either does or does not support dynamic provisioning, which is usually a consequence of the type of underlying storage provider it uses. Which storage classes can be successfully implemented in your cluster may be affected by the type of underlying machine (by virtue of what types of underlying storage providers are available), but most clusters provide some sort of storage class out of the box that supports dynamic provisioning.

Based on your concerns about virtual machines and the fact that some of the pods are failing with an image pull error while some are not, are you using minikube? If so, you can simply use the "standard" storage class and it will support dynamic provisioning. Something like this would suffice:

storageClassName: standard
resources:
  requests:
    storage: 3Gi
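
If you'd rather spell out the defaults than rely on them, the same spec with the access mode and volume mode from the requirements stated explicitly would look something like this (a sketch, still assuming minikube's 'standard' class):

storageClassName: standard
accessModes:
  - ReadWriteOnce
volumeMode: Filesystem
resources:
  requests:
    storage: 3Gi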



The installation process is just as you've said. The only challenging part is configuring the storage and ensuring that you have network communication with a beacon set up correctly. With those details worked out, it is just a matter of running install.sh, then either running generate.sh or manually writing a KRM resource (krm.yaml), and finally applying that KRM resource to the cluster with 'kubectl apply'.
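
In other words, under the assumptions in this thread (the scripts in the current directory, the KRM resource written to krm.yaml), the whole installation boils down to something like:

    ./install.sh
    ./generate.sh                # or write krm.yaml by hand
    kubectl apply -f krm.yaml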

I've redacted the beacon URL, but here's my basic KRM resource from krm.yaml:

apiVersion: agents.flexera.com/v1
kind: KRM
metadata:
  name: instance
spec:
  monitor:
    beaconURL: https://beacon.example.org
    storage:
      storageClassName: standard
      resources:
        requests:
          storage: 3Gi
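
Once that's applied, a reasonable sanity check (and a partial answer to your earlier question about knowing whether it works) is to confirm that the PVC reports Bound and that the pods reach Running, again assuming the 'flexera' namespace from the commands above:

    kubectl get pvc -n flexera
    kubectl get pods -n flexera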