
Prisma Cloud Compute: Crash loop back-off error on Console

Created On 11/11/19 23:31 - Last Modified 04/21/22 19:34


Symptom


Error messages
  1. Check the Twistlock pods inside your Kubernetes/OpenShift environment (see the sample output after this list):
$ kubectl get pods -n twistlock

Here 'twistlock' is the namespace in which the Twistlock pods are deployed.
  2. If using a UI, such as the OpenShift web console, a Crash Loop Back-off error message looks like this:
[Screenshot: Crash Loop Back-off error message]
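
For reference, a crash-looping Console pod in the kubectl output typically looks like the following; the pod name, restart count, and age are illustrative:

$ kubectl get pods -n twistlock
NAME                                READY   STATUS             RESTARTS   AGE
twistlock-console-7b9c8d6f4-x2kqp   0/1     CrashLoopBackOff   12         45m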


Environment


  • Self-Hosted 19.11 or later


Cause


"Crash Loop Back-off" error message is more relevant to orchestrator type deployment such as Kubernetes or Openshift. There can be many reasons for a pod to result in this state. In this article, we cover one of the reasons for this error message to appear on Console deployment - setting lower cgroup/cpu limits. When fewer than 100 Defenders are connected, Console requires 1GB of RAM and 10GB of storage. Our recommendation is to meet default system requirements and don’t set additional cpu or cgroup limits on Console. Frequent API queries and large data set processing can result in slowing down Console and in some cases cause OOM if lower limits are applied to it.

Resolution


Steps to confirm the issue
  1. oc / kubectl logs on the pod shows error messages like this (see also the kubectl describe fragment after the steps below):
Exit code 137: OOM killed
  2. Example YAML configuration that sets limits on CPU and memory:
$ kubectl describe pods -n twistlock

Look for the "limits" section in the YAML:

limits:
  - type: "Container"
    default:
      cpu: "500m"
      memory: "1024Mi"
    defaultRequest:
      cpu: "500m"
      memory: "1024Mi"

Troubleshooting steps

Remove custom limits from the Console deployment by completely removing the "limits" section from the Twistlock Console YAML. For example, in the following resources section, the "limits" block caps Console at 512 MiB of memory, below the 1 GB requirement:

resources: # See: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-requests-are-scheduled
  limits:
    memory: "512Mi"
    cpu: "900m"
  requests:
    cpu: "256m"


Additional Information


In OpenShift, resource limits can be scoped to projects. For more information, refer to this article from the OpenShift documentation: https://docs.openshift.com/container-platform/3.10/dev_guide/compute_resources.html

If this is a universal setting for all containers, you may want to check with your system administrator before making changes to it.
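
To check whether a project-level LimitRange is imposing such limits, commands like the following can be used; 'twistlock' is assumed here to be the project name:

$ oc get limitrange -n twistlock
$ oc describe limitrange <limitrange-name> -n twistlock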

