Prisma Cloud Compute error: Failed to initialize containerd client: failed to dial "/var/run/containerd/containerd.sock" on Daemonset Defenders Deployed on RKE Cluster
Created On 01/29/24 21:16 PM - Last Modified 02/10/25 22:12 PM
Symptom
Deploying the Defender DaemonSet on an RKE cluster fails with the error message:
Failed to initialize containerd client: failed to dial "/var/run/containerd/containerd.sock"
Environment
- Prisma Cloud Compute Self-Hosted Console
- Prisma Cloud Compute SaaS Console
- RKE (Rancher Kubernetes Engine) Cluster
Resolution
The following workaround deploys Defenders successfully on an RKE cluster. Make these changes before deploying the YAML:
- Go to Runtime Security / Compute > Manage > Defenders > Manual Deploy and choose the basic settings, such as containerd and Kubernetes
- Click Advanced Settings and locate the third input from the top, "Specify a custom container runtime socket path"
- Enter the runtime address /run/k3s/containerd/containerd.sock
- Download the generated YAML file and open it in an editor
- Find the volume mount named docker-sock-folder
- Change its mountPath to /var/run/containerd
- Save the changes and deploy the Defender using the edited YAML file.
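The manual edit in the last three steps can also be scripted. A minimal sketch, assuming the Console-generated file was saved as daemonset.yaml (the file name, the awk approach, and the pre-edit mountPath value shown in the fragment are illustrative assumptions, not part of the product — check your own file before running):

```shell
# Illustrative fragment of a Console-generated DaemonSet YAML; the
# pre-edit mountPath value here is an assumption. In practice, run the
# awk command below against the daemonset.yaml you downloaded.
cat > daemonset.yaml <<'EOF'
        volumeMounts:
        - name: data-folder
          mountPath: "/var/lib/twistlock"
        - name: docker-sock-folder
          mountPath: "/var/run"
EOF

# Rewrite only the mountPath that follows the docker-sock-folder entry,
# pointing it at /var/run/containerd as the steps above describe.
awk '
  /- name: docker-sock-folder/ { fix = 1 }
  fix && /mountPath:/          { sub(/mountPath:.*/, "mountPath: \"/var/run/containerd\""); fix = 0 }
  { print }
' daemonset.yaml > daemonset-rke.yaml

grep mountPath daemonset-rke.yaml
```

Deploy the result with `kubectl apply -f daemonset-rke.yaml` and confirm the pods start with `kubectl -n twistlock get pods`.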
Additional Information
A sample YAML file, for reference only:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: twistlock-view
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"] # Allow Defenders to list RBAC resources
  verbs: ["list"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"] # Allow Defenders to get Deployments and ReplicaSets
  verbs: ["get"]
- apiGroups: [""] # Core API
  resources: ["namespaces", "pods"] # Allow Defenders to get Namespaces and Pods
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: twistlock-view-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: twistlock-view
subjects:
- apiGroup: ""
  kind: ServiceAccount
  name: twistlock-service
  namespace: twistlock
---
apiVersion: v1
kind: Secret
metadata:
  name: twistlock-secrets
  namespace: twistlock
type: Opaque
data:
  service-parameter: -----
  defender-ca.pem: -----
  defender-client-cert.pem: -----
  defender-client-key.pem: -----
  admission-cert.pem: -----
  admission-key.pem: -----
---
apiVersion: v1
kind: ServiceAccount # Service Account is used for managing security context constraints policies in Openshift (SCC)
metadata:
  name: twistlock-service
  namespace: twistlock
secrets:
- name: twistlock-secrets
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: twistlock-defender-ds
  namespace: twistlock
spec:
  selector:
    matchLabels:
      app: twistlock-defender
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/twistlock-defender: unconfined
      labels:
        app: twistlock-defender
    spec:
      serviceAccountName: twistlock-service
      restartPolicy: Always
      containers:
      - name: twistlock-defender
        image: registry-auth.twistlock.com/tw_qcom6ed9wqj3v10fmrl6auirqhewps56/twistlock/defender:defender_32_00_161
        volumeMounts:
        - name: data-folder
          mountPath: "/var/lib/twistlock"
        - name: certificates # Setting the certificates mount after data-folder since it is nested and was overridden in CRI based GKE cluster
          mountPath: "/var/lib/twistlock/certificates"
        - name: docker-sock-folder
          mountPath: "/var/run/containerd"
        - name: passwd
          mountPath: "/etc/passwd"
          readOnly: true
        - name: syslog-socket
          mountPath: "/dev/log"
        - name: cri-data
          mountPath: /var/lib/containerd
        - name: runc-proxy-sock-folder
          mountPath: "/run"
        env:
        - name: WS_ADDRESS
          value: ---------
        - name: DEFENDER_TYPE
          value: cri
        - name: LOG_PROD
          value: "true"
        - name: SYSTEMD_ENABLED
          value: "false"
        - name: DOCKER_CLIENT_ADDRESS
          value: "/run/k3s/containerd/containerd.sock"
        - name: DEFENDER_CLUSTER_ID
          value: "7e053538-e15f-c023-1c02-21d9ccf59b83"
        - name: DEFENDER_CLUSTER
          value: ""
        - name: MONITOR_SERVICE_ACCOUNTS
          value: "true"
        - name: MONITOR_ISTIO
          value: "false"
        - name: COLLECT_POD_LABELS
          value: "true"
        - name: INSTALL_BUNDLE
          value: "---"
        - name: HOST_CUSTOM_COMPLIANCE_ENABLED
          value: "true"
        - name: CLOUD_HOSTNAME_ENABLED
          value: "true"
        - name: FIPS_ENABLED
          value: "false"
        securityContext:
          readOnlyRootFilesystem: true
          privileged: false
          capabilities:
            add:
            - NET_ADMIN # Required for process monitoring
            - NET_RAW # Required for iptables (CNNF, runtime DNS, WAAS). See: https://bugzilla.redhat.com/show_bug.cgi?id=1895032
            - SYS_ADMIN # Required for filesystem monitoring
            - SYS_PTRACE # Required for local audit monitoring
            - SYS_CHROOT # Required for changing mount namespace using setns
            - MKNOD # A capability to create special files using mknod(2), used by docker-less registry scanning
            - SETFCAP # A capability to set file capabilities, used by docker-less registry scanning
            - IPC_LOCK # Required for perf events monitoring, allowing to ignore memory lock limits
        resources: # See: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-requests-are-scheduled
          limits:
            memory: "512Mi"
            cpu: "900m"
          requests:
            cpu: "256m"
      volumes:
      - name: certificates
        secret:
          secretName: twistlock-secrets
          defaultMode: 256
      - name: syslog-socket
        hostPath:
          path: "/dev/log"
      - name: data-folder
        hostPath:
          path: "/var/lib/twistlock"
      - name: passwd
        hostPath:
          path: "/etc/passwd"
      - name: docker-sock-folder
        hostPath:
          path: "/run/k3s/containerd"
      - name: cri-data
        hostPath:
          path: /var/lib/containerd
      - name: runc-proxy-sock-folder
        hostPath:
          path: "/run"
      hostPID: true
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
---
apiVersion: v1
kind: Service # Expose the Defender as admission controller. Remark: by default, Defender will not listen on the service port
metadata:
  name: defender
  namespace: twistlock
  labels:
    app: twistlock-defender
spec:
  ports:
  - port: 443
    targetPort: 9998
  selector:
    app: twistlock-defender
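As a sanity check on why the workaround resolves the dial error: the socket path handed to the Defender must be reachable inside its container through a mounted host path. In the sample above, the DaemonSet mounts the host's /run at /run and the host's /run/k3s/containerd at /var/run/containerd, so both the configured DOCKER_CLIENT_ADDRESS and the default /var/run/containerd/containerd.sock path resolve. A minimal sketch of that check, with paths mirroring the sample YAML (the loop itself is illustrative, not a product tool):

```shell
# Container-side mount prefixes taken from the sample DaemonSet:
# hostPath /run -> /run, and hostPath /run/k3s/containerd -> /var/run/containerd.
sock='/run/k3s/containerd/containerd.sock'   # DOCKER_CLIENT_ADDRESS value
for mount in /run /var/run/containerd; do
  case "$sock" in
    "$mount"/*) echo "reachable via mount $mount" ;;   # prints "reachable via mount /run"
  esac
done
```

If neither prefix matched, the Defender would still fail to dial the socket, which is exactly the symptom this article addresses.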