Enable `kubectl logs`/`exec` to debug pods on the edge
Note for Helm deployments:
- Stream certificates are generated automatically and the CloudStream feature is enabled by default, so Steps 1-3 can be skipped unless customization is needed.
- Step 4 is handled by the iptablesmanager component by default, so no manual operations are needed. Refer to the cloudcore helm values.
- If CloudCore is deployed in a container (the default), the operations in Steps 5-6 can also be skipped.
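For reference, these defaults can be confirmed or overridden through the chart's values. The fragment below is illustrative only; the key names are assumptions based on the cloudcore chart layout, so check the chart's `values.yaml` for the authoritative keys:

```yaml
# Illustrative values fragment: keep CloudStream and the iptables manager
# enabled. Key names are assumed; verify against the cloudcore chart's values.yaml.
cloudCore:
  modules:
    cloudStream:
      enable: true
iptablesManager:
  enable: true
```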
1. Make sure you can find the Kubernetes `ca.crt` and `ca.key` files. If you set up your Kubernetes cluster with `kubeadm`, those files are in the `/etc/kubernetes/pki/` directory:

   ```shell
   ls /etc/kubernetes/pki/
   ```

2. Set the `CLOUDCOREIPS` environment variable to the IP address of CloudCore, or to a VIP if you have a highly available cluster. Set `CLOUDCORE_DOMAINS` instead if Kubernetes uses domain names to communicate with CloudCore.

   ```shell
   export CLOUDCOREIPS="192.168.0.139"
   ```

   (Warning: you must continue in the same terminal session, or export the variable again.) Check the environment variable with the following command:
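For a highly available cluster with several CloudCore instances, the same variable can hold every IP address. A small sketch (the addresses are illustrative):

```shell
# Space-separated list of CloudCore IPs for an HA setup (illustrative addresses).
export CLOUDCOREIPS="192.168.0.139 192.168.0.140"
echo $CLOUDCOREIPS
# prints: 192.168.0.139 192.168.0.140
```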
   ```shell
   echo $CLOUDCOREIPS
   ```

3. Generate the certificates for CloudStream on the cloud node. Since the generation script is not located in `/etc/kubeedge/`, copy it from the cloned GitHub repository.

   Switch to the root user:

   ```shell
   sudo su
   ```

   Copy the certificate generation script from the cloned repository:

   ```shell
   cp $GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh /etc/kubeedge/
   ```

   Change to the kubeedge directory:

   ```shell
   cd /etc/kubeedge/
   ```

   Generate the certificates with `certgen.sh`:

   ```shell
   /etc/kubeedge/certgen.sh stream
   ```

4. Set iptables rules on the host. (This procedure must be executed on every node where an api-server is deployed; in this case, that is the control-plane node. Execute these commands as the root user.)
   First, get the configmap containing all the CloudCore IPs and tunnel ports:

   ```shell
   kubectl get cm tunnelport -n kubeedge -o yaml
   ```

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     annotations:
       tunnelportrecord.kubeedge.io: '{"ipTunnelPort":{"192.168.1.16":10350, "192.168.1.17":10351},"port":{"10350":true, "10351":true}}'
     creationTimestamp: "2021-06-01T04:10:20Z"
   ...
   ```

   Then set iptables rules for the multiple CloudCore instances on every node where the api-server runs, using the CloudCore IPs and tunnel ports from the configmap above:
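Rather than copying IPs and ports by hand, the rules can be derived from the annotation JSON. A hedged sketch using `jq` (assuming `jq` is installed; in a live cluster the JSON would come from `kubectl get cm tunnelport -n kubeedge` rather than a literal):

```shell
# Sample annotation value from the configmap above (a live cluster would fetch
# it with kubectl; here it is inlined so the sketch is self-contained).
RECORD='{"ipTunnelPort":{"192.168.1.16":10350,"192.168.1.17":10351},"port":{"10350":true,"10351":true}}'

# Emit one DNAT rule per CloudCore instance, mapping its tunnel port to 10003.
echo "$RECORD" | jq -r '.ipTunnelPort | to_entries[]
  | "iptables -t nat -A OUTPUT -p tcp --dport \(.value) -j DNAT --to \(.key):10003"'
```

Review the printed rules before piping them to a shell as root.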
   ```shell
   iptables -t nat -A OUTPUT -p tcp --dport $YOUR-TUNNEL-PORT -j DNAT --to $YOUR-CLOUDCORE-IP:10003
   ```

   For example:

   ```shell
   iptables -t nat -A OUTPUT -p tcp --dport 10350 -j DNAT --to 192.168.1.16:10003
   iptables -t nat -A OUTPUT -p tcp --dport 10351 -j DNAT --to 192.168.1.17:10003
   ```

   If you are unsure about the current iptables settings and want to clean all of them, use the following command. (If you set up iptables incorrectly, it will block your `kubectl logs` feature.)

   ```shell
   iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
   ```

5. Update the `cloudcore` configuration to enable `cloudStream`. (Recent versions enable this feature by default in the cloud, so this configuration can be skipped.)

   If `cloudcore` is installed as a binary, directly edit `/etc/kubeedge/config/cloudcore.yaml` with an editor. If `cloudcore` is running as a Kubernetes deployment, use `kubectl edit cm -n kubeedge cloudcore` to update its ConfigMap.

   ```yaml
   cloudStream:
     enable: true
     streamPort: 10003
     tlsStreamCAFile: /etc/kubeedge/ca/streamCA.crt
     tlsStreamCertFile: /etc/kubeedge/certs/stream.crt
     tlsStreamPrivateKeyFile: /etc/kubeedge/certs/stream.key
     tlsTunnelCAFile: /etc/kubeedge/ca/rootCA.crt
     tlsTunnelCertFile: /etc/kubeedge/certs/server.crt
     tlsTunnelPrivateKeyFile: /etc/kubeedge/certs/server.key
     tunnelPort: 10004
   ```

6. Update the `edgecore` configuration to enable `edgeStream`. This modification needs to be done on every edge node where `edgecore` runs, by updating `/etc/kubeedge/config/edgecore.yaml`. Make sure the `server` field is set to the CloudCore IP (the same as `$CLOUDCOREIPS`).

   ```yaml
   edgeStream:
     enable: true
     handshakeTimeout: 30
     readDeadline: 15
     server: 192.168.0.139:10004
     tlsTunnelCAFile: /etc/kubeedge/ca/rootCA.crt
     tlsTunnelCertFile: /etc/kubeedge/certs/server.crt
     tlsTunnelPrivateKeyFile: /etc/kubeedge/certs/server.key
     writeDeadline: 15
   ```

7. Restart CloudCore and EdgeCore to apply the stream configuration.
   ```shell
   sudo su
   ```

   If CloudCore is running in process (binary) mode:

   ```shell
   pkill cloudcore
   nohup cloudcore > cloudcore.log 2>&1 &
   ```

   If CloudCore is running as a Kubernetes deployment:

   ```shell
   kubectl -n kubeedge rollout restart deployment cloudcore
   ```

   Restart EdgeCore:

   ```shell
   systemctl restart edgecore.service
   ```

   If restarting EdgeCore fails, check whether the failure is caused by `kube-proxy`, and kill it if so. KubeEdge rejects `kube-proxy` by default and uses EdgeMesh as a replacement.

Note: It is important to avoid `kube-proxy` being deployed on the edge node. There are two methods to achieve this:

- Method 1: Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:
  ```yaml
  spec:
    template:
      spec:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: node-role.kubernetes.io/edge
                  operator: DoesNotExist
  ```

  Or just run the following command directly in the shell:

  ```shell
  kubectl patch daemonset kube-proxy -n kube-system -p '{"spec": {"template": {"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key": "node-role.kubernetes.io/edge", "operator": "DoesNotExist"}]}]}}}}}}}'
  ```

- Method 2: If you still want to run `kube-proxy`, instruct `edgecore` not to check the environment by adding an environment variable in `edgecore.service`:
  ```shell
  sudo vi /etc/kubeedge/edgecore.service
  ```

  Add the following line into the `edgecore.service` file:

  ```
  Environment="CHECK_EDGECORE_ENVIRONMENT=false"
  ```

  The final file should look like this:
  ```
  Description=edgecore.service

  [Service]
  Type=simple
  ExecStart=/root/cmd/ke/edgecore --logtostderr=false --log-file=/root/cmd/ke/edgecore.log
  Environment="CHECK_EDGECORE_ENVIRONMENT=false"

  [Install]
  WantedBy=multi-user.target
  ```
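The hand edit above can also be scripted. A sketch that inserts the override directly under the `[Service]` header, shown against a throwaway copy of the unit file (the real path is `/etc/kubeedge/edgecore.service`):

```shell
# Build a throwaway copy of the unit file so the sketch is self-contained.
cat > /tmp/edgecore.service <<'EOF'
Description=edgecore.service

[Service]
Type=simple
ExecStart=/root/cmd/ke/edgecore --logtostderr=false --log-file=/root/cmd/ke/edgecore.log

[Install]
WantedBy=multi-user.target
EOF

# Insert the environment override right after the [Service] section header
# (GNU sed 'a' appends a line after each match).
sed -i '/^\[Service\]$/a Environment="CHECK_EDGECORE_ENVIRONMENT=false"' /tmp/edgecore.service

grep CHECK_EDGECORE_ENVIRONMENT /tmp/edgecore.service
# prints: Environment="CHECK_EDGECORE_ENVIRONMENT=false"
```

After changing the real unit file, reload systemd (`sudo systemctl daemon-reload`) before restarting `edgecore.service`.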