GKE container killed by "Memory cgroup out of memory", but monitoring, local testing and pprof all show usage well below the limit
I recently pushed a new container image to one of my GKE deployments and noticed that API latency went up and requests started returning 502s.
Looking at the logs, I found that the container had started crashing because of an OOM:
Memory cgroup out of memory: Killed process 2774370 (main) total-vm:1801348kB, anon-rss:1043688kB, file-rss:12884kB, shmem-rss:0kB, UID:0 pgtables:2236kB oom_score_adj:980
Looking at the memory usage graph, it doesn't look like the pods were using more than 50MB of memory. My original resource requests were:
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
      - name: api-server
        ...
        resources:
          # You must specify requests for CPU to autoscale
          # based on CPU utilization
          requests:
            cpu: "150m"
            memory: "80Mi"
          limits:
            cpu: "1"
            memory: "1024Mi"
      - name: cloud-sql-proxy
        # It is recommended to use the latest version of the Cloud SQL proxy
        # Make sure to update on a regular schedule!
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        resources:
          # You must specify requests for CPU to autoscale
          # based on CPU utilization
          requests:
            cpu: "100m"
          ...
Then I tried bumping the request for the API server to 1GB, but that did not help. In the end, what did help was rolling the container image back to the previous version:
Looking through the changes in the Go binary, there is no obvious memory leak. When I run it locally it uses at most 80MB of memory, even under load generated from the same requests it serves in production.
The graph above, which I got from the GKE console, also shows the pod using far less memory than the 1GB limit.
So my question is: what could cause GKE to OOM-kill my process when both GKE monitoring and running it locally show it using only 80MB out of a 1GB limit?
=== EDIT ===
Adding another graph from the same outage, this time split between the two containers in the pod. If I understand correctly, the metric here is non-evictable container/memory/used_bytes:
container/memory/used_bytes GA
Memory usage
GAUGE, INT64, By
k8s_container Memory usage in bytes. Sampled every 60 seconds.
memory_type: Either `evictable` or `non-evictable`. Evictable memory is memory that can be easily reclaimed by the kernel, while non-evictable memory cannot.
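(Side note: since this metric is only sampled every 60 seconds, a short enough spike could in principle slip between samples. One way to cross-check what the memory cgroup itself reports, independent of the sampling interval, is to read the cgroup accounting files from inside the container. The sketch below assumes cgroup v1, i.e. that /sys/fs/cgroup/memory/memory.usage_in_bytes exists inside the container; it is a standalone check, not part of my server.)

// cgroupmem.go - print what the memory cgroup reports for this container,
// as a cross-check against the 60-second-sampled GKE metric.
// Assumes cgroup v1; these files do not exist under cgroup v2.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func readBytes(path string) (uint64, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(raw)), 10, 64)
}

func main() {
	used, err := readBytes("/sys/fs/cgroup/memory/memory.usage_in_bytes")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read usage:", err)
		os.Exit(1)
	}
	limit, err := readBytes("/sys/fs/cgroup/memory/memory.limit_in_bytes")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read limit:", err)
		os.Exit(1)
	}
	fmt.Printf("cgroup memory: %d used / %d limit bytes (%.1f%%)\n",
		used, limit, float64(used)/float64(limit)*100)
}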
Edit April 26, 2021
I tried updating the resources field in the deployment yaml to 1GB of RAM requested and a 1GB RAM limit, as Paul and Ryan suggested:
resources:
  # You must specify requests for CPU to autoscale
  # based on CPU utilization
  requests:
    cpu: "150m"
    memory: "1024Mi"
  limits:
    cpu: "1"
    memory: "1024Mi"
Unfortunately the result is the same after updating with kubectl apply -f api_server_deployment.yaml:
{
insertId: "yyq7u3g2sy7f00"
jsonPayload: {
apiVersion: "v1"
eventTime: null
involvedObject: {
kind: "Node"
name: "gke-api-us-central-1-e2-highcpu-4-nod-dfe5c3a6-c0jy"
uid: "gke-api-us-central-1-e2-highcpu-4-nod-dfe5c3a6-c0jy"
}
kind: "Event"
message: "Memory cgroup out of memory: Killed process 1707107 (main) total-vm:1801412kB, anon-rss:1043284kB, file-rss:9732kB, shmem-rss:0kB, UID:0 pgtables:2224kB oom_score_adj:741"
metadata: {
creationTimestamp: "2021-04-26T23:13:13Z"
managedFields: [
0: {
apiVersion: "v1"
fieldsType: "FieldsV1"
fieldsV1: {
f:count: {
}
f:firstTimestamp: {
}
f:involvedObject: {
f:kind: {
}
f:name: {
}
f:uid: {
}
}
f:lastTimestamp: {
}
f:message: {
}
f:reason: {
}
f:source: {
f:component: {
}
f:host: {
}
}
f:type: {
}
}
manager: "node-problem-detector"
operation: "Update"
time: "2021-04-26T23:13:13Z"
}
]
name: "gke-api-us-central-1-e2-highcpu-4-nod-dfe5c3a6-c0jy.16798b61e3b76ec7"
namespace: "default"
resourceVersion: "156359"
selfLink: "/api/v1/namespaces/default/events/gke-api-us-central-1-e2-highcpu-4-nod-dfe5c3a6-c0jy.16798b61e3b76ec7"
uid: "da2ad319-3f86-4ec7-8467-e7523c9eff1c"
}
reason: "OOMKilling"
reportingComponent: ""
reportingInstance: ""
source: {
component: "kernel-monitor"
host: "gke-api-us-central-1-e2-highcpu-4-nod-dfe5c3a6-c0jy"
}
type: "Warning"
}
logName: "projects/questions-279902/logs/events"
receiveTimestamp: "2021-04-26T23:13:16.918764734Z"
resource: {
labels: {
cluster_name: "api-us-central-1"
location: "us-central1-a"
node_name: "gke-api-us-central-1-e2-highcpu-4-nod-dfe5c3a6-c0jy"
project_id: "questions-279902"
}
type: "k8s_node"
}
severity: "WARNING"
timestamp: "2021-04-26T23:13:13Z"
}
Kubernetes seems to have killed the container for using 1GB of memory almost immediately. But again, the metrics show the container using only 2MB of memory:
And again I am stumped, because even under load this binary does not use more than 80MB when I run it locally.
I also tried running go tool pprof <url>/debug/pprof/heap. It showed several different values as Kubernetes kept churning the container, but never more than ~20MB, and nothing that looked like unusual memory usage.
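(For reference, the heap endpoint above is the standard net/http/pprof handler; the wiring in the server looks roughly like the simplified sketch below, not the exact production code. The :6060 port is just for illustration.)

// Simplified sketch of how the pprof endpoints are exposed; the real server
// registers its API handlers alongside these.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
)

func main() {
	// With the blank import above, the heap profile is served at
	// /debug/pprof/heap and can be inspected with:
	//   go tool pprof http://<host>:6060/debug/pprof/heap
	log.Fatal(http.ListenAndServe(":6060", nil))
}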
Edit 04/27
I tried setting request = limit for both containers in the pod:
requests:
  cpu: "1"
  memory: "1024Mi"
limits:
  cpu: "1"
  memory: "1024Mi"
...
requests:
  cpu: "100m"
  memory: "200Mi"
limits:
  cpu: "100m"
  memory: "200Mi"
But that did not work either:
Memory cgroup out of memory: Killed process 2662217 (main) total-vm:1800900kB, anon-rss:1042888kB, file-rss:10384kB, shmem-rss:0kB, UID:0 pgtables:2224kB oom_score_adj:-998
The memory metrics still show usage in the single-digit MBs.
Update 04/30
I pinned down the change that seems to cause this issue by painstakingly checking my latest commits one by one.
In the offending commit I had a couple of lines like
type Pic struct {
	image.Image
	Proto *pb.Image
}
...
pic.Image = picture.Resize(pic, sz.Height, sz.Width)
...
where picture.Resize eventually calls resize.Resize. I changed it to:
type Pic struct {
	Img   image.Image
	Proto *pb.Image
}
...
pic.Img = picture.Resize(pic.Img, sz.Height, sz.Width)
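(To be clear about why the first version even compiles: because image.Image is embedded, Pic itself satisfies the image.Image interface through method promotion, so passing pic where an image.Image is expected is legal, and the result then gets assigned back into the very field the wrapper promotes its methods from. The snippet below is a minimal, self-contained illustration of that; resizeLike is a hypothetical stand-in for picture.Resize, not the real function.)

package main

import (
	"fmt"
	"image"
)

// Pic embeds image.Image, so Pic itself satisfies image.Image via the
// promoted methods of the embedded value.
type Pic struct {
	image.Image
}

// resizeLike is a hypothetical stand-in for picture.Resize: all it needs
// is something satisfying image.Image.
func resizeLike(img image.Image) image.Image {
	return img // a real implementation would return a new, resized image
}

func main() {
	pic := Pic{Image: image.NewRGBA(image.Rect(0, 0, 100, 100))}

	// The offending version: the whole wrapper is passed, and the result
	// is assigned back into the embedded field it wraps.
	pic.Image = resizeLike(pic)

	// The fixed version only ever passes the inner image:
	//   pic.Img = picture.Resize(pic.Img, sz.Height, sz.Width)
	fmt.Println(pic.Bounds()) // Bounds() is promoted from the embedded Image
}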
This solved my immediate problem, and the container now runs fine. But it does not answer my original questions:
- Why did these lines cause GKE to OOM-kill my container?
- Why did the GKE memory metrics show that everything was fine?