
k8s multi-dimensional resource isolation and limits (namespace, LimitRange, ResourceQuota)


The first resource-isolation mechanism that comes to mind is the namespace — I don't know whether that's everyone's first reaction, but it's certainly mine, haha.

The magic of the namespace

Imagine each project team owning its own namespace and operating only on the resource objects inside it — wouldn't that be very convenient to manage? That's roughly the idea. Let's run a few simple tests to verify this intuition. Let's go!

Testing namespace resource isolation

Create a new namespace

[root@k8s-master1 ~]# kubectl create namespace dev
namespace/dev created
[root@k8s-master1 ~]# kubectl get namespace -A
NAME                   STATUS   AGE
default                Active   4d13h
dev                    Active   2m51s
kube-node-lease        Active   4d13h
kube-public            Active   4d13h
kube-system            Active   4d13h
kubernetes-dashboard   Active   4d11h

Create two different Pod applications in the current (default) namespace

[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor-login
[root@k8s-master1 ~]# cat deployment-tomcat.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat
  name: tomcat
spec:
  replicas: 4
  selector:
    matchLabels:
      app: tomcat
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/tomcat
        name: tomcat
        resources: {}
      imagePullSecrets:
      - name: harbor-login
status: {}
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml 
deployment.apps/nginx created
[root@k8s-master1 ~]# kubectl apply -f deployment-tomcat.yaml 
deployment.apps/tomcat created
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
nginx-5896cbffc6-4vcdv    1/1     Running   0          11m     10.244.169.159   k8s-node2     <none>           <none>
nginx-5896cbffc6-jfzsz    1/1     Running   0          11m     10.244.159.163   k8s-master1   <none>           <none>
nginx-5896cbffc6-n7n2x    1/1     Running   0          11m     10.244.36.90     k8s-node1     <none>           <none>
nginx-5896cbffc6-rt9h7    1/1     Running   0          11m     10.244.36.91     k8s-node1     <none>           <none>
tomcat-596db6d496-l4dvs   1/1     Running   0          11m     10.244.169.160   k8s-node2     <none>           <none>
tomcat-596db6d496-nhc4c   1/1     Running   0          11m     10.244.159.164   k8s-master1   <none>           <none>
tomcat-596db6d496-qfkwr   1/1     Running   0          11m     10.244.36.92     k8s-node1     <none>           <none>
tomcat-596db6d496-zd8dp   1/1     Running   0          8m27s   10.244.36.94     k8s-node1     <none>           <none>

Test whether Pods within the same namespace can reach each other

Strictly speaking this needs no testing — they normally can reach each other — but for demonstration's sake let's test it anyway and verify the isolation behavior.

Enter a Pod and access the other application by name and by IP address

[root@k8s-master1 ~]# kubectl exec -it tomcat-596db6d496-zd8dp /bin/bash    ### exec into the tomcat container
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@tomcat-596db6d496-zd8dp:/usr/local/tomcat# curl nginx        ## curl the name nginx — it is reachable
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@tomcat-596db6d496-zd8dp:/usr/local/tomcat# curl 10.244.169.159    ### access by IP address also works
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@tomcat-596db6d496-zd8dp:/usr/local/tomcat# exit
exit
  • Conclusion: Pods in the same namespace can reach each other by name (strictly, the name resolves through the cluster DNS via a Service of that name, as noted in the sketch below) and by IP
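
A note here: for curl nginx to resolve at all, a Service named nginx has to exist in the namespace — the cluster DNS resolves Service names, not Deployment names. If you follow along and the name does not resolve, a minimal hedged sketch of exposing the Deployment (assuming the containers listen on port 80):

### hypothetical: create a ClusterIP Service named "nginx" in front of the Deployment
kubectl expose deployment nginx --port=80 --target-port=80 -n default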

Test whether Pods in different namespaces can reach each other

Create a Pod in the dev namespace

### reuse the deployment-nginx.yaml file from above; it creates successfully, which shows that Deployments with the same name can exist in different namespaces
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml -n dev
deployment.apps/nginx created
[root@k8s-master1 ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5896cbffc6-52rql   1/1     Running   0          8s
nginx-5896cbffc6-8znq5   1/1     Running   0          8s
nginx-5896cbffc6-fvdbf   1/1     Running   0          8s
nginx-5896cbffc6-jrppp   1/1     Running   0          8s
[root@k8s-master1 ~]# kubectl get pod -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS       AGE
default                nginx-5896cbffc6-4vcdv                       1/1     Running   0              22m
default                nginx-5896cbffc6-jfzsz                       1/1     Running   0              22m
default                nginx-5896cbffc6-n7n2x                       1/1     Running   0              22m
default                nginx-5896cbffc6-rt9h7                       1/1     Running   0              22m
default                tomcat-596db6d496-l4dvs                      1/1     Running   0              22m
default                tomcat-596db6d496-nhc4c                      1/1     Running   0              22m
default                tomcat-596db6d496-qfkwr                      1/1     Running   0              22m
default                tomcat-596db6d496-zd8dp                      1/1     Running   0              19m
dev                    nginx-5896cbffc6-52rql                       1/1     Running   0              27s
dev                    nginx-5896cbffc6-8znq5                       1/1     Running   0              27s
dev                    nginx-5896cbffc6-fvdbf                       1/1     Running   0              27s
dev                    nginx-5896cbffc6-jrppp                       1/1     Running   0              27s
kube-system            calico-kube-controllers-8db96c76-w4njz       1/1     Running   1 (21h ago)    3d13h
kube-system            calico-node-lxfbc                            1/1     Running   1 (21h ago)    3d13h
kube-system            calico-node-v2fbv                            1/1     Running   1 (21h ago)    3d13h
kube-system            calico-node-vd6v2                            1/1     Running   19 (21h ago)   3d13h
kube-system            calico-node-xvqmt                            1/1     Running   1 (21h ago)    3d13h
kube-system            coredns-7b9f9f5dfd-6fhwh                     1/1     Running   2 (21h ago)    4d11h
kubernetes-dashboard   dashboard-metrics-scraper-5594697f48-kw9zt   1/1     Running   3 (21h ago)    4d12h
kubernetes-dashboard   kubernetes-dashboard-686cc7c688-cn92d        1/1     Running   2 (21h ago)    38h

Delete the nginx Deployment in the default namespace

This is so that, when we curl in a moment, there is no ambiguity about which namespace's nginx we are reaching.

[root@k8s-master1 ~]# kubectl delete -f deployment-nginx.yaml 
deployment.apps "nginx" deleted
[root@k8s-master1 ~]# kubectl get pod -n default
NAME                      READY   STATUS    RESTARTS   AGE
tomcat-596db6d496-l4dvs   1/1     Running   0          27m
tomcat-596db6d496-nhc4c   1/1     Running   0          27m
tomcat-596db6d496-qfkwr   1/1     Running   0          27m
tomcat-596db6d496-zd8dp   1/1     Running   0          24m

Curl resources in the dev namespace from the default namespace

[root@k8s-master1 ~]# kubectl get pod -n dev -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
nginx-5896cbffc6-52rql   1/1     Running   0          3m11s   10.244.224.24    k8s-master2   <none>           <none>
nginx-5896cbffc6-8znq5   1/1     Running   0          3m11s   10.244.169.163   k8s-node2     <none>           <none>
nginx-5896cbffc6-fvdbf   1/1     Running   0          3m11s   10.244.159.165   k8s-master1   <none>           <none>
nginx-5896cbffc6-jrppp   1/1     Running   0          3m11s   10.244.224.23    k8s-master2   <none>           <none>
[root@k8s-master1 ~]# kubectl get pod -n default
NAME                      READY   STATUS    RESTARTS   AGE
tomcat-596db6d496-l4dvs   1/1     Running   0          27m
tomcat-596db6d496-nhc4c   1/1     Running   0          27m
tomcat-596db6d496-qfkwr   1/1     Running   0          27m
tomcat-596db6d496-zd8dp   1/1     Running   0          24m
[root@k8s-master1 ~]# kubectl exec -it tomcat-596db6d496-l4dvs /bin/bash       ### exec into a container in the default namespace
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@tomcat-596db6d496-l4dvs:/usr/local/tomcat# curl nginx    ### curl the nginx that now exists only in dev — not reachable by bare name

^C
root@tomcat-596db6d496-l4dvs:/usr/local/tomcat# curl 10.244.224.24   ### but access by IP succeeds normally
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
  • Conclusion 1: resources with the same name can exist in different namespaces
  • Conclusion 2: names are isolated between namespaces (a bare name resolves only within its own namespace), but the IP addresses remain routable across namespaces
  • Conclusion 3: the namespace isolates only names, not IP addresses — which actually adds flexibility (see the sketch below for crossing namespaces by fully qualified name)
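
One more detail: the name isolation applies only to bare names. A fully qualified Service name still crosses namespaces — a hedged sketch, assuming a Service named nginx exists in dev and the default cluster domain cluster.local:

### hypothetical: reach the dev namespace's nginx Service from any namespace
curl nginx.dev                        ### <service>.<namespace> form
curl nginx.dev.svc.cluster.local      ### fully qualified form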

Applying resource limits and management to Pods

  • Requests: the resources a Pod expects to be allocated in order to run normally; the scheduler factors the requests values into its scheduling decisions
  • Limits: the hard upper bound on resource usage; when resource contention occurs on a Node, the kubelet also uses this value when adjusting and reclaiming resources
### clean up the containers used for the earlier tests
[root@k8s-master1 ~]# kubectl delete -f deployment-nginx.yaml -n dev
deployment.apps "nginx" deleted
[root@k8s-master1 ~]# kubectl delete -f deployment-tomcat.yaml 
deployment.apps "tomcat" deleted
[root@k8s-master1 ~]# kubectl get pod
No resources found in default namespace.

Testing the effect of Pod resource limits

Edit the YAML file and add the limit parameters

[root@k8s-master1 ~]# vi deployment-nginx.yaml
[root@k8s-master1 ~]# cat deployment-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 100Mi
            cpu: 200m
          limits:
            memory: 200Mi
            cpu: 500m
      imagePullSecrets:
      - name: harbor-login

Check the resources remaining on the machines

### e.g. check usage on the node1 node
[root@k8s-master1 ~]# kubectl describe node k8s-node1
Name:               k8s-node1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
 *********** omitted *******************************
kubelet is posting ready status
Addresses:
  InternalIP:  10.245.4.3
  Hostname:    k8s-node1
Capacity:      #### the node's total capacity
  cpu:                2     ### number of CPUs
  ephemeral-storage:  17394Mi     ### disk size
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1865308Ki     ### memory size
  pods:               110     ### max Pod capacity; note this does not mean 110 Pods will actually fit — that depends on real resources — but the count can never exceed this
Allocatable:    ##### resources available for allocation
  cpu:                2      ### allocatable CPU
  ephemeral-storage:  16415037823     ### allocatable disk
  hugepages-1Gi:      0         #### huge pages
  hugepages-2Mi:      0         #### huge pages
  memory:             1762908Ki    ## allocatable memory
  pods:               110
System Info:
  Machine ID:                 5b2e42d65e1c4fd99f8c541538c1c268
  System UUID:                B3BA4D56-BECB-1461-0470-BFF8A761EEDA
  Boot ID:                    1da3d5ad-e770-4c08-a029-a94703742702
  Kernel Version:             3.10.0-862.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.9
  Kubelet Version:            v1.22.4
  Kube-Proxy Version:         v1.22.4
PodCIDR:                      10.244.2.0/24
PodCIDRs:                     10.244.2.0/24
Non-terminated Pods:          (2 in total)
  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-node-f7gqq           250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m   ### shows the resources each pod requests/limits
  kube-system                 coredns-7b9f9f5dfd-9f8lm    100m (5%)     0 (0%)      70Mi (4%)        512Mi (29%)    10m  ### shows the resources each pod requests/limits
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                350m (17%)  0 (0%)        #### 17% of CPU already requested
  memory             70Mi (4%)   512Mi (29%)    #### 4% of memory requested; one pod has a 512Mi memory limit
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
[root@k8s-master1 ~]# 
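
If you only want the raw numbers without the long describe output, a jsonpath query does it in one line — a minimal sketch:

### print only the allocatable resources of k8s-node1
kubectl get node k8s-node1 -o jsonpath='{.status.allocatable}'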

Create the resource-limited Pod application

[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml 
deployment.apps/nginx created
[root@k8s-master1 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-66c78dc668-cvjtk   1/1     Running   0          58s
nginx-66c78dc668-lltrd   1/1     Running   0          58s
nginx-66c78dc668-n2nrf   1/1     Running   0          58s
nginx-66c78dc668-qjn5b   1/1     Running   0          58s


#### now change the resource limits and watch what happens to the Pods
#### raise the limits memory to 500Mi
[root@k8s-master1 ~]# vi deployment-nginx.yaml
[root@k8s-master1 ~]# cat deployment-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 100Mi
            cpu: 200m
          limits:
            memory: 500Mi
            cpu: 500m
      imagePullSecrets:
      - name: harbor-login
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml 
deployment.apps/nginx configured
[root@k8s-master1 ~]# kubectl get pod
NAME                     READY   STATUS        RESTARTS   AGE
nginx-9b46cf8dd-475vw    1/1     Running       0          4s      ### look at AGE — the Pods have just been recreated. Why?
nginx-9b46cf8dd-gqp9q    1/1     Running       0          2s
nginx-9b46cf8dd-qdrg7    1/1     Running       0          2s
nginx-9b46cf8dd-tc2dg    1/1     Running       0          4s

Working out what exactly happened to the Pods after the resource change

### first, see which node each pod landed on
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
nginx-9b46cf8dd-475vw   1/1     Running   0          2m35s   10.244.159.170   k8s-master1   <none>           <none>  ## take this one as the example
nginx-9b46cf8dd-gqp9q   1/1     Running   0          2m33s   10.244.224.28    k8s-master2   <none>           <none>
nginx-9b46cf8dd-qdrg7   1/1     Running   0          2m33s   10.244.169.168   k8s-node2     <none>           <none>
nginx-9b46cf8dd-tc2dg   1/1     Running   0          2m35s   10.244.36.100    k8s-node1     <none>           <none>

## nginx-9b46cf8dd-475vw is running on master1
#### go to that node (master1)

### under the hood our containers are started by docker; when a Pod's resource configuration changes, a new container has to be created to apply the new limits — that is why the Pods were recreated
[root@k8s-master1 ~]# docker ps -a | grep nginx
7b53d10a8e27        605c77e624dd                  "/docker-entrypoint.…"   5 minutes ago       Up 5 minutes                                    k8s_nginx_nginx-9b46cf8dd-475vw_default_240f3c7f-4a9b-441c-9fd7-49bbc9b9d960_0
******************* many more, omitted *****************************
[root@k8s-master1 ~]# docker inspect 7b53d10a8e27
[
    {
        "Id": "7b53d10a8e27d2c78312692fafb9afd22717630148e45edfc36b9041521f890f",
        "Created": "2022-12-22T13:26:59.756903714Z",
        "Path": "/docker-entrypoint.sh",
******************** omitted *********************************************************************
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 204,           #######cpu共享,因为咱们的yaml文件里面配置的是200m,也就是200/1000*1024=204.8
            ###CpuShares,这个值是一个权重值,当资源发生争抢时,docker会按照这个值和其他的容器的这个值的比例来合理调整可用资源的大小,
            ####比如别人是2,这个是1,2:1的比例,当还有6个资源可用的时候,6*1/3=2个,这个就可以分到2个
            "Memory": 524288000,   ###524288000/1024/1024=500M   #为我们limits的内存
            "NanoCpus": 0,
            "CgroupParent": "/kubepods/burstable/pod240f3c7f-4a9b-441c-9fd7-49bbc9b9d960",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 100000,   ####docker的默认值,和下面的CpuQuota一起使用,代表着100毫秒最多分配给容器500m
            "CpuQuota": 50000,    ####是我们在linits做的设置,50000/1000=500m
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            *************************** omitted ***************************************
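
These docker settings are ultimately just cgroup v1 values, so the same numbers can be read straight off the node's cgroup filesystem. A hedged sketch — the path assumes the cgroupfs driver plus the CgroupParent and full container ID shown in the inspect output above:

### read the same limits from the cgroup hierarchy on the node (path assumed as above)
CG=/sys/fs/cgroup/cpu/kubepods/burstable/pod240f3c7f-4a9b-441c-9fd7-49bbc9b9d960/7b53d10a8e27d2c78312692fafb9afd22717630148e45edfc36b9041521f890f
cat $CG/cpu.shares          ### expect 204
cat $CG/cpu.cfs_quota_us    ### expect 50000
cat $CG/cpu.cfs_period_us   ### expect 100000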

Test what happens when container memory reaches the limits ceiling

Still using the container above, we simulate memory reaching the ceiling
[root@k8s-master1 ~]# docker exec -it  7b53d10a8e27 /bin/bash
root@nginx-9b46cf8dd-475vw:/# cat >> test.sh << EOF
> a="dadasdfsafdsafdsfsdafsdf"
> while 2>1
> do  
>   a="\$a\$a"
>   sleep 0.1
> done
> EOF
root@nginx-9b46cf8dd-475vw:/# cat test.sh 
a="dadasdfsafdsafdsfsdafsdf"
while 2>1
do
  a="$a$a"
  sleep 0.1
done
root@nginx-9b46cf8dd-475vw:/# sh test.sh 
Killed      ### our process gets killed
[root@k8s-master1 ~]# docker ps -a | grep 7b53d10a8e27     
#### but the container is still up, not restarted — the OOM killer kills the process using the most memory
7b53d10a8e27        605c77e624dd                  "/docker-entrypoint.…"   42 minutes ago      Up 42 minutes                                      k8s_nginx_nginx-9b46cf8dd-475vw_default_240f3c7f-4a9b-441c-9fd7-49bbc9b9d960_0
  • Conclusion: when memory hits the limit, the process consuming the most memory inside the container is killed (by the kernel's cgroup OOM killer, not by k8s itself)
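
Since the kill comes from the kernel's cgroup OOM killer, it leaves a trace in the node's kernel log — a hedged sketch for confirming what happened:

### look for the OOM kill record on the node
dmesg | grep -i "memory cgroup out of memory"
### if the victim had been the container's PID 1, the restart count would rise instead:
kubectl get pod nginx-9b46cf8dd-475vw -o jsonpath='{.status.containerStatuses[0].restartCount}'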

Test what happens when container CPU reaches the limits ceiling

### adjust the deployed application
[root@k8s-master1 ~]# vi deployment-nginx.yaml 
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 1000Mi
            cpu: 200m
          limits:
            memory: 5000Mi
            cpu: 500m     ### cpu limited to 500m
      imagePullSecrets:
      - name: harbor-login
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml 
deployment.apps/nginx configured
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
nginx-556585848b-4ds7x   1/1     Running   0          9m27s   10.244.159.173   k8s-master1   <none>           <none>
nginx-556585848b-8b7dt   1/1     Running   0          9m26s   10.244.169.170   k8s-node2     <none>           <none>
nginx-556585848b-gszt5   1/1     Running   0          9m25s   10.244.224.30    k8s-master2   <none>           <none>
nginx-556585848b-ksbds   1/1     Running   0          9m27s   10.244.36.102    k8s-node1     <none>           <none>
[root@k8s-master1 ~]# docker ps -a | grep nginx   ### find the corresponding container on the master node
d153182a5410        605c77e624dd                  "/docker-entrypoint.…"   9 minutes ago       Up 9 minutes                                 k8s_nginx_nginx-556585848b-4ds7x_default_ee2915cc-b347-4187-b1b9-4e3f1ddd83ba_0
***************** omitted ******************
#### exec into the container and simulate CPU load
[root@k8s-master1 ~]# docker exec -it d153182a5410 /bin/bash
root@nginx-556585848b-4ds7x:/# dd if=/dev/zero of=/dev/null &     #### this is a great CPU burner; let's exaggerate and launch 8 of them
[1] 38
root@nginx-556585848b-4ds7x:/# dd if=/dev/zero of=/dev/null &
[2] 39
root@nginx-556585848b-4ds7x:/# dd if=/dev/zero of=/dev/null &
[3] 40
root@nginx-556585848b-4ds7x:/# dd if=/dev/zero of=/dev/null &
[4] 41
root@nginx-556585848b-4ds7x:/# dd if=/dev/zero of=/dev/null &
[5] 42
root@nginx-556585848b-4ds7x:/# dd if=/dev/zero of=/dev/null &
[6] 43
root@nginx-556585848b-4ds7x:/# dd if=/dev/zero of=/dev/null &
[7] 44
root@nginx-556585848b-4ds7x:/# dd if=/dev/zero of=/dev/null &
[8] 45
root@nginx-556585848b-4ds7x:/# exit                          
exit
[root@k8s-master1 ~]# docker stats d153182a5410

CONTAINER ID        NAME                                                                              CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
d153182a5410        k8s_nginx_nginx-556585848b-4ds7x_default_ee2915cc-b347-4187-b1b9-4e3f1ddd83ba_0   51.70%              4.898MiB / 1.779GiB   0.27%               0B / 0B             0B / 0B             11

CONTAINER ID        NAME                                                                              CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
d153182a5410        k8s_nginx_nginx-556585848b-4ds7x_default_ee2915cc-b347-4187-b1b9-4e3f1ddd83ba_0   52.30%              4.898MiB / 1.779GiB   0.27%               0B / 0B             0B / 0B             11

CONTAINER ID        NAME                                                                              CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
d153182a5410        k8s_nginx_nginx-556585848b-4ds7x_default_ee2915cc-b347-4187-b1b9-4e3f1ddd83ba_0   52.30%              4.898MiB / 1.779GiB   0.27%               0B / 0B             0B / 0B             11

CONTAINER ID        NAME                                                                              CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
d153182a5410        k8s_nginx_nginx-556585848b-4ds7x_default_ee2915cc-b347-4187-b1b9-4e3f1ddd83ba_0   50.98%              4.898MiB / 1.779GiB   0.27%               0B / 0B             0B / 0B             11

CONTAINER ID        NAME                                                                              CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
d153182a5410        k8s_nginx_nginx-556585848b-4ds7x_default_ee2915cc-b347-4187-b1b9-4e3f1ddd83ba_0   50.98%              4.898MiB / 1.779GiB   0.27%               0B / 0B             0B / 0B             11
### CPU holds steady at ~50%, i.e. our 500m limit: CPU is a compressible resource that can be overcommitted, but usage never exceeds the limits ceiling and the process is not killed
  • Conclusion: a process that saturates the CPU is not killed; it is throttled at the limits value, because CPU can be compressed and overcommitted
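
The throttling itself is visible in the CPU cgroup statistics — a hedged sketch reading the cgroup v1 counters through the container (assuming /sys/fs/cgroup/cpu inside the container maps to its own cgroup, as docker normally arranges):

### inspect the CFS throttling counters for this container
docker exec d153182a5410 cat /sys/fs/cgroup/cpu/cpu.stat
### nr_periods     – scheduler periods elapsed
### nr_throttled   – periods in which the quota ran out and the container was throttled
### throttled_time – total time (in ns) spent throttled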

Test setting the limits value extremely high

[root@k8s-master1 ~]# vi deployment-nginx.yaml 
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 100Mi
            cpu: 200m
          limits:
            memory: 500Mi
            cpu: 500     ### without the "m" suffix this means 500 whole cores
      imagePullSecrets:
      - name: harbor-login
### re-apply
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml 
deployment.apps/nginx configured
[root@k8s-master1 ~]# kubectl get pod
NAME                     READY   STATUS              RESTARTS   AGE
nginx-5556f6dc97-dn4hf   1/1     Running             0          4s
nginx-5556f6dc97-lqfqt   1/1     Running             0          4s
nginx-5556f6dc97-npvf8   0/1     ContainerCreating   0          2s
nginx-5556f6dc97-trz8b   0/1     ContainerCreating   0          1s
nginx-9b46cf8dd-qdrg7    1/1     Running             0          46m

### all pods got scheduled
[root@k8s-master1 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5556f6dc97-dn4hf   1/1     Running   0          9s
nginx-5556f6dc97-lqfqt   1/1     Running   0          9s
nginx-5556f6dc97-npvf8   1/1     Running   0          7s
nginx-5556f6dc97-trz8b   1/1     Running   0          6s
  • Conclusion: scheduling does not take the limits value into account (hence the "Total limits may be over 100 percent, i.e., overcommitted" note in the describe node output earlier)

Test raising the memory requests value

[root@k8s-master1 ~]# vi deployment-nginx.yaml 
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 100Gi       #### requests set to 100Gi
            cpu: 200m
          limits:
            memory: 500Mi
            cpu: 500
      imagePullSecrets:
      - name: harbor-login
#### re-apply
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml 
The Deployment "nginx" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "100Gi": must be less than or equal to memory limit
### bonus finding: requests must not exceed limits — the object is rejected by the api-server, so it never even reaches scheduling


### keep adjusting the resource config
[root@k8s-master1 ~]# vi deployment-nginx.yaml 
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 100Gi      #### requests 100Gi
            cpu: 200m
          limits:
            memory: 500Gi    #### limits 500Gi
            cpu: 500
      imagePullSecrets:
      - name: harbor-login
#### re-apply
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml 
deployment.apps/nginx configured
[root@k8s-master1 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5556f6dc97-dn4hf   1/1     Running   0          7m45s
nginx-5556f6dc97-lqfqt   1/1     Running   0          7m45s
nginx-5556f6dc97-trz8b   1/1     Running   0          7m42s
nginx-5999cd5549-2qtwp   0/1     Pending   0          6s    ### Pending
nginx-5999cd5549-7nbkx   0/1     Pending   0          6s  ### Pending
With 4 replicas, under the Deployment's rolling-update strategy, if the first new Pods get stuck the rollout will not proceed to the rest.
### see why the rolling update is stuck
[root@k8s-master1 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5556f6dc97-dn4hf   1/1     Running   0          9m13s
nginx-5556f6dc97-lqfqt   1/1     Running   0          9m13s
nginx-5556f6dc97-trz8b   1/1     Running   0          9m10s
nginx-5999cd5549-2qtwp   0/1     Pending   0          94s
nginx-5999cd5549-7nbkx   0/1     Pending   0          94s

[root@k8s-master1 ~]# kubectl describe pod nginx-5999cd5549-7nbkx | tail -10
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  28s (x6 over 4m30s)  default-scheduler  0/4 nodes are available: 4 Insufficient memory.  
   ### all 4 nodes report insufficient memory (this is a 4-node cluster)
  • Conclusion 1: when the memory requests is set too large, the Pod cannot be scheduled
  • Conclusion 2: requests must be less than or equal to limits, otherwise the object is rejected outright

Test raising the CPU requests value

[root@k8s-master1 ~]# vi deployment-nginx.yaml 
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 100Mi
            cpu: 200     ### set to 200 cores
          limits:
            memory: 500Mi
            cpu: 500
      imagePullSecrets:
      - name: harbor-login
### re-apply
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml 
deployment.apps/nginx configured
[root@k8s-master1 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5556f6dc97-dn4hf   1/1     Running   0          17m
nginx-5556f6dc97-lqfqt   1/1     Running   0          17m
nginx-5556f6dc97-trz8b   1/1     Running   0          17m
nginx-6b895b54fb-d5n42   0/1     Pending   0          6s
nginx-6b895b54fb-qhjhn   0/1     Pending   0          6s
[root@k8s-master1 ~]# kubectl describe pod nginx-6b895b54fb-qhjhn | tail -10
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  31s   default-scheduler  0/4 nodes are available: 4 Insufficient cpu.

### all 4 nodes report insufficient CPU (4-node cluster)
  • Conclusion: when the CPU requests is set too large, no node can satisfy it and the Pod is not scheduled

Test the case where only some Pods fit the remaining resources

[root@k8s-master1 ~]# vi deployment-nginx.yaml 
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 1000Mi   
             ### 1000Mi per pod with 5 replicas; with 4 machines of 2Gi RAM each, plus system overhead, not all 5 pods should be able to run
            cpu: 200m
          limits:
            memory: 5000Mi
            cpu: 500m
      imagePullSecrets:
      - name: harbor-login

### re-apply
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml 
deployment.apps/nginx configured
[root@k8s-master1 ~]# kubectl get pod
NAME                     READY   STATUS              RESTARTS   AGE
nginx-5556f6dc97-trz8b   1/1     Running             0          25m
nginx-556585848b-4ds7x   1/1     Running             0          5s
nginx-556585848b-8b7dt   1/1     Running             0          4s
nginx-556585848b-cd7ns   0/1     Pending             0          2s
nginx-556585848b-gszt5   0/1     ContainerCreating   0          3s
nginx-556585848b-ksbds   1/1     Running             0          5s
[root@k8s-master1 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-556585848b-4ds7x   1/1     Running   0          10s
nginx-556585848b-8b7dt   1/1     Running   0          9s
nginx-556585848b-cd7ns   0/1     Pending   0          7s    ### one pod stuck in Pending
nginx-556585848b-gszt5   1/1     Running   0          8s
nginx-556585848b-ksbds   1/1     Running   0          10s
[root@k8s-master1 ~]# kubectl describe pod nginx-556585848b-cd7ns | tail -10
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  68s (x3 over 71s)  default-scheduler  0/4 nodes are available: 4 Insufficient memory.
### all 4 nodes report insufficient memory
  • Conclusion: requests is the basis for scheduling; even if a machine could physically run one more Pod, if it cannot satisfy the requests the Pod will not be scheduled there

Priority tiers of resource allocation (QoS)

  • requests equal to limits: the application is considered reliable (QoS class Guaranteed — see the sketch below)
  • neither parameter set, or only one of them: unreliable (QoS class BestEffort when none are set; not recommended — under resource pressure these services are killed first)
  • requests < limits: fairly reliable (QoS class Burstable, the class shown in the describe outputs above)
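
For illustration, a minimal sketch (hypothetical values) of a container resources block that lands in the Guaranteed class — requests and limits identical for both cpu and memory:

resources:
  requests:
    memory: 200Mi
    cpu: 200m
  limits:
    memory: 200Mi
    cpu: 200m

kubectl describe pod would then report QoS Class: Guaranteed instead of the Burstable seen above.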

A few considerations

1. The requests and limits covered above are all set by users themselves.
2. If someone sets requests larger than the available resources, those Pods will never be scheduled — a very poor user experience.
3. If requests is reasonable but limits is set extremely high, and the user's program suddenly consumes lots of resources without reaching the limit, isn't that also a waste of our resources?

No need to worry about any of this — k8s thought it through long ago. Keep reading.

Using LimitRange to constrain resource configuration

A LimitRange is namespace-scoped, so we keep using the dev namespace from above.

[root@k8s-master1 ~]# vi limitrange.yaml
[root@k8s-master1 ~]# cat limitrange.yaml
apiVersion: v1
kind: LimitRange    ### the resource kind
metadata:
  name: test-limitrange
spec:
  limits:
  - max:    ### maximum allowed when creating resources in this namespace
      cpu: 600m
      memory: 1200Mi
    min:   ### minimum allowed when creating resources in this namespace
      cpu: 100m
      memory: 100Mi
    maxLimitRequestRatio:    #### maximum allowed ratio of limits to requests
      cpu: 3
      memory: 2
    type: Pod      ##### the rules above apply per Pod; those below apply per container. A Pod may hold several containers, so Pod-level rules have no defaults, while container-level rules do
  - default:     ### default limits for a container that specifies none
       cpu: 200m
       memory: 200Mi
    defaultRequest:     ### default requests for a container that specifies none
       cpu: 200m
       memory: 100Mi
    max:       #### maximum allowed per container
      cpu: 200m
      memory: 1000Mi
    min:     ### minimum allowed per container
      cpu: 100m
      memory: 100Mi
    maxLimitRequestRatio:     #### maximum limits/requests ratio per container
      cpu: 2
      memory: 4
    type: Container

### create the limitrange in the chosen namespace
[root@k8s-master1 ~]# kubectl apply -f limitrange.yaml -n dev
limitrange/test-limitrange created

### list all limits in the dev namespace; every constraint we configured is visible here
[root@k8s-master1 ~]# kubectl describe limits -n dev
Name:       test-limitrange
Namespace:  dev
Type        Resource  Min    Max     Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---     ---------------  -------------  -----------------------
Pod         memory    100Mi  1200Mi  -                -              2
Pod         cpu       100m   600m    -                -              3
Container   cpu       100m   200m    200m             200m           2
Container   memory    100Mi  1000Mi  100Mi            200Mi          4

Test creating resources without requests and limits


#### comment out our resource limits
[root@k8s-master1 ~]# vi deployment-nginx.yaml 
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
#        resources:
#          requests:
#            memory: 1000Mi
#            cpu: 200m
#          limits:
#            memory: 5000Mi
#            cpu: 500m
      imagePullSecrets:
      - name: harbor-login
### deploy into the dev namespace
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml -n dev
deployment.apps/nginx created
[root@k8s-master1 ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5896cbffc6-5t9z8   1/1     Running   0          6s
nginx-5896cbffc6-dlmgd   1/1     Running   0          6s
nginx-5896cbffc6-mt4jn   1/1     Running   0          6s
nginx-5896cbffc6-tpsnq   1/1     Running   0          6s

### inspect the pods' resource configuration
[root@k8s-master1 ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5896cbffc6-5t9z8   1/1     Running   0          3m33s
nginx-5896cbffc6-dlmgd   1/1     Running   0          3m33s
nginx-5896cbffc6-mt4jn   1/1     Running   0          3m33s
nginx-5896cbffc6-tpsnq   1/1     Running   0          3m33s
[root@k8s-master1 ~]# kubectl get pod -n dev -o yaml | grep -A 15 "spec"
  spec:
    containers:
    - image: 10.245.4.88:8888/base-images/nginx
      imagePullPolicy: IfNotPresent
      name: nginx
      resources:
        limits:     #### the default resource limits have been injected automatically
          cpu: 200m
          memory: 200Mi
        requests:
          cpu: 200m
          memory: 100Mi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
--
  spec:
    containers:
    - image: 10.245.4.88:8888/base-images/nginx
      imagePullPolicy: IfNotPresent
      name: nginx
      resources:
        limits:
          cpu: 200m
          memory: 200Mi
        requests:
          cpu: 200m
          memory: 100Mi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
--
  spec:
    containers:
    - image: 10.245.4.88:8888/base-images/nginx
      imagePullPolicy: IfNotPresent
      name: nginx
      resources:
        limits:
          cpu: 200m
          memory: 200Mi
        requests:
          cpu: 200m
          memory: 100Mi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
--
  spec:
    containers:
    - image: 10.245.4.88:8888/base-images/nginx
      imagePullPolicy: IfNotPresent
      name: nginx
      resources:
        limits:
          cpu: 200m
          memory: 200Mi
        requests:
          cpu: 200m
          memory: 100Mi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
  • Conclusion: with a LimitRange configured, resources created without explicit resource settings automatically receive the LimitRange's default and defaultRequest values
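
The injection is performed by the LimitRanger admission plugin, which also records an annotation on each defaulted Pod — a hedged way to check, using one of the pod names from the listing above:

### the kubernetes.io/limit-ranger annotation records which values were defaulted
kubectl describe pod nginx-5896cbffc6-5t9z8 -n dev | grep limit-ranger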

Test creating resources that violate the ratio rule

[root@k8s-master1 ~]# vi deployment-nginx.yaml 
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 200Mi     ### the memory limit/request ratio here is 3, exceeding the allowed 2
            cpu: 200m
          limits:
            memory: 600Mi
            cpu: 500m
      imagePullSecrets:
      - name: harbor-login
### redeploy the application
[root@k8s-master1 ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5896cbffc6-5t9z8   1/1     Running   0          12m
nginx-5896cbffc6-mt4jn   1/1     Running   0          12m
nginx-5896cbffc6-tpsnq   1/1     Running   0          12m
[root@k8s-master1 ~]# kubectl get deployment -n dev
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/4     0            3           12m     #### UP-TO-DATE is 0, so the Deployment has not rolled out
[root@k8s-master1 ~]# kubectl get deployment -n dev -o yaml | grep message    #### check the deployment's status messages
      message: Deployment has minimum availability.
      message: 'pods "nginx-54765fd77c-lct5r" is forbidden: [memory max limit to request
      message: ReplicaSet "nginx-54765fd77c" is progressing.
[root@k8s-master1 ~]# kubectl get deployment -n dev -o yaml | grep -A 6 message
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: "2022-12-22T15:44:46Z"
      lastUpdateTime: "2022-12-22T15:44:46Z"
      message: 'pods "nginx-54765fd77c-lct5r" is forbidden: [memory max limit to request
        ratio per Pod is 2, but provided ratio is 3.000000, maximum cpu usage per      #### here is the hint: the allowed ratio is 2, ours is 3
        Container is 200m, but limit is 500m, cpu max limit to request ratio per Container
        is 2, but provided ratio is 2.500000]'
      reason: FailedCreate    ### hence the failure
      status: "True"
      type: ReplicaFailure
--
      message: ReplicaSet "nginx-54765fd77c" is progressing.
      reason: ReplicaSetUpdated
      status: "True"
      type: Progressing
    observedGeneration: 2
    readyReplicas: 3
    replicas: 3
  • Conclusion: when the LimitRange ratio rules are violated, the new Pods are forbidden and the Deployment cannot roll out

Test creating resources that exceed the configured maximums

[root@k8s-master1 ~]# kubectl describe limits -n dev
Name:       test-limitrange
Namespace:  dev
Type        Resource  Min    Max     Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---     ---------------  -------------  -----------------------
Pod         memory    100Mi  1200Mi  -                -              2
Pod         cpu       100m   600m    -                -              3
Container   cpu       100m   200m    200m             200m           2
Container   memory    100Mi  1000Mi  100Mi            200Mi          4
[root@k8s-master1 ~]# vi deployment-nginx.yaml 
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 1800Mi    ### the LimitRange Pod maximum is 1200Mi
            cpu: 200m
          limits:
            memory: 6000Mi
            cpu: 500m
      imagePullSecrets:
      - name: harbor-login
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml -n dev
deployment.apps/nginx configured
[root@k8s-master1 ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5896cbffc6-5t9z8   1/1     Running   0          19m
nginx-5896cbffc6-mt4jn   1/1     Running   0          19m
nginx-5896cbffc6-tpsnq   1/1     Running   0          19m
[root@k8s-master1 ~]# kubectl get deployment -n dev
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/4     0            3           20m
[root@k8s-master1 ~]# kubectl get deployment -n dev -o yaml | grep  -A 6 "message"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: "2022-12-22T15:44:46Z"
      lastUpdateTime: "2022-12-22T15:44:46Z"
      message: 'pods "nginx-54765fd77c-lct5r" is forbidden: [memory max limit to request
        ratio per Pod is 2, but provided ratio is 3.000000, maximum cpu usage per    #### a ratio error again, suggesting the ratio check takes precedence
        Container is 200m, but limit is 500m, cpu max limit to request ratio per Container
        is 2, but provided ratio is 2.500000]'
      reason: FailedCreate
      status: "True"
      type: ReplicaFailure
--
      message: Created new replica set "nginx-9d7f8f5cb"
      reason: NewReplicaSetCreated
      status: "True"
      type: Progressing
    observedGeneration: 3
    readyReplicas: 3
    replicas: 3

#### adjust the resource config once more so the ratio is satisfied
[root@k8s-master1 ~]# vi deployment-nginx.yaml 
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 1800Mi
            cpu: 200m
          limits:
            memory: 2000Mi
            cpu: 500m
      imagePullSecrets:
      - name: harbor-login
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml -n dev
deployment.apps/nginx configured
[root@k8s-master1 ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5896cbffc6-5t9z8   1/1     Running   0          22m     ### pods still unchanged
nginx-5896cbffc6-mt4jn   1/1     Running   0          22m
nginx-5896cbffc6-tpsnq   1/1     Running   0          22m
[root@k8s-master1 ~]# kubectl get deployment -n dev
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/4     0            3           23m    ### the deployment still has not rolled out
[root@k8s-master1 ~]# kubectl get deployment -n dev -o yaml | grep  -A 6 "message"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: "2022-12-22T15:34:25Z"
      lastUpdateTime: "2022-12-22T15:54:14Z"
      message: Created new replica set "nginx-9d7f8f5cb"
      reason: NewReplicaSetCreated
      status: "True"
      type: Progressing
    - lastTransitionTime: "2022-12-22T15:57:14Z"
      lastUpdateTime: "2022-12-22T15:57:14Z"
      message: 'pods "nginx-7599cdff87-qbcxz" is forbidden: [maximum memory usage
        per Pod is 1200Mi, but limit is 2097152k, maximum memory usage per Container
        is 1000Mi, but limit is 2000Mi, maximum cpu usage per Container is 200m, but   ### the memory settings now exceed the LimitRange maximums
        limit is 500m, cpu max limit to request ratio per Container is 2, but provided
        ratio is 2.500000]'
      reason: FailedCreate   ### so creation fails
      status: "True"
  • Conclusion 1: resources whose settings fall outside the configured constraints cannot be created either.
  • Conclusion 2: the ratio check appears to be evaluated first, then the max/min limits.

A few more considerations

1. If the team is large, with different project groups using different namespaces — some relatively important, some less so — how do we cap each of them as a whole?
2. What if a less important team creates lots of Pods, leaves unused ones lying around, and keeps creating more — isn't that another kind of waste?

Of course, k8s thought of this too — keep reading.

Using ResourceQuota to cap resources per namespace

You may be wondering: both ResourceQuota and limits restrict resources, so which one applies? There is no conflict: a ResourceQuota is an aggregate ceiling for the whole namespace — for example, the total number of Pods, or the sum of all Pods' requests/limits, must not exceed the quota. (Note that once a quota covers compute resources, Pods in that namespace must specify those requests/limits — or inherit them from a LimitRange default — or they are rejected.)

[root@k8s-master1 ~]# vi ResourceQuota.yaml
[root@k8s-master1 ~]# cat ResourceQuota.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-resourcequota
spec:
  hard:
    pods: 4      ### cap the number of pods at 4
    requests.cpu: 200m    ### caps on total cpu and memory
    requests.memory: 400Mi
    limits.cpu: 800m
    limits.memory: 800Mi
 ## there are many more configurable fields; I won't verify them one by one (a short sketch follows the describe output below)
[root@k8s-master1 ~]# kubectl apply -f  ResourceQuota.yaml  -n dev
resourcequota/test-resourcequota created
[root@k8s-master1 ~]# kubectl describe quota  -n dev
Name:            test-resourcequota
Namespace:       dev
Resource         Used  Hard    #### Used is what has been consumed so far; Hard is the enforced ceiling
--------         ----  ----
limits.cpu       0     800m
limits.memory    0     800Mi
pods             0     4
requests.cpu     0     200m
requests.memory  0     400Mi
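
For reference, a hedged sketch of a few of those other fields — object-count quotas with hypothetical values:

spec:
  hard:
    count/deployments.apps: 10     ### max Deployments in the namespace
    services: 5                    ### max Services
    configmaps: 20                 ### max ConfigMaps
    persistentvolumeclaims: 4      ### max PVCs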


Test what happens when the resource quota reaches its ceiling

[root@k8s-master1 ~]# vi deployment-nginx.yaml
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 200Mi
            cpu: 100m
          limits:
            memory: 400Mi
            cpu: 100m
      imagePullSecrets:
      - name: harbor-login
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml -n dev
deployment.apps/nginx configured
[root@k8s-master1 ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-68c84ffdc7-2jsx2   1/1     Running   0          2s
nginx-68c84ffdc7-76qlr   1/1     Running   0          4s
[root@k8s-master1 ~]# kubectl describe quota  -n dev
Name:            test-resourcequota
Namespace:       dev
Resource         Used   Hard
--------         ----   ----
limits.cpu       200m   800m  ## not yet at the cap
limits.memory    800Mi  800Mi    ### cap reached
pods             2      4    ## two pods currently running
requests.cpu     200m   200m   ### cap reached
requests.memory  400Mi  400Mi ### cap reached
[root@k8s-master1 ~]# vi deployment-nginx.yaml 
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-1   ### change the name to avoid a clash
  name: nginx-1  ### change the name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-1   ### change the name
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-1   ### change the name
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx-1   ### change the name
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 200Mi
            cpu: 100m
          limits:
            memory: 400Mi
            cpu: 100m
      imagePullSecrets:
      - name: harbor-login
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml -n dev
deployment.apps/nginx-1 created
[root@k8s-master1 ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-68c84ffdc7-2jsx2   1/1     Running   0          3m51s
nginx-68c84ffdc7-76qlr   1/1     Running   0          3m53s
[root@k8s-master1 ~]# kubectl get deployment nginx-1 -n dev -o yaml | grep -A 6 "message"
    message: Created new replica set "nginx-1-7dcf8cbc69"
    reason: NewReplicaSetCreated
    status: "True"
    type: Progressing
  - lastTransitionTime: "2022-12-22T16:44:50Z"
    lastUpdateTime: "2022-12-22T16:44:50Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2022-12-22T16:44:50Z"
    lastUpdateTime: "2022-12-22T16:44:50Z"
    message: 'pods "nginx-1-7dcf8cbc69-cgxl7" is forbidden: exceeded quota: test-resourcequota,    ### blocked by the quota
      requested: limits.memory=400Mi,requests.cpu=100m,requests.memory=200Mi, used:
      limits.memory=800Mi,requests.cpu=200m,requests.memory=400Mi, limited: limits.memory=800Mi,requests.cpu=200m,requests.memory=400Mi'
    reason: FailedCreate   ### hence the failure
    status: "True"
    type: ReplicaFailure
  observedGeneration: 1
  • Conclusion: once the namespace quota is used up, new Pods are rejected at admission and never reach scheduling

Test creating resources once the Pod-count quota is reached

### clean up the current resources
[root@k8s-master1 ~]# kubectl delete -f deployment-nginx.yaml -n dev
[root@k8s-master1 ~]# kubectl delete deployment  nginx -n dev
deployment.apps "nginx" deleted
[root@k8s-master1 ~]# kubectl get pod -n dev
No resources found in dev namespace.
## for the experiment, edit the quota yaml and re-apply it
[root@k8s-master1 ~]# vi ResourceQuota.yaml 
[root@k8s-master1 ~]# cat ResourceQuota.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-resourcequota
spec:
  hard:
    pods: 4
    requests.cpu: 2000m
    requests.memory: 4000Mi
    limits.cpu: 8000m
    limits.memory: 8000Mi
[root@k8s-master1 ~]# kubectl apply -f ResourceQuota.yaml -n dev
resourcequota/test-resourcequota configured
[root@k8s-master1 ~]# kubectl describe quota  -n dev
Name:            test-resourcequota
Namespace:       dev
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     8
limits.memory    0     8000Mi
pods             0     4
requests.cpu     0     2
requests.memory  0     4000Mi

### create more than 4 Pod replicas

[root@k8s-master1 ~]# vi deployment-nginx.yaml 
[root@k8s-master1 ~]# cat deployment-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-1
  name: nginx-1
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-1
    spec:
      containers:
      - image: 10.245.4.88:8888/base-images/nginx
        name: nginx-1
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
          limits:
            memory: 100Mi
            cpu: 100m
      imagePullSecrets:
      - name: harbor-login
[root@k8s-master1 ~]# kubectl apply -f deployment-nginx.yaml -n dev
deployment.apps/nginx-1 created
[root@k8s-master1 ~]# kubectl get pod -n dev
NAME                       READY   STATUS    RESTARTS   AGE
nginx-1-6b68fc54b8-f8kxf   1/1     Running   0          8s
nginx-1-6b68fc54b8-lwd2h   1/1     Running   0          8s
nginx-1-6b68fc54b8-mq54t   1/1     Running   0          8s
nginx-1-6b68fc54b8-mvg9n   1/1     Running   0          8s
[root@k8s-master1 ~]# kubectl get deployment nginx-1 -n dev
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
nginx-1   4/6     4            4           24s   ### capped by the quota — only 4 of 6 replicas created
[root@k8s-master1 ~]# kubectl describe quota  -n dev
Name:            test-resourcequota
Namespace:       dev
Resource         Used   Hard
--------         ----   ----
limits.cpu       400m   8
limits.memory    400Mi  8000Mi
pods             4      4
requests.cpu     400m   2
requests.memory  400Mi  4000Mi

  • Conclusion: once the Pod count reaches the quota, the remaining replicas cannot be created either

Yet more considerations

1. Even with all the above in place, node resources can still run short — inevitable as the team keeps growing. What then?
2. Or a program on some node suddenly goes haywire and grabs a huge amount of resources — a fatal blow to the Pods on that node. What then?

No need to panic — k8s has this covered as well.

Kubernetes' Pod eviction policy

Kubernetes provides pod eviction: the kubelet monitors each node's resource pressure and, when a node runs short of resources, evicts Pods so they can be rescheduled onto other nodes, keeping workloads running. (The node controller in kube-controller-manager also periodically checks node status and evicts Pods from nodes that become unhealthy; the flags below, however, are kubelet flags.)

Configurable eviction settings

--eviction-soft=memory.available<2Gi  ### evict when available memory drops below 2Gi
--eviction-soft-grace-period=memory.available=2m   ## evict only after the soft condition above has persisted for 2 minutes
--eviction-hard=<conditions, comma-separated>   #### evict immediately as soon as a hard condition is met
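
The same thresholds can also be expressed in a KubeletConfiguration file rather than command-line flags — a minimal sketch with hypothetical values:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionSoft:
  memory.available: "2Gi"          ### soft threshold
evictionSoftGracePeriod:
  memory.available: "2m"           ### how long the soft threshold must hold before evicting
evictionHard:
  memory.available: "500Mi"        ### evict immediately once crossed
  nodefs.available: "10%"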

There are many more conditions that I haven't tested one by one — it's already deep into the night, so please forgive me; I'll verify them when I have time.

Closing words

I put this together after consulting many references and videos. It isn't official documentation, but every step was verified hands-on by following those materials. If anything here is unreasonable, please point it out — thank you all.

In the spirit of learning and improving together, this article draws on many blogs and videos; I summarized what I found best written. If anything here infringes or displeases, please contact me and I will amend the article. Thanks.
