Using Basic k8s Features

This post follows up the earlier k8s installation with basic functional tests. It does not introduce concepts; some familiarity with Kubernetes is assumed for the hands-on steps.

Basic functional tests

Creating and deleting pods

Create a Deployment controller
[root@k8s-master01 ~]# kubectl create deployment nginx  --image=nginx  --replicas=1
deployment.apps/nginx created

View the pod
[root@k8s-master01 ~]# kubectl get pod
NAME                     READY   STATUS              RESTARTS   AGE
nginx-7854ff8877-6xbb5   0/1     ContainerCreating   0          27s

View the Deployment controller
[root@k8s-master01 ~]# kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           101s

View the ReplicaSet controller
[root@k8s-master01 ~]# kubectl get replicaset
NAME               DESIRED   CURRENT   READY   AGE
nginx-7854ff8877   1         1         1       5m50s

View detailed pod info
[root@k8s-master01 ~]# kubectl get pod  -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP           NODE           NOMINATED NODE   READINESS GATES
nginx-7854ff8877-6xbb5   1/1     Running   0          7m1s   10.244.2.2   k8s-worker02   <none>           <none>

Test that nginx is serving
[root@k8s-worker01 ~]# curl 10.244.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.

<em>Thank you for using nginx.</em>
</body>
</html>

Delete the pod; the Deployment immediately creates a replacement
[root@k8s-master01 ~]# kubectl delete pod nginx-7854ff8877-6xbb5
pod "nginx-7854ff8877-6xbb5" deleted

[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
nginx-7854ff8877-bz4pk   1/1     Running   0          24s   10.244.2.3   k8s-worker02   <none>           <none>

Creating and deleting Services

View Services
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   97m

Create a Service from the command line
[root@k8s-master01 ~]# kubectl expose deployment nginx --name=nginx-svc --port=80 --target-port=80 --protocol=TCP
service/nginx-svc exposed

View Services
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   102m
nginx-svc    ClusterIP   10.109.88.128   <none>        80/TCP    3s

Test the nginx container
[root@k8s-worker01 ~]# curl 10.109.88.128
<!DOCTYPE html>
<html>
<head>
(output omitted)
</body>
</html>

Create a pod named client running busybox from the command line
[root@k8s-master01 ~]# kubectl run client --image=busybox  -it --restart=Never
If you don't see a command prompt, try pressing enter.
/ #
/ # wget  -O - -q 10.109.88.128
<!DOCTYPE html>
<html>
</html>



[root@k8s-master01 ~]# kubectl get pod
NAME                     READY   STATUS      RESTARTS   AGE
client                   0/1     Completed   0          2m24s
nginx-7854ff8877-bz4pk   1/1     Running     0          21m

[root@k8s-master01 ~]# kubectl delete pod client
pod "client" deleted

[root@k8s-master01 ~]# kubectl create deployment mynginx  --image=nginx  --replicas=2  --port=80
deployment.apps/mynginx created

Expose a random port with a NodePort Service
[root@k8s-master01 ~]#  kubectl expose deployment mynginx --name=mynginx-svc --type="NodePort" --port=80
service/mynginx-svc exposed

View Service info
[root@k8s-master01 ~]#  kubectl get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP        129m
mynginx-svc   NodePort    10.107.229.246   <none>        80:32690/TCP   32s
nginx-svc     ClusterIP   10.109.88.128    <none>        80/TCP         26m

All three nodes can now be reached through this random NodePort.

View detailed Service info
[root@k8s-master01 ~]# kubectl describe svc mynginx
Name: mynginx-svc
Namespace: default
Labels: app=mynginx
Annotations:
Selector: app=mynginx
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.107.229.246
IPs: 10.107.229.246
Port: 80/TCP
TargetPort: 80/TCP
NodePort: 32690/TCP
Endpoints: 10.244.1.4:80,10.244.2.5:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
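The Endpoints line above is derived from the Service's selector: the endpoints controller collects the IPs of ready pods whose labels match. A minimal Python sketch of that matching (the pod data below is made up for illustration):

```python
def endpoints_for(selector, pods, target_port):
    """Collect ip:port of ready pods whose labels include every selector pair."""
    return sorted(
        f"{p['ip']}:{target_port}"
        for p in pods
        if p["ready"] and all(p["labels"].get(k) == v for k, v in selector.items())
    )

# Hypothetical pod data mirroring the session above.
pods = [
    {"ip": "10.244.1.4", "labels": {"app": "mynginx"}, "ready": True},
    {"ip": "10.244.2.5", "labels": {"app": "mynginx"}, "ready": True},
    {"ip": "10.244.2.3", "labels": {"app": "nginx"}, "ready": True},
]
print(endpoints_for({"app": "mynginx"}, pods, 80))
# ['10.244.1.4:80', '10.244.2.5:80']
```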

Scaling Deployments up and down

Scale up or down with scale and --replicas=N
[root@k8s-master01 ~]# kubectl scale deployments/mynginx --replicas=3
deployment.apps/mynginx scaled

[root@k8s-master01 ~]# kubectl get pod -l app=mynginx
NAME                       READY   STATUS    RESTARTS   AGE
mynginx-5b7f5798b4-97bdj   1/1     Running   0          54s
mynginx-5b7f5798b4-shv22   1/1     Running   0          21m
mynginx-5b7f5798b4-wqqrp   1/1     Running   0          21m


[root@k8s-master01 ~]# kubectl scale deployments/mynginx --replicas=1
deployment.apps/mynginx scaled


[root@k8s-master01 ~]# kubectl get pod -l app=mynginx
NAME                       READY   STATUS    RESTARTS   AGE
mynginx-5b7f5798b4-wqqrp   1/1     Running   0          21m

Delete all Deployments and the pods they created
[root@k8s-master01 ~]# kubectl delete deployment --all
deployment.apps "mynginx" deleted
deployment.apps "nginx" deleted
[root@k8s-master01 ~]# kubectl get pod
No resources found in default namespace.


List the image names of all current pods, sorted and deduplicated
kubectl get pods -o=jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{end}{"\n"}{end}' | sort |uniq
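For comparison, the same sort-and-dedup logic can be sketched in Python over the structure that `kubectl get pods -o json` returns (the pod list here is a made-up fragment):

```python
def unique_images(pod_list):
    """Equivalent of the jsonpath + sort + uniq pipeline: one entry per image."""
    images = [
        c["image"]
        for item in pod_list["items"]
        for c in item["spec"]["containers"]
    ]
    return sorted(set(images))

pod_list = {
    "items": [
        {"spec": {"containers": [{"image": "nginx"}, {"image": "busybox:latest"}]}},
        {"spec": {"containers": [{"image": "nginx"}]}},
    ]
}
print(unique_images(pod_list))  # ['busybox:latest', 'nginx']
```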
View a pod's configuration as YAML
[root@k8s-master01 ~]# kubectl get pod mynginx-5b7f5798b4-6xtmh -o yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2024-05-13T04:57:14Z"
  generateName: mynginx-5b7f5798b4-
  labels:
    app: mynginx
    pod-template-hash: 5b7f5798b4
  name: mynginx-5b7f5798b4-6xtmh
  namespace: default
.....

kubectl explain documents each field, a useful reference when writing YAML; you can drill down level by level:
kubectl explain pod
kubectl explain pod.metadata

Controlling k8s with simple YAML

Create from a YAML file (apply is the more common way to create):
[root@k8s-master01 ~]# kubectl create -f pod.yml
pod/pod-deme created

[root@k8s-master01 ~]# cat pod.yml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mynginx
    test: test1
  name: pod-deme
spec:
  containers:
  - image: busybox:latest
    name: busybox
    command:
     - "/bin/sh"
     - "-c"
     - "sleep 3000;"
  - image: nginx
    name: nginx

[root@k8s-master01 ~]# kubectl get pods -w
NAME                       READY   STATUS              RESTARTS   AGE
mynginx-5b7f5798b4-6xtmh   1/1     Running             0          127m
mynginx-5b7f5798b4-f7v8z   1/1     Running             0          127m
pod-deme                   0/2     ContainerCreating   0          4s
pod-deme                   2/2     Running             0          7s

--dry-run=client simulates the create without actually creating the pod; useful for catching errors
kubectl apply --dry-run=client -f pod.yml


[root@k8s-master01 ~]# kubectl get pod pod-deme --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
pod-deme   2/2     Running   0          20m   app=mynginx,test=test1

List pods that carry the test label
[root@k8s-master01 ~]# kubectl get pod  -l test
NAME       READY   STATUS    RESTARTS   AGE
pod-deme   2/2     Running   0          21m

List all pods with the test label's value shown as a column; several labels can be given comma-separated
[root@k8s-master01 ~]# kubectl get pod  -L test
NAME                       READY   STATUS    RESTARTS   AGE    TEST
mynginx-5b7f5798b4-6xtmh   1/1     Running   0          149m
mynginx-5b7f5798b4-f7v8z   1/1     Running   0          149m
pod-deme                   2/2     Running   0          21m    test1


[root@k8s-master01 ~]# kubectl delete -f pod.yml
pod "pod-deme" deleted

Viewing k8s information

View detailed pod info

[root@k8s-master01 ~]# kubectl describe pod pod-deme
Name: pod-deme
Namespace: default
Priority: 0

[root@k8s-master01 ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP            NODE           NOMINATED NODE   READINESS GATES
mynginx-5b7f5798b4-6xtmh   1/1     Running   0          129m   10.244.1.5    k8s-worker01   <none>           <none>
mynginx-5b7f5798b4-f7v8z   1/1     Running   0          129m   10.244.2.7    k8s-worker02   <none>           <none>
pod-deme                   2/2     Running   0          2m2s   10.244.2.12   k8s-worker02   <none>           <none>

[root@k8s-master01 ~]# curl 10.244.2.12
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
</body>
</html>

Viewing logs

[root@k8s-master01 ~]# kubectl logs pod-deme nginx

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
Enter a container; when a pod has several containers, name one with -c
 kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...] [options]

[root@k8s-master01 ~]# kubectl exec -c busybox -- /bin/sh
error: pod, type/name or --filename must be specified
[root@k8s-master01 ~]# kubectl exec -it pod-deme -c busybox -- /bin/sh

/ # ping baidu.com
PING baidu.com (110.242.68.66): 56 data bytes
64 bytes from 110.242.68.66: seq=0 ttl=127 time=51.205 ms
--- baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 39.986/49.245/56.546 ms

Adding labels

[root@k8s-master01 ~]# kubectl label pod  pod-deme tag=test
pod/pod-deme labeled

[root@k8s-master01 ~]# kubectl get pod  -l tag
NAME       READY   STATUS    RESTARTS      AGE
pod-deme   2/2     Running   1 (30m ago)   80m


[root@k8s-master01 ~]# kubectl get pod  --show-labels
NAME                       READY   STATUS    RESTARTS      AGE     LABELS
mynginx-5b7f5798b4-6xtmh   1/1     Running   0             3h29m   app=mynginx,pod-template-hash=5b7f5798b4
mynginx-5b7f5798b4-f7v8z   1/1     Running   0             3h29m   app=mynginx,pod-template-hash=5b7f5798b4
pod-deme                   2/2     Running   1 (31m ago)   81m     app=mynginx,tag=test,test=test1

Add multiple labels

[root@k8s-master01 ~]# kubectl label pod pod-deme test=test1 test2=test2 test3=test3
pod/pod-deme labeled

Overwrite a label value

[root@k8s-master01 ~]# kubectl label pod pod-deme test=test1.0 --overwrite
pod/pod-deme labeled

Remove labels in bulk

[root@k8s-master01 ~]# kubectl label pod pod-deme test- test2- test3-
pod/pod-deme unlabeled

[root@k8s-master01 ~]# kubectl get pod pod-deme --show-labels
NAME       READY   STATUS    RESTARTS      AGE    LABELS
pod-deme   2/2     Running   2 (20m ago)   120m   app=mynginx
[root@k8s-master01 ~]#

Label selectors

[root@k8s-master01 ~]# kubectl get   pod -l  tag=test
NAME       READY   STATUS    RESTARTS      AGE
pod-deme   2/2     Running   1 (35m ago)   85m
[root@k8s-master01 ~]# kubectl get   pod -l  tag!=test
NAME                       READY   STATUS    RESTARTS   AGE
mynginx-5b7f5798b4-6xtmh   1/1     Running   0          3h33m
mynginx-5b7f5798b4-f7v8z   1/1     Running   0          3h33m

Separate multiple conditions with commas
[root@k8s-master01 ~]# kubectl get   pod -l  app=mynginx,tag=test
NAME       READY   STATUS    RESTARTS      AGE
pod-deme   2/2     Running   1 (39m ago)   89m

Set-based filtering

[root@k8s-master01 ~]# kubectl get   pod -l  "tag notin (mynginx,test)"
NAME                       READY   STATUS    RESTARTS   AGE
mynginx-5b7f5798b4-6xtmh   1/1     Running   0          3h40m
mynginx-5b7f5798b4-f7v8z   1/1     Running   0          3h40m
[root@k8s-master01 ~]# kubectl get   pod -l  "tag in (mynginx,test)"
NAME       READY   STATUS    RESTARTS      AGE
pod-deme   2/2     Running   1 (43m ago)   93m
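The equality- and set-based selectors above can be modelled in a few lines; this is a simplified sketch, not the full Kubernetes selector grammar:

```python
def matches(labels, requirement):
    """Evaluate one selector requirement against a pod's labels.
    requirement: (key, op, values) with op in {'=', '!=', 'in', 'notin', 'exists'}.
    """
    key, op, values = requirement
    value = labels.get(key)
    if op == "=":
        return value == values[0]
    if op == "!=":
        return value != values[0]
    if op == "in":
        return value in values
    if op == "notin":  # like kubectl, also matches pods lacking the key
        return value not in values
    if op == "exists":
        return key in labels
    raise ValueError(f"unknown operator: {op}")

pod = {"app": "mynginx", "tag": "test", "test": "test1"}
print(matches(pod, ("tag", "in", ["mynginx", "test"])))    # True
print(matches(pod, ("tag", "notin", ["mynginx", "test"]))) # False
```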

View node labels

[root@k8s-master01 ~]# kubectl get nodes --show-labels

NAME           STATUS   ROLES           AGE     VERSION   LABELS
k8s-master01   Ready    control-plane   6h33m   v1.26.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-worker01   Ready    <none>          6h20m   v1.26.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker01,kubernetes.io/os=linux
k8s-worker02   Ready    <none>          5h49m   v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker02,kubernetes.io/os=linux

[root@k8s-master01 ~]# kubectl label nodes k8s-worker01 disk=ssd
node/k8s-worker01 labeled

Scheduling onto selected nodes with nodeSelector

[root@k8s-master01 ~]# cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mynginx
    test: test1
  name: pod-deme
spec:
  containers:
  - image: busybox:latest
    imagePullPolicy: IfNotPresent
    name: busybox
    command:
     - "/bin/sh"
     - "-c"
     - "sleep 3000;"
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
  nodeSelector:
     disk: ssd

[root@k8s-master01 ~]# kubectl create -f pod.yml
pod/pod-deme created
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP           NODE           NOMINATED NODE   READINESS GATES
mynginx-5b7f5798b4-6xtmh   1/1     Running   0          4h3m   10.244.1.5   k8s-worker01   <none>           <none>
mynginx-5b7f5798b4-f7v8z   1/1     Running   0          4h3m   10.244.2.7   k8s-worker02   <none>           <none>
pod-deme                   2/2     Running   0          38s    10.244.1.6   k8s-worker01   <none>           <none>

Via the node selector, the pod is pinned to run on the chosen node. imagePullPolicy: IfNotPresent goes to the registry only when the image is not already present locally.
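nodeSelector matching is simple subset logic: the pod may run on a node only if the node's labels contain every key/value pair in the selector. A sketch (node labels abbreviated):

```python
def schedulable_nodes(node_selector, nodes):
    """A node qualifies when its labels contain every nodeSelector pair."""
    return [
        name
        for name, labels in nodes.items()
        if all(labels.get(k) == v for k, v in node_selector.items())
    ]

nodes = {
    "k8s-worker01": {"kubernetes.io/hostname": "k8s-worker01", "disk": "ssd"},
    "k8s-worker02": {"kubernetes.io/hostname": "k8s-worker02"},
}
print(schedulable_nodes({"disk": "ssd"}, nodes))  # ['k8s-worker01']
```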

Upgrades and rolling updates

[root@k8s-master01 ~]# cat deploy.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy01
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      server: nginx
  template:
    metadata:
      labels:
        app: myapp
        server: nginx
    spec:
      containers:
      - name: nginx
        image: ikubernetes/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80

[root@k8s-master01 ~]# vim deploy.yml
[root@k8s-master01 ~]# kubectl get pod
NAME                        READY   STATUS    RESTARTS      AGE
deploy01-5c544f4466-r859p   1/1     Running   0             77s
deploy01-5c544f4466-vz7xg   1/1     Running   0             77s
mynginx-5b7f5798b4-6xtmh    1/1     Running   0             8h
mynginx-5b7f5798b4-f7v8z    1/1     Running   0             8h
pod-deme                    2/2     Running   5 (39m ago)   4h49m

[root@k8s-master01 ~]# kubectl get deployments.apps
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
deploy01   2/2     2            2           92s
mynginx    2/2     2            2           8h

To upgrade only the image, use set image

kubectl set image deployment/deploy01 nginx=ikubernetes/myapp:v2

[root@k8s-master01 ~]# kubectl get deployments.apps -w
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
deploy01   2/2     2            2           2m29s
mynginx    2/2     2            2           9h
deploy01   2/2     2            2           4m31s
deploy01   2/2     2            2           4m31s
deploy01   2/2     0            2           4m31s
deploy01   2/2     1            2           4m31s
deploy01   3/2     1            3           4m43s
deploy01   2/2     1            2           4m43s
deploy01   2/2     2            2           4m43s
deploy01   3/2     2            3           4m52s
deploy01   2/2     2            2           4m52s
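The momentary 3/2 READY state in the watch output is the default RollingUpdate strategy at work: maxSurge (rounds up) allows one extra pod while maxUnavailable (rounds down) keeps two available. A sketch of that arithmetic:

```python
import math

def rolling_update_bounds(replicas, max_surge="25%", max_unavailable="25%"):
    """Default RollingUpdate math: surge rounds up, unavailable rounds down."""
    surge = math.ceil(replicas * int(max_surge.rstrip("%")) / 100)
    unavail = math.floor(replicas * int(max_unavailable.rstrip("%")) / 100)
    return replicas + surge, replicas - unavail  # (max pods, min available)

print(rolling_update_bounds(2))  # (3, 2)
```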

[root@k8s-master01 ~]# kubectl get rs

NAME                  DESIRED   CURRENT   READY   AGE
deploy01-5c544f4466   0         0         0       11m
deploy01-fd8f57949    2         2         2       7m9s
mynginx-5b7f5798b4    2         2         2       9h
Two ReplicaSets now exist; the old one is kept so the Deployment can roll back.

Scale with:
kubectl scale deployment deploy01 --replicas 3
or change replicas: N in the config file and re-apply:
kubectl apply -f deploy.yml

Or patch it:
kubectl patch deployments.apps deploy01 -p '{"spec":{"replicas":4}}'
deployment.apps/deploy01 patched

Trigger the image update, then pause it immediately (replica count is 3)
kubectl set image deployment/deploy01 nginx=ikubernetes/myapp:v3 && kubectl rollout pause deployment deploy01

[root@k8s-master01 ~]# kubectl get deployments.apps -w

NAME       READY   UP-TO-DATE   AVAILABLE   AGE
deploy01   3/3     3            3           27m
mynginx    2/2     2            2           9h
deploy01   3/3     3            3           29m
deploy01   3/3     3            3           29m
deploy01   3/3     0            3           29m
deploy01   3/3     1            3           29m
deploy01   3/3     1            3           29m
deploy01   3/3     1            3           29m
deploy01   3/3     1            3           29m
deploy01   4/3     1            4           29m

[root@k8s-master01 ~]# kubectl get pod -w

NAME                        READY   STATUS    RESTARTS      AGE
deploy01-5c544f4466-kg64b   1/1     Running   0             11m
deploy01-5c544f4466-sfjs4   1/1     Running   0             11m
deploy01-5c544f4466-sqtlc   1/1     Running   0             11m
mynginx-5b7f5798b4-6xtmh    1/1     Running   0             9h
mynginx-5b7f5798b4-f7v8z    1/1     Running   0             9h
pod-deme                    2/2     Running   6 (16m ago)   5h16m
deploy01-bc6b58ddd-q2btm    0/1     Pending   0             0s
deploy01-bc6b58ddd-q2btm    0/1     Pending   0             0s
deploy01-bc6b58ddd-q2btm    0/1     ContainerCreating   0             0s
deploy01-bc6b58ddd-q2btm    1/1     Running             0             20s

Resume the update
[root@k8s-master01 ~]# kubectl rollout resume deployment deploy01

Rolling back
[root@k8s-master01 ~]# curl 10.244.1.15

Hello MyApp | Version: v3 | <a href="hostname.html">Pod Name</a>

[root@k8s-master01 ~]# kubectl rollout undo deployment deploy01
deployment.apps/deploy01 rolled back

[root@k8s-master01 ~]# kubectl get pods -o wide

NAME                        READY   STATUS    RESTARTS      AGE     IP            NODE           NOMINATED NODE   READINESS GATES
deploy01-5c544f4466-78nr9   1/1     Running   0             12s     10.244.1.16   k8s-worker01   <none>           <none>
deploy01-5c544f4466-88g65   1/1     Running   0             10s     10.244.2.24   k8s-worker02   <none>           <none>
deploy01-5c544f4466-gm48w   1/1     Running   0             13s     10.244.2.23   k8s-worker02   <none>           <none>

[root@k8s-master01 ~]# curl 10.244.1.16

Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

kubectl rollout --help
Available Commands:
  history   show rollout history
  pause     mark the given resource as paused
  restart   restart a resource
  resume    resume a paused resource
  status    show the status of the rollout
  undo      undo a previous rollout

[root@k8s-master01 ~]# kubectl rollout history deployment deploy01
deployment.apps/deploy01
REVISION CHANGE-CAUSE
2
4
5

Roll back to a historical revision
[root@k8s-master01 ~]# kubectl rollout undo deployment deploy01 --to-revision 2
deployment.apps/deploy01 rolled back

DaemonSet

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemontset01
spec:
  selector:
    matchLabels:
      app: nginx
      server: mynginx
  template:
    metadata:
      labels:
        app: nginx
        server: mynginx
    spec:
      containers:
      - name: nginx
        image: ikubernetes/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80

[root@k8s-master01 ~]# kubectl apply -f demonset.yml
daemonset.apps/daemontset01 created

[root@k8s-master01 ~]# kubectl get pod -o wide

NAME                       READY   STATUS    RESTARTS      AGE     IP            NODE           NOMINATED NODE   READINESS GATES
daemontset01-2hqld         1/1     Running   0             12s     10.244.1.18   k8s-worker01   <none>           <none>
daemontset01-7xvhf         1/1     Running   0             12s     10.244.2.27   k8s-worker02   <none>           <none>
deploy01-fd8f57949-6h9zz   1/1     Running   0             17m     10.244.2.25   k8s-worker02   <none>           <none>
deploy01-fd8f57949-mk9lz   1/1     Running   0             17m     10.244.1.17   k8s-worker01   <none>           <none>
deploy01-fd8f57949-tmqvn   1/1     Running   0             17m     10.244.2.26   k8s-worker02   <none>           <none>
mynginx-5b7f5798b4-6xtmh   1/1     Running   0             9h      10.244.1.5    k8s-worker01   <none>           <none>
mynginx-5b7f5798b4-f7v8z   1/1     Running   0             9h      10.244.2.7    k8s-worker02   <none>           <none>
pod-deme                   2/2     Running   6 (48m ago)   5h48m   10.244.1.7    k8s-worker01   <none>           <none>

By default a DaemonSet does not run on the master node; to schedule there, tolerations must be set.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemontset01
spec:
  selector:
    matchLabels:
      app: nginx
      server: mynginx
  template:
    metadata:
      labels:
        app: nginx
        server: mynginx
    spec:
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
      containers:
      - name: nginx
        image: ikubernetes/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                       READY   STATUS              RESTARTS      AGE     IP            NODE           NOMINATED NODE   READINESS GATES
daemontset01-dl9bb         1/1     Running             0             3m9s    10.244.1.21   k8s-worker01   <none>           <none>
daemontset01-ps8zv         0/1     ContainerCreating   0             2s      <none>        k8s-master01   <none>           <none>
daemontset01-x9f7t         1/1     Running             0             3m9s    10.244.2.30   k8s-worker02   <none>           <none>
The master node now runs one as well. The master carries a taint, so pods without the matching toleration can never run on it.
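Toleration matching can be sketched roughly as: a pod is schedulable only if every NoSchedule taint on the node is matched by one of its tolerations (this sketch ignores operators, values, and other effects for brevity):

```python
def tolerates(node_taints, pod_tolerations):
    """Pod is schedulable only if every NoSchedule taint on the node is tolerated."""
    def tolerated(taint):
        return any(
            t.get("key") == taint["key"] and t.get("effect") == taint["effect"]
            for t in pod_tolerations
        )
    return all(tolerated(t) for t in node_taints if t["effect"] == "NoSchedule")

master_taints = [{"key": "node-role.kubernetes.io/control-plane", "effect": "NoSchedule"}]
print(tolerates(master_taints, []))  # False: plain pods never land on the master
print(tolerates(master_taints, [{"key": "node-role.kubernetes.io/control-plane",
                                 "effect": "NoSchedule"}]))  # True
```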

Service

Start from the existing Service configuration:
kubectl get svc mynginx-svc -o yaml
Create a new config file from it: change the port, add the last-applied-configuration annotation (needed when modifying an existing Service, not when creating a new one, otherwise a warning is raised), and drop the unnecessary fields.
By default a nodePort must fall within 30000-32767.

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: "mylast modify"
  labels:
    app: mynginx
  name: mynginx-svc
  namespace: default
spec:
  clusterIP: 10.107.229.246
  clusterIPs:
  - 10.107.229.246
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 30008
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: mynginx
  sessionAffinity: None
  type: NodePort

kubectl apply -f svc.yml

[root@k8s-master01 ~]# kubectl get svc mynginx-svc
NAME          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
mynginx-svc   NodePort   10.107.229.246   <none>        80:30008/TCP   20h

A more convenient way is to edit the live configuration; saving takes effect immediately:
kubectl edit service mynginx-svc

Changing the port-range limit
[root@k8s-master01 k8s]# vim /etc/kubernetes/manifests/kube-apiserver.yaml

 --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    # add the following line
    - --service-node-port-range=1-65535
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
Make the new port range take effect
[root@k8s-master01 k8s]# kubectl apply -f  /etc/kubernetes/manifests/kube-apiserver.yaml

Or edit it:
kubectl -n kube-system  edit pod kube-apiserver-k8s-master01

kubectl edit service mynginx-svc
Change nodePort: 80

[root@k8s-master01 ~]# kubectl get svc mynginx-svc

NAME          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
mynginx-svc   NodePort   10.107.229.246   <none>        80:80/TCP   20h


Namespace

[root@k8s-master01 ~]# kubectl create namespace testnce
[root@k8s-master01 ~]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   23h
kube-flannel      Active   22h
kube-node-lease   Active   23h
kube-public       Active   23h
kube-system       Active   23h
testnce           Active   70s
Clean up with three deletes
[root@k8s-master01 k8s]# kubectl delete daemonsets.apps --all
[root@k8s-master01 k8s]# kubectl delete deployments.apps --all
[root@k8s-master01 k8s]# kubectl delete pod --all

[root@k8s-master01 k8s]# cat namespace.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: testnc-deploy01
  namespace: testnce
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      server: nginx
  template:
    metadata:
      labels:
        app: myapp
        server: nginx
    spec:
      containers:
      - name: nginx
        image: ikubernetes/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80

[root@k8s-master01 k8s]# kubectl apply -f namespace.yml
deployment.apps/testnc-deploy01 created


[root@k8s-master01 k8s]# kubectl get pod
No resources found in default namespace.

[root@k8s-master01 k8s]# kubectl get deployments.apps
No resources found in default namespace.

[root@k8s-master01 k8s]# kubectl get deployments.apps -n testnce

NAME              READY   UP-TO-DATE   AVAILABLE   AGE
testnc-deploy01   3/3     3            3           25s
[root@k8s-master01 k8s]# kubectl get pod -n testnce
NAME                               READY   STATUS    RESTARTS   AGE
testnc-deploy01-5c544f4466-hr6rk   1/1     Running   0          32s
testnc-deploy01-5c544f4466-n764p   1/1     Running   0          32s
testnc-deploy01-5c544f4466-stxxd   1/1     Running   0          32s

volume

Pod-scoped volumes (emptyDir)

These volumes live only inside the pod and are destroyed with it; the containers in the pod share the empty directory.
Mount the directory in busybox and append the current time to a file.
Mount the same directory in nginx and serve the content on nginx's port 80.
pod_volume.yml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mynginx
    test: test1
  name: pod-deme
spec:
  containers:
  - image: busybox:latest
    imagePullPolicy: IfNotPresent
    name: busybox
    volumeMounts:
      - name: html
        mountPath: /data/
    command:
     - "/bin/sh"
     - "-c"
     - "while true;do echo $(date) >> /data/index.html;sleep 1s;done;"
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    volumeMounts:
      - name: html
        mountPath: /usr/share/nginx/html/
  volumes:
    - name: html
      emptyDir: {}

[root@k8s-master01 k8s]# kubectl apply -f pod_volume.yml
pod/pod-deme created


[root@k8s-master01 k8s]# kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
pod-deme   2/2     Running   0          22s

[root@k8s-master01 k8s]# kubectl get pod -o wide

NAME       READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
pod-deme   2/2     Running   0          29s   10.244.2.35   k8s-worker02   <none>           <none>


[root@k8s-master01 k8s]# curl 10.244.2.35

Tue May 14 02:21:45 UTC 2024
Tue May 14 02:21:46 UTC 2024
Tue May 14 02:21:47 UTC 2024

Node-level volumes (hostPath)

Store the data on the node's disk.
hostPath:
type:
DirectoryOrCreate # use the directory if it exists, otherwise create it first
Directory         # the directory must exist
FileOrCreate      # use the file if it exists, otherwise create it first
File              # the file must exist
Socket            # the unix socket must exist
CharDevice        # the character device must exist
BlockDevice       # the block device must exist
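The type checks above boil down to "must exist" versus "create if missing"; a simplified sketch of the directory/file cases (real kubelet validation covers more cases):

```python
import os

def hostpath_check(path, hp_type):
    """Simplified hostPath type check: returns (ok, needs_create)."""
    if hp_type == "DirectoryOrCreate":
        return True, not os.path.isdir(path)
    if hp_type == "Directory":
        return os.path.isdir(path), False
    if hp_type == "FileOrCreate":
        return True, not os.path.isfile(path)
    if hp_type == "File":
        return os.path.isfile(path), False
    raise ValueError(hp_type)

print(hostpath_check(".", "Directory"))               # (True, False)
print(hostpath_check("./no-such-dir-xyz", "Directory"))  # (False, False)
```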

[root@k8s-master01 k8s]# cat volume_hostpath.yml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mynginx
    test: test1
  name: pod-deme
spec:
  containers:
  - image: busybox:latest
    imagePullPolicy: IfNotPresent
    name: busybox
    volumeMounts:
      - name: html
        mountPath: /data/
    command:
     - "/bin/sh"
     - "-c"
     - "while true;do echo $(date) >> /data/index.html;sleep 1s;done;"
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    volumeMounts:
      - name: html
        mountPath: /usr/share/nginx/html/
  volumes:
    - name: html
      hostPath:
        type: DirectoryOrCreate
        path: /tmp/ks8volme

[root@k8s-master01 k8s]# kubectl apply -f volume_hostpath.yml

[root@k8s-worker02 ~]# cat /tmp/ks8volme/index.html

Tue May 14 03:07:04 UTC 2024
Tue May 14 03:07:05 UTC 2024
Tue May 14 03:07:06 UTC 2024
Tue May 14 03:07:07 UTC 2024

[root@k8s-worker02 ~]# > /tmp/ks8volme/index.html
[root@k8s-worker02 ~]# echo "worker02" > /tmp/ks8volme/index.html

Pin the pod to k8s-worker01 by adding nodeName to the spec, then re-create it:

spec:
  nodeName: k8s-worker01
  containers:

[root@k8s-master01 k8s]# kubectl apply -f volume_hostpath.yml

hostPath keeps the data only on the node where it was first created.

[root@k8s-worker01 ~]# cat /tmp/ks8volme/index.html
Tue May 14 03:12:19 UTC 2024
Tue May 14 03:12:20 UTC 2024
Tue May 14 03:12:21 UTC 2024
Tue May 14 03:12:22 UTC 2024

The advantage is that the files remain on the node's disk after the pod dies.

nfs volume

Configure an NFS shared directory

Install NFS on all nodes

yum -y install  nfs-utils rpcbind
systemctl start  rpcbind
systemctl start  nfs
[root@k8s-worker02 ~]# cat /etc/exports
/data/k8svolumes *(rw,no_root_squash,no_all_squash,sync)
or, restricted to a network: /data/k8svolumes 192.168.126.0/24(rw,no_root_squash,no_all_squash,sync)

Refresh the configuration and check the result
[root@k8s-worker02 ~]# showmount -e
Export list for k8s-worker02:
/data/k8svolumes *

chown -R nfsnobody:nfsnobody /data/k8svolumes/

Test from a client
[root@k8s-worker01 ~]# showmount -e k8s-worker02
Export list for k8s-worker02:
/data/k8svolumes *

[root@k8s-worker01 ~]# mount -t nfs k8s-worker02:/data/k8svolumes /data/k8svolumes

[root@k8s-worker01 ~]# touch /data/k8svolumes/vtest
[root@k8s-worker02 ~]# ls /data/k8svolumes/
vtest

Note: if the exported directory does not exist, mounting gives no hint; a mistyped directory here cost a long debugging session, so double-check the path.

k8s volume

[root@k8s-master01 k8s]# cat volume_nfs.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mynginx
    test: test1
  name: pod-deme
spec:
  nodeName: k8s-worker01
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    volumeMounts:
      - name: html
        mountPath: /usr/share/nginx/html/
  volumes:
    - name: html
      nfs:
        server: 192.168.126.23
        path: /data/k8svolumes
[root@k8s-master01 k8s]# kubectl apply -f volume_nfs.yml
pod/pod-deme created
[root@k8s-master01 k8s]# kubectl get pod -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
pod-deme   1/1     Running   0          3s    10.244.1.36   k8s-worker01   <none>           <none>

[root@k8s-worker02 ~]# echo "work02 nfs" > /data/k8svolumes/index.html

[root@k8s-master01 k8s]# curl 10.244.1.36
work02 nfs

pv and pvc

A PV provides the storage; a PVC claims and binds to a matching PV.
A PV can list several access modes; the PVC then requests the specific mode it needs:
– ReadWriteOnce (RWO): the PV can be mounted read-write by only a single claimant; attempts to bind it to more fail. Block storage usually supports only RWO.
– ReadWriteMany (RWX): the PV can be bound read-write to multiple PVCs. Usually only file or object storage such as NFS supports this.
– ReadOnlyMany (ROX): the PV can be bound read-only to multiple PVCs.

PV reclaim policies
Two settings control whether the storage persists:
– Delete: the default when a StorageClass dynamically provisions the PV. It deletes the PV object and the associated resource in the external storage system, which can lose data, so use it with care.
– Retain: keeps the PV object and the external storage resource, but other PVCs can no longer use that PV. To reuse a retained PV, three steps are needed:
1. Manually delete the PV.
2. Clean up (reformat) the corresponding resource in the external storage system.
3. Recreate the PV.

Note that the reclaim policy is a property of the PV itself (pv.spec.persistentVolumeReclaimPolicy); choose Retain for data you cannot afford to lose, otherwise deleting the claim can destroy the data or leave leaked storage resources behind.

storageClassName
When existing PVs run out, an administrator would otherwise have to provision new ones by hand. A StorageClass reduces that work by creating PVs dynamically: when a PVC finds no suitable PV, one is provisioned on demand through the storageClassName the PVC specifies.
This requires a StorageClass object to be configured.


[root@k8s-master01 pvc]# cat pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
  labels:
    name: pv01
spec:
  nfs:
    server: 192.168.126.23
    path: /data/k8svolumes/v1
  accessModes: ["ReadWriteOnce","ReadOnlyMany"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
  labels:
    name: pv02
spec:
  nfs:
    server: 192.168.126.23
    path: /data/k8svolumes/v2
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03
  labels:
    name: pv03
spec:
  nfs:
    server: 192.168.126.23
    path: /data/k8svolumes/v3
  accessModes: ["ReadWriteOnce","ReadOnlyMany"]
  capacity:
    storage: 3Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv04
  labels:
    name: pv04
spec:
  nfs:
    server: 192.168.126.23
    path: /data/k8svolumes/v4
  accessModes: ["ReadWriteOnce","ReadOnlyMany"]
  capacity:
    storage: 4Gi

[root@k8s-master01 pvc]# kubectl apply -f pv.yml

persistentvolume/pv01 created
persistentvolume/pv02 created
persistentvolume/pv03 created
persistentvolume/pv04 created

[root@k8s-master01 pvc]# kubectl get pv

NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01   1Gi        RWO,ROX        Retain           Available                                   8s
pv02   2Gi        RWO            Retain           Available                                   8s
pv03   3Gi        RWO,ROX        Retain           Available                                   8s
pv04   4Gi        RWO,ROX        Retain           Available                                   7s
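Which of these PVs the claim below binds to follows simple matching: the PV must offer every requested access mode and at least the requested capacity, and among the candidates the smallest wins. A simplified sketch of that binder logic:

```python
def bind_pvc(pvs, modes, request_gi):
    """Pick the smallest Available PV that satisfies all requested access
    modes and the capacity request (simplified binder logic)."""
    candidates = [
        (cap, name)
        for name, (cap, pv_modes) in pvs.items()
        if cap >= request_gi and set(modes) <= set(pv_modes)
    ]
    return min(candidates)[1] if candidates else None

pvs = {  # the four PVs above: name -> (capacity in Gi, access modes)
    "pv01": (1, ["RWO", "ROX"]),
    "pv02": (2, ["RWO"]),
    "pv03": (3, ["RWO", "ROX"]),
    "pv04": (4, ["RWO", "ROX"]),
}
print(bind_pvc(pvs, ["RWO"], 2))  # 'pv02'
```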
[root@k8s-master01 pvc]# cat pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: testpvc
  name: pvc-test
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pvctest
    volumeMounts:
      - name: html
        mountPath: /usr/share/nginx/html/
  volumes:
    - name: html
      persistentVolumeClaim:
        claimName: mypvc

[root@k8s-master01 pvc]# kubectl apply -f pvc.yml
[root@k8s-master01 pvc]# kubectl get  pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv01   1Gi        RWO,ROX        Retain           Available                                           53m
pv02   2Gi        RWO            Retain           Bound       default/mypvc                           53m
pv03   3Gi        RWO,ROX        Retain           Available                                           53m
pv04   4Gi        RWO,ROX        Retain           Available                                           53m
[root@k8s-master01 pvc]# kubectl get pod pvc-test  -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
pvc-test   1/1     Running   0          34s   10.244.2.39   k8s-worker02   <none>           <none>

[root@k8s-worker02 k8svolumes]# echo "pv2" > /data/k8svolumes/v2/index.html


[root@k8s-master01 pvc]# curl 10.244.2.39
pv2

Reclaim policy field: pv.spec.persistentVolumeReclaimPolicy
With the value Delete, the PV turns Failed once the PVC that used it is deleted.
This is because the plugin does not support deleting the volume; whether Delete actually works depends on the storage type (cloud storage typically supports it).

 message: 'error getting deleter volume plugin for volume "pv02": no deletable volume
    plugin matched'
  phase: Failed

You can back up the PV's data on the NFS side and then delete and recreate the PV; or, if the data is not needed, simply recreate the PV directly.
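The field itself is set on the PV spec. A minimal sketch (the PV name and export path below are illustrative, not from the cluster above):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-delete-demo                    # illustrative name
spec:
  persistentVolumeReclaimPolicy: Delete   # Retain is the default for manually created PVs
  nfs:
    server: 192.168.126.23
    path: /data/k8svolumes/v5             # illustrative export path
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 1Gi
```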

configMap

ConfigMap is a Kubernetes resource type used to store configuration data such as key-value pairs, individual files, or whole configuration files. A Secret is the counterpart of a ConfigMap for sensitive data (stored base64-encoded rather than truly encrypted) and is used in much the same way.

Its main uses include:

  • Centralised configuration management: a ConfigMap can hold the configuration an application needs, such as database connection strings, environment variables, or log levels, so configuration data is managed in one place.

  • Environment variable injection: ConfigMap entries can be injected into containers as environment variables for the application to use at runtime.

  • Volume mounts: a ConfigMap can be mounted into a pod as a volume, so containers read its entries directly as configuration files.

Create a key-value ConfigMap from the command line

kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2

From a file: the file name becomes the key, the file content the value
kubectl create configmap my-config-file --from-file=config.txt

From a directory: each file name under the directory becomes a key, its content the value
kubectl create configmap my-config-dir --from-file=config/

Once created, a ConfigMap cannot be updated by modifying the original source files; use kubectl edit cm <configmap-name> to change its content.

Creating a ConfigMap from YAML

apiVersion: v1
data:
  key1: value1
  key2: value2
kind: ConfigMap
metadata:
  name: my-config
---
apiVersion: v1
data:
  configdir1.txt: |
    configdir1
    1111

  configdir2.txt: |
    configdir2
    222
kind: ConfigMap
metadata:
  name: my-config-dir

[root@k8s-master01 k8s]# kubectl apply -f configMap.yml

[root@k8s-master01 k8s]# kubectl get cm

NAME               DATA   AGE
kube-root-ca.crt   1      47h
my-config          2      21m
my-config-dir      2      10m

[root@k8s-master01 k8s]# kubectl get cm my-config-dir -o yaml

apiVersion: v1
data:
  configdir1.txt: |
    configdir1
    1111
  configdir2.txt: |
    configdir2
    222
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"configdir1.txt":"configdir1\n1111\n","configdir2.txt":"configdir2\n222\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"my-config-dir","namespace":"default"}}
  creationTimestamp: "2024-05-15T01:33:53Z"
  name: my-config-dir
  namespace: default
  resourceVersion: "151490"
  uid: 70f31bbf-af9c-4cdf-a92e-e0d64fead270

[root@k8s-master01 ~]# kubectl create configmap my-config-env --from-literal=envfrom1=envfrom1 --from-literal=envfrom2=envfrom2

[root@k8s-master01 k8s]# cat configMapVolume.yml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: mynginx
    test: test1
  name: pod-deme
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: configvolume
    volumeMounts:
     - name: nginxconf
       mountPath: /etc/nginx/conf.d
       readOnly: true
    env:
       - name: env-key1
         valueFrom:
            configMapKeyRef:
              name: my-config
              key: key1
    envFrom:
       - configMapRef:
           name: my-config-env
  volumes:
  - name: nginxconf
    configMap:
      name: my-config-dir

[root@k8s-master01 k8s]# kubectl apply -f configMapVolume.yml
[root@k8s-master01 ~]# kubectl exec -it pod-deme -- /bin/bash
root@pod-deme:/# env
env-key1=value1
envfrom2=envfrom2
envfrom1=envfrom1

root@pod-deme:/# ls /etc/nginx/conf.d/
configdir1.txt  configdir2.txt
root@pod-deme:/# cat /etc/nginx/conf.d/configdir1.txt
configdir1
1111
[root@k8s-master01 k8s]# kubectl edit cm my-config-dir
configmap/my-config-dir edited

root@pod-deme:/# cat /etc/nginx/conf.d/configdir2.txt
configdir2
222
root@pod-deme:/# cat /etc/nginx/conf.d/configdir2.txt
configdir2
222add

After an edit, propagation to the mounted files takes roughly 20 seconds; environment variables are not updated, because they are loaded only when the pod initialises.

StatefulSet controller

Be sure to add imagePullPolicy: IfNotPresent after the image; otherwise container creation can easily fail, and you may spend a long time assuming the configuration is wrong when the real cause is simply that the image cannot be pulled.

[root@k8s-master01 k8s]# cat pvc/pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
  labels:
    name: pv01
spec:
  nfs:
    server: 192.168.126.23
    path: /data/k8svolumes/v1
  accessModes: ["ReadWriteOnce"]
  capacity:
   storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
  labels:
    name: pv02
spec:
  nfs:
    server: 192.168.126.23
    path: /data/k8svolumes/v2
  accessModes: ["ReadWriteOnce"]
  capacity:
   storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03
  labels:
    name: pv03
spec:
  nfs:
    server: 192.168.126.23
    path: /data/k8svolumes/v3
  accessModes: ["ReadWriteOnce"]
  capacity:
   storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv04
  labels:
    name: pv04
spec:
  nfs:
    server: 192.168.126.23
    path: /data/k8svolumes/v4
  accessModes: ["ReadWriteOnce"]
  capacity:
   storage: 1Gi
---
[root@k8s-master01 k8s]# kubectl apply -f pvc/pv.yml

[root@k8s-master01 k8s]# cat sts.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
     app: nginx-svc
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-sts
spec:
  selector:
    matchLabels:
      app: nginx-pod
  serviceName: nginx-svc
  replicas: 3
  template:
    metadata:
     labels:
       app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: ikubernetes/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

[root@k8s-master01 k8s]# kubectl apply -f sts.yml
[root@k8s-master01 k8s]# kubectl get pod -w

NAME          READY   STATUS    RESTARTS   AGE
nginx-sts-0   0/1     Pending   0          0s
nginx-sts-0   0/1     Pending   0          0s
nginx-sts-0   0/1     ContainerCreating   0          0s
nginx-sts-0   1/1     Running             0          2s
nginx-sts-1   0/1     Pending             0          0s
nginx-sts-1   0/1     Pending             0          0s
nginx-sts-1   0/1     ContainerCreating   0          0s
nginx-sts-1   1/1     Running             0          1s
nginx-sts-2   0/1     Pending             0          0s
nginx-sts-2   0/1     Pending             0          0s
nginx-sts-2   0/1     ContainerCreating   0          0s
nginx-sts-2   1/1     Running             0          2s

The PVCs remain after deletion

[root@k8s-master01 k8s]# kubectl delete -f sts.yml
[root@k8s-master01 k8s]# kubectl get pvc
NAME              STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-nginx-sts-0   Bound    pv01     1Gi        RWO                           20m
www-nginx-sts-1   Bound    pv02     1Gi        RWO                           20m
www-nginx-sts-2   Bound    pv03     1Gi        RWO                           20m

Scale out
[root@k8s-master01 k8s]# kubectl scale statefulset nginx-sts --replicas=5
statefulset.apps/nginx-sts scaled
[root@k8s-master01 k8s]# kubectl get pod

NAME          READY   STATUS    RESTARTS   AGE
nginx-sts-0   1/1     Running   0          28s
nginx-sts-1   1/1     Running   0          26s
nginx-sts-2   1/1     Running   0          25s
nginx-sts-3   1/1     Running   0          8s
nginx-sts-4   0/1     Pending   0          5s
Set the rolling-update partition: only pods whose ordinal is greater than or equal to partition will be updated
[root@k8s-master01 k8s]#  kubectl patch sts nginx-sts -p'{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2 }}}}'

Update the image version
[root@k8s-master01 k8s]# kubectl set image sts nginx-sts nginx=ikubernetes/myapp:v2
statefulset.apps/nginx-sts image updated

[root@k8s-master01 k8s] kubectl describe pod nginx-sts-2
Name:             nginx-sts-2
Containers:
  nginx:
    Container ID:   containerd://618c2f3238790b27d34d0e9f9bed3470f73f1c024c45f45395f14b365dacee3d
    Image:          ikubernetes/myapp:v2
[root@k8s-master01 k8s]# kubectl describe pod nginx-sts-1
Name:             nginx-sts-1
Containers:
  nginx:
    Container ID:   containerd://c4f22dcec6dcc413836a2bed032974b4540cb5bc7c644b6078f102d08e802cb3
    Image:          ikubernetes/myapp:v1

[root@k8s-master01 k8s]#  kubectl patch sts nginx-sts -p'{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0 }}}}'


[root@k8s-master01 k8s]# kubectl describe pod nginx-sts-0
Name:             nginx-sts-0
Controlled By:  StatefulSet/nginx-sts
Containers:
  nginx:
    Container ID:   containerd://14caf9b82996b9e16317491ebae90c89658370f700722847e7ee439928e1ce09
    Image:          ikubernetes/myapp:v2

serviceaccount

A User Account is used by people to access the cluster and is cluster-wide, not tied to any namespace.
Every namespace automatically gets a default service account.
Service accounts are namespace-scoped.

Processes inside pod containers can also talk to the apiserver; when they connect, they are authenticated as a particular service account (for example, default).

By default, every container mounts that service account's token and ca.crt under /var/run/secrets/kubernetes.io/serviceaccount/.

[root@k8s-master01 k8s]# kubectl describe pod nginx-sts-0

Name:             nginx-sts-0
Namespace:        default
Priority:         0
Service Account:  default

[root@k8s-master01 k8s]# kubectl exec -it nginx-sts-0 -- /bin/sh
/ # ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt namespace token
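The mounted token is a JWT, so its claims can be inspected by base64-decoding the payload segment. A minimal sketch (the token string below is a shortened, made-up sample carrying only an `iss` claim; on a real pod you would read /var/run/secrets/kubernetes.io/serviceaccount/token instead):

```shell
#!/bin/sh
# Made-up sample token: header.payload.signature. Real tokens carry more claims.
token='aGVhZGVy.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50In0.c2ln'

# Extract the payload (the second dot-separated segment).
payload=$(printf '%s' "$token" | cut -d. -f2)

# Restore the base64 padding that JWT encoding strips.
case $(( ${#payload} % 4 )) in
  2) payload="$payload==" ;;
  3) payload="$payload=" ;;
esac

# Decode the claims as JSON.
printf '%s' "$payload" | base64 -d; echo
```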

If a pod needs to manage other pods or other resource objects (a dashboard, for example), the default service account of its own namespace cannot retrieve those objects' attributes. In that case you create a serviceaccount manually and reference it when defining the pod.
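A hypothetical pod spec referencing such a custom service account could look like this (the pod and container names are illustrative; "admin" matches the serviceaccount created below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo               # illustrative name
spec:
  serviceAccountName: admin   # mount the "admin" SA's token instead of "default"
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: sa-demo
```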

[root@k8s-master01 k8s]# cat serviceaccount.yml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: default

[root@k8s-master01 k8s]# kubectl apply -f serviceaccount.yml
serviceaccount/admin created
[root@k8s-master01 k8s]# kubectl get sa

NAME      SECRETS   AGE
admin     0         4s
default   0         2d5h
Service account behaviour differs across Kubernetes versions:
<=1.20: creating a serviceaccount automatically creates a secret and the corresponding token.
1.21 to 1.23: same as above, but the token inside the container differs from the one in the secret. The in-pod token expires and is refreshed after about an hour; if the pod is deleted and recreated, a new token is issued and the deleted pod's token expires immediately.
>=1.24: no secret object is created; a token is projected into the pod automatically and likewise expires and is refreshed after about an hour.

Generate a token manually
[root@k8s-master01 k8s]# kubectl create token admin

Create a secret manually
[root@k8s-master01 k8s]# cat secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: admin-secret
  annotations:
    kubernetes.io/service-account.name: admin # the serviceaccount to bind
type: kubernetes.io/service-account-token # secret type

[root@k8s-master01 k8s]# kubectl apply -f secret.yml
secret/admin-secret created
[root@k8s-master01 k8s]# kubectl describe secrets admin-secret

Name:         admin-secret
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: d97ab1a8-ee01-4117-8d49-d069add08b3c

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1107 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ikp5QTc4X1Y3bk1QZEJ0aXF1OXZQZERyaThxVVlJenZlS3hYcXJxWHlHa00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImFkbWluLXNlY3JldCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImQ5N2FiMWE4LWVlMDEtNDExNy04ZDQ5LWQwNjlhZGQwOGIzYyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmFkbWluIn0.gECelLNObrh-1jAekFBw6LzHPwD8JeRz4cbqI_znPqCcgYdipIif26jEwP7lZPHfRo10KiIfQvJ33ys2ZV7_WCIW0S7rrPyZaJZlGF4nrB_gHB3SPINptgwAjhcXcLmEHvYCjQ_8ZCr4-DB1SIFDtnfluXUa8BGrc3w9WcJGhx60eAcqoND35nVQYXVIZ_p8_BWTPHg9wBAwo6JJVaByKSIPqLt5YfhxgsHKxt4utciOgI8kchlBbXpp6g4aWM3UPKDrN5v5kQHxvOsb_33z75RjX7Qjm3fZn9xjnHay6fg0Th9_PK_Mvj_RcHBOlYykevEXzRRh5h-uHv6w5gpn2Q

[root@k8s-master01 k8s]# kubectl describe serviceaccounts admin

Name:                admin
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              admin-secret
Events:              <none>

The kubeconfig file lives at $HOME/.kube/config; view it with:
[root@k8s-master01 k8s]# kubectl config view

apiVersion: v1
clusters: # cluster entries
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.126.21:6443
  name: kubernetes
contexts: # contexts, each associating a user with a cluster
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes # the active context
kind: Config
preferences: {}
users: # user credentials
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

1. Create the cluster entry
2. Create the user that logs in to the cluster
3. Map the user to the cluster (a context)
4. Switch to that context

For the user to actually operate the cluster, RBAC must grant it permissions.

Generating a user certificate

First, generate the certificate.
Run this step under /etc/kubernetes/pki, because the Kubernetes CA certificate is needed.

[root@k8s-master01 pki]# (umask 077;openssl genrsa -out shadowwu.key 2048)
Generating RSA private key, 2048 bit long modulus
...........+++
........+++
e is 65537 (0x10001)
Sign the certificate (CN is the user name, O the group)

[root@k8s-master01 pki]# openssl req -new -key shadowwu.key -out shadowwu.csr -subj "/CN=shadowwu"

[root@k8s-master01 pki]# openssl x509 -req -in shadowwu.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out shadowwu.crt -days 365

Signature ok
subject=/CN=shadowwu
Getting CA Private Key


[root@k8s-master01 pki]# openssl x509 -in shadowwu.crt -text -noout

Add the user's credentials to kubectl

[root@k8s-master01 pki]# kubectl config set-credentials shadowwu --client-certificate=./shadowwu.crt --client-key=./shadowwu.key --embed-certs=true
User "shadowwu" set.

Create the context
[root@k8s-master01 pki]# kubectl config set-context shadowwu@kubernetes --cluster=kubernetes --user=shadowwu
Context "shadowwu@kubernetes" created.

Switch context
[root@k8s-master01 pki]# kubectl config use-context shadowwu@kubernetes
Switched to context "shadowwu@kubernetes".

No permissions yet
[root@k8s-master01 pki]# kubectl get pod
Error from server (Forbidden): pods is forbidden: User "shadowwu" cannot list resource "pods" in API group "" in the namespace "default"

Switch back, otherwise nothing can be administered
[root@k8s-master01 pki]# kubectl config use-context kubernetes-admin@kubernetes

Role and RoleBinding

Roles
Role: grants access within a specific namespace
ClusterRole: grants access across all namespaces

Role bindings
RoleBinding: binds a role to a subject
ClusterRoleBinding: binds a cluster role to a subject

Subjects
User: a user
Group: a group of users
ServiceAccount: a service account

A Role defines permissions and a RoleBinding attaches those permissions to a subject. The Cluster* variants are cluster-scoped, reaching beyond any single namespace, while Role and RoleBinding apply within one namespace only.

Relevant parameters
1. Role / ClusterRole configurable verbs:

"get", "list", "watch", "create", "update", "patch", "delete", "exec"

2. Role / ClusterRole configurable resources:

"services", "endpoints", "pods", "secrets", "configmaps", "crontabs", "deployments", "jobs", "nodes", "rolebindings", "clusterroles", "daemonsets", "replicasets", "statefulsets", "horizontalpodautoscalers", "replicationcontrollers", "cronjobs"

3. Role / ClusterRole configurable apiGroups:

"", "apps", "autoscaling", "batch" (the empty string denotes the core API group)

Create a role and grant it to user shadowwu

cat > shadowrole.yml <<EOF
kind: Role  # the role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: shadow-role
rules:
- apiGroups: [""] # "" denotes the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: RoleBinding # the role binding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: default-rolebinding
  namespace: default
subjects:
- kind: User
  name: shadowwu   # target user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: shadow-role  # the role to grant
  apiGroup: rbac.authorization.k8s.io
EOF

Verifying the user's permissions

[root@k8s-master01 role]# kubectl apply -f shadowrole.yml

role.rbac.authorization.k8s.io/shadow-role created
rolebinding.rbac.authorization.k8s.io/default-rolebinding created
[root@k8s-master01 role]#  kubectl config use-context shadowwu@kubernetes
Switched to context "shadowwu@kubernetes".


[root@k8s-master01 role]# kubectl get pod

NAME          READY   STATUS    RESTARTS   AGE
nginx-sts-0   1/1     Running   0          3h41m
nginx-sts-1   1/1     Running   0          3h41m
nginx-sts-2   1/1     Running   0          3h46m


[root@k8s-master01 role]# kubectl get sts

Error from server (Forbidden): statefulsets.apps is forbidden: User "shadowwu" cannot list resource "statefulsets" in API group "apps" in the namespace "default"

[root@k8s-master01 role]# kubectl get deployments.apps
Error from server (Forbidden): deployments.apps is forbidden: User "shadowwu" cannot list resource "deployments" in API group "apps" in the namespace "default"
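To grant shadowwu read access to these resources across all namespaces, a ClusterRole plus ClusterRoleBinding would be needed instead. A hedged sketch (the object names are illustrative, and ClusterRoles are not namespaced):

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: shadow-clusterrole        # illustrative name
rules:
- apiGroups: ["apps"]             # statefulsets/deployments live in the apps group
  resources: ["statefulsets", "deployments"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: shadow-clusterrolebinding # illustrative name
subjects:
- kind: User
  name: shadowwu
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: shadow-clusterrole
  apiGroup: rbac.authorization.k8s.io
```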

Batch-deleting pods stuck in Terminating

This version only deletes pods in the default namespace

#!/bin/bash
#kubectl get pods --all-namespaces | grep Terminating > terminating_pods.txt
kubectl get pods -n default | grep Terminating > terminating_pods.txt
while read line; do
  #namespace=$(echo $line | awk '{print $1}')
  namespace=default
  pod=$(echo $line | awk '{print $1}')
  kubectl delete pod $pod -n $namespace --force --grace-period=0
done < terminating_pods.txt

Deleting across all namespaces

#!/bin/bash
kubectl get pods --all-namespaces | grep Terminating > terminating_pods.txt
while read line; do
  namespace=$(echo $line | awk '{print $1}')
  pod=$(echo $line | awk '{print $2}')
  kubectl delete pod $pod -n $namespace --force --grace-period=0
done < terminating_pods.txt
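The loop above can be collapsed into a single awk pass. A sketch, shown here against a made-up sample of `kubectl get pods --all-namespaces --no-headers` output so the extraction logic itself is visible (in practice you would pipe the real kubectl output in):

```shell
#!/bin/sh
# Made-up sample of `kubectl get pods --all-namespaces --no-headers` output.
sample='default      web-0       0/1   Terminating   0   5m
kube-system  helper-1    1/1   Running       0   9m
dev          worker-abc  0/1   Terminating   0   2m'

# Column 1 is the namespace, column 2 the pod name, column 4 the status.
# Emit one force-delete command per Terminating pod.
cmds=$(printf '%s\n' "$sample" | awk '$4 == "Terminating" {
  printf "kubectl delete pod %s -n %s --force --grace-period=0\n", $2, $1
}')
printf '%s\n' "$cmds"
```

Printing the generated commands first keeps the action reviewable; piping them to `sh` would execute them.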