Eli's Blog

1. ConfigMap

A ConfigMap provides a mechanism for injecting configuration into containers. It can hold individual properties, whole configuration files, or JSON blobs.

1.1 Creating a ConfigMap

1.1.1 From a file

--from-file: specify a file or directory

$ cat > ./ui.properties <<EOF
color=red
background=cyan
EOF

$ kubectl create configmap ui-config --from-file=./ui.properties

$ kubectl get cm ui-config -o yaml
apiVersion: v1
data:
  ui.properties: |      # the key is the file name
    color=red
    background=cyan
kind: ConfigMap
metadata:
...
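As noted in the output above, the key defaults to the file name. A quick local sketch of that naming rule (`basename` mirrors what `kubectl create configmap --from-file` does when no explicit key is given; you can override it with `--from-file=<key>=<path>`):

```shell
# The ConfigMap key defaults to the basename of the file passed to --from-file
file=./ui.properties
echo "key: $(basename "$file")"   # -> key: ui.properties
```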

1.1.2 From literal values

--from-literal

$ kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm

$ kubectl describe cm special-config

$ kubectl get cm special-config -o yaml
apiVersion: v1
data:
  special.how: very
  special.type: charm
kind: ConfigMap
metadata:
...

1.2 Using a ConfigMap

1.2.1 ConfigMap as environment variables

spec.containers[].env[]

spec.containers[].envFrom[]

# configmap-injection.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: default
data:
  log_level: INFO

---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
  - name: cm-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "env"]
    env:              # import selected keys
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.how
    - name: SPECIAL_TYPE_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.type
    envFrom:          # import all keys
    - configMapRef:
        name: env-config
  restartPolicy: Never
$ kubectl create -f configmap-injection.yaml 

$ kubectl get pod
NAME            READY   STATUS      RESTARTS   AGE
configmap-pod   0/1     Completed   0          2m35s

$ kubectl logs configmap-pod
...
SPECIAL_TYPE_KEY=charm    # target
SPECIAL_LEVEL_KEY=very    # target
log_level=INFO            # target
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1

1.2.2 Setting command-line arguments with a ConfigMap

apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
  - name: cm-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "echo ${SPECIAL_LEVEL_KEY} ${SPECIAL_TYPE_KEY}"]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.how
    - name: SPECIAL_TYPE_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.type
  restartPolicy: Never

1.2.3 Consuming a ConfigMap through a volume

apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
  - name: cm-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "sleep 300"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: special-config
  restartPolicy: Never
$ kubectl exec configmap-pod -it -- cat /etc/config/special.how
very

1.2.4 ConfigMap hot updates

# configmap-hot-update.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-config
  namespace: default
data:
  log_level: INFO

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: hub.elihe.io/test/nginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
      volumes:
      - name: config-volume
        configMap:
          name: log-config
$ kubectl apply -f configmap-hot-update.yaml

$ kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-df47dc9cd-9mjnx   1/1     Running   0          82s

$ kubectl exec nginx-df47dc9cd-9mjnx -it -- cat /etc/config/log_level
INFO

# Modify the ConfigMap
$ kubectl edit configmap log-config
apiVersion: v1
data:
  log_level: DEBUG
kind: ConfigMap

# Query again about 30s later
$ kubectl exec nginx-df47dc9cd-9mjnx -it -- cat /etc/config/log_level
DEBUG

# Trigger a rolling update so the Pods restart and the new config takes effect
$ kubectl patch deployment nginx --patch '{"spec": {"template": {"metadata": {"annotations": {"version.config": "20201014"}}}}}'

2. Secret

Secrets address the problem of configuring sensitive data such as passwords, tokens, and keys. They can be consumed by Pods either as volumes or as environment variables.

There are three types of Secret:

  • Service Account: used to access the Kubernetes API. Created automatically by Kubernetes and mounted automatically into Pods at /run/secrets/kubernetes.io/serviceaccount
  • Opaque: a base64-encoded Secret, used to store passwords, keys, and the like
  • kubernetes.io/dockerconfigjson: used to store credentials for a private Docker registry

2.1 Service Account (SA)

$ kubectl exec nginx-596675fccc-v8gfw -it -- ls /run/secrets/kubernetes.io/serviceaccount
ca.crt namespace token

2.2 Opaque

2.2.1 Creation

$ echo -n "admin" | base64
YWRtaW4=

$ echo -n "pass123" | base64
cGFzczEyMw==
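To sanity-check the encodings, decode them back (a standard base64 round trip):

```shell
# Decode the base64 values back to verify them
echo -n "YWRtaW4=" | base64 -d; echo         # -> admin
echo -n "cGFzczEyMw==" | base64 -d; echo     # -> pass123
```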
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: YWRtaW4=
  password: cGFzczEyMw==
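If hand-encoding is inconvenient, the Secret API also accepts a `stringData` field with plaintext values that the API server base64-encodes on write; an equivalent sketch of the same Secret:

```yaml
# Equivalent Secret using stringData (plaintext; encoded by the API server on write)
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
stringData:
  username: admin
  password: pass123
```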

2.2.2 Using a Secret

  1. Mount the Secret into a volume
# secret-volume.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: secret-volume
  name: secret-volume
spec:
  volumes:
  - name: secrets
    secret:
      secretName: my-secret
  containers:
  - name: db
    image: hub.elihe.io/test/nginx:v1
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: secrets
      mountPath: "/etc/secrets"
      readOnly: true
$ kubectl create -f secret-volume.yaml 

$ kubectl get pod
NAME            READY   STATUS    RESTARTS   AGE
secret-volume   1/1     Running   0          8s

$ kubectl exec secret-volume -it -- ls /etc/secrets
password  username

# Decoded automatically inside the container
$ kubectl exec secret-volume -it -- cat /etc/secrets/password
pass123
  2. Import the Secret as environment variables
# secret-env.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secret-env
spec:
  replicas: 2
  selector:
    matchLabels:
      app: secret-pod
  template:
    metadata:
      labels:
        app: secret-pod
    spec:
      containers:
      - name: nginx
        image: hub.elihe.io/test/nginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        env:
        - name: TEST_USER
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: username
        - name: TEST_PASSWORD
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: password
$ kubectl apply -f secret-env.yaml 

$ kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
secret-env-6f5785997f-2w7dj   1/1     Running   0          8s
secret-env-6f5785997f-khzjz   1/1     Running   0          8s

$ kubectl exec secret-env-6f5785997f-2w7dj -it -- env | grep TEST
TEST_USER=admin
TEST_PASSWORD=pass123

2.3 kubernetes.io/dockerconfigjson

Create Docker registry credentials:

$ kubectl create secret docker-registry myregistrykey --docker-server=hub.elihe.io --docker-username=admin --docker-password=Harbor12345 --docker-email=eli.he@live.cn
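Under the hood this secret stores a `.dockerconfigjson` key whose decoded value is a standard Docker config file. A sketch of how that payload is built, using the values from the command above (`auth` is base64 of `username:password`):

```shell
# Reconstruct the .dockerconfigjson payload the secret will carry.
# The outer JSON maps registry server -> credentials.
auth=$(printf '%s' 'admin:Harbor12345' | base64)
printf '{"auths":{"hub.elihe.io":{"username":"admin","password":"Harbor12345","email":"eli.he@live.cn","auth":"%s"}}}\n' "$auth"
```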

When creating the Pod, reference the newly created myregistrykey via imagePullSecrets:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: hub.elihe.io/test/nginx:v1
  imagePullSecrets:
  - name: myregistrykey

3. Volume

Volumes solve two problems:

  1. Files newly written to a container's disk disappear when the container restarts; they cannot be persisted
  2. Multiple containers running in the same Pod need to share files

When a container in a Pod restarts, the volume's data survives. But when the Pod itself ceases to exist, the volume ceases to exist with it.

Supported volume types:

  • awsElasticBlockStore, azureDisk, azureFile, cephfs, csi, downwardAPI, emptyDir
  • fc, flocker, gcePersistentDisk, gitRepo, glusterfs, hostPath, iscsi, local, nfs
  • persistentVolumeClaim, projected, portworxVolume, quobyte, rbd, scaleIO, secret
  • storageos vsphereVolume

3.1 emptyDir

When a Pod is created, its emptyDir volumes are created automatically. They start out empty, and the Pod's containers can read and write files in them. When the Pod is deleted, the data in emptyDir is permanently deleted. A container crash does not delete the Pod, so data in an emptyDir volume is safe across container crashes.

Typical emptyDir uses:

  • Scratch space, e.g. for a disk-based merge sort
  • Checkpointing a long computation for recovery from crashes
  • Holding files that a content-manager container fetches while a web-server container serves them
apiVersion: v1
kind: Pod
metadata:
  name: vol-emptydir
spec:
  containers:
  - name: c1
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - name: c2
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
$ kubectl exec vol-emptydir -c c1 -it -- touch /cache/now.txt

$ kubectl exec vol-emptydir -c c2 -it -- ls -l /cache/now.txt
-rw-r--r-- 1 root root 0 Oct 14 01:34 /cache/now.txt
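The sharing behavior can be mimicked locally: both containers mount the same backing directory, so a file written by one is visible to the other (a stand-in sketch, not Kubernetes code):

```shell
# Stand-in for the emptyDir volume shared by c1 and c2
cache=$(mktemp -d)
touch "$cache/now.txt"   # what c1 does through its /cache mount
ls "$cache"              # what c2 sees through its /cache mount -> now.txt
rm -rf "$cache"
```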

3.2 hostPath

hostPath mounts a file or directory from the host node's filesystem into the Pod.

Typical hostPath uses:

  • Running a container that needs access to Docker internals, using a hostPath of /var/lib/docker
  • Running cAdvisor (a container-monitoring service from Google) in a container, using a hostPath of /dev/cgroups

A hostPath volume can specify a type, which controls the checks performed before mounting:

Type                Behavior
""                  Empty string (default): for backward compatibility, no checks are performed before mounting the hostPath volume
DirectoryOrCreate   Directory is created if it does not exist, mode 0755, same group and ownership as the kubelet
Directory           Directory must exist
FileOrCreate        File is created if it does not exist, mode 0644, same group and ownership as the kubelet
File                File must exist
Socket              Unix socket must exist
CharDevice          Character device must exist
BlockDevice         Block device must exist
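The first two rows can be illustrated locally; this is only an emulation of the checks, not kubelet code, and `/tmp/hostpath-demo` is a made-up path:

```shell
dir=/tmp/hostpath-demo
rm -rf "$dir"

# DirectoryOrCreate: create the directory if missing, mode 0755
[ -d "$dir" ] || mkdir -m 0755 "$dir"
stat -c '%a' "$dir"    # -> 755 (GNU stat)

# Directory: the mount would fail unless the directory already exists
[ -d "$dir" ] && echo "mount ok"

rm -rf "$dir"
```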
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath
spec:
  containers:
  - name: c1
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    volumeMounts:
    - mountPath: /data
      name: data-volume
  - name: c2
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    volumeMounts:
    - mountPath: /data
      name: data-volume
  volumes:
  - name: data-volume
    hostPath:
      path: /data
      type: Directory
$ kubectl get pod -o wide
NAME           READY   STATUS              RESTARTS   AGE   IP       NODE         NOMINATED NODE   READINESS GATES
vol-hostpath   0/2     ContainerCreating   0          44s   <none>   k8s-node02   <none>           <none>

# Create the /data directory on k8s-node02
$ mkdir /data
$ date > /data/abc.txt

# Check the file contents from inside the container
$ kubectl exec vol-hostpath -c c1 -it -- cat /data/abc.txt
Sat Sep 12 11:10:53 CST 2020

4. PV & PVC

PV's role: shield users from the differences between backend storage types, such as inconsistent mount mechanisms.

PVC's role: find a suitable PV to bind to.

4.1 Concepts

  • PV: PersistentVolume. A piece of storage provisioned by an administrator; it is part of the cluster. Just as nodes are cluster resources, so are PVs. A PV is a volume plugin like Volume, but its lifecycle is independent of any Pod that uses it.
    • Static PV: the cluster administrator creates a number of PVs carrying the details of the real storage available to cluster users. They exist in the Kubernetes API and are available for consumption
    • Dynamic PV: when none of the administrator's static PVs match a user's PersistentVolumeClaim, the cluster may try to provision a volume dynamically for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class, and the administrator must have created and configured that class for dynamic provisioning to happen. Claiming the class "" effectively disables dynamic provisioning for that claim
  • PVC: PersistentVolumeClaim. A user's request for storage, analogous to a Pod. Pods consume node resources; PVCs consume PV resources. Pods can request specific levels of resources (CPU & memory); PVCs can request a specific size and access modes (e.g. mounted once read/write, or read-only by many)

4.1.1 Binding

A control loop in the master watches for new PVCs, finds a matching PV when possible, and binds them together. If a PV was dynamically provisioned for a new PVC, the loop always binds that PV to that PVC. Otherwise, the user always gets at least the storage they asked for, though the volume may exceed the requested size. Once a PV and PVC are bound, the binding is exclusive regardless of how it was made: PVC-to-PV bindings are one-to-one.
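A toy sketch of that matching loop, checking only storage class and capacity (the real controller also considers access modes, volume mode, and selectors, and prefers the smallest adequate PV; the names and sizes here are the ones used later in this post):

```shell
# First-fit match of a 1Gi, class "nfs" claim against candidate PVs ("name:sizeGi:class")
request_size=1; request_class=nfs
for pv in "nfspv1:1:nfs" "nfspv2:2:nfs" "nfspv3:3:nfs"; do
  name=${pv%%:*}; rest=${pv#*:}
  size=${rest%%:*}; class=${rest#*:}
  if [ "$class" = "$request_class" ] && [ "$size" -ge "$request_size" ]; then
    echo "bind $name"   # -> bind nfspv1
    break
  fi
done
```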

4.1.2 Protection of in-use persistent volume claims

The purpose of PVC protection is to ensure that a PVC actively used by a Pod is not removed from the system, since removing it could cause data loss.

A PVC is in active use while the Pod's status is Pending or Running.

When the PVC-protection alpha feature is enabled and a user deletes a PVC that a Pod is still using, the PVC is not removed immediately; its deletion is postponed until no Pod uses it any longer.

4.1.3 Persistent volume types

  • GCEPersistentDisk, AWSElasticBlockStore, AzureFile, AzureDisk, FC (Fibre Channel)
  • FlexVolume, Flocker, NFS, iSCSI, RBD (Ceph Block Device), CephFS
  • Cinder (OpenStack block storage), Glusterfs, VsphereVolume, Quobyte Volumes
  • HostPath, VMware Photon, Portworx Volumes, ScaleIO Volumes, StorageOS

4.1.4 PV access modes

A PV can be mounted on a host in any way the resource provider supports.

  • ReadWriteOnce (RWO): read-write by a single node
  • ReadOnlyMany (ROX): read-only by many nodes
  • ReadWriteMany (RWX): read-write by many nodes

4.1.5 Reclaim policies

  • Retain: keep the volume; it must be reclaimed manually
  • Recycle: basic scrub (rm -rf /thevolume/*); no longer supported in recent Kubernetes releases
  • Delete: the associated storage asset is deleted

Only NFS and HostPath support the Recycle policy.

AWS EBS, GCE PD, Azure Disk, and Cinder volumes support the Delete policy.

4.1.6 Phases

  • Available: a free resource not yet bound to a claim
  • Bound: bound to a claim
  • Released: the claim has been deleted, but the resource has not yet been reclaimed by the cluster
  • Failed: automatic reclamation failed
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce      # read-write by a single node at a time
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /tmp
    server: 192.168.31.200

4.2 Persistence demo with NFS

4.2.1 Installing the NFS server

yum install -y nfs-utils rpcbind
mkdir /nfs
chmod 666 /nfs
chown nfsnobody /nfs
echo '/nfs *(rw,no_root_squash,no_all_squash,sync)' > /etc/exports

systemctl start rpcbind
systemctl start nfs
exportfs -rv

# Client setup
yum install -y nfs-utils rpcbind
showmount -e 192.168.31.200
mkdir /test
mount -t nfs 192.168.31.200:/nfs /test

4.2.2 Deploying the PVs

# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs
    server: 192.168.31.200

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv2
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs2
    server: 192.168.31.200

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv3
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs3
    server: 192.168.31.200
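Note that section 4.2.1 only created and exported /nfs, while nfspv2 and nfspv3 point at /nfs2 and /nfs3; those directories must also exist and be exported on the NFS server or the mounts will fail. A local sketch using stand-in paths (the real server would use /nfs2 and /nfs3 plus matching /etc/exports entries):

```shell
# Stand-in for creating the extra export directories on the NFS server
for d in /tmp/demo-nfs /tmp/demo-nfs2 /tmp/demo-nfs3; do
  mkdir -p "$d"
done
ls -d /tmp/demo-nfs*    # lists all three directories
rm -rf /tmp/demo-nfs /tmp/demo-nfs2 /tmp/demo-nfs3
```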

4.2.3 Creating the service and consuming PVCs

A StatefulSet controller requires a headless Service to exist first.

# app.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx    # must reference a headless Service
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: hub.elihe.io/test/nginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:               # claim selection criteria
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs
      resources:
        requests:
          storage: 1Gi
$ kubectl apply -f pv.yaml
$ kubectl apply -f app.yaml

$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
nfspv1   1Gi        RWO            Retain           Bound    default/www-web-0   nfs                     5m23s
nfspv2   2Gi        RWO            Retain           Bound    default/www-web-1   nfs                     5m23s
nfspv3   3Gi        RWO            Retain           Bound    default/www-web-2   nfs                     5m23s

$ kubectl get pvc
NAME        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    nfspv1   1Gi        RWO            nfs            2m53s
www-web-1   Bound    nfspv2   2Gi        RWO            nfs            2m50s
www-web-2   Bound    nfspv3   3Gi        RWO            nfs            2m45s

$ kubectl get sts    # StatefulSet
NAME   READY   AGE
web    3/3     3m11s

$ kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          3m29s
web-1   1/1     Running   0          3m26s
web-2   1/1     Running   0          3m21s

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4d17h
nginx        ClusterIP   None         <none>        80/TCP    49m

StatefulSet notes:

  • Pod name (network identity): $(statefulset name)-$(ordinal), e.g. web-0
  • DNS name: $(podname).$(headless service name). When a Pod is rebuilt its IP changes, but the DNS name does not, e.g. web-0.nginx
  • FQDN: $(service name).$(namespace).svc.cluster.local, where "cluster.local" is the cluster domain, e.g. nginx.default.svc.cluster.local
$ kubectl get pod -n kube-system -o wide | grep coredns
coredns-66bff467f8-8lb4m 1/1 Running 4 21d 10.244.0.10 k8s-master <none> <none>
coredns-66bff467f8-nbzmn 1/1 Running 4 21d 10.244.0.11 k8s-master <none> <none>

$ dig -t A nginx.default.svc.cluster.local. @10.244.0.10
;; ANSWER SECTION:
nginx.default.svc.cluster.local. 30 IN A 10.244.2.53
nginx.default.svc.cluster.local. 30 IN A 10.244.1.36
nginx.default.svc.cluster.local. 30 IN A 10.244.2.54

FQDN (Fully Qualified Domain Name): a name carrying both the host name and the domain name, joined by ".". For example, with host name bigserver and domain mycompany.com, the FQDN is bigserver.mycompany.com.
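The naming rules above can be sketched in shell (values assumed from this post's web/nginx example):

```shell
# Derive StatefulSet pod names and their in-cluster DNS names
sts=web; svc=nginx; ns=default; replicas=3
i=0
while [ "$i" -lt "$replicas" ]; do
  pod="${sts}-${i}"
  echo "${pod} -> ${pod}.${svc}.${ns}.svc.cluster.local"
  i=$((i + 1))
done
```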

StatefulSet start/stop ordering:

  • Ordered deployment: with multiple replicas, Pods are created in order 0 to N-1, and each next Pod is created only once the previous one is Running and Ready
  • Ordered deletion: Pods are deleted in reverse order, from N-1 down to 0
  • Ordered scaling: scaling must also proceed in order

StatefulSet use cases:

  • Stable, persistent storage
  • Stable network identity: PodName and Hostname are unchanged after rescheduling
  • Ordered deployment and scaling, implemented with init containers
  • Ordered scale-down

Releasing PV resources:

$ kubectl delete svc nginx

$ kubectl delete sts --all

$ kubectl get pvc
NAME        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    nfspv1   1Gi        RWO            nfs            39m
www-web-1   Bound    nfspv2   2Gi        RWO            nfs            39m
www-web-2   Bound    nfspv3   3Gi        RWO            nfs            39m

$ kubectl delete pvc --all
persistentvolumeclaim "www-web-0" deleted
persistentvolumeclaim "www-web-1" deleted
persistentvolumeclaim "www-web-2" deleted

$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS   REASON   AGE
nfspv1   1Gi        RWO            Retain           Released   default/www-web-0   nfs                     41m
nfspv2   2Gi        RWO            Retain           Released   default/www-web-1   nfs                     41m
nfspv3   3Gi        RWO            Retain           Released   default/www-web-2   nfs                     41m

$ kubectl edit pv nfspv1
...
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:            # delete this whole block to make the PV Available again
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: www-web-0
    namespace: default
    resourceVersion: "104634"
    uid: 57597e18-963d-4ce1-b1d9-880ac0ef3da0
  nfs:
    path: /nfs
    server: 192.168.31.200
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
...

$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
nfspv1   1Gi        RWO            Retain           Available                       nfs                     46m
nfspv2   2Gi        RWO            Retain           Released    default/www-web-1   nfs                     46m
nfspv3   3Gi        RWO            Retain           Released    default/www-web-2   nfs                     46m