
Kubernetes Storage: A Detailed Guide to GlusterFS Clusters

Table of Contents

    1. GlusterFS overview
        1.1 Introduction to GlusterFS
        1.2 GlusterFS features
        1.3 GlusterFS volume modes
    2. Heketi overview
    3. Deploying Heketi + GlusterFS
        3.1 Preparation
            3.1.1 Install the GlusterFS client on all nodes
            3.1.2 Label the nodes
            3.1.3 Load the required kernel modules on all nodes
        3.2 Create the GlusterFS cluster
            3.2.1 Download the installation files
            3.2.2 Create the cluster
            3.2.3 Check the gfs pods
        3.3 Create the Heketi service
            3.3.1 Create the Heketi service account
            3.3.2 Create Heketi's RBAC binding and secret
            3.3.3 Deploy the bootstrap Heketi
        3.4 Create the gfs cluster
            3.4.1 Copy the binary
            3.4.2 Configure topology-sample
            3.4.3 Get the current Heketi ClusterIP
            3.4.4 Create the gfs cluster with Heketi
            3.4.5 Persist the Heketi configuration
    4. Create the StorageClass
    5. Test dynamic provisioning via gfs
    6. How k8s creates the PV and PVC through Heketi
    7. Test the data
    8. Test a Deployment
    References
    Summary

1. GlusterFS overview


1.1 Introduction to GlusterFS

GlusterFS is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace to provide shared file storage.

1.2 GlusterFS features

    - Scales to several petabytes of capacity
    - Handles thousands of clients
    - POSIX-compatible interface
    - Runs on commodity hardware; ordinary servers are enough
    - Works on any file system that supports extended attributes, such as ext4 and XFS
    - Supports industry-standard protocols such as NFS and SMB
    - Provides advanced features such as replication, quotas, geo-replication, snapshots, and bitrot detection
    - Can be tuned for different workloads


1.3 GlusterFS volume modes

GlusterFS supports several volume modes, including the following (example creation commands are sketched after this list):

    - Distributed volume (default mode), i.e. DHT: each file is placed on a single server node according to a hash algorithm.
    - Replicated mode, i.e. AFR: created with replica x; each file is copied to x nodes.
    - Striped mode: created with stripe x; files are split into chunks stored across x nodes (similar to RAID 0).
    - Distributed striped mode: requires at least 4 servers; created with stripe 2 across 4 nodes; a combination of DHT and striping.
    - Distributed replicated mode: requires at least 4 servers; created with replica 2 across 4 nodes; a combination of DHT and AFR.
    - Striped replicated mode: requires at least 4 servers; created with stripe 2 replica 2 across 4 nodes; a combination of striping and AFR.
    - All three combined: requires at least 8 servers; stripe 2 replica 2, with every 4 nodes forming one group.
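For reference, creating a few of these modes with the gluster CLI looks roughly like the following (a sketch only; the server names and brick paths are placeholders, not hosts from this article):

  1. # Distributed volume (default mode): files are hashed across the bricks
  2. $ gluster volume create dist-vol server1:/data/brick1 server2:/data/brick1
  3. # Replicated volume: each file is stored on every brick in the replica set
  4. $ gluster volume create repl-vol replica 2 server1:/data/brick1 server2:/data/brick1
  5. # Distributed-replicated volume: 4 bricks form 2 replica sets of 2
  6. $ gluster volume create dist-repl-vol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1
  7. # Start a volume after creating it
  8. $ gluster volume start repl-vol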


2. Heketi overview

Heketi is a framework that provides a RESTful API for managing gfs volumes. It enables dynamic storage provisioning on cloud platforms such as Kubernetes, OpenShift, and OpenStack, supports managing multiple gfs clusters, and makes gfs easier for administrators to operate. In a Kubernetes cluster, a pod's storage request is sent to Heketi, and Heketi then drives the gfs cluster to create the corresponding volume.

Heketi dynamically selects bricks within a cluster to build the requested volumes, ensuring that replicas are spread across different failure domains in the cluster.

Heketi also supports any number of GlusterFS clusters, so the consuming cloud servers are not tied to a single GlusterFS cluster.
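As a quick sketch of what the RESTful API looks like (assuming Heketi listens on port 8080; endpoints other than /hello require a JWT token, which heketi-cli generates for you later in this article):

  1. # Health check, no authentication required
  2. $ curl http://<heketi-address>:8080/hello
  3. Hello from Heketi
  4. # Cluster and volume management goes through endpoints such as GET /clusters and POST /volumes;
  5. # in practice they are driven by heketi-cli or the Kubernetes glusterfs provisioner rather than raw curl.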

3. Deploying Heketi + GlusterFS

Environment: the latest Kubernetes 1.16.2 installed with kubeadm, consisting of 1 master + 2 nodes, with flannel as the network plugin. kubeadm taints the master by default; to run the gfs cluster across all three nodes, this article removes the taint manually first.

The GlusterFS volume mode used in this article is the replicated mode.

In addition, GlusterFS needs to run in privileged mode inside the Kubernetes cluster, which requires the --allow-privileged=true flag on kube-apiserver; the kubeadm version used here already enables it by default.
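For reference, the glusterfs container in glusterfs-daemonset.json (used below) requests privileged mode with a security context roughly like this (an excerpt-style sketch, not the full manifest):

  1. "securityContext": {
  2.     "privileged": true
  3. },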

  1. [root@k8s-master-01 ~]# kubectl describe nodes k8s-master-01 |grep Taint
  2. Taints:             node-role.kubernetes.io/master:NoSchedule  
  3. [root@k8s-master-01 ~]# kubectl taint node k8s-master-01 node-role.kubernetes.io/master-
  4. node/k8s-master-01 untainted
  5. [root@k8s-master-01 ~]# kubectl describe nodes k8s-master-01 |grep Taint
  6. Taints:             <none>

3.1 Preparation

For pods to use gfs as backend storage, the gfs client tools must be installed in advance on every node that may run such pods; the same applies to other storage backends.

3.1.1 Install the GlusterFS client on all nodes

  1. $ yum install -y glusterfs glusterfs-fuse

3.1.2 Label the nodes

Label the Kubernetes nodes that should run gfs, because gfs is installed through a Kubernetes DaemonSet.

By default a DaemonSet installs onto every node; restricting it requires a label selector, so only nodes carrying the matching label get the installation.

The DaemonSet in the installation manifests targets nodes labeled storagenode=glusterfs, so apply that label to the nodes beforehand (a sketch of the selector follows).
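For reference, the pod template in glusterfs-daemonset.json selects nodes roughly like this (an excerpt-style sketch):

  1. "spec": {
  2.     "nodeSelector": {
  3.         "storagenode": "glusterfs"
  4.     },
  5.     ...
  6. }

Then apply the label to all three nodes: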

  1. [root@k8s-master-01 ~]# kubectl get nodes
  2. NAME            STATUS   ROLES    AGE     VERSION
  3. k8s-master-01   Ready    master   5d      v1.16.2
  4. k8s-node-01     Ready    <none>   4d23h   v1.16.2
  5. k8s-node-02     Ready    <none>   4d23h   v1.16.2
  6. [root@k8s-master-01 ~]# kubectl label node k8s-master-01 storagenode=glusterfs
  7. node/k8s-master-01 labeled
  8. [root@k8s-master-01 ~]# kubectl label node k8s-node-01 storagenode=glusterfs  
  9. node/k8s-node-01 labeled
  10. [root@k8s-master-01 ~]# kubectl label node k8s-node-02 storagenode=glusterfs
  11. node/k8s-node-02 labeled
  12. [root@k8s-master-01 ~]# kubectl get nodes --show-labels
  13. NAME            STATUS   ROLES    AGE     VERSION   LABELS
  14. k8s-master-01   Ready    master   5d      v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-01,kubernetes.io/os=linux,node-role.kubernetes.io/master=,storagenode=glusterfs
  15. k8s-node-01     Ready    <none>   4d23h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-01,kubernetes.io/os=linux,storagenode=glusterfs
  16. k8s-node-02     Ready    <none>   4d23h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-02,kubernetes.io/os=linux,storagenode=glusterfs

3.1.3 Load the required kernel modules on all nodes

  1. $ modprobe dm_snapshot
  2. $ modprobe dm_mirror
  3. $ modprobe dm_thin_pool

Check that the modules are loaded:

  1. $ lsmod | grep dm_snapshot
  2. $ lsmod | grep dm_mirror
  3. $ lsmod | grep dm_thin_pool
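To have these modules load again automatically after a reboot, they can also be persisted, for example through systemd's modules-load.d (optional; the file name glusterfs.conf is just a convention chosen here):

  1. $ cat > /etc/modules-load.d/glusterfs.conf <<EOF
  2. dm_snapshot
  3. dm_mirror
  4. dm_thin_pool
  5. EOF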

3.2 Create the GlusterFS cluster

The gfs cluster is deployed here in containers; a traditional deployment would also work. In production, the gfs cluster is best deployed outside the Kubernetes cluster, in which case only the corresponding endpoints need to be created. Here a DaemonSet is used so that every labeled node runs one gfs service, and each of those nodes has a disk to provide storage.

3.2.1 Download the installation files

  1. [root@k8s-master-01 glusterfs]# pwd
  2. /root/manifests/glusterfs
  3. [root@k8s-master-01 glusterfs]# wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
  4. [root@k8s-master-01 glusterfs]# tar xf heketi-client-v7.0.0.linux.amd64.tar.gz
  5. [root@k8s-master-01 glusterfs]# cd heketi-client/share/heketi/kubernetes/
  6. [root@k8s-master-01 kubernetes]# pwd
  7. /root/manifests/glusterfs/heketi-client/share/heketi/kubernetes

In this cluster, the API version of the DaemonSet controller used below (and of the Deployment controllers used later) has moved to apps/v1, so the downloaded JSON files must be edited before deploying, and a selector declaration must be added to the manifests. Otherwise the following errors appear:

  1. [root@k8s-master-01 kubernetes]# kubectl apply -f glusterfs-daemonset.json
  2. error: unable to recognize "glusterfs-daemonset.json": no matches for kind "DaemonSet" in version "extensions/v1beta1"

Change the apiVersion from

  1. "apiVersion": "extensions/v1beta1"

to apps/v1:

  1. "apiVersion": "apps/v1",

Add the selector declaration:

  1. [root@k8s-master-01 kubernetes]# kubectl apply -f glusterfs-daemonset.json
  2. error: error validating "glusterfs-daemonset.json": error validating data: ValidationError(DaemonSet.spec): missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec; if you choose to ignore these errors, turn validation off with --validate=false

The selector is associated with the pod template labels further down via matchLabels:

  1. "spec": {
  2.     "selector": {
  3.         "matchLabels": {
  4.             "glusterfs-node": "daemonset"
  5.         }
  6.     },

3.2.2 Create the cluster

  1. [root@k8s-master-01 kubernetes]# kubectl apply -f glusterfs-daemonset.json
  2. daemonset.apps/glusterfs created

Notes:

    - The default mount method is used here; a separate disk can be used as the gfs working directory.
    - The namespace used here is default; another namespace can be specified manually.


3.2.3 Check the gfs pods

  1. [root@k8s-master-01 kubernetes]# kubectl get pods
  2. NAME              READY   STATUS    RESTARTS   AGE
  3. glusterfs-9tttf   1/1     Running   0          1m10s
  4. glusterfs-gnrnr   1/1     Running   0          1m10s
  5. glusterfs-v92j5   1/1     Running   0          1m10s

3.3 Create the Heketi service


3.3.1 Create the Heketi service account

  1. [root@k8s-master-01 kubernetes]# cat heketi-service-account.json
  2. {
  3.   "apiVersion": "v1",
  4.   "kind": "ServiceAccount",
  5.   "metadata": {
  6.     "name": "heketi-service-account"
  7.   }
  8. }
  9. [root@k8s-master-01 kubernetes]# kubectl apply -f heketi-service-account.json
  10. serviceaccount/heketi-service-account created
  11. [root@k8s-master-01 kubernetes]# kubectl get sa
  12. NAME                     SECRETS   AGE
  13. default                  1         71m
  14. heketi-service-account   1         5s

3.3.2 Create Heketi's RBAC binding and secret

  1. [root@k8s-master-01 kubernetes]# kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
  2. clusterrolebinding.rbac.authorization.k8s.io/heketi-gluster-admin created
  3. [root@k8s-master-01 kubernetes]# kubectl create secret generic heketi-config-secret –from-file=./heketi.json
  4. secret/heketi-config-secret created

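The admin user and key that heketi-cli passes later (section 3.4.4) come from the jwt section of this heketi.json; in the stock sample file it looks roughly like this (an excerpt-style sketch, with the default placeholder key that this article keeps using):

  1. "jwt": {
  2.     "admin": {
  3.         "key": "My Secret"
  4.     },
  5.     "user": {
  6.         "key": "My Secret"
  7.     }
  8. },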
3.3.3 Deploy the bootstrap Heketi

As before, the apiVersion must be changed and a selector declaration added.

  1. [root@k8s-master-01 kubernetes]# vim heketi-bootstrap.json
  2.       "kind": "Deployment",
  3.       "apiVersion": "apps/v1"
  4.       "spec": {
  5.         "selector": {
  6.           "matchLabels": {
  7.             "name": "deploy-heketi"
  8.           }
  9.         },
  10. [root@k8s-master-01 kubernetes]# kubectl create -f heketi-bootstrap.json
  11. service/deploy-heketi created
  12. deployment.apps/deploy-heketi created
  13. [root@k8s-master-01 kubernetes]# vim heketi-deployment.json
  14.       "kind": "Deployment",
  15.       "apiVersion": "apps/v1",
  16.       "spec": {
  17.         "selector": {
  18.           "matchLabels": {
  19.             "name": "heketi"
  20.           }
  21.         },
  22.         "replicas": 1,
  23. [root@k8s-master-01 kubernetes]# kubectl apply -f heketi-deployment.json
  24. secret/heketi-db-backup created
  25. service/heketi created
  26. deployment.apps/heketi created
  27. [root@k8s-master-01 kubernetes]# kubectl get pods
  28. NAME                             READY   STATUS    RESTARTS   AGE
  29. deploy-heketi-6c687b4b84-p7mcr   1/1     Running   0          72s
  30. heketi-68795ccd8-9726s           0/1     ContainerCreating   0          50s
  31. glusterfs-9tttf                  1/1     Running   0          48m
  32. glusterfs-gnrnr                  1/1     Running   0          48m
  33. glusterfs-v92j5                  1/1     Running   0          48m

3.4 Create the gfs cluster


3.4.1 Copy the binary

Copy heketi-cli to /usr/local/bin:

  1. [root@k8s-master-01 heketi-client]# pwd
  2. /root/manifests/glusterfs/heketi-client
  3. [root@k8s-master-01 heketi-client]# cp bin/heketi-cli /usr/local/bin/
  4. [root@k8s-master-01 heketi-client]# heketi-cli -v
  5. heketi-cli v7.0.0

3.4.2 Configure topology-sample

Edit topology-sample: manage is the hostname of the node running the gfs management service, storage is the node's IP address, and device is a raw block device on that node. The disk used to provide storage should preferably be a raw device without partitions.

Therefore, a new disk must be prepared on each gfs node in advance. Here a new 10G /dev/sdb disk has been added to each of the three nodes.

  1. [root@k8s-master-01 ~]# lsblk
  2. NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  3. sda               8:0    0   50G  0 disk
  4. ├─sda1            8:1    0    2G  0 part /boot
  5. └─sda2            8:2    0   48G  0 part
  6.   ├─centos-root 253:0    0   44G  0 lvm  /
  7.   └─centos-swap 253:1    0    4G  0 lvm  
  8. sdb               8:16   0   10G  0 disk
  9. sr0              11:0    1 1024M  0 rom
  10. [root@k8s-node-01 ~]# lsblk
  11. NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  12. sda               8:0    0   50G  0 disk
  13. ├─sda1            8:1    0    2G  0 part /boot
  14. └─sda2            8:2    0   48G  0 part
  15.   ├─centos-root 253:0    0   44G  0 lvm  /
  16.   └─centos-swap 253:1    0    4G  0 lvm  
  17. sdb               8:16   0   10G  0 disk
  18. sr0              11:0    1 1024M  0 rom
  19. [root@k8s-node-02 ~]# lsblk
  20. NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  21. sda               8:0    0   50G  0 disk
  22. ├─sda1            8:1    0    2G  0 part /boot
  23. └─sda2            8:2    0   48G  0 part
  24.   ├─centos-root 253:0    0   44G  0 lvm  /
  25.   └─centos-swap 253:1    0    4G  0 lvm  
  26. sdb               8:16   0   10G  0 disk
  27. sr0              11:0    1 1024M  0 rom

Configure topology-sample:

  1. {
  2.     "clusters": [
  3.         {
  4.             "nodes": [
  5.                 {
  6.                     "node": {
  7.                         "hostnames": {
  8.                             "manage": [
  9.                                 "k8s-master-01"
  10.                             ],
  11.                             "storage": [
  12.                                 "192.168.2.10"
  13.                             ]
  14.                         },
  15.                         "zone": 1
  16.                     },
  17.                     "devices": [
  18.                         {
  19.                             "name": "/dev/sdb",
  20.                             "destroydata": false
  21.                         }
  22.                     ]
  23.                 },
  24.                 {
  25.                     "node": {
  26.                         "hostnames": {
  27.                             "manage": [
  28.                                 "k8s-node-01"
  29.                             ],
  30.                             "storage": [
  31.                                 "192.168.2.11"
  32.                             ]
  33.                         },
  34.                         "zone": 1
  35.                     },
  36.                     "devices": [
  37.                         {
  38.                             "name": "/dev/sdb",
  39.                             "destroydata": false
  40.                         }
  41.                     ]
  42.                 },
  43.                 {
  44.                     "node": {
  45.                         "hostnames": {
  46.                             "manage": [
  47.                                 "k8s-node-02"
  48.                             ],
  49.                             "storage": [
  50.                                 "192.168.2.12"
  51.                             ]
  52.                         },
  53.                         "zone": 1
  54.                     },
  55.                     "devices": [
  56.                         {
  57.                             "name": "/dev/sdb",
  58.                             "destroydata": false
  59.                         }
  60.                     ]
  61.                 }
  62.             ]
  63.         }
  64.     ]
  65. }

3.4.3 Get the current Heketi ClusterIP

Look up the current Heketi ClusterIP and export it as an environment variable:

  1. [root@k8s-master-01 kubernetes]# kubectl get svc|grep heketi
  2. deploy-heketi   ClusterIP   10.1.241.99   <none>        8080/TCP   3m18s
  3. [root@k8s-master-01 kubernetes]# curl http://10.1.241.99:8080/hello
  4. Hello from Heketi
  5. [root@k8s-master-01 kubernetes]# export HEKETI_CLI_SERVER=http://10.1.241.99:8080
  6. [root@k8s-master-01 kubernetes]# echo $HEKETI_CLI_SERVER
  7. http://10.1.241.99:8080

3.4.4 Create the gfs cluster with Heketi

Running the following command to create the gfs cluster reports Invalid JWT token: Token missing iss claim:

  1. [root@k8s-master-01 kubernetes]# heketi-cli topology load --json=topology-sample.json
  2. Error: Unable to get topology information: Invalid JWT token: Token missing iss claim

This is because newer versions of Heketi require the username and password to be supplied when creating the gfs cluster; the corresponding values are configured in heketi.json (see the excerpt in section 3.3.2). That is:

  1. [root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology load --json=topology-sample.json
  2. Creating cluster … ID: 1c5ffbd86847e5fc1562ef70c033292e
  3.         Allowing file volumes on cluster.
  4.         Allowing block volumes on cluster.
  5.         Creating node k8s-master-01 … ID: b6100a5af9b47d8c1f19be0b2b4d8276
  6.                 Adding device /dev/sdb … OK
  7.         Creating node k8s-node-01 … ID: 04740cac8d42f56e354c94bdbb7b8e34
  8.                 Adding device /dev/sdb … OK
  9.         Creating node k8s-node-02 … ID: 1b33ad0dba20eaf23b5e3a4845e7cdb4
  10.                 Adding device /dev/sdb … OK

After heketi-cli topology load has run, Heketi has roughly done the following on the servers:

    - Entering any glusterfs pod and running gluster peer status shows that the peers have been added to the trusted storage pool (TSP).
    - On each node running a gluster pod, a VG has been created automatically from the raw disk device listed in topology-sample.json. Each disk device yields one VG, and every PVC created later is carved out of that VG as an LV.
    - heketi-cli topology info shows the topology, including each device's ID, the corresponding VG ID, and the total, used, and free space (verification commands are sketched below).
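To verify this by hand, the following commands can be used (a sketch; replace the glusterfs pod name with one from your own kubectl get pods output):

  1. # Check the trusted storage pool from inside any glusterfs pod
  2. $ kubectl exec -it <glusterfs-pod-name> -- gluster peer status
  3. # Check the VG that Heketi created from the raw /dev/sdb device
  4. $ kubectl exec -it <glusterfs-pod-name> -- vgs
  5. # Show the topology recorded by Heketi
  6. $ heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology info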

Part of the Heketi log shows this as well:

  1. [root@k8s-master-01 manifests]# kubectl logs -f deploy-heketi-6c687b4b84-l5b6j
  2. [kubeexec] DEBUG 2022/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [pvs -o pv_name,pv_uuid,vg_name –reportformat=json /dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [  {
  3.       “report”: [
  4.           {
  5.               “pv”: [
  6.                   {“pv_name”:”/dev/sdb”, “pv_uuid”:”1UkSIV-RYt1-QBNw-KyAR-Drm5-T9NG-UmO313″, “vg_name”:”vg_398329cc70361dfd4baa011d811de94a”}
  7.               ]
  8.           }
  9.       ]
  10.   }
  11. ]: Stderr [  WARNING: Device /dev/sdb not initialized in udev database even after waiting 10000000 microseconds.
  12.   WARNING: Device /dev/centos/root not initialized in udev database even after waiting 10000000 microseconds.
  13.   WARNING: Device /dev/sda1 not initialized in udev database even after waiting 10000000 microseconds.
  14.   WARNING: Device /dev/centos/swap not initialized in udev database even after waiting 10000000 microseconds.
  15.   WARNING: Device /dev/sda2 not initialized in udev database even after waiting 10000000 microseconds.
  16.   WARNING: Device /dev/sdb not initialized in udev database even after waiting 10000000 microseconds.
  17. ]
  18. [kubeexec] DEBUG 2022/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [udevadm info –query=symlink –name=/dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
  19. [kubeexec] DEBUG 2022/10/23 02:17:44 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
  20. [kubeexec] DEBUG 2022/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [udevadm info –query=symlink –name=/dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [
  21. ]: Stderr []
  22. [kubeexec] DEBUG 2022/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [vgdisplay -c vg_398329cc70361dfd4baa011d811de94a] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
  23. [kubeexec] DEBUG 2022/10/23 02:17:44 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
  24. [negroni] 2022-10-23T02:17:44Z | 200 |   93.868µs | 10.1.241.99:8080 | GET /queue/3d0b6edb0faa67e8efd752397f314a6f
  25. [kubeexec] DEBUG 2022/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [vgdisplay -c vg_398329cc70361dfd4baa011d811de94a] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [  vg_398329cc70361dfd4baa011d811de94a:r/w:772:-1:0:0:0:-1:0:1:1:10350592:4096:2527:0:2527:YCPG9X-b270-1jf2-VwKX-ycpZ-OI9u-7ZidOc
  26. ]: Stderr []
  27. [cmdexec] DEBUG 2022/10/23 02:17:44 heketi/executors/cmdexec/device.go:273:cmdexec.(*CmdExecutor).getVgSizeFromNode: /dev/sdb in k8s-node-01 has TotalSize:10350592, FreeSize:10350592, UsedSize:0
  28. [heketi] INFO 2022/10/23 02:17:44 Added device /dev/sdb
  29. [asynchttp] INFO 2022/10/23 02:17:44 Completed job 3d0b6edb0faa67e8efd752397f314a6f in 3m2.694238221s
  30. [negroni] 2022-10-23T02:17:45Z | 204 |   105.23µs | 10.1.241.99:8080 | GET /queue/3d0b6edb0faa67e8efd752397f314a6f
  31. [cmdexec] INFO 2022/10/23 02:17:45 Check Glusterd service status in node k8s-node-01
  32. [kubeexec] DEBUG 2022/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
  33. [kubeexec] DEBUG 2022/10/23 02:17:45 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
  34. [kubeexec] DEBUG 2022/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
  35. [heketi] INFO 2022/10/23 02:17:45 Adding node k8s-node-02
  36. [negroni] 2022-10-23T02:17:45Z | 202 |   146.998544ms | 10.1.241.99:8080 | POST /nodes
  37. [asynchttp] INFO 2022/10/23 02:17:45 Started job 8da70b6fd6fec1d61c4ba1cd0fe27fe5
  38. [cmdexec] INFO 2022/10/23 02:17:45 Probing: k8s-node-01 -> 192.168.2.12
  39. [negroni] 2022-10-23T02:17:45Z | 200 |   74.577µs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5
  40. [kubeexec] DEBUG 2022/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [gluster –mode=script –timeout=600 peer probe 192.168.2.12] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
  41. [kubeexec] DEBUG 2022/10/23 02:17:45 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
  42. [negroni] 2022-10-23T02:17:46Z | 200 |   79.893µs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5
  43. [kubeexec] DEBUG 2022/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [gluster –mode=script –timeout=600 peer probe 192.168.2.12] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [peer probe: success.
  44. ]: Stderr []
  45. [cmdexec] INFO 2022/10/23 02:17:46 Setting snapshot limit
  46. [kubeexec] DEBUG 2022/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [gluster –mode=script –timeout=600 snapshot config snap-max-hard-limit 14] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
  47. [kubeexec] DEBUG 2022/10/23 02:17:46 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
  48. [kubeexec] DEBUG 2022/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [gluster –mode=script –timeout=600 snapshot config snap-max-hard-limit 14] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [snapshot config: snap-max-hard-limit for System set successfully
  49. ]: Stderr []
  50. [heketi] INFO 2022/10/23 02:17:46 Added node 1b33ad0dba20eaf23b5e3a4845e7cdb4
  51. [asynchttp] INFO 2022/10/23 02:17:46 Completed job 8da70b6fd6fec1d61c4ba1cd0fe27fe5 in 488.404011ms
  52. [negroni] 2022-10-23T02:17:46Z | 303 |   80.712µs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5
  53. [negroni] 2022-10-23T02:17:46Z | 200 |   242.595µs | 10.1.241.99:8080 | GET /nodes/1b33ad0dba20eaf23b5e3a4845e7cdb4
  54. [heketi] INFO 2022/10/23 02:17:46 Adding device /dev/sdb to node 1b33ad0dba20eaf23b5e3a4845e7cdb4
  55. [negroni] 2022-10-23T02:17:46Z | 202 |   696.018µs | 10.1.241.99:8080 | POST /devices
  56. [asynchttp] INFO 2022/10/23 02:17:46 Started job 21af2069b74762a5521a46e2b52e7d6a
  57. [negroni] 2022-10-23T02:17:46Z | 200 |   82.354µs | 10.1.241.99:8080 | GET /queue/21af2069b74762a5521a46e2b52e7d6a
  58. [kubeexec] DEBUG 2022/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [pvcreate -qq –metadatasize=128M –dataalignment=256K ‘/dev/sdb’] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]
  59. [kubeexec] DEBUG 2022/10/23 02:17:46 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0

3.4.5 Persist the Heketi configuration

The Heketi deployed above has no persistent volume configured, so if the Heketi pod restarts, its existing configuration may be lost. Now create a persistent volume for Heketi so its data survives restarts. The persistence here uses the dynamic storage provided by gfs itself; other persistence methods would also work.

Install device-mapper* on all nodes:

  1. yum install -y device-mapper*

Save the configuration as a file and create the resources needed for persistence:

  1. [root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' setup-openshift-heketi-storage
  2. Saving heketi-storage.json
  3. [root@k8s-master-01 kubernetes]# kubectl apply -f heketi-storage.json
  4. secret/heketi-storage-secret created
  5. endpoints/heketi-storage-endpoints created
  6. service/heketi-storage-endpoints created
  7. job.batch/heketi-storage-copy-job created

Delete the intermediate resources:

  1. [root@k8s-master-01 kubernetes]# kubectl delete all,svc,jobs,deployment,secret --selector="deploy-heketi"
  2. pod “deploy-heketi-6c687b4b84-l5b6j” deleted
  3. service “deploy-heketi” deleted
  4. deployment.apps “deploy-heketi” deleted
  5. replicaset.apps “deploy-heketi-6c687b4b84” deleted
  6. job.batch “heketi-storage-copy-job” deleted
  7. secret “heketi-storage-secret” deleted

Create the persistent Heketi:

  1. [root@k8s-master-01 kubernetes]# kubectl apply -f heketi-deployment.json
  2. secret/heketi-db-backup created
  3. service/heketi created
  4. deployment.apps/heketi created
  5. [root@k8s-master-01 kubernetes]# kubectl get pods
  6. NAME                     READY   STATUS    RESTARTS   AGE
  7. glusterfs-cqw5d          1/1     Running   0          41m
  8. glusterfs-l2lsv          1/1     Running   0          41m
  9. glusterfs-lrdz7          1/1     Running   0          41m
  10. heketi-68795ccd8-m8x55   1/1     Running   0          32s

Check the persistent Heketi service and re-export the environment variable:

  1. [root@k8s-master-01 kubernetes]# kubectl get svc
  2. NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
  3. heketi                     ClusterIP   10.1.45.61   <none>        8080/TCP   2m9s
  4. heketi-storage-endpoints   ClusterIP   10.1.26.73   <none>        1/TCP      4m58s
  5. kubernetes                 ClusterIP   10.1.0.1     <none>        443/TCP    14h
  6. [root@k8s-master-01 kubernetes]# export HEKETI_CLI_SERVER=http://10.1.45.61:8080
  7. [root@k8s-master-01 kubernetes]# curl http://10.1.45.61:8080/hello
  8. Hello from Heketi

View the gfs cluster information; for more operations, see the official documentation:

  1. [root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology info
  2. Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
  3.     File:  true
  4.     Block: true
  5.     Volumes:
  6.         Name: heketidbstorage
  7.         Size: 2
  8.         Id: b25f4b627cf66279bfe19e8a01e9e85d
  9.         Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
  10.         Mount: 192.168.2.11:heketidbstorage
  11.         Mount Options: backup-volfile-servers=192.168.2.12,192.168.2.10
  12.         Durability Type: replicate
  13.         Replica: 3
  14.         Snapshot: Disabled
  15.                 Bricks:
  16.                         Id: 3ab6c19b8fe0112575ba04d58573a404
  17.                         Path: /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_3ab6c19b8fe0112575ba04d58573a404/brick
  18.                         Size (GiB): 2
  19.                         Node: b6100a5af9b47d8c1f19be0b2b4d8276
  20.                         Device: 703e3662cbd8ffb24a6401bb3c3c41fa
  21.                         Id: d1fa386f2ec9954f4517431163f67dea
  22.                         Path: /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_d1fa386f2ec9954f4517431163f67dea/brick
  23.                         Size (GiB): 2
  24.                         Node: 04740cac8d42f56e354c94bdbb7b8e34
  25.                         Device: 398329cc70361dfd4baa011d811de94a
  26.                         Id: d2b0ae26fa3f0eafba407b637ca0d06b
  27.                         Path: /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_d2b0ae26fa3f0eafba407b637ca0d06b/brick
  28.                         Size (GiB): 2
  29.                         Node: 1b33ad0dba20eaf23b5e3a4845e7cdb4
  30.                         Device: 7c791bbb90f710123ba431a7cdde8d0b
  31.     Nodes:
  32.         Node Id: 04740cac8d42f56e354c94bdbb7b8e34
  33.         State: online
  34.         Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
  35.         Zone: 1
  36.         Management Hostnames: k8s-node-01
  37.         Storage Hostnames: 192.168.2.11
  38.         Devices:
  39.                 Id:398329cc70361dfd4baa011d811de94a   Name:/dev/sdb            State:online    Size (GiB):9       Used (GiB):2       Free (GiB):7      
  40.                         Bricks:
  41.                                 Id:d1fa386f2ec9954f4517431163f67dea   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_d1fa386f2ec9954f4517431163f67dea/brick
  42.         Node Id: 1b33ad0dba20eaf23b5e3a4845e7cdb4
  43.         State: online
  44.         Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
  45.         Zone: 1
  46.         Management Hostnames: k8s-node-02
  47.         Storage Hostnames: 192.168.2.12
  48.         Devices:
  49.                 Id:7c791bbb90f710123ba431a7cdde8d0b   Name:/dev/sdb            State:online    Size (GiB):9       Used (GiB):2       Free (GiB):7      
  50.                         Bricks:
  51.                                 Id:d2b0ae26fa3f0eafba407b637ca0d06b   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_d2b0ae26fa3f0eafba407b637ca0d06b/brick
  52.         Node Id: b6100a5af9b47d8c1f19be0b2b4d8276
  53.         State: online
  54.         Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
  55.         Zone: 1
  56.         Management Hostnames: k8s-master-01
  57.         Storage Hostnames: 192.168.2.10
  58.         Devices:
  59.                 Id:703e3662cbd8ffb24a6401bb3c3c41fa   Name:/dev/sdb            State:online    Size (GiB):9       Used (GiB):2       Free (GiB):7      
  60.                         Bricks:
  61.                                 Id:3ab6c19b8fe0112575ba04d58573a404   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_3ab6c19b8fe0112575ba04d58573a404/brick

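Beyond topology info, heketi-cli can also manage volumes directly, which is handy for troubleshooting; a rough sketch (not a required step in this walkthrough):

  1. # Create a 2 GiB replica-3 volume by hand
  2. $ heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' volume create --size=2 --replica=3
  3. # List and delete volumes
  4. $ heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' volume list
  5. $ heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' volume delete <volume-id>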
4. Create the StorageClass

  1. [root@k8s-master-01 kubernetes]# vim storageclass-gfs-heketi.yaml
  2. apiVersion: storage.k8s.io/v1
  3. kind: StorageClass
  4. metadata:
  5.   name: gluster-heketi
  6. provisioner: kubernetes.io/glusterfs
  7. reclaimPolicy: Retain
  8. parameters:
  9.   resturl: "http://10.1.45.61:8080"
  10.   restauthenabled: "true"
  11.   restuser: "admin"
  12.   restuserkey: "My Secret"
  13.   gidMin: "40000"
  14.   gidMax: "50000"
  15.   volumetype: "replicate:3"
  16. allowVolumeExpansion: true
  17. [root@k8s-master-01 kubernetes]# kubectl apply -f storageclass-gfs-heketi.yaml
  18. storageclass.storage.k8s.io/gluster-heketi created

Parameter notes:

    - reclaimPolicy: the reclaim policy, Delete by default. With Retain, deleting the PVC does not delete the PV or the volume and bricks (LVs) created on the backend.
    - gidMin and gidMax: the minimum and maximum GIDs that can be used.
    - volumetype: the volume type and replica count; a replicated volume is used here, and the count must be greater than 1.
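Note that restuserkey stores the admin key in clear text inside the StorageClass. The kubernetes.io/glusterfs provisioner can instead reference a Secret via secretNamespace/secretName; a rough sketch of that variant (the secret name heketi-admin-secret is just an example, and the secret must be of type kubernetes.io/glusterfs with the key stored under "key"):

  1. # kubectl create secret generic heketi-admin-secret --type="kubernetes.io/glusterfs" --from-literal=key='My Secret' -n default
  2. apiVersion: storage.k8s.io/v1
  3. kind: StorageClass
  4. metadata:
  5.   name: gluster-heketi-secret
  6. provisioner: kubernetes.io/glusterfs
  7. parameters:
  8.   resturl: "http://10.1.45.61:8080"
  9.   restauthenabled: "true"
  10.   restuser: "admin"
  11.   secretNamespace: "default"
  12.   secretName: "heketi-admin-secret"
  13.   gidMin: "40000"
  14.   gidMax: "50000"
  15.   volumetype: "replicate:3"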


5. Test dynamic provisioning via gfs

Create a pod that uses a dynamically provisioned PV; in storageClassName, specify the name of the StorageClass created earlier, i.e. gluster-heketi:

  1. [root@k8s-master-01 kubernetes]# vim pod-use-pvc.yaml
  2. apiVersion: v1
  3. kind: Pod
  4. metadata:
  5.   name: pod-use-pvc
  6. spec:
  7.   containers:
  8.   - name: pod-use-pvc
  9.     image: busybox
  10.     command:
  11.       - sleep
  12.       - "3600"
  13.     volumeMounts:
  14.     - name: gluster-volume
  15.       mountPath: "/pv-data"
  16.       readOnly: false
  17.   volumes:
  18.   - name: gluster-volume
  19.     persistentVolumeClaim:
  20.       claimName: pvc-gluster-heketi
  21. ---
  22. kind: PersistentVolumeClaim
  23. apiVersion: v1
  24. metadata:
  25.   name: pvc-gluster-heketi
  26. spec:
  27.   accessModes: [ "ReadWriteOnce" ]
  28.   storageClassName: "gluster-heketi"
  29.   resources:
  30.     requests:
  31.       storage: 1Gi

Create the pod and check the resulting PV and PVC:

  1. [root@k8s-master-01 kubernetes]# kubectl apply -f pod-use-pvc.yaml
  2. pod/pod-use-pvc created
  3. persistentvolumeclaim/pvc-gluster-heketi created
  4. [root@k8s-master-01 kubernetes]# kubectl get pv,pvc
  5. NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS     REASON   AGE
  6. persistentvolume/pvc-0fb9b246-4da4-491c-b6a2-4f38489ab11c   1Gi        RWO            Retain           Bound    default/pvc-gluster-heketi   gluster-heketi            57s
  7. NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
  8. persistentvolumeclaim/pvc-gluster-heketi   Bound    pvc-0fb9b246-4da4-491c-b6a2-4f38489ab11c   1Gi        RWO            gluster-heketi   62s

6. How k8s creates the PV and PVC through Heketi

The PVC asks the StorageClass to create a corresponding PV; the details can be followed in the logs of the Heketi pod.

First, after receiving the request, Heketi runs a job that creates three bricks, creating the corresponding directories on the three gfs nodes:

  1. [heketi] INFO 2022/10/23 03:08:36 Allocating brick set #0
  2. [negroni] 2022-10-23T03:08:36Z | 202 |   56.193603ms | 10.1.45.61:8080 | POST /volumes
  3. [asynchttp] INFO 2022/10/23 03:08:36 Started job 3ec932315085609bc54ead6e3f6851e8
  4. [heketi] INFO 2022/10/23 03:08:36 Started async operation: Create Volume
  5. [heketi] INFO 2022/10/23 03:08:36 Trying Create Volume (attempt #1/5)
  6. [heketi] INFO 2022/10/23 03:08:36 Creating brick 289fe032c1f4f9f211480e24c5d74a44
  7. [heketi] INFO 2022/10/23 03:08:36 Creating brick a3172661ba1b849d67b500c93c3dd652
  8. [heketi] INFO 2022/10/23 03:08:36 Creating brick 917e27a9dbc5395ebf08dff8d3401b43
  9. [negroni] 2022-10-23T03:08:36Z | 200 |   72.083µs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8
  10. [kubeexec] DEBUG 2022/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
  11. [kubeexec] DEBUG 2022/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
  12. [kubeexec] DEBUG 2022/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
  13. [kubeexec] DEBUG 2022/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 1
  14. [kubeexec] DEBUG 2022/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_917e27a9dbc5395ebf08dff8d3401b43] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]
  15. [kubeexec] DEBUG 2022/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2

Create the LVs and add automatic mounting:

  1. [kubeexec] DEBUG 2022/10/23 03:08:37 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
  2. [kubeexec] DEBUG 2022/10/23 03:08:37 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout [meta-data=/dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652 isize=512    agcount=8, agsize=32768 blks
  3.          =                       sectsz=512   attr=2, projid32bit=1
  4.          =                       crc=1        finobt=0, sparse=0
  5. data     =                       bsize=4096   blocks=262144, imaxpct=25
  6.          =                       sunit=64     swidth=64 blks
  7. naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
  8. log      =internal log           bsize=4096   blocks=2560, version=2
  9.          =                       sectsz=512   sunit=64 blks, lazy-count=1
  10. realtime =none                   extsz=4096   blocks=0, rtextents=0
  11. ]: Stderr []
  12. [kubeexec] DEBUG 2022/10/23 03:08:37 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [awk “BEGIN {print “/dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652 xfs rw,inode64,noatime,nouuid 1 2” >> “/var/lib/heketi/fstab”}”] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]

Create the bricks and set permissions:

  1. [kubeexec] DEBUG 2022/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [chmod 2775 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652/brick] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
  2. [kubeexec] DEBUG 2022/10/23 03:08:38 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
  3. [kubeexec] DEBUG 2022/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chown :40000 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout []: Stderr []
  4. [kubeexec] DEBUG 2022/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [chmod 2775 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
  5. [kubeexec] DEBUG 2022/10/23 03:08:38 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
  6. [negroni] 2022-10-23T03:08:38Z | 200 |   83.159µs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8
  7. [kubeexec] DEBUG 2022/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_917e27a9dbc5395ebf08dff8d3401b43/brick] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]: Stdout []: Stderr []
  8. [kubeexec] DEBUG 2022/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652/brick] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout []: Stderr []
  9. [kubeexec] DEBUG 2022/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout []: Stderr []
  10. [cmdexec] INFO 2022/10/23 03:08:38 Creating volume vol_08e8447256de2598952dcb240e615d0f replica 3

Create the corresponding volume:

  1. [asynchttp] INFO 2022/10/23 03:08:41 Completed job 3ec932315085609bc54ead6e3f6851e8 in 5.007631648s
  2. [negroni] 2022-10-23T03:08:41Z | 303 |   78.335µs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8
  3. [negroni] 2022-10-23T03:08:41Z | 200 |   5.751689ms | 10.1.45.61:8080 | GET /volumes/08e8447256de2598952dcb240e615d0f
  4. [negroni] 2022-10-23T03:08:41Z | 200 |   139.05µs | 10.1.45.61:8080 | GET /clusters/1c5ffbd86847e5fc1562ef70c033292e
  5. [negroni] 2022-10-23T03:08:41Z | 200 |   660.249µs | 10.1.45.61:8080 | GET /nodes/04740cac8d42f56e354c94bdbb7b8e34
  6. [negroni] 2022-10-23T03:08:41Z | 200 |   270.334µs | 10.1.45.61:8080 | GET /nodes/1b33ad0dba20eaf23b5e3a4845e7cdb4
  7. [negroni] 2022-10-23T03:08:41Z | 200 |   345.528µs | 10.1.45.61:8080 | GET /nodes/b6100a5af9b47d8c1f19be0b2b4d8276
  8. [heketi] INFO 2022/10/23 03:09:39 Starting Node Health Status refresh
  9. [cmdexec] INFO 2022/10/23 03:09:39 Check Glusterd service status in node k8s-node-01
  10. [kubeexec] DEBUG 2022/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
  11. [kubeexec] DEBUG 2022/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
  12. [kubeexec] DEBUG 2022/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
  13. [heketi] INFO 2022/10/23 03:09:39 Periodic health check status: node 04740cac8d42f56e354c94bdbb7b8e34 up=true
  14. [cmdexec] INFO 2022/10/23 03:09:39 Check Glusterd service status in node k8s-node-02
  15. [kubeexec] DEBUG 2022/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]
  16. [kubeexec] DEBUG 2022/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
  17. [kubeexec] DEBUG 2022/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
  18. [heketi] INFO 2022/10/23 03:09:39 Periodic health check status: node 1b33ad0dba20eaf23b5e3a4845e7cdb4 up=true
  19. [cmdexec] INFO 2022/10/23 03:09:39 Check Glusterd service status in node k8s-master-01
  20. [kubeexec] DEBUG 2022/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
  21. [kubeexec] DEBUG 2022/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
  22. [kubeexec] DEBUG 2022/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
  23. [heketi] INFO 2022/10/23 03:09:39 Periodic health check status: node b6100a5af9b47d8c1f19be0b2b4d8276 up=true
  24. [heketi] INFO 2022/10/23 03:09:39 Cleaned 0 nodes from health cache

7. Test the data

Test whether pods using this PV can share data; exec into the pod manually and create a file:

  1. [root@k8s-master-01 kubernetes]# kubectl get pods
  2. NAME                     READY   STATUS    RESTARTS   AGE
  3. glusterfs-cqw5d          1/1     Running   0          90m
  4. glusterfs-l2lsv          1/1     Running   0          90m
  5. glusterfs-lrdz7          1/1     Running   0          90m
  6. heketi-68795ccd8-m8x55   1/1     Running   0          49m
  7. pod-use-pvc              1/1     Running   0          20m
  8. [root@k8s-master-01 kubernetes]# kubectl exec -it pod-use-pvc /bin/sh
  9. / # cd /pv-data/
  10. /pv-data # echo "hello world">a.txt
  11. /pv-data # cat a.txt
  12. hello world

List the created volumes:

  1. [root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' volume list
  2. Id:08e8447256de2598952dcb240e615d0f    Cluster:1c5ffbd86847e5fc1562ef70c033292e    Name:vol_08e8447256de2598952dcb240e615d0f
  3. Id:b25f4b627cf66279bfe19e8a01e9e85d    Cluster:1c5ffbd86847e5fc1562ef70c033292e    Name:heketidbstorage

Mount the volume on the host to inspect its data; vol_08e8447256de2598952dcb240e615d0f is the volume name:

  1. [root@k8s-master-01 kubernetes]# mount -t glusterfs 192.168.2.10:vol_08e8447256de2598952dcb240e615d0f /mnt
  2. [root@k8s-master-01 kubernetes]# ll /mnt/
  3. total 1
  4. -rw-r--r-- 1 root 40000 12 Oct 23 11:29 a.txt
  5. [root@k8s-master-01 kubernetes]# cat /mnt/a.txt
  6. hello world

8. Test a Deployment

Test whether a Deployment can use the StorageClass normally; create an nginx Deployment:

  1. [root@k8s-master-01 kubernetes]# vim nginx-deployment-gluster.yaml
  2. apiVersion: apps/v1
  3. kind: Deployment
  4. metadata:
  5.   name: nginx-gfs
  6. spec:
  7.   selector:
  8.     matchLabels:
  9.       name: nginx
  10.   replicas: 2
  11.   template:
  12.     metadata:
  13.       labels:
  14.         name: nginx
  15.     spec:
  16.       containers:
  17.         - name: nginx
  18.           image: nginx
  19.           imagePullPolicy: IfNotPresent
  20.           ports:
  21.             - containerPort: 80
  22.           volumeMounts:
  23.             - name: nginx-gfs-html
  24.               mountPath: "/usr/share/nginx/html"
  25.             - name: nginx-gfs-conf
  26.               mountPath: "/etc/nginx/conf.d"
  27.       volumes:
  28.       - name: nginx-gfs-html
  29.         persistentVolumeClaim:
  30.           claimName: glusterfs-nginx-html
  31.       - name: nginx-gfs-conf
  32.         persistentVolumeClaim:
  33.           claimName: glusterfs-nginx-conf
  34. ---
  35. apiVersion: v1
  36. kind: PersistentVolumeClaim
  37. metadata:
  38.   name: glusterfs-nginx-html
  39. spec:
  40.   accessModes: [ "ReadWriteMany" ]
  41.   storageClassName: "gluster-heketi"
  42.   resources:
  43.     requests:
  44.       storage: 500Mi
  45. ---
  46. apiVersion: v1
  47. kind: PersistentVolumeClaim
  48. metadata:
  49.   name: glusterfs-nginx-conf
  50. spec:
  51.   accessModes: [ "ReadWriteMany" ]
  52.   storageClassName: "gluster-heketi"
  53.   resources:
  54.     requests:
  55.       storage: 10Mi

Check the resources:

  1. [root@k8s-master-01 kubernetes]# kubectl get pod,pv,pvc|grep nginx
  2. pod/nginx-gfs-7d66cccf76-mkc76   1/1     Running   0          2m45s
  3. pod/nginx-gfs-7d66cccf76-zc8n2   1/1     Running   0          2m45s
  4. persistentvolume/pvc-87481e3a-9b7e-43aa-a0b9-4028ce0a1abb   1Gi        RWX            Retain           Bound    default/glusterfs-nginx-conf   gluster-heketi            2m34s
  5. persistentvolume/pvc-f954a4ca-ea1c-458d-8490-a49a0a001ab5   1Gi        RWX            Retain           Bound    default/glusterfs-nginx-html   gluster-heketi            2m34s
  6. persistentvolumeclaim/glusterfs-nginx-conf   Bound    pvc-87481e3a-9b7e-43aa-a0b9-4028ce0a1abb   1Gi        RWX            gluster-heketi   2m45s
  7. persistentvolumeclaim/glusterfs-nginx-html   Bound    pvc-f954a4ca-ea1c-458d-8490-a49a0a001ab5   1Gi        RWX            gluster-heketi   2m45s

Check the mounts:

  1. [root@k8s-master-01 kubernetes]# kubectl exec -it nginx-gfs-7d66cccf76-mkc76 -- df -Th
  2. Filesystem                                        Type            Size  Used Avail Use% Mounted on
  3. overlay                                           overlay          44G  3.2G   41G   8% /
  4. tmpfs                                             tmpfs            64M     0   64M   0% /dev
  5. tmpfs                                             tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
  6. /dev/mapper/centos-root                           xfs              44G  3.2G   41G   8% /etc/hosts
  7. shm                                               tmpfs            64M     0   64M   0% /dev/shm
  8. 192.168.2.10:vol_adf6fc08c8828fdda27c8aa5ce99b50c fuse.glusterfs 1014M   43M  972M   5% /etc/nginx/conf.d
  9. 192.168.2.10:vol_454e14ae3184122ff9a14d77e02b10b9 fuse.glusterfs 1014M   43M  972M   5% /usr/share/nginx/html
  10. tmpfs                                             tmpfs           2.0G   12K  2.0G   1% /run/secrets/kubernetes.io/serviceaccount
  11. tmpfs                                             tmpfs           2.0G     0  2.0G   0% /proc/acpi
  12. tmpfs                                             tmpfs           2.0G     0  2.0G   0% /proc/scsi
  13. tmpfs                                             tmpfs           2.0G     0  2.0G   0% /sys/firmware

Mount the volume on the host and create a file:

  1. [root@k8s-master-01 kubernetes]# mount -t glusterfs 192.168.2.10:vol_454e14ae3184122ff9a14d77e02b10b9 /mnt/
  2. [root@k8s-master-01 kubernetes]# cd /mnt/
  3. [root@k8s-master-01 mnt]# echo "hello world">index.html
  4. [root@k8s-master-01 mnt]# kubectl exec -it nginx-gfs-7d66cccf76-mkc76 -- cat /usr/share/nginx/html/index.html
  5. hello world

Scale up the nginx replicas and check that the new pod mounts the volume normally:

  1. [root@k8s-master-01 mnt]# kubectl scale deployment nginx-gfs --replicas=3
  2. deployment.apps/nginx-gfs scaled
  3. [root@k8s-master-01 mnt]# kubectl get pods
  4. NAME                         READY   STATUS    RESTARTS   AGE
  5. glusterfs-cqw5d              1/1     Running   0          129m
  6. glusterfs-l2lsv              1/1     Running   0          129m
  7. glusterfs-lrdz7              1/1     Running   0          129m
  8. heketi-68795ccd8-m8x55       1/1     Running   0          88m
  9. nginx-gfs-7d66cccf76-mkc76   1/1     Running   0          8m55s
  10. nginx-gfs-7d66cccf76-qzqnv   1/1     Running   0          23s
  11. nginx-gfs-7d66cccf76-zc8n2   1/1     Running   0          8m55s
  12. [root@k8s-master-01 mnt]# kubectl exec -it nginx-gfs-7d66cccf76-qzqnv -- cat /usr/share/nginx/html/index.html
  13. hello world

This completes the deployment of Heketi + GlusterFS to provide dynamic storage in the Kubernetes cluster.

References:

https://github.com/heketi/heketi

https://github.com/gluster/gluster-kubernetes

https://www.jb51.net/article/244019.htm

Summary

That concludes this article on GlusterFS cluster storage for Kubernetes; for more on the topic, see the references above and related material.

Original article by starterknow; if reproducing it, please credit the source: https://www.starterknow.com/108303.html
