
GitLab Runner Operator Deployment and Configuration

伯斌 2025-6-13 10:16:31
Background

The company-managed GitLab Runner pool is under-resourced, so when many jobs run concurrently we see large-scale queueing, which hurts productivity. To address this, we can self-host a set of Runners and dedicate them to the repositories that need them.

Self-Hosting Runners in a Kubernetes Cluster

Reference documentation

Prerequisites


  • Kubernetes v1.21 or later
  • cert-manager v1.7.1
Installing and Configuring the Operator


1. Install OLM (Operator Lifecycle Manager):

```
$ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.32.0/install.sh | bash -s v0.32.0
# If you have already downloaded install.sh, you can run it directly:
# chmod +x install.sh && ./install.sh v0.32.0
$ kubectl get pod -n olm
NAME                                READY   STATUS    RESTARTS   AGE
catalog-operator-55894c8cf8-d84vw   1/1     Running   0          3d
olm-operator-6946d4b877-t55w6       1/1     Running   0          3d
operatorhubio-catalog-nvq9q         1/1     Running   0          17h
packageserver-64fdd96cbb-ggpnh      1/1     Running   0          3d
packageserver-64fdd96cbb-s7r59      1/1     Running   0          3d
```
2. Install the Operator:

```
$ kubectl create -f https://operatorhub.io/install/gitlab-runner-operator.yaml
```
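If you prefer to audit what you apply or pin a version, the manifest behind that URL is essentially an OLM Subscription. A hedged sketch of the equivalent resource (the channel name and metadata in the actual file may differ):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: gitlab-runner-operator
  namespace: operators
spec:
  channel: stable                               # assumed channel name
  name: gitlab-runner-operator
  source: operatorhubio-catalog                 # catalog installed with OLM above
  sourceNamespace: olm
  startingCSV: gitlab-runner-operator.v1.37.0   # optional: pin the version shown below
```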
3. Check the installed resources:

```
$ kubectl get csv -n operators
NAME                             DISPLAY         VERSION   REPLACES                         PHASE
gitlab-runner-operator.v1.37.0   GitLab Runner   1.37.0    gitlab-runner-operator.v1.36.0   Succeeded
$ kubectl get pod -n operators
NAME                                                             READY   STATUS    RESTARTS   AGE
gitlab-runner-gitlab-runnercontroller-manager-84d6f69dc8-7nsdc   2/2     Running   0          3d
```
Installing and Configuring GitLab Runner

1. Install MinIO
To give GitLab Runner a cache and speed up builds, we deploy MinIO as the runner cache (here, version 12.9.0 of the Bitnami Helm chart). For its storage, we configure a StorageClass backed by host directories, using local-path-provisioner for dynamic PVC provisioning:

```
$ cat /home/yuhaohao/local-path.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: local-path-provisioner-role
  namespace: local-path-storage
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "configmaps", "pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: local-path-provisioner-bind
  namespace: local-path-storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: registry.test.com/test/rancher/local-path-provisioner:v0.0.31
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CONFIG_MOUNT_PATH
              value: /etc/config/
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
            "nodePathMap":[
            {
                    "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                    "paths":["/data"]
            }
            ]
    }
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      priorityClassName: system-node-critical
      tolerations:
        - key: node.kubernetes.io/disk-pressure
          operator: Exists
          effect: NoSchedule
      containers:
      - name: helper-pod
        image: registry.test.com/test/busybox
        imagePullPolicy: IfNotPresent
$ kubectl apply -f local-path.yaml
$ kubectl get pod -n local-path-storage
NAME                                      READY   STATUS    RESTARTS   AGE
local-path-provisioner-6bf6d4456b-f9sbr   1/1     Running   0          3d
$ kubectl get sc
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  16d
```
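Before wiring MinIO to this StorageClass, you can sanity-check dynamic provisioning with a throwaway claim (the name below is illustrative). Because the class uses `volumeBindingMode: WaitForFirstConsumer`, the claim will stay Pending until a pod actually mounts it, then bind and create a directory under /data on the node:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-smoke-test   # hypothetical name, delete after testing
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```

Apply it, attach any pod that mounts the claim, confirm the PVC goes Bound, then delete both.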
Next, create the gitlab-runner namespace if it does not already exist, then create the PVC that MinIO will use:

```
$ cat gitlab-runner-minio-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/instance: minio
    app.kubernetes.io/managed-by: Helm
  name: gitlab-runner-minio-pvc
  namespace: gitlab-runner
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: local-path
  volumeMode: Filesystem
$ kubectl apply -f gitlab-runner-minio-pvc.yaml
$ kubectl get pvc -n gitlab-runner
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
gitlab-runner-minio-pvc   Bound    pvc-31430c05-16cc-4441-814b-7aae70b7b134   100Gi      RWO            local-path     <unset>                 15d
```
Then deploy MinIO itself with Helm:

```
$ cat minio-values.yaml
image:
  registry: registry.test.com
  repository: test/bitnami/minio
  tag: 2023.11.1-debian-11-r0
  pullPolicy: IfNotPresent
fullnameOverride: "minio"
auth:
  rootUser: admin
  rootPassword: "minio12345"
defaultBuckets: "gitlab"
persistence:
  ## @param persistence.enabled Enable MinIO® data persistence using PVC. If false, use emptyDir
  ##
  enabled: true
  storageClass: ""
  ## @param persistence.mountPath Data volume mount path
  mountPath: /bitnami/minio/data
  accessModes:
    - ReadWriteOnce
  size: 100Gi
  ## @param persistence.annotations Annotations for the PVC
  ##
  annotations: {}
  ## @param persistence.existingClaim Name of an existing PVC to use (only in `standalone` mode)
  ##
  existingClaim: "gitlab-runner-minio-pvc"
$ helm install -f minio-values.yaml minio ./minio -n gitlab-runner
$ kubectl get pod -n gitlab-runner
NAME                     READY   STATUS    RESTARTS   AGE
minio-6d947866f5-rhmwz   1/1     Running   0          3d
```
2. Install GitLab Runner
Configure the shared runner settings. Note that the pre_clone_script here works around the `Git LFS is not enabled on this GitLab server` error:

```
$ cat custom.yaml
apiVersion: v1
data:
  config.toml: |-
    [[runners]]
      environment = ["GIT_LFS_SKIP_SMUDGE=1"]
      pre_clone_script = """
        echo "Setting up secrets"
        ################## run in runner-helper
        echo "######### ${GITLAB_USER_LOGIN}"
        git config --global lfs.url "https://lfs.test.com"
        git config --global credential.helper store
        echo "https://test:test@lfs.test.com" >> $HOME/.git-credentials
        mkdir -pv $HOME/.ssh >> /dev/null
        chmod 700 $HOME/.ssh
        echo -e "Host gitlab.test.com\n\tStrictHostKeyChecking no\n" >> $HOME/.ssh/config
      """
      [runners.kubernetes]
        image_pull_secrets = ["inner"]
        pull_policy = "if-not-present"
        privileged = true
        [runners.kubernetes.node_selector]
          "ci-node" = "True"
        [runners.kubernetes.node_tolerations]
          "ci-node=true" = "NoSchedule"
kind: ConfigMap
metadata:
  name: runner-custom-config
  namespace: gitlab-runner
```
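Because `GIT_LFS_SKIP_SMUDGE=1` makes clones skip LFS content, a job that actually needs the large files must fetch them explicitly. A hedged .gitlab-ci.yml sketch (the job name and build step are illustrative):

```yaml
build-with-lfs:
  tags: [test]            # route to the self-hosted runner configured in this article
  script:
    - git lfs pull        # fetch the LFS objects skipped during clone
    - make build          # hypothetical build step
```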
Create the Secret holding the MinIO credentials:

```
$ cat minio-secret.yaml
---
apiVersion: v1
data:
  accesskey: YWRtaW4=
  secretkey: bWluaW8xMjM0NQ==
kind: Secret
metadata:
  name: minio-secret-for-runner
  namespace: gitlab-runner
type: Opaque
```
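The two data fields are just the base64 of the MinIO root credentials set in minio-values.yaml (admin / minio12345), so they can be regenerated at any time:

```shell
# Encode the MinIO root credentials for the Secret's data fields.
ACCESSKEY=$(printf '%s' admin | base64)
SECRETKEY=$(printf '%s' minio12345 | base64)
echo "accesskey: $ACCESSKEY"   # YWRtaW4=
echo "secretkey: $SECRETKEY"   # bWluaW8xMjM0NQ==
```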
Create the image-pull secret:

```
$ cat secrets-inner.yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyjkjkj9ib3QuZG9ja2VyIiwicGFzc3dvcmQiOiJTVGFpMjAyMy5kayIsImF1dGgiOiJkbWx3WlhJdWNtOWliM1F1Wkc5amEyVnlPbE5VWVdreU1ESXpMbVJyIn19fQ==
kind: Secret
metadata:
  name: inner
  namespace: gitlab-runner
type: kubernetes.io/dockerconfigjson
```
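Rather than hand-editing the base64 payload, the `.dockerconfigjson` value can be generated from the credentials. A sketch with placeholder registry, user, and password (`kubectl create secret docker-registry` achieves the same thing):

```shell
# Placeholders: substitute your real registry credentials.
REGISTRY="registry.test.com"
DOCKER_USER="bot"
DOCKER_PASS="s3cret"

# The "auth" field is base64("user:password").
AUTH=$(printf '%s:%s' "$DOCKER_USER" "$DOCKER_PASS" | base64 | tr -d '\n')

# The whole JSON document is then base64-encoded into .dockerconfigjson.
DOCKERCONFIGJSON=$(printf '{"auths":{"%s":{"username":"%s","password":"%s","auth":"%s"}}}' \
  "$REGISTRY" "$DOCKER_USER" "$DOCKER_PASS" "$AUTH" | base64 | tr -d '\n')
echo "$DOCKERCONFIGJSON"

# Equivalent, letting kubectl build the secret:
#   kubectl create secret docker-registry inner -n gitlab-runner \
#     --docker-server="$REGISTRY" --docker-username="$DOCKER_USER" \
#     --docker-password="$DOCKER_PASS"
```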
Now define the runner itself:

```
$ cat test-runner.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-runner-secret-test
  namespace: gitlab-runner
type: Opaque
stringData:
  runner-registration-token: ""
  runner-token: "glrt-t2_73uew3jtestEhDM" # note: this is the GitLab runner registration token
---
apiVersion: apps.gitlab.com/v1beta2
kind: Runner
metadata:
  name: test
  namespace: gitlab-runner
spec:
  logLevel: debug
  # Do not use gitlab-runner-ocp 13.x here: after a runner restart,
  # the runner fails to re-register and reconnect.
  runnerImage: "registry.test.com/test/gitlab-runner-ocp:v17.4.0"
  gitlabUrl: https://gitlab.test.com/
  buildImage: alpine
  helperImage: registry.test.com/test/gitlab-runner-helper:ubuntu-x86_64-v17.5.1
  concurrent: 50
  token: gitlab-runner-secret-test
  tags: "test"
  locked: true
  cacheType: s3
  cacheShared: true
  s3:
    server: minio:9000
    credentials: minio-secret-for-runner
    bucket: sumo19
    insecure: true
  config: runner-custom-config
```
Apply everything and check the runner pods:

```
$ kubectl apply -f .
$ kubectl get pod -n gitlab-runner
NAME                          READY   STATUS    RESTARTS   AGE
minio-6d947866f5-rhmwz        1/1     Running   0          3d1h
test-runner-7475ddc7f-nn7xh   1/1     Running   0          15h
```
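With the runner registered, a repository can target it by tag and use the shared S3 cache. A hedged sketch of a .gitlab-ci.yml (job name and cached path are illustrative):

```yaml
stages: [build]

build-job:
  stage: build
  tags: [test]                   # matches the Runner's tags field
  cache:
    key: "$CI_COMMIT_REF_SLUG"   # one cache per branch, stored in MinIO
    paths:
      - node_modules/            # hypothetical cached path
  script:
    - echo "building on the self-hosted runner"
```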
Run a Test

[Screenshot: a test pipeline running on the new runner]


Source: contributed by a 程序园 user.