Ghost Migration

Migration from Ghost (Docker) to Ghost (K8s)

Recently I am preparing to migrate my blog from Ghost (Docker on bare metal) to Ghost (on K8s).

Changes

|          | Previous      | Now           |
|----------|---------------|---------------|
| Host     | Docker        | K8s           |
| Database | MariaDB       | MySQL         |
| S3       | Cloudflare R2 | Cloudflare R2 |
| Route    | Caddy         | Ingress       |

Steps

Boot up the new Ghost

I am using the official docker image to boot up the new Ghost. I am quite sure the official image is the most suitable one for me, rather than the Bitnami one (it bakes in far too much of its own custom logic)....

November 27, 2024 · 3 min · 576 words · Me

K8s Trivial Questions

Here are some examples and illustrations that I use quite a lot in my work routine.

How to reuse one env variable inside another (e.g. assemble many into one)

That is quite useful for cases like building a database DSN from its parts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-app
spec:
  containers:
    - name: mysql-container
      image: mysql:5.7
      env:
        - name: mysql_host
          value: "mysql.default.svc.cluster.local"
        - name: mysql_db
          value: "myapp"
        - name: mysql_port
          value: "3306"
        - name: PROTOCOL
          value: "mysql"
        - name: mysql_dsn
          value: "$(PROTOCOL)://$(mysql_host):$(mysql_port)/$(mysql_db)"
```

Why did my kustomization replace my base envFrom?...
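For clarity, here is a hedged sketch of what the `$(VAR)` references in `mysql_dsn` resolve to. Kubernetes performs this substitution itself when it starts the container; the shell expansion below only mirrors the result, and all names and values come from the Pod spec above.

```shell
# Mirror of the Pod spec's env values (taken from the example above).
PROTOCOL="mysql"
mysql_host="mysql.default.svc.cluster.local"
mysql_port="3306"
mysql_db="myapp"

# Kubernetes resolves $(PROTOCOL)://$(mysql_host):$(mysql_port)/$(mysql_db)
# in the mysql_dsn value to the fully assembled DSN:
mysql_dsn="${PROTOCOL}://${mysql_host}:${mysql_port}/${mysql_db}"
echo "$mysql_dsn"
# → mysql://mysql.default.svc.cluster.local:3306/myapp
```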

April 30, 2024 · 1 min · 135 words · Me

ArgoCD intermittent updates to manifest fail

In our setup, GitHub Actions invokes the argocd CLI to update our app's manifest, with a step like the one below:

```yaml
- name: Update ArgoCD Image
  uses: clowdhaus/argo-cd-action/@main
  if: ${{ inputs.argocd_app_name != '' }}
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  with:
    version: 2.6.7
    command: app set ${{ inputs.argocd_app_name }}
    options: |
      --server ${{ vars.ARGOCD_URL }}
      --kustomize-image ${{ fromJSON(steps.meta.outputs.json).tags[0] }}
      --auth-token ${{ secrets.MEEX_ARGOCD_TOKEN }}
```

In this case, the CLI updates the ArgoCD app's source manifest....
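For reference, the action step above boils down to a plain `argocd app set` invocation. A minimal sketch, where the `<...>` placeholders stand in for the templated workflow values (these placeholders are mine, not values from the post):

```shell
# The argocd CLI call the GitHub Action effectively performs.
# <app-name>, <argocd-url>, <image:tag>, and <token> are placeholders.
ARGOCD_CMD='argocd app set <app-name> --server <argocd-url> --kustomize-image <image:tag> --auth-token <token>'
echo "$ARGOCD_CMD"
```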

April 30, 2024 · 2 min · 363 words · Me

Helm Stuck at "Uninstalling" Status

Sometimes we do not pay attention to the order in which Helm releases are uninstalled: for example, we remove a release that includes CRDs and then try to remove a custom resource that no longer has a valid reference or finalizer. That resource blocks resource cleanup, and the Helm release gets stuck in the "uninstalling" status. There is a quick fix that can easily sort it out. helm plugin install https://github....
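Since the plugin URL above is truncated, here is a hedged alternative (not the plugin fix from the post): clearing the stuck resource's finalizers by hand usually lets garbage collection, and therefore the Helm uninstall, proceed. The resource kind and name are hypothetical, and the kubectl/helm lines are commented out because they need a live cluster.

```shell
# Merge-patch body that empties metadata.finalizers on the stuck resource.
PATCH='{"metadata":{"finalizers":[]}}'

# Hypothetical resource "examples.mycrd.io/example"; run against a real cluster:
# kubectl patch examples.mycrd.io example --type=merge -p "$PATCH"
# helm uninstall <release-name>   # should now complete instead of hanging

echo "$PATCH"
```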

April 6, 2024 · 1 min · 103 words · Me

K8s Etcd Backup

With the rapid growth of cloud computing, K8s has long been at the core of modern services. And at the core of K8s sits the etcd cluster, which naturally plays a critical role. Using a managed K8s from a cloud provider keeps things relatively simple: the provider largely takes care of upgrading and maintaining K8s itself, so little effort is needed beyond scheduling time to cooperate on upgrades. But what about a self-built K8s cluster?

Having a solid backup and restore plan for the etcd cluster is essential; in a real outage it can be decisive.

In our experience, the cluster failures we have had all originated in the etcd cluster, and because we lacked an adequate backup and restore plan, the outcome each time was a full reinstall of the cluster, followed by a long, tedious, and painful recovery. That consumes a great deal of time and effort, leaves a very bad impression, and ultimately drives users away.

Our production cluster is installed with Kubespray, based on release-2.23. Although kubespray backs up the current etcd during every operation: https://github.com/kubernetes-sigs/kubespray/blob/master/roles/etcd/handlers/backup.yml we cannot use kubespray itself for backups, since our operating principle with kubespray is to avoid running it unless strictly necessary. As a side note, kubespray also ships an etcd recovery procedure: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/recover-control-plane.md which you can follow when a control-plane or etcd node actually fails.

Script

Considering backup reliability, and since the cluster currently has no multi-copy backup storage, for now we simply run a backup script on one of the etcd hosts and upload the result to a Hetzner StorageBox.

```bash
#!/usr/bin/env bash
#
# Etcd backup
set -ex

# A kubespray-installed etcd cluster keeps its configuration in /etc/etcd.env by default
ETCD_ENV_FILE=/etc/etcd.env
BACKUP_DIR=/data/scripts/etcd_backup/
# Keep all backups within the last 21 days
DAYS_RETAIN=21
DT=$(date +%Y%m%d...
```
...
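The script is cut off above, so here is a hedged sketch of the retention step it presumably continues into: pruning backups older than DAYS_RETAIN days. The `find -mtime` invocation is my assumption, and the demo uses a temp directory instead of the real /data/scripts/etcd_backup/ path.

```shell
#!/usr/bin/env bash
set -e

# Demo stand-ins for the script's variables.
BACKUP_DIR=$(mktemp -d)      # stands in for /data/scripts/etcd_backup/
DAYS_RETAIN=21

# Fake one stale and one fresh backup file (GNU touch -d).
touch -d '30 days ago' "$BACKUP_DIR/etcd_20240101.db"
touch "$BACKUP_DIR/etcd_today.db"

# Assumed retention step: delete backups older than DAYS_RETAIN days.
find "$BACKUP_DIR" -type f -mtime +"$DAYS_RETAIN" -delete

ls "$BACKUP_DIR"
# → etcd_today.db
```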

March 28, 2024 · 2 min · 219 words · Me