Hi there 👋

💡You think today is just another day in your life. It’s not just another day. It’s the one day that is given to you — today. It’s a gift. It’s the only gift that you have right now, and the only appropriate response is gratefulness.

K8s on Bare Metal: Teleport via Helm

Helm charts: I use Terraform with the Helm provider to install the Teleport charts. The starting point is the Helm chart reference: https://goteleport.com/docs/reference/helm-reference/. For a K8s cluster we need to install at least two components: teleport-cluster (which includes teleport-operator) and teleport-kube-agent. A few steps further: set up GitHub login and set up applications. We also need cert-manager to issue certificates. Because we use a DNS solver, a TXT record must be created on the domain every time a wildcard certificate is issued so that ACME can verify it, which means the configuration depends on the DNS provider....
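As a minimal sketch of that DNS-01 setup (assuming Cloudflare as the provider; the issuer name, e-mail, and API-token Secret below are placeholders, not the post's actual config), a cert-manager ClusterIssuer could look like this:

```yaml
# Hypothetical sketch: ACME DNS-01 via Cloudflare. cert-manager creates
# the TXT record for each wildcard certificate automatically.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns            # placeholder name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com       # placeholder e-mail
    privateKeySecretRef:
      name: letsencrypt-dns-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token   # assumed pre-created Secret
              key: api-token
```

A different DNS provider only changes the `dns01` solver block; the rest of the issuer stays the same.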

November 20, 2024 · 4 min · 816 words · Me

Naive Server Backup

Once the number of privately maintained servers grows, and considering how unstable server vendors can be (they might simply run off), it is essential to have at least one backup strategy for each server.

Setting up the storage service: my previous approach was to tar the key directories on each server and scp them back to my home server over Tailscale. But that requires permission control (tuning some ssh-related parameters, which I may share another time), and every server has to be configured individually, which is quite tedious. I was also worried that with such a server exposed to the public internet, one brute-forced box would mean losing everything (if any one of my servers got hacked, everything in my home environment would be gone). So I took this opportunity to try a different approach and came up with one off the top of my head: the router maps external ports to internal ports via port forwarding. MinIO is a self-hosted S3 service compatible with the AWS S3 protocol, so most SDKs in most languages can talk to it directly, and any backup tool that supports S3 can be configured against it. Since I have my own NAS and a public IP, the service lives at home; and because the NAS already mirrors two disks for data redundancy, a single-instance MinIO container is enough, with no extra redundancy needed. Deploy it following https://min.io/docs/minio/container/index.html with a Docker Compose file like the following:

```yaml
version: "3.8"
services:
  minio:
    image: quay.io/minio/minio
    container_name: minio
    networks:
      - my_network
    volumes:
      - /mnt/data/Minio:/data
    restart: unless-stopped
    command:
      - server
      - /data
      - --console-address
      - ":9001"
    env_file:
      - ./data/envs/minio.env
  caddy:
    image: iarekylew00t/caddy-cloudflare
    restart: unless-stopped
    container_name: caddy
    extra_hosts:
      - "host....
```
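Any S3-capable tool can then point at this endpoint. As a hypothetical sketch (restic is just one such tool; the endpoint, bucket, credentials, and paths below are placeholders, not my actual setup), a one-shot backup container on a remote server could look like:

```yaml
# Hypothetical sketch: restic backing up /srv to the home MinIO over S3.
services:
  backup:
    image: restic/restic
    volumes:
      - /srv:/data:ro                # directories to back up, read-only
    environment:
      RESTIC_REPOSITORY: s3:https://s3.example.com/backups  # assumed MinIO endpoint + bucket
      RESTIC_PASSWORD: change-me                            # repository encryption password
      AWS_ACCESS_KEY_ID: minio-access-key                   # MinIO credentials
      AWS_SECRET_ACCESS_KEY: minio-secret-key
    command: ["backup", "/data"]
```

Note the repository has to be initialized once (restic init) before the first backup run.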

November 20, 2024 · 9 min · 1839 words · Me

Hello World

Hello World! Proudly made with Hugo and PaperMod.

November 20, 2024 · 1 min · 7 words · Me

Redis Replica Full Sync Failed

After running for a long time, my Redis instance consumes a lot of memory (the RDB file takes about 65 GB on disk). Now I want to add replication to this standalone instance to keep my data safe, so I start a new replica instance with the same ACL permissions as the master and add the following options to enable Redis replication: slaveof redis 6379 and masterauth xxxx. The replica soon starts from a brand-new disk snapshot (a new AOF checkpoint) and begins syncing from the master....
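As a hedged sketch of the replica side (not the post's actual deployment; the master hostname redis and the password xxxx are the placeholders from above), the same options can be passed as redis-server flags in Docker Compose:

```yaml
# Hypothetical sketch: a replica wired to the master via command-line flags.
# --replicaof is the modern spelling of slaveof.
services:
  redis-replica:
    image: redis:7
    command:
      - redis-server
      - --replicaof
      - redis        # master hostname, as in the excerpt
      - "6379"
      - --masterauth
      - xxxx         # placeholder password from the excerpt
```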

July 22, 2024 · 2 min · 216 words · Me

K8s on Bare Metal: IP Passthrough

I've received complaints that, inside biz pods, apps didn't see the real client IP (instead of the public IPv4 address, they only got in-cluster IPs like 10.233.x.y, or the host IP), so they couldn't block over-requesting clients by IP. The topology: since the rate-limit component works fine in development, something must be preventing the app from getting the correct IPs. Let's run some quick experiments. # svc....
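One common knob to check in this situation (a sketch, not necessarily the post's eventual fix) is the Service's externalTrafficPolicy: with the default Cluster policy, kube-proxy may SNAT incoming traffic so pods see a node or cluster IP, while Local preserves the client source IP at the cost of only routing to pods on the receiving node. The name and ports below are placeholders:

```yaml
# Hypothetical sketch: preserve the client source IP by skipping the SNAT hop.
apiVersion: v1
kind: Service
metadata:
  name: biz-app                  # placeholder name
spec:
  type: NodePort
  externalTrafficPolicy: Local   # keep the original client IP
  selector:
    app: biz-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
```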

May 21, 2024 · 4 min · 658 words · Me