I’ve been in the process of migrating a lot of things back to kubernetes, and I’m debating whether I should have separate private and public clusters.

Some stuff I’ll keep out of kubernetes and leave in separate vms, like nextcloud/immich/etc. Basically anything I think would be more likely to have sensitive data in it.

I also have a few public-facing things like public websites, a matrix server, etc.

Right now I’m solving this by having two separate ingress controllers in one cluster - one for private stuff only available over a vpn, and one only available over public ips.
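A rough sketch of that split-ingress setup, assuming ingress-nginx installed twice via its Helm chart (the class names, namespaces, and IPs here are placeholders, not my actual values):

```shell
# Public controller, exposed on a public IP:
helm install ingress-public ingress-nginx/ingress-nginx \
  --namespace ingress-public --create-namespace \
  --set controller.ingressClassResource.name=public \
  --set controller.service.loadBalancerIP=203.0.113.10

# Private controller, bound to a VPN-only address:
helm install ingress-private ingress-nginx/ingress-nginx \
  --namespace ingress-private --create-namespace \
  --set controller.ingressClassResource.name=private \
  --set controller.service.loadBalancerIP=100.64.0.10

# Each Ingress resource then picks a controller with
# "ingressClassName: public" or "ingressClassName: private".
```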

The main concern I’d have is reducing the blast radius if something gets compromised. But I also don’t know if I really want to maintain multiple personal clusters. I am using Omni+Talos for kubernetes, so it’s not too difficult to maintain two clusters. It would be less efficient resource-wise, though, since some of the nodes are baremetal servers and others are only vms. I wouldn’t be able to share a large baremetal server anymore, unless I split it into vms.

What are y’all’s opinions on whether to keep everything in one cluster or not?

  • farcaller@fstab.sh · 3 months ago

    I’ve dealt with exactly the same dilemma in my homelab. I used to have 3 clusters, because you’d always want an “infra” cluster that the others can talk to (for workloads like monitoring, logs, a docker registry, etc.). In the end, I decided it’s not worth it.

    I separated on the public/private boundary and moved everything publicly facing to a separate cluster. It can only talk to my primary cluster via specific endpoints (via tailscale ingress), and I no longer do a multi-cluster mesh (I used to have istio for that, then cilium). This way, the public cluster doesn’t have to be too large capacity-wise, e.g. all the S3 api needs are served by garage from the private cluster, but the public cluster will reverse-proxy into it for specific needs.

    • johntash@eviltoast.org (OP) · 3 months ago

      I did actually consider a 3rd cluster for infra stuff like dns/monitoring/etc, but at the moment I have those things in separate vms so that they don’t depend on me not breaking kubernetes.

      Do you have your actual public services running in the public cluster, or only the load balancer/ingress for those public resources?

      Also how are you liking garage so far? I was looking at it (instead of minio) to set up backups for a few things.

      • farcaller@fstab.sh · 3 months ago

        Actual public services run there, yeah. If any of them is compromised, it can only access limited internal resources, and an attacker would have to fully compromise the cluster to get the secrets needed to access those in the first place.

        I really like garage. I remember when minio was straightforward and easy to work with. Garage is that thing now. I use it because it’s just so much easier to handle file serving where you have s3-compatible uploads, even when you don’t do any real clustering.

        • johntash@eviltoast.org (OP) · 3 months ago

          Do you use garage for backups by any chance? I was wanting to deploy it in kubernetes, but one of my uses would be to back up volumes, and… that doesn’t really help me if the kubernetes cluster itself is broken somehow and I have to rebuild it.

          I kind of want to avoid a separate cluster for storage, or even separate vms. I’m still thinking of deploying garage in k8s and then just using rclone or something to copy the contents from garage s3 to my nas.
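          Something like this is what I had in mind — a sketch with rclone’s S3 backend pointed at an in-cluster garage endpoint (the remote name, endpoint, keys, bucket, and nas path are all placeholders):

          ```shell
          # Define an S3-compatible remote for garage
          # (3900 is garage's default S3 API port).
          rclone config create garage s3 \
            provider Other \
            endpoint http://garage.example.internal:3900 \
            access_key_id GK_PLACEHOLDER \
            secret_access_key PLACEHOLDER_SECRET

          # One-way mirror of the bucket to the nas;
          # could run from cron or a k8s CronJob.
          rclone sync garage:backups /mnt/nas/garage-backups --checksum
          ```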

          • farcaller@fstab.sh · 3 months ago

            No. It’s my in-cluster storage that I only use for things that are easier to work with via the S3 api, and I do backups outside of the k8s scope (it’s a bunch of various solutions that boil down to offsite zfs replication, basically). I’d suggest taking a look at garage’s replication features if you want it to be durable.
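            For reference, replication in garage is driven by its cluster layout — roughly like this, assuming a replication factor set in garage.toml and multiple nodes (node IDs, zone names, and capacities below are placeholders):

            ```shell
            # List known nodes and their IDs.
            garage status

            # Assign each node a zone and a storage capacity;
            # garage replicates objects across zones.
            garage layout assign -z zone-a -c 1T <node1-id>
            garage layout assign -z zone-b -c 1T <node2-id>

            # Commit the staged layout changes.
            garage layout apply --version 1
            ```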