I’ve been slowly moving along in this self-hosting journey and now have a number of services that I regularly use and depend on. Of course I’m backing things up, but I still worry about screwing up my server and having to roll back/rebuild/fix whatever got messed up.

I’m just curious: for those of you with home labs, do you use a testing environment of some kind, or do you just push whatever you’re working on straight to "production"?

  • edit: grammar
  • N0x0n@lemmy.ml · 13 days ago

    Production is my testing lab, but only in my homelab! I guess I don’t care about perfectly securing my services (really dumb and easy passwords, no 2FA, passwords sitting in plain sight…) because I’m not exposing them directly to the web and only access them externally via WireGuard! That’s really bad practice though; I’ll probably clean up that mess sometime soon, but right now I can’t, I have to cook some eggs…

    There are two things, though, where I actually do have a more complex workflow:

    • A rather complex automated incremental backup script for my Docker container volumes, databases, config files, and compose files (see the first sketch below).

    • A self-hosted mini-CA to access all my services via a nice .lab domain and get rid of that pesky certificate warning on my devices (see the second sketch below).
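
    Since the script itself isn’t shown, here is a minimal sketch of what an incremental backup of that kind can look like in Python: only files modified since the previous run get copied into a timestamped snapshot directory. The source paths and snapshot layout are made-up placeholders, not the commenter’s setup.

    ```python
    #!/usr/bin/env python3
    """Minimal incremental backup sketch: copy files modified since the last
    run into a timestamped snapshot directory. Paths are placeholders."""
    import shutil
    import time
    from pathlib import Path

    # Hypothetical sources: Docker volumes, compose files, configs.
    SOURCES = [Path("/var/lib/docker/volumes"), Path("/opt/compose")]
    BACKUP_ROOT = Path("/mnt/backups")
    STATE_FILE = BACKUP_ROOT / ".last_run"


    def last_run_time() -> float:
        """Timestamp of the previous backup, or 0.0 to force a full copy."""
        try:
            return float(STATE_FILE.read_text())
        except (FileNotFoundError, ValueError):
            return 0.0


    def backup() -> None:
        BACKUP_ROOT.mkdir(parents=True, exist_ok=True)
        since = last_run_time()
        snapshot = BACKUP_ROOT / time.strftime("%Y-%m-%d_%H%M%S")
        for source in SOURCES:
            for path in source.rglob("*"):
                if path.is_file() and path.stat().st_mtime > since:
                    # Mirror the original directory structure inside the snapshot.
                    target = snapshot / path.relative_to(path.anchor)
                    target.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(path, target)
        STATE_FILE.write_text(str(time.time()))


    if __name__ == "__main__":
        backup()
    ```

    In practice, dedicated tools such as restic or borg already handle incremental snapshots, deduplication and encryption, and live databases are safer dumped first (for example with pg_dump) so the backup is consistent.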
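
    And here is a rough sketch of the mini-CA side, using the Python cryptography package to mint a self-signed root certificate that you can import on your devices and then use to sign certificates for your .lab hosts. The names, key type and lifetime are placeholders; the commenter may well be using a ready-made tool such as step-ca or easy-rsa instead.

    ```python
    #!/usr/bin/env python3
    """Sketch: generate a self-signed root CA for an internal .lab domain.
    Requires the 'cryptography' package; names and lifetimes are placeholders."""
    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    # Private key for the CA; keep this file somewhere safe or offline.
    key = ec.generate_private_key(ec.SECP256R1())

    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Homelab Root CA")])
    now = datetime.datetime.now(datetime.timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )

    # Write out the CA key and certificate; the .pem gets imported on devices.
    with open("homelab-ca.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.NoEncryption(),
        ))
    with open("homelab-ca.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    ```

    Service certificates for names like myservice.lab would then be signed by this root; importing homelab-ca.pem into each device’s trust store is what makes the browser warning go away.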

    I always test whether my backups actually work, in a VM on my personal desktop computer, because no backups would mean all those years of tinkering were for nothing… That would bring on some nasty depression…

    Edit: I have a rather small homelab, everything on an old laptop, but I’m still quite happy with the result and it works as expected.

  • morpheus17pro@lemmy.ml · 12 days ago

    In my case, yes. My setup is managed with Ansible playbooks, so I have a dev inventory and a playbook that spins up a virtualized environment mimicking my home lab as closely as possible (a few details can’t be fully replicated).

    That way, I usually prepare my new setups in dev, then deploy to my prod setup and test the few aspects I can’t reproduce in dev (roughly the flow sketched at the end of this comment).

    Finally, I have everything backed by a (private) git repo.
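
    Not the commenter’s actual tooling, just a minimal sketch of the “same playbook, different inventories” idea: a small Python wrapper that dry-runs and then applies a playbook against a hypothetical dev inventory, and only touches prod after a confirmation. The playbook name and inventory paths are assumptions.

    ```python
    #!/usr/bin/env python3
    """Sketch of a 'same playbook, different inventory' flow.
    Playbook and inventory paths are hypothetical."""
    import subprocess
    import sys

    PLAYBOOK = "site.yml"
    DEV_INVENTORY = "inventories/dev/hosts.yml"
    PROD_INVENTORY = "inventories/prod/hosts.yml"


    def run(inventory: str, check: bool = False) -> None:
        """Run the playbook against one inventory; --check is a dry run."""
        cmd = ["ansible-playbook", "-i", inventory, PLAYBOOK, "--diff"]
        if check:
            cmd.append("--check")
        subprocess.run(cmd, check=True)


    if __name__ == "__main__":
        run(DEV_INVENTORY, check=True)   # preview what would change in dev
        run(DEV_INVENTORY)               # apply to dev and test there
        if input("Dev looks good, deploy to prod? [y/N] ").strip().lower() == "y":
            run(PROD_INVENTORY)
        else:
            sys.exit("Stopped before touching prod.")
    ```

    The --check and --diff flags give a cheap preview of what would change before anything is applied for real.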

  • ambitiousslab@lemmy.ml · 12 days ago

    For services only I depend on, I have production only, since I can only inflict damage on myself and can often work around problems.

    For the XMPP server my friends and family also depend on, I have a dedicated nonprod VPS. My services are driven by Ansible playbooks, so I’ll tweak the playbook with whatever change I want to make, check that it works in nonprod, and then run the same playbook against prod.

    Whenever there’s a new Debian Stable release, I’ll rebuild the servers completely, to try and prevent “drift” between the nonprod and prod versions (not that I change things often enough for this to become a big problem). This is also the big test of my backups, which so far haven’t been needed in a “real” emergency 🤞

  • beerclue@lemmy.world · 13 days ago

    I personally use my home lab to test and learn, and I try to mimic a corporate environment. I have multiple instances of DNS, proxy, etc., and a “prod” and a separate “staging” k8s environment. As much as possible, without going nuts about it, I try out updates and potentially breaking changes in the staging cluster first.

  • r0ertel@lemmy.world · 12 days ago

    After breaking “prod” many times, I have Dev (my local machine), Test (a small VM) and Prod (a big VM). Test just has less RAM and storage, and I need to spin down certain K8s things to spin up others, but it’s a close mirror of Prod, just smaller.

  • vegetaaaaaaa@lemmy.world · 10 days ago

    My prod and testing environments are 2 libvirt VMs on the same hypervisor. They run the same services, deployed and managed by ansible. The testing VM just gets less disk/CPU/RAM resources, and is powered off most of the time. Simple config changes? Straight to prod. New feature, risky change? Testing first.
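
    Not vegetaaaaaaa’s actual workflow, but a small sketch of what “testing first” can look like with that kind of setup, using the libvirt Python bindings to power the normally-off testing VM on for a run and back off afterwards (the connection URI and VM name are placeholders):

    ```python
    #!/usr/bin/env python3
    """Sketch: boot the testing VM, run checks, shut it down again.
    Requires libvirt-python; the URI and domain name are placeholders."""
    import time

    import libvirt

    TESTING_VM = "testing"  # hypothetical libvirt domain name


    def main() -> None:
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.lookupByName(TESTING_VM)
            if not dom.isActive():
                dom.create()      # power the VM on
                time.sleep(60)    # crude wait for it to finish booting

            # ... run the ansible playbook / tests against the testing VM here ...

            if dom.isActive():
                dom.shutdown()    # ACPI shutdown once testing is done
        finally:
            conn.close()


    if __name__ == "__main__":
        main()
    ```

    The same can be done from the shell with virsh start testing and virsh shutdown testing.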