• 0 Posts
  • 16 Comments
Joined 2 years ago
Cake day: August 6th, 2023








  • For my larger boxes, I only use SuperMicro. Most other vendors do weird shit to their backplanes that makes them incompatible, or charge license fees for their IPMI/DRAC/lights-out management. Any reputable reseller of server gear will offer SuperMicro.

    The disk-to-RAM ratio is niche, and I’ve almost never run into it outside of large data warehouse or database systems (not what we’re doing here). Most of my machines run nearly idle even while serving several active streams or doing 3 GB/s data moves on only 16 GB of RAM. I use a maxed-out CPU as a warning that one of my disks needs checking, since resilvering or running degraded in ZFS chews CPU.

    That said, hypervisors eat RAM. Whichever machine you want handling torrents, transcoding, etc., give that box RAM and either a well-supported GPU or a recent Intel Quick Sync chip.

    For organizing across the arrays, I use RAIDed SSDs for downloads, with the torrent client moving completed files to the destination host for seeding.

    I run a single instance each of Radarr and Sonarr; instead of multiple instances, I update the root folder for “new” content any time I need to point at a new machine. I just have to keep the current new-media destination in sync between the Arr and the torrent client for that category.

    The Arr stacks have gotten really good lately with path management; you just need to ensure the mounts available to them are set correctly.

    In the event I need to move content between two different boxes, I pause the seed and use rsync to duplicate the torrent files. Change the path and recheck the torrent. Once that’s good, I either nuke and reimport in the Arr, or lately I’ve been using better naming conventions on the hosts so I can use hardlink-preserving copies. Beware, this is a pretty complex route unless you are very comfortable with Linux and rsync!

    I’m using OMV on bare metal personally. My Proxmox doesn’t even have OMV; it runs on a mini PC for transcoding. I see no problem running OMV inside Proxmox though. My bare-metal boxes are dedicated to NAS duties only.

    For what it’s worth, keep tasks as minimal and simple as you can. Complexity where it’s not needed can be pain later. My NAS machines are largely identical in base config, with only the machine name and storage pool name differing.

    If you don’t need a full hypervisor, I’d skip it. Docker has gotten very capable. The easiest Docker box I have was just Ubuntu with Dockge. It keeps its configs in a reliable path, so backing them up is easy.
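    A backup of that "reliable path" can be as simple as one tar invocation. The sketch below assumes Dockge's default stacks directory of /opt/stacks (adjust STACKS_DIR if you changed it at install time), and exits quietly if the directory isn't present:

```shell
set -eu
# Sketch: archive the compose stacks directory so configs can be restored.
# /opt/stacks is an assumption (Dockge's default); override via STACKS_DIR.
STACKS_DIR=${STACKS_DIR:-/opt/stacks}
[ -d "$STACKS_DIR" ] || { echo "no $STACKS_DIR on this box, nothing to do"; exit 0; }

BACKUP=/tmp/stacks-$(date +%F).tar.gz
tar -czf "$BACKUP" -C "$(dirname "$STACKS_DIR")" "$(basename "$STACKS_DIR")"
echo "backed up to $BACKUP"
```

    Drop that in cron (and push the tarball to another box) and a dead Docker host becomes a reinstall plus one untar.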


  • I personally have dedicated machines per task.

    8x SSD machine: runs services for Arr stack, temporary download and work destination.

    4-5x misc 16-bay boxes: raw storage. NFS shared, ZFS as the underlying drive config. What’s on them changes on a whim, but usually it’s 1x for movies, 2x for TV, etc. Categories can be spread across multiple places.

    2-3x 8-bay boxes: critical storage. Different drive geometry for higher resilience. These are the hypervisors; I run a mix of Xen and Proxmox depending on need.
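    The resilience trade-off behind those different drive geometries is mostly arithmetic. A rough sketch with made-up drive counts and sizes (and ignoring ZFS overhead and slop space):

```shell
# Usable capacity for N drives of S terabytes each:
#   striped mirrors: (N / 2) * S   -- survives one failure per pair, fast resilver
#   raidz2:          (N - 2) * S   -- survives any two failures, slower resilver
n=8; s=12   # hypothetical: an 8-bay box with 12 TB drives

echo "mirrors: $(( n / 2 * s )) TB usable"   # prints: mirrors: 48 TB usable
echo "raidz2:  $(( (n - 2) * s )) TB usable" # prints: raidz2:  72 TB usable
```

    So on the same 8 bays, raidz2 gives you more usable space and tolerates any two dead disks, which is why it suits the "critical storage" boxes, while mirrors rebuild faster and suit hot working sets.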

    All get 10 Gb interconnects, with critical stuff (nothing Arr, for sure) like personal videos and photos pushed to small encrypted storage like Backblaze.

    The NFS-shared stores, once you get everything mapped, allow some smooth automation to migrate things around for maintenance and such.

    Mostly it’s all gear that’s 10 years old or older. 10 Gb fiber cards can be had off eBay for a few bucks; just watch out for compatibility and the cost of the transceivers.

    8-port SAS controllers can be had the same way, new, off eBay from a few vendors; just explicitly look for “IT mode” so you don’t get a RAID controller by accident.

    SuperMicro makes quality gear for this… Used units can be affordable and I’ve had excellent luck. Most have a great IPMI controller for simple diagnostic needs too. Some of the best SAS backplanes are made by them.

    Check Backblaze’s drive stats reports on their blog for drive suggestions!

    Heat becomes a huge factor, and the drives are particularly sensitive to it… Running hot shortens lifespan. Plan accordingly.

    It’s going to be noisy.

    Filter your air in the room.

    The rsync command is a good friend in a pinch for data evacuation.

    Your servers are cattle, not pets… If one is ill, sometimes it’s best to put it down (wipe and reload). If you suspect hardware, get it out of the mix quickly; test and/or replace it before risking your data again.

    You are always closer to dataloss than you realize. Be paranoid.

    Don’t trust the overall SMART health verdict. Learn how to read the full report. A Current_Pending_Sector count above 0 always means failure… Remove that disk!
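    Reading "the full report" mostly means parsing the attribute table rather than trusting the PASSED/FAILED line. A sketch of that check; the smartctl line below is canned sample output for the demo, not from a real disk, and in smartctl's attribute table the raw value is the 10th column:

```shell
# Flag a disk whose Current_Pending_Sector raw value is above 0.
# Real usage would be:  smartctl -A /dev/sda | check_pending
check_pending() {
  awk '/Current_Pending_Sector/ {
    if ($10 + 0 > 0) { print "FAIL: " $10 " pending sectors"; exit 1 }
    else               print "OK: no pending sectors"
  }'
}

# Demo against a canned attribute line (healthy disk, raw value 0):
echo "197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 0" \
  | check_pending   # prints: OK: no pending sectors
```

    The nonzero exit on failure makes this easy to wire into a nightly cron job that emails you before the disk finishes dying.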

    Keep 2 thumb drives with your installer handy.

    Keep a repo somewhere with your basics of network configs… Ideally sorted by machine.

    Leave yourself a back-door network… Most machines will have a 1 Gb port. It might be handy when you least expect it. Setting up a LAGG with those 1 Gb ports as fallback for the higher-speed fiber can save headaches later too…
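    On a netplan-based distro, that fallback LAGG can be sketched as an active-backup bond. Interface names and the address below are hypothetical; `primary` keeps traffic on the fiber whenever its link is up:

```yaml
# /etc/netplan/10-bond.yaml (sketch; interface names are assumptions)
network:
  version: 2
  ethernets:
    enp1s0: {}   # 10 Gb fiber card
    eno1: {}     # onboard 1 Gb port, fallback only
  bonds:
    bond0:
      interfaces: [enp1s0, eno1]
      parameters:
        mode: active-backup
        primary: enp1s0
        mii-monitor-interval: 100
      addresses: [10.0.0.10/24]
```

    If the fiber link drops (dead transceiver, yanked cable), the bond fails over to the 1 Gb port and the box stays reachable at the same address.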






  • I’d start by noting that RAID is about availability, not backup… I suspect you already have that in mind, but just in case. If you are up for learning ZFS, it is one of the most resilient RAID tools out there. Most NAS distributions and Unix/Linux OSes support it.

    Never connect RAID disks via USB… This only causes headaches.

    Avoid SATA port multipliers, these can cause problems in raid.

    SAS has the most reliable and flexible options for connectivity. Used JBOD chassis, even small ones, can be found cheaply and will run SATA disks fine.

    As for cloud storage, I strongly recommend Backblaze. Many utilities can natively interact with it (the API is compatible with Amazon S3) and you can handle encryption on the fly with several sync options. They are one of the cheapest solutions, and storage is pretty much all they do.

    With pretty much any cloud storage, look at the ingress/egress costs for your data too… That is where many providers bite you unexpectedly.

    Worth noting that once you get to large storage, a good organization method for your data is key so you can prune and prioritize without getting overwhelmed later… You don’t want several copies of the same thing eating cash needlessly.
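    Finding those duplicate copies is scriptable: hash everything and report repeats. A sketch; the demo at the bottom uses a throwaway temp dir, and in practice you'd point `list_dups` at a real path like a media pool:

```shell
set -eu
# Sketch: list duplicate files under a directory by content hash, so
# redundant copies can be pruned. sha256sum prints "<64-char hash>  <name>",
# so the filename starts at column 67 of each line.
list_dups() {
  find "$1" -type f -print0 \
    | xargs -0 sha256sum \
    | sort \
    | awk '{ h = substr($0, 1, 64)
             if (h == prev) print "dup: " substr($0, 67)
             prev = h }'
}

# Throwaway demo (in practice: list_dups /pool1/media or similar):
d=$(mktemp -d)
echo same > "$d/copy1"; echo same > "$d/copy2"; echo other > "$d/unique"
list_dups "$d"   # reports one of the two identical files as a dup
rm -rf "$d"
```

    Sorting by hash puts identical files on adjacent lines, so a one-line awk state machine is enough; every reported line after the first occurrence of a hash is a candidate for pruning.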

    Good luck! And welcome to the wonderful illness known as data hoarding!



  • Rules: I only enforce what I agree with… thankfully I’m fully in tune with the rules here, no hitch.

    Active: I’m mostly a lurker with the occasional comment, but I try to keep up on things. I spend a few hours a day on /all but try to focus more attentively on local db0… Better vibe anyway.

    Pirate: Arrr! Does knowing what 1200 baud feels like help? DOS and Amiga cracktros are my bread and butter! As to the tech, I run all my own services except Lemmy… Y’all do it better. Been working pro in tech, especially hypervisor and Linux stuff, since the early ’90s.

    Bonus: human knowledge belongs to the world … You are free to do anything that doesn’t explicitly harm others. If peace isn’t an option, I can bartend well enough to mix some cocktails to help solve things.

    I’m CST, -6 as it were.

    Cheers!