

As long as you can trust Apple, sure
Well, it’s a tougher question to answer for an active-active config than for a master-slave config, because the former needs the minimum latency possible as requests are bounced all over the place. For the latter, I’ll probably set it up to pull every 5 minutes, so 5 minutes of latency (assuming someone doesn’t push right as the master node goes down).
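Concretely, for the master-slave case I had something like this in mind on each replica (just a sketch; the hostnames and paths are made up):

```bash
# One-time setup on each replica: keep a bare mirror of the master repo.
git clone --mirror ssh://git@master.example.lan/srv/git/myrepo.git /srv/git/myrepo.git

# Cron entry (crontab -e): re-fetch from the master every 5 minutes.
*/5 * * * * git -C /srv/git/myrepo.git remote update --prune
```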
I don’t think the likes of GitHub work on a master-slave configuration; they’re probably on the active-active side of things for performance. I’m surprised I couldn’t find anything on this from Codeberg though, you’d think they’d have solved this problem already and published something about it. Maybe I missed it.
I didn’t find anything in the official Git book either. Which one do you recommend?
Thanks for the comment. There’s no special use case: it’ll just be me and a couple of friends using it anyway, but I would like to make it highly available. It doesn’t need to be 5; 2 or 3 would be fine too, but I don’t think the number changes the concept.
Ideally I’d want all servers to be updated in real-time, but it’s not necessary. I simply want to run it like so because I want to experience what the big cloud providers run for their distributed git services.
Thanks for the idea about update hooks, I’ll read more about it.
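From a quick look, something like a post-receive hook on whichever node accepted the push might cover my case (just a sketch; hostnames and paths are placeholders):

```bash
#!/usr/bin/env bash
# hooks/post-receive in the bare repo: forward whatever was just
# pushed to the remaining nodes.
set -u
for mirror in git2.example.lan git3.example.lan git4.example.lan git5.example.lan; do
    git push --mirror "ssh://git@${mirror}/srv/git/myrepo.git" \
        || echo "sync to ${mirror} failed" >&2
done
```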
Well the other choice was Reddit so I decided to post here (Reddit flags my IP and doesn’t let me create an account easily). I might ask on a couple of other forums too.
Thanks
I think I messed up my explanation again.
The load balancer in front of my git servers doesn’t really matter. I can use whatever, really. What matters is: how do I make sure that when a client writes to a repo on one of the 5 servers, the changes are synced in real time to the other 4 as well? Running rsync every 0.5 seconds doesn’t seem like a viable solution.
You mean have two git servers, one “PROD” and one for infrastructure, and mirror repos in both? I suppose I could do that, but if I were to go that route I could simply create 5 remotes for every repo and push to each individually.
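For the record, instead of 5 separate remotes it could also be one remote with several push URLs, so a single push hits all of them (hostnames are placeholders):

```bash
# Add each server as an extra push URL on origin; "git push origin"
# then pushes to every URL in turn.
# Note: once any explicit push URL exists, the fetch URL is no longer
# used for pushes, so add it too if you still want to push there.
git remote set-url --add --push origin ssh://git@git1.example.lan/srv/git/myrepo.git
git remote set-url --add --push origin ssh://git@git2.example.lan/srv/git/myrepo.git
# ...repeat for the remaining servers
git remote -v    # verify the configured push URLs
```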
For the k8s suggestion - what happens when my k8s cluster goes down, taking my git server along with it?
Thank you. I did think of this, but I’m afraid it might lead to a chicken-and-egg situation, since I plan to store my Kubernetes manifests in my git repo. If the Kubernetes instances go down for whatever reason, I won’t be able to access my git server anymore.
I edited the post which will hopefully clarify what I’m thinking about
Apologies for not explaining better. I want to run a load balancer in front of multiple instances of a git server. When my client performs an action like a pull or a push, it will go to one of the 5 instances, and the changes will then be synced to the rest.
I have edited the post to hopefully make my thoughts a bit more clear
You can never be private with any device that can connect to the internet of its own volition. Ubiquiti, Alta Labs and MikroTik should never be trusted unless you’re OK with your data potentially ending up on their servers.
With that said, you can manually upgrade MikroTik software, and you can self-host the MikroTik CHR, the Ubiquiti controller and the Alta Labs controller (the last for a fee), which should then in theory invalidate this argument. Even then, I do not trust non-FOSS software for such critical infrastructure, so it’s still too much for me, but depending on your risk tolerance this might be a good compromise. I would suggest you look seriously at MikroTik: their UI might suck, but their hardware and software capabilities are FAR beyond what Ubiquiti offers for the same price.
If you want to be private you should get an old computer, buy a quad-port NIC from eBay and run a Linux/BSD router on your own hardware. But that’s not the friendliest way to do it, so I don’t blame anyone for looking away.
Upvoted. Awesome project
You raise a good point. I think that if an RSS reader could pull from different websites at separate times, and either programmatically use the Tor Browser or at least support stream isolation, along with randomly scheduling when to pull from which website, it should be able to evade most automated measures of surveillance. Timing and correlation attacks are the only ones I can think of, other than the NSA paying for over 50% of Tor nodes.
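As a rough sketch of what I mean by stream isolation plus random scheduling (assuming a local Tor SOCKS proxy on 9050; by default Tor puts streams with different SOCKS credentials on different circuits, and the feed URLs here are placeholders):

```bash
#!/usr/bin/env bash
# Fetch each feed over Tor on its own circuit by giving each request
# different SOCKS credentials, with a random delay before each fetch.
i=0
for feed in https://example.org/feed.xml https://example.net/rss.xml; do
    i=$((i + 1))
    sleep $(( RANDOM % 3600 ))    # spread the fetches over up to an hour
    curl -s --proxy "socks5h://feed${i}:x@127.0.0.1:9050" -o "feed${i}.xml" "$feed"
done
```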
The downside is that it probably makes for a great fingerprint if you go through a VPN or Tor. But it could also limit your Tor/VPN connection time to the shortest possible.
What do you mean? How is it any less private than on the clearnet?
I see. Thanks
Heavily quantized?
OP, I have been facing the same situation as you in this community recently. This was not the case when I first joined Lemmy but the behaviour around these parts has started to resemble Reddit more and more. But we’ll leave it at that.
I think I have a solution for you if you’re willing to spend $2-$3 a month: set up a VPS and run a WireGuard server on it, then run clients on your devices and the Raspberry Pi and connect to it.
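The server side is only a few commands (a sketch, assuming a Debian-ish VPS; wg0 and port 51820 are just the usual defaults):

```bash
# On the VPS: install WireGuard and generate the server key pair.
sudo apt install wireguard
wg genkey | tee server.key | wg pubkey > server.pub

# /etc/wireguard/wg0.conf then holds the server's private key, a
# ListenPort (51820 by default) and one [Peer] section per client
# (your devices and the Pi) with that client's public key.
sudo wg-quick up wg0
sudo wg show    # confirm the peers hand-shake
```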
As for your LAN: from the discussion you linked, it seems that Jellyfin will use the CAs present in the OS trust store. That’s not very hard to do on Linux, but I guess you’d have more trouble doing it on Android. In either case, using a reverse proxy (I like HAProxy, but I use it at work and it might be more enterprise than you need; for beginners Caddy is usually easier) will fix the trouble you’re having with your own CA and self-signed certs.
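With Caddy, the quick-and-dirty form is a one-liner (the hostname is a placeholder and 8096 is Jellyfin’s default HTTP port; Caddy needs to be reachable for the ACME challenge, or set up with a DNS challenge, to get a certificate):

```bash
# Terminate TLS at Caddy with an automatically obtained certificate
# and forward plain HTTP to the local Jellyfin instance.
caddy reverse-proxy --from jellyfin.example.com --to 127.0.0.1:8096
```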
I am interested in the attack vector you mentioned; could you elaborate on the MITM attack?
Unfortunately, if you don’t have control over your network, you cannot force a DNS server for your devices unless you can set it yourself for every individual client. If I assume that you can do that, then:
I think that should do it. This turned out more complicated than I imagined (it’s more of a brain dump at this point), feel free to ask if it is overwhelming.
I see. Thanks
Used 3090s go for $800. I was planning to wait for the Arc B580s to come down in price and then buy a few. The reason for the networked setup is that I couldn’t find enough PCIe lanes in any of the used computers I was looking at. If there’s either an affordable card with good performance and 48 GB of VRAM, or an affordable motherboard + CPU combo with a lot of PCIe lanes under $200, then I’ll gladly drop the idea of distributed AI. I just need lots of VRAM, and this is the only way I could think of to get it.
Thanks
Thank you, and that highlights the problem: I don’t see any affordable options (around $200 or so for a motherboard + CPU combo) with a lot of PCIe lanes, other than the Frankenstein boards from AliExpress. Those aren’t going to be a thing for much longer with tariffs, so I’m looking elsewhere.
Thanks, but I’m not going to run supercomputers. I just want to run 4 GPUs in separate machines to run 24B-30B models, because a single computer doesn’t have enough PCIe lanes.
You can’t protect your data if you use those apps. Pick one.