

And loot boxes too.
As a recently-former HPC/supercomputer dork: NFS scales really well. All this talk of encryption etc. is weird; you normally just do that at the link layer if you're worried about security between systems. That, plus v4 to cut down on some of the metadata chattiness, and you're good to go. I've tried scaling Ceph and S3 for latency on 100/200G links, and NFS is by far the easiest of them to scale. For a homelab? NFS and call it a day. All the clustering file systems will make you do a lot more work than just throwing "hard" into your NFS mount options and letting clients block I/O while you reboot, which for home is probably easiest.
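For reference, a hard NFSv4 mount in /etc/fstab looks roughly like this; the hostname, export path, and transfer sizes are placeholders, so adjust to taste:

    # /etc/fstab - "hard" means clients block and retry I/O forever
    # instead of erroring out while the server is down or rebooting
    nas:/export/tank  /mnt/tank  nfs4  hard,proto=tcp,timeo=600,rsize=1048576,wsize=1048576  0  0

The alternative, "soft", hands errors back to applications after the retries run out, which is usually not what you want for a home media mount.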
I already have every version downloaded on my NAS, mostly because I hate redownloading things and I'm a pack rat.
Ask if you want, but I'm not sure if the question is about ability or survivability. You can lick anything once. You just might regret it.
Until then, just desolder the antennas. Good luck sending data with no way to connect to the internet.
I use this for my Plex YT subscriptions: https://ytdl-sub.readthedocs.io/en/latest/
Note they will throttle IPs, so I recommend a VPN if you're snagging huge channels.
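ytdl-sub drives everything from a YAML subscriptions file (see the linked docs), but under the hood it's just yt-dlp. For a rough idea, here's the equivalent one-off pull with throttle-friendly options (the channel URL, paths, and rate limit are made up):

    # pull a channel into per-uploader folders, skipping anything already in the archive
    yt-dlp \
      --download-archive /mnt/nas/yt/archive.txt \
      --limit-rate 4M \
      --sleep-requests 2 \
      -o "/mnt/nas/yt/%(uploader)s/%(title)s [%(id)s].%(ext)s" \
      "https://www.youtube.com/@SomeChannel/videos"

The download archive is what makes re-runs cheap: already-fetched video IDs get skipped, so a cron job or ytdl-sub run only grabs new uploads.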
This is basically the same thing the big platforms do. You're just offloading the decision of what to see to a neural network and hoping it decides correctly. I'm not sure what the solution would be, but I wouldn't put my eggs in the LLM/AI basket. Not without a lot more detail from the models on why they made a decision.