I've been dealing a lot with NFS recently, and my conclusion so far is to avoid it at all costs.

@boilingsteam Because $HOME mounted on an NFS share leads to unexpected delays: git and npm become slow. I've also been running jobs on a cluster, and as the number of workers increases, access to the data becomes costlier.

@boilingsteam It makes things very easy in the beginning, but later it becomes too painful to change.

@boilingsteam For example, I tend to symlink ~/.cache to a local partition. But then I need to be sure that, on every machine I log in to, that destination exists.
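That workaround can be sketched as a small login script. A minimal sketch, assuming the local partition is reachable at a made-up path like `/tmp/$USER-cache` (substitute whatever local partition the machine actually has):

```shell
#!/bin/sh
# Hypothetical local destination; adjust per machine.
LOCAL_CACHE="/tmp/${USER:-$(id -un)}-cache"

# Make sure the destination exists on this host -- the condition
# that has to hold on every machine you log in to.
mkdir -p "$LOCAL_CACHE"

# Replace ~/.cache with a symlink, unless it already is one.
if [ ! -L "$HOME/.cache" ]; then
    rm -rf "$HOME/.cache"
    ln -s "$LOCAL_CACHE" "$HOME/.cache"
fi
```

Running this from a shell profile keeps the cache off NFS without manual setup on each new machine.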

@boilingsteam As for my cluster problem, I might start an rsync daemon to serve the data. I already copy data to a local partition using rsync, so that should be easy.


On second thought, I might use BitTorrent to spread the data to the workers. It looks like an ideal setup: I start with one BT node, and when I launch many jobs, traffic to that initial node is reduced because the workers share data among themselves.

I am not sure if this fits with your workflow, but you can also use Syncthing to spread data across clients. Works very well.

Yeah. It's like Dropbox, but Free Software. Been using it for years without issue.

3dots.lv Mastodon Instance