I've been dealing with NFS a lot recently, and my conclusion so far is to avoid it at all costs.

@boilingsteam because $HOME mounted on an NFS share leads to unexpected delays: git and npm become slow. I've been running jobs on a cluster, and as the number of workers increases, access to the data becomes costlier.
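
An easy way to see it is to time the same metadata-heavy operation on the NFS mount and on a local disk (the paths here are just placeholders):

    # compare git's metadata-heavy status call on NFS vs. a local disk
    cd ~/repo-on-nfs && time git status
    cd /local/scratch/repo-copy && time git status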

@boilingsteam It makes things very easy in the beginning, but later it's too painful to change.

@boilingsteam For example, I tend to symlink ~/.cache to a local partition. But then I need to make sure that the destination exists on every machine I log in to.
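
Roughly like this (a sketch; /local/scratch stands in for whatever the local partition is, and you'd want to move the old cache aside first if anything in it matters):

    # create the destination on the local partition, then point ~/.cache at it
    mkdir -p /local/scratch/$USER/cache
    rm -rf ~/.cache
    ln -s /local/scratch/$USER/cache ~/.cache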

@boilingsteam As for my cluster problem, I might start an rsync daemon to serve the data. I already copy data to a local partition using rsync, so that should be easy.
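
Something along these lines (a sketch; the module name, paths, and hostname are assumptions):

    # rsyncd.conf on the node serving the data
    [data]
        path = /local/scratch/data
        read only = yes

    # start the daemon (listens on port 873 by default)
    rsync --daemon --config=rsyncd.conf

    # each worker then pulls to its own local partition
    rsync -a rsync://headnode/data/ /local/scratch/data/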

@boilingsteam
On second thought, I might use BitTorrent to spread the data to the workers. It looks like an ideal setup: I start with one BT node, and as I launch many jobs, traffic to that initial node drops, because the workers share data among themselves.
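
The flow would look roughly like this (mktorrent and aria2c are just one possible toolchain; the announce URL is an assumption, and something like opentracker would have to run there, unless you rely on DHT alone):

    # on the initial node: create a torrent of the data and seed the existing copy
    mktorrent -a http://headnode:6969/announce -o data.torrent /local/scratch/data
    aria2c --dir=/local/scratch --check-integrity=true --seed-ratio=0.0 data.torrent

    # on each worker: download the data and keep re-sharing it for a while
    aria2c --dir=/local/scratch --seed-time=30 data.torrent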

@pixel
I'm not sure if this fits your workflow, but you can also use Syncthing to spread data across clients. It works very well.
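
The basic flow is roughly this (flag names can vary between Syncthing versions):

    # print this machine's device ID, then exchange IDs between machines
    syncthing --device-id
    # pair the devices and mark a folder as shared via the web UI
    # (http://127.0.0.1:8384 by default) or the REST API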

@pixel
Yeah. It's like Dropbox, but Free Software. I've been using it for years without issues.
