
Master nodes and slave nodes

To accomplish goals No. 2 and 4 I divided all the files and directories in a filesystem into master nodes and slave nodes. Master nodes can only be read by clients. This includes binaries, libraries and many configuration files (/bin, /sbin, /lib, most of /etc) - in general, most of the files. Master nodes are always kept in sync with the server. Slave nodes are initially propagated to the clients, but they can then be modified locally on every machine independently. These changed files are stored on the individual machines and are no longer synced with the server. They include machine-specific configuration files (for example those storing the MAC addresses of NICs), most files in /var and perhaps /home (it is usually better to mount /home as an NFS share from the central server).
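As an illustration of this split, the sketch below classifies paths into master and slave sets by path prefix. The prefix lists, function names and the rule that slave prefixes override master ones are assumptions made for this example, not the project's actual configuration.

# Minimal sketch of how a client might classify paths as master or slave.
# The prefix lists are illustrative assumptions, not the real configuration.

MASTER_PREFIXES = ("/bin", "/sbin", "/lib", "/etc")
SLAVE_PREFIXES = ("/var", "/home")   # locally modifiable, machine-specific

def is_slave(path: str) -> bool:
    """Slave paths may be changed locally and are no longer synced."""
    return any(path == p or path.startswith(p + "/") for p in SLAVE_PREFIXES)

def is_master(path: str) -> bool:
    """Master paths are read-only on clients and kept in sync with the server."""
    if is_slave(path):   # assumed rule: slave prefixes take precedence
        return False
    return any(path == p or path.startswith(p + "/") for p in MASTER_PREFIXES)

if __name__ == "__main__":
    for p in ("/bin/ls", "/etc/fstab", "/var/log/messages", "/home/alice"):
        print(p, "master" if is_master(p) else "slave")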

In such an environment an update of some software component on all the slaves can be performed by updating the master copy on the server; the changes are then propagated automatically. This works only when all affected files are master files - otherwise some changes will not be propagated, possibly leaving the system inconsistent. This scenario is much more convenient than performing the same update on every machine manually.
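A minimal sketch of such an update run is shown below: only master files are copied from the server's image, so locally modified slave files are left untouched. The directory layout, the helper names and the is_slave() classifier (repeated from the previous sketch) are assumptions of this example, not the actual update mechanism.

# Illustrative update step: refresh changed master files from the server
# image on one client, skipping slave files that are no longer synced.

import shutil

SLAVE_PREFIXES = ("/var", "/home")   # hypothetical slave prefixes

def is_slave(path: str) -> bool:
    return any(path == p or path.startswith(p + "/") for p in SLAVE_PREFIXES)

def update_client(changed_paths, server_root="/srv/master", client_root=""):
    """Copy changed master files from the server image onto this client."""
    for path in changed_paths:
        if is_slave(path):                 # locally owned, keep local state
            continue
        shutil.copy2(server_root + path, client_root + path)

# Example (paths are hypothetical): an updated library and its configuration
# file are master files and would be propagated; a log file under /var stays
# local on each machine.
# update_client(["/lib/libexample.so", "/etc/example.conf", "/var/log/example.log"])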

Also, by disallowing changes to master files, the system is protected from unauthorized modification of system files.

