Distributed Filesystems
[glusterfs]
Packages required to compile glusterfs on Red Hat-based distros (a combined yum command follows the lists below):
- make
- gcc
- flex
- bison
- db4-devel
- libibverbs-devel
- fuse-devel
Packages for runtime:
- fuse
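On a Red Hat-based system, everything above can likely be installed in one shot (package names assumed to match the lists):
# yum install make gcc flex bison db4-devel libibverbs-devel fuse-devel fuse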
Sample glusterfsd.vol exporting a shared /tmp/gfs, to run on all 6 hosts:
volume scaffold
  type storage/posix
  option directory /tmp/gfs
end-volume

volume scaffold-locks
  type features/locks
  subvolumes scaffold
end-volume

volume scaffold-brick
  type performance/io-threads
  option thread-count 8
  subvolumes scaffold-locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.scaffold-brick.allow 128.195.*,128.200.217.19
  subvolumes scaffold-brick
end-volume
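To try the server config by hand before wiring up init scripts (the volfile path is an assumption; adjust to wherever it is installed):
# glusterfsd -f /etc/glusterfs/glusterfsd.vol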
Sample glusterfs.vol with a RAID10-style configuration (replicate first, then distribute across the mirrors):
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host hpblade-1
  option remote-subvolume scaffold-brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host hpblade-2
  option remote-subvolume scaffold-brick
end-volume

volume remote3
  type protocol/client
  option transport-type tcp
  option remote-host hpblade-3
  option remote-subvolume scaffold-brick
end-volume

volume remote4
  type protocol/client
  option transport-type tcp
  option remote-host hpblade-4
  option remote-subvolume scaffold-brick
end-volume

volume remote5
  type protocol/client
  option transport-type tcp
  option remote-host hpblade-5
  option remote-subvolume scaffold-brick
end-volume

volume remote6
  type protocol/client
  option transport-type tcp
  option remote-host hpblade-6
  option remote-subvolume scaffold-brick
end-volume

volume replicate1
  type cluster/replicate
  subvolumes remote1 remote4
# subvolumes remote1 remote4 remote3
end-volume

volume replicate2
  type cluster/replicate
  subvolumes remote2 remote5
# subvolumes remote2 remote5 remote6
end-volume

volume replicate3
  type cluster/replicate
  subvolumes remote3 remote6
end-volume

volume distribute
  type cluster/distribute
  subvolumes replicate1 replicate2 replicate3
# subvolumes replicate1 replicate2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes distribute
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
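Quick sanity check of the mirroring once a client has mounted the volume (hostnames and the /tmp/gfs backing directory taken from the volfiles above): a file created on the mount should land in /tmp/gfs on exactly one mirror pair, e.g. hpblade-1 and hpblade-4.
* touch /mnt/gfs/testfile
* ssh hpblade-1 ls /tmp/gfs
* ssh hpblade-4 ls /tmp/gfs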
Servers:
# chkconfig --level 35 glusterfs on
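Then start the service (assuming the init script carries the same name used in the chkconfig line):
# service glusterfs start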
Clients:
Mount on clients via fstab, autofs, or run glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/point by hand for testing:
* glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/gfs
* mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/gfs
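For a permanent mount, an fstab entry along these lines should work (_netdev delays the mount until networking is up):
/etc/glusterfs/glusterfs.vol  /mnt/gfs  glusterfs  defaults,_netdev  0 0
For autofs, a map entry of the usual -fstype form ought to apply (the gfs key and map layout are an example):
gfs  -fstype=glusterfs  :/etc/glusterfs/glusterfs.vol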
[references]
http://www.linuxjournal.com/content/storage-cluster-challenge-lj-staff-and-readers