I heard good things about http://www.redhat.com/gfs/ but I have not used it myself. I have been using Lustre for 4 months (I'm doing research on it for my professor). It is a pain and takes a lot of time to install: I had to patch the kernel, build it, install Lustre, and solve many problems before I got it to work.
For backup purposes I used NFS, and it is perfect for my needs. I created ~5TB on a NAS (2 x 160 SATA with RAID 1 for the OS and 1 x 500 GB RAID 5 for the storage), exported it, and mounted it on each server that I want to back up. It works just fine.
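The export/mount step is just standard NFS configuration; a minimal sketch (the path `/srv/backup`, the subnet, and the hostname `nas` are examples, not my actual values):

```shell
# On the NAS: export the backup volume read/write to the servers.
# Example subnet and options; see exports(5) for the full list.
echo '/srv/backup 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# On each server to be backed up: mount the export.
mkdir -p /mnt/backup
mount -t nfs nas:/srv/backup /mnt/backup
```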
You should read about NFS security on the NFS project homepage and do a little testing to get the best read and write performance out of it. It is well documented on their website.
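For that little performance test, a simple dd run against the mounted export gives a rough read/write number (the mount point `/mnt/backup` is an example; the usual tuning knobs afterwards are the rsize/wsize mount options):

```shell
# Write throughput: stream 1 GB to the NFS mount, forcing data to disk
# so the number reflects the network + server, not the local page cache.
dd if=/dev/zero of=/mnt/backup/ddtest bs=1M count=1024 conv=fsync

# Read throughput: read the file back. Remount (or drop caches) first,
# otherwise you measure the client cache rather than NFS.
dd if=/mnt/backup/ddtest of=/dev/null bs=1M

rm /mnt/backup/ddtest
```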
The good thing about NFS is that you can mount it on various OSes (Mac, Windows, and *nix as well). Furthermore, it is simple to use and doesn't require a lot of work to install and get running.
The only thing I would advise is to change the label of your RAID 5 array (or the large device, if you used LVM to create one) to *gpt*. In addition, when you create the filesystem (ext3 is good), use the biggest block size possible (-b 4096), and if you are going to fill it with large files, i.e. > 100 MB, use one inode per 4 megabytes (-T largefile4). Furthermore, if your setup will be like mine (the OS on another partition), use -m 0 to change the reserved block percentage for the root user from 5% to 0%; 5% was a lot in my situation (~256 GB).
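Put together, the commands look roughly like this (the device name /dev/md0 is just an example; substitute your array or LVM volume):

```shell
# Give the array a GPT label so volumes larger than 2 TB work,
# then create one partition spanning the whole device.
parted /dev/md0 mklabel gpt
parted /dev/md0 mkpart primary 0% 100%

# Create the ext3 filesystem: 4 KB blocks, one inode per 4 MB
# (sensible when most files are > 100 MB), and no blocks
# reserved for root since the OS lives on another partition.
mkfs.ext3 -b 4096 -T largefile4 -m 0 /dev/md0p1
```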
Would you please share your experience with GFS?
I used it in a cluster of about 30 diskless systems with various architectures over gigabit Ethernet. It works well enough for basic loads (even moderate loads, YMMV), so there is no need to spend money on clustered file systems when GFS will suffice.