[HCoop-Misc] AFS on Linux

Christopher D. Clausen cclausen at hcoop.net
Tue Jan 23 11:12:22 EST 2007


Alok G. Singh <alephnull at airtelbroadband.in> wrote:
> Hi,
>
> We have a new 16-node Opteron cluster running Rocks [1]. There are 2
> local disks per node (2 x 160 GB) and we would like to aggregate one
> disk from each node in an AFS partition so that we would have a 160 x
> 16 GB partition that we could use for non-critical storage.
>
> Is this setup feasible with (Open)AFS ? I know that there are some AFS
> users in hcoop, so I hope I don't get Warnocked :) If it is not
> possible, is there some alternate method that I could use ?

Yes, it is possible.  I'm not sure how AFS would work with Rocks though. 
It shouldn't be a problem as long as each machine has its own /etc space 
for the AFS config files (some of them need to be different on each 
machine.)  Note that a compromise of any one machine would compromise 
ALL of them though (which may already be the case.)
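To make the per-machine vs. shared distinction concrete, here is a rough 
sketch of which OpenAFS config files are cell-wide and which are unique 
to each server.  Paths assume a Debian-style layout and will differ on 
other distributions, so treat this as illustrative only:

```shell
# Shared across every machine in the cell (safe for Rocks to clone):
#   /etc/openafs/ThisCell         - name of the local cell
#   /etc/openafs/CellServDB       - database servers for known cells
#   /etc/openafs/server/KeyFile   - the cell-wide key (a copy on every
#                                   node is why one compromise = all)
# Per-machine (must NOT be cloned blindly across nodes):
#   /var/lib/openafs/local/sysid  - this fileserver's unique identity
cat /etc/openafs/ThisCell   # sanity check: prints your cell name
```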

AFS servers generally want dedicated partitions for the server to store 
data.  Also note that AFS does not export local file systems like NFS. 
You need to copy data into AFS and once copied it is only accessible 
with an AFS client.
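As a sketch of what "copying data into AFS" looks like in practice 
(server name "node01", cell "example.com", and the mount path are all 
placeholders for your own setup):

```shell
# Create a volume on node01's vice partition, mount it into the AFS
# namespace, open it up for writing, and copy data in.
vos create node01 /vicepa scratch.node01
fs mkmount /afs/example.com/scratch01 scratch.node01
fs setacl /afs/example.com/scratch01 system:authuser write
cp -r ~/dataset /afs/example.com/scratch01/
# From here on the data is only reachable through an AFS client.
```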

Note that AFS cannot aggregate partitions across machines to form large 
volumes.  An AFS volume (think of a volume as a container for files and 
directories in a movable, manageable unit) can only exist on one vice 
partition at a time.  If you are okay with sixteen 160GB partitions and 
a minimum of one volume on each machine, you should be good.  Note that 
if a node goes down, so does the data hosted on that node.  You can 
replicate data, but generally this is only useful for read-only files. 
The replication is not multi-master either.  If the node hosting the RW 
volume is down, you can't write to the volume.
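The read-only replication mentioned above looks roughly like this 
(again, server and volume names are placeholders):

```shell
# Define a read-only replica site on a second node, then push a
# read-only snapshot of the RW volume to it.
vos addsite node02 /vicepa scratch.node01
vos release scratch.node01
# Clients reading the .readonly copy keep working if node01 dies,
# but writes still go through the single RW copy on node01.
```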

You might be better off using some block-level export scheme, like iSCSI 
or nbd to share the space to a few machines that would act as servers. 
Of course, this isn't an efficient use of network I/O as data would 
travel in and out of the server machine.
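A rough nbd-based sketch of that scheme, assuming each node exports its 
spare disk to one aggregating server (exact syntax varies between nbd 
versions, so check your man pages):

```shell
# On each storage node: export the spare 160GB disk over the network.
nbd-server 2000 /dev/sdb
# On the aggregating server: attach each node's export as a block device.
nbd-client node01 2000 /dev/nbd0
nbd-client node02 2000 /dev/nbd1
# ...then combine the /dev/nbd* devices with LVM or md RAID into one
# large volume and run the AFS fileserver (or NFS) on top of it.
```

Note the tradeoff the paragraph above describes: every byte now crosses 
the network twice, once node-to-server and once server-to-client.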

If you can be more specific as to how you'd want to actually use the 
disk space, I might be able to make a better suggestion.

I suggest that you join the openafs-info list and ask this same question 
there.  Maybe someone has already done what you want.

<<CDC 






More information about the HCoop-Misc mailing list