[HCoop-Misc] AFS on Linux

Alok G. Singh alephnull at airtelbroadband.in
Wed Jan 24 22:01:56 EST 2007


On 24 Jan 2007, cclausen at hcoop.net wrote:

> Do you actually need one large 1TB partition?  You can access all
> the speace from AFS, but only in chunks the size of the partition on
> each disk.

We have about 600 GB of data on sanfs [1], but that system is old and
on its last legs. The problem with partition-sized chunks is that we
would have to split our data across them and then maintain a table of
which data lives in which chunk.

> It would be more efficient (IMHO) to have multiple clients writing
> data to multiple machines at the same time.  Any aggregation
> technique will likely require a single host to perform ALL I/O.

Yes, that seems logical. However, writes will probably be very rare:
there will be one _large_ write of about 600 GB, after which that data
won't be accessed.

> I'd say to have a look at: http://code.google.com/p/hotcakes/ It
> seems to be exactly what you want, although I'm not sure if I'd
> trust important data to it.

That was the impression I got too. I had replied to Brian Holmes's
post, but it seems to have got caught up in the tubes ...

Looks like nbd and unionfs might be a workable hack for now. I'll
get started and see how it goes.
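Roughly, the hack I have in mind looks like this (hostnames, ports,
device paths and mount points below are only placeholders, and this
assumes the unionfs kernel module is available):

```shell
# On each storage host: export a local disk over the network with nbd
nbd-server 2000 /dev/sdb1

# On the aggregating host: attach each remote disk as a local block device
nbd-client disk-host1 2000 /dev/nbd0
nbd-client disk-host2 2000 /dev/nbd1

# Make a filesystem on each device (first time only) and mount them
mkfs.ext3 /dev/nbd0 && mount /dev/nbd0 /mnt/chunk0
mkfs.ext3 /dev/nbd1 && mount /dev/nbd1 /mnt/chunk1

# Present the per-disk chunks as a single tree with unionfs
mount -t unionfs -o dirs=/mnt/chunk0=rw:/mnt/chunk1=rw none /mnt/big
```

One caveat: unionfs merges directory trees rather than block devices,
so a new file lands entirely on one branch and no single file can
exceed the size of one chunk. That should be fine for our data, which
is many files rather than one huge one.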

Footnotes: 
[1]  http://publib.boulder.ibm.com/infocenter/tssfsv21/index.jsp

-- 
Alok

A lifetime isn't nearly long enough to figure out what it's all about.
