[Hcoop-discuss] Server pricing

Justin S. Leitgeb leitgebj at hcoop.net
Wed Feb 8 13:35:23 EST 2006


Adam Chlipala wrote:

>Justin S. Leitgeb wrote:
>
>>The problem is that if we're really talking about a terabyte of storage 
>>space or more, I think that it is still outside of our budget to buy 
>>space that is both abundant *and* fast.  If you don't believe me, try 
>>pricing out a system with 1 TB of SCSI storage in RAID 10, a powerful 
>>RAID controller, and a 2U+ machine to house it in.  By the time we came 
>>up with $6,000+ US to spend on such a system, we would have outgrown it 
>>in terms of our space needs and would have to buy a bigger, faster one.
>
>We wouldn't set up a terabyte of storage right away.  Ideally the 
>fileserver would be designed to make it easy to swap disks in and out.  
>We'd come up with a reasonable policy where members pay for disk usage, 
>so that no one would have to pay for more than he really needs (modulo 
>the fact that we'd be dealing with SCSI storage, if we decide that 
>that's important).
>
>Maybe we could start with cheaper disks and use this shared filesystem 
>only for backup purposes and direct access that isn't speed critical?  
>It really seems like having a single global namespace for critical files 
>is important to include in our set-up from the start, and we shouldn't 
>restrict our vision to the most standard approaches in figuring out how 
>to make it work.
>
I guess I'm having trouble answering the question because I think it 
depends on the type of file being shared.  Surely we're going to treat 
resolv.conf differently from HTML pages, right?  What files do you think 
should live in a global namespace, and how will they fit into the rest 
of our system architecture?

We could buy a scalable system like you're suggesting above, but:

1) It would be expensive.
2) It's not necessary right now, in terms of space.
3) We would outgrow it at some point and need to buy a completely new 
   server anyway.

If we are really just talking about a shared namespace for user-level 
files, perhaps we can do the following: make the public web server both 
an AFS server and an AFS client, and then just plan on adding a 
separate fileserver to join the AFS cell later.

Since you know AFS better than I do, do you think this would be possible?
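
To make that concrete, here's a rough sketch of what I mean, assuming 
OpenAFS; the machine names, the /vicepa partition, the quota, and the 
"user.jdoe" volume are just made-up placeholders.  The web server would 
run the fileserver processes and hold the volumes at first, and a 
dedicated fileserver added later could take them over without changing 
any paths:

   # On the web server (acting as the cell's first fileserver), create
   # a member volume on its /vicepa partition, mount it in the global
   # namespace, and give the member rights on it.
   vos create web.hcoop.net /vicepa user.jdoe -maxquota 500000
   fs mkmount /afs/hcoop.net/user/jdoe user.jdoe
   fs setacl /afs/hcoop.net/user/jdoe jdoe all

   # Later, when a dedicated fileserver joins the cell, migrate the
   # volume to it; the /afs/hcoop.net/... path stays the same.
   vos move -id user.jdoe -fromserver web.hcoop.net -frompartition a \
            -toserver fs1.hcoop.net -topartition a

The point is that the namespace is decoupled from the machine that 
happens to hold the bits, so starting with one box wouldn't lock us in.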

>
>>I know that one of the reasons we were looking at an architecture that 
>>gave a fileserver a central place was that it would make administration 
>>easier... but we have to recognize all of the tools available to us as 
>>Unix/Linux admins in a growing network.  I'm thinking especially of 
>>tools like cfengine, which I'm deploying now to help manage a network 
>>of 300+ Linux boxes and a handful of Sun machines.
>
>That is for a centrally maintained network, not a system whose services 
>are configured by hundreds of mutually-untrusting users, right?  Do you 
>think that would actually work for us?
>
Certainly it's not suited to something like what we're doing with Apache 
right now.  But it would work for sendmail, DNS, hosts files, NTP...  
Basically, while we have specialized needs for some services, there is a 
lot in our network that is "centralized", or at least controllable by a 
small group of admins.  It seems to me that in those parts of our 
environment we could eventually benefit from something like cfengine.
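
For instance (this is just a sketch from memory, and the hostnames and 
file paths are hypothetical), a cfengine 2 stanza to push a canonical 
ntp.conf out from an admin host and restart the daemon when it changes 
would look roughly like this:

   control:
      actionsequence = ( copy shellcommands )

   copy:
      any::
         # Pull the master copy from the admin host's cfservd and
         # install it only when the checksum differs.
         /var/cfengine/masterfiles/etc/ntp.conf
            dest=/etc/ntp.conf
            server=admin.hcoop.net
            mode=644
            owner=root
            group=root
            type=checksum
            define=restart_ntp

   shellcommands:
      restart_ntp::
         # Init script name varies by distribution; placeholder here.
         "/etc/init.d/ntp-server restart"

That sort of thing would cover the boring, admin-controlled files, while 
member-facing services like Apache stay under our custom setup.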
