[Hcoop-discuss] Server pricing
Justin S. Leitgeb
leitgebj at hcoop.net
Wed Feb 8 15:21:52 EST 2006
Sure, if we're OK with poor performance, it can be done relatively
cheaply. I'm guessing we'd just want a fairly large case (perhaps a
Dell 2850 or bigger; a 2850 can hold 6 disks), a SATA RAID controller,
and a bunch of disks in RAID 5 (maybe some of the new 500 GB disks). We
could start with 4 disks (3 live and a hot spare), and it should be
pretty reliable. Maybe around $2500 or so to start?
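
For a rough sense of the capacity that buys, here's some back-of-the-envelope
Python; the disk size and counts are just the hypothetical numbers above,
nothing settled:

DISK_GB = 500        # per-disk capacity (the "new 500 GB disks")
LIVE_DISKS = 3       # disks actively in the RAID 5 array
HOT_SPARES = 1       # idle spare, contributes no capacity

# RAID 5 spends one disk's worth of space on parity, spread across the
# array, so usable space is (n - 1) * disk_size for n live disks.
usable_gb = (LIVE_DISKS - 1) * DISK_GB
raw_gb = (LIVE_DISKS + HOT_SPARES) * DISK_GB
print("raw: %d GB, usable: %d GB" % (raw_gb, usable_gb))
# -> raw: 2000 GB, usable: 1000 GB

And since the 2850 chassis holds 6 disks, starting with 4 would leave two
empty bays for growing the array later.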
Sorry I wasn't on board with this earlier; I guess I just didn't
understand our (flexible) performance constraints.
Justin
Adam Chlipala wrote:
>Justin S. Leitgeb wrote:
>
>
>
>>I guess I'm having trouble answering the question because I think it
>>depends on the type of file that is shared. Surely we are going to
>>you think should have a global namespace, and how are these going to
>>you think should have a global namespace, and how are these going to
>>play into the rest of our system architecture?
>>
>>
>>
>>
>In my current vision, there are two kinds of files in the shared filesystem:
>
>1) Files whose primary logical location is in the shared filesystem.
>This includes HTML pages.
>2) Files whose primary logical location is on a local filesystem, but
>that we periodically synchronize to the shared filesystem for ease of
>centralized back-ups. This includes resolv.conf.
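
(For concreteness, a minimal sketch of what that periodic synchronization in
case 2 could look like, in Python; the /afs destination, per-host layout, and
file list are all invented for illustration:)

import os
import shutil
import socket

# Local config files worth keeping a centrally backed-up copy of (examples only).
FILES = ["/etc/resolv.conf", "/etc/exim4/exim4.conf"]
# Per-host area in the shared filesystem; this layout is hypothetical.
DEST_ROOT = "/afs/hcoop.net/backup/" + socket.gethostname()

for path in FILES:
    dest = DEST_ROOT + path              # e.g. .../backup/web1/etc/resolv.conf
    destdir = os.path.dirname(dest)
    if not os.path.isdir(destdir):
        os.makedirs(destdir)
    shutil.copy2(path, dest)             # copy contents plus timestamps

Run nightly from cron, something this small would probably cover that case.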
>
>
>
>>We could buy a scalable system like you're suggesting above, but:
>>
>>1) It would be expensive.
>>2) It's not necessary right now, in terms of space.
>>3) We would outgrow it at some point and need to buy a completely new server.
>>
>>
>>
>>
>I think it's entirely clear that we have current and future members who
>would love to be able to use arbitrarily large amounts of disk space.
>Keep in mind that many of us are deliberately avoiding using HCoop for
>purposes that would incur significant disk usage; it's not that we're
>all only interested in services with low disk usage, but rather that we
>impose low disk quotas at the moment. Can you elaborate on your point 2
>in light of this?
>
>Can it really be that expensive to have a slow fileserver to which it's
>relatively easy to add new disks? I'm fine with bad performance as long
>as we can get commercial-level reliability and protection against data
>loss.
>
>
>
>>If we are really just talking about a shared namespace for the
>>user-level files, perhaps we can do the following: make the public web
>>server both an AFS server and an AFS client. Then, just plan on adding
>>a separate fileserver later to join the AFS cell.
>>
>>Since you know AFS better than I do, do you think this would be possible?
>>
>>
>>
>>
>I don't know very much about AFS; I've only used it a lot and heard in
>an operating systems class that it has significant technical advantages
>over NFS.
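
(One property worth noting here: AFS volumes can be moved between fileservers
in a cell without changing the /afs paths clients see, which is what makes
"start with one combined server, add a dedicated fileserver later" plausible.
A rough sketch in Python, with invented server, partition, and volume names;
the exact vos invocation is an assumption to check against the OpenAFS docs:)

import subprocess

VOLUME = "user.example"                      # hypothetical member volume
OLD_SERVER, OLD_PART = "web.hcoop.net", "a"  # current combined web/AFS box (/vicepa)
NEW_SERVER, NEW_PART = "fs1.hcoop.net", "a"  # future dedicated fileserver

# vos move relocates the volume; clients keep using the same /afs paths.
subprocess.check_call([
    "vos", "move", VOLUME,
    OLD_SERVER, OLD_PART,                    # from-server, from-partition
    NEW_SERVER, NEW_PART,                    # to-server, to-partition
    "-localauth",
])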
>
>
>
>>Certainly, it's not for something like what we're doing with Apache
>>right now. But it would work for sendmail, DNS, hosts files, NTP...
>>Basically, while we have specialized needs for some services, there is a
>>lot that is "centralized", or at least able to be controlled by a small
>>group of admins in our network. It seems to me that in those parts of
>>our environment we could benefit from something like cfengine, eventually.
>>
>>
>>
>>
>DNS and Exim (no sendmail for us!) are also member-controlled, but at
>semantically shallower levels than Apache configuration. I agree that
>admin tools like the ones you're suggesting could be very helpful, though.
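
(A toy illustration of the cfengine-style idea, i.e. keeping canonical copies
of the centrally managed files in one place and noticing when a host drifts;
the master path and file list below are invented:)

import filecmp

MASTER = "/afs/hcoop.net/common/etc"     # canonical copies (hypothetical path)
MANAGED = ["resolv.conf", "ntp.conf"]    # files a small group of admins controls

for name in MANAGED:
    canonical = "%s/%s" % (MASTER, name)
    local = "/etc/%s" % name
    # shallow=False compares file contents rather than just stat() metadata.
    if not filecmp.cmp(canonical, local, shallow=False):
        print("%s differs from the canonical copy" % local)

A real tool like cfengine would repair the drift as well as report it.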
>