[HCoop-Discuss] comments on payment schemes
Justin S. Leitgeb
leitgebj at hcoop.net
Mon Jul 24 12:20:22 EDT 2006
j.c.hallgren at juno.com wrote:
>So exactly WHY are we going to a new, bigger capacity setup when we,
>in some cases (bandwidth), are not yet using the current fully? Can
>someone explain that? When I was in the corporate mainframe world, we
>would usually not buy a new CPU until we had been running 100% usage
>for months..and disks when we were at 80-90% of capacity..
>
>
1) Interserver, which hosts our primary web server, has been unreliable,
resulting in several periods where our services were entirely
unavailable. Although our "bandwidth capacity" is rather high here, it
doesn't do any good if our server is totally unavailable.
2) Fyodor is often overloaded, not because of CPU, but because of I/O
limitations as far as I can tell. This results in periods where CGI
timeouts occur, producing, e.g., wiki pages that show errors instead
of content. We should be able to resolve this problem with the faster
I/O subsystems of the new web servers that have been donated (if
anyone wants to modify this with more specific details, or a
different interpretation of what is occurring here, feel free). Here,
CPU utilization is misleading (are you looking at a specific number
in top?), as far as I can see, because other issues are more
critical. If you ever watch "vmstat 1" in a terminal on fyodor, you
will see the run queue sporadically fill up quite high, which is
usually one of the better indicators that the system has become
overloaded, regardless of the CPU utilization at that moment.
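If it helps anyone reproduce this, here is a minimal Python sketch
(the sample count is an arbitrary choice of mine, not something we
actually run on fyodor) that shells out to vmstat and collects the
run-queue ("r") column. A consistently high count of runnable
processes while the CPU looks idle points at I/O contention rather
than CPU saturation:

    #!/usr/bin/env python
    # Minimal sketch: sample "vmstat 1" a few times and report the
    # run-queue ("r") column, the first numeric field on each data line.
    import subprocess

    SAMPLES = 5  # arbitrary number of one-second samples

    # "vmstat 1 N" prints two header lines followed by N data lines.
    out = subprocess.run(["vmstat", "1", str(SAMPLES)],
                         capture_output=True, text=True,
                         check=True).stdout

    runqueue = []
    for line in out.splitlines():
        fields = line.split()
        # Data lines start with the run-queue count; the two header
        # lines ("procs ..." and "r b swpd ...") are skipped here.
        if fields and fields[0].isdigit():
            runqueue.append(int(fields[0]))

    print("run-queue samples:", runqueue)
    print("max run queue:", max(runqueue))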
So sheer quantity of bandwidth is really not the issue; what is
important is that we are moving to a site with much higher-quality
bandwidth overall. We are also hoping for better support, and for a
physical location that some of our members can access in case a
machine dies. The new location also gives us options to expand (e.g.,
into full racks) as our member base grows. It also lets us drop a
server that has slow I/O, isn't owned by the coop, and uses software
RAID instead of a more robust and, hopefully, more reliable
hot-swappable SCSI RAID solution.
>I know there is a whole lot I don't know about HCOOP's setup...but
>from what I can see, the setup under ABU/Xiolink would still well
>suffice for my needs...I've pulled my sites' webalizer stats for 17
>months (10 on new and 7 on ABU) into Excel..and yes, they've
>grown...but I would still be a very small user compared to our biggest
>users...so could I stay back on Interserver and let the power users
>move on to the new big system and let them pay for that capacity?
>
>
If any of us need to move, it would be an unnecessary cost for some
of us to "stay back". At this point we're talking about migrating all
of our sites and then cutting the old services to reduce our monthly
operating costs. Although it will be more expensive for a short
period of time, I'm optimistic that after we move, with a quick
membership drive, we can bring costs back down almost to where they
are now, including a buffer for upgrades and repairs (something that
is necessary for the inevitable case when a hard disk fails, for
example).
>Yes, I realize there is some HCOOP overhead in disk and bandwidth, and
>sharing that cost is not a problem...except that I'm not using any
>email on HC, so my usage of that is nil...but looking at the last
>month of Apache bandwidth stats for all users, one user accounts for
>51% of total, and the prior month, 38% of total..mine averages about
>1%...should we pay the same amount for HC overhead, such as wiki and
>portal, etc? Yes, but maybe adjusted for email use or not...but
>definitely for our individual site usage it should be by amount used!
>
>
Again, it's not about sheer bandwidth quantity. Quality of service is
more of a problem, and this has been an issue with our current
providers that we want to get away from. Peer 1, from our
evaluations, is one of the best colocation providers on the net, and
it should be a huge benefit for all of the users on our site,
regardless of individual utilization. It will also allow us to
improve the reliability of our services by building in things like
redundancy in the future. In the long run it should be cheaper for
existing members, and more attractive to members looking to host with
us (thereby reducing the price for everyone), knowing that we're
located with one of the top colocation providers on the market.
>I firmly believe we should have sufficient resources for current users
>and room for them to expand and some buffer for new users, but going
>overboard to handle hundreds of new users seems to be a bit overkill.
>
>
>
We're in a place where there doesn't seem to be a great "middle
ground", from what I've seen. We want to move to one of the largest
colocation providers, which will give us great service and lots of
room to grow. What we're starting with is the smallest colocation
package at Peer 1, which should be good for about 100 more users. If
there were a better middle option it might be smart to take it, but I
haven't seen one that would work for us. Initially I thought that
he.net would fit the bill, but there were unforeseen traps in that
setup: they would give us cheap space and bandwidth, but not enough
power to host the machines we needed.
I think that, through the donations we've received from members so
far (which would have cost us nearly $10,000 to buy new) and through
some sort of subsidization of membership dues for those who can't
help the coop out much during the transitional period, this will be a
great move for all of us. I realize that it will increase overhead in
the short term, and that this may make things difficult for some
members, but I hope that we can settle on a plan that subsidizes dues
for those who can't afford the transitional period, in order to
reduce undue loss of membership. And although the move may be
difficult, and traumatic for some members, based on the comments I
made above I still think it is not a luxury but a necessity, given
the unreliability of our current resources and our need to
accommodate a growing member base.
If you have any other suggestions or questions, let us know! :)
Justin