[HCoop-Discuss] On organizing people to get work done
Adam Chlipala
adamc at hcoop.net
Sat May 9 10:50:20 EDT 2009
David Snider wrote:
> OK, let's try again with the numbers on this site, because their calculator
> seems to be showing numbers way lower than what is listed here:
> http://aws.amazon.com/ec2/
>
> All Data Transfer In: $0.10 per GB (~$102/TB/mo)
> Data Transfer Out, first 10 TB per month: $0.17 per GB ($174.08/TB/mo)
> 1-yr reserved Large Linux instance: $1300 (~$108/mo), or 1-yr reserved
> Extra Large Linux instance: $2600 (~$216/mo)
>
> # Large Instance: 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores
> with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform
>
> # Extra Large Instance: 15 GB of memory, 8 EC2 Compute Units (4 virtual
> cores with 2 EC2 Compute Units each), 1690 GB of instance storage, 64-bit
> platform
>
> Which comes to $384.08/mo or $492/mo. Still a good deal cheaper than Peer 1,
> it seems.
>
OK, these numbers seem like a better comparison. I still think we
definitely want multiple servers/instances, for load-balancing and
fault-tolerance reasons if nothing else. Adding another instance or two
brings the price pretty darned close to what we're paying for a quarter
cabinet at Peer 1 now, and we could use our Peer 1 space to house
considerably more compute power and storage than we would get with even
three of the biggest EC2 instances.
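To make that concrete, here's a quick back-of-the-envelope script (just a
sketch: it assumes 1 TB of transfer each way per month and reuses the
rounded figures quoted above):

    # Back-of-the-envelope EC2 monthly costs, using the rounded figures
    # quoted above and assuming 1 TB in + 1 TB out per month.
    TRANSFER = 102.00 + 174.08   # 1 TB in + 1 TB out per month
    LARGE = 108.00               # $1300/yr reserved Large instance
    XLARGE = 216.00              # $2600/yr reserved Extra Large instance

    for n in (1, 2, 3):
        print("%d Large:  $%.2f/mo" % (n, n * LARGE + TRANSFER))
        print("%d XLarge: $%.2f/mo" % (n, n * XLARGE + TRANSFER))

By that reckoning, three of the biggest instances come to roughly $924/mo,
before any storage (S3/EBS) charges.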
Ignoring our staffing needs, the only additional cost factor in EC2's
favor is the cost of buying machines and replacement parts. With our
current number of members, I believe these costs are pretty much trivial
and have a negligible effect on the big picture. The big win with EC2 or
another virtualization platform is freeing our staff from the need to
buy hardware, monitor it for failures, and replace it when it breaks.
Even with EC2, we would need to implement our own scheme for dealing
with failed hardware, though we could count on Amazon to have
replacement (virtual) hardware available for us at all times, and to
implement their own durable backups of storage, etc.
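For what it's worth, such a scheme could start as simple as a watchdog that
probes each node and flags the ones that stop answering. A minimal sketch
(the host names are hypothetical, and a real version would call the EC2 API
to start a replacement instance and page staff instead of just printing):

    # Minimal health-check watchdog (a sketch only; host names are made up).
    import socket
    import time

    HOSTS = ["node1.example.hcoop.net", "node2.example.hcoop.net"]
    PORT = 22          # probe sshd as a cheap liveness signal
    FAIL_LIMIT = 3     # consecutive failures before flagging a node

    failures = dict((h, 0) for h in HOSTS)

    def alive(host):
        """True if we can open a TCP connection to host:PORT."""
        try:
            s = socket.create_connection((host, PORT), timeout=5)
            s.close()
            return True
        except socket.error:
            return False

    while True:
        for host in HOSTS:
            if alive(host):
                failures[host] = 0
            else:
                failures[host] += 1
                if failures[host] >= FAIL_LIMIT:
                    # A real implementation would launch a replacement
                    # instance here and alert the admin team.
                    print("ALERT: %s failed %d checks in a row"
                          % (host, failures[host]))
        time.sleep(60)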