[HCoop-Discuss] Our ideal architecture?

Davor Ocelic docelic at hcoop.net
Wed Jun 3 15:58:35 EDT 2009


On Wed, 03 Jun 2009 09:12:48 -0400
Adam Chlipala <adamc at hcoop.net> wrote:

> Davor Ocelic wrote:
> > OpenVZ has very detailed resource usage limits, which are also 
> > conveniently displayed as a table in one of the /proc files, where
> > a person can easily see existing containers, their limits, current
> > usage and number of times the limits were overstepped.
> >
> > Here's partial output from that file, /proc/user_beancounters
> > ("fitted" in 80 columns without wordwrap for your viewing pleasure):
> >
> >   uid  resource       held  maxheld   barrier     limit  failcnt
> >  104:  kmemsize     857224  9942062  12752512  12752512        0
> >        lockedpages       0        0       256       256        0
> 
> Does "uid" mean "container ID" here? Are containers somehow conflated 
> with users?

It's just a numerical UID (also called CTID) that you choose when
creating a VPS.

Like, vzctl create 104 ...
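
If anyone wants to script against that file, here's a minimal sketch of a
parser for a beancounters-style snippet. The sample text mirrors the two
resource rows quoted above; a real /proc/user_beancounters also has a
version header, a column-heading line, and a trailing "dummy" resource,
which you'd skip over the same way blank lines are skipped here.

```python
# Minimal sketch: parse /proc/user_beancounters-style rows into a dict
# keyed by container ID (the "uid"/CTID column), then by resource name.
sample = """\
  104:  kmemsize      857224  9942062  12752512  12752512        0
        lockedpages        0        0       256       256        0
"""

def parse_beancounters(text):
    counters = {}
    ctid = None
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0].endswith(":"):          # new container section, e.g. "104:"
            ctid = int(parts[0].rstrip(":"))
            counters[ctid] = {}
            parts = parts[1:]
        resource, held, maxheld, barrier, limit, failcnt = parts
        counters[ctid][resource] = {
            "held": int(held), "maxheld": int(maxheld),
            "barrier": int(barrier), "limit": int(limit),
            "failcnt": int(failcnt),
        }
    return counters

print(parse_beancounters(sample)[104]["kmemsize"]["failcnt"])  # 0
```

Handy for a monitoring cron job: a nonzero failcnt anywhere means a
container has hit one of its limits since boot.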

-doc

> > For example, if we were to give out VMs to members, I don't see ANY
> > advantage that HCoop would have compared to any other VM provider.
> 
> Well, there are the advantages of democratic organization, but those 
> don't seem to motivate that large a fraction of our current
> membership, so it probably also wouldn't matter much to potential VPS
> users.
> 
> One big one, though, could be the ability to mount HCoop AFS volumes.
> A fancy-pants distributed file system maintained by someone else is 
> nothing to shrug off. In this scenario, a VM user could also rely on
> us to back up all AFS volumes automatically and in a reasonable way.
> 
> There are a slew of other possible benefits of being near shared 
> machines, like being able to set up monitoring, development systems, 
> etc., in the same data center (with fast/free transfer), without
> needing to manage extra machines.
> 
> I do strongly agree that we should ignore support for any per-member 
> VPSes until after we are happy with services along our current model, 
> though.
> 
> _______________________________________________
> HCoop-Discuss mailing list
> HCoop-Discuss at lists.hcoop.net
> https://lists.hcoop.net/listinfo/hcoop-discuss
