[HCoop-Discuss] Our ideal architecture?

David Snider david at davidsnider.net
Tue Jun 2 17:59:39 EDT 2009


Here are my main concerns with OpenVZ:

1) A kernel patch gone wrong: instead of taking down one server, it takes
down many.
2) I think we should design our systems with the potential to let users
host their own non-Debian guests. If done correctly, we wouldn't need to
officially support them any more than we support people's custom daemons
now.
3) What do we lose by going with Xen? 


On Tue, 2 Jun 2009 23:34:15 +0200, Davor Ocelic <docelic at hcoop.net> wrote:
> On Tue, 02 Jun 2009 13:42:23 -0400
> Adam Chlipala <adamc at hcoop.net> wrote:
> 
>> David Snider wrote:
>> > Operating System Level Virtualization: (Ex. OpenVZ, FreeBSD Jails,
>> > Solaris Containers) The name "jail" that FreeBSD uses makes it pretty
>> > clear what this does. Each server shares an underlying operating
>> > system, but it is partitioned in such a way as to make it look and
>> > feel like it is on its own server. The advantage to this is that you
>> > don't have to duplicate a lot of commonly shared resources. The
>> > disadvantage is that it is difficult to control the individual
>> > utilization of each server. (I.e., if your web server is getting
>> > hammered, your mail server's performance suffers too.)
>>
>> This last disadvantage, if accurate, kills the attractiveness of the
>> approach for me.  docelic, do you agree that OpenVZ has this
>> problem? If so, why do you think OpenVZ would still be a good choice
>> for us?
> 
> OpenVZ has very detailed resource usage limits, which are also
> conveniently displayed as a table in one of the /proc files, where
> a person can easily see existing containers, their limits, current
> usage, and the number of times the limits were overstepped.
> 
> Here's partial output from that file, /proc/user_beancounters
> ("fitted" in 80 columns without wordwrap for your viewing pleasure):
> 
>  uid  resource       held maxheld   barrier                limit  failcnt
> 104:  kmemsize     857224 9942062  12752512             12752512        0
>       lockedpages       0       0       256                  256        0
>       privvmpages  198979  263110    600000               600000        0
>       shmpages       7485    7485     21504                21504        0
>       dummy             0       0         0                    0        0
>       numproc          67      86       240                  240        0
>       physpages     74237  137612         0  9223372036854775807        0
>       vmguarpages       0       0     33792  9223372036854775807        0
>       oomguarpages  74237  137612     26112  9223372036854775807        0
>       numtcpsock        4      21       360                  360        0
>       numflock          1       7       188                  206        0
>       numpty            0       2        16                   16        0
>       numsiginfo        0       2       256                  256        0
>       tcpsndbuf     69760  234352   1720320              2703360        0
>       tcprcvbuf     65536  229640   1720320              2703360        0
>       othersockbuf   4624   13080   1126080              2097152        0
>       dgramrcvbuf       0   23120    262144               262144        0
>       numothersock     18      22       360                  360        0
>       dcachesize   238506  387807   3409920              3624960        0
>       numfile        1207    1702      9312                 9312        0
>       dummy             0       0         0                    0        0
>       dummy             0       0         0                    0        0
> 
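A table like the one above can be checked mechanically. The following is a
minimal Python sketch (not part of the original discussion) that parses rows
in the column layout shown, assuming the file's header lines have already
been stripped, and flags any resource whose fail counter is nonzero; the
sample data is abridged from the output above:

```python
# Parse /proc/user_beancounters rows (header lines assumed stripped) and
# flag resources whose failcnt column is nonzero. The sample is abridged
# from the table quoted above, with one failcnt altered for illustration.

SAMPLE = """\
104:  kmemsize     857224 9942062  12752512             12752512        0
      numproc          67      86       240                  240        0
      numtcpsock        4      21       360                  360        3
"""

def parse_beancounters(text):
    """Yield (uid, resource, held, maxheld, barrier, limit, failcnt) tuples."""
    uid = None
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue
        if fields[0].endswith(":"):        # a new container section, e.g. "104:"
            uid = fields[0].rstrip(":")
            fields = fields[1:]
        resource = fields[0]
        held, maxheld, barrier, limit, failcnt = (int(x) for x in fields[1:6])
        yield uid, resource, held, maxheld, barrier, limit, failcnt

# Containers/resources that have hit a limit at least once:
failures = [(uid, res, fc)
            for uid, res, *_, fc in parse_beancounters(SAMPLE) if fc]
print(failures)
```

In the sample above, only numtcpsock has a nonzero fail counter, so only that
row is reported; running the same loop over the real file would show per-
container limit violations at a glance.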
> 
> I was very surprised that I didn't hear of OpenVZ (or any of its
> technical equivalents) until just recently, but since I've been using
> it, I'm very satisfied with the ease of installation, transparency of
> use, speed, maintainability, and just about everything else. No weak
> points so far.
> 
> I think we do not have the resources to run non-Linux (or even
> non-Debian) guest OSes in containers, so I think the single shared
> kernel is not a limitation at all.
> 
> And even if we were to support that, I think it'd warrant a separate
> machine for that kind of use.
> 
> If anyone else has additional comments, please chime in.
> 
> Cya,
> -doc
> 
> _______________________________________________
> HCoop-Discuss mailing list
> HCoop-Discuss at lists.hcoop.net
> https://lists.hcoop.net/listinfo/hcoop-discuss



