[HCoop-Discuss] Our ideal architecture?
Davor Ocelic
docelic at hcoop.net
Tue Jun 2 20:46:05 EDT 2009
On Tue, 2 Jun 2009 17:59:39 -0400
David Snider <david at davidsnider.net> wrote:
> Here are my main concerns with OpenVZ:
>
> 1) Kernel patch gone wrong. Instead of taking down one server it
> takes down many.
Well, here's what I think about this point: I'd like to stay
practical, not guard against *potential* errors of this kind.
Personally (with OpenVZ or otherwise), I am not interested in worrying
about problems in someone's kernel code. That is something the
upstream authors should take care of; their code should work, and it
should not be a deciding factor for us.
The pieces needed to support OpenVZ are, if I'm not mistaken, part of
the mainline kernel and do not require a separate patch. Besides,
OpenVZ, Xen, and every alternative each have a portion of kernel code
that may go wrong.
There are so many things in the kernel that might go wrong that it's
a flawed strategy to think this way. If we were discussing, for
example, Ext3 vs. SGI XFS, you could just as well ask "How can we
prevent bugs in Ext3?" Well, we can't; we take the past track record
and existing evidence and deduce that it will work for us the way it
has been working for others.
Your concern could be applied to any software component, kernel or
not. So, as long as we're talking about stable software releases
(which we are), it should not be a factor that counts either way.
> 2) I think we should design our systems with the potential to let
> users host their own non-Debian guests. If done correctly, we
> wouldn't need to officially support them any more than we support
> people's custom daemons now.
We could offer non-Debian (but still Linux) guests with OpenVZ; the
shared kernel does not mandate a particular Linux distribution, as
the sketch below illustrates.
(This assumes users would take over maintenance of their non-Debian
guests, because we have never discussed either volunteer or paid
staff who would handle multiple platforms.)
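For illustration, here's a minimal sketch of what creating such a
guest could look like with the standard vzctl tool (the container ID
105, the template name, and the cache path are my assumptions, so
adjust to taste; precreated templates are distributed via
download.openvz.org):

  # Fetch a precreated CentOS template into the host's template cache
  # (the cache path may be /vz/template/cache or
  # /var/lib/vz/template/cache, depending on the packaging)
  cd /var/lib/vz/template/cache
  wget http://download.openvz.org/template/precreated/centos-5-x86.tar.gz

  # Create and start a CentOS container next to our Debian ones
  vzctl create 105 --ostemplate centos-5-x86
  vzctl start 105

  # Confirm the guest really is CentOS
  vzctl exec 105 cat /etc/redhat-release

So technically it is a one-time download plus two commands; the open
question is who maintains the guest afterwards.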
But that is too far out for me to envision, as I am about 100% sure
that everything we have would need to change to support it (including
all our hardware, the existing admin team, and overall HCoop design
principles).
Instead of trying to be everything to everyone, we need to limit the
number of areas we cover, and then do them well.
And so, if we're not talking about non-Linux guests, this actually
counts in OpenVZ's favor.
> 3) What do we lose by going with Xen?
Primarily, three things:
- I don't like the overhead it adds; booting a whole BIOS emulation
  etc. just to run a system that's effectively the same as the host
  is a waste.
- Setup, administration, and use of OpenVZ are simpler.
- I don't remember exactly (you're welcome to correct me), but I
  think a Xen guest is a disk image you can't mount and access from
  the host OS (at least not while the VM is running), while with
  OpenVZ you see the container's files directly on the host
  filesystem, such as /var/lib/openvz/private/104/etc/hosts (see the
  sketch after this list).
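To make that last point concrete, here's a minimal sketch of
host-side access as I understand it (container ID 104 matches the
beancounters example quoted below; the exact private-area path
depends on packaging):

  # Read, or even edit, a container's files directly from the host,
  # without entering the container at all
  cat /var/lib/openvz/private/104/etc/hosts

  # And when you do want a root shell inside the container,
  # no sshd or emulated console is needed:
  vzctl enter 104

With a Xen guest you would, as far as I know, have to loop-mount the
disk image (and only while the VM is shut down) to get the same kind
of access.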
Actually, to conclude, I am not convinced that this whole talk about
VMs is going in the right direction, or at least that we set the
foundation right.
If we were to use virtualization for our internal organizational
purposes (sandboxing etc.), then it needs to be a lightweight
solution in the style of OpenVZ.
Or, if we were looking towards providing VMs for members (like the VM
Tech.Coop provides us for hosting secondary DNS on a separate
subnet), then that brings a whole new set of problems that we have
not discussed.
For example, if we were to give out VMs to members, I don't see ANY
advantage HCoop would have over any other VM provider. Our
infrastructure and offering would rank at the bottom of any random
set of providers you'd compare us with.
We're paying a mere $40/month for a VPS at Tech.Coop; they are by no
means the only choice in this performance/price range, and their
reliability so far has been basically 100%.
Drop in your comments.
Cya,
-doc
> On Tue, 2 Jun 2009 23:34:15 +0200, Davor Ocelic <docelic at hcoop.net>
> wrote:
> > On Tue, 02 Jun 2009 13:42:23 -0400
> > Adam Chlipala <adamc at hcoop.net> wrote:
> >
> >> David Snider wrote:
> >> > Operating System Level Virtualization: (Ex. OpenVZ, FreeBSD
> >> > Jails, Solaris Containers) The name "jail" that FreeBSD uses
> >> > makes it pretty clear what this does. Each server shares an
> >> > underlying operating system, but it is partitioned in such a
> >> > way as to make it look and feel like it is on its own server.
> >> > The advantage to this is that you don't have to duplicate a lot
> >> > of commonly shared resources. The disadvantage is that it is
> >> > difficult to control the individual utilization of each server.
> >> > (I.e., if your web server is getting hammered, your mail
> >> > server's performance suffers too.)
> >>
> >> This last disadvantage, if accurate, kills the attractiveness of
> >> the approach for me. docelic, do you agree that OpenVZ has this
> >> problem? If so, why do you think OpenVZ would still be a good
> >> choice for us?
> >
> > OpenVZ has very detailed resource usage limits, which are also
> > conveniently displayed as a table in one of the /proc files, where
> > a person can easily see the existing containers, their limits,
> > current usage, and the number of times the limits were exceeded.
> >
> > Here's partial output from that file, /proc/user_beancounters:
> >
> > uid   resource        held  maxheld  barrier                limit  failcnt
> > 104:  kmemsize      857224  9942062  12752512            12752512        0
> >       lockedpages        0        0       256                 256        0
> >       privvmpages   198979   263110    600000              600000        0
> >       shmpages        7485     7485     21504               21504        0
> >       dummy              0        0         0                   0        0
> >       numproc           67       86       240                 240        0
> >       physpages      74237   137612         0 9223372036854775807        0
> >       vmguarpages        0        0     33792 9223372036854775807        0
> >       oomguarpages   74237   137612     26112 9223372036854775807        0
> >       numtcpsock         4       21       360                 360        0
> >       numflock           1        7       188                 206        0
> >       numpty             0        2        16                  16        0
> >       numsiginfo         0        2       256                 256        0
> >       tcpsndbuf      69760   234352   1720320             2703360        0
> >       tcprcvbuf      65536   229640   1720320             2703360        0
> >       othersockbuf    4624    13080   1126080             2097152        0
> >       dgramrcvbuf        0    23120    262144              262144        0
> >       numothersock      18       22       360                 360        0
> >       dcachesize    238506   387807   3409920             3624960        0
> >       numfile         1207     1702      9312                9312        0
> >       dummy              0        0         0                   0        0
> >       dummy              0        0         0                   0        0
> >
> >
> > I was very surprised that I didn't hear of OpenVZ (or any of its
> > technical equivalents) up until just recently, but since I've been
> > using it, I'm very satisfied with the ease of installation,
> > transparency of use, speed, maintainability, and just about
> > everything else. No weak points so far.
> >
> > I think we do not have the resources to run non-Linux (or even
> > non-Debian) guest OSes in containers, so I think the single shared
> > kernel is not a limitation at all.
> >
> > And even if we were to support that, I think it'd warrant a separate
> > machine for that kind of use.
> >
> > If anyone else has additional comments, please chime in.
> >
> > Cya,
> > -doc