better platform for virtualization

Amos Shapira amos.shapira at gmail.com
Wed Jan 20 03:46:00 IST 2010


2010/1/20 Etzion Bar-Noy <ezaton at tournament.org.il>:
> PV drivers were released by Oracle, who run their own virtualization
> platform based on XenCommunity.

Just wondering - are these required to be installed separately when
trying to run Windows on CentOS?
We never quite managed to get that working when we tried (we got stuck
when none of our licenses were accepted).

> KVM is wasteful and requires VT support even for Linux machines. Not only
> that, but its virtualized hardware is old legacy hardware emulated by QEMU.
> The leading virtualization solutions currently on the market are VMware
> ESXi, which for a single server is free and very nice (although you had
> better make sure your hardware is in their support matrix), Microsoft
> Hyper-V, which is not Linux-friendly, if you care about that, and Citrix
> XenServer. You can compare these three on the internet.
> Less common, but aggressively in use (with the same performance profile),
> are OracleVM and the RHEL Xen platform (with Oracle PV drivers for
> Windows) - both Xen Community based.
> At the "low performance" level you would find VMware Server and Sun
> VirtualBox.
> KVM was designed for, and is focused on, VDI - desktop virtualization -
> which was Qumranet's focus in the past. RedHat cannot maintain two
> virtualization platforms.

I expect this targeting will change quickly since RH plan to replace
Xen with KVM (in 5.5 or 6.0?).

> Assuming you aim at Windows virtualization, and assuming you want the
> assurance of an enterprise-class solution, I would recommend any of the
> top three, with my favorite being Citrix XenServer (check the pros and
> cons of each and make your own decision in that matter). Other solutions
> are not complete, or will have near-zero support or knowledge base on
> the net.
> To make things clear - I earn my keep by performing various system and IT
> architecture-related integration operations, amongst them virtualization
> design and implementation.
> I have several customers with large XenServer farms, running
> mission-critical workloads, 24/7/365. The longest-running one has had his
> farm up since about March, containing about 40-50 VMs, including the
> company's Exchange server (~400 users), several Oracle DB environments,
> MSSQL, ADS, TS and more. These are memory-hungry applications, and the
> total memory allocated there (usually memory is the immediate bottleneck,
> followed by disk IO performance) is about 150GB of RAM, in total, with
> shared storage and the whole shebang.

I do not have the tools (or the wish) to dispute your recommendations
above - they sound reasonable and well baked - but if we are about to
demonstrate what can be achieved with the options, then let me describe
what I have running on CentOS 5 (and the old Xen 3.0 which comes with
it):

18 physical 2x quad-core servers with at least 64GB RAM (some have 80GB
but we decided to steer clear of 8GB DIMMs) in production.
4 more servers in a test environment, also with 64GB RAM each.

To do the calculation for you - that is upwards of 1408GB of RAM
(22 servers x 64GB each, before counting the 80GB machines).

A total of 70+ virtual guests.
~1.5TB of disk space, on 12 spindles, in each server.
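
For the curious, here is a quick back-of-the-envelope check of those
numbers (a minimal sketch in Python; it assumes the 64GB minimum per
server, so the real total is a bit higher given the 80GB boxes):

    # Capacity check for the figures quoted above. Assumes every server
    # has the 64GB minimum; some production boxes have 80GB, so the
    # actual total is somewhat higher.
    prod_servers = 18
    test_servers = 4
    min_ram_gb = 64

    total_ram_gb = (prod_servers + test_servers) * min_ram_gb
    print(total_ram_gb)  # 1408 - matches the "upwards of 1408GB" above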

That system has been up for over two years now. The only unplanned
downtime we had was due either to hardware issues or our own mistakes,
not the platform's fault.
We use DRBD to replicate disks, Linux-HA for heartbeat and failover, and
LVS for virtual IP load balancing.

Our uptime on the worst part of the system (an old customer portal we
are about to replace) is 99.93% over 11 months (5:20 hours of downtime
since February 2009, which is the beginning of the data from our current
monitoring tools). Uptime on the better parts (properly load balanced)
is up to 99.98% since around the same time (1:25 hours down since March
2009). These figures include system upgrades and migration to new
servers.
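
In case anyone wants to check the arithmetic, here is roughly how those
percentages fall out (a rough sketch in Python; the observation windows
are approximated in average-length months, since I only gave the start
months above):

    # Rough availability arithmetic for the uptime figures above.
    def availability(downtime_hours, window_days):
        window_hours = window_days * 24
        return 100.0 * (1 - downtime_hours / window_hours)

    # Worst part: 5h20m down over ~11 months (February 2009 - January 2010)
    print(round(availability(5 + 20/60.0, 11 * 30.4), 2))  # ~99.93

    # Better parts: 1h25m down over ~10 months (since March 2009)
    print(round(availability(1 + 25/60.0, 10 * 30.4), 2))  # ~99.98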

There is a lot to improve, but I think it's not bad, considering that
our company's entire income depends on these servers staying up.

BTW - I just googled a bit for "XenServer centos" and found two blog
posts:

1. One saying that XenServer actually is a modified CentOS, based on
the "Vendor: CentOS" field in most packages.
2. Another saying that you must have Windows for the graphical
management console, which uses .NET.

Cheers,

--Amos

> Other customers of mine use smaller environments, running one or two
> servers, with several tens of VMs on them. They were unable to reach
> anywhere near this capacity (number of VMs, performance of every single
> VM) using VMware Server, of course.
> Make your own pick. I can only recommend using the enterprise-class
> tools, especially if they can come for free, and/or you could buy
> support.
> Ez
>
> On Tue, Jan 19, 2010 at 10:11 PM, Oleg Goldshmidt <pub at goldshmidt.org>
> wrote:
>>
>> Jonathan Ben Avraham <yba at tkos.co.il> writes:
>>
>> > Hi Gilad,
>> > Why do you recommend KVM over XEN? Have you fiddled with both? Are
>> > there particular problems with XEN?
>>
>> Apart from the fact that Xen is a paravirtualization technology and
>> running a mission-critical Windows DomU is possible mostly in theory?
>>
>> Disclaimer: I have not touched Xen in a couple of years (since the
>> time when Windows guests were possible on KVM, at least in principle,
>> and not possible on Xen). I checked the current docs out of curiosity
>> and phrases like "PV drivers are being developed" and "you need to
>> disable driver signature checking on (every!) reboot" [original
>> emphasis] don't inspire much confidence.
>>
>> Other points Gilad made (KVM being much less intrusive and already in
>> the vanilla kernel and provided by RedHat) are very much valid.
>>
>> To the OP: Xen is not for you. I have no first-hand experience (beyond
>> a tiny bit of tinkering) with KVM. I have quite a bit of production
>> experience with VMware. I am surprised that most of the postings focus
>> on VMware Server (previously known as GSX). IIRC the OP mentioned
>> "crucial" servers but did *not* say $0 was a requirement. I'd go with
>> ESX for mission-critical stuff.
>>
>> For a serious installation I would not keep data (or system images,
>> for that matter, but YMMV) on directly attached disks.
>>
>> --
>> Oleg Goldshmidt | pub at goldshmidt.org
>>