<div dir="ltr"><br><br><div class="gmail_quote">On Tue, Jun 9, 2009 at 2:40 PM, Amos Shapira <span dir="ltr"><<a href="mailto:amos.shapira@gmail.com">amos.shapira@gmail.com</a>></span> wrote<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
1. It means that we'll have another application-level proxy where<br>
right now we are very happy with LVS's performance, transparency and<br>
handling of lots of other traffic going on (we also use it for<br>
internal VIP forwarding among the various components of the system).<br>
I.e. we'll need yet another technology to maintain in addition to LVS,<br>
which we are very happy with.</blockquote><div><br>Can't help with that, but I will mention that nginx has a load-balancing feature as well.<br><br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
<br>
2. One of the applications does lots of TCP/IP-level connection<br>
sniffing so it can't be used behind an application-level proxy, it<br>
must have a direct connection to the browser (LVS works for us since<br>
it acts like a bridge - doesn't touch anything inside the packet<br>
except for the destination MAC address).</blockquote><div><br>You mean that it connects back to the origin and then runs stuff on it? If that's the problem, nginx has an option to forward the originating IP address to the backend via an HTTP header, which you can then use in your application.<br>
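<br>A minimal sketch of what I mean, in nginx.conf (the backend address here is just an example, adjust to your setup):<br>

```nginx
# Inside a server { } block. The client's real IP is passed to the
# backend in the X-Real-IP and X-Forwarded-For headers.
location / {
    proxy_pass http://127.0.0.1:8080;            # example backend
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

<br>Your application then reads the header instead of the socket's peer address.<br>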
<br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><br>
<br>
Your suggestion to check nginx incoming vs. outgoing gave me another<br>
idea - I'll try to find whether I can get such stats from the LVS<br>
server, though LVS itself could be dropping connections due to lack of<br>
space to track all of them too (which closes the loop - how can I tell<br>
whether nginx' own server doesn't drop incoming connections which<br>
nginx itself doesn't know about?).<br>
</blockquote><div><br>I think that you can't (short of sniffing). However, nginx is designed for tens of thousands of simultaneous keep-alive sessions with a *very* small footprint, so that limit is very high. Above it, I think you'd need something hardware-assisted (like an F5 BigIP).<br>
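<br>The connection capacity is set explicitly in nginx.conf, so you can raise it well above the defaults; the numbers below are only illustrative:<br>

```nginx
# Main context. Total capacity is roughly
# worker_processes * worker_connections.
worker_processes  2;

events {
    worker_connections  10240;   # per worker; default is much lower
}
```
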
<br>From my experience, nginx allowed me to reduce the number of running Apache threads (the setup was Apache + mod_php5). I don't know how much of your content can be served directly by nginx; I dumped Apache completely, because nginx can do PHP via FastCGI. How much you'll reduce depends on what volume of the requests goes to your proprietary module :) But all static content handling can be done by nginx. When the number of Apache threads goes down, your CPU load goes down, and your memory usage goes WAY down. My numbers went from a load average of 10-20 with 6GB of RAM down to a load average of 1 with a few tens of MBs of RAM. My hardware problem became a bandwidth problem: our uplink couldn't sustain what we were now able to push. Of course, that's because I was pure PHP, so your case is different. <br>
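<br>For the record, the PHP-via-FastCGI setup is just a few lines (this sketch assumes a FastCGI PHP process already listening on 127.0.0.1:9000 and a docroot of /var/www/html, both of which you'd change):<br>

```nginx
# Inside a server { } block: hand .php requests to the FastCGI backend.
location ~ \.php$ {
    fastcgi_pass   127.0.0.1:9000;               # example PHP FastCGI address
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
    include        fastcgi_params;
}
```
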
<br>But really, setting it up is a breeze, and it's very easy to test. I will give you one tip, though: nginx uses statically sized buffers for everything, and it returns an HTTP error if a request is larger than the relevant buffer. So if your requests are bigger than plain page accesses (large cookies, file uploads, etc.), be sure to set the relevant buffers in nginx.conf to values larger than the defaults (the defaults are VERY light on resources...).<br>
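<br>The buffer directives I mean look like this; the values here are hypothetical, so tune them to your actual request sizes:<br>

```nginx
# http or server context. Defaults are small on purpose.
client_header_buffer_size   4k;      # default is 1k
large_client_header_buffers 4 16k;   # for big cookies / long URLs
client_body_buffer_size     128k;
client_max_body_size        20m;     # raise for large file uploads
```
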
</div><br>-- Shimi<br></div></div>