<div dir="ltr"><div class="gmail_quote">For a start, it looks like you put both trending and alerting in one basket. I'd keep them separate though alerting based on collected trending data is useful (e.g. don't alert just when a load threshold is crossed but only if the trending average for the part X minutes is above the threshold, or even only if it's derivative shows that it's not going to get better soon enough).</div>
<div class="gmail_quote"><br></div><div class="gmail_quote">See <a href="http://fractio.nl/2013/03/25/data-failures-compartments-pipelines/">http://fractio.nl/2013/03/25/data-failures-compartments-pipelines/</a> for high level theory about monitoring pipelines, and a bit of a pitch for Flapjack (and start by reading the first link from it). Lindsay is a very eloquent speaker and author in general and fun to watch and read.</div>
<div class="gmail_quote"><br></div><div class="gmail_quote">Bottom line from the above - I'm currently not aware of a single silver bullet to do everything you need for proper monitoring.</div><div class="gmail_quote">
<br></div><div class="gmail_quote">Last time I had to setup such a system (monitoring hundreds of servers for trends AND alerts) I used:</div><div class="gmail_quote">1. collectd (<a href="https://collectd.org/">https://collectd.org/</a>) for trending data - it can sample things down to once a second if you want</div>
<div class="gmail_quote">2. statsd (<a href="https://github.com/etsy/statsd/">https://github.com/etsy/statsd/</a>) for event counting (e.g. every time a Bamboo build plan started or stopped, or failed or succeeded, or other such events happend, an event was shot over to statsd to coalace and ship over to graphite). nice overview: <a href="http://codeascraft.com/2011/02/15/measure-anything-measure-everything/">http://codeascraft.com/2011/02/15/measure-anything-measure-everything/</a></div>
<div class="gmail_quote">3. both of the above send data to graphite (<a href="https://github.com/graphite-project">https://github.com/graphite-project</a>)</div><div class="gmail_quote">4. To track things like "upgraded Bamboo" events, we used tricks like <a href="http://codeascraft.com/2010/12/08/track-every-release/">http://codeascraft.com/2010/12/08/track-every-release/</a>. I since then learned about another project to help stick extra data with events (e.g. the version that Bamboo was upgraded to), but I can't find it right now.</div>
<div class="gmail_quote"><br></div><div class="gmail_quote">Here is a good summary with Graphite tips: <a href="http://kevinmccarthy.org/blog/2013/07/18/10-things-i-learned-deploying-graphite/">http://kevinmccarthy.org/blog/2013/07/18/10-things-i-learned-deploying-graphite/</a></div>
<div class="gmail_quote"><br></div><div class="gmail_quote">Alerts were generated by opsview (stay away from it, it was a mistake), which is yet another Nagios wrapper, many of the checks were based on reading the Graphite data whenever it was available (<a href="https://github.com/olivierHa/check_graphite">https://github.com/olivierHa/check_graphite</a>), but many also with plain old "nrpe" (e.g. "is the collectd/bamboo/apache/mysql/postgres/whatever process still running?").</div>
<div class="gmail_quote"><br></div><div class="gmail_quote">I don't like nagios specifically and its centralization in general (which affects all other "nagios replacement" impolementations) and would rather look for something else, perhaps Sensu (<a href="http://sensuapp.org/">http://sensuapp.org/</a>), though it wasn't ready last time I evaluated it about a year ago.</div>
<div class="gmail_quote"><br></div><div class="gmail_quote">My main beef with Nagios and the other central monitoring systems is that there is a central server which orchestrates most of the monitoring. This means that:</div>
<div class="gmail_quote">1. There is one server which has to go through all the checks on all monitored servers in each iteration to trigger a check. With hundreds of servers and thousands of checks this could take a very long time. It could be busy checking whether the root filesystem on a throw-away bamboo agent is full (while the previous check showed that it's far from that) while your central Maven repository is burning for a few minutes. And it wouldn't help to say "check Maven repo more often" because it'll be like the IBM vs. DEC boat race - "row harder!" (<a href="http://www.panix.com/~clp/humor/computers/programming/dec-ibm.html">http://www.panix.com/~clp/humor/computers/programming/dec-ibm.html</a>).</div>
<div class="gmail_quote">2. That server is a single point of failure, or you have to start using complex clustering solutions to keep it (and only one of it!) up - no parallel servers.</div><div class="gmail_quote">3. This server has to be very beefy to keep up with all the checks AND serve the results. In one of my former workplaces (second largest Australian ISP at the time) there was a cluster of four such servers with the checks carefully spread among them. Updating the cluster configuration was a delicate business and keeping them up wasn't pleasant and still it was very slow to serve the web interface.</div>
<div class="gmail_quote">4. The amount of traffic and load on the network and monitored servers is VERY wasteful - open TCP for each check, fork/exec via the NRPE agent, process exit, collect results, rinse, repeat, millions of times a day.</div>
<div class="gmail_quote"><br></div><div class="gmail_quote">Nagios doesn't encourage what it calls "passive monitoring" (i.e. the monitored servers initiate checks and send results, whether positive or negative, to a central server) and in general its protocol (NRPE) means that the central monitoring data collector is a bottleneck.</div>
<div class="gmail_quote"><br></div><div class="gmail_quote">Sensu, on the other hand, works around this by encouraging more "passive monitoring", i.e. each monitored server is responsible to monitor itself without the overhead of a central server doing the rounds and loading the network, it uses RabbitMQ message bus so its data transport and collection servers are more scalable (it also supports multiple servers), and it's OK with not sending anything if there is nothing to report (the system will still has "keepalive" checks (<a href="http://sensuapp.org/docs/0.12/keepalives">http://sensuapp.org/docs/0.12/keepalives</a>) to monitor for nodes which went down).</div>
<div class="gmail_quote"><br></div><div class="gmail_quote">But my favourite idea for scalability is the one presented in <a href="http://linux-ha.org/source-doc/assimilation/html/index.html">http://linux-ha.org/source-doc/assimilation/html/index.html</a> - each monitored host is responsible to monitor itself, without bothering anyone if there is nothing to write home about (so a bit like Sensu), and a couple of servers near it, so the "is host is alive" external monitoring is distributed across the network (and doesn't fall on the servers alone, like in Sensu), it also saves unnecessary network traffic. Unfortunately, it seems not to be ready yet (<a href="http://linux-ha.org/source-doc/assimilation/html/_release_descriptions.html">http://linux-ha.org/source-doc/assimilation/html/_release_descriptions.html</a>).</div>
<div class="gmail_quote"><br></div><div class="gmail_quote">More points:</div><div class="gmail_quote"><br></div><div class="gmail_quote">Lack of VPN - if you can't setup a "proper" vpn then you can always look at ssh vpn (e.g. Ubuntu instructions: <a href="https://help.ubuntu.com/community/SSH_VPN">https://help.ubuntu.com/community/SSH_VPN</a>), and if you can't be bothered with ssh_config "Tunnel"/"TunnelDevice" (ssh "-w" flag) then even a simple ssh port redirection with ssh -NT and autossh could do.</div>
<div class="gmail_quote"><br></div><div class="gmail_quote">Log concentration - look at Logstash (<a href="http://logstash.net/">http://logstash.net/</a>) for proper log collection and analysis.</div><div class="gmail_quote">
<br></div><div class="gmail_quote">Hope this gives you some ideas.</div><div class="gmail_quote"><br></div><div class="gmail_quote">--Amos</div><div class="gmail_quote"><br></div><div class="gmail_quote">On 16 Jun 2014 09:13, "Ori Berger" <<a href="mailto:linux-il@orib.net" target="_blank">linux-il@orib.net</a>> wrote:<br type="attribution">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
I'm looking for a single system that can track all of a remote server's health and performance status, store a detailed every-few-seconds history, and also trigger alarms in "bad" situations (such as no disk space). So far, I haven't found one comprehensive system that does it all. Things I'm interested in (in parentheses - how I track them at the moment; note that shinken is a nagios-compatible thing):<br>
<br>
Free disk space (shinken)<br>
Server load (shinken)<br>
Debian package and security updates (shinken)<br>
NTP drift (shinken)<br>
Service ping/reply time (shinken)<br>
Upload/download rates per interface (mrtg)<br>
Temperatures (sensord, hddtemp)<br>
Security logs, warning and alerts e.g. fail2ban, auth.log (rsync of log files)<br>
<br>
I have a few tens of servers to monitor, which I would like to do with a single piece of software and one console. Those servers are not all physically on the same network, nor do they have a VPN (so, no UDP), but TCP and SSH are mostly reliable, even if low bandwidth.<br>
<br>
Please note that shinken (much like nagios) doesn't really give a good visible history of the things it measures - only alerts. Also, it can't really sample things every few seconds - the lowest reasonable update interval (given shinken's architecture) is ~5 minutes for the things listed above.<br>
<br>
Any recommendations?<br>
<br>
Thanks in advance,<br>
Ori<br>
<br>
_______________________________________________<br>
Linux-il mailing list<br>
<a href="mailto:Linux-il@cs.huji.ac.il" target="_blank">Linux-il@cs.huji.ac.il</a><br>
<a href="http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il" target="_blank">http://mailman.cs.huji.ac.il/<u></u>mailman/listinfo/linux-il</a><br>
</blockquote></div>
</div>