<div dir="ltr">Again, inline.<br><br><div class="gmail_quote">On Mon, Aug 23, 2010 at 2:46 AM, Amos Shapira <span dir="ltr"><<a href="mailto:amos.shapira@gmail.com">amos.shapira@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im">On 23 August 2010 04:42, Etzion Bar-Noy <<a href="mailto:ezaton@tournament.org.il">ezaton@tournament.org.il</a>> wrote:<br>
><br>
> Inline<br>
><br>
> On Sun, Aug 22, 2010 at 4:41 PM, Amos Shapira <<a href="mailto:amos.shapira@gmail.com">amos.shapira@gmail.com</a>> wrote:<br>
>><br>
</div><div class="im">>> Yes. But apart from hoping that RHCS does its job right, there is<br>
>> nothing preventing other guests from mounting the same partition in<br>
>> parallel.<br>
>><br>
> Of course there is - LUN masking. Its purpose is exactly this. You expose the LUN *only* to the nodes which require direct access to it.<br>
<br>
</div>The issue is that we want all our internal servers to be able to mount<br>
any of the LUNs, as the "shards" of the multi-FS-multi-sqlite migrate<br>
among them.<br>
There is no way to dynamically mask and unmask LUNs; as it is now,<br>
adding a new LUN requires a server reboot.<br></blockquote><div><br></div><div>Adding LUNs does not require a reboot; only removing them does. However, if you let the cluster software manage all disk mount operations, as it should, multiple mounts will never happen, so there is no need for any special masking. Just let the cluster manage the mounts. </div>
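<div><br></div><div>As an illustration (a sketch only; the host numbers and device names are placeholders and depend on your HBAs), picking up a newly exposed LUN without a reboot usually just takes a SCSI bus rescan:</div><div><br></div><div># Rescan every SCSI host so the newly masked-in LUN is detected (run as root)</div><div>for host in /sys/class/scsi_host/host*; do echo "- - -" &gt; "$host/scan"; done</div><div><br></div><div># Confirm the new device showed up (e.g. as /dev/sdX)</div><div>cat /proc/scsi/scsi</div><div><br></div><div>If you use dm-multipath or iSCSI, the corresponding rescan/login step of that layer applies as well, but a reboot is still not needed.</div>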
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im"><br>
>> That's why we looked at cluster-aware file systems in the form of GFS but<br>
>> decided the performance hit was too great to go with it. A brief look<br>
>> at the OCFS installation steps gave the impression that it isn't trivial or<br>
>> well supported on CentOS 5.<br>
><br>
> Incorrect impression. Less than 5 minutes' work, even going extra slow, with coffee included. Simple enough?<br>
<br>
</div>Thanks for the info. Do you have a good pointer to follow?<br></blockquote><div>Nope. I have been doing it for several years now, and I can quote the procedure, if you like.</div><div>Download the OCFS2 package for your kernel from here:</div>
<div> <a href="http://oss.oracle.com/projects/ocfs2/files/RedHat/RHEL5/x86_64/1.4.7-1/">http://oss.oracle.com/projects/ocfs2/files/RedHat/RHEL5/x86_64/1.4.7-1/</a> (I am assuming an x86_64 platform and RHEL5.)</div>
<div>Download the OCFS2 tools from here:</div><div><a href="http://oss.oracle.com/projects/ocfs2-tools/files/RedHat/RHEL5/x86_64/1.4.4-1/">http://oss.oracle.com/projects/ocfs2-tools/files/RedHat/RHEL5/x86_64/1.4.4-1/</a> (no need for the devel and debug packages)</div>
<div><br></div><div>Install all three RPMs on all nodes.</div><div>On each node, run the command:</div><div>/etc/init.d/o2cb configure</div><div>Answer the questions.</div><div>On one node, run the GUI command (handy for first-timers): ocfs2console</div>
<div>Create a new cluster and add all nodes with their names and IPs (if you have multiple networks, select a non-busy one).</div><div>Press "Propagate cluster configuration" (or something similar, I don't recall the exact label), and you will have to enter the root password of each node for the "scp" command there. </div>
<div>Following that, your cluster is ready. Create a new partition, format it with OCFS2, and mount it. You will have to configure it to mount automatically via /etc/fstab, of course. A condensed command-line sketch of the whole procedure follows below.</div><div><br></div>
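<div>For reference, this is roughly what the above looks like on the command line (a sketch only; the package file names, the label shared01, the device /dev/sdb1 and the mount point /data are placeholders, and the cluster creation/propagation is the interactive step described above):</div><div><br></div><div># On every node: install the kernel module, tools and console RPMs downloaded above</div><div>rpm -ivh ocfs2-&lt;kernel-version&gt;*.rpm ocfs2-tools-*.rpm ocfs2console-*.rpm</div><div><br></div><div># On every node: configure and start the O2CB cluster stack, answering the prompts</div><div>/etc/init.d/o2cb configure</div><div><br></div><div># On one node: create the cluster and propagate /etc/ocfs2/cluster.conf (GUI), then create the filesystem once</div><div>ocfs2console</div><div>mkfs.ocfs2 -L shared01 /dev/sdb1</div><div><br></div><div># On every node: mount it, and add a matching /etc/fstab line such as:</div><div>mount -t ocfs2 /dev/sdb1 /data</div><div># /dev/sdb1  /data  ocfs2  _netdev,defaults  0 0</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">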
Have you compared OCFS performance to GFS and ext3?<br></blockquote><div><br></div><div>Nope, I never bothered. As I said, I avoid GFS as much as possible due to its complexity, and I avoid clustered filesystems altogether unless I must; when I do need one, ext3 is not an option anyway... </div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im"><br>
>> It is. As was pointed out earlier in this thread - a large part of the<br>
>> "file system" is about how the file system module "caches" information<br>
>> in memory and synchronises it to the disk. If it's not a cluster-aware<br>
>> file system then parallel mounting is equivalent to opening the LV or<br>
>> device by an application and randomly starting to write data on it.<br>
><br>
> True. But cluster-aware filesystems should be considered carefully. For some purposes, they ease management; for others, they complicate it.<br>
> I have never understood the appeal of GFS. It has few benefits and major drawbacks. I can't see any reason to use it, given the existing variety of clustered filesystems around.<br>
<br>
</div>What drawbacks, and compared to what? We noticed the performance<br>
problem when we tried it. What are the other *practical* alternatives?<br>
GFS comes built in and supported in RHEL/CentOS and RHCS, which is<br>
why it was almost the "natural choice" in our minds.<br>
<br></blockquote><div>Complexity. It increases the complexity of the RHCS configuration, and it requires RH Cluster even if you do not otherwise need it. It uses the cluster fencing methods, which is partly good and partly bad; I can't point to the exact problem. Its implementation is somewhat ugly and over-complex for my liking. A brief sketch of what GFS pulls in follows below.</div>
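<div><br></div><div>Roughly, this is the minimum that has to be in place before a GFS2 filesystem will even mount (a sketch only, not a full procedure; the cluster name "mycluster", the volume names, the journal count and the mount point are placeholders, and /etc/cluster/cluster.conf with its fencing devices must already exist):</div><div><br></div><div># On every node: the RHCS stack must be up first</div><div>service cman start      # membership, DLM and fencing, reads /etc/cluster/cluster.conf</div><div>service clvmd start     # clustered LVM, if the LUN sits under LVM</div><div><br></div><div># Once, on one node: the filesystem is tied to the cluster name and lock manager</div><div>mkfs.gfs2 -p lock_dlm -t mycluster:shared01 -j 3 /dev/vg_shared/lv_shared</div><div><br></div><div># On every node:</div><div>mount -t gfs2 /dev/vg_shared/lv_shared /data</div><div><br></div><div>Compare that with the OCFS2 steps above, where o2cb and its cluster.conf are the only extra pieces.</div>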
<div><br></div><div>Ez</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Thanks,<br>
<font color="#888888"><br>
--Amos<br>
</font></blockquote></div><br></div>