official way to load aoe module?
Etzion Bar-Noy
ezaton at tournament.org.il
Mon Aug 23 07:47:25 IDT 2010
Again, inline.
On Mon, Aug 23, 2010 at 2:46 AM, Amos Shapira <amos.shapira at gmail.com> wrote:
> On 23 August 2010 04:42, Etzion Bar-Noy <ezaton at tournament.org.il> wrote:
> >
> > Inline
> >
> > On Sun, Aug 22, 2010 at 4:41 PM, Amos Shapira <amos.shapira at gmail.com>
> wrote:
> >>
> >> Yes. But apart from hoping that RHCS does its job right, there is
> >> nothing preventing other guests from mounting the same partition in
> >> parallel.
> >>
> > Of course there is - LUN masking. Its purpose is exactly this. You expose
> the LUN *only* to the nodes which require direct access to it.
>
> The issue is that we want all our internal servers to be able to mount
> any of the LUNs, as the "shards" of the multi-FS-multi-sqlite setup migrate
> among them.
> There is no way to dynamically mask and unmask LUNs; as it is now,
> adding a new LUN requires a server reboot.
>
Adding LUNs does not require a reboot; removing one does. However, if you let
the cluster software manage all disk mount operations, as it should,
multiple mounts will never happen, so there is no need for any special
masking; just let the cluster manage the mounts.
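As a rough illustration (assuming a SCSI or FC attached LUN on RHEL5; the
host number below is just an example), you can trigger a rescan of the HBA
and pick up newly presented LUNs without rebooting:

# rescan host0 for new devices (wildcards for channel, target and LUN)
echo "- - -" > /sys/class/scsi_host/host0/scan

and if the storage is AoE, as in the subject of this thread, running
aoe-discover from the aoetools package does the equivalent.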
>
> >> That's why we looked at cluster-aware file systems in form of GFS but
> >> decided the performance hit is too great to go with it. A brief look
> >> at OCFS installation steps gave an impression that it isn't trivial or
> >> well supported on CentOS 5.
> >
> > Incorrect impression. Less than five minutes' work, even going extra
> > slow, with coffee included. Simple enough?
>
> Thanks for the info. Do you have a good pointer to follow?
>
Nope. I have been doing it for several years now, and I can quote the
procedure, if you like.
Download the OCFS2 package matching your kernel from here:
http://oss.oracle.com/projects/ocfs2/files/RedHat/RHEL5/x86_64/1.4.7-1/
(I assume an x86_64 platform and RHEL5.)
Download the OCFS2 tools from here:
http://oss.oracle.com/projects/ocfs2-tools/files/RedHat/RHEL5/x86_64/1.4.4-1/
(no need for the devel and debug packages)
Install all three RPMs on all nodes.
Run, on each node, the command:
/etc/init.d/o2cb configure
Answer the questions.
On one node, run the GUI command (easiest for first-timers): ocfs2console
Create a new cluster and add all the nodes with their names and IPs (if you
have multiple networks, pick a non-busy one).
Click "Propagate cluster configuration" (or something similar, I don't
recall the exact wording); you will have to enter the root password of each
node for the scp command it runs.
Following that, your cluster is ready. Create a new partition, format it
with mkfs.ocfs2, and mount it. You will have to configure it to mount
automatically via /etc/fstab, of course.
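To put the command-line side together, here is a rough sketch (the package
file names, the device /dev/sdb1, the label and the mount point /data are
only examples; adjust them to your environment):

# on every node: install the three RPMs and configure the o2cb stack
rpm -ivh ocfs2-`uname -r`-1.4.7-1.el5.x86_64.rpm \
         ocfs2-tools-1.4.4-1.el5.x86_64.rpm \
         ocfs2console-1.4.4-1.el5.x86_64.rpm
/etc/init.d/o2cb configure
chkconfig o2cb on
chkconfig ocfs2 on

# on one node only: create the filesystem on the shared LUN
mkfs.ocfs2 -L shared01 /dev/sdb1

# on every node: mount it now, and add an /etc/fstab entry for boot time
mount -t ocfs2 /dev/sdb1 /data
# /etc/fstab entry (one line):
/dev/sdb1   /data   ocfs2   _netdev,defaults   0 0

The _netdev option makes sure the mount is attempted only after the network,
and therefore the o2cb cluster stack, is up.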
> Have you compared OCFS performance to GFS and ext3?
>
Nope, I never bothered. As I said, I avoid GFS as much as possible due to
its complexity, and I avoid clustered filesystems altogether unless I must,
so ext3 is not an option in those cases anyway...
>
> >> It is. As was pointed out earlier in this thread - a large part of the
> >> "file system" is about how the file system module "caches" information
> >> in memory and synchronises it on the disk. If it's not a cluster-aware
> >> file system then parallel mounting is equivalent to opening the LV or
> >> device by an application and randomly starting to write data on it.
> >
> > True. But cluster-aware FS should be considered carefully. For some
> > purposes they ease management; for others they complicate it.
> > I have never quite understood the appeal of GFS. It has few benefits
> > and major drawbacks, and I can't see any reason to choose it from the
> > existing variety of clustered FS around.
>
> What drawbacks, and compared to what? We noticed the performance
> problem when we tried it. What other *practical* alternatives are there?
> GFS comes built-in and supported in RHEL/CentOS and RHCS, which is
> why it was almost the "natural choice" in our minds.
>
Complexity. It increases the complexity of the RHCS configuration, and it
requires the RH cluster stack even if you do not otherwise need it. It uses
the cluster fencing methods, which is partly good and partly bad; I can't
put my finger on it exactly. Its implementation is somewhat ugly and
over-complex for my liking.
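To illustrate the point: even a minimal two-node GFS setup needs a
cluster.conf along these lines (a sketch only; the cluster name, node names,
and the fence agent and its parameters are placeholders you would have to
fill in for your real hardware):

<?xml version="1.0"?>
<cluster name="mycluster" config_version="1">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1">
      <fence>
        <method name="1">
          <device name="ilo1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2" nodeid="2">
      <fence>
        <method name="1">
          <device name="ilo2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="ilo1" agent="fence_ilo" ipaddr="..." login="..." passwd="..."/>
    <fencedevice name="ilo2" agent="fence_ilo" ipaddr="..." login="..." passwd="..."/>
  </fencedevices>
</cluster>

plus cman and the GFS services running on every node before you can even
mount the filesystem. Compare that with the o2cb procedure above.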
Ez
> Thanks,
>
> --Amos
>