Moving away from cron jobs to some workflow manager

Rabin Yasharzadehe rabin at rabin.io
Tue Jun 19 11:20:48 IDT 2018


I'll have to read the documentation to learn more,
but this project seems barely maintained, with only minor releases every
year or two (the last release was two years ago).
That doesn't inspire much confidence.

But I'll check it out.
Thanks.

--
Rabin


On Tue, 19 Jun 2018 at 09:43, Marc Volovic <marcvolovic at icloud.com> wrote:

> Hi,
>
> It is intended for submitting multiple jobs for crunching. But you can use
> it (SOGE) or SLURM for issuing jobs and dependent jobs, even on a single
> machine acting as both issuer and execution host. It can be used as a
> resource-aware job scheduler.
>
> —mav
> Marc Volovic
> marcvolovic at me.com
>
>
>
> > On 19 Jun 2018, at 9:41, Rabin Yasharzadehe <rabin at rabin.io> wrote:
> >
> > I've never heard of it,
> > but from reading the manual and the 10-minute presentation,
> > it seems more suitable for data crunching, where you have a pool
> > of compute resources and you submit jobs to it.
> >
> > My case is a bit different: I have many jobs which need to run
> > (orchestrated) on their own hosts,
> > each with a specific environment and setup.
> >
> >
> > --
> > Rabin
> >
> >
> > On Tue, 19 Jun 2018 at 09:10, Marc Volovic <marcvolovic at icloud.com>
> wrote:
> > Why not a minimal deployment of SGE, which would also allow you to run
> > multiple executors?
> >
> > https://arc.liv.ac.uk/trac/SGE
> >
> > —mav
> > Marc Volovic
> > marcvolovic at me.com
> >
> >
> >
> > > On 19 Jun 2018, at 9:06, Rabin Yasharzadehe <rabin at rabin.io> wrote:
> > >
> > > Hi all,
> > >
> > > I need some advice. Currently I have a huge cron file which schedules
> > > tasks one after another, and each task is positioned precisely (with
> > > some room for error) to start after its predecessor.
> > >
> > > So if one job starts at 00:00, goes and fetches some files, and takes
> > > 3 minutes, the next job will start right after it, at ~00:05,
> > > and so on.
> > >
> > > The problem is that if one job fails, all the other jobs which depend
> > > on it will fail as well, and then I get a shitload of alerts. The
> > > worst part is that if I have to manually restart a batch process, I
> > > need to go to each machine and manually start each job in the right
> > > order.
> > >
> > > I was looking to solve this problem with a tool which can manage this
> > > "pipeline", and I came across several tools like Luigi and (Apache)
> > > Airflow. I started with Luigi but it didn't look right for the job,
> > > and then I tried Airflow, but was not able to make it work; the job
> > > queue never executed. =(
> > >
> > > Does anyone have experience with Airflow, or another tool like it
> > > which they can recommend?
> > > My needs are to be able to execute my current shell/python/php scripts
> > > and build the dependencies between them, and I prefer the option of
> > > remote execution so that I will have a central place to manage and
> > > monitor all workflows which are executed on several nodes.
> > >
> > > Thanks in advance,
> > > Rabin
> > > _______________________________________________
> > > Linux-il mailing list
> > > Linux-il at cs.huji.ac.il
> > > http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il
> >
>
>
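For reference, the kind of dependency chain described in the original
message can be expressed as an Airflow DAG. This is only a minimal sketch,
assuming the Airflow 1.x API that was current at the time; the DAG name,
task ids, and script paths are made up for illustration:

```python
# Minimal sketch: the cron chain rewritten as an Airflow DAG (Airflow 1.x API).
# Task ids and script paths below are hypothetical; substitute your own scripts.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    "start_date": datetime(2018, 6, 1),
    "retries": 1,  # retry a failed task once before alerting
}

with DAG("nightly_batch",
         default_args=default_args,
         schedule_interval="0 0 * * *",  # one schedule replaces the fixed cron offsets
         catchup=False) as dag:

    # Trailing space keeps Airflow from treating a path ending in ".sh"
    # as a Jinja template file to load.
    fetch = BashOperator(task_id="fetch_files",
                         bash_command="/opt/jobs/fetch.sh ")
    process = BashOperator(task_id="process_files",
                           bash_command="/opt/jobs/process.sh ")
    report = BashOperator(task_id="send_report",
                          bash_command="/opt/jobs/report.sh ")

    # Each task starts only when its predecessor succeeds, so one failure
    # stops the chain instead of cascading into a pile of alerts.
    fetch >> process >> report
```

With a DAG like this, rerunning a failed batch is a matter of clearing the
failed task from the UI (or `airflow clear`) rather than logging in to each
machine and starting jobs by hand in the right order.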