Rufus Scheduler: Job Scheduler for Ruby (at, Cron, in and Every Jobs)


Job scheduler for Ruby (at, cron, in and every jobs).

It uses threads.

Note: are you maybe looking for the README of rufus-scheduler 2.x? (especially if you're using Dashing, which is stuck on rufus-scheduler 2.0.24)


# quickstart.rb

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.in '3s' do
  puts 'Hello... Rufus'
end

scheduler.join
  # let the current thread join the scheduler thread
  # (please note that this join should be removed when scheduling
  # in a web application (Rails and friends) initializer)

(run with ruby quickstart.rb)

Various forms of scheduling are supported:

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

# ...

scheduler.in '10d' do
  # do something in 10 days
end

scheduler.at '2030/12/12 23:30:00' do
  # do something at a given point in time
end

scheduler.every '3h' do
  # do something every 3 hours
end

scheduler.every '3h10m' do
  # do something every 3 hours and 10 minutes
end

scheduler.cron '5 0 * * *' do
  # do something every day, five minutes after midnight
  # (see "man 5 crontab" in your terminal)
end

# ...

Rufus-scheduler uses fugit for parsing time strings, et-orbi for pairing time and tzinfo timezones.


Rufus-scheduler (out of the box) is an in-process, in-memory scheduler. It uses threads.

It does not persist your schedules. When the process is gone and the scheduler instance with it, the schedules are gone.

A rufus-scheduler instance will go on scheduling while it is present among the objects in a Ruby process. To make it stop scheduling you have to call its #shutdown method.

related and similar gems

  • Whenever - let cron call back your Ruby code, trusted and reliable cron drives your schedule
  • ruby-clock - a clock process / job scheduler for Ruby
  • Clockwork - rufus-scheduler inspired gem
  • Crono - an in-Rails cron scheduler
  • PerfectSched - highly available distributed cron built on Sequel and more

(please note: rufus-scheduler is not a cron replacement)

note about the 3.0 line

It's a complete rewrite of rufus-scheduler.

There is no EventMachine-based scheduler anymore.

I don't know what this Ruby thing is, where are my Rails?

I'll drive you right to the tracks.

notable changes:

  • As said, no more EventMachine-based scheduler
  • scheduler.every('100') { ... } will schedule every 100 seconds (previously, it would have been 0.1s). This aligns rufus-scheduler with Ruby's sleep(100)
  • The scheduler isn't catching the whole of Exception anymore, only StandardError
  • The error_handler is #on_error (instead of #on_exception), by default it now prints the details of the error to $stderr (used to be $stdout)
  • Rufus::Scheduler::TimeOutError renamed to Rufus::Scheduler::TimeoutError
  • Introduction of "interval" jobs. Whereas "every" jobs are like "every 10 minutes, do this", interval jobs are like "do that, then wait for 10 minutes, then do that again, and so on"
  • Introduction of a lockfile: true/filename mechanism to prevent multiple schedulers from executing
  • "discard_past" is on by default. If the scheduler (its host) sleeps for 1 hour and a every '10m' job is on, it will trigger once at wakeup, not 6 times (discard_past was false by default in rufus-scheduler 2.x). No intention to re-introduce discard_past: false in 3.0 for now.
  • Introduction of Scheduler #on_pre_trigger and #on_post_trigger callback points

getting help

So you need help. People can help you, but first help them help you, and don't waste their time. Provide a complete description of the issue. If it works on A but not on B and others have to ask you: "so what is different between A and B" you are wasting everyone's time.

"hello", "please" and "thanks" are not swear words.

Go read how to report bugs effectively, twice.


on Gitter

You can find help via chat in the combined fugit, et-orbi, and rufus-scheduler Gitter chat room.

Please be courteous.


Yes, issues can be reported in rufus-scheduler issues, I'd actually prefer bugs in there. If there is nothing wrong with rufus-scheduler, a Stack Overflow question is better.



Rufus-scheduler supports five kinds of jobs: in, at, every, interval and cron jobs.

Most of the rufus-scheduler examples show block scheduling, but it's also OK to schedule handler instances or handler classes.

in, at, every, interval, cron

In and at jobs trigger once.

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.in '10d' do
  puts "10 days reminder for review X!"
end

scheduler.at '2014/12/24 2000' do
  puts "merry xmas!"
end

In jobs are scheduled with a time interval; they trigger after that time has elapsed. At jobs are scheduled with a point in time; they trigger when that point in time is reached (better to choose a point in the future).

Every, interval and cron jobs trigger repeatedly.

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every '3h' do
  puts "change the oil filter!"
end

scheduler.interval '2h' do
  puts "thinking..."
  puts sleep(rand * 1000)
  puts "thought."
end

scheduler.cron '00 09 * * *' do
  puts "it's 9am! good morning!"
end
Every jobs try hard to trigger following the frequency they were scheduled with.

Interval jobs trigger, execute and then trigger again after the interval elapsed. (every jobs time between trigger times, interval jobs time between trigger termination and the next trigger start).

Cron jobs are based on the venerable cron utility (man 5 crontab). They trigger following a pattern given in (almost) the same language cron uses.
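The timing difference between "every" and "interval" can be sketched in plain Ruby (no rufus-scheduler involved; the helper names below are hypothetical, for illustration only):

```ruby
# "every" jobs: the period runs from one trigger *start* to the next,
# so a slow block does not shift the schedule.
def next_every_trigger(previous_trigger, period)
  previous_trigger + period
end

# "interval" jobs: the period runs from the *end* of one execution to
# the start of the next, so the wait time is always the full period.
def next_interval_trigger(previous_run_end, period)
  previous_run_end + period
end

trigger_at = 0.0    # the job triggered at t=0 (seconds)
run_took   = 120.0  # its block ran for 2 minutes
period     = 600.0  # '10m'

p next_every_trigger(trigger_at, period)               # => 600.0
p next_interval_trigger(trigger_at + run_took, period) # => 720.0
```

With an every job the next run lands 10 minutes after the previous trigger; with an interval job it lands 10 minutes after the previous run finished.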


#schedule_x vs #x

schedule_in, schedule_at, schedule_cron, etc. will return the new Job instance.

in, at, cron, etc. will return the new Job instance's id (a String).

job_id =
  scheduler.in '10d' do
    # ...
  end

job = scheduler.job(job_id)

# versus

job =
  scheduler.schedule_in '10d' do
    # ...
  end

# also

job =
  scheduler.in '10d', job: true do
    # ...
  end

#schedule and #repeat

Sometimes it pays to be less verbose.

The #schedule method schedules an at, in or cron job. It decides based on its input. It returns the Job instance.

scheduler.schedule '10d' do; end.class
  # => Rufus::Scheduler::InJob

scheduler.schedule '2013/12/12 12:30' do; end.class
  # => Rufus::Scheduler::AtJob

scheduler.schedule '* * * * *' do; end.class
  # => Rufus::Scheduler::CronJob

The #repeat method schedules and returns an EveryJob or a CronJob.

scheduler.repeat '10d' do; end.class
  # => Rufus::Scheduler::EveryJob

scheduler.repeat '* * * * *' do; end.class
  # => Rufus::Scheduler::CronJob

(Yes, no combination here gives back an IntervalJob).

schedule blocks arguments (job, time)

A schedule block may be given 0, 1 or 2 arguments.

The first argument is "job", it's simply the Job instance involved. It might be useful if the job is to be unscheduled for some reason.

scheduler.every '10m' do |job|

  status = determine_pie_status

  if status == 'burnt' || status == 'cooked'
    stop_oven
    takeout_pie
    job.unschedule
  end
end

The second argument is "time", it's the time when the job got cleared for triggering (not Job#next_time).

Note that time is the time when the job got cleared for triggering. If there are mutexes involved, now = mutex_wait_time + time...

"every" jobs and changing the next_time in-flight

It's OK to change the next_time of an every job in-flight:

scheduler.every '10m' do |job|

  # ...

  status = determine_pie_status

  job.next_time = Time.now + 30 * 60 if status == 'burnt'
    # if burnt, wait 30 minutes for the oven to cool a bit
end

It should work as well with cron jobs, not so with interval jobs whose next_time is computed after their block ends its current run.

scheduling handler instances

It's OK to pass any object, as long as it responds to #call(), when scheduling:

class Handler
  def self.call(job, time)
    p "- Handler called for #{job.id} at #{time}"
  end
end

scheduler.in '10d', Handler

# or

class OtherHandler
  def initialize(name)
    @name = name
  end
  def call(job, time)
    p "* #{time} - Handler #{@name.inspect} called for #{job.id}"
  end
end

oh = OtherHandler.new('Doe')

scheduler.every '10m', oh
scheduler.in '3d5m', oh

The call method must accept 2 (job, time), 1 (job) or 0 arguments.

Note that time is the time when the job got cleared for triggering. If there are mutexes involved, now = mutex_wait_time + time...
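The arity-based dispatch described above can be sketched like this (an illustrative stand-in, not rufus-scheduler's actual code; the handler classes are hypothetical examples):

```ruby
# Handlers with different #call arities (hypothetical examples).
class TwoArgHandler;  def call(job, time); [ job, time ]; end; end
class OneArgHandler;  def call(job); [ job ]; end; end
class ZeroArgHandler; def call; []; end; end

# Dispatch: hand the handler as many of (job, time) as it accepts.
def dispatch(handler, job, time)
  case handler.method(:call).arity
  when 2 then handler.call(job, time)
  when 1 then handler.call(job)
  else handler.call
  end
end

p dispatch(TwoArgHandler.new, :job, :time)  # => [:job, :time]
p dispatch(OneArgHandler.new, :job, :time)  # => [:job]
p dispatch(ZeroArgHandler.new, :job, :time) # => []
```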

scheduling handler classes

One can pass a handler class to rufus-scheduler when scheduling. Rufus will instantiate it and that instance will be available via job#handler.

class MyHandler
  attr_reader :count
  def initialize
    @count = 0
  end
  def call(job)
    @count += 1
    puts ". #{self.class} called at #{Time.now} (#{@count})"
  end
end

job = scheduler.schedule_every '35m', MyHandler

job.handler
  # => #<MyHandler:0x000000021034f0>
job.handler.count
  # => 0

If you want to keep that "block feeling":

job_id =
  scheduler.every '10m', Class.new do
    def call(job)
      puts ". hello #{self.inspect} at #{Time.now}"
    end
  end

pause and resume the scheduler

The scheduler can be paused via the #pause and #resume methods. One can determine if the scheduler is currently paused by calling #paused?.

While paused, the scheduler still accepts schedules, but no schedule will get triggered as long as #resume isn't called.

job options

name: string

Sets the name of the job.

scheduler.cron '*/15 8 * * *', name: 'Robert' do |job|
  puts "A, it's #{Time.now} and my name is #{job.name}"
end

job1 =
  scheduler.schedule_cron '*/30 9 * * *', n: 'temporary' do |job|
    puts "B, it's #{Time.now} and my name is #{job.name}"
  end

# ...

job1.name = 'Beowulf'

blocking: true

By default, jobs are triggered in their own, new threads. When blocking: true, the job is triggered in the scheduler thread (a new thread is not created). Yes, while a blocking job is running, the scheduler is not scheduling.

overlap: false

Since, by default, jobs are triggered in their own new threads, job instances might overlap. For example, a job that takes 10 minutes and is scheduled every 7 minutes will have overlaps.

To prevent overlap, one can set overlap: false. Such a job will not trigger if one of its instances is already running.

The :overlap option is considered before the :mutex option when the scheduler is reviewing jobs for triggering.

mutex: mutex_instance / mutex_name / array of mutexes

When a job with a mutex triggers, the job's block is executed with the mutex around it, preventing other jobs with the same mutex from entering (it makes the other jobs wait until it exits the mutex).

This is different from overlap: false, which is, first, limited to instances of the same job, and, second, doesn't make the incoming job instance block/wait but makes it give up.

:mutex accepts a mutex instance or a mutex name (String). It also accepts an array of mutex names / mutex instances. It allows for complex relations between jobs.

Array of mutexes: original idea and implementation by Rainux Luo

Note: creating lots of different mutexes is OK. Rufus-scheduler will place them in its Scheduler#mutexes hash... And they won't get garbage collected.
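A sketch of how such a name-to-mutex registry can work in plain Ruby (illustrative; `mutex_for` and the constants are hypothetical, not rufus-scheduler's API):

```ruby
# A registry mapping mutex names (Strings) to Mutex instances, so two
# jobs scheduled with the same mutex name end up sharing one Mutex.
MUTEXES = {}
MUTEXES_LOCK = Mutex.new

def mutex_for(name)
  return name if name.is_a?(Mutex) # mutex instances pass through
  MUTEXES_LOCK.synchronize { MUTEXES[name] ||= Mutex.new }
end

p mutex_for('db').equal?(mutex_for('db')) # => true (shared)
p mutex_for('db').equal?(mutex_for('io')) # => false
```

Since the registry hash holds on to every mutex it ever created, the entries are never garbage collected, which is the behaviour the note above describes.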

The :overlap option is considered before the :mutex option when the scheduler is reviewing jobs for triggering.

timeout: duration or point in time

It's OK to specify a timeout when scheduling some work. After the time specified, it gets interrupted via a Rufus::Scheduler::TimeoutError.

scheduler.in '10d', timeout: '1d' do
  begin
    # ... do something
  rescue Rufus::Scheduler::TimeoutError
    # ... that something got interrupted after 1 day
  end
end

The :timeout option accepts either a duration (like "1d" or "2w3d") or a point in time (like "2013/12/12 12:00").
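For illustration, here is a simplified duration parser in that spirit (this is not fugit's actual algorithm, just a sketch handling the w/d/h/m/s units):

```ruby
# Seconds per unit: weeks, days, hours, minutes, seconds.
UNIT_SECONDS = {
  'w' => 7 * 24 * 3600, 'd' => 24 * 3600, 'h' => 3600, 'm' => 60, 's' => 1 }

# '2w3d' => [ ['2', 'w'], ['3', 'd'] ] => 14 * 86400 + 3 * 86400
def duration_to_seconds(string)
  string.scan(/(\d+)([wdhms])/).sum { |n, u| n.to_i * UNIT_SECONDS[u] }
end

p duration_to_seconds('1d')    # => 86400
p duration_to_seconds('2w3d')  # => 1468800
p duration_to_seconds('3h10m') # => 11400
```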

:first_at, :first_in, :first, :first_time

This option is for repeat jobs (cron / every) only.

It's used to specify the point in time after which the repeat job should trigger for the first time.

In the case of an "every" job, this will be the first time (modulo the scheduler frequency) the job triggers. For a "cron" job as well, the :first will point to the first time the job has to trigger, the following trigger times are then determined by the cron string.

scheduler.every '2d', first_at: Time.now + 10 * 3600 do
  # ... every two days, but start in 10 hours
end

scheduler.every '2d', first_in: '10h' do
  # ... every two days, but start in 10 hours
end

scheduler.cron '00 14 * * *', first_in: '3d' do
  # ... every day at 14h00, but start after 3 * 24 hours
end

:first, :first_at and :first_in all accept a point in time or a duration (number or time string). Use the symbol you think makes your schedule more readable.

Note: it's OK to change the first_at (a Time instance) directly:

job.first_at = Time.now + 10
job.first_at = Rufus::Scheduler.parse('2029-12-12')

The first argument (in all its flavours) accepts a :now or :immediately value. That schedules the first occurrence for immediate triggering. Consider:

require 'rufus-scheduler'

s = Rufus::Scheduler.new

n = Time.now; p [ :scheduled_at, n, n.to_f ]

s.every '3s', first: :now do
  n = Time.now; p [ :in, n, n.to_f ]
end

s.join


that'll output something like:

[:scheduled_at, 2014-01-22 22:21:21 +0900, 1390396881.344438]
[:in, 2014-01-22 22:21:21 +0900, 1390396881.6453865]
[:in, 2014-01-22 22:21:24 +0900, 1390396884.648807]
[:in, 2014-01-22 22:21:27 +0900, 1390396887.651686]
[:in, 2014-01-22 22:21:30 +0900, 1390396890.6571937]

:last_at, :last_in, :last

This option is for repeat jobs (cron / every) only.

It indicates the point in time after which the job should unschedule itself.

scheduler.cron '5 23 * * *', last_in: '10d' do
  # ... do something every evening at 23:05 for 10 days
end

scheduler.every '10m', last_at: Time.now + 10 * 3600 do
  # ... do something every 10 minutes for 10 hours
end

scheduler.every '10m', last_in: 10 * 3600 do
  # ... do something every 10 minutes for 10 hours
end

:last, :last_at and :last_in all accept a point in time or a duration (number or time string). Use the symbol you think makes your schedule more readable.

Note: it's OK to change the last_at (nil or a Time instance) directly:

job.last_at = nil
  # remove the "last" bound

job.last_at = Rufus::Scheduler.parse('2029-12-12')
  # set the last bound

times: nb of times (before auto-unscheduling)

One can tell how many times a repeat job (CronJob or EveryJob) is to execute before unscheduling by itself.

scheduler.every '2d', times: 10 do
  # ... do something every two days, but not more than 10 times
end

scheduler.cron '0 23 * * *', times: 31 do
  # ... do something every day at 23:00 but do it no more than 31 times
end

It's OK to assign nil to :times to make sure the repeat job is not limited. It's useful when the :times is determined at scheduling time.

scheduler.cron '0 23 * * *', times: (nolimit ? nil : 10) do
  # ...
end

The value set by :times is accessible in the job. It can be modified anytime.

job =
  scheduler.cron '0 23 * * *' do
    # ...
  end

# later on...

job.times = 10
  # 10 days and it will be over

Job methods

When calling a schedule method, the id (String) of the job is returned. Longer schedule methods return Job instances directly. Calling the shorter schedule methods with job: true also returns Job instances instead of job ids (Strings).

  require 'rufus-scheduler'

  scheduler = Rufus::Scheduler.new

  job_id =
    scheduler.in '10d' do
      # ...
    end

  job =
    scheduler.schedule_in '1w' do
      # ...
    end

  job =
    scheduler.in '1w', job: true do
      # ...
    end

Those Job instances have a few interesting methods / properties:

id, job_id

Returns the job id.

job = scheduler.schedule_in('10d') do; end

job.id
  # => "in_1374072446.8923042_0.0_0"

scheduler

Returns the scheduler instance itself.

opts

Returns the options passed at the Job creation.

job = scheduler.schedule_in('10d', tag: 'hello') do; end

job.opts
  # => { :tag => 'hello' }

original

Returns the original schedule.

job = scheduler.schedule_in('10d', tag: 'hello') do; end

job.original
  # => '10d'

callable, handler

callable() returns the scheduled block (or the call method of the callable object passed in lieu of a block)

handler() returns nil if a block was scheduled and the instance scheduled otherwise.

# when passing a block

job =
  scheduler.schedule_in('10d') do
    # ...
  end

job.handler
  # => nil
job.callable
  # => #<Proc:0x00000001dc6f58@/home/jmettraux/whatever.rb:115>


# when passing something else than a block

class MyHandler
  attr_reader :counter
  def initialize
    @counter = 0
  end
  def call(job, time)
    @counter = @counter + 1
  end
end

job = scheduler.schedule_in('10d', MyHandler.new)

job.callable
  # => #<Method: MyHandler#call>
job.handler
  # => #<MyHandler:0x0000000163ae88 @counter=0>

source_location

Added to rufus-scheduler 3.8.0.

Returns the array [ 'path/to/file.rb', 123 ] like Proc#source_location does.

require 'rufus-scheduler'

scheduler =

job = scheduler.schedule_every('2h') { p Time.now }

p job.source_location
  # ==> [ '/home/jmettraux/rufus-scheduler/test.rb', 6 ]

scheduled_at

Returns the Time instance when the job got created.

job = scheduler.schedule_in('10d', tag: 'hello') do; end

job.scheduled_at
  # => 2013-07-17 23:48:54 +0900

last_time

Returns the last time the job triggered (is usually nil for AtJob and InJob).

job = scheduler.schedule_every('10s') do; end

job.scheduled_at
  # => 2013-07-17 23:48:54 +0900
job.last_time
  # => nil (since we've just scheduled it)

# after 10 seconds

job.scheduled_at
  # => 2013-07-17 23:48:54 +0900 (same as above)
job.last_time
  # => 2013-07-17 23:49:04 +0900

previous_time

Returns the previous #next_time.

scheduler.every('10s') do |job|
  puts "job scheduled for #{job.previous_time} triggered at #{Time.now}"
  puts "next time will be around #{job.next_time}"
  puts "."
end

last_work_time, mean_work_time

The job keeps track of how long its work was in the last_work_time attribute. For a one time job (in, at) it's probably not very useful.

The attribute mean_work_time contains a computed mean work time. It's recomputed after every run (if it's a repeat job).

next_times(n)

Returns an array of EtOrbi::EoTime instances (Time instances with a designated time zone), listing the n next occurrences for this job.

Please note that for "interval" jobs, a mean work time is computed each time and it's used by this #next_times(n) method to approximate the next times beyond the immediate next time.

unschedule

Unschedules the job, preventing it from firing again and removing it from the schedule. This doesn't prevent a running thread for this job from running until its end.

threads

Returns the list of threads currently "hosting" runs of this Job instance.

kill

Interrupts all the work threads currently running for this job instance. They discard their work and are free for their next run (of whatever job).

Note: this doesn't unschedule the Job instance.

Note: if the job is pooled for another run, a free work thread will probably pick up that next run and the job will appear as running again. You'd have to unschedule and kill to make sure the job doesn't run again.

running?

Returns true if there is at least one running Thread hosting a run of this Job instance.

scheduled?

Returns true if the job is scheduled (is due to trigger). For repeat jobs it should return true until the job gets unscheduled. "at" and "in" jobs will respond with false as soon as they start running (execution triggered).

pause, resume, paused?, paused_at

These four methods are only available to CronJob, EveryJob and IntervalJob instances. One can pause or resume such jobs thanks to these methods.

job =
  scheduler.schedule_every('10s') do
    # ...
  end

job.pause
  # => 2013-07-20 01:22:22 +0900
job.paused?
  # => true
job.paused_at
  # => 2013-07-20 01:22:22 +0900

job.resume
  # => nil

tags

Returns the list of tags attached to this Job instance.

By default, returns an empty array.

job = scheduler.schedule_in('10d') do; end
job.tags
  # => []

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.tags
  # => [ 'hello' ]

[]=, [], key?, has_key?, keys, values, and entries

Threads have thread-local variables, similarly Rufus-scheduler jobs have job-local variables. Those are more like a dict with thread-safe access.

job =
  @scheduler.schedule_every '1s' do |job|
    job[:timestamp] = Time.now.to_f
    job[:counter] ||= 0
    job[:counter] += 1
  end

sleep 3.6

job[:counter]
  # => 3

job.key?(:timestamp) # => true
job.has_key?(:timestamp) # => true
job.keys # => [ :timestamp, :counter ]

Locals can be set at schedule time:

job0 =
  @scheduler.schedule_cron '*/15 12 * * *', locals: { a: 0 } do
    # ...
  end
job1 =
  @scheduler.schedule_cron '*/15 13 * * *', l: { a: 1 } do
    # ...
  end

One can fetch the Hash directly with Job#locals. Of course, direct manipulation is not thread-safe.

job.locals.entries.each do |k, v|
  p "#{k}: #{v}"
end

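A minimal sketch of such a thread-safe job-local store (illustrative, not the gem's implementation; the class name is hypothetical):

```ruby
# A job-local store: a Hash guarded by a Mutex, in the spirit of
# Job#[] / Job#[]= described above.
class JobLocals
  def initialize(h={}); @h = h; @mutex = Mutex.new; end
  def [](k); @mutex.synchronize { @h[k] }; end
  def []=(k, v); @mutex.synchronize { @h[k] = v }; end
  def key?(k); @mutex.synchronize { @h.key?(k) }; end
  def keys; @mutex.synchronize { @h.keys }; end
end

locals = JobLocals.new
locals[:counter] = 1
p locals[:counter]      # => 1
p locals.key?(:counter) # => true
```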
call

Job instances have a #call method. It simply calls the scheduled block or callable immediately.

job =
  @scheduler.schedule_every '10m' do |job|
    # ...
  end

job.call
Warning: the Scheduler#on_error handler is not involved. Error handling is the responsibility of the caller.

If the call has to be rescued by the error handler of the scheduler, call(true) might help:

require 'rufus-scheduler'

s = Rufus::Scheduler.new

def s.on_error(job, err)
  if job
    p [ 'error in scheduled job', job.class, job.original, err.message ]
  else
    p [ 'error while scheduling', err.message ]
  end
rescue
  p $!
end

job =
  s.schedule_in('1d') do
    fail 'again'
  end

job.call(true)
  # true lets the error_handler deal with error in the job call

AtJob and InJob methods

time

Returns when the job will trigger (hopefully).

next_time

An alias for time.

EveryJob, IntervalJob and CronJob methods

next_time

Returns the next time the job will trigger (hopefully).

count

Returns how many times the job fired.

EveryJob methods

frequency

It returns the scheduling frequency. For a job scheduled "every 20s", it's 20.

It's used to determine if the job frequency is higher than the scheduler frequency (it raises an ArgumentError if that is the case).

IntervalJob methods

interval

Returns the interval scheduled between each execution of the job.

Every jobs use a time duration between each start of their execution, while interval jobs use a time duration between the end of an execution and the start of the next.

CronJob methods

brute_frequency

An expensive method to run, it's brute. It caches its results. By default it runs for 2017 (a non leap-year).

  require 'rufus-scheduler'

  Rufus::Scheduler.parse('* * * * *').brute_frequency
    # => #<Fugit::Cron::Frequency:0x00007fdf4520c5e8
    #      @span=31536000.0, @delta_min=60, @delta_max=60,
    #      @occurrences=525600, @span_years=1.0, @yearly_occurrences=525600.0>
      # Occurs 525600 times in a span of 1 year (2017) and 1 day.
      # There are at least 60 seconds between "triggers" and at most 60 seconds.

  Rufus::Scheduler.parse('0 12 * * *').brute_frequency
    # => #<Fugit::Cron::Frequency:0x00007fdf451ec6d0
    #      @span=31536000.0, @delta_min=86400, @delta_max=86400,
    #      @occurrences=365, @span_years=1.0, @yearly_occurrences=365.0>
  Rufus::Scheduler.parse('0 12 * * *').brute_frequency.to_debug_s
    # => "dmin: 1D, dmax: 1D, ocs: 365, spn: 52W1D, spnys: 1, yocs: 365"
      # 365 occurrences, at most 1 day between each, at least 1 day.

The CronJob#frequency method found in rufus-scheduler < 3.5 has been retired.

looking up jobs

Scheduler#job(job_id)

The scheduler #job(job_id) method can be used to look up Job instances.

  require 'rufus-scheduler'

  scheduler = Rufus::Scheduler.new

  job_id =
    scheduler.in '10d' do
      # ...
    end

  # later on...

  job = scheduler.job(job_id)

Scheduler #jobs #at_jobs #in_jobs #every_jobs #interval_jobs and #cron_jobs

These are methods for looking up lists of scheduled Job instances.

Here is an example:

  # let's unschedule all the at jobs

  scheduler.at_jobs.each(&:unschedule)

Scheduler#jobs(tag: / tags: x)

When scheduling a job, one can specify one or more tags attached to the job. These can be used to look up the job later on.

  scheduler.in '10d', tag: 'main_process' do
    # ...
  end

  scheduler.in '10d', tags: [ 'main_process', 'side_dish' ] do
    # ...
  end

  # ...

  jobs = scheduler.jobs(tag: 'main_process')
    # find all the jobs with the 'main_process' tag

  jobs = scheduler.jobs(tags: [ 'main_process', 'side_dish' ])
    # find all the jobs with the 'main_process' AND 'side_dish' tags

Scheduler#running_jobs

Returns the list of Job instances that have currently running instances.

Whereas the other "_jobs" methods scan the scheduled job list, this method scans the thread list to find the jobs. It thus comprises jobs that are running but are no longer scheduled (that happens for at and in jobs).

misc Scheduler methods

Scheduler#unschedule(job_or_job_id)

Unschedules a job, given directly or by its id.

Scheduler#shutdown

Shuts down the scheduler, ceases any scheduler/triggering activity.

Scheduler#shutdown(wait: true)

Shuts down the scheduler, waits (blocks) until all the jobs cease running.

Scheduler#shutdown(wait: n)

Shuts down the scheduler, waits (blocks) at most n seconds until all the jobs cease running. (Jobs are killed after n seconds have elapsed).

Scheduler#shutdown(:kill)

Kills all the jobs (threads) and then shuts the scheduler down. Radical.

Scheduler#down?

Returns true if the scheduler has been shut down.

Scheduler#started_at

Returns the Time instance at which the scheduler got started.

Scheduler #uptime / #uptime_s

Returns the count of seconds for which the scheduler has been running.

#uptime_s returns this count in a String easier to grasp for humans, like "3d12m45s123".
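For illustration, here is a simplified seconds-to-string formatter in that spirit (not the gem's exact output format, which for instance appends milliseconds; `format_uptime` is a hypothetical helper):

```ruby
# Break an uptime in seconds down into days/hours/minutes/seconds
# and render a compact string.
def format_uptime(seconds)
  out = +''
  { 'd' => 86400, 'h' => 3600, 'm' => 60, 's' => 1 }.each do |unit, size|
    count, seconds = seconds.divmod(size)
    out << "#{count}#{unit}" if count > 0
  end
  out.empty? ? '0s' : out
end

p format_uptime(3 * 86400 + 12 * 60 + 45) # => "3d12m45s"
p format_uptime(59)                       # => "59s"
```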

Scheduler#join

Lets the current thread join the scheduling thread in rufus-scheduler. The thread comes back when the scheduler gets shut down.

#join is mostly used in standalone scheduling scripts (or tiny one-file examples). Calling #join from a web application initializer will probably hijack the main thread and prevent the web application from being served. Do not put a #join in such a web application initializer file.

Scheduler#threads

Returns all the threads associated with the scheduler, including the scheduler thread itself.

Scheduler#work_threads(query: :all/:active/:vacant)

Lists the work threads associated with the scheduler. The query option defaults to :all.

  • :all : all the work threads
  • :active : all the work threads currently running a Job
  • :vacant : all the work threads currently not running a Job

Note that the main schedule thread will be returned if it is currently running a Job (ie one of those blocking: true jobs).

Scheduler#scheduled?(job_or_job_id)

Returns true if the arg is a currently scheduled job (see Job#scheduled?).

Scheduler#occurrences(time0, time1)

Returns a hash { job => [ t0, t1, ... ] } mapping jobs to their potential trigger time within the [ time0, time1 ] span.

Please note that, for interval jobs, the #mean_work_time is used, so the result is only a prediction.

Scheduler#timeline(time0, time1)

Like #occurrences but returns a list [ [ t0, job0 ], [ t1, job1 ], ... ] of time + job pairs.

dealing with job errors

The easy, job-granular way of dealing with errors is to rescue and deal with them immediately. The two next sections show examples. Skip them for explanations on how to deal with errors at the scheduler level.

block jobs

As said, jobs could take care of their errors themselves.

scheduler.every '10m' do
  begin
    # do something that might fail...
  rescue => e
    $stderr.puts '-' * 80
    $stderr.puts e.message
    $stderr.puts e.backtrace
    $stderr.puts '-' * 80
  end
end

callable jobs

Jobs are not only shrunk to blocks; here is how the above would look with a dedicated class.

scheduler.every '10m', Class.new do
  def call(job)
    # do something that might fail...
  rescue => e
    $stderr.puts '-' * 80
    $stderr.puts e.message
    $stderr.puts e.backtrace
    $stderr.puts '-' * 80
  end
end

TODO: talk about callable#on_error (if implemented)

(see scheduling handler instances and scheduling handler classes for more about those "callable jobs")

Rufus::Scheduler#stderr=

By default, rufus-scheduler intercepts all errors (that inherit from StandardError) and dumps abundant details to $stderr.

If, for example, you'd like to divert that flow to another file (descriptor), you can reassign $stderr for the current Ruby process

$stderr ='/var/log/myapplication.log', 'ab')

or, you can limit that reassignment to the scheduler itself

scheduler.stderr ='/var/log/myapplication.log', 'ab')

Rufus::Scheduler#on_error(job, error)

We've just seen that, by default, rufus-scheduler dumps error information to $stderr. If one needs to completely change what happens in case of error, it's OK to overwrite #on_error

def scheduler.on_error(job, error)

  Logger.warn("intercepted error in #{job.id}: #{error.message}")
end

On Rails, the on_error method redefinition might look like:

def scheduler.on_error(job, error)

  Rails.logger.error(
    "err#{error.object_id} rufus-scheduler intercepted #{error.inspect}" +
    " in job #{job.inspect}")
  error.backtrace.each_with_index do |line, i|
    Rails.logger.error(
      "err#{error.object_id} #{i}: #{line}")
  end
end


Rufus::Scheduler #on_pre_trigger and #on_post_trigger callbacks

One can bind callbacks before and after jobs trigger:

s = Rufus::Scheduler.new

def s.on_pre_trigger(job, trigger_time)
  puts "triggering job #{job.id}..."
end

def s.on_post_trigger(job, trigger_time)
  puts "triggered job #{job.id}."
end

s.every '1s' do
  # ...
end

The trigger_time is the time at which the job triggers. It might be a bit before Time.now.

Warning: these two callbacks are executed in the scheduler thread, not in the work threads (the threads where the job execution really happens).

Rufus::Scheduler#around_trigger

One can create an around callback which will wrap a job:

def s.around_trigger(job)
  t = Time.now
  puts "Starting job #{job.id}..."
  yield
  puts "job #{job.id} finished in #{Time.now - t} seconds."
end

The around callback is executed in the work thread.

Rufus::Scheduler#on_pre_trigger as a guard

Returning false in on_pre_trigger will prevent the job from triggering. Returning anything else (nil, -1, true, ...) will let the job trigger.

Note: your business logic should go in the scheduled block itself (or the scheduled instance). Don't put business logic in on_pre_trigger. Return false for admin reasons (backend down, etc), not for business reasons that are tied to the job itself.

def s.on_pre_trigger(job, trigger_time)

  return false if Backend.down?

  puts "triggering job #{job.id}..."
end

Rufus::Scheduler.new options

frequency: 0.3

By default, rufus-scheduler sleeps 0.300 second between every step. At each step it checks for jobs to trigger and so on.

The :frequency option lets you change that 0.300 second to something else.

scheduler = Rufus::Scheduler.new(frequency: 5)

It's OK to use a time string to specify the frequency.

scheduler = Rufus::Scheduler.new(frequency: '2h10m')
  # this scheduler will sleep 2 hours and 10 minutes between every "step"

Use with care.
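A toy version of the stepping loop described above (illustrative, not rufus-scheduler's actual main loop):

```ruby
# Wake up every `frequency` seconds, fire the jobs that are due, sleep.
frequency = 0.3
fired = 0

jobs = [ { next_time: Time.now + 0.1, block: lambda { fired += 1 } } ]

2.times do # a real scheduler loops until #shutdown
  now = Time.now
  jobs.each do |job|
    next if job[:next_time] > now
    job[:block].call          # a real scheduler hands this to a work thread
    job[:next_time] = now + 3 # pretend it's an "every 3s" job
  end
  sleep frequency
end

p fired # => 1 (the job came due during the second step)
```

This also shows why a frequency of '2h10m' is dangerous: a job can trigger at most once per step, so nothing fires between wakeups.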

lockfile: "mylockfile.txt"

This feature only works on OSes that support the flock (man 2 flock) call.

Starting the scheduler with lockfile: '.rufus-scheduler.lock' will make the scheduler attempt to create and lock the file .rufus-scheduler.lock in the current working directory. If that fails, the scheduler will not start.

The idea is to guarantee only one scheduler (in a group of schedulers sharing the same lockfile) is running.

This is useful in environments where the Ruby process holding the scheduler gets started multiple times.

If the lockfile mechanism here is not sufficient, you can plug your custom mechanism. It's explained in advanced lock schemes below.
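The underlying technique is a non-blocking exclusive flock, which Ruby's core File class exposes directly (a sketch, using a lockfile path of our own choosing; as noted above, it only works on OSes that support flock):

```ruby
require 'tmpdir'

path = File.join(Dir.tmpdir, 'flock-sketch.lock')

# First "scheduler" creates and locks the file (flock returns 0).
f1 = File.open(path, File::RDWR | File::CREAT)
p f1.flock(File::LOCK_EX | File::LOCK_NB) # => 0 (lock acquired)

# A second open file description on the same path cannot lock it,
# so a second scheduler would refuse to start.
f2 = File.open(path, File::RDWR | File::CREAT)
p f2.flock(File::LOCK_EX | File::LOCK_NB) # => false (already locked)
```

The lock is released automatically when the holding process exits, which is what makes this scheme robust against crashes.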

scheduler_lock

(since rufus-scheduler 3.0.9)

The scheduler lock is an object that responds to #lock and #unlock. The scheduler calls #lock when starting up. If the answer is false, the scheduler stops its initialization work and won't schedule anything.

Here is a sample scheduler lock that only lets the scheduler on a given host (identified by its fully-qualified hostname) start:

class HostLock
  def initialize(lock_name)
    @lock_name = lock_name
  end
  def lock
    @lock_name == `hostname -f`.strip
  end
  def unlock
    true
  end
end

scheduler =
  Rufus::Scheduler.new(scheduler_lock: HostLock.new('scheduler-host.example.com'))

By default, the scheduler_lock is an instance of Rufus::Scheduler::NullLock, with a #lock that returns true.

trigger_lock

(since rufus-scheduler 3.0.9)

The trigger lock is an object that responds to #lock. The scheduler calls that method on the job lock right before triggering any job. If the answer is false, the trigger doesn't happen, the job is not done (at least not in this scheduler).

Here is a (stupid) PingLock example, it'll only trigger if an "other host" is not responding to ping. Do not use that in production, you don't want to fork a ping process for each trigger attempt...

class PingLock
  def initialize(other_host)
    @other_host = other_host
  end
  def lock
    ! system("ping -c 1 #{@other_host}")
  end
end

scheduler =
  Rufus::Scheduler.new(trigger_lock: PingLock.new('other-host.example.com'))

By default, the trigger_lock is an instance of Rufus::Scheduler::NullLock, with a #lock that always returns true.

As explained in advanced lock schemes, another way to tune that behaviour is by overriding the scheduler's #confirm_lock method. (You could also do that with an #on_pre_trigger callback).

max_work_threads

In rufus-scheduler 2.x, by default, each job triggering received its own, brand new, thread of execution. In rufus-scheduler 3.x, execution happens in a pooled work thread. The max work thread count (the pool size) defaults to 28.

One can set this maximum value when starting the scheduler.

scheduler = Rufus::Scheduler.new(max_work_threads: 77)

It's OK to increase the :max_work_threads of a running scheduler.

scheduler.max_work_threads += 10

Rufus::Scheduler.singleton, Rufus::Scheduler.s

Do not want to store a reference to your rufus-scheduler instance? Then Rufus::Scheduler.singleton can help; it returns a singleton instance of the scheduler, initialized the first time this class method is called.

Rufus::Scheduler.singleton.every '10s' { puts "hello, world!" }

It's OK to pass initialization arguments (like :frequency or :max_work_threads), but they are only taken into account the first time .singleton is called.

Rufus::Scheduler.singleton(max_work_threads: 77)
Rufus::Scheduler.singleton(max_work_threads: 277) # no effect

The .s is a shortcut for .singleton.

Rufus::Scheduler.s.every '10s' { puts "hello, world!" }
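The singleton here is just a memoized class-level instance, which is why later arguments are ignored. A stripped-down sketch of that pattern (illustrative, not the gem's actual code):

```ruby
# Memoized-singleton sketch, mirroring the behaviour described above:
# the first call creates the instance, later calls return it unchanged.
class Sched
  def self.singleton(opts = {})
    @singleton ||= new(opts) # opts only honoured on the first call
  end

  class << self
    alias s singleton # .s as a shortcut for .singleton
  end

  attr_reader :opts

  def initialize(opts)
    @opts = opts
  end
end

a = Sched.singleton(max_work_threads: 77)
b = Sched.s(max_work_threads: 277) # same instance, opts unchanged
a.equal?(b)                        # => true
a.opts                             # => { max_work_threads: 77 }
```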

advanced lock schemes

As seen above, rufus-scheduler proposes the :lockfile system out of the box. If in a group of schedulers only one is supposed to run, the lockfile mechanism prevents schedulers that have not set/created the lockfile from running.

There are situations where this is not sufficient.

By overriding #lock and #unlock, one can customize how schedulers lock.

This example was provided by Eric Lindvall:

class ZookeptScheduler < Rufus::Scheduler

  def initialize(zookeeper, opts={})
    @zk = zookeeper
    super(opts)
  end

  def lock
    @zk_locker = @zk.exclusive_locker('scheduler')
    @zk_locker.lock # returns true if the lock was acquired, false else
  end

  def unlock
    @zk_locker.unlock
  end

  def confirm_lock
    return false if down?
    @zk_locker.assert!
  rescue ZK::Exceptions::LockAssertionFailedError => e
    # we've lost the lock, shutdown (and return false to at least prevent
    # this job from triggering)
    shutdown
    false
  end
end
This uses a zookeeper to make sure only one scheduler in a group of distributed schedulers runs.

The methods #lock and #unlock are overridden and #confirm_lock is provided, to make sure that the lock is still valid.

The #confirm_lock method is called right before a job triggers (if it is provided). The more generic callback #on_pre_trigger is called right after #confirm_lock.

:scheduler_lock and :trigger_lock

(introduced in rufus-scheduler 3.0.9).

Another way of providing #lock, #unlock and #confirm_lock to a rufus-scheduler is by using the :scheduler_lock and :trigger_lock options.

See :trigger_lock and :scheduler_lock.

The scheduler lock may be used to prevent a scheduler from starting, while a trigger lock prevents individual jobs from triggering (the scheduler goes on scheduling).

One has to be careful with what goes in #confirm_lock or in a trigger lock, as it gets called before each trigger.

Warning: you may think you're heading towards "high availability" by using a trigger lock and having lots of schedulers at hand. It may be so if you limit yourself to scheduling the same set of jobs at scheduler startup. But if you add schedules at runtime, they stay local to their scheduler. There is no magic that propagates the jobs to all the schedulers in your pack.

parsing cronlines and time strings

(Please note that fugit does the heavy-lifting parsing work for rufus-scheduler).

Rufus::Scheduler provides a class method .parse for parsing time durations and cron strings. It is what the scheduler itself uses when receiving schedules, and it can be called directly (no need to instantiate a Scheduler).

require 'rufus-scheduler'

Rufus::Scheduler.parse('1w2d')
  # => 777600.0
Rufus::Scheduler.parse('1.0w1.0d')
  # => 777600.0

Rufus::Scheduler.parse('Sun Nov 18 16:01:00 2012').strftime('%c')
  # => 'Sun Nov 18 16:01:00 2012'

Rufus::Scheduler.parse('Sun Nov 18 16:01:00 2012 Europe/Berlin').strftime('%c %z')
  # => 'Sun Nov 18 15:01:00 2012 +0000'

Rufus::Scheduler.parse(0.1)
  # => 0.1

Rufus::Scheduler.parse('* * * * *')
  # => #<Fugit::Cron:0x00007fb7a3045508
  #      @original="* * * * *", @cron_s=nil,
  #      @seconds=[0], @minutes=nil, @hours=nil, @monthdays=nil, @months=nil,
  #      @weekdays=nil, @zone=nil, @timezone=nil>

It returns a number when the input is a duration and a Fugit::Cron instance when the input is a cron string.

It will raise an ArgumentError if it can't parse the input.

Beyond .parse, there are also .parse_cron and .parse_duration, for finer granularity.
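To make the duration side of this parsing concrete, here is a minimal, dependency-free sketch (not fugit's implementation) that turns a string like '1w2d' into a number of seconds:

```ruby
# Minimal sketch of duration-string parsing (illustrative only, not
# fugit's code): "1w2d" style strings become a float count of seconds.
UNITS = { 'w' => 604_800, 'd' => 86_400, 'h' => 3_600, 'm' => 60, 's' => 1 }

def parse_duration(s)
  pairs = s.scan(/(\d+(?:\.\d+)?)([wdhms])/)
  raise ArgumentError, "cannot parse #{s.inspect}" if pairs.empty?
  pairs.sum { |qty, unit| qty.to_f * UNITS[unit] }
end

parse_duration('1w2d')  # => 777600.0
parse_duration('3h10m') # => 11400.0
```

Like the real .parse, it raises an ArgumentError when nothing in the input looks like a duration.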

There is an interesting helper method named .to_duration_hash:

require 'rufus-scheduler'

Rufus::Scheduler.to_duration_hash(60)
  # => { :m => 1 }
Rufus::Scheduler.to_duration_hash(62.127)
  # => { :m => 1, :s => 2, :ms => 127 }

Rufus::Scheduler.to_duration_hash(62.127, drop_seconds: true)
  # => { :m => 1 }

cronline notations specific to rufus-scheduler

first Monday, last Sunday et al

To schedule something at noon every first Monday of the month:

scheduler.cron('00 12 * * mon#1') do
  # ...
end

To schedule something at noon the last Sunday of every month:

scheduler.cron('00 12 * * sun#-1') do
  # ...
end
# OR
scheduler.cron('00 12 * * sun#L') do
  # ...
end

Such cronlines can be tested with scripts like:

require 'rufus-scheduler'
  # => 2013-10-26 07:07:08 +0900
Rufus::Scheduler.parse('* * * * mon#1').next_time.to_s
  # => 2013-11-04 00:00:00 +0900

L (last day of month)

L can be used in the "day" slot:

In this example, the cronline is supposed to trigger every last day of the month at noon:

require 'rufus-scheduler'
  # => 2013-10-26 07:22:09 +0900
Rufus::Scheduler.parse('00 12 L * *').next_time.to_s
  # => 2013-10-31 12:00:00 +0900

negative day (x days before the end of the month)

It's OK to pass negative values in the "day" slot:

scheduler.cron '0 0 -5 * *' do
  # do it at 00h00, 5 days before the end of the month...
end

Negative ranges (-10--5: 10 days before the end of the month to 5 days before the end of the month) are OK, but mixed positive / negative ranges will raise an ArgumentError.

Negative ranges with increments (-10---2/2) are accepted as well.

Descending day ranges are not accepted (10-8 or -8--10 for example).

a note about timezones

Cron schedules and at schedules support the specification of a timezone.

scheduler.cron '0 22 * * 1-5 America/Chicago' do
  # the job...
end '2013-12-12 14:00 Pacific/Samoa' do
  puts "it's tea time!"
end

# or even

Rufus::Scheduler.parse("2013-12-12 14:00 Pacific/Saipan")
  # => #<Rufus::Scheduler::ZoTime:0x007fb424abf4e8 @seconds=1386820800.0, @zone=#<TZInfo::DataTimezone: Pacific/Saipan>, @time=nil>

I get "zotime.rb:41:in `initialize': cannot determine timezone from nil"

For when you see an error like:

  in `initialize':
    cannot determine timezone from nil (etz:nil,tnz:"中国标准时间",tzid:nil)
    from rufus-scheduler/lib/rufus/scheduler/zotime.rb:198:in `new'
    from rufus-scheduler/lib/rufus/scheduler/zotime.rb:198:in `now'
    from rufus-scheduler/lib/rufus/scheduler.rb:561:in `start'

It may happen on Windows or on systems that give Ruby poor timezone hints. It can usually be solved by explicitly setting ENV['TZ'] before instantiating the scheduler:

ENV['TZ'] = 'Asia/Shanghai'

scheduler =
scheduler.every '2s' do
  puts "#{} Hello #{ENV['TZ']}!"
end

On Rails you might want to try with:

ENV['TZ'] = # Rails only

scheduler =
scheduler.every '2s' do
  puts "#{} Hello #{ENV['TZ']}!"
end

(Hat tip to Alexander in gh-230)

Rails sets its timezone under config/application.rb.

Rufus-Scheduler 3.3.3 detects the presence of Rails and uses its timezone setting (tested with Rails 4), so setting ENV['TZ'] should not be necessary.

The value can be determined thanks to

Use a "continent/city" identifier (for example "Asia/Shanghai"). Do not use an abbreviation (not "CST") and do not use a local time zone name (not "中国标准时间" nor "Eastern Standard Time" which, for instance, points to a time zone in America and to another one in Australia...).

If the error persists (and especially on Windows), try to add the tzinfo-data to your Gemfile, as in:

gem 'tzinfo-data'

or by manually requiring it before requiring rufus-scheduler (if you don't use Bundler):

require 'tzinfo/data'
require 'rufus-scheduler'

so Rails?

Yes, I know, all of the above is boring and you're only looking for a snippet to paste in your Ruby-on-Rails application to schedule...

Here is an example initializer:

# config/initializers/scheduler.rb

require 'rufus-scheduler'

# Let's use the rufus-scheduler singleton
s = Rufus::Scheduler.singleton

# Stupid recurrent task...
s.every '1m' do "hello, it's #{}"
end

And now you tell me that this is good, but you want to schedule stuff from your controller.


class ScheController < ApplicationController

  # GET /sche/
  def index

    job_id =
      Rufus::Scheduler.singleton.in '5s' do "time flies, it's now #{}"
      end

    render text: "scheduled job #{job_id}"
  end
end

The rufus-scheduler singleton is instantiated in the config/initializers/scheduler.rb file; it is then available throughout the webapp via Rufus::Scheduler.singleton.

Warning: this works well with single-process Ruby servers like Webrick and Thin. Using rufus-scheduler with Passenger or Unicorn requires a bit more knowledge and tuning, gently provided by a bit of googling and reading, see Faq above.

avoid scheduling when running the Ruby on Rails console

(Written in reply to gh-186)

If you don't want rufus-scheduler to trigger anything while running the Ruby on Rails console, running for tests/specs, or running from a Rake task, you can insert a conditional return statement before jobs are added to the scheduler instance:

# config/initializers/scheduler.rb

require 'rufus-scheduler'

return if defined?(Rails::Console) || Rails.env.test? || File.split($PROGRAM_NAME).last == 'rake'
  # do not schedule when Rails is run from its console, for a test/spec, or
  # from a Rake task

# return if $PROGRAM_NAME.include?('spring')
  # see

s = Rufus::Scheduler.singleton

s.every '1m' do "hello, it's #{}"
end

(Beware later versions of Rails, where Spring pre-runs the initializers. Running spring stop or disabling Spring might be necessary in some cases for changes to initializers to be taken into account.)

rails server -d

(Written in reply to )

There is the handy rails server -d that starts a development Rails as a daemon. The annoying thing is that the scheduler as seen above is started in the main process, which then gets forked and daemonized. The rufus-scheduler thread (like any other thread) gets lost, and no scheduling happens.

I avoid running -d in development mode and bother about daemonizing only for production deployment.

These are two well crafted articles on process daemonization, please read them:

If, anyway, you need something like rails server -d, why not try bundle exec unicorn -D instead? In my (limited) experience, it worked out of the box (well, had to add gem 'unicorn' to Gemfile first).

executor / reloader

You might benefit from wrapping your scheduled code in the executor or reloader. Read more here:


see getting help above.

Author: jmettraux
Source code:
License: MIT license



Hunter Krajcik


Cron_parser: A Cron Parser for Dart

A cron parser

Spits out next & previous cron dates starting from now or a specific date. Please notice that the resulting dates are in the timezone provided to Cron().parse(...).


A simple usage example:

import 'package:cron_parser/cron_parser.dart';
import 'package:timezone/timezone.dart';

main() {
  // by default next cron dates are starting from the current date
  var cronIterator = Cron().parse("0 * * * *", "Europe/London");
  TZDateTime nextDate =;
  // you can retrieve the current value by using the current method
  TZDateTime currentDate = cronIterator.current(); // same as nextDate
  TZDateTime afterNextDate =;

  cronIterator = Cron().parse("0 * * * *", "Europe/London");
  TZDateTime previousDate = cronIterator.previous();
  // you can retrieve the current value by using the current method
  TZDateTime currentDatePrevious = cronIterator.current(); // same as previousDate
  TZDateTime beforePreviousDate = cronIterator.previous();
}

Another example this time with a specific start date:

import 'package:cron_parser/cron_parser.dart';
import 'package:timezone/timezone.dart';

main() {
  TZDateTime startDate = TZDateTime(getLocation("Europe/London"), 2020, 4, 01);
  var cronIterator = Cron().parse("0 * * * *", "Europe/London", startDate);
  TZDateTime nextDate =; // 2020-04-01 01:00:00.000+0100
  TZDateTime afterNextDate =; // 2020-04-01 02:00:00.000+0100

  cronIterator = Cron().parse("0 * * * *", "Europe/London", startDate);
  TZDateTime previousDate = cronIterator.previous(); // 2020-03-31 23:00:00.000+0100
  TZDateTime beforePreviousDate = cronIterator.previous(); // 2020-03-31 22:00:00.000+0100
}
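For an hourly "0 * * * *" pattern, iterating next/previous amounts to snapping to hour boundaries from a start instant. A dependency-free sketch of that arithmetic, in Ruby for brevity and in UTC rather than Europe/London (illustrative; not the Dart package's code):

```ruby
# What "0 * * * *" iteration amounts to: snapping to the next or
# previous top of the hour (illustrative sketch, times in UTC).
def next_hour(t)
  Time.at((t.to_i / 3600 + 1) * 3600).utc
end

def previous_hour(t)
  # step back one second first, so an exact hour boundary
  # yields the hour before it rather than itself
  Time.at(((t.to_i - 1) / 3600) * 3600).utc
end

start = Time.utc(2020, 4, 1, 0, 0, 0)
next_hour(start)     # => 2020-04-01 01:00:00 UTC
previous_hour(start) # => 2020-03-31 23:00:00 UTC
```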


Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add cron_parser

With Flutter:

 $ flutter pub add cron_parser

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  cron_parser: ^0.5.0

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:cron_parser/cron_parser.dart';


import 'package:cron_parser/cron_parser.dart';
import 'package:timezone/standalone.dart';
import 'package:timezone/timezone.dart';

void main() {
  // by default next cron dates are starting from the current date
  var cronIterator = Cron().parse("0 * * * *", "Europe/London");
  TZDateTime nextDate =;
  TZDateTime afterNextDate =;

  // specify a start date to get cron dates after this date
  TZDateTime startDate = TZDateTime(getLocation("Europe/London"), 2020, 4, 01);
  cronIterator = Cron().parse("0 * * * *", "Europe/London", startDate);
  nextDate =; // 2020-04-01 01:00:00.000+0100
  var currentDateNext = cronIterator.current(); // 2020-04-01 01:00:00.000+0100
  afterNextDate =; // 2020-04-01 02:00:00.000+0100

  // by default previous cron dates are starting from the current date
  var cronIteratorPrevious = Cron().parse("0 * * * *", "Europe/London");
  TZDateTime previousDate = cronIteratorPrevious.previous();
  TZDateTime beforePreviousDate = cronIteratorPrevious.previous();

  // specify a start date to get cron dates before this date
  TZDateTime startDatePrevious =
      TZDateTime(getLocation("Europe/London"), 2020, 4, 01);
  cronIterator = Cron().parse("0 * * * *", "Europe/London", startDatePrevious);
  previousDate = cronIterator.previous(); // 2020-03-31 23:00:00.000+0100
  var currentDatePrevious =
      cronIterator.current(); // 2020-03-31 23:00:00.000+0100
  beforePreviousDate = cronIterator.previous(); // 2020-03-31 22:00:00.000+0100
}


Author: rbubke
Source Code: 
License: BSD-3-Clause license

#flutter #dart #cron 


Nigel Uys


You had one job, or more than one, which can be done in steps


Leprechaun is a tool where you can schedule your recurring tasks to be performed over and over.

In Leprechaun, tasks are recipes. Let's look at a simple recipe file, written using YAML syntax.

The file is located in the recipes directory, which can be specified in the configs.ini configuration file. For all possible settings, take a look here.

By definition there are 3 types of recipes: ones that are scheduled, ones that are hooked, and ones that use a cron pattern for scheduling jobs. They are similar regarding steps but differ a bit in their definition.

First we will talk about scheduled recipes and they are defined like this:

name: job1 // name of recipe
definition: schedule // definition of which type is recipe
schedule: // when to perform the recipe
    min: 0 // every min
    hour: 0 // every hour
    day: 0 // every day
steps: // steps are done from first to last
    - touch ./test.txt
    - echo "Is this working?" > ./test.txt
    - mv ./test.txt ./imwondering.txt

If we set something like this

schedule:
    min: 10
    hour: 2
    day: 2

The task will run every 2 days, 2 hours and 10 minutes; if we set day to 0, it will run every 2 hours and 10 minutes.
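In other words, the schedule fields add up to a single recurrence interval. A quick sketch of that arithmetic, in Ruby for brevity (illustrative, not Leprechaun's code):

```ruby
# Turn Leprechaun-style {day, hour, min} schedule fields into one
# recurrence interval in minutes (illustrative sketch only).
def interval_minutes(day: 0, hour: 0, min: 0)
  day * 24 * 60 + hour * 60 + min
end

interval_minutes(day: 2, hour: 2, min: 10) # => 3010 (every ~2d 2h 10m)
interval_minutes(hour: 2, min: 10)         # => 130  (every 2h 10m)
```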

name: job2 // name of recipe
definition: hook // definition of which type is recipe
id: 45DE2239F // id which we use to find recipe
steps:
    - echo "Hooked!" > ./hook.txt

A hooked recipe can be run by sending a request to {host}:{port}/hook?id={id_of_recipe}, on which the Leprechaun server is listening, for example localhost:11400/hook?id=45DE2239F.

Recipes that use cron pattern to schedule tasks are used like this:

name: job3 // name of recipe
definition: cron // definition of which type is recipe
pattern: "* * * * *"
steps: // steps are done from first to last
    - touch ./test.txt
    - echo "Is this working?" > ./test.txt
    - mv ./test.txt ./imwondering.txt

Steps also support variables, with the syntax $variable; these are environment variables, e.g. $LOGNAME. We can now rewrite our job file to look something like this:

name: job1 // name of recipe
definition: schedule
schedule:
    min: 0 // every min
    hour: 0 // every hour
    day: 0 // every day
steps: // steps are done from first to last
    - echo "Is this working?" > $LOGNAME

Usage is very straightforward: just start the client and it will run the recipes you defined previously.

Steps can also be defined as sync/async tasks, marked by ->. Keep in mind that the steps of a recipe are performed on a linear path, so a step that is not async can block the steps after it. Take this one as an example:

- -> ping
- echo "I will not wait for the task above to finish, it is async so I will start immediately"

But in this case the first task will block, and all the following steps will hang waiting for it to finish:

- ping
- -> echo "I need to wait for the step above to finish, then I can do my stuff"

Step Pipe

Output from one step can be passed to input of next step:

name: job1 // name of recipe
definition: schedule
schedule:
    min: 0 // every min
    hour: 0 // every hour
    day: 0 // every day
steps: // steps are done from first to last
    - echo "Pipe this to next step" }>
    - cat > piped.txt

As you can see, the first step uses the }> syntax at the end, which says that this command's output will be passed to the next command's input. You can chain like this as much as you want.
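The }> chaining amounts to feeding one command's stdout into the next command's stdin, the shell-pipe way. A rough equivalent, in Ruby for brevity (illustrative, not Leprechaun's code):

```ruby
require 'open3'

# Chain shell commands so each step's stdout becomes the next step's
# stdin — a rough equivalent of Leprechaun's "}>" piping (sketch only).
def run_piped(commands)
  commands.reduce(nil) do |input, cmd|
    out, _status = Open3.capture2(cmd, stdin_data: input.to_s)
    out
  end
end

run_piped(['echo "Pipe this to next step"', 'cat'])
  # => "Pipe this to next step\n"
```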

Step Failure

Since steps are executed linearly, workers don't care if some of the commands fail; they continue with execution (though you get notifications if you have set up those configurations). If you want workers to stop executing the next steps when a command fails, you can specify it with ! as in this example:

name: job1 // name of recipe
definition: schedule
schedule:
    min: 0 // every min
    hour: 0 // every hour
    day: 0 // every day
steps: // steps are done from first to last
    - ! echo "Pipe this to next step" }>
    - cat > piped.txt

If the first step fails, the recipe fails and the remaining steps won't be executed.

Remote step execution

Steps are handled by your local machine when using the regular syntax. If you need a specific step to be executed by some remote machine, you can specify that in the step, as in the example below. The syntax is rmt:some_host; Leprechaun will try to communicate with the remote service configured on the provided host and run the command there.

name: job1 // name of recipe
definition: schedule
schedule:
    min: 0 // every min
    hour: 0 // every hour
    day: 0 // every day
steps: // steps are done from first to last
    - rmt:some_host echo "Pipe this to next step"

As with a regular step, a remote step can also pipe its output to the next step, so something like this is possible too:

steps: // steps are done from first to last
    - rmt:some_host echo "Pipe this to next step" }>
    - rmt:some_other_host grep -a "Pipe" }>
    - cat > stored.txt


Go to the leprechaun directory and run make install; you will need sudo privileges for this. This will install the scheduler, cron, and webhook services.

To install remote service run make install-remote-service, this will create leprechaunrmt binary.


Go to leprechaun directory and run make build. This will build scheduler, cron, and webhook services.

To build remote service run make build-remote-service, this will create leprechaunrmt binary.

Starting/Stopping services

To start leprechaun, simply run it in the background like this: leprechaun &

For more available commands run leprechaun --help


For cli tools take a look here


To run tests with coverage, run make test. To run tests and generate reports, run make test-with-report; files will be generated in the coverprofile dir. To test a specific package, run make test-package package=[name].

Author: Kilgaloon
Source Code: 
License: MIT License

#go #golang #cron #scheduling 


Nigel Uys


Gron: Gron, Cron Jobs in Go


Gron provides a clear syntax for writing and deploying cron jobs.


  • Minimalist APIs for scheduling jobs.
  • Thread safety.
  • Customizable Job Type.
  • Customizable Schedule.


$ go get


Create schedule.go

package main

import (
    "fmt"
    "time"


func main() {
    c := gron.New()
    c.AddFunc(gron.Every(1*time.Hour), func() {
        fmt.Println("runs every hour.")
    c.Start()
}

Schedule Parameters

All scheduling is done in the machine's local time zone (as provided by the Go time package).

Set up a basic periodic schedule with gron.Every().


Day and Week units are also supported by importing gron/xtime:

import ""

gron.Every(1 * xtime.Day)
gron.Every(1 * xtime.Week)

Schedule runs at a specific time of day with .At("hh:mm"):

gron.Every(30 * xtime.Day).At("00:00")
gron.Every(1 * xtime.Week).At("23:59")

Custom Job Type

You may define custom job types by implementing gron.Job interface: Run().

For example:

type Reminder struct {
    Msg string

func (r Reminder) Run() {

After the job is defined, instantiate it and schedule it to run in gron.

c := gron.New()
r := Reminder{ "Feed the baby!" }
c.Add(gron.Every(8*time.Hour), r)

Custom Job Func

You may register Funcs to be executed on a given schedule. Gron will run them in their own goroutines, asynchronously.

c := gron.New()
c.AddFunc(gron.Every(1*time.Second), func() {
    fmt.Println("runs every second")

Custom Schedule

Schedule is the interface that wraps the basic Next method: Next(p time.Duration) time.Time

In gron, the interface value Schedule has the following concrete types:

  • periodicSchedule: adds a time instant t to the underlying period p.
  • atSchedule: recurs every period p, at the given time components (hh:mm).

For more info, checkout schedule.go.

Full Example

package main

import (
    "fmt"


type printJob struct{ Msg string }

func (p printJob) Run() {

func main() {

    var (
        // schedules
        daily   = gron.Every(1 * xtime.Day)
        weekly  = gron.Every(1 * xtime.Week)
        monthly = gron.Every(30 * xtime.Day)
        yearly  = gron.Every(365 * xtime.Day)

        // contrived jobs
        purgeTask = func() { fmt.Println("purge aged records") }
        printFoo  = printJob{"Foo"}
        printBar  = printJob{"Bar"}

    c := gron.New()
    c.Start()

    c.Add(daily.At("12:30"), printFoo)
    c.AddFunc(weekly, func() { fmt.Println("Every week") })

    // Jobs may also be added to a running Gron
    c.Add(monthly, printBar)
    c.AddFunc(yearly, purgeTask)

    // Stop Gron (running jobs are not halted).
    c.Stop()
}

Author: Roylee0704
Source Code: 
License: MIT License

#go #golang #scheduling #cron


Nigel Uys


Go-quartz: Minimalist and Zero-dependency Scheduling Library for Go


A minimalistic and zero-dependency scheduling library for Go.


Inspired by the Quartz Java scheduler.

Library building blocks

Job interface. Any type that implements it can be scheduled.

type Job interface {
    Execute()
    Description() string
    Key() int
}

Implemented Jobs

  • ShellJob
  • CurlJob

Scheduler interface

type Scheduler interface {
    // Start starts the scheduler.
    Start()
    // IsStarted determines whether the scheduler has been started.
    IsStarted() bool
    // ScheduleJob schedules a job using a specified trigger.
    ScheduleJob(job Job, trigger Trigger) error
    // GetJobKeys returns the keys of all of the scheduled jobs.
    GetJobKeys() []int
    // GetScheduledJob returns the scheduled job with the specified key.
    GetScheduledJob(key int) (*ScheduledJob, error)
    // DeleteJob removes the job with the specified key from the Scheduler's execution queue.
    DeleteJob(key int) error
    // Clear removes all of the scheduled jobs.
    Clear()
    // Stop shuts down the scheduler.
    Stop()
}

Implemented Schedulers

  • StdScheduler

Trigger interface

type Trigger interface {
    NextFireTime(prev int64) (int64, error)
    Description() string
}

Implemented Triggers

  • CronTrigger
  • SimpleTrigger
  • RunOnceTrigger

Cron expression format

Field Name    | Mandatory | Allowed Values   | Allowed Special Characters
Seconds       | YES       | 0-59             | , - * /
Minutes       | YES       | 0-59             | , - * /
Hours         | YES       | 0-23             | , - * /
Day of month  | YES       | 1-31             | , - * ? /
Month         | YES       | 1-12 or JAN-DEC  | , - * /
Day of week   | YES       | 1-7 or SUN-SAT   | , - * ? /


sched := quartz.NewStdScheduler()
cronTrigger, _ := quartz.NewCronTrigger("1/5 * * * * *")
shellJob := quartz.NewShellJob("ls -la")
curlJob, _ := quartz.NewCurlJob(http.MethodGet, "", "", nil)
sched.ScheduleJob(shellJob, cronTrigger)
sched.ScheduleJob(curlJob, quartz.NewSimpleTrigger(time.Second*7))

More code samples can be found in the examples directory.

Author: Reugn
Source Code: 
License: MIT License

#go #golang #cron #job 


Nigel Uys


Cheek: Crontab-like ScHeduler for Effective Execution Of TasKs


cheek, of course, stands for Crontab-like scHeduler for Effective Execution of tasKs. cheek is a KISS approach to crontab-like job scheduling. It was born out of a (/my?) frustration about the big gap between a lightweight crontab and full-fledged solutions like Airflow.

cheek aims to be a KISS approach to job scheduling; the focus is on simplicity, not necessarily on doing this in the most robust way possible.

Getting started

Fetch the latest version for your system below.

darwin-arm64 | darwin-amd64 | linux-386 | linux-arm64 | linux-amd64

You can (for example) fetch it like below, make it executable and run it. Optionally, put cheek on your PATH.

curl -o cheek
chmod +x cheek

Create a schedule specification using the below YAML structure:

tz_location: Europe/Brussels
date_job: # job names here are illustrative
    command: date
    cron: "* * * * *"
    command: # commands with arguments are passed as an array
      - echo
      - bar
      - foo
    cron: "* * * * *"
    command: this fails
    cron: "* * * * *"
    retries: 3

If your command requires arguments, please make sure to pass them as an array like in foo_job.

Note that you can set tz_location if the system time of where you run your service is not to your liking.


The core of cheek consists of a scheduler that uses a schedule, specified in a YAML file, to trigger jobs when they are due.

You can launch the scheduler via:

cheek run ./path/to/my-schedule.yaml

Check out cheek run --help for configuration options.


cheek ships with a terminal ui you can launch via:

cheek ui

The UI gives a quick overview of jobs that have run, jobs that errored, and their logs. It does this by fetching the state of the scheduler and by reading the logs that get written (per job) to $HOME/.cheek/. Note that you can ignore these logs; the output of jobs will always go to stdout as well.


All configuration options are available by checking out cheek --help or the help of its subcommands (e.g. cheek run --help).

Configuration can be passed as flags to the cheek CLI directly. All configuration flags are also possible to set via environment variables. The following environment variables are available, they will override the default and/or set value of their similarly named CLI flags (without the prefix): CHEEK_PORT, CHEEK_SUPPRESSLOGS, CHEEK_LOGLEVEL, CHEEK_PRETTY, CHEEK_HOMEDIR, CHEEK_NOTELEMETRY.


There are two types of events you can hook into: on_success and on_error. Both events materialize after an (attempted) job run. Two types of actions can be taken in response: notify_webhook and trigger_job. See the example below. These event actions can be defined at the job level or at the schedule level; in the latter case they apply to all jobs.

failing_job: # job names here are illustrative
    command: this fails # this will create an on_error event
    cron: "* * * * *"
    command: echo grind # this will create an on_success event
    cron: "* * * * *"

Webhooks are a generic way to push notifications to a plethora of tools. You can use them, for instance via Zapier, to push messages to a Slack channel.


Check out the Dockerfile for an example on how to set up cheek within the context of a Docker image.

Available versions

If you want to pin your setup to a specific version of cheek you can use the following template to fetch your cheek binary:


  • os is one of linux, darwin
  • arch is one of amd64, arm64, 386
  • tag is one of the available tags
  • shortsha is a 7-char SHA and most commits on main will be available

Usage stats

By default cheek reports minimal usage stats. Each time a job is triggered, a simple request that (only) contains your cheek version is sent to our servers. Check out the exact implementation here. Note that you can always opt out of this by passing the -no-telemetry or -n flag.


Thanks goes to:

  • gronx: for allowing me not to worry about CRON strings.
  • Charm: for their bubble-icious TUI libraries.
  • Sam & Frederik: for valuable code reviews / feedback.

Author: Datarootsio
Source Code: 
License: MIT License

#go #golang #cron #job 


Misael Stark


How to Integrate Cron Jobs Inside A Docker

I recently had to create a Dockerfile for a small application and since we had to run it periodically, I chose to integrate it with the native cron available on Linux distributions.

Unfortunately, it took a little longer than I initially expected, but I finally managed to get it working correctly with the help of an old Stack Overflow response.

#docker #cron


Desmond Gerber


Agenda: MongoDB-backed Job Scheduling

A lightweight job scheduling library for Node.js.

Agenda offers

  • Minimal overhead. Agenda aims to keep its code base small.
  • Mongo backed persistence layer.
  • Promises based API.
  • Scheduling with configurable priority, concurrency, repeating and persistence of job results.
  • Scheduling via cron or human readable syntax.
  • Event backed job queue that you can hook into.
  • Agenda-rest: optional standalone REST API.
  • Inversify-agenda - Some utilities for the development of agenda workers with Inversify.
  • Agendash: optional standalone web-interface.

Feature Comparison

Since there are a few job queue solutions, here is a table comparing them to help you pick the one that best suits your needs.

Agenda is great if you need a MongoDB job scheduler, but try Bree if you need something simpler (built by a previous maintainer).

The chart compares Bull, Bee and Agenda on: delayed jobs, global events, rate limiting, sandboxed workers, repeatable jobs, and atomic ops; Bull is optimized for jobs and messages, Bee for messages, and Agenda for jobs.

Kudos for making the comparison chart goes to Bull maintainers.



In order to support the new MongoDB 5.0 and its Node.js driver/package, the next release (5.x.x) of Agenda will be a major one. The required Node version will become >=12, and the mongodb dependency version >=3.2.

Install via NPM

npm install agenda

You will also need a working Mongo database (v3) to point it to.

CJS / Module Imports

For regular JavaScript code, just use the default entrypoint:

const Agenda = require("agenda");

For Typescript, Webpack or other module imports, use agenda/es entrypoint: e.g.

import { Agenda } from "agenda/es";

NOTE: If you're migrating from @types/agenda you also should change imports to agenda/es. Instead of import Agenda from 'agenda' use import Agenda from 'agenda/es'.

Example Usage

const mongoConnectionString = "mongodb://localhost:27017/agenda-test";

const agenda = new Agenda({ db: { address: mongoConnectionString } });

// Or override the default collection name:
// const agenda = new Agenda({db: {address: mongoConnectionString, collection: 'jobCollectionName'}});

// or pass additional connection options:
// const agenda = new Agenda({db: {address: mongoConnectionString, collection: 'jobCollectionName', options: {ssl: true}}});

// or pass in an existing mongodb-native MongoClient instance
// const agenda = new Agenda({mongo: myMongoClient});

agenda.define("delete old users", async (job) => {
  await User.remove({ lastLogIn: { $lt: twoDaysAgo } });

(async function () {
  // IIFE to give access to async/await
  await agenda.start();

  await agenda.every("3 minutes", "delete old users");

  // Alternatively, you could also do:
  await agenda.every("*/3 * * * *", "delete old users");

  "send email report",
  { priority: "high", concurrency: 10 },
  async (job) => {
    const { to } =;
    await emailClient.send({
      to,
      from: "",
      subject: "Email Report",
      body: "...",

(async function () {
  await agenda.start();
  await agenda.schedule("in 20 minutes", "send email report", {
    to: "",

(async function () {
  const weeklyReport = agenda.create("send email report", {
    to: "",
  await agenda.start();
  await weeklyReport.repeatEvery("1 week").save();

Full documentation

Agenda's basic control structure is an instance of an agenda. Agenda instances are mapped to a database collection and load jobs from it.

Configuring an agenda

All configuration methods are chainable, meaning you can do something like:

const agenda = new Agenda()
  .database("localhost:27017/agenda-test")
  .processEvery("3 minutes");

Agenda uses Human Interval for specifying the intervals. It supports the following units:

seconds, minutes, hours, days, weeks, months (assumes 30 days), years (assumes 365 days)

More sophisticated examples

agenda.processEvery("one minute");
agenda.processEvery("1.5 minutes");
agenda.processEvery("3 days and 4 hours");
agenda.processEvery("3 days, 4 hours and 36 seconds");
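
Under the hood these phrases are handled by the human-interval package. The idea can be sketched as follows (illustrative only; parseInterval, UNIT_MS and WORDS are made-up names, not human-interval's API): map each "number unit" pair to milliseconds and sum them.

```javascript
const UNIT_MS = {
  second: 1000,
  minute: 60 * 1000,
  hour: 60 * 60 * 1000,
  day: 24 * 60 * 60 * 1000,
  week: 7 * 24 * 60 * 60 * 1000,
  month: 30 * 24 * 60 * 60 * 1000, // "months" assumes 30 days
  year: 365 * 24 * 60 * 60 * 1000, // "years" assumes 365 days
};

const WORDS = { one: 1, two: 2, three: 3, four: 4, five: 5 };

function parseInterval(phrase) {
  const re = /([\d.]+|one|two|three|four|five)\s+(second|minute|hour|day|week|month|year)s?/g;
  let total = 0;
  let m;
  while ((m = re.exec(phrase)) !== null) {
    const n = m[1] in WORDS ? WORDS[m[1]] : parseFloat(m[1]);
    total += n * UNIT_MS[m[2]];
  }
  return total;
}

console.log(parseInterval("one minute"));         // 60000
console.log(parseInterval("1.5 minutes"));        // 90000
console.log(parseInterval("3 days and 4 hours")); // 273600000
```

The real package handles more spellings and edge cases; this sketch only covers the patterns shown above.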

database(url, [collectionName])

Specifies the database at the given url. If no collection name is given, agendaJobs is used.

agenda.database("localhost:27017/agenda-test", "agendaJobs");

You can also specify it during instantiation.

const agenda = new Agenda({
  db: { address: "localhost:27017/agenda-test", collection: "agendaJobs" },
});

Agenda will emit a ready event (see Agenda Events) when properly connected to the database. It is safe to call agenda.start() without waiting for this event, as this is handled internally. If you're using the db options, or call database, then you may still need to listen for ready before saving jobs.


mongo(dbInstance)

Use an existing mongodb-native MongoClient/Db instance. This can help consolidate connections to a database. You can instead use .database to have agenda handle connecting for you.

You can also specify it during instantiation:

const agenda = new Agenda({ mongo: mongoClientInstance.db("agenda-test") });

Note that MongoClient.connect() returns a mongoClientInstance since node-mongodb-native 3.0.0, while it used to return a dbInstance that could then be directly passed to agenda.


name(name)

Sets the lastModifiedBy field to name in the jobs collection. Useful if you have multiple job processors (agendas) and want to see which job queue last ran the job. + "-" +;

You can also specify it during instantiation

const agenda = new Agenda({ name: "test queue" });


processEvery(interval)

Takes a string interval which can be either a traditional JavaScript number (of milliseconds) or a string such as 3 minutes.

Specifies the frequency at which agenda will query the database looking for jobs that need to be processed. Agenda internally uses setTimeout to guarantee that jobs run at (close to ~3ms) the right time.

Decreasing the frequency will result in fewer database queries, but more jobs being stored in memory.

Also worth noting is that if the job queue is shutdown, any jobs stored in memory that haven't run will still be locked, meaning that you may have to wait for the lock to expire. By default it is '5 seconds'.

agenda.processEvery("1 minute");

You can also specify it during instantiation

const agenda = new Agenda({ processEvery: "30 seconds" });


maxConcurrency(number)

Takes a number which specifies the max number of jobs that can be running at any given moment. By default it is 20.


You can also specify it during instantiation

const agenda = new Agenda({ maxConcurrency: 20 });


defaultConcurrency(number)

Takes a number which specifies the default number of a specific job that can be running at any given moment. By default it is 5.


You can also specify it during instantiation

const agenda = new Agenda({ defaultConcurrency: 5 });


lockLimit(number)

Takes a number which specifies the max number of jobs that can be locked at any given moment. By default it is 0 for no max.


You can also specify it during instantiation

const agenda = new Agenda({ lockLimit: 0 });


defaultLockLimit(number)

Takes a number which specifies the default number of a specific job that can be locked at any given moment. By default it is 0 for no max.


You can also specify it during instantiation

const agenda = new Agenda({ defaultLockLimit: 0 });


defaultLockLifetime(number)

Takes a number which specifies the default lock lifetime in milliseconds. By default it is 10 minutes. This can be overridden by specifying the lockLifetime option to a defined job.

A job will unlock if it is finished (ie. the returned Promise resolves/rejects or done is specified in the params and done() is called) before the lockLifetime. The lock is useful if the job crashes or times out.


You can also specify it during instantiation

const agenda = new Agenda({ defaultLockLifetime: 10000 });


sort(query)

Takes a query which specifies the sort query to be used for finding and locking the next job.

By default it is { nextRunAt: 1, priority: -1 }, which obeys a first in first out approach, with respect to priority.

Agenda Events

An instance of an agenda will emit the following events:

  • ready - called when Agenda mongo connection is successfully opened and indices created. If you're passing agenda an existing connection, you shouldn't need to listen for this, as agenda.start() will not resolve until indices have been created. If you're using the db options, or call database, then you may still need to listen for the ready event before saving jobs. agenda.start() will still wait for the connection to be opened.
  • error - called when Agenda mongo connection process has thrown an error
await agenda.start();

Defining Job Processors

Before you can use a job, you must define its processing behavior.

define(jobName, [options], handler)

Defines a job with the name of jobName. When a job of jobName gets run, it will be passed to handler(job, done). To maintain asynchronous behavior, you may either provide a Promise-returning function in handler or provide done as a second parameter to handler. If done is specified in the function signature, you must call done() when you are processing the job. If your function is synchronous or returns a Promise, you may omit done from the signature.

options is an optional argument which can overwrite the defaults. It can take the following:

  • concurrency: number maximum number of that job that can be running at once (per instance of agenda)
  • lockLimit: number maximum number of that job that can be locked at once (per instance of agenda)
  • lockLifetime: number interval in ms of how long the job stays locked for (see multiple job processors for more info). A job will automatically unlock once a returned promise resolves/rejects (or if done is specified in the signature and done() is called).
  • priority: (lowest|low|normal|high|highest|number) specifies the priority of the job. Higher priority jobs will run first. See the priority mapping below
  • shouldSaveResult: boolean flag that specifies whether the result of the job should also be stored in the database. Defaults to false

Priority mapping:

{
  highest: 20,
  high: 10,
  normal: 0,
  low: -10,
  lowest: -20
}
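
As a sketch, resolving a string-or-number priority against this mapping might look like the following (normalizePriority is a hypothetical name used for illustration; Agenda performs this lookup internally):

```javascript
// Sketch of normalizing a string-or-number priority with the mapping above.
const PRIORITY = { highest: 20, high: 10, normal: 0, low: -10, lowest: -20 };

function normalizePriority(priority) {
  if (typeof priority === "number") return priority; // numbers pass through
  return priority in PRIORITY ? PRIORITY[priority] : PRIORITY.normal;
}

console.log(normalizePriority("high")); // 10
console.log(normalizePriority(7));      // 7
console.log(normalizePriority("???"));  // 0 - unknown strings fall back to normal
```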

Async Job:

agenda.define("some long running job", async (job) => {
  const data = await doSomelengthyTask();
  await formatThatData(data);
  await sendThatData(data);
});

Async Job (using done):

agenda.define("some long running job", (job, done) => {
  doSomelengthyTask((data) => {
    formatThatData(data);
    sendThatData(data);
    done();
  });
});

Sync Job:

agenda.define("say hello", (job) => {
  console.log("Hello!");
});

define() acts like an assignment: if define(jobName, ...) is called multiple times (e.g. every time your script starts), the definition in the last call will overwrite the previous one. Thus, if you define the jobName only once in your code, it's safe for that call to execute multiple times.
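
The assignment semantics can be sketched with a plain Map (illustrative only; Agenda keeps its definitions in an internal structure, not this exact code):

```javascript
// define() behaves like assignment into a map of processors: registering
// the same name twice keeps only the last handler.
const definitions = new Map();

function define(jobName, handler) {
  definitions.set(jobName, handler); // overwrites any earlier definition
}

define("say hello", () => "hello v1");
define("say hello", () => "hello v2");

console.log(definitions.get("say hello")()); // hello v2 - the last call wins
```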

Creating Jobs

every(interval, name, [data], [options])

Runs job name at the given interval. Optionally, data and options can be passed in. Every creates a job of type single, which means that it will only create one job in the database, even if that line is run multiple times. This lets you put it in a file that may get run multiple times, such as webserver.js which may reboot from time to time.

interval can be a human-readable format String, a cron format String, or a Number.

data is an optional argument that will be passed to the processing function under

options is an optional argument that will be passed to job.repeatEvery. In order to use this argument, data must also be specified.

Returns the job.

agenda.define("printAnalyticsReport", async (job) => {
  const users = await User.doSomethingReallyIntensive();
  console.log("I print a report!");
});

agenda.every("15 minutes", "printAnalyticsReport");

Optionally, name can be an array of job names, which is convenient for scheduling different jobs for the same interval.

agenda.every("15 minutes", [
  "printAnalyticsReport",
  "sendNotifications",
  "updateUserRecords",
]);

In this case, every returns an array of jobs.

schedule(when, name, [data])

Schedules a job to run name once at a given time. when can be a Date or a String such as tomorrow at 5pm.

data is an optional argument that will be passed to the processing function under

Returns the job.

agenda.schedule("tomorrow at noon", "printAnalyticsReport", { userCount: 100 });

Optionally, name can be an array of job names, similar to the every method.

agenda.schedule("tomorrow at noon", [
  "printAnalyticsReport",
  "sendNotifications",
  "updateUserRecords",
]);

In this case, schedule returns an array of jobs.

now(name, [data])

Schedules a job to run name once immediately.

data is an optional argument that will be passed to the processing function under

Returns the job."do the hokey pokey");

create(jobName, data)

Returns an instance of a jobName with data. This does NOT save the job in the database. See below to learn how to manually work with jobs.

const job = agenda.create("printAnalyticsReport", { userCount: 100 });
console.log("Job successfully saved");

Managing Jobs

jobs(mongodb-native query, mongodb-native sort, mongodb-native limit, mongodb-native skip)

Lets you query (then sort, limit and skip the result) all of the jobs in the agenda job's database. These are full mongodb-native find, sort, limit and skip commands. See mongodb-native's documentation for details.

const jobs = await
  { name: "printAnalyticsReport" },
  { data: -1 }
);

// Work with jobs (see below)

cancel(mongodb-native query)

Cancels any jobs matching the passed mongodb-native query, and removes them from the database. Returns a Promise resolving to the number of cancelled jobs, or rejecting on error.

const numRemoved = await agenda.cancel({ name: "printAnalyticsReport" });

This functionality can also be achieved by first retrieving all the jobs from the database using, looping through the resulting array and calling job.remove() on each. It is however preferable to use agenda.cancel() for this use case, as this ensures the operation is atomic.

disable(mongodb-native query)

Disables any jobs matching the passed mongodb-native query, preventing any matching jobs from being run by the Job Processor.

const numDisabled = await agenda.disable({ name: "pollExternalService" });

Similar to agenda.cancel(), this functionality can be achieved with a combination of and job.disable().

enable(mongodb-native query)

Enables any jobs matching the passed mongodb-native query, allowing any matching jobs to be run by the Job Processor.

const numEnabled = await agenda.enable({ name: "pollExternalService" });

Similar to agenda.cancel(), this functionality can be achieved with a combination of and job.enable().


purge()

Removes all jobs in the database without defined behaviors. Useful if you change a definition name and want to remove old jobs. Returns a Promise resolving to the number of removed jobs, or rejecting on error.

IMPORTANT: Do not run this before you finish defining all of your jobs. If you do, you will nuke your database of jobs.

const numRemoved = await agenda.purge();

Starting the job processor

To get agenda to start processing jobs from the database you must start it. This will schedule an interval (based on processEvery) to check for new jobs and run them. You can also stop the queue.


start()

Starts the job queue processing, checking processEvery time to see if there are new jobs. Must be called after processEvery, and before any job scheduling (e.g. every).


stop()

Stops the job queue processing. Unlocks currently running jobs.

This can be very useful for graceful shutdowns so that currently running/grabbed jobs are abandoned so that other job queues can grab them / they are unlocked should the job queue start again. Here is an example of how to do a graceful shutdown.

async function graceful() {
  await agenda.stop();
  process.exit(0);
}

process.on("SIGTERM", graceful);
process.on("SIGINT", graceful);


close(force)

Closes the database connection. You don't normally have to do this, but it might be useful for testing purposes.

Using force boolean you can force close connection.

Read more from Node.js MongoDB Driver API

await agenda.close({ force: true });

Multiple job processors

Sometimes you may want to have multiple node instances / machines process from the same queue. Agenda supports a locking mechanism to ensure that multiple queues don't process the same job.

You can configure the locking mechanism by specifying lockLifetime as an interval when defining the job.

agenda.define("someJob", { lockLifetime: 10000 }, (job, cb) => {
  // Do something in 10 seconds or less...
});

This will ensure that no other job processor (this one included) attempts to run the job again for the next 10 seconds. If you have a particularly long running job, you will want to specify a longer lockLifetime.

By default it is 10 minutes. Typically you shouldn't have a job that runs for 10 minutes, so this is really insurance should the job queue crash before the job is unlocked.

When a job is finished (i.e. the returned promise resolves/rejects or done is specified in the signature and done() is called), it will automatically unlock.

Manually working with a job

A job instance has many instance methods. All mutating methods must be followed by a call to await in order to persist the changes to the database.

repeatEvery(interval, [options])

Specifies an interval on which the job should repeat. The job runs at the time of definition as well as at the configured interval, that is, "run now and in intervals".

interval can be a human-readable format String, a cron format String, or a Number.

options is an optional argument containing:

options.timezone: should be a string as accepted by moment-timezone and is considered when using an interval in the cron string format.

options.skipImmediate: true | false (default) Setting this to true will skip the immediate run. The first run will occur only after the configured interval.

options.startDate: Date the first run will be on or after this date.

options.endDate: Date the job should not repeat after the endDate. The job can run on the end date itself, but not after it.

options.skipDays: human-readable string (e.g. '2 days'). After each run, the job will skip the duration of skipDays before running again.

job.repeatEvery("10 minutes");

job.repeatEvery("3 minutes", {
  skipImmediate: true,
});

job.repeatEvery("0 6 * * *", {
  timezone: "America/New_York",
});


repeatAt(time)

Specifies a time when the job should repeat. Possible values:

job.repeatAt("3:30pm");

schedule(time)

Specifies the next time at which the job should run.

job.schedule("tomorrow at 6pm");


priority(priority)

Specifies the priority weighting of the job. Can be a number or a string from the above priority table.

shouldSaveResult(boolean)

Specifies whether the result of the job should also be stored in the database. Defaults to false.


The data returned by the job will be available on the result attribute after the job succeeded and was fetched again from the database (e.g. via or through the success job event).

unique(properties, [options])

Ensures that only one instance of this job exists with the specified properties.

options is an optional argument which can overwrite the defaults. It can take the following:

  • insertOnly: boolean will prevent any properties from persisting if the job already exists. Defaults to false.
job.unique({ "data.type": "active", "data.userId": "123", nextRunAt: date });

IMPORTANT: To guarantee uniqueness as well as avoid high CPU usage by MongoDB make sure to create a unique index on the used fields, like name, data.type and data.userId for the example above.


fail(reason)

Sets job.attrs.failedAt to now, and sets job.attrs.failReason to reason.

Optionally, reason can be an error, in which case job.attrs.failReason will be set to error.message."insufficient disk space");
// or Error("insufficient disk space"));


run(callback)

Runs the given job and calls callback(err, job) upon completion. Normally you never need to call this manually., job) => {
  console.log("I don't know why you would need to do this...");
});


save()

Saves the job.attrs into the database. Returns a Promise resolving to a Job instance, or rejecting on error.

try {
  await;
  console.log("Successfully saved job to collection");
} catch (e) {
  console.error("Error saving job to collection");
}


remove()

Removes the job from the database. Returns a Promise resolving to the number of jobs removed, or rejecting on error.

try {
  await job.remove();
  console.log("Successfully removed job from collection");
} catch (e) {
  console.error("Error removing job from collection");
}


disable()

Disables the job. Upcoming runs won't execute.


enable()

Enables the job if it got disabled before. Upcoming runs will execute.


touch()

Resets the lock on the job. Useful to indicate that the job hasn't timed out when you have very long running jobs. The call returns a promise that resolves when the job's lock has been renewed.

agenda.define("super long job", async (job) => {
  await doSomeLongTask();
  await job.touch();
  await doAnotherLongTask();
  await job.touch();
  await finishOurLongTasks();
});

Job Queue Events

An instance of an agenda will emit the following events:

  • start - called just before a job starts
  • start:job name - called just before the specified job starts
agenda.on("start", (job) => {
  console.log("Job %s starting",;
});
  • complete - called when a job finishes, regardless of if it succeeds or fails
  • complete:job name - called when a job finishes, regardless of if it succeeds or fails
agenda.on("complete", (job) => {
  console.log(`Job ${} finished`);
});
  • success - called when a job finishes successfully
  • success:job name - called when a job finishes successfully
agenda.on("success:send email", (job) => {
  console.log(`Sent Email Successfully to ${}`);
});
  • fail - called when a job throws an error
  • fail:job name - called when a job throws an error
agenda.on("fail:send email", (err, job) => {
  console.log(`Job failed with error: ${err.message}`);
});

Frequently Asked Questions

What is the order in which jobs run?

Jobs are run with priority in a first in first out order (so they will be run in the order they were scheduled AND with respect to highest priority).

For example, suppose two "send-email" jobs are queued, the first at 3:00 PM and the second at 3:05 PM. With equal priority values, the first job runs first when processing starts at 3:10 PM. However, if the first job has a priority of 5 and the second a priority of 10, the second runs first (priority takes precedence).

The default MongoDB sort object is { nextRunAt: 1, priority: -1 } and can be changed through the option sort when configuring Agenda.

What is the difference between lockLimit and maxConcurrency?

Agenda will lock jobs one by one, setting the lockedAt property in MongoDB, and creating an instance of the Job class which it caches in the _lockedJobs array. This defaults to having no limit, but can be managed using lockLimit. If all jobs need to run before agenda's next interval (set via agenda.processEvery), then agenda will attempt to lock all of them.

Agenda will also pull jobs from _lockedJobs and into _runningJobs. These jobs are actively being worked on by user code, and this is limited by maxConcurrency (defaults to 20).

If you have multiple instances of agenda processing the same job definition with a fast repeat time you may find they get unevenly loaded. This is because they will compete to lock as many jobs as possible, even if they don't have enough concurrency to process them. This can be resolved by tweaking the maxConcurrency and lockLimit properties.
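
The relationship between the two buckets can be sketched like this (promote and the arrays are illustrative names modeled on the text above; this is not Agenda's internal code):

```javascript
// Locked jobs wait in _lockedJobs; at most maxConcurrency of them
// are promoted into _runningJobs at any given moment.
function promote(lockedJobs, runningJobs, maxConcurrency) {
  while (runningJobs.length < maxConcurrency && lockedJobs.length > 0) {
    runningJobs.push(lockedJobs.shift());
  }
}

const _lockedJobs = ["job-a", "job-b", "job-c", "job-d"]; // locked in MongoDB (lockedAt set)
const _runningJobs = [];
promote(_lockedJobs, _runningJobs, 2); // maxConcurrency = 2

console.log(_runningJobs); // [ 'job-a', 'job-b' ]
console.log(_lockedJobs);  // [ 'job-c', 'job-d' ] - locked but waiting for a slot
```

Lowering lockLimit keeps a busy instance from hoarding jobs it cannot run yet, which is how the uneven-loading problem above is usually addressed.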

Sample Project Structure?

Agenda doesn't have a preferred project structure and leaves it to the user to choose how they would like to use it. That being said, you can check out the example project structure below.

Can I Donate?

Thanks! I'm flattered, but it's really not necessary. If you really want to, you can find my gittip here.

Web Interface?

Agenda itself does not have a web interface built in, but we do offer a stand-alone web interface, Agendash:

Agendash interface

Mongo vs Redis

The decision to use Mongo instead of Redis is intentional. Redis is often used for non-essential data (such as sessions) and without configuration doesn't guarantee the same level of persistence as Mongo (should the server need to be restarted/crash).

Agenda decides to focus on persistence without requiring special configuration of Redis (thereby degrading the performance of the Redis server on non-critical data, such as sessions).

Ultimately if enough people want a Redis driver instead of Mongo, I will write one. (Please open an issue requesting it). For now, Agenda decided to focus on guaranteed persistence.

Spawning / forking processes

Ultimately Agenda can work from a single job queue across multiple machines, node processes, or forks. If you are interested in having more than one worker, Bars3s has written up a fantastic example of how one might do it:

const cluster = require("cluster");
const os = require("os");

const httpServer = require("./app/http-server");
const jobWorker = require("./app/job-worker");

const jobWorkers = [];
const webWorkers = [];

if (cluster.isMaster) {
  const cpuCount = os.cpus().length;
  // Create a worker for each CPU
  for (let i = 0; i < cpuCount; i += 1) {
    addJobWorker();
    addWebWorker();
  }

  cluster.on("exit", (worker, code, signal) => {
    if (jobWorkers.indexOf( !== -1) {
      console.log(
        `job worker ${} exited (signal: ${signal}). Trying to respawn...`
      );
      removeJobWorker(;
      addJobWorker();
    }

    if (webWorkers.indexOf( !== -1) {
      console.log(
        `http worker ${} exited (signal: ${signal}). Trying to respawn...`
      );
      removeWebWorker(;
      addWebWorker();
    }
  });
} else {
  if (process.env.web) {
    console.log(`start http server: ${}`);
    // Initialize the http server here
    httpServer.start();
  }

  if (process.env.job) {
    console.log(`start job server: ${}`);
    // Initialize the Agenda here
    jobWorker.start();
  }
}

function addWebWorker() {
  webWorkers.push(cluster.fork({ web: 1 }).id);
}

function addJobWorker() {
  jobWorkers.push(cluster.fork({ job: 1 }).id);
}

function removeWebWorker(id) {
  webWorkers.splice(webWorkers.indexOf(id), 1);
}

function removeJobWorker(id) {
  jobWorkers.splice(jobWorkers.indexOf(id), 1);
}

Recovering lost Mongo connections ("auto_reconnect")

Agenda is configured by default to automatically reconnect indefinitely, emitting an error event when no connection is available on each process tick, allowing you to restore the Mongo instance without having to restart the application.

However, if you are using an existing Mongo client you'll need to configure the reconnectTries and reconnectInterval connection settings manually, otherwise you'll find that Agenda will throw an error with the message "MongoDB connection is not recoverable, application restart required" if the connection cannot be recovered within 30 seconds.

Example Project Structure

Agenda will only process jobs that it has definitions for. This allows you to selectively choose which jobs a given agenda will process.

Consider the following project structure, which allows us to share models with the rest of our code base, and specify which jobs a worker processes, if any at all.

- server.js
- worker.js
- lib/
  - agenda.js
  - controllers/
    - user-controller.js
  - jobs/
    - email.js
    - video-processing.js
    - image-processing.js
  - models/
    - user-model.js
    - blog-post.model.js

Sample job processor (e.g. jobs/email.js)

let email = require("some-email-lib"),
  User = require("../models/user-model.js");

module.exports = function (agenda) {
  agenda.define("registration email", async (job) => {
    const user = await User.get(;
    await email(
      "Thanks for registering",
      "Thanks for registering " +
    );
  });

  agenda.define("reset password", async (job) => {
    // Etc
  });

  // More email related jobs
};


lib/agenda.js

const Agenda = require("agenda");

const connectionOpts = {
  db: { address: "localhost:27017/agenda-test", collection: "agendaJobs" },
};

const agenda = new Agenda(connectionOpts);

const jobTypes = process.env.JOB_TYPES ? process.env.JOB_TYPES.split(",") : [];

jobTypes.forEach((type) => {
  require("./jobs/" + type)(agenda);
});

if (jobTypes.length) {
  agenda.start(); // Returns a promise, which should be handled appropriately
}

module.exports = agenda;


lib/controllers/user-controller.js

let app = express(),
  User = require("../models/user-model"),
  agenda = require("../worker.js");"/users", (req, res, next) => {
  const user = new User(req.body); => {
    if (err) {
      return next(err);
    }"registration email", { userId: user.primary() });
    res.send(201, user.toJson());
  });
});



Now you can do the following in your project:

node server.js

Fire up an instance with no JOB_TYPES, giving you the ability to schedule jobs without wasting resources processing them.

JOB_TYPES=email node server.js

Allow your http server to process email jobs.

JOB_TYPES=email node worker.js

Fire up an instance that processes email jobs.

JOB_TYPES=video-processing,image-processing node worker.js

Fire up an instance that processes video-processing/image-processing jobs. Good for a heavy hitting server.

Debugging Issues

If you think you have encountered a bug, please feel free to report it here:

Submit Issue

Please provide us with as much detail as possible, such as:

  • Agenda version
  • Environment (OSX, Linux, Windows, etc)
  • Small description of what happened
  • Any relevant stack trace
  • Agenda logs (see below)

To turn on logging, please set your DEBUG env variable like so:

  • OSX: DEBUG="agenda:*" ts-node src/index.js
  • Linux: DEBUG="agenda:*" ts-node src/index.js
  • Windows CMD: set DEBUG=agenda:*
  • Windows PowerShell: $env:DEBUG = "agenda:*"

While not necessary, attaching a text file with this debug information would be extremely useful in debugging certain issues and is encouraged.

Known Issues

"Multiple order-by items are not supported. Please specify a single order-by item."

When running Agenda on Azure cosmosDB, you might run into this issue caused by Agenda's sort query used for finding and locking the next job. To fix this, you can pass custom sort option: sort: { nextRunAt: 1 }


Author: Agenda
License: View license

Agenda: MongoDB-backed Job Scheduling
Nat Grady


Colored Terminal Output for Python's Logging Module

coloredlogs: Colored terminal output for Python's logging module

The coloredlogs package enables colored terminal output for Python's logging module. The ColoredFormatter class inherits from logging.Formatter and uses ANSI escape sequences to render your logging messages in color. It uses only standard colors so it should work on any UNIX terminal. It's currently tested on Python 2.7, 3.5+ and PyPy (2 and 3). On Windows coloredlogs automatically tries to enable native ANSI support (on up-to-date Windows 10 installations) and falls back on using colorama (if installed). Here is a screen shot of the demo that is printed when the command coloredlogs --demo is executed:

Note that the screenshot above includes custom logging levels defined by my verboselogs package: if you install both coloredlogs and verboselogs it will Just Work (verboselogs is of course not required to use coloredlogs).


Installation

The coloredlogs package is available on PyPI, which means installation should be as simple as:

$ pip install coloredlogs

There's actually a multitude of ways to install Python packages (e.g. the per user site-packages directory, virtual environments or just installing system wide) and I have no intention of getting into that discussion here, so if this intimidates you then read up on your options before returning to these instructions 😉.

Optional dependencies

Native ANSI support on Windows requires an up-to-date Windows 10 installation. If this is not working for you then consider installing the colorama package:

$ pip install colorama

Once colorama is installed it will be used automatically.


Usage

Here's an example of how easy it is to get started:

import coloredlogs, logging

# Create a logger object.
logger = logging.getLogger(__name__)

# By default the install() function installs a handler on the root logger,
# this means that log messages from your code and log messages from the
# libraries that you use will all show up on the terminal.
coloredlogs.install(level='DEBUG')

# If you don't want to see log messages from libraries, you can pass a
# specific logger object to the install() function. In this case only log
# messages originating from that logger will show up on the terminal.
coloredlogs.install(level='DEBUG', logger=logger)

# Some examples.
logger.debug("this is a debugging message")"this is an informational message")
logger.warning("this is a warning message")
logger.error("this is an error message")
logger.critical("this is a critical message")

Format of log messages

The ColoredFormatter class supports user defined log formats so you can use any log format you like. The default log format is as follows:

%(asctime)s %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s

This log format results in the following output:

2015-10-23 03:32:22 peter-macbook coloredlogs.demo[30462] DEBUG message with level 'debug'
2015-10-23 03:32:23 peter-macbook coloredlogs.demo[30462] VERBOSE message with level 'verbose'
2015-10-23 03:32:24 peter-macbook coloredlogs.demo[30462] INFO message with level 'info'

You can customize the log format and styling using environment variables as well as programmatically, please refer to the online documentation for details.

Enabling millisecond precision

If you're switching from logging.basicConfig() to coloredlogs.install() you may notice that timestamps no longer include milliseconds. This is because coloredlogs doesn't output milliseconds in timestamps unless you explicitly tell it to. There are three ways to do that:

The easy way is to pass the milliseconds argument to coloredlogs.install():

coloredlogs.install(milliseconds=True)
This became supported in release 7.1 (due to #16).

Alternatively you can change the log format to include 'msecs':

%(asctime)s,%(msecs)03d %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s

Here's what the call to coloredlogs.install() would then look like:

coloredlogs.install(fmt='%(asctime)s,%(msecs)03d %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s')

Customizing the log format also enables you to change the delimiter that separates seconds from milliseconds (the comma above). This became possible in release 3.0 which added support for user defined log formats.

If the use of %(msecs)d isn't flexible enough you can instead add %f to the date/time format, it will be replaced by the value of %(msecs)03d. Support for the %f directive was added to release 9.3 (due to #45).

Custom logging fields

The following custom log format fields are supported:

  • %(hostname)s provides the hostname of the local system.
  • %(programname)s provides the name of the currently running program.
  • %(username)s provides the username of the currently logged in user.

When coloredlogs.install() detects that any of these fields are used in the format string the applicable logging.Filter subclasses are automatically registered to populate the relevant log record fields.
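
As a sketch of what such a filter does, here is a minimal stand-in written for illustration (HostNameFilter below is not coloredlogs' internal class, though the package registers an equivalent one when it sees %(hostname)s in the format string):

```python
import logging
import socket

class HostNameFilter(logging.Filter):
    """Inject a %(hostname)s field into every log record."""

    def filter(self, record):
        record.hostname = socket.gethostname()
        return True  # never drops records, only annotates them

handler = logging.StreamHandler()
handler.addFilter(HostNameFilter())
handler.setFormatter(logging.Formatter("%(hostname)s %(name)s %(levelname)s %(message)s"))

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.warning("hello")  # the record now carries the local hostname
```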

Changing text styles and colors

The online documentation contains an example of customizing the text styles and colors.

Colored output from cron

When coloredlogs is used in a cron job, the output that's e-mailed to you by cron won't contain any ANSI escape sequences because coloredlogs realizes that it's not attached to an interactive terminal. If you'd like to have colors e-mailed to you by cron there are two ways to make it happen:

Modifying your crontab

Here's an example of a minimal crontab:

* * * * * root coloredlogs --to-html your-command

The coloredlogs program is installed when you install the coloredlogs Python package. When you execute coloredlogs --to-html your-command it runs your-command under the external program script (you need to have this installed). This makes your-command think that it's attached to an interactive terminal which means it will output ANSI escape sequences which will then be converted to HTML by the coloredlogs program. Yes, this is a bit convoluted, but it works great :-)

Modifying your Python code

The ColoredCronMailer class provides a context manager that automatically enables HTML output when the $CONTENT_TYPE variable has been correctly set in the crontab.

This requires my capturer package which you can install using pip install 'coloredlogs[cron]'. The [cron] extra will pull in capturer 2.4 or newer which is required to capture the output while silencing it - otherwise you'd get duplicate output in the emails sent by cron.

The context manager can also be used to retroactively silence output that has already been produced, this can be useful to avoid spammy cron jobs that have nothing useful to do but still email their output to the system administrator every few minutes :-).


The latest version of coloredlogs is available on PyPI and GitHub. The online documentation is available on Read The Docs and includes a changelog. For bug reports please create an issue on GitHub. If you have questions, suggestions, etc. feel free to send me an e-mail at

Author: Xolox
Source Code: 
License: MIT License

#python #html #terminal #cron 

Nandu Singh


Schedule task in Node.js using full crontab syntax

The node-cron module is a tiny task scheduler in pure JavaScript for Node.js, based on GNU crontab. It allows you to schedule tasks in Node.js using full crontab syntax.

Need a job scheduler with support for worker threads and cron syntax? Try out the Bree job scheduler!

Getting Started

Install node-cron using npm:

$ npm install --save node-cron

Import node-cron and schedule a task:

var cron = require('node-cron');

cron.schedule('* * * * *', () => {
  console.log('running a task every minute');
});

Cron Syntax

This is a quick reference to cron syntax and also shows the options supported by node-cron.

Allowed fields

 # ┌────────────── second (optional)
 # │ ┌──────────── minute
 # │ │ ┌────────── hour
 # │ │ │ ┌──────── day of month
 # │ │ │ │ ┌────── month
 # │ │ │ │ │ ┌──── day of week
 # │ │ │ │ │ │
 # │ │ │ │ │ │
 # * * * * * *

Allowed values

second (optional): 0-59
minute: 0-59
hour: 0-23
day of month: 1-31
month: 1-12 (or names)
day of week: 0-7 (or names, 0 or 7 are Sunday)

Crontab Expressions Examples

Crontab every 5 minutes

*/5 * * * *

Crontab every 10 minutes

*/10 * * * *

Crontab every 15 minutes

*/15 * * * *

Crontab every 30 minutes

*/30 * * * *

Crontab every 45 minutes

*/45 * * * *

Note: in the minute field, */45 matches minutes 0 and 45 of each hour, so this does not run at a strict 45-minute interval.

Crontab every hour

0 * * * *

Crontab every 2 hours

0 */2 * * *

Crontab every 3 hours

0 */3 * * *

Crontab every 5 hours

0 */5 * * *

Crontab every 6 hours

0 */6 * * *

Crontab Monday to Friday

0 0 * * 1-5

Crontab every Monday

0 0 * * MON

Crontab every Tuesday

0 0 * * TUE

Crontab every Wednesday

0 0 * * WED

Crontab every Thursday

0 0 * * THU

Crontab every Friday

0 0 * * FRI

Crontab every Saturday

0 0 * * SAT

Crontab every Sunday

0 0 * * SUN

Crontab every Week

0 0 * * 0

Crontab every day at 01:00

0 1 * * *


Using multiples values

You may use multiple values separated by commas:

var cron = require('node-cron');

cron.schedule('1,2,4,5 * * * *', () => {
  console.log('running every minute 1, 2, 4 and 5');
});

Using ranges

You may also define a range of values:

var cron = require('node-cron');

cron.schedule('1-5 * * * *', () => {
  console.log('running every minute from 1 through 5');
});

Using step values

Step values can be used in conjunction with ranges, by following a range with '/' and a number, e.g. 1-10/2, which is the same as 2,4,6,8,10. Steps are also permitted after an asterisk, so if you want to say "every two minutes", just use */2.

var cron = require('node-cron');

cron.schedule('*/2 * * * *', () => {
  console.log('running a task every two minutes');
});

Using names

For month and week day you also may use names or short names. e.g:

var cron = require('node-cron');

cron.schedule('* * * January,September Sunday', () => {
  console.log('running on Sundays of January and September');
});

Or with short names:

var cron = require('node-cron');

cron.schedule('* * * Jan,Sep Sun', () => {
  console.log('running on Sundays of January and September');
});

Cron methods


schedule(expression, function, options)

Schedules the given task to be executed whenever the cron expression ticks.

Arguments:

  • expression string: cron expression
  • function Function: task to be executed
  • options Object: optional configuration for job scheduling

Options:

  • scheduled: a boolean to set if the created task is scheduled. Default true.
  • timezone: the timezone that is used for job scheduling. See moment-timezone for valid values.


var cron = require('node-cron');

cron.schedule('0 1 * * *', () => {
  console.log('Running a job at 01:00 in the America/Sao_Paulo timezone');
}, {
  scheduled: true,
  timezone: "America/Sao_Paulo"
});

ScheduledTask methods


start()

Starts the scheduled task. A task created with scheduled: false will not run until started.

var cron = require('node-cron');

var task = cron.schedule('* * * * *', () => {
  console.log('stopped task');
}, {
  scheduled: false
});

task.start();



stop()

Stops the scheduled task. The task won't be executed unless re-started.

var cron = require('node-cron');

var task = cron.schedule('* * * * *', () => {
  console.log('will execute every minute until stopped');
});

task.stop();



destroy()

The task will be stopped and completely destroyed.

var cron = require('node-cron');

var task = cron.schedule('* * * * *', () => {
  console.log('will not execute anymore, nor be able to restart');
});

task.destroy();



validate(expression)

Validates that the given string is a valid cron expression.

var cron = require('node-cron');

var valid = cron.validate('59 * * * *');   // true
var invalid = cron.validate('60 * * * *'); // false (60 is not a valid minute)


Feel free to submit issues and enhancement requests here.


node-cron is under ISC License.

#node  #nodejs  #cron  #crontab 

Nandu Singh


How to run a script in crontab every 15 minutes?

The crontab expression for every 15 minutes is a commonly used cron schedule.

*/15 * * * *

The following key shows how each position in the cron pattern string is interpreted:

* * * * * *
| | | | | |
| | | | | day of week
| | | | month
| | | day of month
| | hour
| minute
second (optional)
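
A step value like */15 expands to every 15th value of the field's range. A tiny illustrative sketch of that expansion (plain Python, not part of any cron implementation):

```python
def expand_step(lo, hi, step):
    """Expand a cron step expression like '*/step' over the field range lo..hi."""
    return list(range(lo, hi + 1, step))

# '*/15' in the minute field (range 0-59):
print(expand_step(0, 59, 15))  # [0, 15, 30, 45]
```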

How to run a script in crontab every 15 minutes?

Cron jobs let you run a script to do a repetitive task in an efficient way. Here's how you can schedule a cron job to run every 15 minutes:

Step 1: Edit your crontab file by running the "crontab -e" command

crontab -e

Step 2: Add the following line for a 15-minute interval:

*/15 * * * * /home/ubuntu/

Step 3: Save the file. Done!

How to restart cron after changing crontab?

Cron examines the modification time on all crontab files and reloads those that have changed, so cron does not need to be restarted whenever a crontab file is modified.

But if you just want to make sure it's done anyway:

sudo service cron reload


/etc/init.d/cron reload
sudo service cron restart
sudo systemctl reload crond

Happy coding!

#cron #crontab


Kubernetes Jobs | How Cron job works | K21academy


Kubernetes Jobs and types of Jobs

Kubernetes provides several controllers for managing pods, like ReplicaSets, DaemonSets, StatefulSets, and Deployments. They ensure that their pods are always running: if a pod fails, the controller restarts it or reschedules it to another node to make sure the application the pod is hosting keeps running.

What if we do want the pod to terminate? Well, that is where Kubernetes Jobs come in.

➤ Kubernetes Jobs
The main function of a Job is to create one or more pods and track their success; a Job ensures that a specified number of pods complete successfully. When that number of successful pod runs is reached, the Job is considered complete. A Kubernetes Job, like other Kubernetes resources, is created through a definition file.
➤ Kubernetes CronJobs
CronJobs are for cluster tasks that need to be executed on a predefined schedule. They are useful for periodic and recurring tasks, like running backups, sending emails, or scheduling individual tasks for a specific time, such as when your cluster is likely to be idle.
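A minimal CronJob manifest might look like the sketch below (the name nightly-backup, the busybox image, and the command are placeholders for illustration, not from the video):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup        # illustrative name
spec:
  schedule: "5 0 * * *"       # every day, five minutes after midnight
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: busybox    # placeholder image
            command: ["sh", "-c", "echo running backup"]
          restartPolicy: OnFailure
```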

To know more, check this video from K21Academy, where we explain:
00:00 = Introduction to Kubernetes Jobs
00:47 = Agenda
01:15 = What is Cron Job
04:49 = Cron Job - Params
09:15 = How Cron job works
11:50 = Non-parallel jobs
14:54 = Parallel job - Batch processing
21:11 = Parallel job -Task types
22:32 = Parallel job-Work Queue
25:47 = Learning path for Docker and Kubernetes Application Developer(CKAD)
26:53 = FREE Training
27:17 = Registration link for FREE Class



#kubernetes #docker #cron


In Linux, how do I see or list Cron Jobs?

List Cron Jobs Running by System

The root user can access and modify the operating system's crontabs. You can view the system's cron jobs by running the following commands as root or with a sudo-privileged account (the exact file layout can vary by distribution):

sudo crontab -l
cat /etc/crontab

How to List Hourly Cron Jobs

You can view the /etc/cron.hourly directory to find all the cron jobs scheduled to run every hour:

ls -l /etc/cron.hourly

How to List Daily Cron Jobs

Similarly, you can list all the jobs scheduled to run on a daily basis; most application jobs can be found in this directory:

ls -l /etc/cron.daily

How to List Weekly Cron Jobs

The weekly cron jobs are scheduled under the /etc/cron.weekly directory:

ls -l /etc/cron.weekly

#cron #cronjobs #linux
