Monty Boehm


Run Cron Jobs Every 5, 10, or 15 Minutes

A cron job is a task that is executed at specified intervals. Tasks can be scheduled to run by minute, hour, day of the month, month, day of the week, or any combination of these.

Cron jobs are generally used to automate system maintenance or administration, such as backing up databases or data, updating the system with the latest security patches, checking disk space usage, sending emails, and so on.

Running a cron job every 5, 10, or 15 minutes is one of the most commonly used cron schedules.

Crontab Syntax and Operators

Crontab (cron table) is a text file that defines the schedule of cron jobs. Crontab files can be created, viewed, modified, and removed with the crontab command.

Each line in the user crontab file contains five time-and-date fields separated by spaces, followed by the command to be run:

* * * * * command(s)
^ ^ ^ ^ ^
| | | | |
| | | | ----- Day of week (0 - 7) (Sunday = 0 or 7)
| | | ------- Month (1 - 12)
| | --------- Day of month (1 - 31)
| ----------- Hour (0 - 23)
------------- Minute (0 - 59)

The first five fields (time and date) also accept the following operators:

  • * - The asterisk operator means all allowed values. If you have the asterisk symbol in the Minute field, the task will be performed every minute.
  • - - The hyphen operator allows you to specify a range of values. If you set 1-5 in the Day of week field, the task will run every weekday (Monday to Friday). The range is inclusive, which means that the first and last values are included in the range.
  • , - The comma operator allows you to define a list of values for repetition. For example, if you have 1,3,5 in the Hour field, the task will run at 1 am, 3 am and 5 am. The list can contain single values and ranges, e.g. 1-5,7,8,10-15.
  • / - The slash operator allows you to specify step values that can be used in conjunction with ranges. For example, if you have 1-10/2 in the Minute field, the action will be performed every two minutes within the range 1-10, the same as specifying 1,3,5,7,9. Instead of a range of values, you can also use the asterisk operator. To specify a job to be run every 20 minutes, you can use */20.
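Putting these operators together, a few sample crontab entries (with a hypothetical /path/to/command) would look like this:

```
# Range: at minute 30 of every hour from 1 am to 5 am
30 1-5 * * * /path/to/command
# List: at 1 am, 3 am and 5 am
0 1,3,5 * * * /path/to/command
# Range with a step: at minutes 1, 3, 5, 7 and 9 of every hour
1-10/2 * * * * /path/to/command
# Asterisk with a step: every 20 minutes
*/20 * * * * /path/to/command
```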

The syntax of system-wide crontab files is slightly different than user crontabs. It contains an additional mandatory user field that specifies which user will run the cron job.

* * * * * <username> command(s)

To edit the crontab file, or create one if it doesn’t exist, use the crontab -e command.

Run a Cron Job Every 5 Minutes

There are two ways to run a cron job every five minutes.

The first option is to use the comma operator to create a list of minutes:

0,5,10,15,20,25,30,35,40,45,50,55  * * * * command

The line above is syntactically correct and it will work just fine. However, typing the whole list can be tedious and prone to errors.

The second option to specify a job to be run every 5 minutes is to use the step operator:

*/5  * * * * command

*/5 means create a list of all minutes and run the job for every fifth value from the list.

Run a Cron Job Every 10 Minutes

To run a cron job every 10 minutes, add the following line in your crontab file:

*/10  * * * * command

Run a Cron Job Every 15 Minutes

To run a cron job every 15 minutes, add the following line in your crontab file:

*/15  * * * * command


We’ve shown you how to run a cron command every 5, 10, or 15 minutes.

Gordon Matlala


Comprehensive Guide: Cron Jobs

Do you need to run a script regularly but don’t want to remember to launch it manually? Or maybe you need to execute a command at a specific time or interval but don’t want the process to monopolize your CPU or memory. In either case, cron jobs are perfect for the task. Let’s look at what they are, how to set them up, and some of the things you can do with them.

There are times when there’s a need to run a group of tasks automatically at certain times in the future. These tasks are usually administrative but could be anything – from making database backups to downloading emails when everyone is asleep.

Cron is a time-based job scheduler in Unix-like operating systems, which triggers certain tasks in the future. The name originates from the Greek word χρόνος (chronos), which means time.

The most commonly used version of Cron is known as Vixie Cron. Paul Vixie originally developed it in 1987.

Cron Job Terminology

  • Job: a unit of work, a series of steps to do something. For example, sending an email to a group of users. This article will use task, job, cron job, or event interchangeably.
  • Daemon: a computer program that runs in the background, serving different purposes. Daemons often start at boot time. A web server is a daemon serving HTTP requests. Cron is a daemon for running scheduled tasks.
  • Cron Job: a cron job is a scheduled job. The daemon runs the job when it’s due.
  • Webcron: a time-based job scheduler that runs within the server environment. It’s an alternative to the standard Cron, often on shared web hosts that do not provide shell access.

Getting Started with Cron Jobs

If we take a look inside the /etc directory, we can see directories like cron.hourly, cron.daily, cron.weekly and cron.monthly, each corresponding to a certain frequency of execution.

One way to schedule our tasks is to place our scripts in the proper directory. For example, to run db_backup.php on a daily basis, we put it inside cron.daily. If the folder for a given frequency is missing, we would need to create it first.

Note: This approach uses the run-parts script, a command which runs every executable it finds within the specified directory.
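As a quick sketch of what run-parts does, we can point it at a temporary directory instead of /etc/cron.daily (the directory and script names here are made up):

```shell
# Create a throwaway stand-in for /etc/cron.daily
dir=$(mktemp -d)

# Drop an executable script into it (run-parts only runs executable files)
printf '#!/bin/sh\necho "hello from the demo task"\n' > "$dir/demo-task"
chmod +x "$dir/demo-task"

# run-parts executes every executable it finds in the directory
run-parts "$dir"

rm -r "$dir"
```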

This is the simplest way to schedule a task. However, if we need more flexibility, we should use Crontab.

Crontab Files

Cron uses special configuration files called crontab files, which contain a list of jobs to be done. Crontab stands for Cron Table. Each line in the crontab file is called a cron job, which resembles a set of columns separated by a space character. Each row specifies when and how often Cron should execute a certain command or script.

In a crontab file, blank lines are ignored, as are leading spaces and tabs. Lines whose first non-blank character is # are considered comments.

Active lines in a crontab are either the declaration of an environment variable or a cron job. Crontab does not allow comments on active lines.

Below is an example of a crontab file with just one entry:

0 0 * * *  /var/www/sites/

The first part 0 0 * * * is the cron expression, which specifies the frequency of execution. The above cron job will run once a day.

Users can have their own crontab files, named after their username as registered in the /etc/passwd file. All user-level crontab files reside in Cron’s spool area. We should not edit these files directly; instead, we edit them using the crontab command-line utility.

Note: The spool directory varies across different distributions of Linux. On Ubuntu it’s /var/spool/cron/crontabs while in CentOS it’s /var/spool/cron.

To edit our own crontab file:

crontab -e

The above command will automatically open up the crontab file which belongs to our user. If you haven’t chosen a default editor for the crontab before, you’ll see a selection of installed editors to pick from. We can also explicitly choose or change our desired editor for editing the crontab file:

export VISUAL=nano; crontab -e

After we save the file and exit the editor, the crontab will be checked for accuracy. If everything is set properly, the file will be saved to the spool directory.

Note: Each command in the crontab file executes as the user who owns the crontab. If a command needs to run as root, it must be defined in the root user’s crontab, not in your own.

To list the installed cron jobs belonging to our own user:

crontab -l

We can also write our cron jobs in a file and send its contents to the crontab file like so:

crontab /path/to/the/file/containing/cronjobs.txt

The preceding command will overwrite the existing crontab file with /path/to/the/file/containing/cronjobs.txt.

To remove the crontab, we use the -r option:

crontab -r

Anatomy of a Crontab Entry

The anatomy of a user-level crontab entry looks like the following:

 # ┌───────────── min (0 - 59) 
 # │ ┌────────────── hour (0 - 23)
 # │ │ ┌─────────────── day of month (1 - 31)
 # │ │ │ ┌──────────────── month (1 - 12)
 # │ │ │ │ ┌───────────────── day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0)
 # │ │ │ │ │
 # │ │ │ │ │
 # * * * * *  command to execute

The first two fields specify the time (minute and hour) at which the task will run. The next two fields specify the day of the month and the month. The fifth field specifies the day of the week.

Cron will execute the command when the minute, hour, month, and either day of month or day of week match the current time.

If both day of week and day of month have certain values, the event will run when either field matches the current time. Consider the following expression:

0 0 5-20/5 Feb 2 /path/to/command

The preceding cron job will run at midnight on the 5th, 10th, 15th, and 20th of February, plus every Tuesday in February.

Important: When both day of month and day of week have specific values (not an asterisk), they form an OR condition: the job runs when either field matches.

The syntax in system crontabs (/etc/crontab) is slightly different from user-level crontabs. The difference is that the sixth field is not the command but the user we want to run the job as.

* * * * * testuser /path/to/command

It’s not recommended to put system-wide cron jobs in /etc/crontab, as this file might be modified in future system updates. Instead, we put these cron jobs in the /etc/cron.d directory.

Editing Other Users’ Crontab

We might need to edit other users’ crontab files. To do this, we use the -u option as below:

crontab -u username -e

Note: We can only edit other users’ crontab files as the root user, or as a user with administrative privileges.

Some tasks require super admin privileges. You should add them to the root user’s crontab file:

sudo crontab -e

Note: Using sudo with crontab -e will edit the root user’s crontab file. If we need to edit another user’s crontab while using sudo, we should use the -u option to specify the crontab owner.

To learn more about the crontab command:

man crontab

Standard and Non-Standard Crontab Values

Crontab fields accept numbers as values. However, we can put other data structures in these fields, as well.


We can pass in ranges of numbers:

0 6-18 1-15 * * /path/to/command

The above cron job will run at the start of every hour from 6 am to 6 pm, from the 1st to the 15th of each month. Note that the specified range is inclusive, so 1-5 means 1,2,3,4,5.


A list is a group of comma-separated values. We can have lists as field values:

0 1,4,5,7 * * * /path/to/command

The above syntax will run the cron job at 1 am, 4 am, 5 am and 7 am every day.


Steps can be used with ranges or the asterisk character (*). When they are used with ranges they specify the number of values to skip through the end of the range. They are defined with a / character after the range, followed by a number. Consider the following syntax:

0 6-18/2 * * * /path/to/command

The above cron job will run every two hours from 6 am to 6 pm.

When steps are used with an asterisk, they simply specify the frequency of that particular field. As an example, if we set the minute to */5, it simply means every five minutes.

We can combine lists, ranges, and steps together to have more flexible event scheduling:

0 0-10/5,14,15,18-23/3 1 1 * /path/to/command

On January 1st, the above event will run every five hours from midnight to 10 am (hours 0, 5 and 10), at 2 pm and 3 pm, and every three hours between 6 pm and 11 pm (6 pm and 9 pm).
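We can sanity-check such an expression by expanding each piece of the hour field by hand; seq FIRST STEP LAST generates the same values Cron derives from a range with a step:

```shell
# 0-10/5 -> 0 5 10;  14,15 -> 14 15;  18-23/3 -> 18 21
{ seq 0 5 10; echo 14; echo 15; seq 18 3 23; } | tr '\n' ' '
# prints: 0 5 10 14 15 18 21
```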


For the fields month and day of week we can use the first three letters of a particular day or month, like Sat, sun, Feb, Sep, etc.

* * * Feb,mar sat,sun /path/to/command

The preceding cron job will run only on Saturdays and Sundays of February and March.

Please note that the names are not case-sensitive. Ranges are not allowed when using names.

Predefined Definitions

Some cron implementations may support some special strings. These strings are used instead of the first five fields, each specifying a certain frequency:

  • @yearly, @annually Run once a year at midnight of January 1 (0 0 1 1 *)
  • @monthly Run once a month, at midnight of the first day of the month (0 0 1 * *)
  • @weekly Run once a week at midnight of Sunday (0 0 * * 0)
  • @daily Run once a day at midnight (0 0 * * *)
  • @hourly Run at the beginning of every hour (0 * * * *)
  • @reboot Run once at startup

Multiple Commands in the Same Cron Job

We can run several commands in the same cron job by separating them with a semi-colon (;).

* * * * * /path/to/command-1; /path/to/command-2

If the running commands depend on each other, we can use double ampersand (&&) between them. As a result, the second command will not run if the first one fails.

* * * * * /path/to/command-1 && /path/to/command-2
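The difference between the two separators is easy to demonstrate in a shell, since false always fails and true always succeeds:

```shell
false; echo "second command still runs"    # ';' ignores the failure of 'false'
false && echo "skipped after a failure"    # '&&' short-circuits, so this never prints
true && echo "runs after success"
```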

Environment Variables

Environment variables in crontab files are in the form of VARIABLE_NAME = VALUE (The white spaces around the equal sign are optional). Cron does not source any startup files from the user’s home directory (when it’s running user-level crons). This means we should manually set any user-specific settings required by our tasks.

Cron daemon automatically sets some environmental variables when it starts. HOME and LOGNAME are set from the crontab owner’s information in /etc/passwd. However, we can override these values in our crontab file if there’s a need for this.

There are also a few more variables like SHELL, specifying the shell which runs the commands. It is /bin/sh by default. We can also set the PATH in which to look for programs.

PATH=/usr/bin:/usr/local/bin

Important: We should wrap the value in quotation marks when there’s a space in the value. Please note that values are ordinary strings. They will not be interpreted or parsed in any way.

Different Time Zones

Cron uses the system’s time zone setting when evaluating crontab entries. This might cause problems for multiuser systems with users based in different time zones. To work around this problem, we can add an environment variable named CRON_TZ in our crontab file. As a result, all crontab entries will parse based on the specified timezone.
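On cron implementations that support it (cronie, for example), this looks like an ordinary variable assignment at the top of the crontab; the path below is hypothetical:

```
CRON_TZ=Europe/Paris
# Runs at 9 am Paris time, regardless of the system's time zone
0 9 * * * /path/to/command
```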

How Cron Interprets Crontab Files

After Cron starts, it searches its spool area to find and load crontab files into memory. It additionally checks the /etc/crontab file and the /etc/cron.d directory for system crontabs.

After loading the crontabs into memory, Cron checks the loaded crontabs on a minute-by-minute basis, running the events which are due.

In addition to this, Cron regularly (every minute) checks whether the spool directory’s modtime (modification time) has changed. If so, it checks the modtime of all the loaded crontabs and reloads those which have changed. That’s why we don’t have to restart the daemon when installing a new cron job.

Cron Permissions

We can specify which user should be able to use Cron and which user should not. There are two files that play an important role when it comes to cron permissions: /etc/cron.allow and /etc/cron.deny.

If /etc/cron.allow exists, then our username must be listed in this file in order to use crontab. If /etc/cron.deny exists, it shouldn’t contain our username. If neither of these files exists, then based on the site-dependent configuration parameters, either the superuser or all users will be able to use crontab command. For example, in Ubuntu, if neither file exists, all users can use crontab by default.
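The allow-file check boils down to an exact-match lookup of the username. As a sketch, here it is simulated against a temporary file standing in for /etc/cron.allow (the usernames are made up):

```shell
# Stand-in for /etc/cron.allow, permitting only "alice"
allow=$(mktemp)
echo "alice" > "$allow"

for user in alice bob; do
    # grep -qx: quiet, whole-line match, like cron's username lookup
    if grep -qx "$user" "$allow"; then
        echo "$user: allowed"
    else
        echo "$user: denied"
    fi
done

rm "$allow"
```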

We can put ALL in /etc/cron.deny file to prevent all users from using cron:

echo ALL > /etc/cron.deny

Note: If we create an /etc/cron.allow file, there’s no need to create an /etc/cron.deny file: when /etc/cron.allow exists, only the users listed in it can use cron, which has the same effect as denying everyone else in /etc/cron.deny.

Redirecting Output

We can redirect the output of our cron job to a file if the command (or script) has any output:

* * * * * /path/to/php /path/to/the/command >> /var/log/cron.log

We can redirect the standard output to /dev/null so we get no email for normal output, but still receive an email if the command writes to standard error:

* * * * * /path/to/php /path/to/the/command > /dev/null

To prevent Cron from sending any emails to us, we change the respective crontab entry as below:

* * * * * /path/to/php /path/to/the/command > /dev/null 2>&1

This means “send both the standard output and the error output into oblivion.”
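The effect of > file 2>&1 can be seen with any command that writes to both streams; here a temporary file stands in for /var/log/cron.log:

```shell
log=$(mktemp)   # stand-in for /var/log/cron.log

# Write one line to stdout and one to stderr, redirecting both to the file
{ echo "normal output"; echo "error output" >&2; } > "$log" 2>&1

cat "$log"      # both lines ended up in the file, so cron has nothing to mail
rm "$log"
```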

Email the Output

The output is mailed to the owner of the crontab or the email(s) specified in the MAILTO environment variable (if the standard output or standard error are not redirected as above).

If MAILTO is set to an empty string (MAILTO=""), no email will be sent out as the result of the cron job.

We can set several emails by separating them with commas:

MAILTO="user1@example.com,user2@example.com"
* * * * * /path/to/command

Cron and PHP

We usually run our PHP command line scripts using the PHP executable.

php script.php

Alternatively, we can use shebang at the beginning of the script, and point to the PHP executable:

#!/usr/bin/php
<?php

// PHP code here

As a result, we can execute the file by calling it by name. However, we need to make sure we have permission to execute it.

To have more robust PHP command-line scripts, we can use third-party components for creating console applications like Symfony Console Component or Laravel Artisan. This article is a good start for using Symfony’s Console Component.

Learn more about creating console commands using Laravel Artisan. If you’d rather use another command-line tool for PHP, we have a comparison here.

Task Overlaps

There are times when scheduled tasks take much longer than expected. This will cause overlaps, meaning some tasks might be running at the same time. This might not cause a problem in some cases, but when they are modifying the same data in a database, we’ll have a problem. We can overcome this by increasing the execution frequency of the tasks. Still, there’s no guarantee that these overlaps won’t happen again.

We have several options to prevent cron jobs from overlapping.

Using Flock

Flock is a nice tool to manage lock files from within shell scripts or the command line. These lock files are useful for knowing whether or not a script is running.

When used in conjunction with Cron, the respective cron job does not start if the lock is already held. flock is part of the util-linux package and comes preinstalled on most Linux distributions. If it’s missing, install util-linux with apt-get or yum, depending on the Linux distribution.

apt-get install util-linux


yum install util-linux

Consider the following crontab entry:

* * * * * /usr/bin/flock --timeout=1 /path/to/cron.lock /usr/bin/php /path/to/scripts.php

In the preceding example, flock tries to acquire the lock on /path/to/cron.lock. If it succeeds within one second, it runs the script. Otherwise, it fails with an exit code of 1.
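The behaviour is easy to reproduce outside of Cron. A minimal sketch, using a temporary file as the lock:

```shell
lock=$(mktemp)   # stand-in for /path/to/cron.lock

# Nothing holds the lock yet, so this runs immediately
flock --timeout=1 "$lock" echo "lock acquired, running"

# Hold the lock in the background for a moment...
flock "$lock" sleep 2 &
sleep 0.2        # give the background job time to grab the lock

# ...so a second, non-blocking attempt fails, just like an overlapping cron job
flock --nonblock "$lock" echo "this never prints" || echo "lock busy, skipped"
wait
rm "$lock"
```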

Using a Locking Mechanism in the Scripts

If the cron job executes a script, we can implement a locking mechanism in the script. Consider the following PHP script:

$lockfile = sys_get_temp_dir() . '/' . md5(__FILE__) . '.lock';
$pid      = file_exists($lockfile) ? trim(file_get_contents($lockfile)) : null;

if (is_null($pid) || posix_getsid((int) $pid) === false) {

    // Do something here
    // And then create/update the lock file
    file_put_contents($lockfile, getmypid());

} else {
    exit('Another instance of the script is already running.');
}
In the preceding code, we keep the pid of the current PHP process in a file located in the system’s temp directory. Each PHP script has its own lock file, whose name is the MD5 hash of the script’s filename.

First, we check if the lock file exists, and then we get its content, which is the process ID of the last running instance of the script. Then we pass the pid to posix_getsid PHP function, which returns the session ID of the process. If posix_getsid returns false it means the process is not running anymore and we can safely start a new instance.


Anacron

One of the problems with Cron is that it assumes the system is running continuously (24 hours a day). This causes problems for machines that are not running all day long (like personal computers). If the system goes offline during a scheduled task time, Cron will not run that task retroactively.

Anacron is not a replacement for Cron, but it solves this problem. It runs the commands once a day, week, or month but not on a minute-by-minute or hourly basis as Cron does. It is, however, a guarantee that the task will run even if the system goes off for an unanticipated period of time.

Only root or a user with administrative privileges can manage Anacron tasks. Anacron does not run in the background like a daemon, but only once, executing the tasks which are due.

Anacron uses a configuration file (just like crontab) named anacrontab. This file is located in the /etc directory.

The content of this file looks like this:

# /etc/anacrontab: configuration file for anacron

# See anacron(8) and anacrontab(5) for details.

# the maximal random delay added to the base delay of the jobs
# the jobs will be started during the following hours only

#period in days   delay in minutes   job-identifier   command
1       5       cron.daily              nice run-parts /etc/cron.daily
7       25      cron.weekly             nice run-parts /etc/cron.weekly
@monthly 45     cron.monthly            nice run-parts /etc/cron.monthly

In an anacrontab file, we can only set the frequencies with a period of n days, followed by the delay time in minutes. This delay time is just to make sure the tasks do not run at the same time.

The third column is a unique name, which identifies the task in the Anacron log files.

The fourth column is the actual command to run.

Consider the following entry:

1       5       cron.daily              nice run-parts /etc/cron.daily

These tasks run daily, five minutes after Anacron runs. It uses run-parts to execute all the scripts within /etc/cron.daily.

The second entry in the list above runs every 7 days (weekly), with a 25-minute delay.

Collision Between Cron and Anacron

As you have probably noticed, Cron is also set to execute the scripts inside the /etc/cron.* directories. Different flavors of Linux handle this possible collision with Anacron differently. In Ubuntu, Cron checks whether Anacron is present in the system and, if so, won’t execute the scripts within the /etc/cron.* directories.

In other flavors of Linux, Cron updates the Anacron timestamps when it runs the tasks. Anacron won’t execute them if Cron has already run them.

Quick Troubleshooting

Absolute Paths to the Commands

It’s a good habit to use the absolute paths to all the executables we use in a crontab file.

* * * * * /usr/local/bin/php /absolute/path/to/the/command

Make Sure Cron Daemon Is Running

If our tasks are not running at all, first we need to make sure the Cron daemon is running:

ps aux | grep crond

The output should be similar to this:

root      7481  0.0  0.0 116860  1180 ?        Ss    2015   0:49 crond

Check /etc/cron.allow and /etc/cron.deny Files

When cron jobs are not running, we need to check if /etc/cron.allow exists. If it does, we need to make sure we list our username in this file. And if /etc/cron.deny exists, we need to make sure our username is not listed in this file.

If we edit a user’s crontab while the user is not listed in /etc/cron.allow, adding the user to /etc/cron.allow afterwards won’t activate the cron jobs until we re-edit the crontab file.

Execute Permission

We need to make sure that the owner of the crontab has execute permission for all the commands and scripts in the crontab file; otherwise, the cron jobs will not run. You can add execute permission to a file with:

chmod +x /some/file.php

New Line Character

Every entry in the crontab should end with a new line. This means there must be a blank line after the last crontab entry, or the last cron job will never run.

Wrapping Up

Cron is a daemon, running a list of events scheduled to take place in the future. We define these jobs in special configuration files called crontab files. Users can have their own crontab file if they are allowed to use Cron, based on /etc/cron.allow or /etc/cron.deny files. In addition to user-level cron jobs, Cron also loads the system-wide cron jobs which are slightly different in syntax.

Our tasks are commonly PHP scripts or command-line utilities. In systems that are not running all the time, we can use Anacron to run the events which happen in the period of n days.

When working with Cron, we should also be aware of tasks overlapping each other, to prevent data loss. After a cron job is finished, the output is sent to the owner of the crontab and/or the email(s) specified in the MAILTO environment variable.

Did you learn anything new from this post? Have we missed anything? Or did you just like this post and want to tell us how awesomely comprehensive it was? Let us know in the comments below!

Rupert Beatty


23 Amazing High Paid Jobs Over $100K You Might Not Be Aware Of

Technology is changing faster than ever before, and the skills that are in high demand are also constantly evolving.

With this in mind, it’s no surprise that salary has become one of the most critical considerations for professionals looking to switch careers or advance their Information Technology knowledge. 

This article will be talking about career paths that will pay well over $100K annually!

Cloud Engineer Salary: $120,000 - $165,000 per year. 

Cloud Engineers are software engineers who specialize in a broader range of topics, especially cloud computing.

They must be skilled in network engineering and have experience developing for Amazon Web Services or Google Compute Engine.

They must have a deep understanding of creating distributed cloud applications and know what it takes to build, deploy, test, and operate software in an AWS or Google Compute Engine environment.

This salary range puts Cloud Engineer close to the top of the best-paying jobs.

Software Product Manager salary: $115,000 - $160,000 per year. 

A Software Product Manager’s job responsibilities include building the product roadmap for a company’s products in their respective fields (i.e., advertising tech). They are responsible for organizing teams’ schedules by setting goals with deadlines that will produce the most efficient results while keeping customers happy!

As you can imagine, this involves working closely with project managers across departments who may be putting together different aspects of one final product. They need to communicate with both internal employees and external customers.

It’s recommended that you have at least five years of experience in the IT industry (but this job is not limited exclusively to those already working in IT).


Machine Learning Engineer salary: $130,000 - $180,000 per year. 

A Machine Learning Engineer finds ways for computers to learn how to do specific tasks without being explicitly programmed, through algorithms that other software or devices can use!

A lot of research goes into developing this skill, so Machine Learning Engineers usually have a doctoral degree in their field. They typically work closely with Data Scientists since they rely on each other’s expertise. Still, it’s possible to get into the field even with a master’s or bachelor’s degree.

Suppose you want to become a Machine Learning Engineer. In that case, it’s recommended that you have at least three years of experience with the software engineering process and know how to code (even if it’s not your primary job).

Machine Learning Engineer looks like one of the best-paid jobs in the IT job market.

Database Developer salary: $115,000 - $150,000 per year. 

A Database Developer’s job responsibilities include building many different databases, such as relational or object-oriented databases, using tools like MySQL, Oracle, and SQL Server.

They also develop ways for companies’ IT departments to optimize their database based on specific business needs while keeping data secure! They need strong programming skills, which means you’ll be writing a lot of code!


Full Stack Developer salary: $105,000 - $160,000 per year. 

A Full Stack Developer knows how to work with websites or applications’ back-end and front-end aspects.

They need to know HTML/CSS for putting together web pages while also learning how to program in languages like C++ and Java if they’re working with software development teams!

In addition, some essential soft skills go along with this job, such as being good at collaboration and communicating ideas across multiple departments! That makes them extremely valuable within companies that focus on digital products (i.e., Apple). 


Full Stack Developer is a great job that offers a very high salary without requiring advanced education. In most cases, a college degree is enough for the engineering managers who decide to hire you.

Hardware Engineer salary: $90,000 - $120,000 per year. 

A Hardware Engineer designs and develops components for a company’s computer systems or products to make them more efficient!

They create many of the physical parts that make up electronic circuits, which can then be built into larger subsystems, and they test the performance of those pieces before they’re ready to ship.

It requires mechanical engineering or CAD software knowledge, so you’ll definitely need some schooling if you want this job!


Data Analyst salary: $60,000 - $100,000 per year. 

Many companies have started utilizing data science teams who use large sets of data to find trends and generate reports. These can be used for marketing or business-oriented decisions.

Data Analysts are responsible for making sense of that information from a company’s perspective by visualizing it in ways that people within the organization understand (i.e., using dashboards).

It also involves sharing insights with other teams like sales, research, and product development to ensure they utilize their resources effectively! 

Still, according to occupational statistics, a candidate with a doctoral degree can have very high pay.

Orchestrator salary: $45,000 - $100,000 per year. 

An Orchestrator takes care of all aspects involved when bringing hardware and software components into one final system, such as networks or databases through automation.

They also ensure that all the pieces work together correctly by making frequent updates to systems while keeping track of everything. Most of their time is spent on software development and testing procedures, with some additional skills like project management needed as well! 


Build Engineer salary: $90,000 - $120,000 per year. 

As a Build Engineer, you’ll be using your knowledge of programming languages such as Java or Python to help build large-scale applications, which can be used across different platforms (i.e., desktops, mobile devices).

It’s usually done within an Agile methodology, which means working closely with Product Managers responsible for setting the overall goals of the company/product they’re working on.

It also requires some problem-solving skills to debug issues that may come up while getting used to new software and hardware! 

Software Developer salary: $90,000 - $120,000 per year. 

A Software Developer will be using their knowledge in programming languages like Java or C++ (in addition to others) by developing the core functionalities of a company’s products, such as creating web pages for websites!

Their job duties include designing and testing parts involved with these features, which often includes collaborating closely with Product Managers when coming up with ideas around what needs building next (i.e., user stories). Software developers need strong analytical thinking & communication skills because they’ll definitely be working on a team! 

According to many job openings, this is one of the best-paying jobs. Hiring engineering managers don’t usually require a master’s degree, so it could be a good choice for your next job, especially if you would like to make more money.

Mobile Developer salary: $110,000 - $140,000 per year. 

A Mobile Developer creates mobile apps for users with an emphasis on iOS or Android development, which requires knowledge of programming languages like Objective-C and Java (in addition to HTML/CSS).

It also involves using software tools such as Xcode when creating these apps and testing them across different devices before they’re publicly released! 

In the past, it may have been more common to start this job by learning how to program first. Still, many companies prefer someone who already has experience developing specifically within their platform of choice, even if not full-time. So definitely keep that in mind while starting your journey into tech! 

Data Scientist salary: $120,000 - $170,000 per year. 

A Data Scientist takes all the information available and turns it into something actionable by finding trends or insights that can be used for business purposes! They use programming languages like Python to analyze data, which requires strong knowledge of statistics, so they’re able to explain their findings using graphs & charts (i.e., dashboards). 

It also involves communicating with people within organizations about what was found, so everyone understands how best to utilize those results! A doctorate degree may help your chances but isn’t necessary since there are plenty of entry-level positions open too if you have the right skills. 

Information systems managers salary: $87,000 - $250,000 per year.

Information Systems Managers use their knowledge of high-level programming languages like C++ & Java, along with a solid background in business and mathematics, to plan, direct, and coordinate activities involving an organization’s computer systems, such as networks and databases!

As you might imagine, this is usually done within larger companies, and it requires someone who can think outside the box for creative solutions when problems arise (i.e., technology- or infrastructure-based).

Communication skills are crucial since they’ll be working closely with both technical teams to ensure plans get implemented correctly and senior management across different departments. 

Keep in mind that high-level positions often pay quite well regardless of what your exact role will be.

Financial analyst salary: $67,000 - $126,000 per year.

A financial analyst analyzes and evaluates the performance of investment portfolios. They are typically employed by high-profile firms, such as banks or brokerage houses. The job requires a high degree of intelligence, analytical skills, research capability, and communication skills. 

Financial analysts work in teams with other professionals to decide what investments are best for their company’s portfolio. They must be able to evaluate risks carefully and have enough knowledge about different companies to recommend an investment strategy that will bring high returns on invested capital over time with low-risk exposure. 

It also involves creating reports and presentations that require strong statistics knowledge (i.e., descriptive & inferential).

This is a job where graduates may actually have an advantage since it’s often considered entry-level even if you don’t yet have relevant experience. Still, at least now you know what skills are needed regardless of whether or not this becomes your next career path. 

Research analysts salary: $58,000 - $145,000 per year.

A research analyst is an individual who analyzes data for a company or organization to help them make decisions. They carry out interviews, surveys and compile data to prepare reports that the company can use to decide what steps they need to take next.

It also involves communicating findings across different departments within an organization before they’re even public information! This is a job where graduates may actually have an advantage since it’s often considered entry-level even if you don’t yet have relevant experience. Still, at least now you know what skills are needed regardless of whether or not this becomes your next career path. 

High-level positions will usually pay more than those lower down on the totem pole.

Business operations manager salary: $69,000 - $249,000 per year.

A business operations manager is in charge of running all aspects of a business, including managing strategy, customer support, and administrative functions. They are high-level managers who must have strong accountability, a diverse skill set, sharp analytical insight, emotional intelligence, a knack for innovation, and deep knowledge of the field they’re working or studying in.

It also involves communicating findings across different departments within an organization often before they’re even public information. This is a job where graduates may actually have an advantage since it’s often considered entry-level even if you don’t yet have relevant experience. Still, at least now you know what skills are needed regardless of whether or not this becomes your next career path. 

High-level positions will usually pay more than those lower down on the totem pole.

Petroleum engineering salary: $84,000 - $258,000 per year.

Petroleum engineers are experts in finding and capturing oil, gas, and other natural resources. They are high-paid specialists; you might see them on TV when their company has just made a significant discovery, or when they need to design new equipment to extract oil from under the ocean floor. Petroleum engineers have exciting jobs because they work with high stakes, and just how high those stakes are depends on where the engineer works. In Saudi Arabia, for example, a petroleum engineer can make more than $1 million annually by finding a well that produces lots of high-quality crude.

In contrast, an American petroleum engineer at a large exploration firm drilling wells in Texas and Alaska might earn less, but can still make a very good living.

A master’s degree may benefit petroleum engineering since it is frequently regarded as entry-level even if you don’t have previous expertise. Still, at least now you know what abilities are required, regardless of whether or not this becomes your next career path.

High-level positions will usually pay more than those lower down on the totem pole, which makes petroleum engineering one of the best jobs.

Medical doctor salary: $239,000 - $503,000 per year. 

Doctors save lives. They can be high-paid professionals who work in high-stakes situations, though not always for high-stakes organizations (or at least that’s the perception).

High school students thinking about doctoring as a career might also consider becoming veterinarians, since it doesn’t require as many long years of schooling. 

However, average salaries are very competitive with other medical professions, so this is one high-paid job where graduates may actually have an edge! Though once again, there is no substitute for experience when it comes to landing your dream job, so get out there and start working hard while you’re still young! 

High-level positions will usually pay more than those lower down on the totem pole.

Financial managers salary: $96,000 - $250,000 per year.

Financial managers are high-level business leaders who oversee all aspects of a company’s financial situation. They must have strong accountability, a diverse skill set, deep analytical understanding, emotional intelligence, a knack for innovation, and thorough knowledge of the field they’re working or studying in.

A financial manager may benefit from a master’s degree, although in some cases the role is considered entry-level even if you don’t have prior experience. It also involves communicating findings across different departments within an organization, often before they’re even public information. At least now you know what abilities are required, regardless of whether or not this becomes your next career path.

Still, high-level positions will usually pay more than those lower down on the totem pole.

Marketing manager salary: $63,000 - $194,000 per year.

Advertising is a high-stakes, high-risk business. If a campaign doesn’t work out exactly as planned, it can end up costing a company millions in profit.

There’s no such thing as “we’ll try again later” when you’ve already spent your annual advertising budget on one poorly executed ad! This means marketers need to be especially skilled individuals who know what they’re selling inside and out.

They also need to know how to sell it, publicly and privately, at a practical cost that is beneficial rather than detrimental. 

Starting doesn’t require an advanced degree. A candidate with a bachelor’s degree in marketing can usually expect to make about $63,000 per year - and that’s the median. High-level positions will typically pay more than lower-level positions.

That makes marketing manager one of the highest paying careers available for high school graduates.

Financial advisors salary: $72,000 - $173,000 per year.

A financial advisor is a specialist who helps people with their finances.  A financial advisor can help you make more money or save money on taxes.

They might also be able to help you plan for your retirement, college education for your children, healthcare costs, and other long-term goals. So what does a financial advisor do?

A few examples include:

  • Analyzing all of the income that someone brings in each year (such as salary) and then figuring out how much that person should invest (or spend) every month;  
  • Helping someone find high-quality investments;  
  • Investigating whether it makes sense to buy life insurance
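
As a rough, purely illustrative sketch of the first bullet point, here is what that budgeting arithmetic might look like (the function name, the flat 20% savings rate, and the example figures are assumptions for illustration, not financial advice):

```python
def suggested_monthly_investment(annual_income, annual_spending, savings_rate=0.20):
    """Invest a fixed share of whatever is left after yearly expenses."""
    disposable = max(annual_income - annual_spending, 0)
    return disposable * savings_rate / 12  # spread the yearly amount over 12 months

# Example: $80,000 income and $56,000 of spending leaves $24,000,
# 20% of which works out to $400 per month.
print(suggested_monthly_investment(80_000, 56_000))
```

A real advisor would of course tailor the rate to the client’s goals, debt, and risk tolerance rather than use a flat rule of thumb.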

The median salary range is $72,000 - $173,000 per year. Still, high-level positions will usually pay more than those lower down on the totem pole, making financial advisor one of the highest paying careers available today!

Sales managers salary: $73,000 - $120,000 per year.

The sales manager is interested in developing and maintaining relationships with potential and current buyers. They must also identify new customers and prospects, which means the sales manager must be analytical and creative.

The job is not always low-stress because there is a lot of competition and high pressure to make the sale.

A candidate with a bachelor’s degree can expect to make about $73,000 to $120,000 per year. Still, high-level positions will usually pay more than those lower down on the totem pole, making sales manager one of the highest paying careers available today!

Chief executives salary: $116,000 - $228,000 per year.

Chief executives are in charge of almost everything when overseeing their own business. In most cases, they report directly to a board or shareholders.

However, that may vary depending upon what industry they’ve worked themselves up through to reach this level in their career. They have to handle pressure head-on and under immense scrutiny from all angles; it is definitely not for everyone. 

The median range is $116,000 - $228,000 per year, which makes chief executive one of the highest-paying careers available today! Still, high-level positions will usually pay more than those lower down on the totem pole.

Is a bachelor’s degree in computer science necessary?

Many high-paid jobs are in computer science. This is because technology changes every year. New technologies replace older ones, and the high-paying jobs are in software development.

These high-paying job titles can require a master’s, bachelor’s, or college degree. Still, you can qualify for many of the highest-paying jobs simply by finishing one of the available training programs.

Conclusion: high-paid jobs of the future

As you can see, there are many different kinds of tech jobs available, with the demand only continuing to increase over time!

It’s also crucial to stay up-to-date on which technologies companies prefer to use, so do a little research beforehand! Since salary ranges vary depending on experience and location, use this as a rough guide when starting your journey into technology careers :)

If you prefer video, here is the YouTube version.

23 Amazing High Paid Jobs Over $100K You Might Not Be Aware Of
Desmond Gerber


Learn Database-Related Jobs and The Differences Between Them

Find out who’s who in the database department and decide which role you most identify with.

In small companies, there is usually only one database job. The person in that position may be an architect one day, a designer the next day, a programmer another day, an administrator the day after that, and sometimes even an analyst or even a data scientist. If you’re planning to work in a small company, you should get used to the idea that you’ll be known as “the database guy/gal” and you’ll have to do a little bit of everything.

But if you look through the job postings of large companies, you’ll find a wide variety of database-related jobs. The postings are oriented towards database professionals with different skills and knowledge. Some require purely technical talents; others require analytical and design talents, strategic business acumen, or leadership and team management skills. In this guide, I will help you understand the differences between common database-related jobs. Then you can determine which one(s) you should aspire to, depending on your particular preferences and skills.

Database professions, grouped by their preferred core competencies.

Database Jobs Requiring Technical Skills

Within the database-related jobs community, some primarily require technical database skills and knowledge. In this category, there are three main roles: database administrator (DBA), database programmer, and database tester.

Database Administrator (DBA)

The DBA is the person in charge of making the database servers work to their full potential. In large organizations, DBAs typically belong to the IT operations team. Their main task is to constantly monitor production database servers with the goal of avoiding operational or performance problems and finding solutions when problems occur.

DBAs must have a deep knowledge of the database engines they work with. That’s why an Oracle DBA is not the same as an MS SQL or MySQL DBA. Each organization will look for a DBA depending on the relational database management system (RDBMS) they use. DBAs must also be proficient in SQL (Structured Query Language) so they can write queries and maintenance operations on the databases they manage.

SQL is a standard language, so virtually all database engines implement it. The most recent version of the standard is SQL:2016. But as with many standards, there are various dialects, with small differences for each RDBMS. Good DBAs must know the peculiarities of the SQL dialect they work with.

In terms of administration tools, there are differences between the various RDBMSs. Thus, the DBA must have a deep knowledge of the tools for their RDBMS. For administrative database jobs, getting certified in the relevant RDBMS tools is highly valued.

Database Programmer

Database programming refers primarily to writing programming code in SQL – either writing queries or creating views, functions, or stored procedures. Companies looking for database programmers usually post SQL jobs, which require candidates to be proficient in standard SQL and, preferably, one or more dialects (e.g. MySQL, PostgreSQL, etc.).
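
To make that concrete, here is a minimal sketch using Python’s built-in sqlite3 module; the `orders` table, its columns, and the `customer_totals` view are illustrative names, not taken from any particular job posting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
    INSERT INTO orders (customer, total) VALUES
        ('alice', 120.0), ('bob', 80.0), ('alice', 40.0);

    -- a view packages a reusable query under a simple name
    CREATE VIEW customer_totals AS
        SELECT customer, SUM(total) AS lifetime_total
        FROM orders
        GROUP BY customer;
""")

# Application code can now query the view like an ordinary table
rows = conn.execute("SELECT * FROM customer_totals ORDER BY customer").fetchall()
print(rows)  # [('alice', 160.0), ('bob', 80.0)]
```

The same `CREATE VIEW` idea carries over to server RDBMSs like MySQL or PostgreSQL, though each dialect adds its own extensions (e.g. stored procedures, which SQLite itself does not support).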

A knowledge of other popular data manipulation languages, such as Python or R, may also be required for a database programmer job. In particular, experience with such languages’ database management libraries is a must.

With the rise of NoSQL databases and non-relational database models, these relatively new terms are taking up more space in database programmer searches. For this reason, it pays to master concepts such as JSON databases and MongoDB design patterns.

It’s common for application programmers to understand at least some database programming as well, since most applications make use of a database. For that reason, every programmer should have at least basic knowledge of SQL and language-specific ORM (object-relational mapping).

A frequent dilemma faced by database programmers employed in SQL jobs is whether or not to implement business logic in the database. The short answer is that it all depends; the long answer is in the link.

Database Tester

The role of any tester is to test software systematically (i.e. not randomly) and detect flaws before users do. The database tester focuses their work on detecting errors that originate in a database. These could be transactions that can’t be completed due to update failures (i.e. the violation of primary or foreign keys or blocking between processes), queries that take too long, and inconsistent, invalid, or anomalous data.
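
As a hedged sketch of what such a systematic check might look like, the snippet below uses Python’s built-in sqlite3 with made-up table names to verify that the engine rejects an order referencing a nonexistent customer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id)
    );
""")

# A tester's check: an order for a nonexistent customer must be rejected
violation_caught = False
try:
    conn.execute("INSERT INTO orders (customer_id) VALUES (999)")
except sqlite3.IntegrityError:
    violation_caught = True  # the engine enforced the foreign key, as it should

print("foreign key violation detected:", violation_caught)
```

A real test suite would report the failure to the responsible programmer, designer, or DBA rather than fix it, exactly as described above.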

Database testers must have sufficient technical knowledge to unequivocally identify the causes of a problem and the conditions under which the problem occurs. This may require knowledge of SQL and some database administration tools (particularly monitors and profilers), although not to the same level as a programmer or a DBA.

Database testers must be clear about how to reproduce a detected problem. Their duty is not to solve it, although they must know who to report it to; depending on the cause, this could be a programmer, a designer, or a DBA.

Database Jobs Requiring Analytical, Interpretive, and Communication Skills

Database jobs requiring analytical and design skills usually center on interpretation and brokering. The actual job names are varied and the difference between them is not always very clear. They may be called designers, engineers, or architects. Often, the differences between these titles have more to do with hierarchy and vision than with particular skills or knowledge.

The common denominator of these positions is the task of interpretation and intermediation. Their work consists mainly of interpreting requirements or needs and expressing them in different ways, either through project development or through design artifacts like entity-relationship diagrams.

Although the boundaries of responsibility between architects, engineers, and database designers are usually blurred, we can establish some differences based on what most companies look for in these roles.

Database Architect

Database architects are commonly asked to have a strategic view of a company’s data infrastructure. To do so, they must use design tools to build artifacts (such as architecture diagrams) in line with that vision.

Database architects collaborate with other IT roles – communications architects, software/applications architects, server architects, cybersecurity architects, etc. – to align their respective objectives with those of the company. In turn, they work with other database professionals – engineers, designers, programmers, and testers – to agree on the responsibilities and guidelines with which each should work.

One of the responsibilities of database architects is to identify database needs and opportunities for improvement as well as planning for the long term based on growth projections, market trends, and overall objectives. For example, imagine migrating your company’s databases to the Cloud. It’s the architect’s responsibility to analyze the costs and benefits of the different alternatives and choose the best one. A database engineer might also, for example, determine the need to migrate an RDBMS to a new version based on an analysis of the costs and benefits.

Due to their strategic and long-term vision, it is common for database architects to have a position of authority. They may determine guidelines and work criteria for the entire database department of a company. That is why leadership skills are often desired in database architects.

Database Engineer

A database engineer is usually expected to work on projects from start to finish. Once a database project – possibly one suggested by the architect – has the approval of the company’s management and has the sponsors to take it forward, the engineer must get to work. It is the engineer’s responsibility to make sure that the databases involved in the project are operational on schedule.

In small organizations, database engineers will probably have to do everything required to get the databases up and running: install/configure the RDBMS, design/implement the database schemas, and even take care of some of the programming and process automation. In larger organizations, an engineer may delegate some of these tasks to DBAs, designers, and programmers.

What the engineer always has is the task of planning for the databases to remain operational over the long term. This means estimating the growth in data volume and transaction volume and making forecasts – in conjunction with the architect – so that at no time does this growth exceed the capacity of the databases.

Database Designer or Modeler

Why do businesses need data modeling? Data models provide conceptual views of how the information in a database is structured. The database modeler builds these models based on project requirements or requests from architects or engineers. The result of their work consists of schemas and scripts to create new databases, adapt existing databases to new requirements, migrate legacy databases, or re-engineer and optimize schemas.

If you aspire to a job as a designer or modeler, I suggest you review the most common questions asked in a database modeling job interview.

A database designer usually uses database modeling and design tools that are independent of particular RDBMSs. Ideally, the data models they create should be implementable on any infrastructure. Nevertheless, a designer should be familiar with technical aspects of databases (especially SQL) so they can understand and eventually adjust the database creation scripts generated by design tools.

Like database engineers, a database designer’s daily tasks will depend on the number of people working in the database area. If you’re the only database person, you’ll have to do a little bit of everything. If there are lots of people, you may only handle the design, referring all tasks involving SQL scripts to the programmers or DBAs.

For more details on the database modeler role, I suggest you read this description of who a database modeler is.

Analysts and Data Scientists

Data analysts and data scientists are involved in collecting, organizing, and statistically interpreting information in databases. They use specific extraction, analysis, and visualization tools as well as directly querying data using SQL.

These roles do not need to have a deep technical knowledge of database tools. However, they must be able to express their needs in more detailed ways than an ordinary user can. This means that an analyst or scientist can, for example, examine an ERD and suggest the designer make certain changes to optimize the schema for the analysis tasks they need to perform.

Every analyst or data scientist should know how to perform queries on a database using SQL. Preferably, they should also be proficient in a language with a strong focus on Big Data analysis, such as the very popular Python or the more scientific R.
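
As a small illustration of that workflow, the sketch below (using Python’s standard sqlite3 and statistics modules, with a made-up `sales` table) aggregates in SQL and then re-checks one overall figure in Python:

```python
import sqlite3
from statistics import mean

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('north', 100.0), ('north', 300.0), ('south', 150.0), ('south', 250.0);
""")

# Let the database do the aggregation...
by_region = conn.execute(
    "SELECT region, AVG(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(by_region)  # [('north', 200.0), ('south', 200.0)]

# ...and sanity-check the overall average in Python
amounts = [a for (a,) in conn.execute("SELECT amount FROM sales")]
print(mean(amounts))  # 200.0
```

In practice an analyst would feed results like these into a visualization or dashboard tool rather than print them, but the division of labor - aggregate in SQL, analyze in Python or R - is the same.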

Database Jobs Requiring Leadership

In this category, you’ll find positions like Head of Data, Chief Data Officer (CDO), or simply “the boss”. Obviously, the person in this position must have the ability to lead groups and make decisions.

Head of Data and CDO roles appear in large companies, where there is a team of people dedicated exclusively to databases.

In these roles, the person must know how to distribute and coordinate the work among the different members of their teams. They must know what each employee can do and be accountable for the team as a whole to the rest of the organization. They must also be able to understand the work of each member and evaluate it to determine its quality. This means that they must be able to look at an architecture diagram, an ERD, a SQL script, or a monitoring trace, understand its meaning, and judge whether it is well or poorly done.

It may happen that the Head of Data role is filled by someone who also performs another role – for example, an architect or a database engineer. However, if the team consists of many people (e.g. more than 7), the time they will have to devote to leading the team will probably take away from other tasks.

Database Consultants

Database consultants specialize in a particular task – which can be database administration, programming, analysis, design, project management, or another database-related job!

The difference with the jobs mentioned above is that consultants usually work independently or freelance; they only join a company temporarily, when they’re needed for a project or a specific task. For example, if a database is behaving strangely and no one is able to solve the problem, a database consultant is hired to diagnose the problem and propose a solution.

Consultants are also called in when an organization lacks the profile it needs for a specific project. For example, suppose a company is developing a software product. A database designer is needed and the development company does not have one (or the ones it has are already busy on other projects). The company can hire an independent designer to do the job in time and with optimum quality.

These consultants sometimes become almost like superheroes. They are called in when there is a need or a problem that no one within the organization can handle. They are trusted to solve that problem, no matter how impossible it may seem. That’s why companies sometimes pay them exorbitant fees!

It can be tempting to work as an independent consultant, since their fees are high and they are not bound by the obligations of a regular employee – like keeping schedules, wearing suits, and having only a two-week vacation a year. But independent consultants must be true rockstars in their fields and must be prepared to do whatever it takes to accomplish the tasks at hand, even if that means sleepless nights or sacrificing weekends.

What Database-Related Job Calls to You?

I hope this guide to who’s who in the database department will help you figure out which role you most identify with. Unfortunately, database job postings are often unclear about the position they need to fill – for example, they may ask for a database architect when what they really need is a database administrator. So when looking at a database job offer, ask for a job description; based on the tasks detailed in that description, you will know whether or not the job is a good fit for your talents, aspirations, and career goals.

Desmond Gerber


Your Guide To Unlocking Top Data Scientist Jobs

Data Science Career Opportunities: Your Guide To Unlocking Top Data Scientist Jobs

In a world where 2.5 quintillion bytes of data is produced every day, a professional who can organize this humongous data to provide business solutions is indeed the hero! Much has been spoken about why Big Data is here to stay and why Big Data Analytics is the best career move. Building on what’s already been written and said, let’s discuss Data Science career opportunities and why ‘Data Scientist’ is the sexiest job title of the 21st century.

Data Science Career Opportunities

A Data Scientist, according to Harvard Business Review, “is a high-ranking professional with the training and curiosity to make discoveries in the world of Big Data”. Therefore it comes as no surprise that Data Scientists are coveted professionals in the Big Data Analytics and IT industry.

With experts predicting that 40 zettabytes of data will be in existence by 2020, Data Science career opportunities will only shoot through the roof! Shortage of skilled professionals in a world that is increasingly turning to data for decision-making has also led to the huge demand for Data Scientists in start-ups and well-established companies. A McKinsey Global Institute study states that by 2018, the US alone will face a shortage of about 190,000 professionals with deep analytical skills. With the Big Data wave showing no signs of slowing down, there’s a rush among global companies to hire Data Scientists to tame their business-critical Big Data.

Data Scientist Salary Trends

A report by Glassdoor shows that Data Scientists lead the pack for the best jobs in America. The report goes on to say that the median salary for a Data Scientist in the US is an impressive $91,470, and there are over 2,300 job openings posted on the site.

The average Data Scientist salaries for job postings in the US are 80% higher than average salaries for all job postings nationwide, as of May 2019.

Data Scientist salary trend


In India, the trend is no different; as of May 2019, the median salary for a Data Scientist role is Rs. 622,162.


Data Scientist Job Roles

A Data Scientist dons many hats in his/her workplace. Not only are Data Scientists responsible for business analytics, they are also involved in building data products and software platforms, along with developing visualizations and machine learning algorithms.

Some of the prominent Data Scientist job titles are:

  • Data Scientist
  • Data Architect
  • Data Administrator
  • Data Analyst
  • Business Analyst
  • Data/Analytics Manager
  • Business Intelligence Manager



Hot Data Science Skills

Coding skills, combined with knowledge of statistics and the ability to think critically, make up the arsenal of a successful Data Scientist. Some of the in-demand Data Scientist skills that will fetch big career opportunities in Data Science are shown in the chart below.

The chart below shows the average Data Scientist Salary by skills in the USA and India.


Currency: India – ₹, US – $

The upward swing in Data Science career opportunities is expected to continue for a long time to come. As data pervades our lives and companies try to make sense of the data they generate, skilled Data Scientists will continue to be wooed by businesses big and small. Case in point: a look at popular job boards reveals top companies competing with each other to hire Data Scientists. A few big names include Facebook, Twitter, Airbnb, Apple, LinkedIn, IBM, and PayPal, among others.

The time is ripe to up-skill in Data Science and Big Data Analytics to take advantage of the Data Science career opportunities that come your way. This is the best opportunity to kick off your career in the field of data science by taking the Data Science Training.

Also, Edureka has a specially curated Data Science with Python course which helps you gain expertise in Machine Learning Algorithms like K-Means Clustering, Decision Trees, Random Forest, and Naive Bayes. You’ll also learn the concepts of Statistics, Time Series, Text Mining, and an introduction to Deep Learning. New batches for the Data Science course are starting soon!!

Got a question for us? Please mention it in the comments section and we will get back to you.

Original article source at:

#datascientist #jobs #datascience 

Your Guide To Unlocking Top Data Scientist Jobs
Tech2 etc



How do I start working as a freelancer?

To start freelancing while you already have a full-time job, you’ll have to consider the following steps:

How to start freelancing (even when working full-time)?

1. Define your business goals.
2. Find a prospective niche (and stick to it)
3. Identify target clients.
4. Set your freelance rates.
5. Create a website (and portfolio)
6. Find your first client.
7. Expand your network.
8. Balance your full-time job with your part-time freelancing side gigs.

Define your business goals

Before you start freelancing, you’ll have to be honest with yourself, and answer an important question:
* Is freelancing just a side gig? Or do you plan to expand it to a full-time business?

The answer to this question will determine your next steps, considering that you’ll either aim to balance your full-time and freelance work, OR aim to work your way out of your current job to pursue a full-time freelance career.

The answer to this question is your long-term goal. To pursue it, you’ll have to set a number of short-term goals and answer questions such as:

* What niche will you specialize in?
* What services will you offer?
* What amount do you want to be earning on a monthly basis to decide to quit your full-time job (if applicable)?

Find a prospective niche (and stick to it)

No matter whether you’re a graphic designer, copywriter, developer, or anything in between by vocation, it’d be best if you were to specialize in a particular area of work:

For example, if you're a content writer, don't aim to write about any topic under the sun, from Top 3 Ways to Prepare Your Garden for Spring to Taxation Laws in all 50 US States Explained.

Sure, you may start by writing about various topics to find your ideal niche, but eventually you should pick one and stick to it.

After all, "Cryptocurrency content writer" or "Technology content writer" always sounds much better on a CV than "General content writer". Moreover, such titles inspire more confidence in clients, who'll usually be looking for specific rather than general content.

The same is true if you’re a graphic designer:
* consider your level of experience
* your current pool of connections
* your natural inclinations to a particular design niche

Then, make your pick — focus on delivering interface design for apps, creating new custom logos, devising layouts for books, or any other specific design work.

Identify target clients

Just like you shouldn’t aim to cover every niche in your industry, you shouldn’t aim to cater to the needs of the entire industry’s market.

Small businesses, teams, remote workers, or even other freelancers may all require the same type of service you’re looking to offer. But, you’ll need to target one or two types of clients especially.

Say you want to start a blog about everything related to working remotely. There are freelancers, teams, but also entire businesses working remotely, and they can serve as your starting point.

* Think about the age of your desired readers. Perhaps you’re a Millennial, so you can write a blog about working remotely for Millennials?
* Think about the location. Perhaps you want to cover predominantly the US market?
* Think about the education level. Perhaps you want to cover newly independent remote workers, who’re just starting out their careers?
* Think about income. Perhaps you’re looking to write for people with a limited budget, but who want to try digital nomadism?
* Think about gender. Perhaps you want to predominantly target women freelancers?

These are only some questions you should ask yourself, but they reveal a lot. For example, that you can write for fresh-out-of-college female Millennials from the US looking to start and cultivate a remote career while traveling abroad with a limited budget.

Set your freelance rates

Setting your freelance rates always seems like a challenging point, but it’s a lot more straightforward when you list the necessary parameters that help determine your ideal (and realistic) pricing:

* Experience (if any)
* Education level
* Supply and demand for your services
* The prices in your industry
* The average freelance hourly rates in your niche
* Your location

Once you have all this data, you’ll need to calculate your hourly rate based on it — higher education, experience, and demand for your niche will mean you can set higher prices. If you’re based in the US, you’ll likely be able to command higher rates than if you’re based in the Philippines. Of course, your living standards and expenses will be higher, so you’ll also need to command higher rates.

Create a website (and portfolio)

Once you’ve defined your business goals, found a niche, identified your target clients, and set your prices, you’ll want to create an online presence. And, the best way to do so is by creating your own website with a portfolio showcasing your previous work, skills, and expertise. There are plenty of amazing tutorials on YouTube.

Creating a website for free through a website builder like Wix is fine, but you’ll be better off if you were to buy a domain name from a hosting website. You’ll get a unique name for your online presence and a customized email address, so you’ll look much more credible and overall more professional to potential clients.

Regardless of what your industry is, it may be best if you were to choose your own name for the domain, especially when you’re mostly looking to showcase your portfolio. You’ll stand out better, and it’ll later be easier to switch to a different industry (or niche) if you find that you want to.

Once you’ve selected a host and domain name, you can install WordPress to your website, and choose the website’s theme. Then, you can add a landing page describing your services, and prices, maybe even a separate page for a blog where you’ll write about industry-related topics.

Find your first client

Your first client may contact you because of your personal website portfolio, but you should also actively pursue your first gig bearing in mind what employers look for. There are several ways you can do this:

* Get involved in your industry’s community
* Learn how to pitch through email
* Look through freelance job platforms/websites

Expand your network

Once you’ve landed your first client, you’ll need to work on finding recurring clients. Perhaps your first client will become a recurring one. And, perhaps the referral you’ve been given by said first client will inspire others to contact you and provide a steady stream of work.

In any case, it's best that you expand your network — and here's where the famous Pareto principle comes in handy. According to it, cultivating good relationships with the top 20% of your clients will bring you roughly 80% of your new work through their referrals.

To expand your network, you can:

* partake in industry webinars
* attend events
* join Facebook groups, pages and communities
* streamline your LinkedIn network
* send out invites to professionals in your field (or a field that often requires your services)

Work on additional skills

Apart from your core, industry-related freelance skills (i.e., your hard skills), you’ll need to work on some additional skills — your soft skills.

Soft skills are more personality-related: communicativeness and critical thinking are probably the most important traits to pursue, but, you’ll also need to be persistent, good at handling stress, an efficient scheduler, and skilled in time management.

The more you upskill, the higher the rates you can command. Remember: knowledge is priceless.

You’ll also need to be confident, to persuade your potential clients that you possess the skills and experience they’re looking for.


Entering the freelancing business may sound overwhelming and complicated, but it’s actually pretty straightforward, once you follow the right steps.

Take your time, and do what you're passionate about.


#freelance #freelancing #job #jobs #projects #money #earning #skills #dev 

How do I start working as a freelancer?

10 Popular Libraries for Scheduling Jobs in Go

In today's post we will learn about 10 Popular Libraries for Scheduling Jobs in Go.

In Go programming, a scheduler is responsible for distributing jobs in a multiprocessing environment. When the available resources are limited, it is the task of the scheduler to manage the work that needs to be done in the most efficient way. In Go, the runtime scheduler is responsible for scheduling goroutines, which is particularly useful for concurrency. Goroutines are like OS threads, but much lighter weight. However, goroutines still rely on the underlying OS thread model, and the scheduler they run on operates at a much higher level than the OS scheduler. This Go programming tutorial provides a quick look at the concepts behind the Go scheduler.
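The cheapness of goroutines is easy to verify with the standard library alone; the sketch below (independent of the libraries reviewed here, with `spawn` as a hypothetical helper name) launches ten thousand goroutines and waits for them all:

```go
package main

import (
	"fmt"
	"sync"
)

// spawn launches n goroutines and waits for all of them to finish.
// The Go runtime scheduler multiplexes these goroutines onto a small
// pool of OS threads, which is why launching thousands of them is cheap.
func spawn(n int) int {
	var (
		wg    sync.WaitGroup
		mu    sync.Mutex
		count int
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			count++ // each goroutine records that it ran
			mu.Unlock()
		}()
	}
	wg.Wait() // block until the scheduler has run every goroutine
	return count
}

func main() {
	fmt.Println(spawn(10000)) // prints 10000
}
```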

Table of contents:

  • Cdule - Job scheduler library with database support
  • Cheek - A simple crontab like scheduler that aims to offer a KISS approach to job scheduling.
  • Clockwerk - Go package to schedule periodic jobs using a simple, fluent syntax.
  • Cronticker - A ticker implementation to support cron schedules.
  • Dagu - No-code workflow executor. It executes DAGs defined in a simple YAML format.
  • Go-cron - Simple Cron library for go that can execute closures or functions at varying intervals, from once a second to once a year on a specific date and time. Primarily for web applications and long running daemons.
  • Go-quartz - Simple, zero-dependency scheduling library for Go.
  • Gocron - Easy and fluent Go job scheduling. This is an actively maintained fork of jasonlvhit/gocron.
  • Goflow - A workflow orchestrator and scheduler for rapid prototyping of ETL/ML/AI pipelines.
  • Gron - Define time-based tasks using a simple Go API and Gron’s scheduler will run them accordingly.

1 - Cdule: Job scheduler library with database support.

cdule (pronounced "schedule")

Golang based scheduler library with database support. Users could use any database which is supported by

To download the cdule library:

go get

Usage Instructions

In order to schedule jobs with cdule, users need to:

  1. Configure persistence
  2. Implement the cdule.Job interface
  3. Schedule the job with the required cron expression

The job is persisted in the jobs table, its next execution in the schedules table, and its execution history in the job_histories table.


Users need to create a resources/config.yml in their project home directory with the following keys:

  • cduletype
  • dburl
  • cduleconsistency

cduletype specifies whether the configuration is in-memory or database-backed; possible values are DATABASE and MEMORY. dburl is the database connection URL. cduleconsistency is reserved for future use.

config.yml for a PostgreSQL-based configuration

cduletype: DATABASE
dburl: postgres://cduleuser:cdulepassword@localhost:5432/cdule?sslmode=disable
cduleconsistency: AT_MOST_ONCE

config.yml for an SQLite-based in-memory configuration

cduletype: MEMORY
dburl: /Users/dsinghvi/sqlite.db
cduleconsistency: AT_MOST_ONCE

Job Interface Implementation

var testJobData map[string]string

type TestJob struct {
	Job cdule.Job
}

func (m TestJob) Execute(jobData map[string]string) {
	log.Info("In TestJob")
	for k, v := range jobData {
		valNum, err := strconv.Atoi(v)
		if nil == err {
			jobData[k] = strconv.Itoa(valNum + 1)
		} else {
			log.Error(err) // non-numeric values are left untouched
		}
	}
	testJobData = jobData
}

func (m TestJob) JobName() string {
	return "job.TestJob"
}

func (m TestJob) GetJobData() map[string]string {
	return testJobData
}

View on Github

2 - Cheek: A simple crontab like scheduler that aims to offer a KISS approach to job scheduling.

cheek, of course, stands for Crontab-like scHeduler for Effective Execution of tasKs. cheek is a KISS approach to crontab-like job scheduling. It was born out of a (/my?) frustration about the big gap between a lightweight crontab and full-fledged solutions like Airflow.

cheek aims to be a KISS approach to job scheduling; the focus is on simplicity, not necessarily on doing this in the most robust way possible.

Getting started

Fetch the latest version for your system.

You can, for example, fetch it as shown below, make it executable, and run it. Optionally, put cheek on your PATH.

curl -o cheek
chmod +x cheek

Create a schedule specification using the below YAML structure:

# job names below are illustrative; the original example's names were
# lost in extraction. cheek expects a top-level `jobs` map.
tz_location: Europe/Brussels
jobs:
  date_job:
    command: date
    cron: "* * * * *"
    on_success:
      trigger_job:
        - bar
  foo_job:
    command:
      - echo
      - bar
      - foo
  bar:
    command: this fails
    cron: "* * * * *"
    retries: 3

If your command requires arguments, please make sure to pass them as an array like in foo_job.

Note that you can set tz_location if the system time of where you run your service is not to your liking.


The core of cheek consists of a scheduler that uses a schedule specified in a YAML file to trigger jobs when they are due.

You can launch the scheduler via:

cheek run ./path/to/my-schedule.yaml

Check out cheek run --help for configuration options.

View on Github

3 - Clockwerk: Go package to schedule periodic jobs using a simple, fluent syntax.

Job Scheduling Library

clockwerk allows you to schedule periodic jobs using a simple, fluent syntax.


go get
package main

import (
	"fmt"
	"time"

	"github.com/onatm/clockwerk"
)

type DummyJob struct{}

func (d DummyJob) Run() {
	fmt.Println("Every 30 seconds")
}

func main() {
	var job DummyJob
	c := clockwerk.New()
	c.Every(30 * time.Second).Do(job)
	c.Start()
}

View on Github

4 - Cronticker: A ticker implementation to support cron schedules.

Golang ticker that works with Cron scheduling.

Import it

go get
import ""


Create a new ticker:

ticker, err := NewTicker("TZ=America/New_York 0 0 0 ? * SUN")

Check the ticker's channel for the next tick:

tickerTime := <-ticker.C

Reset the ticker to a new cron schedule

err := ticker.Reset("0 0 0 ? * MON,TUE,WED")

Stop the ticker


Use defer ticker.Stop() whenever you can to ensure the cleanup of goroutines.

ticker, _ := NewTicker("@daily")
defer ticker.Stop()

View on Github

5 - Dagu: No-code workflow executor. It executes DAGs defined in a simple YAML format.

Just another cron alternative, but with a Web UI and many more capabilities. It runs DAGs (directed acyclic graphs) defined in a simple YAML format.


  • Install by placing just a single binary file
  • Schedule executions of DAGs with Cron expressions
  • Define dependencies between related jobs and represent them as a single DAG (unit of execution)

How does it work?

dagu is a single command and it uses the local file system to store data. Therefore, no DBMS or cloud service is required. dagu executes DAGs defined in declarative YAML format. Existing programs can be used without any modification.
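dagu's core idea, running each task once all of its dependencies have finished, is classic topological ordering. Here is a stdlib-only sketch of that scheduling rule (an illustration of the concept, not dagu's actual code; `runDAG` is a hypothetical helper):

```go
package main

import "fmt"

// runDAG executes tasks in dependency order using Kahn's algorithm.
// deps maps each task to the tasks it depends on. It returns the
// execution order, or ok=false if the graph contains a cycle.
func runDAG(deps map[string][]string, run func(string)) (order []string, ok bool) {
	indeg := map[string]int{}      // number of unfinished dependencies per task
	dependents := map[string][]string{} // reverse edges: dep -> tasks waiting on it
	for task, ds := range deps {
		if _, seen := indeg[task]; !seen {
			indeg[task] = 0
		}
		for _, d := range ds {
			indeg[task]++
			dependents[d] = append(dependents[d], task)
			if _, seen := indeg[d]; !seen {
				indeg[d] = 0
			}
		}
	}
	queue := []string{}
	for t, n := range indeg {
		if n == 0 {
			queue = append(queue, t) // tasks with no dependencies are ready now
		}
	}
	for len(queue) > 0 {
		t := queue[0]
		queue = queue[1:]
		run(t)
		order = append(order, t)
		for _, d := range dependents[t] {
			if indeg[d]--; indeg[d] == 0 {
				queue = append(queue, d) // last dependency finished: schedule it
			}
		}
	}
	return order, len(order) == len(indeg)
}

func main() {
	deps := map[string][]string{"extract": nil, "transform": {"extract"}, "load": {"transform"}}
	order, _ := runDAG(deps, func(t string) { fmt.Println("running", t) })
	fmt.Println(order) // [extract transform load]
}
```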

Install dagu

You can quickly install dagu command and try it out.

via Homebrew

brew install yohamta/tap/dagu

Upgrade to the latest version:

brew upgrade yohamta/tap/dagu

via Bash script

curl -L | bash

via GitHub Release Page

Download the latest binary from the Releases page and place it in your $PATH (e.g. /usr/local/bin).

️Quick start

1. Launch the Web UI

Start the server with dagu server and open the Web UI in your browser to explore it.

2. Create a new DAG

Create a DAG by clicking the New DAG button on the top page of the web UI. Input example in the dialog.

Note: DAG (YAML) files will be placed in ~/.dagu/dags by default. See Admin Configuration for more details.

3. Edit the DAG

Go to the SPEC Tab and hit the Edit button. Copy & Paste this example YAML and click the Save button.

4. Execute the DAG

You can execute the example by pressing the Start button.

Note: Leave the parameter field in the dialog blank and press OK.

Command Line User Interface

  • dagu start [--params=<params>] <file> - Runs the DAG
  • dagu status <file> - Displays the current status of the DAG
  • dagu retry --req=<request-id> <file> - Re-runs the specified DAG run
  • dagu stop <file> - Stops the DAG execution by sending TERM signals
  • dagu restart <file> - Restarts the currently running DAG
  • dagu dry [--params=<params>] <file> - Dry-runs the DAG
  • dagu server [--host=<host>] [--port=<port>] [--dags=<path/to/the DAGs directory>] - Starts the web server for web UI
  • dagu scheduler [--dags=<path/to/the DAGs directory>] - Starts the scheduler process
  • dagu version - Shows the current binary version

The --config=<config> option is available to all commands. It lets you specify a different dagu configuration per command, which enables you to manage multiple dagu processes on a single machine. See Admin Configuration for more details.

For example:

dagu server --config=~/.dagu/dev.yaml
dagu scheduler --config=~/.dagu/dev.yaml

View on Github

6 - Go-cron: Simple Cron library for go that can execute closures or functions at varying intervals, from once a second to once a year on a specific date and time. Primarily for web applications and long running daemons.

This is a simple library to handle scheduled tasks. Tasks can be run with a minimum delay of one second (a granularity cron itself isn't designed for). Comparisons are fast and efficient and take place in a goroutine; matched jobs are also executed in goroutines.

For instance, you can use the following in your web application that uses MySQL:

func init() {
  cron.NewWeeklyJob(1, 23, 59, 59, func(t time.Time) {
    _, err := conn.Query("OPTIMIZE TABLE mytable;")
    if err != nil {
      println(err)
    }
  })
}
View on Github

7 - Go-quartz: Simple, zero-dependency scheduling library for Go.

A minimalistic and zero-dependency scheduling library for Go.


Inspired by the Quartz Java scheduler.

Library building blocks

Scheduler interface

type Scheduler interface {
	// Start starts the scheduler.
	Start()
	// IsStarted determines whether the scheduler has been started.
	IsStarted() bool
	// ScheduleJob schedules a job using a specified trigger.
	ScheduleJob(job Job, trigger Trigger) error
	// GetJobKeys returns the keys of all of the scheduled jobs.
	GetJobKeys() []int
	// GetScheduledJob returns the scheduled job with the specified key.
	GetScheduledJob(key int) (*ScheduledJob, error)
	// DeleteJob removes the job with the specified key from the Scheduler's execution queue.
	DeleteJob(key int) error
	// Clear removes all of the scheduled jobs.
	Clear()
	// Stop shutdowns the scheduler.
	Stop()
}

Implemented Schedulers

  • StdScheduler

Trigger interface

type Trigger interface {
	// NextFireTime returns the next time at which the Trigger is scheduled to fire.
	NextFireTime(prev int64) (int64, error)
	// Description returns the description of the Trigger.
	Description() string
}

Implemented Triggers

  • CronTrigger
  • SimpleTrigger
  • RunOnceTrigger

Job interface. Any type that implements it can be scheduled.

type Job interface {
	// Execute is called by a Scheduler when the Trigger associated with this job fires.
	Execute()
	// Description returns the description of the Job.
	Description() string
	// Key returns the unique key for the Job.
	Key() int
}

Implemented Jobs

  • ShellJob
  • CurlJob

View on Github

8 - Gocron: Easy and fluent Go job scheduling. This is an actively maintained fork of jasonlvhit/gocron.

gocron is a job scheduling package which lets you run Go functions at pre-determined intervals using a simple, human-friendly syntax.

gocron is a Golang scheduler implementation similar to the Ruby module clockwork and the Python job scheduling package schedule.

See also these two great articles that were used for design input:

If you want to chat, you can find us at Slack!


  • Scheduler: The scheduler tracks all the jobs assigned to it and makes sure they are passed to the executor when ready to be run. The scheduler is able to manage overall aspects of job behavior like limiting how many jobs are running at one time.
  • Job: The job is simply aware of the task (go function) it's provided and is therefore only able to perform actions related to that task, like preventing itself from overrunning a previous run of the task that is taking a long time.
  • Executor: The executor, as its name suggests, is simply responsible for calling the task (go function) that the job hands to it when sent by the scheduler.
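The scheduler/job/executor split can be sketched with a channel between the two halves (an illustration of the architecture described above, not gocron's internals; `task`, `executor`, and `dispatch` are hypothetical names):

```go
package main

import (
	"fmt"
	"sync"
)

// task is what a Job hands to the executor: just a function to call.
type task func()

// executor drains the queue and calls each task, mirroring the split
// described above: it only runs what the scheduler sends it.
func executor(queue <-chan task, wg *sync.WaitGroup) {
	for t := range queue {
		t()
		wg.Done()
	}
}

// dispatch plays the scheduler's role: it decides the tasks are ready,
// hands them to the executor, and waits for them to finish.
func dispatch(tasks []task) {
	queue := make(chan task)
	var wg sync.WaitGroup
	go executor(queue, &wg)
	for _, t := range tasks {
		wg.Add(1)
		queue <- t
	}
	wg.Wait()
	close(queue)
}

func main() {
	var results []int
	dispatch([]task{
		func() { results = append(results, 1) },
		func() { results = append(results, 4) },
		func() { results = append(results, 9) },
	})
	fmt.Println(results) // [1 4 9]
}
```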


s := gocron.NewScheduler(time.UTC)

s.Every(5).Seconds().Do(func(){ ... })

// strings parse to duration
s.Every("5m").Do(func(){ ... })

s.Every(5).Days().Do(func(){ ... })

s.Every(1).Month(1, 2, 3).Do(func(){ ... })

// set time
s.Every(1).Day().At("10:30").Do(func(){ ... })

// set multiple times
s.Every(1).Day().At("10:30;08:00").Do(func(){ ... })

s.Every(1).Day().At("10:30").At("08:00").Do(func(){ ... })

// Schedule each last day of the month
s.Every(1).MonthLastDay().Do(func(){ ... })

// Or each last day of every other month
s.Every(2).MonthLastDay().Do(func(){ ... })

// cron expressions supported
s.Cron("*/1 * * * *").Do(task) // every minute

// you can start running the scheduler in two different ways:
// starts the scheduler asynchronously
s.StartAsync()
// starts the scheduler and blocks current execution path
s.StartBlocking()

For more examples, take a look in our go docs

View on Github

9 - Goflow: A workflow orchestrator and scheduler for rapid prototyping of ETL/ML/AI pipelines.

A workflow/DAG orchestrator written in Go for rapid prototyping of ETL/ML/AI pipelines. Goflow comes complete with a web UI for inspecting and triggering jobs.

Quick start

With Docker

docker run -p 8181:8181

Browse to localhost:8181 to explore the UI.


Without Docker

In a fresh project directory:

go mod init # create a new module
go get # install dependencies

Create a file main.go with contents:

package main

import ""

func main() {
        options := goflow.Options{
                AssetBasePath: "assets/",
                StreamJobRuns: true,
                ShowExamples:  true,
        }
        gf := goflow.New(options)
        gf.Run(":8181") // serve the UI on the port used elsewhere in this article
}

Download the front-end from the release page, untar it, and move it to the location specified in goflow.Options.AssetBasePath. Now run the application with go run main.go and see it in the browser at localhost:8181.

Use case

Goflow was built as a simple replacement for Apache Airflow to manage some small data pipeline projects. Airflow started to feel too heavyweight for these projects where all the computation was offloaded to independent services, but there was still a need for basic orchestration, concurrency, retries, visibility etc.

Goflow prioritizes ease of deployment over features and scalability. If you need distributed workers, backfilling over time slices, a durable database of job runs, etc, then Goflow is not for you. On the other hand, if you want to rapidly prototype some pipelines, then Goflow might be a good fit.

Concepts and features

  • Job: A Goflow workflow is called a Job. Jobs can be scheduled using cron syntax.
  • Task: Each job consists of one or more tasks organized into a dependency graph. A task can be run under certain conditions; by default, a task runs when all of its dependencies finish successfully.
  • Concurrency: Jobs and tasks execute concurrently.
  • Operator: An Operator defines the work done by a Task. Goflow comes with a handful of basic operators, and implementing your own Operator is straightforward.
  • Retries: You can allow a Task a given number of retry attempts. Goflow comes with two retry strategies, ConstantDelay and ExponentialBackoff.
  • Database: Goflow supports two database types, in-memory and BoltDB. BoltDB will persist your history of job runs, whereas in-memory means the history will be lost each time the Goflow server is stopped. The default is BoltDB.
  • Streaming: Goflow uses server-sent events to stream the status of jobs and tasks to the UI in real time.

View on Github

10 - Gron: Define time-based tasks using a simple Go API and Gron’s scheduler will run them accordingly.

Gron provides a clear syntax for writing and deploying cron jobs.


  • Minimalist APIs for scheduling jobs.
  • Thread safety.
  • Customizable Job Type.
  • Customizable Schedule.


$ go get


Create schedule.go

package main

import (
	"fmt"
	"time"

	"github.com/roylee0704/gron"
)

func main() {
	c := gron.New()
	c.AddFunc(gron.Every(1*time.Hour), func() {
		fmt.Println("runs every hour.")
	})
	c.Start()
}
Schedule Parameters

All scheduling is done in the machine's local time zone (as provided by the Go time package).

Setup basic periodic schedule with gron.Every().


Day and Week units are also supported by importing gron/xtime:

import ""

gron.Every(1 * xtime.Day)
gron.Every(1 * xtime.Week)

Schedule runs at a specific time of day with .At("hh:mm"):

gron.Every(30 * xtime.Day).At("00:00")
gron.Every(1 * xtime.Week).At("23:59")

View on Github

Thank you for following this article.

Related videos:

Go scheduler: Implementing language with lightweight concurrency

#go #golang #scheduling #jobs 

10 Popular Libraries for Scheduling Jobs in Go

Rufus Scheduler: Job Scheduler for Ruby (at, Cron, in and Every Jobs)


Job scheduler for Ruby (at, cron, in and every jobs).

It uses threads.

Note: maybe you are looking for the README of rufus-scheduler 2.x? (especially if you're using Dashing, which is stuck on rufus-scheduler 2.0.24)


# quickstart.rb

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.in '3s' do
  puts 'Hello... Rufus'
end

scheduler.join
  # let the current thread join the scheduler thread
  # (please note that this join should be removed when scheduling
  # in a web application (Rails and friends) initializer)

(run with ruby quickstart.rb)

Various forms of scheduling are supported:

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

# ...

scheduler.in '10d' do
  # do something in 10 days
end

scheduler.at '2030/12/12 23:30:00' do
  # do something at a given point in time
end

scheduler.every '3h' do
  # do something every 3 hours
end
scheduler.every '3h10m' do
  # do something every 3 hours and 10 minutes
end

scheduler.cron '5 0 * * *' do
  # do something every day, five minutes after midnight
  # (see "man 5 crontab" in your terminal)
end

# ...

Rufus-scheduler uses fugit for parsing time strings, and et-orbi for pairing times with tzinfo timezones.


Rufus-scheduler (out of the box) is an in-process, in-memory scheduler. It uses threads.

It does not persist your schedules. When the process is gone and the scheduler instance with it, the schedules are gone.

A rufus-scheduler instance will go on scheduling while it is present among the objects in a Ruby process. To make it stop scheduling you have to call its #shutdown method.

related and similar gems

  • Whenever - let cron call back your Ruby code, trusted and reliable cron drives your schedule
  • ruby-clock - a clock process / job scheduler for Ruby
  • Clockwork - rufus-scheduler inspired gem
  • Crono - an in-Rails cron scheduler
  • PerfectSched - highly available distributed cron built on Sequel and more

(please note: rufus-scheduler is not a cron replacement)

note about the 3.0 line

It's a complete rewrite of rufus-scheduler.

There is no EventMachine-based scheduler anymore.

I don't know what this Ruby thing is, where are my Rails?

I'll drive you right to the tracks.

notable changes:

  • As said, no more EventMachine-based scheduler
  • scheduler.every('100') { ... } will schedule every 100 seconds (previously, it would have been 0.1s). This aligns rufus-scheduler with Ruby's sleep(100)
  • The scheduler isn't catching the whole of Exception anymore, only StandardError
  • The error_handler is #on_error (instead of #on_exception), by default it now prints the details of the error to $stderr (used to be $stdout)
  • Rufus::Scheduler::TimeOutError renamed to Rufus::Scheduler::TimeoutError
  • Introduction of "interval" jobs. Whereas "every" jobs are like "every 10 minutes, do this", interval jobs are like "do that, then wait for 10 minutes, then do that again, and so on"
  • Introduction of a lockfile: true/filename mechanism to prevent multiple schedulers from executing
  • "discard_past" is on by default. If the scheduler (its host) sleeps for 1 hour and a every '10m' job is on, it will trigger once at wakeup, not 6 times (discard_past was false by default in rufus-scheduler 2.x). No intention to re-introduce discard_past: false in 3.0 for now.
  • Introduction of Scheduler #on_pre_trigger and #on_post_trigger callback points

getting help

So you need help. People can help you, but first help them help you, and don't waste their time. Provide a complete description of the issue. If it works on A but not on B and others have to ask you: "so what is different between A and B" you are wasting everyone's time.

"hello", "please" and "thanks" are not swear words.

Go read how to report bugs effectively, twice.


on Gitter

You can find help via chat on Gitter; it's the combined fugit, et-orbi, and rufus-scheduler chat room.

Please be courteous.


Yes, issues can be reported in rufus-scheduler issues, I'd actually prefer bugs in there. If there is nothing wrong with rufus-scheduler, a Stack Overflow question is better.



Rufus-scheduler supports five kinds of jobs: in, at, every, interval, and cron jobs.

Most of the rufus-scheduler examples show block scheduling, but it's also OK to schedule handler instances or handler classes.

in, at, every, interval, cron

In and at jobs trigger once.

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.in '10d' do
  puts "10 days reminder for review X!"
end

scheduler.at '2014/12/24 2000' do
  puts "merry xmas!"
end

In jobs are scheduled with a time interval; they trigger after that time has elapsed. At jobs are scheduled with a point in time; they trigger when that point in time is reached (better to choose a point in the future).

Every, interval and cron jobs trigger repeatedly.

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every '3h' do
  puts "change the oil filter!"
end

scheduler.interval '2h' do
  puts "thinking..."
  puts sleep(rand * 1000)
  puts "thought."
end

scheduler.cron '00 09 * * *' do
  puts "it's 9am! good morning!"
end

Every jobs try hard to trigger following the frequency they were scheduled with.

Interval jobs trigger, execute and then trigger again after the interval elapsed. (every jobs time between trigger times, interval jobs time between trigger termination and the next trigger start).

Cron jobs are based on the venerable cron utility (man 5 crontab). They trigger following a pattern given in (almost) the same language cron uses.


#schedule_x vs #x

schedule_in, schedule_at, schedule_cron, etc will return the new Job instance.

in, at, cron will return the new Job instance's id (a String).

job_id =
  scheduler.in '10d' do
    # ...
  end

job = scheduler.job(job_id)

# versus

job =
  scheduler.schedule_in '10d' do
    # ...
  end

# also

job =
  scheduler.in '10d', job: true do
    # ...
  end

#schedule and #repeat

Sometimes it pays to be less verbose.

The #schedule method schedules an at, in or cron job. It just decides based on its input. It returns the Job instance.

scheduler.schedule '10d' do; end.class
  # => Rufus::Scheduler::InJob

scheduler.schedule '2013/12/12 12:30' do; end.class
  # => Rufus::Scheduler::AtJob

scheduler.schedule '* * * * *' do; end.class
  # => Rufus::Scheduler::CronJob

The #repeat method schedules and returns an EveryJob or a CronJob.

scheduler.repeat '10d' do; end.class
  # => Rufus::Scheduler::EveryJob

scheduler.repeat '* * * * *' do; end.class
  # => Rufus::Scheduler::CronJob

(Yes, no combination here gives back an IntervalJob).

schedule blocks arguments (job, time)

A schedule block may be given 0, 1 or 2 arguments.

The first argument is "job", it's simply the Job instance involved. It might be useful if the job is to be unscheduled for some reason.

scheduler.every '10m' do |job|

  status = determine_pie_status

  if status == 'burnt' || status == 'cooked'
    stop_oven
    takeout_pie
    job.unschedule
  end
end

The second argument is "time", it's the time when the job got cleared for triggering (not Time.now).

Note that time is the time when the job got cleared for triggering. If there are mutexes involved, now = mutex_wait_time + time...

"every" jobs and changing the next_time in-flight

It's OK to change the next_time of an every job in-flight:

scheduler.every '10m' do |job|

  # ...

  status = determine_pie_status

  job.next_time = Time.now + 30 * 60 if status == 'burnt'
    # if burnt, wait 30 minutes for the oven to cool a bit
end

It should work as well with cron jobs, not so with interval jobs whose next_time is computed after their block ends its current run.

scheduling handler instances

It's OK to pass any object, as long as it responds to #call(), when scheduling:

class Handler
  def self.call(job, time)
    p "- Handler called for #{job.id} at #{time}"
  end
end

scheduler.in '10d', Handler

# or

class OtherHandler
  def initialize(name)
    @name = name
  end
  def call(job, time)
    p "* #{time} - Handler #{@name.inspect} called for #{job.id}"
  end
end

oh = OtherHandler.new('Doe')

scheduler.every '10m', oh
scheduler.in '3d5m', oh

The call method must accept 2 (job, time), 1 (job) or 0 arguments.

Note that time is the time when the job got cleared for triggering. If there are mutexes involved, now = mutex_wait_time + time...

scheduling handler classes

One can pass a handler class to rufus-scheduler when scheduling. Rufus will instantiate it and that instance will be available via job#handler.

class MyHandler
  attr_reader :count
  def initialize
    @count = 0
  end
  def call(job)
    @count += 1
    puts ". #{self.class} called at #{Time.now} (#{@count})"
  end
end

job = scheduler.schedule_every '35m', MyHandler

job.handler
  # => #<MyHandler:0x000000021034f0>
job.handler.count
  # => 0

If you want to keep that "block feeling":

job_id =
  scheduler.every '10m', Class.new do
    def call(job)
      puts ". hello #{self.inspect} at #{Time.now}"
    end
  end

pause and resume the scheduler

The scheduler can be paused via the #pause and #resume methods. One can determine if the scheduler is currently paused by calling #paused?.

While paused, the scheduler still accepts schedules, but no schedule will get triggered as long as #resume isn't called.

job options

name: string

Sets the name of the job.

scheduler.cron '*/15 8 * * *', name: 'Robert' do |job|
  puts "A, it's #{Time.now} and my name is #{job.name}"
end

job1 =
  scheduler.schedule_cron '*/30 9 * * *', n: 'temporary' do |job|
    puts "B, it's #{Time.now} and my name is #{job.name}"
  end

# ...

job1.name = 'Beowulf'

blocking: true

By default, jobs are triggered in their own, new threads. When blocking: true, the job is triggered in the scheduler thread (a new thread is not created). Yes, while a blocking job is running, the scheduler is not scheduling.

overlap: false

Since, by default, jobs are triggered in their own new threads, job instances might overlap. For example, a job that takes 10 minutes and is scheduled every 7 minutes will have overlaps.

To prevent overlap, one can set overlap: false. Such a job will not trigger if one of its instances is already running.

The :overlap option is considered before the :mutex option when the scheduler is reviewing jobs for triggering.

mutex: mutex_instance / mutex_name / array of mutexes

When a job with a mutex triggers, the job's block is executed with the mutex around it, preventing other jobs with the same mutex from entering (it makes the other jobs wait until it exits the mutex).

This is different from overlap: false, which is, first, limited to instances of the same job, and, second, doesn't make the incoming job instance block/wait but give up.
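The two behaviours mirror Ruby's own Mutex semantics; here is a plain-thread sketch (an illustration, not rufus-scheduler code): #synchronize makes the second run wait, while a failed #try_lock corresponds to the overlap: false give-up.

```ruby
mutex = Mutex.new
log = []

# :mutex semantics: the second "job" WAITS for the first to exit the mutex
a = Thread.new { mutex.synchronize { sleep 0.2; log << :first } }
sleep 0.05
b = Thread.new { mutex.synchronize { log << :second } }
[ a, b ].each(&:join)

# overlap: false semantics: the second run GIVES UP instead of waiting
c = Thread.new { mutex.synchronize { sleep 0.2 } }
sleep 0.05
skipped = ! mutex.try_lock  # lock is held, so this run would be skipped
mutex.unlock unless skipped
c.join

p log      # => [:first, :second]
p skipped  # => true
```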

:mutex accepts a mutex instance or a mutex name (String). It also accepts an array of mutex names / mutex instances. It allows for complex relations between jobs.

Array of mutexes: original idea and implementation by Rainux Luo

Note: creating lots of different mutexes is OK. Rufus-scheduler will place them in its Scheduler#mutexes hash... And they won't get garbage collected.

The :overlap option is considered before the :mutex option when the scheduler is reviewing jobs for triggering.

timeout: duration or point in time

It's OK to specify a timeout when scheduling some work. After the time specified, it gets interrupted via a Rufus::Scheduler::TimeoutError.

scheduler.in '10d', timeout: '1d' do
  begin
    # ... do something
  rescue Rufus::Scheduler::TimeoutError
    # ... that something got interrupted after 1 day
  end
end

The :timeout option accepts either a duration (like "1d" or "2w3d") or a point in time (like "2013/12/12 12:00").

:first_at, :first_in, :first, :first_time

This option is for repeat jobs (cron / every) only.

It's used to specify the first time after which the repeat job should trigger for the first time.

In the case of an "every" job, this will be the first time (modulo the scheduler frequency) the job triggers. For a "cron" job as well, the :first will point to the first time the job has to trigger, the following trigger times are then determined by the cron string.

scheduler.every '2d', first_at: Time.now + 10 * 3600 do
  # ... every two days, but start in 10 hours
end

scheduler.every '2d', first_in: '10h' do
  # ... every two days, but start in 10 hours
end

scheduler.cron '00 14 * * *', first_in: '3d' do
  # ... every day at 14h00, but start after 3 * 24 hours
end

:first, :first_at and :first_in all accept a point in time or a duration (number or time string). Use the symbol you think makes your schedule more readable.

Note: it's OK to change the first_at (a Time instance) directly:

job.first_at = Time.now + 10
job.first_at = Rufus::Scheduler.parse('2029-12-12')

The :first option (in all its flavours) accepts a :now or :immediately value. That schedules the first occurrence for immediate triggering. Consider:

require 'rufus-scheduler'

s = Rufus::Scheduler.new

n = Time.now; p [ :scheduled_at, n, n.to_f ]

s.every '3s', first: :now do
  n = Time.now; p [ :in, n, n.to_f ]
end


that'll output something like:

[:scheduled_at, 2014-01-22 22:21:21 +0900, 1390396881.344438]
[:in, 2014-01-22 22:21:21 +0900, 1390396881.6453865]
[:in, 2014-01-22 22:21:24 +0900, 1390396884.648807]
[:in, 2014-01-22 22:21:27 +0900, 1390396887.651686]
[:in, 2014-01-22 22:21:30 +0900, 1390396890.6571937]

:last_at, :last_in, :last

This option is for repeat jobs (cron / every) only.

It indicates the point in time after which the job should unschedule itself.

scheduler.cron '5 23 * * *', last_in: '10d' do
  # ... do something every evening at 23:05 for 10 days
end

scheduler.every '10m', last_at: Time.now + 10 * 3600 do
  # ... do something every 10 minutes for 10 hours
end

scheduler.every '10m', last_in: 10 * 3600 do
  # ... do something every 10 minutes for 10 hours
end

:last, :last_at and :last_in all accept a point in time or a duration (number or time string). Use the symbol you think makes your schedule more readable.

Note: it's OK to change the last_at (nil or a Time instance) directly:

job.last_at = nil
  # remove the "last" bound

job.last_at = Rufus::Scheduler.parse('2029-12-12')
  # set the last bound

times: nb of times (before auto-unscheduling)

One can tell how many times a repeat job (CronJob or EveryJob) is to execute before unscheduling by itself.

scheduler.every '2d', times: 10 do
  # ... do something every two days, but not more than 10 times
end

scheduler.cron '0 23 * * *', times: 31 do
  # ... do something every day at 23:00 but do it no more than 31 times
end

It's OK to assign nil to :times to make sure the repeat job is not limited. It's useful when the :times is determined at scheduling time.

scheduler.cron '0 23 * * *', times: (nolimit ? nil : 10) do
  # ...
end

The value set by :times is accessible in the job. It can be modified anytime.

job =
  scheduler.cron '0 23 * * *' do
    # ...
  end

# later on...

job.times = 10
  # 10 days and it will be over

Job methods

When calling a schedule method, the id (String) of the job is returned. Longer schedule methods return Job instances directly. Calling the shorter schedule methods with the job: true also returns Job instances instead of Job ids (Strings).

  require 'rufus-scheduler'

  scheduler = Rufus::Scheduler.new

  job_id =
    scheduler.in '10d' do
      # ...
    end

  job =
    scheduler.schedule_in '1w' do
      # ...
    end

  job =
    scheduler.in '1w', job: true do
      # ...
    end

Those Job instances have a few interesting methods / properties:

id, job_id

Returns the job id.

job = scheduler.schedule_in('10d') do; end

job.id
  # => "in_1374072446.8923042_0.0_0"

scheduler
Returns the scheduler instance itself.

opts
Returns the options passed at the Job creation.

job = scheduler.schedule_in('10d', tag: 'hello') do; end

job.opts
  # => { :tag => 'hello' }

original
Returns the original schedule.

job = scheduler.schedule_in('10d', tag: 'hello') do; end

job.original
  # => '10d'

callable, handler

callable() returns the scheduled block (or the call method of the callable object passed in lieu of a block)

handler() returns nil if a block was scheduled and the instance scheduled otherwise.

# when passing a block

job =
  scheduler.schedule_in('10d') do
    # ...
  end

job.handler
  # => nil
job.callable
  # => #<Proc:0x00000001dc6f58@/home/jmettraux/whatever.rb:115>


# when passing something else than a block

class MyHandler
  attr_reader :counter
  def initialize
    @counter = 0
  end
  def call(job, time)
    @counter = @counter + 1
  end
end

job = scheduler.schedule_in('10d', MyHandler.new)

job.callable
  # => #<Method: MyHandler#call>
job.handler
  # => #<MyHandler:0x0000000163ae88 @counter=0>

source_location
Added to rufus-scheduler 3.8.0.

Returns the array [ 'path/to/file.rb', 123 ] like Proc#source_location does.

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

job = scheduler.schedule_every('2h') { p Time.now }

p job.source_location
  # ==> [ '/home/jmettraux/rufus-scheduler/test.rb', 6 ]

scheduled_at
Returns the Time instance when the job got created.

job = scheduler.schedule_in('10d', tag: 'hello') do; end

job.scheduled_at
  # => 2013-07-17 23:48:54 +0900

last_time
Returns the last time the job triggered (is usually nil for AtJob and InJob).

job = scheduler.schedule_every('10s') do; end

job.scheduled_at
  # => 2013-07-17 23:48:54 +0900
job.last_time
  # => nil (since we've just scheduled it)

# after 10 seconds

job.scheduled_at
  # => 2013-07-17 23:48:54 +0900 (same as above)
job.last_time
  # => 2013-07-17 23:49:04 +0900

previous_time
Returns the previous #next_time

scheduler.every('10s') do |job|
  puts "job scheduled for #{job.previous_time} triggered at #{Time.now}"
  puts "next time will be around #{job.next_time}"
  puts "."
end

last_work_time, mean_work_time

The job keeps track of how long its work was in the last_work_time attribute. For a one time job (in, at) it's probably not very useful.

The attribute mean_work_time contains a computed mean work time. It's recomputed after every run (if it's a repeat job).
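One straightforward way to maintain such a running mean, sketched in plain Ruby (an illustration of the idea, not rufus-scheduler's actual code):

```ruby
# After the n-th run taking x seconds, fold x into the previous mean
# without keeping every sample around.

def updated_mean(previous_mean, n, x)
  (previous_mean * (n - 1) + x) / n.to_f
end

mean = 0.0
[ 2.0, 4.0, 6.0 ].each_with_index do |work_time, i|
  mean = updated_mean(mean, i + 1, work_time)
end

p mean # => 4.0
```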

next_times(n)
Returns an array of EtOrbi::EoTime instances (Time instances with a designated time zone), listing the n next occurrences for this job.

Please note that for "interval" jobs, a mean work time is computed each time and it's used by this #next_times(n) method to approximate the next times beyond the immediate next time.

unschedule

Unschedule the job, preventing it from firing again and removing it from the schedule. This doesn't prevent a thread currently running the job from going on until its end.

threads
Returns the list of threads currently "hosting" runs of this Job instance.

kill
Interrupts all the work threads currently running for this job instance. They discard their work and are free for their next run (of whatever job).

Note: this doesn't unschedule the Job instance.

Note: if the job is pooled for another run, a free work thread will probably pick up that next run and the job will appear as running again. You'd have to unschedule and kill to make sure the job doesn't run again.

running?
Returns true if there is at least one running Thread hosting a run of this Job instance.

scheduled?
Returns true if the job is scheduled (is due to trigger). For repeat jobs it should return true until the job gets unscheduled. "at" and "in" jobs will respond with false as soon as they start running (execution triggered).

pause, resume, paused?, paused_at

These four methods are only available to CronJob, EveryJob and IntervalJob instances. One can pause or resume such jobs thanks to these methods.

job =
  scheduler.schedule_every('10s') do
    # ...
  end

job.pause
  # => 2013-07-20 01:22:22 +0900
job.paused?
  # => true
job.paused_at
  # => 2013-07-20 01:22:22 +0900

job.resume
  # => nil

tags
Returns the list of tags attached to this Job instance.

By default, returns an empty array.

job = scheduler.schedule_in('10d') do; end
job.tags
  # => []

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.tags
  # => [ 'hello' ]

[]=, [], key?, has_key?, keys, values, and entries

Threads have thread-local variables, similarly Rufus-scheduler jobs have job-local variables. Those are more like a dict with thread-safe access.

job =
  @scheduler.schedule_every '1s' do |job|
    job[:timestamp] = Time.now.to_f
    job[:counter] ||= 0
    job[:counter] += 1
  end

sleep 3.6

job[:counter]
  # => 3

job.key?(:timestamp) # => true
job.has_key?(:timestamp) # => true
job.keys # => [ :timestamp, :counter ]

Locals can be set at schedule time:

job0 =
  @scheduler.schedule_cron '*/15 12 * * *', locals: { a: 0 } do
    # ...
  end
job1 =
  @scheduler.schedule_cron '*/15 13 * * *', l: { a: 1 } do
    # ...
  end

One can fetch the Hash directly with Job#locals. Of course, direct manipulation is not thread-safe.

job.locals.entries.each do |k, v|
  p "#{k}: #{v}"
end
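The thread-safe dict idea itself can be sketched in plain Ruby with a Hash guarded by a Mutex (an illustration, not rufus-scheduler's implementation):

```ruby
# Minimal thread-safe key/value store: every access goes through a Mutex.
class Locals
  def initialize(h = {})
    @h = h
    @m = Mutex.new
  end
  def [](k); @m.synchronize { @h[k] }; end
  def []=(k, v); @m.synchronize { @h[k] = v }; end
  def key?(k); @m.synchronize { @h.key?(k) }; end
  def keys; @m.synchronize { @h.keys }; end
  def update(k); @m.synchronize { @h[k] = yield @h[k] }; end
    # read-modify-write under the same lock, so increments aren't lost
end

ls = Locals.new(counter: 0)
threads = 10.times.map do
  Thread.new { 100.times { ls.update(:counter) { |v| v + 1 } } }
end
threads.each(&:join)

p ls[:counter] # => 1000
```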

call
Job instances have a #call method. It simply calls the scheduled block or callable immediately.

job =
  @scheduler.schedule_every '10m' do |job|
    # ...
  end

job.call
  # calls the job (the block) right now, in the calling thread

Warning: the Scheduler#on_error handler is not involved. Error handling is the responsibility of the caller.

If the call has to be rescued by the error handler of the scheduler, call(true) might help:

require 'rufus-scheduler'

s =

def s.on_error(job, err)
  if job
    p [ 'error in scheduled job', job.class, job.original, err.message ]
  else
    p [ 'error while scheduling', err.message ]
  end
rescue
  p $!
end

job =
  s.schedule_in('1d') do
    fail 'again'
  end

job.call(true)
  # true lets the error_handler deal with error in the job call

AtJob and InJob methods

time
Returns when the job will trigger (hopefully).

next_time
An alias for time.

EveryJob, IntervalJob and CronJob methods

next_time
Returns the next time the job will trigger (hopefully).

count
Returns how many times the job fired.

EveryJob methods

frequency
It returns the scheduling frequency. For a job scheduled "every 20s", it's 20.

It's used to determine if the job frequency is higher than the scheduler frequency (it raises an ArgumentError if that is the case).

IntervalJob methods

interval
Returns the interval scheduled between each execution of the job.

Every jobs use a time duration between each start of their execution, while interval jobs use a time duration between the end of an execution and the start of the next.

CronJob methods

brute_frequency
An expensive method to run, it's brute. It caches its results. By default it runs for 2017 (a non leap-year).

  require 'rufus-scheduler'

  Rufus::Scheduler.parse('* * * * *').brute_frequency
    # => #<Fugit::Cron::Frequency:0x00007fdf4520c5e8
    #      @span=31536000.0, @delta_min=60, @delta_max=60,
    #      @occurrences=525600, @span_years=1.0, @yearly_occurrences=525600.0>
      # Occurs 525600 times in a span of 1 year (2017) and 1 day.
      # There are at least 60 seconds between triggers and at most 60 seconds.

  Rufus::Scheduler.parse('0 12 * * *').brute_frequency
    # => #<Fugit::Cron::Frequency:0x00007fdf451ec6d0
    #      @span=31536000.0, @delta_min=86400, @delta_max=86400,
    #      @occurrences=365, @span_years=1.0, @yearly_occurrences=365.0>
  Rufus::Scheduler.parse('0 12 * * *').brute_frequency.to_debug_s
    # => "dmin: 1D, dmax: 1D, ocs: 365, spn: 52W1D, spnys: 1, yocs: 365"
      # 365 occurrences, at most 1 day between each, at least 1 day.

The CronJob#frequency method found in rufus-scheduler < 3.5 has been retired.

looking up jobs

Scheduler#job(job_id)
The scheduler #job(job_id) method can be used to look up Job instances.

  require 'rufus-scheduler'

  scheduler = Rufus::Scheduler.new

  job_id =
    scheduler.in '10d' do
      # ...
    end

  # later on...

  job = scheduler.job(job_id)

Scheduler #jobs #at_jobs #in_jobs #every_jobs #interval_jobs and #cron_jobs

Are methods for looking up lists of scheduled Job instances.

Here is an example:

  # let's unschedule all the at jobs

  scheduler.at_jobs.each(&:unschedule)

Scheduler#jobs(tag: / tags: x)

When scheduling a job, one can specify one or more tags attached to the job. These can be used to look up the job later on.

  scheduler.in '10d', tag: 'main_process' do
    # ...
  end

  scheduler.in '10d', tags: [ 'main_process', 'side_dish' ] do
    # ...
  end

  # ...

  jobs = scheduler.jobs(tag: 'main_process')
    # find all the jobs with the 'main_process' tag

  jobs = scheduler.jobs(tags: [ 'main_process', 'side_dish' ])
    # find all the jobs with the 'main_process' AND 'side_dish' tags

Scheduler#running_jobs

Returns the list of Job instances that have currently running instances.

Whereas the other "_jobs" methods scan the scheduled job list, this method scans the thread list to find the jobs. It thus includes jobs that are running but are no longer scheduled (that happens for at and in jobs).

misc Scheduler methods

Scheduler#unschedule(job_or_job_id)
Unschedule a job given directly or by its id.

Scheduler#shutdown
Shuts down the scheduler, ceases any scheduler/triggering activity.

Scheduler#shutdown(wait: true)
Shuts down the scheduler, waits (blocks) until all the jobs cease running.

Scheduler#shutdown(wait: n)

Shuts down the scheduler, waits (blocks) at most n seconds until all the jobs cease running. (Jobs are killed after n seconds have elapsed).
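The wait-then-kill semantics can be illustrated with plain Ruby threads (a sketch of the idea, not the scheduler's code): join each work thread until a shared deadline, then kill the stragglers.

```ruby
workers = [
  Thread.new { sleep 0.1 },  # finishes within the grace period
  Thread.new { sleep 60 }    # would overrun it
]

# give the pool as a whole up to 0.5s to wind down
deadline = Time.now + 0.5
workers.each { |t| t.join([ deadline - Time.now, 0 ].max) }

# Thread#join(0) returns the thread if it's done, nil if still alive
stragglers = workers.reject { |t| t.join(0) }
stragglers.each(&:kill)

p stragglers.size  # => 1
```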

Scheduler#shutdown(:kill)

Kills all the job threads and then shuts the scheduler down. Radical.

Scheduler#down?
Returns true if the scheduler has been shut down.

Scheduler#started_at
Returns the Time instance at which the scheduler got started.

Scheduler #uptime / #uptime_s

Returns the count of seconds for which the scheduler has been running.

#uptime_s returns this count in a String easier to grasp for humans, like "3d12m45s123".
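Here is a plain-Ruby sketch of how such a second count can be rendered in that style (rufus-scheduler uses its own helper; this is an approximation for illustration):

```ruby
# Break a float second count into d/h/m/s parts plus trailing
# milliseconds, e.g. 259965.123 => "3d12m45s123".

def to_duration_s(seconds)
  ms = ((seconds % 1) * 1000).round
  s = seconds.to_i
  parts = []
  { 'd' => 86400, 'h' => 3600, 'm' => 60, 's' => 1 }.each do |unit, span|
    n, s = s.divmod(span)
    parts << "#{n}#{unit}" if n > 0
  end
  parts << ms.to_s if ms > 0
  parts.join
end

p to_duration_s(259965.123)  # => "3d12m45s123"
p to_duration_s(3661)        # => "1h1m1s"
```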

Scheduler#join
Lets the current thread join the scheduling thread in rufus-scheduler. The thread comes back when the scheduler gets shut down.

#join is mostly used in standalone scheduling scripts (or tiny one-file examples). Calling #join from a web application initializer will probably hijack the main thread and prevent the web application from being served. Do not put a #join in such a web application initializer file.

Scheduler#threads
Returns all the threads associated with the scheduler, including the scheduler thread itself.

Scheduler#work_threads(query: :all/:active/:vacant)
Lists the work threads associated with the scheduler. The query option defaults to :all.

  • :all : all the work threads
  • :active : all the work threads currently running a Job
  • :vacant : all the work threads currently not running a Job

Note that the main schedule thread will be returned if it is currently running a Job (ie one of those blocking: true jobs).

Scheduler#scheduled?(job_or_job_id)
Returns true if the arg is a currently scheduled job (see Job#scheduled?).

Scheduler#occurrences(time0, time1)

Returns a hash { job => [ t0, t1, ... ] } mapping jobs to their potential trigger time within the [ time0, time1 ] span.

Please note that, for interval jobs, the #mean_work_time is used, so the result is only a prediction.

Scheduler#timeline(time0, time1)

Like #occurrences but returns a list [ [ t0, job0 ], [ t1, job1 ], ... ] of time + job pairs.

dealing with job errors

The easy, job-granular way of dealing with errors is to rescue and deal with them immediately. The two next sections show examples. Skip them for explanations on how to deal with errors at the scheduler level.

block jobs

As said, jobs could take care of their errors themselves.

scheduler.every '10m' do
  begin
    # do something that might fail...
  rescue => e
    $stderr.puts '-' * 80
    $stderr.puts e.message
    $stderr.puts e.backtrace
    $stderr.puts '-' * 80
  end
end

callable jobs

Jobs are not limited to blocks; here is how the above would look with a dedicated class.

scheduler.every '10m', Class.new do
  def call(job)
    # do something that might fail...
  rescue => e
    $stderr.puts '-' * 80
    $stderr.puts e.message
    $stderr.puts e.backtrace
    $stderr.puts '-' * 80
  end
end

TODO: talk about callable#on_error (if implemented)

(see scheduling handler instances and scheduling handler classes for more about those "callable jobs")

Rufus::Scheduler#stderr=
By default, rufus-scheduler intercepts all errors (that inherit from StandardError) and dumps abundant details to $stderr.

If, for example, you'd like to divert that flow to another file (descriptor), you can reassign $stderr for the current Ruby process:

$stderr = File.open('/var/log/myapplication.log', 'ab')

or, you can limit that reassignment to the scheduler itself:

scheduler.stderr = File.open('/var/log/myapplication.log', 'ab')

Rufus::Scheduler#on_error(job, error)

We've just seen that, by default, rufus-scheduler dumps error information to $stderr. If one needs to completely change what happens in case of error, it's OK to overwrite #on_error

def scheduler.on_error(job, error)

  Logger.warn("intercepted error in #{job.id}: #{error.message}")
end

On Rails, the on_error method redefinition might look like:

def scheduler.on_error(job, error)

  Rails.logger.error(
    "err#{error.object_id} rufus-scheduler intercepted #{error.inspect}" +
    " in job #{job.inspect}")
  error.backtrace.each_with_index do |line, i|
    Rails.logger.error(
      "err#{error.object_id} #{i}: #{line}")
  end
end


Rufus::Scheduler #on_pre_trigger and #on_post_trigger callbacks

One can bind callbacks before and after jobs trigger:

s = Rufus::Scheduler.new

def s.on_pre_trigger(job, trigger_time)
  puts "triggering job #{job.id}..."
end

def s.on_post_trigger(job, trigger_time)
  puts "triggered job #{job.id}."
end

s.every '1s' do
  # ...
end

The trigger_time is the time at which the job triggers. It might be a bit before Time.now.

Warning: these two callbacks are executed in the scheduler thread, not in the work threads (the threads where the job execution really happens).

Rufus::Scheduler#around_trigger
One can create an around callback which will wrap a job:

def s.around_trigger(job)
  t = Time.now
  puts "Starting job #{job.id}..."
  yield
  puts "job #{job.id} finished in #{Time.now - t} seconds."
end

The around callback is executed in the work thread.

Rufus::Scheduler#on_pre_trigger as a guard

Returning false in on_pre_trigger will prevent the job from triggering. Returning anything else (nil, -1, true, ...) will let the job trigger.

Note: your business logic should go in the scheduled block itself (or the scheduled instance). Don't put business logic in on_pre_trigger. Return false for admin reasons (backend down, etc), not for business reasons that are tied to the job itself.

def s.on_pre_trigger(job, trigger_time)

  return false if Backend.down?

  puts "triggering job #{job.id}..."
end

Rufus::Scheduler.new options

frequency
By default, rufus-scheduler sleeps 0.300 second between every step. At each step it checks for jobs to trigger and so on.

The :frequency option lets you change that 0.300 second to something else.

scheduler = Rufus::Scheduler.new(frequency: 5)

It's OK to use a time string to specify the frequency.

scheduler = Rufus::Scheduler.new(frequency: '2h10m')
  # this scheduler will sleep 2 hours and 10 minutes between every "step"

Use with care.

lockfile: "mylockfile.txt"

This feature only works on OSes that support the flock (man 2 flock) call.

Starting the scheduler with lockfile: '.rufus-scheduler.lock' will make the scheduler attempt to create and lock the file .rufus-scheduler.lock in the current working directory. If that fails, the scheduler will not start.

The idea is to guarantee only one scheduler (in a group of schedulers sharing the same lockfile) is running.

This is useful in environments where the Ruby process holding the scheduler gets started multiple times.

If the lockfile mechanism here is not sufficient, you can plug your custom mechanism. It's explained in advanced lock schemes below.
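The underlying mechanism can be seen with plain Ruby, whose File#flock wraps flock(2): an exclusive non-blocking lock succeeds once, and a second attempt on the same file fails (an illustration only; rufus-scheduler manages its lockfile internally):

```ruby
require 'tmpdir'

path = File.join(Dir.tmpdir, 'lockfile-demo.lock')

f1 = File.open(path, File::RDWR | File::CREAT)
f2 = File.open(path, File::RDWR | File::CREAT)

got1 = f1.flock(File::LOCK_EX | File::LOCK_NB)  # 0 when acquired
got2 = f2.flock(File::LOCK_EX | File::LOCK_NB)  # false: already held

p [ got1, got2 ]  # => [0, false]

f1.flock(File::LOCK_UN)
[ f1, f2 ].each(&:close)
File.delete(path)
```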

scheduler_lock
(since rufus-scheduler 3.0.9)

The scheduler lock is an object that responds to #lock and #unlock. The scheduler calls #lock when starting up. If the answer is false, the scheduler stops its initialization work and won't schedule anything.

Here is a sample of a scheduler lock that only lets the scheduler on host "coffee.example.com" start:

class HostLock
  def initialize(lock_name)
    @lock_name = lock_name
  end
  def lock
    @lock_name == `hostname -f`.strip
  end
  def unlock
    true
  end
end

scheduler =
  Rufus::Scheduler.new(scheduler_lock: HostLock.new('coffee.example.com'))

By default, the scheduler_lock is an instance of Rufus::Scheduler::NullLock, with a #lock that returns true.

trigger_lock
(since rufus-scheduler 3.0.9)

The trigger lock is an object that responds to #lock. The scheduler calls that method on the job lock right before triggering any job. If the answer is false, the trigger doesn't happen, the job is not done (at least not in this scheduler).

Here is a (stupid) PingLock example, it'll only trigger if an "other host" is not responding to ping. Do not use that in production, you don't want to fork a ping process for each trigger attempt...

class PingLock
  def initialize(other_host)
    @other_host = other_host
  end
  def lock
    ! system("ping -c 1 #{@other_host}")
  end
end

scheduler =
  Rufus::Scheduler.new(trigger_lock: PingLock.new('main.example.com'))

By default, the trigger_lock is an instance of Rufus::Scheduler::NullLock, with a #lock that always returns true.

As explained in advanced lock schemes, another way to tune that behaviour is by overriding the scheduler's #confirm_lock method. (You could also do that with an #on_pre_trigger callback).

max_work_threads
In rufus-scheduler 2.x, by default, each job triggering received its own, brand new, thread of execution. In rufus-scheduler 3.x, execution happens in a pooled work thread. The max work thread count (the pool size) defaults to 28.

One can set this maximum value when starting the scheduler.

scheduler = Rufus::Scheduler.new(max_work_threads: 77)

It's OK to increase the :max_work_threads of a running scheduler.

scheduler.max_work_threads += 10

Rufus::Scheduler.singleton
Do not want to store a reference to your rufus-scheduler instance? Then Rufus::Scheduler.singleton can help, it returns a singleton instance of the scheduler, initialized the first time this class method is called.

Rufus::Scheduler.singleton.every('10s') { puts "hello, world!" }

It's OK to pass initialization arguments (like :frequency or :max_work_threads) but they will only be taken into account the first time .singleton is called.

Rufus::Scheduler.singleton(max_work_threads: 77)
Rufus::Scheduler.singleton(max_work_threads: 277) # no effect

The .s is a shortcut for .singleton.

Rufus::Scheduler.s.every('10s') { puts "hello, world!" }

advanced lock schemes

As seen above, rufus-scheduler proposes the :lockfile system out of the box. If in a group of schedulers only one is supposed to run, the lockfile mechanism prevents schedulers that have not set/created the lockfile from running.

There are situations where this is not sufficient.

By overriding #lock and #unlock, one can customize how schedulers lock.

This example was provided by Eric Lindvall:

class ZookeptScheduler < Rufus::Scheduler

  def initialize(zookeeper, opts={})
    @zk = zookeeper
    super(opts)
  end

  def lock
    @zk_locker = @zk.exclusive_locker('scheduler')
    @zk_locker.lock # returns true if the lock was acquired, false else
  end

  def unlock
    @zk_locker.unlock
  end

  def confirm_lock
    return false if down?
    @zk_locker.assert!
  rescue ZK::Exceptions::LockAssertionFailedError => e
    # we've lost the lock, shutdown (and return false to at least prevent
    # this job from triggering)
    shutdown
    false
  end
end
This uses a zookeeper to make sure only one scheduler in a group of distributed schedulers runs.

The methods #lock and #unlock are overridden and #confirm_lock is provided, to make sure that the lock is still valid.

The #confirm_lock method is called right before a job triggers (if it is provided). The more generic callback #on_pre_trigger is called right after #confirm_lock.

:scheduler_lock and :trigger_lock

(introduced in rufus-scheduler 3.0.9).

Another way of providing #lock, #unlock and #confirm_lock to a rufus-scheduler is by using the :scheduler_lock and :trigger_lock options.

See :trigger_lock and :scheduler_lock.

The scheduler lock may be used to prevent a scheduler from starting, while a trigger lock prevents individual jobs from triggering (the scheduler goes on scheduling).

One has to be careful with what goes in #confirm_lock or in a trigger lock, as it gets called before each trigger.

Warning: you may think you're heading towards "high availability" by using a trigger lock and having lots of schedulers at hand. It may be so if you limit yourself to scheduling the same set of jobs at scheduler startup. But if you add schedules at runtime, they stay local to their scheduler. There is no magic that propagates the jobs to all the schedulers in your pack.

parsing cronlines and time strings

(Please note that fugit does the heavy-lifting parsing work for rufus-scheduler).

Rufus::Scheduler provides a class method .parse to parse time durations and cron strings. It's what it's using when receiving schedules. One can use it directly (no need to instantiate a Scheduler).

require 'rufus-scheduler'

Rufus::Scheduler.parse('1w2d')
  # => 777600.0
Rufus::Scheduler.parse('1.0w1.0d')
  # => 777600.0

Rufus::Scheduler.parse('Sun Nov 18 16:01:00 2012').strftime('%c')
  # => 'Sun Nov 18 16:01:00 2012'

Rufus::Scheduler.parse('Sun Nov 18 16:01:00 2012 Europe/Berlin').strftime('%c %z')
  # => 'Sun Nov 18 15:01:00 2012 +0000'

Rufus::Scheduler.parse(0.1)
  # => 0.1

Rufus::Scheduler.parse('* * * * *')
  # => #<Fugit::Cron:0x00007fb7a3045508
  #      @original="* * * * *", @cron_s=nil,
  #      @seconds=[0], @minutes=nil, @hours=nil, @monthdays=nil, @months=nil,
  #      @weekdays=nil, @zone=nil, @timezone=nil>

It returns a number when the input is a duration and a Fugit::Cron instance when the input is a cron string.

It will raise an ArgumentError if it can't parse the input.

Beyond .parse, there are also .parse_cron and .parse_duration, for finer granularity.
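As a rough plain-Ruby sketch of what duration parsing does (fugit's real parser handles far more: "ms", months, years, etc.), a "1w2d"-style string maps to seconds like this:

```ruby
# Scan "<number><unit>" chunks and sum them up.
DURATION_UNITS = { 'w' => 604800, 'd' => 86400, 'h' => 3600, 'm' => 60, 's' => 1 }

def duration_to_seconds(s)
  s.scan(/(\d+(?:\.\d+)?)([wdhms])/).sum { |n, u| n.to_f * DURATION_UNITS[u] }
end

p duration_to_seconds('1w2d')   # => 777600.0
p duration_to_seconds('2h10m')  # => 7800.0
```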

There is an interesting helper method named .to_duration_hash:

require 'rufus-scheduler'

Rufus::Scheduler.to_duration_hash(60)
  # => { :m => 1 }
Rufus::Scheduler.to_duration_hash(62.127)
  # => { :m => 1, :s => 2, :ms => 127 }

Rufus::Scheduler.to_duration_hash(62.127, drop_seconds: true)
  # => { :m => 1 }

cronline notations specific to rufus-scheduler

first Monday, last Sunday et al

To schedule something at noon every first Monday of the month:

scheduler.cron('00 12 * * mon#1') do
  # ...
end

To schedule something at noon the last Sunday of every month:

scheduler.cron('00 12 * * sun#-1') do
  # ...
end

# OR

scheduler.cron('00 12 * * sun#L') do
  # ...
end

Such cronlines can be tested with scripts like:

require 'rufus-scheduler'

Time.now
  # => 2013-10-26 07:07:08 +0900
Rufus::Scheduler.parse('* * * * mon#1').next_time.to_s
  # => 2013-11-04 00:00:00 +0900

L (last day of month)

L can be used in the "day" slot:

In this example, the cronline is supposed to trigger every last day of the month at noon:

require 'rufus-scheduler'

Time.now
  # => 2013-10-26 07:22:09 +0900
Rufus::Scheduler.parse('00 12 L * *').next_time.to_s
  # => 2013-10-31 12:00:00 +0900

negative day (x days before the end of the month)

It's OK to pass negative values in the "day" slot:

scheduler.cron '0 0 -5 * *' do
  # do it at 00h00 5 days before the end of the month...
end

Negative ranges (-10--5: 10 days before the end of the month to 5 days before the end of the month) are OK, but mixed positive / negative ranges will raise an ArgumentError.

Negative ranges with increments (-10---2/2) are accepted as well.

Descending day ranges are not accepted (10-8 or -8--10 for example).
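Ruby's own Date.new accepts a negative day with a similar count-from-the-end convention (-1 being the last day of the month, as with cron's L), which is handy for checking what such a day resolves to:

```ruby
require 'date'

# negative days count back from the end of the month, -1 = last day
p Date.new(2024, 2, -1).to_s  # => "2024-02-29" (leap year)
p Date.new(2024, 2, -5).to_s  # => "2024-02-25"
p Date.new(2023, 2, -5).to_s  # => "2023-02-24"
```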

a note about timezones

Cron schedules and at schedules support the specification of a timezone.

scheduler.cron '0 22 * * 1-5 America/Chicago' do
  # the job...
end

scheduler.at '2013-12-12 14:00 Pacific/Samoa' do
  puts "it's tea time!"
end

# or even

Rufus::Scheduler.parse("2013-12-12 14:00 Pacific/Saipan")
  # => #<Rufus::Scheduler::ZoTime:0x007fb424abf4e8 @seconds=1386820800.0, @zone=#<TZInfo::DataTimezone: Pacific/Saipan>, @time=nil>

I get "zotime.rb:41:in `initialize': cannot determine timezone from nil"

For when you see an error like:

  in `initialize':
    cannot determine timezone from nil (etz:nil,tnz:"中国标准时间",tzid:nil)
    from rufus-scheduler/lib/rufus/scheduler/zotime.rb:198:in `new'
    from rufus-scheduler/lib/rufus/scheduler/zotime.rb:198:in `now'
    from rufus-scheduler/lib/rufus/scheduler.rb:561:in `start'

It may happen on Windows or on systems that poorly hint to Ruby which timezone to use. It should be solved by setting explicitly the ENV['TZ'] before the scheduler instantiation:

ENV['TZ'] = 'Asia/Shanghai'
scheduler = Rufus::Scheduler.new
scheduler.every '2s' do
  puts "#{Time.now} Hello #{ENV['TZ']}!"
end

On Rails you might want to try with:

ENV['TZ'] = Time.zone.name # Rails only
scheduler = Rufus::Scheduler.new
scheduler.every '2s' do
  puts "#{Time.now} Hello #{ENV['TZ']}!"
end

(Hat tip to Alexander in gh-230)

Rails sets its timezone under config/application.rb.

Rufus-Scheduler 3.3.3 detects the presence of Rails and uses its timezone setting (tested with Rails 4), so setting ENV['TZ'] should not be necessary.

The value can be determined by looking at the tz database's list of timezone identifiers.

Use a "continent/city" identifier (for example "Asia/Shanghai"). Do not use an abbreviation (not "CST") and do not use a local time zone name (not "中国标准时间" nor "Eastern Standard Time" which, for instance, points to a time zone in America and to another one in Australia...).
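A quick, stdlib-only way to check that a "continent/city" identifier actually takes effect (assuming a POSIX system with tzdata installed; Ruby's Time layer re-reads ENV['TZ']):

```ruby
# A valid tz database identifier changes the UTC offset Time reports.
ENV['TZ'] = 'Asia/Shanghai'
puts Time.now.utc_offset   # seconds east of UTC; 28800 for UTC+8

ENV['TZ'] = 'UTC'
puts Time.now.utc_offset   # => 0
```

If the offset doesn't change, the identifier is not being resolved on your system, which is exactly the situation tzinfo-data (below) is meant to fix.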

If the error persists (and especially on Windows), try to add the tzinfo-data to your Gemfile, as in:

gem 'tzinfo-data'

or by manually requiring it before requiring rufus-scheduler (if you don't use Bundler):

require 'tzinfo/data'
require 'rufus-scheduler'

so Rails?

Yes, I know, all of the above is boring and you're only looking for a snippet to paste in your Ruby-on-Rails application to schedule...

Here is an example initializer:

# config/initializers/scheduler.rb

require 'rufus-scheduler'

# Let's use the rufus-scheduler singleton
s = Rufus::Scheduler.singleton

# Stupid recurrent task...
s.every '1m' do
  puts "hello, it's #{Time.now}"
end

And now you tell me that this is good, but you want to schedule stuff from your controller.


class ScheController < ApplicationController

  # GET /sche/
  def index

    job_id =
      Rufus::Scheduler.singleton.in '5s' do
        puts "time flies, it's now #{Time.now}"
      end

    render text: "scheduled job #{job_id}"
  end
end

The rufus-scheduler singleton is instantiated in the config/initializers/scheduler.rb file; it's then available throughout the webapp via Rufus::Scheduler.singleton.

Warning: this works well with single-process Ruby servers like WEBrick and Thin. Using rufus-scheduler with Passenger or Unicorn requires a bit more knowledge and tuning, gently provided by a bit of googling and reading; see the FAQ above.

avoid scheduling when running the Ruby on Rails console

(Written in reply to gh-186)

If you don't want rufus-scheduler to trigger anything while running the Ruby on Rails console, running for tests/specs, or running from a Rake task, you can insert a conditional return statement before jobs are added to the scheduler instance:

# config/initializers/scheduler.rb

require 'rufus-scheduler'

return if defined?(Rails::Console) || Rails.env.test? || File.split($PROGRAM_NAME).last == 'rake'
  # do not schedule when Rails is run from its console, for a test/spec, or
  # from a Rake task

# return if $PROGRAM_NAME.include?('spring')

s = Rufus::Scheduler.singleton

s.every '1m' do
  puts "hello, it's #{Time.now}"
end
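The guard above can be pulled out into a small predicate so it's easy to test on its own (skip_scheduling? is a hypothetical helper name, not part of rufus-scheduler):

```ruby
# Returns true when the current process should not schedule jobs:
# a Rails console, a test run, or a rake invocation.
def skip_scheduling?(program_name, console: false, test: false)
  console || test || File.split(program_name).last == 'rake'
end

puts skip_scheduling?('/usr/local/bin/rake')       # => true
puts skip_scheduling?('bin/rails', console: true)  # => true
puts skip_scheduling?('bin/rails')                 # => false
```

In the initializer you'd call it with defined?(Rails::Console), Rails.env.test?, and $PROGRAM_NAME, exactly as the inline return statement does.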

(Beware later versions of Rails where Spring takes care of pre-running the initializers. Running spring stop or disabling Spring might be necessary in some cases for changes to initializers to be taken into account.)

rails server -d

(Written in reply to )

There is the handy rails server -d that starts a development Rails as a daemon. The annoying thing is that the scheduler, as seen above, is started in the main process, which then gets forked and daemonized. The rufus-scheduler thread (and any other threads) gets lost, so no scheduling happens.

I avoid running -d in development mode and bother about daemonizing only for production deployment.

These are two well crafted articles on process daemonization, please read them:

If, anyway, you need something like rails server -d, why not try bundle exec unicorn -D instead? In my (limited) experience, it worked out of the box (well, I had to add gem 'unicorn' to the Gemfile first).

executor / reloader

You might benefit from wrapping your scheduled code in the executor or reloader. Read more here:


See "getting help" above.

Author: jmettraux
Source code:
License: MIT license


Rufus Scheduler: Job Scheduler for Ruby (at, Cron, in and Every Jobs)
Royce Reinger


Qu: A Ruby Library for Queuing and Processing Background Jobs


Qu is a Ruby library for queuing and processing background jobs. It is heavily inspired by delayed_job and Resque.

Qu was created to overcome some shortcomings in the existing queuing libraries that we experienced at Ordered List while building SpeakerDeck, and Harmony. The advantages of Qu are:

  • Multiple backends (redis, mongo)
  • Jobs are requeued when worker is killed
  • Resque-like API

Information & Help


Rails 3 and 4

Decide which backend you want to use and add the gem to your Gemfile.

gem 'qu-rails'
gem 'qu-redis'

That's all you need to do!

Rails 2

Decide which backend you want to use and add the gem to config.gems in environment.rb:

config.gem 'qu-redis'

To load the rake tasks, add the following to your Rakefile:

require 'qu/tasks'


Jobs are defined by extending the Qu::Job class:

class ProcessPresentation < Qu::Job
  def initialize(presentation_id)
    @presentation_id = presentation_id
  end

  def perform
    # fetch and process the presentation (sketch)
    presentation = Presentation.find(@presentation_id)
  end
end
You can add a job to the queue by calling create on your job:

job = ProcessPresentation.create(presentation.id)
puts "Created job #{job.id}"

The job will be initialized with any parameters that are passed to it when it is performed. These parameters will be stored in the backend, so they must be simple types that can easily be serialized and unserialized. Don't try to pass in an ActiveRecord object.
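A quick way to see why only simple types belong in job arguments: they must survive a serialization round trip. Here stdlib JSON stands in for the backend's wire format (the actual format depends on the backend you chose):

```ruby
require 'json'

# A record id and a plain option hash round-trip cleanly...
args = [42, { 'format' => 'pdf' }]
restored = JSON.parse(JSON.generate(args))
puts restored.inspect  # => [42, {"format"=>"pdf"}]

# ...whereas a full object (an ActiveRecord model, say) would not:
# serializing it either raises or degrades to a useless string,
# which is why you pass the id and re-find the record in #perform.
```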

Processing the jobs on the queue can be done with a Rake task:

$ bundle exec rake qu:work

You can easily inspect the queue or clear it:

puts "Jobs on the queue:", Qu.size
Qu.clear


The default queue is used, um…by default. Jobs that don't specify a queue will be placed in that queue, and workers that don't specify a queue will work on that queue.

However, if you have some background jobs that are more or less important, or some that take longer than others, you may want to consider using multiple queues. You can have workers dedicated to specific queues, or simply tell all your workers to work on the most important queue first.

Jobs can be placed in a specific queue by setting the queue:

class CallThePresident < Qu::Job
  queue :urgent

  def initialize(message)
    @message = message
  end

  def perform
    # …
  end
end

You can then tell workers to work on this queue by passing an environment variable:

$ bundle exec rake qu:work QUEUES=urgent,default

Note that if you still want your worker to process the default queue, you must specify it. Queues will be processed in the order they are specified.
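The ordering rule can be sketched in plain Ruby: queues drain in the order listed, so 'urgent' empties before 'default' is touched (a toy model for illustration, not Qu's actual worker loop):

```ruby
# Toy model of QUEUES=urgent,default: take every job from the first
# queue before moving on to the next one.
queues = {
  'urgent'  => [:call_president],
  'default' => [:send_email, :resize_image]
}
order = %w[urgent default]

processed = order.flat_map { |name| queues[name] }
puts processed.inspect  # => [:call_president, :send_email, :resize_image]
```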

You can also get the size or clear a specific queue:

Qu.size(:urgent)
Qu.clear(:urgent)

Most of the configuration for Qu should be automatic. It will also automatically detect ENV variables from Heroku for backend connections, so you shouldn't need to do anything to configure the backend.

However, if you do need to customize it, you can by calling Qu.configure:

Qu.configure do |c|
  c.logger = Logger.new('log/qu.log')
  c.graceful_shutdown = true
end


If you prefer to have jobs processed immediately in your tests, there is an Immediate backend that will perform the job instead of enqueuing it. In your test helper, require qu-immediate:

require 'qu-immediate'

Why another queuing library?

Resque and delayed_job are both great, but both of them have shortcomings that can be frustrating in production applications.

delayed_job was a brilliantly simple pioneer in the world of database-backed queues. While most asynchronous queuing systems were tending toward overly complex, it made use of your existing database and just worked. But there are a few flaws:

  • Occasionally fails silently.
  • Use of priority instead of separate named queues.
  • Contention in the ActiveRecord backend with multiple workers. Occasionally the same job gets performed by multiple workers.

Resque, the wiser relative of delayed_job, fixes most of those issues. But in doing so, it forces some of its beliefs on you, and sometimes those beliefs just don't make sense for your environment. Here are some of the flaws of Resque:

  • Redis is a great queue backend, but it doesn't make sense for every environment.
  • Forking before each job prevents memory leaks, but it is terribly inefficient in environments with a lot of fast jobs (the resque-jobs-per-fork plugin alleviates this)

Those shortcomings lead us to write Qu. It is not perfect, but we hope to overcome the issues we faced with other queuing libraries.


If you find what looks like a bug:

  1. Search the issues on GitHub to see if anyone else has reported the issue.
  2. If you don't see anything, create an issue with information on how to reproduce it.

If you want to contribute an enhancement or a fix:

  1. Fork the project on GitHub.
  2. Make your changes with tests.
  3. Commit the changes without modifying the Rakefile, Gemfile, gemspec, or any other files that aren't related to your enhancement or fix.
  4. Send a pull request.

Author: Bkeepers
Source Code: 
License: MIT license

#ruby #background #jobs 


World Web Technology has Multiple Job Vacancies for Multiple Positions

Are you looking for a company that gives you the chance to explore your skills and assures you rapid growth of your career?

World Web Technology Pvt. Ltd. has Multiple vacancies for multiple positions. So, ✊ Grab the opportunity to work with the best company in Ahmedabad. 😍

Interested candidates just drop an email to 📩 OR apply online at

References are highly appreciated.

#job #jobs

Zac Efron


Benefits of Working With a Staffing Company!

ESP Workforce began as a local staffing agency in Albany, CA, and has since grown to serve all major markets. Since 2001 our dedicated and talented team has been serving our clients to help them grow and reach their business goals at the most affordable rates possible. We offer a range of services including outsourcing employees for any industry and any profession. Our goal is to work with you to find quality employees at a significantly lower cost. We develop long-term and stable relationships with our clientele.

ESP Workforce Custom Staffing Solution is seeking to fill a variety of job positions in the mining industry, including underground and surface miners. While it is true that many mining jobs are seasonal, many are permanent, full-time positions. To find out whether Custom Staffing has a position that is perfect for you, visit their website. They also host weekly job fairs where potential employees can find full-time employment. If you're looking for a temporary position, visit their website to find out more about current opportunities.

Coal mining is an important industry with a high turnover rate, so it's important to find the right candidate for the right position. There are many benefits to working with a staffing agency that specializes in finding the perfect candidate for your needs. This firm also provides free job assistance and quick placements for job-seekers who aren't ready to commit to a permanent position. In addition, they are dedicated to helping their clients find long-term, full-time employment.

Besides offering great service, ESP Workforce also offers affordable staffing solutions. They can help you find the right candidates for the job and even transition them to permanent status. These staffing services take the headache out of hiring new employees. By providing the right people, they can meet the needs of their clients and make the process a bit easier. With the help of a recruiter, you can be sure that your staff will be successful.

In addition to offering flexible staffing solutions, this staffing firm can also provide temporary and direct hire employees. By providing them with the right candidates for their positions, they can provide an ideal environment for them to work. They will help with the screening and hiring, and will continue to help their clients find the best employees. A professional recruitment company will listen to your needs and make it easier to find the right person for the job. If you are looking for a new employee, you should always consider a custom staffing solution.

ESP is a great way to find temporary employees that meet your specific workforce needs. With a temp agency, you can get the workers you need for specific roles faster and at a lower cost. By using an ESP agency, you can ensure that the workers you hire will be the right fit for your company. This staffing solution is an ideal choice if you need a temporary workforce with a specific skill set.

The main advantage of using a custom staffing agency is that you can choose the workers that suit your needs and budget. ESP agencies help employers find qualified candidates faster. They also fill gaps in the local workforce. By choosing a custom staffing agency, you'll be able to pay for only the workers who have the skills you need. This is a great option for businesses that don't want to spend a lot of money on hiring employees.

Temps are a great way to find a skilled employee quickly. Using a custom staffing agency will ensure that you get the right candidate for the job. Temps will also ensure that the candidate you hire is trained and competent to do the job well. These workers will be a good fit for your business. If you're looking for temporary workers, you'll be happy with the results. You'll be able to hire the right people in the most affordable way possible.

Temps can be an invaluable resource for your business. They can help you avoid the hassle of hiring and managing a permanent employee. Temps will not take up your entire budget and will not charge you unless they do the job. While you'll have to pay for the workers you need, you'll only need to pay for the workers who have the right skills and background. And if you're an employer, you'll only need to pay for those who will fit your company's culture.

#staffing #employee #employment #custom #jobs #offers #usa

Web Dev


All You Need to Get a Full Stack Web Developer Job in 2022

This is LITERALLY All You Need to Get a Full Stack Web Developer Job in 2022 🤯

Tune into this video till the end, to find out EXACTLY what you need to get a full-stack web developer job in 2022! Let's discuss how you can approach it, and how companies like codedamn hire people like you who are applying for an SDE role in mid-size start-up companies and in places you are needed!

Drop a comment and let us know if you watched this video till the end.

0:00 Teaser
0:26 Developer job for Codedamn
1:12 What is a Full stack developer?
4:44 Build Projects
7:00 How does Codedamn work?
9:11 Conclusion
10:11 Outro

#fullstack #developer #jobs #hiring #javascript #html #css #programming 

Dylan Iqbal


Hiring an Engineering Manager for Chrome's Aurora Team!

I'm hiring an Engineering Manager for Chrome's Aurora team! Want to work with popular JavaScript frameworks (e.g. Next.js/React, Angular, Vue) on improving UX quality & performance by default? 

Apply here: 

Info on team:

#hiring #jobs #chrome #javascript #next #angular #vue #react 



Top 20 Job Search Tips

Job searching isn’t just applying or hoping to get a call📞- It’s more than just that. There is plenty of advice out there- so we won’t beat around the bush but rather give you the Top 20 Job Searching 🔎 tips.

1. Be patient, but have a plan

2. Take at least one step daily

3. Identify your unique qualifications

4. Get out there

5. Be persistent

6. Build a network

7. Look for “hidden” jobs

8. Consider an internship

9. Volunteer your time and skills

10. Consider temporary work

11. Expand your search geographically

12. Consider recession-proof industries

13. Log in to Professional networking sites

14. Clean up your profile

15. Use your best job search manners

16. Talk with faculty members and remind them of your career interest areas

17. Keep track of the contact information for individuals in your network

18. Don’t hide out in grad school

19. Divide and conquer

20. Use your Career and Professional Development Center


To explore more, click here.


#job #career #jobs #resume 
