CronR: A Simple R Package for Managing Your Cron Jobs

cronR

Schedule R scripts/processes with the cron scheduler. This allows R users working on Unix/Linux to automate R processes at specific timepoints from R itself. Note that if you are looking for a Windows scheduler, you might be interested in the R package taskscheduleR, available at bnosac/taskscheduleR.

Basic usage

This R package allows you to

  • Get the list of scheduled jobs
  • Remove scheduled jobs
  • Add a job
    • A job is basically a script with R code which is run through Rscript
    • You can schedule tasks 'ONCE', 'EVERY MINUTE', 'EVERY HOUR', 'EVERY DAY', 'EVERY WEEK', 'EVERY MONTH' or any complex schedule
    • The task log contains the stdout & stderr of the Rscript that was run at that timepoint. This log can be found in the same folder as the R script

RStudio add-in

The package also contains an RStudio add-in. If you install the package and use RStudio version 0.99.893 or later, you can schedule a task with a few clicks: go to Addins > Schedule R scripts on Linux/Unix.

  • Alternatively, run cronR::cron_rstudioaddin() to open the add-in interface. If you use that add-in to schedule scripts, by default it will copy the R scripts to your current working directory and launch the copied scripts from there. A different directory can be specified by passing it to the RscriptRepository parameter: cronR::cron_rstudioaddin(RscriptRepository = "/path/to/your/deployment/directory")
  • You can also set the CRON_LIVE environment variable to specify a default directory to copy scheduled scripts to. This can be done by putting something like CRON_LIVE="/path/to/your/deployment/directory" in your .Renviron or .Rprofile file. See ?Startup

Usage

The examples below show how to schedule a script at specific timepoints.

library(cronR)
f   <- system.file(package = "cronR", "extdata", "helloworld.R")
cmd <- cron_rscript(f)
cmd

cron_add(command = cmd, frequency = 'minutely', id = 'test1', description = 'My process 1', tags = c('lab', 'xyz'))
cron_add(command = cmd, frequency = 'daily', at='7AM', id = 'test2')
cron_njobs()

cron_ls()
cron_clear(ask = TRUE)
cron_ls()

cmd <- cron_rscript(f, rscript_args = c("productx", "arg2", "123"))
cmd
cron_add(cmd, frequency = 'minutely', id = 'job1', description = 'Customers')
cron_add(cmd, frequency = 'hourly', id = 'job2', description = 'Weather')
cron_add(cmd, frequency = 'hourly', id = 'job3', days_of_week = c(1, 2))
cron_add(cmd, frequency = 'hourly', id = 'job4', at = '00:20', days_of_week = c(1, 2))
cron_add(cmd, frequency = 'daily', id = 'job5', at = '14:20')
cron_add(cmd, frequency = 'daily', id = 'job6', at = '14:20', days_of_week = c(0, 3, 5))
cron_add(cmd, frequency = 'daily', id = 'job7', at = '23:59', days_of_month = c(1, 30))
cron_add(cmd, frequency = 'monthly', id = 'job8', at = '10:30', days_of_month = 'first', days_of_week = '*')
cron_add(cmd, frequency = '@reboot', id = 'job9', description = 'Good morning')
cron_add(cmd, frequency = '*/15 * * * *', id = 'job10', description = 'Every 15 min')   
cron_ls()
cron_clear(ask = TRUE)

By default, when you use the functions cron_add, cron_rm, cron_clear or cron_load, a prompt asks for your permission to update the cron schedule. Pass the argument ask = FALSE to any of these functions to update the schedule without being asked. (New since R package version 0.6.0.)
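A minimal sketch of suppressing that prompt, reusing the cmd object from the examples above (the id and schedule are arbitrary):

cron_add(command = cmd, frequency = 'daily', at = '7AM', id = 'nightly_refresh', ask = FALSE)  # added without prompting
cron_rm(id = 'nightly_refresh', ask = FALSE)                                                   # removed without prompting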

Install

Make sure the cron daemon (https://en.wikipedia.org/wiki/Cron) is running. On Debian this is done as follows.

sudo apt-get update
sudo apt-get install -y cron
sudo service cron start

If you want the RStudio add-in to work, also install miniUI, shiny and shinyFiles

install.packages('miniUI')
install.packages('shiny')
install.packages('shinyFiles')

Now have a look at ?cron_add or start the RStudio add-in.

Notes

  • Consider adding options(echo = TRUE) at the start of your R scripts in order to debug your scripts in case of errors.
  • Consider using the argument env when calling cron_add if you need specific environment variables to be available to your script; these will be prepended to your script. For example, as follows, where you set dry_run to FALSE to actually add it to your cron job list.
cmd <- cron_rscript("/path/to/your/script.R")
cron_add(cmd, frequency = 'minutely', id = 'job1', description = 'Customers', 
         env = c(LANG = "en_US.UTF-8", R_LIBS_USER = Sys.getenv("R_LIBS_USER"), YOUR_PROJECT_XYZ = getwd()), 
         ask = FALSE, dry_run = TRUE)
  • Currently, cronR does not preserve or handle cron jobs not generated through the package. This will be handled some time in the future. To be safe, you should run cron_save("cron.backup") before fiddling around.
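As a sketch of that precaution (the file name is arbitrary), back up the current crontab before changing anything and reload it if something goes wrong:

cron_save("cron.backup")   # write the current crontab to a file
# ... add or remove jobs ...
cron_load("cron.backup")   # restore the saved crontab if needed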

Download Details:

Author: Bnosac
Source Code: https://github.com/bnosac/cronR 
License: View license

#r #cron #rstudio #schedule 


CSharp REPL: A Command Line C# REPL with Syntax Highlighting

C# REPL

A cross-platform command line REPL for the rapid experimentation and exploration of C#. It supports intellisense, installing NuGet packages, and referencing local .NET projects and assemblies.

(Screenshot: animated demo of a C# REPL session; click to view the animation.)

C# REPL provides the following features:

  • Syntax highlighting via ANSI escape sequences
  • Intellisense with fly-out documentation
  • Nuget package installation
  • Reference local assemblies, solutions, and projects
  • Navigate to source via Source Link
  • IL disassembly (both Debug and Release mode)
  • Fast and flicker-free rendering. A "diff" algorithm is used to only render what's changed.

Installation

C# REPL is a .NET 6 global tool, and runs on Windows 10, Mac OS, and Linux. It can be installed via:

dotnet tool install -g csharprepl

If you're running on Mac OS Catalina (10.15) or later, make sure you follow any additional directions printed to the screen. You may need to update your PATH variable in order to use .NET global tools.

After installation is complete, run csharprepl to begin. C# REPL can be updated via dotnet tool update -g csharprepl.

Usage:

Run csharprepl from the command line to begin an interactive session. The default colorscheme uses the color palette defined by your terminal, but these colors can be changed using a theme.json file provided as a command line argument.

Evaluating Code

Type some C# into the prompt and press Enter to run it. The result, if any, will be printed:

> Console.WriteLine("Hello World")
Hello World

> DateTime.Now.AddDays(8)
[6/7/2021 5:13:00 PM]

To evaluate multiple lines of code, use Shift+Enter to insert a newline:

> var x = 5;
  var y = 8;
  x * y
40

Additionally, if the statement is not a "complete statement" a newline will automatically be inserted when Enter is pressed. For example, in the below code, the first line is not a syntactically complete statement, so when we press enter we'll go down to a new line:

> if (x == 5)
  | // caret position, after we press Enter on Line 1

Finally, pressing Ctrl+Enter will show a "detailed view" of the result. For example, for the DateTime.Now expression below, on the first line we pressed Enter, and on the second line we pressed Ctrl+Enter to view more detailed output:

> DateTime.Now // Pressing Enter shows a reasonable representation
[5/30/2021 5:13:00 PM]

> DateTime.Now // Pressing Ctrl+Enter shows a detailed representation
[5/30/2021 5:13:00 PM] {
  Date: [5/30/2021 12:00:00 AM],
  Day: 30,
  DayOfWeek: Sunday,
  DayOfYear: 150,
  Hour: 17,
  InternalKind: 9223372036854775808,
  InternalTicks: 637579915804530992,
  Kind: Local,
  Millisecond: 453,
  Minute: 13,
  Month: 5,
  Second: 0,
  Ticks: 637579915804530992,
  TimeOfDay: [17:13:00.4530992],
  Year: 2021,
  _dateData: 9860951952659306800
}

A note on semicolons: C# expressions do not require semicolons, but statements do. If a statement is missing a required semicolon, a newline will be added instead of trying to run the syntactically incomplete statement; simply type the semicolon to complete the statement.

> var now = DateTime.Now; // assignment statement, semicolon required

> DateTime.Now.AddDays(8) // expression, we don't need a semicolon
[6/7/2021 5:03:05 PM]

Keyboard Shortcuts

  • Basic Usage
    • Ctrl+C - Cancel current line
    • Ctrl+L - Clear screen
    • Enter - Evaluate the current line if it's a syntactically complete statement; otherwise add a newline
    • Control+Enter - Evaluate the current line, and return a more detailed representation of the result
    • Shift+Enter - Insert a new line (this does not currently work on Linux or Mac OS; Hopefully this will work in .NET 7)
    • Ctrl+Shift+C - Copy current line to clipboard
    • Ctrl+V, Shift+Insert, and Ctrl+Shift+V - Paste text to prompt. Automatically trims leading indent
  • Code Actions
    • F1 - Opens the MSDN documentation for the class/method under the caret (example)
    • F9 - Shows the IL (intermediate language) for the current statement in Debug mode.
    • Ctrl+F9 - Shows the IL for the current statement with Release mode optimizations.
    • F12 - Opens the source code in the browser for the class/method under the caret, if the assembly supports Source Link.
  • Autocompletion
    • Ctrl+Space - Open autocomplete menu. If there's a single option, pressing Ctrl+Space again will select the option
    • Enter, Right Arrow, Tab - Select active autocompletion option
    • Escape - closes autocomplete menu
  • Text Navigation
    • Home and End - Navigate to beginning of a single line and end of a single line, respectively
    • Ctrl+Home and Ctrl+End - Navigate to beginning of line and end across multiple lines in a multiline prompt, respectively
    • Arrows - Navigate characters within text
    • Ctrl+Arrows - Navigate words within text
    • Ctrl+Backspace - Delete previous word
    • Ctrl+Delete - Delete next word

Adding References

Use the #r command to add assembly or nuget references.

  • For assembly references, run #r "AssemblyName" or #r "path/to/assembly.dll"
  • For project references, run #r "path/to/project.csproj". Solution files (.sln) can also be referenced.
  • For nuget references, run #r "nuget: PackageName" to install the latest version of a package, or #r "nuget: PackageName, 13.0.5" to install a specific version (13.0.5 in this case).
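For example, a hypothetical session that installs Newtonsoft.Json and uses it right away (package and version chosen for illustration); the last line prints the serialized JSON:

> #r "nuget: Newtonsoft.Json, 13.0.1"
> using Newtonsoft.Json;
> JsonConvert.SerializeObject(new { hello = "world" })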

(Screenshot: installing a NuGet package with the #r command.)

To run ASP.NET applications inside the REPL, start the csharprepl application with the --framework parameter, specifying the Microsoft.AspNetCore.App shared framework. Then, use the above #r command to reference the application DLL. See the Command Line Configuration section below for more details.

csharprepl --framework  Microsoft.AspNetCore.App

Command Line Configuration

The C# REPL supports multiple configuration flags to control startup, behavior, and appearance:

csharprepl [OPTIONS] [response-file.rsp] [script-file.csx] [-- <additional-arguments>]

Supported options are:

  • OPTIONS:
    • -r <dll> or --reference <dll>: Reference an assembly, project file, or nuget package. Can be specified multiple times. Uses the same syntax as #r statements inside the REPL. For example, csharprepl -r "nuget:Newtonsoft.Json" "path/to/myproj.csproj"
      • When an assembly or project is referenced, assemblies in the containing directory will be added to the assembly search path. This means that you don't need to manually add references to all of your assembly's dependencies (e.g. other references and nuget packages). Referencing the main entry assembly is enough.
    • -u <namespace> or --using <namespace>: Add a using statement. Can be specified multiple times.
    • -f <framework> or --framework <framework>: Reference a shared framework. The available shared frameworks depend on the local .NET installation, and can be useful when running an ASP.NET application from the REPL. Example frameworks are:
      • Microsoft.NETCore.App (default)
      • Microsoft.AspNetCore.All
      • Microsoft.AspNetCore.App
      • Microsoft.WindowsDesktop.App
    • -t <theme.json> or --theme <theme.json>: Read a theme file for syntax highlighting. This theme file associates C# syntax classifications with colors. The color values can be full RGB, or ANSI color names (defined in your terminal's theme). The NO_COLOR standard is supported.
    • --trace: Produce a trace file in the current directory that logs CSharpRepl internals. Useful for CSharpRepl bug reports.
    • -v or --version: Show version number and exit.
    • -h or --help: Show help and exit.
  • response-file.rsp: A filepath of an .rsp file, containing any of the above command line options.
  • script-file.csx: A filepath of a .csx file, containing lines of C# to evaluate before starting the REPL. Arguments to this script can be passed as <additional-arguments>, after a double hyphen (--), and will be available in a global args variable.

If you have dotnet-suggest enabled, all options can be tab-completed, including values provided to --framework and .NET namespaces provided to --using.
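As a sketch of the response-file support described above, a file such as options.rsp (the name and contents here are illustrative) can hold the options, one per line:

-r "nuget: Newtonsoft.Json"
-u System.Linq

and be passed on startup with csharprepl options.rsp.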

Integrating with other software

C# REPL is a standalone software application, but it can be useful to integrate it with other developer tools:

Windows Terminal

To add C# REPL as a menu entry in Windows Terminal, add the following profile to Windows Terminal's settings.json configuration file (under the JSON property profiles.list):

{
    "name": "C# REPL",
    "commandline": "csharprepl"
},

To get the exact colors shown in the screenshots in this README, install the Windows Terminal Dracula theme.

Visual Studio Code

To use the C# REPL with Visual Studio Code, simply run the csharprepl command in the Visual Studio Code terminal. To send commands to the REPL, use the built-in Terminal: Run Selected Text In Active Terminal command from the Command Palette (workbench.action.terminal.runSelectedText).

(Screenshot: C# REPL running in the Visual Studio Code terminal.)

Windows OS

To add the C# REPL to the Windows Start Menu for quick access, you can run the following PowerShell command, which will start C# REPL in Windows Terminal:

$shell = New-Object -ComObject WScript.Shell
$shortcut = $shell.CreateShortcut("$env:appdata\Microsoft\Windows\Start Menu\Programs\csharprepl.lnk")
$shortcut.TargetPath = "wt.exe"
$shortcut.Arguments = "-w 0 nt csharprepl.exe"
$shortcut.Save()

You may also wish to add a shorter alias for C# REPL, which can be done by creating a .cmd file somewhere on your path. For example, put the following contents in C:\Users\username\.dotnet\tools\csr.cmd:

wt -w 0 nt csharprepl

This will allow you to launch C# REPL by running csr from anywhere that accepts Windows commands, like the Windows Run dialog.

Comparison with other REPLs

This project is far from being the first REPL for C#. Here are some other projects; if this project doesn't suit you, another one might!

Visual Studio's C# Interactive pane is full-featured (it has syntax highlighting and intellisense) and is part of Visual Studio. This deep integration with Visual Studio is both a benefit from a workflow perspective, and a drawback as it's not cross-platform. As far as I know, the C# Interactive pane does not support NuGet packages or navigating to documentation/source code. Subjectively, it does not follow typical command line keybindings, so can feel a bit foreign.

csi.exe ships with C# and is a command line REPL. It's great because it's a cross platform REPL that comes out of the box, but it doesn't support syntax highlighting or autocompletion.

dotnet script allows you to run C# scripts from the command line. It has a REPL built-in, but the predominant focus seems to be as a script runner. It's a great tool, though, and has a strong community following.

dotnet interactive is a tool from Microsoft that creates a Jupyter notebook for C#, runnable through Visual Studio Code. It also provides a general framework useful for running REPLs.

Download Details:
Author: waf
Source Code: https://github.com/waf/CSharpRepl
License: MPL-2.0 License

#dotnet  #aspdotnet  #csharp 

Dotnet Script: Run C# Scripts From The .NET CLI

dotnet script

Run C# scripts from the .NET CLI, define NuGet packages inline and edit/debug them in VS Code - all of that with full language services support from OmniSharp.

NuGet Packages

Name                                   Version   Framework(s)
dotnet-script (global tool)            NuGet     net6.0, net5.0, netcoreapp3.1
Dotnet.Script (CLI as NuGet)           NuGet     net6.0, net5.0, netcoreapp3.1
Dotnet.Script.Core                     NuGet     netcoreapp3.1, netstandard2.0
Dotnet.Script.DependencyModel          NuGet     netstandard2.0
Dotnet.Script.DependencyModel.Nuget    NuGet     netstandard2.0

Installing

Prerequisites

The only thing we need to install is .NET Core 3.1 or .NET 5.0 SDK.

.NET Core Global Tool

.NET Core 2.1 introduced the concept of global tools meaning that you can install dotnet-script using nothing but the .NET CLI.

dotnet tool install -g dotnet-script

You can invoke the tool using the following command: dotnet-script
Tool 'dotnet-script' (version '0.22.0') was successfully installed.

The advantage of this approach is that you can use the same command for installation across all platforms. .NET Core SDK also supports viewing a list of installed tools and their uninstallation.

dotnet tool list -g

Package Id         Version      Commands
---------------------------------------------
dotnet-script      0.22.0       dotnet-script

dotnet tool uninstall dotnet-script -g

Tool 'dotnet-script' (version '0.22.0') was successfully uninstalled.

Windows

choco install dotnet.script

We also provide a PowerShell script for installation.

(new-object Net.WebClient).DownloadString("https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.ps1") | iex

Linux and Mac

curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | bash

If permission is denied we can try with sudo

curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | sudo bash

Docker

A Dockerfile for running dotnet-script in a Linux container is available. Build:

cd build
docker build -t dotnet-script -f Dockerfile ..

And run:

docker run -it dotnet-script --version

Github

You can manually download all the releases in zip format from the GitHub releases page.

Usage

Our typical helloworld.csx might look like this:

Console.WriteLine("Hello world!");

That is all it takes and we can execute the script. Args are accessible via the global Args array.

dotnet script helloworld.csx

Scaffolding

Simply create a folder somewhere on your system and issue the following command.

dotnet script init

This will create main.csx along with the launch configuration needed to debug the script in VS Code.

.
├── .vscode
│   └── launch.json
├── main.csx
└── omnisharp.json

We can also initialize a folder using a custom filename.

dotnet script init custom.csx

Instead of main.csx which is the default, we now have a file named custom.csx.

.
├── .vscode
│   └── launch.json
├── custom.csx
└── omnisharp.json

Note: Executing dotnet script init inside a folder that already contains one or more script files will not create the main.csx file.

Running scripts

Scripts can be executed directly from the shell as if they were executables.

foo.csx arg1 arg2 arg3

OSX/Linux

Just like all scripts, on OSX/Linux you need to have a #! and mark the file as executable via chmod +x foo.csx. If you use dotnet script init to create your csx it will automatically have the #! directive and be marked as executable.

The OSX/Linux shebang directive should be #!/usr/bin/env dotnet-script

#!/usr/bin/env dotnet-script
Console.WriteLine("Hello world");

You can execute your script using dotnet script or dotnet-script, which allows you to pass arguments to control your script execution more.

foo.csx arg1 arg2 arg3
dotnet script foo.csx -- arg1 arg2 arg3
dotnet-script foo.csx -- arg1 arg2 arg3

Passing arguments to scripts

All arguments after -- are passed to the script in the following way:

dotnet script foo.csx -- arg1 arg2 arg3

Then you can access the arguments in the script context using the global Args collection:

foreach (var arg in Args)
{
    Console.WriteLine(arg);
}

All arguments before -- are processed by dotnet script. For example, the following command-line

dotnet script -d foo.csx -- -d

will pass the -d before -- to dotnet script and enable the debug mode whereas the -d after -- is passed to script for its own interpretation of the argument.

NuGet Packages

dotnet script has built-in support for referencing NuGet packages directly from within the script.

#r "nuget: AutoMapper, 6.1.0"


Note: Omnisharp needs to be restarted after adding a new package reference

Package Sources

We can define package sources using a NuGet.Config file in the script root folder. In addition to being used during execution of the script, it will also be used by OmniSharp that provides language services for packages resolved from these package sources.

As an alternative to maintaining a local NuGet.Config file, we can define these package sources globally, either at the user level or at the computer level, as described in Configuring NuGet Behaviour.
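A minimal sketch of such a NuGet.Config placed next to the script (the source name and URL are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="SomePackageSource" value="https://SomePackageSource/v3/index.json" />
  </packageSources>
</configuration>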

It is also possible to specify package sources when executing the script.

dotnet script foo.csx -s https://SomePackageSource

Multiple package sources can be specified like this:

dotnet script foo.csx -s https://SomePackageSource -s https://AnotherPackageSource

Creating DLLs or Exes from a CSX file

Dotnet-Script can create a standalone executable or DLL for your script.

Switch   Long switch        Description
-o       --output           Directory where the published executable should be placed. Defaults to a 'publish' folder in the current directory.
-n       --name             The name for the generated DLL (executable not supported at this time). Defaults to the name of the script.
         --dll              Publish to a .dll instead of an executable.
-c       --configuration    Configuration to use for publishing the script [Release/Debug]. Default is "Debug".
-d       --debug            Enables debug output.
-r       --runtime          The runtime used when publishing the self contained executable. Defaults to your current runtime.

You can run the executable directly, independent of a dotnet installation, while the DLL can be run using the dotnet CLI like this:

dotnet script exec {path_to_dll} -- arg1 arg2

Caching

We provide two types of caching, the dependency cache and the execution cache, which are explained in detail below. In order for either of these caches to be enabled, all NuGet package references must be specified using an exact version number. The reason for this constraint is that we need to make sure that we don't execute a script with a stale dependency graph.
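In practice this means pinning the package version in the #r directive, for example (package and version chosen for illustration):

#r "nuget: Newtonsoft.Json, 13.0.1"

With an exact version like this, both caches can be used; the --no-cache flag described below forces a full restore and recompilation regardless.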

Dependency Cache

In order to resolve the dependencies for a script, a dotnet restore is executed under the hood to produce a project.assets.json file from which we can figure out all the dependencies we need to add to the compilation. This is an out-of-process operation and represents a significant overhead to the script execution. So this cache works by looking at all the dependencies specified in the script(s), either in the form of NuGet package references or assembly file references. If these dependencies match the dependencies from the last script execution, we skip the restore and read the dependencies from the already generated project.assets.json file. If any of the dependencies have changed, we must restore again to obtain the new dependency graph.

Execution cache

In order to execute a script it needs to be compiled first and since that is a CPU and time consuming operation, we make sure that we only compile when the source code has changed. This works by creating a SHA256 hash from all the script files involved in the execution. This hash is written to a temporary location along with the DLL that represents the result of the script compilation. When a script is executed the hash is computed and compared with the hash from the previous compilation. If they match there is no need to recompile and we run from the already compiled DLL. If the hashes don't match, the cache is invalidated and we recompile.

You can override this automatic caching by passing --no-cache flag, which will bypass both caches and cause dependency resolution and script compilation to happen every time we execute the script.

Cache Location

The temporary location used for caches is a sub-directory named dotnet-script under (in order of priority):

  1. The path specified for the value of the environment variable named DOTNET_SCRIPT_CACHE_LOCATION, if defined and value is not empty.
  2. Linux distributions only: $XDG_CACHE_HOME if defined otherwise $HOME/.cache
  3. macOS only: ~/Library/Caches
  4. The value returned by Path.GetTempPath for the platform.

 

Debugging

The days of debugging scripts using Console.WriteLine are over. One major feature of dotnet script is the ability to debug scripts directly in VS Code. Just set a breakpoint anywhere in your script file(s) and hit F5(start debugging)


Script Packages

Script packages are a way of organizing reusable scripts into NuGet packages that can be consumed by other scripts. This means that we now can leverage scripting infrastructure without the need for any kind of bootstrapping.

Creating a script package

A script package is just a regular NuGet package that contains script files inside the content or contentFiles folder.

The following example shows how the scripts are laid out inside the NuGet package according to the standard convention.

└── contentFiles
    └── csx
        └── netstandard2.0
            └── main.csx

This example contains just the main.csx file in the root folder, but packages may have multiple script files either in the root folder or in subfolders below the root folder.

When loading a script package we will look for an entry point script to be loaded. This entry point script is identified by one of the following.

  • A script called main.csx in the root folder
  • A single script file in the root folder

If the entry point script cannot be determined, we will simply load all the script files in the package.

The advantage with using an entry point script is that we can control loading other scripts from the package.

Consuming a script package

To consume a script package, all we need to do is specify the NuGet package in the #load directive.

The following example loads the simple-targets package that contains script files to be included in our script.

#load "nuget:simple-targets-csx, 6.0.0"

using static SimpleTargets;
var targets = new TargetDictionary();

targets.Add("default", () => Console.WriteLine("Hello, world!"));

Run(Args, targets);

Note: Debugging also works for script packages so that we can easily step into the scripts that are brought in using the #load directive.

Remote Scripts

Scripts don't actually have to exist locally on the machine. We can also execute scripts that are made available on an http(s) endpoint.

This means that we can create a Gist on Github and execute it just by providing the URL to the Gist.

This Gist contains a script that prints out "Hello World"

We can execute the script like this

dotnet script https://gist.githubusercontent.com/seesharper/5d6859509ea8364a1fdf66bbf5b7923d/raw/0a32bac2c3ea807f9379a38e251d93e39c8131cb/HelloWorld.csx

That is a pretty long URL, so why not make it a TinyURL like this:

dotnet script https://tinyurl.com/y8cda9zt

Script Location

A pretty common scenario is that we have logic that is relative to the script path. We don't want to require the user to be in a certain directory for these paths to resolve correctly so here is how to provide the script path and the script folder regardless of the current working directory.

public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);

Tip: Put these methods as top level methods in a separate script file and #load that file wherever access to the script path and/or folder is needed.
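A short sketch of that tip (file names are arbitrary): keep the helpers in their own script and load them from the script that needs them.

// ScriptPath.csx
using System.Runtime.CompilerServices;
public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);

// main.csx
#load "ScriptPath.csx"
// [CallerFilePath] resolves at the call site, so this returns the folder of main.csx
var dataFile = Path.Combine(GetScriptFolder(), "data.txt");
Console.WriteLine(dataFile);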

REPL

This release contains a C# REPL (Read-Evaluate-Print-Loop). The REPL mode ("interactive mode") is started by executing dotnet-script without any arguments.

The interactive mode allows you to supply individual C# code blocks and have them executed as soon as you press Enter. The REPL is configured with the same default set of assembly references and using statements as regular CSX script execution.

Basic usage

Once dotnet-script starts you will see a prompt for input. You can start typing C# code there.

~$ dotnet script
> var x = 1;
> x+x
2

If you submit an unterminated expression into the REPL (no ; at the end), it will be evaluated and the result will be serialized using a formatter and printed in the output. This is a bit more interesting than just calling ToString() on the object, because it attempts to capture the actual structure of the object. For example:

~$ dotnet script
> var x = new List<string>();
> x.Add("foo");
> x
List<string>(1) { "foo" }
> x.Add("bar");
> x
List<string>(2) { "foo", "bar" }
>

Inline Nuget packages

REPL also supports inline Nuget packages - meaning the Nuget packages can be installed into the REPL from within the REPL. This is done via our #r and #load from Nuget support and uses identical syntax.

~$ dotnet script
> #r "nuget: Automapper, 6.1.1"
> using AutoMapper;
> typeof(MapperConfiguration)
[AutoMapper.MapperConfiguration]
> #load "nuget: simple-targets-csx, 6.0.0";
> using static SimpleTargets;
> typeof(TargetDictionary)
[Submission#0+SimpleTargets+TargetDictionary]

Multiline mode

Using Roslyn syntax parsing, we also support multiline REPL mode. This means that if you have an uncompleted code block and press Enter, we will automatically enter the multiline mode. The mode is indicated by the * character. This is particularly useful for declaring classes and other more complex constructs.

~$ dotnet script
> class Foo {
* public string Bar {get; set;}
* }
> var foo = new Foo();

REPL commands

Aside from the regular C# script code, you can invoke the following commands (directives) from within the REPL:

Command   Description
#load     Load a script into the REPL (same as #load usage in CSX)
#r        Load an assembly into the REPL (same as #r usage in CSX)
#reset    Reset the REPL back to initial state (without restarting it)
#cls      Clear the console screen without resetting the REPL state
#exit     Exits the REPL

Seeding REPL with a script

You can execute a CSX script and, at the end of it, drop yourself into the context of the REPL. This way, the REPL becomes "seeded" with your code - all the classes, methods or variables are available in the REPL context. This is achieved by running a script with an -i flag.

For example, given the following CSX script:

var msg = "Hello World";
Console.WriteLine(msg);

When you run this with the -i flag, Hello World is printed, REPL starts and msg variable is available in the REPL context.

~$ dotnet script foo.csx -i
Hello World
>

You can also seed the REPL from inside the REPL - at any point - by invoking a #load directive pointed at a specific file. For example:

~$ dotnet script
> #load "foo.csx"
Hello World
>

Piping

The following example shows how we can pipe data in and out of a script.

The UpperCase.csx script simply converts the standard input to upper case and writes it back out to standard output.

using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
    Write(streamReader.ReadToEnd().ToUpper());
}

We can now simply pipe the output from one command into our script like this.

echo "This is some text" | dotnet script UpperCase.csx
THIS IS SOME TEXT

Debugging

The first thing we need to do is add the following to the launch.json file, which allows VS Code to debug a running process.

{
    "name": ".NET Core Attach",
    "type": "coreclr",
    "request": "attach",
    "processId": "${command:pickProcess}"
}

To debug this script we need a way to attach the debugger in VS Code and the simplest thing we can do here is to wait for the debugger to attach by adding this method somewhere.

public static void WaitForDebugger()
{
    Console.WriteLine("Attach Debugger (VS Code)");
    while(!Debugger.IsAttached)
    {
    }
}

To debug the script when executing it from the command line we can do something like

WaitForDebugger();
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
    Write(streamReader.ReadToEnd().ToUpper()); // <- SET BREAKPOINT HERE
}

Now when we run the script from the command line we will get

$ echo "This is some text" | dotnet script UpperCase.csx
Attach Debugger (VS Code)

This now gives us a chance to attach the debugger before stepping into the script and from VS Code, select the .NET Core Attach debugger and pick the process that represents the executing script.

Once that is done we should see our breakpoint being hit.

Configuration(Debug/Release)

By default, scripts will be compiled using the debug configuration. This is to ensure that we can debug a script in VS Code as well as attaching a debugger for long running scripts.

There are however situations where we might need to execute a script that is compiled with the release configuration. For instance, running benchmarks using BenchmarkDotNet is not possible unless the script is compiled with the release configuration.

We can specify this when executing the script.

dotnet script foo.csx -c release

 

Nullable reference types

Starting from version 0.50.0, dotnet-script supports .NET Core 3.0 and all the C# 8 features. The way we deal with nullable reference types in dotnet-script is that we turn every warning related to nullable reference types into a compiler error. This means every warning between CS8600 and CS8655 is treated as an error when compiling the script.

Nullable reference types are turned off by default; they are enabled using the #nullable enable compiler directive. This means that existing scripts will continue to work, but we can now opt in to this new feature.

#!/usr/bin/env dotnet-script

#nullable enable

string name = null;

Trying to execute the script will result in the following error

main.csx(5,15): error CS8625: Cannot convert null literal to non-nullable reference type.

We will also see this when working with scripts in VS Code under the problems panel.


Download Details:
Author: filipw
Source Code: https://github.com/filipw/dotnet-script
License: MIT License

#dotnet  #aspdotnet  #csharp 

Cron Job Scheduling In Laravel

Today I will show you cron job scheduling in Laravel. Many times we need to run a piece of code at a specific interval, and running it manually every time is tedious; with Laravel's command scheduler we can create and run cron jobs from within the framework itself.

So here I will teach you how to create a cron job in Laravel and how to create a custom command in Laravel.
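The core of the approach (a sketch based on the standard Laravel setup): a single crontab entry runs Laravel's scheduler every minute, and the schedule method in app/Console/Kernel.php decides which of your commands are actually due.

* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1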

Cron Job Scheduling In Laravel

https://websolutionstuff.com/post/cron-job-scheduling-in-laravel

#cron job scheduling in laravel #laravel #scheduling #scheduler #cron #how to create cron job in laravel

Rufus Scheduler: Job Scheduler for Ruby (at, Cron, in and Every Jobs)

rufus-scheduler

Job scheduler for Ruby (at, cron, in and every jobs).

It uses threads.

Note: are you maybe looking for the README of rufus-scheduler 2.x? (especially if you're using Dashing, which is stuck on rufus-scheduler 2.0.24)

Quickstart:

# quickstart.rb

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.in '3s' do
  puts 'Hello... Rufus'
end

scheduler.join
  #
  # let the current thread join the scheduler thread
  #
  # (please note that this join should be removed when scheduling
  # in a web application (Rails and friends) initializer)

(run with ruby quickstart.rb)

Various forms of scheduling are supported:

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

# ...

scheduler.in '10d' do
  # do something in 10 days
end

scheduler.at '2030/12/12 23:30:00' do
  # do something at a given point in time
end

scheduler.every '3h' do
  # do something every 3 hours
end
scheduler.every '3h10m' do
  # do something every 3 hours and 10 minutes
end

scheduler.cron '5 0 * * *' do
  # do something every day, five minutes after midnight
  # (see "man 5 crontab" in your terminal)
end

# ...

Rufus-scheduler uses fugit for parsing time strings, et-orbi for pairing time and tzinfo timezones.

non-features

Rufus-scheduler (out of the box) is an in-process, in-memory scheduler. It uses threads.

It does not persist your schedules. When the process is gone and the scheduler instance with it, the schedules are gone.

A rufus-scheduler instance will go on scheduling while it is present among the objects in a Ruby process. To make it stop scheduling you have to call its #shutdown method.

related and similar gems

  • Whenever - let cron call back your Ruby code, trusted and reliable cron drives your schedule
  • ruby-clock - a clock process / job scheduler for Ruby
  • Clockwork - rufus-scheduler inspired gem
  • Crono - an in-Rails cron scheduler
  • PerfectSched - highly available distributed cron built on Sequel and more

(please note: rufus-scheduler is not a cron replacement)

note about the 3.0 line

It's a complete rewrite of rufus-scheduler.

There is no EventMachine-based scheduler anymore.

I don't know what this Ruby thing is, where are my Rails?

I'll drive you right to the tracks.

notable changes:

  • As said, no more EventMachine-based scheduler
  • scheduler.every('100') { will schedule every 100 seconds (previously, it would have been 0.1s). This aligns rufus-scheduler with Ruby's sleep(100)
  • The scheduler isn't catching the whole of Exception anymore, only StandardError
  • The error_handler is #on_error (instead of #on_exception), by default it now prints the details of the error to $stderr (used to be $stdout)
  • Rufus::Scheduler::TimeOutError renamed to Rufus::Scheduler::TimeoutError
  • Introduction of "interval" jobs. Whereas "every" jobs are like "every 10 minutes, do this", interval jobs are like "do that, then wait for 10 minutes, then do that again, and so on"
  • Introduction of a lockfile: true/filename mechanism to prevent multiple schedulers from executing
  • "discard_past" is on by default. If the scheduler (its host) sleeps for 1 hour and a every '10m' job is on, it will trigger once at wakeup, not 6 times (discard_past was false by default in rufus-scheduler 2.x). No intention to re-introduce discard_past: false in 3.0 for now.
  • Introduction of Scheduler #on_pre_trigger and #on_post_trigger callback points

getting help

So you need help. People can help you, but first help them help you, and don't waste their time. Provide a complete description of the issue. If it works on A but not on B and others have to ask you: "so what is different between A and B" you are wasting everyone's time.

"hello", "please" and "thanks" are not swear words.

Go read how to report bugs effectively, twice.

Update: help_help.md might help help you.

on Gitter

You can find help via chat over at https://gitter.im/floraison/fugit. It's fugit, et-orbi, and rufus-scheduler combined chat room.

Please be courteous.

issues

Yes, issues can be reported in rufus-scheduler issues, I'd actually prefer bugs in there. If there is nothing wrong with rufus-scheduler, a Stack Overflow question is better.

faq

scheduling

Rufus-scheduler supports five kinds of jobs. in, at, every, interval and cron jobs.

Most of the rufus-scheduler examples show block scheduling, but it's also OK to schedule handler instances or handler classes.

in, at, every, interval, cron

In and at jobs trigger once.

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.in '10d' do
  puts "10 days reminder for review X!"
end

scheduler.at '2014/12/24 2000' do
  puts "merry xmas!"
end

In jobs are scheduled with a time interval, they trigger after that time elapsed. At jobs are scheduled with a point in time, they trigger when that point in time is reached (better to choose a point in the future).

Every, interval and cron jobs trigger repeatedly.

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every '3h' do
  puts "change the oil filter!"
end

scheduler.interval '2h' do
  puts "thinking..."
  puts sleep(rand * 1000)
  puts "thought."
end

scheduler.cron '00 09 * * *' do
  puts "it's 9am! good morning!"
end

Every jobs try hard to trigger following the frequency they were scheduled with.

Interval jobs trigger, execute and then trigger again after the interval elapsed. (Every jobs measure the time between trigger starts; interval jobs measure the time between the end of one run and the start of the next.)

Cron jobs are based on the venerable cron utility (man 5 crontab). They trigger following a pattern given in (almost) the same language cron uses.

 

#schedule_x vs #x

schedule_in, schedule_at, schedule_cron, etc will return the new Job instance.

in, at, cron will return the new Job instance's id (a String).

job_id =
  scheduler.in '10d' do
    # ...
  end
job = scheduler.job(job_id)

# versus

job =
  scheduler.schedule_in '10d' do
    # ...
  end

# also

job =
  scheduler.in '10d', job: true do
    # ...
  end

#schedule and #repeat

Sometimes it pays to be less verbose.

The #schedule method schedules an at, in or cron job. It just decides based on its input. It returns the Job instance.

scheduler.schedule '10d' do; end.class
  # => Rufus::Scheduler::InJob

scheduler.schedule '2013/12/12 12:30' do; end.class
  # => Rufus::Scheduler::AtJob

scheduler.schedule '* * * * *' do; end.class
  # => Rufus::Scheduler::CronJob

The #repeat method schedules and returns an EveryJob or a CronJob.

scheduler.repeat '10d' do; end.class
  # => Rufus::Scheduler::EveryJob

scheduler.repeat '* * * * *' do; end.class
  # => Rufus::Scheduler::CronJob

(Yes, no combination here gives back an IntervalJob).

schedule blocks arguments (job, time)

A schedule block may be given 0, 1 or 2 arguments.

The first argument is "job", it's simply the Job instance involved. It might be useful if the job is to be unscheduled for some reason.

scheduler.every '10m' do |job|

  status = determine_pie_status

  if status == 'burnt' || status == 'cooked'
    stop_oven
    takeout_pie
    job.unschedule
  end
end

The second argument is "time", it's the time when the job got cleared for triggering (not Time.now).

Note that time is the time when the job got cleared for triggering. If there are mutexes involved, now = mutex_wait_time + time...

"every" jobs and changing the next_time in-flight

It's OK to change the next_time of an every job in-flight:

scheduler.every '10m' do |job|

  # ...

  status = determine_pie_status

  job.next_time = Time.now + 30 * 60 if status == 'burnt'
    #
    # if burnt, wait 30 minutes for the oven to cool a bit
end

It should work as well with cron jobs, not so with interval jobs whose next_time is computed after their block ends its current run.

scheduling handler instances

It's OK to pass any object, as long as it responds to #call(), when scheduling:

class Handler
  def self.call(job, time)
    p "- Handler called for #{job.id} at #{time}"
  end
end

scheduler.in '10d', Handler

# or

class OtherHandler
  def initialize(name)
    @name = name
  end
  def call(job, time)
    p "* #{time} - Handler #{name.inspect} called for #{job.id}"
  end
end

oh = OtherHandler.new('Doe')

scheduler.every '10m', oh
scheduler.in '3d5m', oh

The call method must accept 2 (job, time), 1 (job) or 0 arguments.

Note that time is the time when the job got cleared for triggering. If there are mutexes involved, now = mutex_wait_time + time...

scheduling handler classes

One can pass a handler class to rufus-scheduler when scheduling. Rufus will instantiate it and that instance will be available via job#handler.

class MyHandler
  attr_reader :count
  def initialize
    @count = 0
  end
  def call(job)
    @count += 1
    puts ". #{self.class} called at #{Time.now} (#{@count})"
  end
end

job = scheduler.schedule_every '35m', MyHandler

job.handler
  # => #<MyHandler:0x000000021034f0>
job.handler.count
  # => 0

If you want to keep that "block feeling":

job_id =
  scheduler.every '10m', Class.new do
    def call(job)
      puts ". hello #{self.inspect} at #{Time.now}"
    end
  end

pause and resume the scheduler

The scheduler can be paused via the #pause and #resume methods. One can determine if the scheduler is currently paused by calling #paused?.

While paused, the scheduler still accepts schedules, but no schedule will get triggered as long as #resume isn't called.
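A small sketch of that, using the methods named above:

scheduler = Rufus::Scheduler.new

scheduler.pause
scheduler.paused?  # => true; schedules are still accepted, but nothing triggers
scheduler.resume
scheduler.paused?  # => false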

job options

name: string

Sets the name of the job.

scheduler.cron '*/15 8 * * *', name: 'Robert' do |job|
  puts "A, it's #{Time.now} and my name is #{job.name}"
end

job1 =
  scheduler.schedule_cron '*/30 9 * * *', n: 'temporary' do |job|
    puts "B, it's #{Time.now} and my name is #{job.name}"
  end
# ...
job1.name = 'Beowulf'

blocking: true

By default, jobs are triggered in their own, new threads. When blocking: true, the job is triggered in the scheduler thread (a new thread is not created). Yes, while a blocking job is running, the scheduler is not scheduling.
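For example (a sketch; the work inside the block is a placeholder):

scheduler.every '10m', blocking: true do
  run_long_backup   # while this runs, the scheduler triggers nothing else
end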

overlap: false

Since, by default, jobs are triggered in their own new threads, job instances might overlap. For example, a job that takes 10 minutes and is scheduled every 7 minutes will have overlaps.

To prevent overlap, one can set overlap: false. Such a job will not trigger if one of its instances is already running.

The :overlap option is considered before the :mutex option when the scheduler is reviewing jobs for triggering.
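Following the example above (a 10-minute job scheduled every 7 minutes), a sketch:

scheduler.every '7m', overlap: false do
  work_that_takes_ten_minutes   # placeholder; a trigger is skipped while the previous run is still going
end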

mutex: mutex_instance / mutex_name / array of mutexes

When a job with a mutex triggers, the job's block is executed with the mutex around it, preventing other jobs with the same mutex from entering (it makes the other jobs wait until it exits the mutex).

This is different from overlap: false, which is, first, limited to instances of the same job, and, second, doesn't make the incoming job instance block/wait but give up.

:mutex accepts a mutex instance or a mutex name (String). It also accepts an array of mutex names / mutex instances. It allows for complex relations between jobs.

Array of mutexes: original idea and implementation by Rainux Luo

Note: creating lots of different mutexes is OK. Rufus-scheduler will place them in its Scheduler#mutexes hash... And they won't get garbage collected.

The :overlap option is considered before the :mutex option when the scheduler is reviewing jobs for triggering.
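A sketch with two jobs sharing a named mutex, so their blocks never run concurrently (the work inside the blocks is a placeholder):

scheduler.every '5m', mutex: 'accounts' do
  refresh_account_cache
end

scheduler.cron '0 * * * *', mutex: 'accounts' do
  reconcile_accounts   # waits if the other job currently holds the 'accounts' mutex
end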

timeout: duration or point in time

It's OK to specify a timeout when scheduling some work. After the time specified, it gets interrupted via a Rufus::Scheduler::TimeoutError.

scheduler.in '10d', timeout: '1d' do
  begin
    # ... do something
  rescue Rufus::Scheduler::TimeoutError
    # ... that something got interrupted after 1 day
  end
end

The :timeout option accepts either a duration (like "1d" or "2w3d") or a point in time (like "2013/12/12 12:00").

:first_at, :first_in, :first, :first_time

This option is for repeat jobs (cron / every) only.

It's used to specify the first time after which the repeat job should trigger for the first time.

In the case of an "every" job, this will be the first time (modulo the scheduler frequency) the job triggers. For a "cron" job as well, the :first will point to the first time the job has to trigger, the following trigger times are then determined by the cron string.

scheduler.every '2d', first_at: Time.now + 10 * 3600 do
  # ... every two days, but start in 10 hours
end

scheduler.every '2d', first_in: '10h' do
  # ... every two days, but start in 10 hours
end

scheduler.cron '00 14 * * *', first_in: '3d' do
  # ... every day at 14h00, but start after 3 * 24 hours
end

:first, :first_at and :first_in all accept a point in time or a duration (number or time string). Use the symbol you think makes your schedule more readable.

Note: it's OK to change the first_at (a Time instance) directly:

job.first_at = Time.now + 10
job.first_at = Rufus::Scheduler.parse('2029-12-12')

The :first option (in all its flavours) accepts a :now or :immediately value. That schedules the first occurrence for immediate triggering. Consider:

require 'rufus-scheduler'

s = Rufus::Scheduler.new

n = Time.now; p [ :scheduled_at, n, n.to_f ]

s.every '3s', first: :now do
  n = Time.now; p [ :in, n, n.to_f ]
end

s.join

that'll output something like:

[:scheduled_at, 2014-01-22 22:21:21 +0900, 1390396881.344438]
[:in, 2014-01-22 22:21:21 +0900, 1390396881.6453865]
[:in, 2014-01-22 22:21:24 +0900, 1390396884.648807]
[:in, 2014-01-22 22:21:27 +0900, 1390396887.651686]
[:in, 2014-01-22 22:21:30 +0900, 1390396890.6571937]
...

:last_at, :last_in, :last

This option is for repeat jobs (cron / every) only.

It indicates the point in time after which the job should unschedule itself.

scheduler.cron '5 23 * * *', last_in: '10d' do
  # ... do something every evening at 23:05 for 10 days
end

scheduler.every '10m', last_at: Time.now + 10 * 3600 do
  # ... do something every 10 minutes for 10 hours
end

scheduler.every '10m', last_in: 10 * 3600 do
  # ... do something every 10 minutes for 10 hours
end

:last, :last_at and :last_in all accept a point in time or a duration (number or time string). Use the symbol you think makes your schedule more readable.

Note: it's OK to change the last_at (nil or a Time instance) directly:

job.last_at = nil
  # remove the "last" bound

job.last_at = Rufus::Scheduler.parse('2029-12-12')
  # set the last bound

times: nb of times (before auto-unscheduling)

One can tell how many times a repeat job (CronJob or EveryJob) is to execute before unscheduling by itself.

scheduler.every '2d', times: 10 do
  # ... do something every two days, but not more than 10 times
end

scheduler.cron '0 23 * * *', times: 31 do
  # ... do something every day at 23:00 but do it no more than 31 times
end

It's OK to assign nil to :times to make sure the repeat job is not limited. It's useful when the :times is determined at scheduling time.

scheduler.cron '0 23 * * *', times: (nolimit ? nil : 10) do
  # ...
end

The value set by :times is accessible in the job. It can be modified anytime.

job =
  scheduler.cron '0 23 * * *' do
    # ...
  end

# later on...

job.times = 10
  # 10 days and it will be over

Job methods

When calling a schedule method, the id (String) of the job is returned. Longer schedule methods return Job instances directly. Calling the shorter schedule methods with the job: true also returns Job instances instead of Job ids (Strings).

  require 'rufus-scheduler'

  scheduler = Rufus::Scheduler.new

  job_id =
    scheduler.in '10d' do
      # ...
    end

  job =
    scheduler.schedule_in '1w' do
      # ...
    end

  job =
    scheduler.in '1w', job: true do
      # ...
    end

Those Job instances have a few interesting methods / properties:

id, job_id

Returns the job id.

job = scheduler.schedule_in('10d') do; end
job.id
  # => "in_1374072446.8923042_0.0_0"

scheduler

Returns the scheduler instance itself.

opts

Returns the options passed at the Job creation.

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.opts
  # => { :tag => 'hello' }

original

Returns the original schedule.

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.original
  # => '10d'

callable, handler

callable() returns the scheduled block (or the call method of the callable object passed in lieu of a block)

handler() returns nil if a block was scheduled, and the scheduled handler instance otherwise.

# when passing a block

job =
  scheduler.schedule_in('10d') do
    # ...
  end

job.handler
  # => nil
job.callable
  # => #<Proc:0x00000001dc6f58@/home/jmettraux/whatever.rb:115>

and

# when passing something else than a block

class MyHandler
  attr_reader :counter
  def initialize
    @counter = 0
  end
  def call(job, time)
    @counter = @counter + 1
  end
end

job = scheduler.schedule_in('10d', MyHandler.new)

job.handler
  # => #<MyHandler:0x0000000163ae88 @counter=0>
job.callable
  # => #<Method: MyHandler#call>

source_location

Added to rufus-scheduler 3.8.0.

Returns the array [ 'path/to/file.rb', 123 ] like Proc#source_location does.

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

job = scheduler.schedule_every('2h') { p Time.now }

p job.source_location
  # ==> [ '/home/jmettraux/rufus-scheduler/test.rb', 6 ]

scheduled_at

Returns the Time instance when the job got created.

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.scheduled_at
  # => 2013-07-17 23:48:54 +0900

last_time

Returns the last time the job triggered (it's usually nil for AtJob and InJob).

job = scheduler.schedule_every('10s') do; end

job.scheduled_at
  # => 2013-07-17 23:48:54 +0900
job.last_time
  # => nil (since we've just scheduled it)

# after 10 seconds

job.scheduled_at
  # => 2013-07-17 23:48:54 +0900 (same as above)
job.last_time
  # => 2013-07-17 23:49:04 +0900

previous_time

Returns the previous #next_time

scheduler.every('10s') do |job|
  puts "job scheduled for #{job.previous_time} triggered at #{Time.now}"
  puts "next time will be around #{job.next_time}"
  puts "."
end

last_work_time, mean_work_time

The job keeps track of how long its latest run took in the last_work_time attribute. For a one-time job (in, at) it's probably not very useful.

The attribute mean_work_time contains a computed mean work time. It's recomputed after every run (if it's a repeat job).
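
A minimal sketch of reading these attributes on a repeat job (the numbers shown are purely illustrative):

job =
  scheduler.schedule_every('30s') do
    # ... some recurring work ...
  end

# after a few triggers...

job.last_work_time
  # => 0.231 (seconds taken by the most recent run)
job.mean_work_time
  # => 0.198 (mean over the runs so far)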

next_times(n)

Returns an array of EtOrbi::EoTime instances (Time instances with a designated time zone), listing the n next occurrences for this job.

Please note that for "interval" jobs, a mean work time is computed each time and it's used by this #next_times(n) method to approximate the next times beyond the immediate next time.
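
For example (the returned times depend on when the call happens, so the values below are only indicative):

job = scheduler.schedule_cron('0 12 * * *') do; end

job.next_times(3)
  # => [ 2013-07-18 12:00:00 +0900,
  #      2013-07-19 12:00:00 +0900,
  #      2013-07-20 12:00:00 +0900 ]  (three EtOrbi::EoTime instances)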

unschedule

Unschedule the job, preventing it from firing again and removing it from the schedule. This doesn't prevent a running thread for this job to run until its end.

threads

Returns the list of threads currently "hosting" runs of this Job instance.

kill

Interrupts all the work threads currently running for this job instance. They discard their work and are free for their next run (of whatever job).

Note: this doesn't unschedule the Job instance.

Note: if the job is pooled for another run, a free work thread will probably pick up that next run and the job will appear as running again. You'd have to unschedule and kill to make sure the job doesn't run again.

running?

Returns true if there is at least one running Thread hosting a run of this Job instance.

scheduled?

Returns true if the job is scheduled (is due to trigger). For repeat jobs it should return true until the job gets unscheduled. "at" and "in" jobs will respond with false as soon as they start running (execution triggered).
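
A small sketch tying #scheduled?, #running? and #unschedule together:

job = scheduler.schedule_in('10d') do; end

job.scheduled?
  # => true (still due to trigger)
job.running?
  # => false (no work thread is hosting a run right now)

job.unschedule
job.scheduled?
  # => false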

pause, resume, paused?, paused_at

These four methods are only available to CronJob, EveryJob and IntervalJob instances. One can pause or resume such jobs thanks to these methods.

job =
  scheduler.schedule_every('10s') do
    # ...
  end

job.pause
  # => 2013-07-20 01:22:22 +0900
job.paused?
  # => true
job.paused_at
  # => 2013-07-20 01:22:22 +0900

job.resume
  # => nil

tags

Returns the list of tags attached to this Job instance.

By default, returns an empty array.

job = scheduler.schedule_in('10d') do; end
job.tags
  # => []

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.tags
  # => [ 'hello' ]

[]=, [], key?, has_key?, keys, values, and entries

Threads have thread-local variables; similarly, rufus-scheduler jobs have job-local variables. They behave like a dict (Hash) with thread-safe access.

job =
  @scheduler.schedule_every '1s' do |job|
    job[:timestamp] = Time.now.to_f
    job[:counter] ||= 0
    job[:counter] += 1
  end

sleep 3.6

job[:counter]
  # => 3

job.key?(:timestamp) # => true
job.has_key?(:timestamp) # => true
job.keys # => [ :timestamp, :counter ]

Locals can be set at schedule time:

job0 =
  @scheduler.schedule_cron '*/15 12 * * *', locals: { a: 0 } do
    # ...
  end
job1 =
  @scheduler.schedule_cron '*/15 13 * * *', l: { a: 1 } do
    # ...
  end

One can fetch the Hash directly with Job#locals. Of course, direct manipulation is not thread-safe.

job.locals.entries.each do |k, v|
  p "#{k}: #{v}"
end

call

Job instances have a #call method. It simply calls the scheduled block or callable immediately.

job =
  @scheduler.schedule_every '10m' do |job|
    # ...
  end

job.call

Warning: the Scheduler#on_error handler is not involved. Error handling is the responsibility of the caller.

If the call has to be rescued by the error handler of the scheduler, call(true) might help:

require 'rufus-scheduler'

s = Rufus::Scheduler.new

def s.on_error(job, err)
  if job
    p [ 'error in scheduled job', job.class, job.original, err.message ]
  else
    p [ 'error while scheduling', err.message ]
  end
rescue
  p $!
end

job =
  s.schedule_in('1d') do
    fail 'again'
  end

job.call(true)
  #
  # true lets the error_handler deal with error in the job call

AtJob and InJob methods

time

Returns when the job will trigger (hopefully).

next_time

An alias for time.
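
For instance (the date is arbitrary and the zone shown depends on your local settings):

job = scheduler.schedule_at('2030-12-12 12:00') do; end

job.time
  # => 2030-12-12 12:00:00 +0900 (when the job is due to trigger)
job.next_time
  # => same value as #time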

EveryJob, IntervalJob and CronJob methods

next_time

Returns the next time the job will trigger (hopefully).

count

Returns how many times the job fired.
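
A quick sketch (the count shown is illustrative):

job = scheduler.schedule_every('10s') do; end

job.next_time
  # => an EtOrbi::EoTime instance, roughly Time.now + 10 seconds

# a minute later...

job.count
  # => 6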

EveryJob methods

frequency

It returns the scheduling frequency. For a job scheduled "every 20s", it's 20.

It's used to determine if the job frequency is higher than the scheduler frequency (it raises an ArgumentError if that is the case).
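
For example, for a job scheduled every 20 seconds (shown as the parsed number of seconds):

job = scheduler.schedule_every('20s') do; end

job.frequency
  # => 20.0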

IntervalJob methods

interval

Returns the interval scheduled between each execution of the job.

Every jobs use a time duration between each start of their execution, while interval jobs use a time duration between the end of an execution and the start of the next.
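
A minimal sketch contrasting the two (the block bodies are placeholders):

every_job =
  scheduler.schedule_every('90s') do
    # the next run starts 90 seconds after the *start* of this run
  end

interval_job =
  scheduler.schedule_interval('90s') do
    # the next run starts 90 seconds after the *end* of this run
  end

interval_job.interval
  # => 90.0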

CronJob methods

brute_frequency

An expensive method to run, it's brute. It caches its results. By default it runs for 2017 (a non-leap year).

  require 'rufus-scheduler'

  Rufus::Scheduler.parse('* * * * *').brute_frequency
    #
    # => #<Fugit::Cron::Frequency:0x00007fdf4520c5e8
    #      @span=31536000.0, @delta_min=60, @delta_max=60,
    #      @occurrences=525600, @span_years=1.0, @yearly_occurrences=525600.0>
    #
    # Occurs 525600 times in a span of 1 year (2017) and 1 day.
    # There are at least 60 seconds between "triggers" and at most 60 seconds.

  Rufus::Scheduler.parse('0 12 * * *').brute_frequency
    # => #<Fugit::Cron::Frequency:0x00007fdf451ec6d0
    #      @span=31536000.0, @delta_min=86400, @delta_max=86400,
    #      @occurrences=365, @span_years=1.0, @yearly_occurrences=365.0>
  Rufus::Scheduler.parse('0 12 * * *').brute_frequency.to_debug_s
    # => "dmin: 1D, dmax: 1D, ocs: 365, spn: 52W1D, spnys: 1, yocs: 365"
    #
    # 365 occurrences, at most 1 day between each, at least 1 day.

The CronJob#frequency method found in rufus-scheduler < 3.5 has been retired.

looking up jobs

Scheduler#job(job_id)

The scheduler #job(job_id) method can be used to look up Job instances.

  require 'rufus-scheduler'

  scheduler = Rufus::Scheduler.new

  job_id =
    scheduler.in '10d' do
      # ...
    end

  # later on...

  job = scheduler.job(job_id)

Scheduler #jobs #at_jobs #in_jobs #every_jobs #interval_jobs and #cron_jobs

Are methods for looking up lists of scheduled Job instances.

Here is an example:

  #
  # let's unschedule all the at jobs

  scheduler.at_jobs.each(&:unschedule)

Scheduler#jobs(tag: / tags: x)

When scheduling a job, one can specify one or more tags attached to the job. These can be used to look up the job later on.

  scheduler.in '10d', tag: 'main_process' do
    # ...
  end
  scheduler.in '10d', tags: [ 'main_process', 'side_dish' ] do
    # ...
  end

  # ...

  jobs = scheduler.jobs(tag: 'main_process')
    # find all the jobs with the 'main_process' tag

  jobs = scheduler.jobs(tags: [ 'main_process', 'side_dish' ])
    # find all the jobs with the 'main_process' AND 'side_dish' tags

Scheduler#running_jobs

Returns the list of Job instances that currently have at least one run in progress.

Whereas the other "_jobs" methods scan the scheduled job list, this method scans the thread list to find the jobs. It thus includes jobs that are still running but are no longer scheduled (which happens for at and in jobs).
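
For example, listing what is currently executing:

scheduler.running_jobs.each do |job|
  puts "#{job.id} is running on #{job.threads.size} thread(s)"
end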

misc Scheduler methods

Scheduler#unschedule(job_or_job_id)

Unschedule a job given directly or by its id.
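
Both forms are accepted:

scheduler.unschedule(job)      # with a Job instance
scheduler.unschedule(job_id)   # or with its id (a String)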

Scheduler#shutdown

Shuts down the scheduler, ceases any scheduler/triggering activity.

Scheduler#shutdown(:wait)

Shuts down the scheduler, waits (blocks) until all the jobs cease running.

Scheduler#shutdown(wait: n)

Shuts down the scheduler, waits (blocks) at most n seconds until all the jobs cease running. (Jobs are killed after n seconds have elapsed).

Scheduler#shutdown(:kill)

Kills all the job threads and then shuts the scheduler down. Radical.
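
The four variants side by side (a sketch; pick the one matching your shutdown policy):

scheduler.shutdown             # stop scheduling, let running jobs finish on their own
scheduler.shutdown(:wait)      # stop scheduling, block until all running jobs are done
scheduler.shutdown(wait: 30)   # block at most 30 seconds, then kill the remaining jobs
scheduler.shutdown(:kill)      # kill the job threads right away, then shut down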

Scheduler#down?

Returns true if the scheduler has been shut down.

Scheduler#started_at

Returns the Time instance at which the scheduler got started.

Scheduler #uptime / #uptime_s

Returns the number of seconds the scheduler has been running.

#uptime_s returns this count as a String that is easier for humans to read, like "3d12m45s123".
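
A small sketch (the values shown are illustrative):

scheduler.started_at
  # => 2013-07-17 23:48:54 +0900 (a Time instance)
scheduler.uptime
  # => 123.456 (seconds)
scheduler.uptime_s
  # => "2m3s456"
scheduler.down?
  # => false (true once the scheduler has been shut down)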

Scheduler#join

Lets the current thread join the scheduling thread in rufus-scheduler. The thread comes back when the scheduler gets shut down.

#join is mostly used in standalone scheduling script (or tiny one file examples). Calling #join from a web application initializer will probably hijack the main thread and prevent the web application from being served. Do not put a #join in such a web application initializer file.

Scheduler#threads

Returns all the threads associated with the scheduler, including the scheduler thread itself.

Scheduler#work_threads(query=:all/:active/:vacant)

Lists the work threads associated with the scheduler. The query option defaults to :all.

  • :all : all the work threads
  • :active : all the work threads currently running a Job
  • :vacant : all the work threads currently not running a Job

Note that the main schedule thread will be returned if it is currently running a Job (i.e. one of those blocking: true jobs).
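
For example:

scheduler.work_threads.size            # all the work threads
scheduler.work_threads(:active).size   # work threads currently running a job
scheduler.work_threads(:vacant).size   # idle work threads in the pool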

Scheduler#scheduled?(job_or_job_id)

Returns true if the arg is a currently scheduled job (see Job#scheduled?).

Scheduler#occurrences(time0, time1)

Returns a hash { job => [ t0, t1, ... ] } mapping jobs to their potential trigger time within the [ time0, time1 ] span.

Please note that, for interval jobs, the #mean_work_time is used, so the result is only a prediction.

Scheduler#timeline(time0, time1)

Like #occurrences but returns a list [ [ t0, job0 ], [ t1, job1 ], ... ] of time + job pairs.
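
A sketch of both calls over the coming hour (job_a, job_b, t0, t1 stand in for Job instances and time values; actual results depend on what is scheduled):

now = Time.now

scheduler.occurrences(now, now + 3600)
  # => { job_a => [ t0, t1, ... ], job_b => [ t0, ... ] }

scheduler.timeline(now, now + 3600)
  # => [ [ t0, job_b ], [ t1, job_a ], ... ]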

dealing with job errors

The easy, job-granular way of dealing with errors is to rescue and deal with them immediately. The two next sections show examples. Skip them for explanations on how to deal with errors at the scheduler level.

block jobs

As mentioned, jobs can take care of their errors themselves.

scheduler.every '10m' do
  begin
    # do something that might fail...
  rescue => e
    $stderr.puts '-' * 80
    $stderr.puts e.message
    $stderr.puts e.backtrace
    $stderr.puts '-' * 80
  end
end

callable jobs

Jobs are not limited to blocks; here is how the above would look with a dedicated handler class.

scheduler.every '10m', Class.new do
  def call(job)
    # do something that might fail...
  rescue => e
    $stderr.puts '-' * 80
    $stderr.puts e.message
    $stderr.puts e.backtrace
    $stderr.puts '-' * 80
  end
end

TODO: talk about callable#on_error (if implemented)

(see scheduling handler instances and scheduling handler classes for more about those "callable jobs")

Rufus::Scheduler#stderr=

By default, rufus-scheduler intercepts all errors (that inherit from StandardError) and dumps abundant details to $stderr.

If, for example, you'd like to divert that flow to another file (descriptor), you can reassign $stderr for the current Ruby process

$stderr = File.open('/var/log/myapplication.log', 'ab')

or, you can limit that reassignment to the scheduler itself:

scheduler.stderr = File.open('/var/log/myapplication.log', 'ab')

Rufus::Scheduler#on_error(job, error)

We've just seen that, by default, rufus-scheduler dumps error information to $stderr. If one needs to completely change what happens in case of error, it's OK to overwrite #on_error

def scheduler.on_error(job, error)

  Logger.warn("intercepted error in #{job.id}: #{error.message}")
end

On Rails, the on_error method redefinition might look like:

def scheduler.on_error(job, error)

  Rails.logger.error(
    "err#{error.object_id} rufus-scheduler intercepted #{error.inspect}" +
    " in job #{job.inspect}")
  error.backtrace.each_with_index do |line, i|
    Rails.logger.error(
      "err#{error.object_id} #{i}: #{line}")
  end
end

Callbacks

Rufus::Scheduler #on_pre_trigger and #on_post_trigger callbacks

One can bind callbacks before and after jobs trigger:

s = Rufus::Scheduler.new

def s.on_pre_trigger(job, trigger_time)
  puts "triggering job #{job.id}..."
end

def s.on_post_trigger(job, trigger_time)
  puts "triggered job #{job.id}."
end

s.every '1s' do
  # ...
end

The trigger_time is the time at which the job triggers. It might be a bit before Time.now.

Warning: these two callbacks are executed in the scheduler thread, not in the work threads (the threads where the job execution really happens).

Rufus::Scheduler#around_trigger

One can create an around callback which will wrap a job:

def s.around_trigger(job)
  t = Time.now
  puts "Starting job #{job.id}..."
  yield
  puts "job #{job.id} finished in #{Time.now-t} seconds."
end

The around callback is executed in the same thread as the job execution it wraps.

Rufus::Scheduler#on_pre_trigger as a guard

Returning false in on_pre_trigger will prevent the job from triggering. Returning anything else (nil, -1, true, ...) will let the job trigger.

Note: your business logic should go in the scheduled block itself (or the scheduled instance). Don't put business logic in on_pre_trigger. Return false for admin reasons (backend down, etc), not for business reasons that are tied to the job itself.

def s.on_pre_trigger(job, trigger_time)

  return false if Backend.down?

  puts "triggering job #{job.id}..."
end

Rufus::Scheduler.new options

:frequency

By default, rufus-scheduler sleeps 0.300 seconds between steps. At each step it checks for jobs to trigger and so on.

The :frequency option lets you change that 0.300 second to something else.

scheduler = Rufus::Scheduler.new(frequency: 5)

It's OK to use a time string to specify the frequency.

scheduler = Rufus::Scheduler.new(frequency: '2h10m')
  # this scheduler will sleep 2 hours and 10 minutes between every "step"

Use with care.

lockfile: "mylockfile.txt"

This feature only works on OSes that support the flock (man 2 flock) call.

Starting the scheduler with lockfile: '.rufus-scheduler.lock' will make the scheduler attempt to create and lock the file .rufus-scheduler.lock in the current working directory. If that fails, the scheduler will not start.

The idea is to guarantee only one scheduler (in a group of schedulers sharing the same lockfile) is running.

This is useful in environments where the Ruby process holding the scheduler gets started multiple times.
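
A minimal sketch (the lockfile path is just an example):

scheduler = Rufus::Scheduler.new(lockfile: '.rufus-scheduler.lock')
  # only the process that manages to create and lock this file
  # will actually start scheduling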

If the lockfile mechanism here is not sufficient, you can plug your custom mechanism. It's explained in advanced lock schemes below.

:scheduler_lock

(since rufus-scheduler 3.0.9)

The scheduler lock is an object that responds to #lock and #unlock. The scheduler calls #lock when starting up. If the answer is false, the scheduler stops its initialization work and won't schedule anything.

Here is a sample of a scheduler lock that only lets the scheduler on host "coffee.example.com" start:

class HostLock
  def initialize(lock_name)
    @lock_name = lock_name
  end
  def lock
    @lock_name == `hostname -f`.strip
  end
  def unlock
    true
  end
end

scheduler =
  Rufus::Scheduler.new(scheduler_lock: HostLock.new('coffee.example.com'))

By default, the scheduler_lock is an instance of Rufus::Scheduler::NullLock, with a #lock that returns true.

:trigger_lock

(since rufus-scheduler 3.0.9)

The trigger lock is an object that responds to #lock. The scheduler calls that method on the job lock right before triggering any job. If the answer is false, the trigger doesn't happen and the job is not run (at least not in this scheduler).

Here is a (stupid) PingLock example: it will only let jobs trigger if an "other host" is not responding to ping. Do not use that in production, you don't want to fork a ping process for each trigger attempt...

class PingLock
  def initialize(other_host)
    @other_host = other_host
  end
  def lock
    ! system("ping -c 1 #{@other_host}")
  end
end

scheduler =
  Rufus::Scheduler.new(trigger_lock: PingLock.new('main.example.com'))

By default, the trigger_lock is an instance of Rufus::Scheduler::NullLock, with a #lock that always returns true.

As explained in advanced lock schemes, another way to tune that behaviour is by overriding the scheduler's #confirm_lock method. (You could also do that with an #on_pre_trigger callback).

:max_work_threads

In rufus-scheduler 2.x, by default, each job triggering received its own, brand new, thread of execution. In rufus-scheduler 3.x, execution happens in a pooled work thread. The max work thread count (the pool size) defaults to 28.

One can set this maximum value when starting the scheduler.

scheduler = Rufus::Scheduler.new(max_work_threads: 77)

It's OK to increase the :max_work_threads of a running scheduler.

scheduler.max_work_threads += 10

Rufus::Scheduler.singleton

Do not want to store a reference to your rufus-scheduler instance? Then Rufus::Scheduler.singleton can help; it returns a singleton instance of the scheduler, initialized the first time this class method is called.

Rufus::Scheduler.singleton.every '10s' { puts "hello, world!" }

It's OK to pass initialization arguments (like :frequency or :max_work_threads) but they will only be taken into account the first time .singleton is called.

Rufus::Scheduler.singleton(max_work_threads: 77)
Rufus::Scheduler.singleton(max_work_threads: 277) # no effect

The .s is a shortcut for .singleton.

Rufus::Scheduler.s.every '10s' { puts "hello, world!" }

advanced lock schemes

As seen above, rufus-scheduler proposes the :lockfile system out of the box. If in a group of schedulers only one is supposed to run, the lockfile mechanism prevents schedulers that have not set/created the lockfile from running.

There are situations where this is not sufficient.

By overriding #lock and #unlock, one can customize how schedulers lock.

This example was provided by Eric Lindvall:

class ZookeptScheduler < Rufus::Scheduler

  def initialize(zookeeper, opts={})
    @zk = zookeeper
    super(opts)
  end

  def lock
    @zk_locker = @zk.exclusive_locker('scheduler')
    @zk_locker.lock # returns true if the lock was acquired, false else
  end

  def unlock
    @zk_locker.unlock
  end

  def confirm_lock
    return false if down?
    @zk_locker.assert!
  rescue ZK::Exceptions::LockAssertionFailedError => e
    # we've lost the lock, shutdown (and return false to at least prevent
    # this job from triggering)
    shutdown
    false
  end
end

This uses a zookeeper to make sure only one scheduler in a group of distributed schedulers runs.

The methods #lock and #unlock are overridden and #confirm_lock is provided, to make sure that the lock is still valid.

The #confirm_lock method is called right before a job triggers (if it is provided). The more generic callback #on_pre_trigger is called right after #confirm_lock.

:scheduler_lock and :trigger_lock

(introduced in rufus-scheduler 3.0.9).

Another way of providing #lock, #unlock and #confirm_lock to a rufus-scheduler is by using the :scheduler_lock and :trigger_lock options.

See :trigger_lock and :scheduler_lock.

The scheduler lock may be used to prevent a scheduler from starting, while a trigger lock prevents individual jobs from triggering (the scheduler goes on scheduling).

One has to be careful with what goes in #confirm_lock or in a trigger lock, as it gets called before each trigger.

Warning: you may think you're heading towards "high availability" by using a trigger lock and having lots of schedulers at hand. It may be so if you limit yourself to scheduling the same set of jobs at scheduler startup. But if you add schedules at runtime, they stay local to their scheduler. There is no magic that propagates the jobs to all the schedulers in your pack.

parsing cronlines and time strings

(Please note that fugit does the heavy-lifting parsing work for rufus-scheduler).

Rufus::Scheduler provides a class method .parse to parse time durations and cron strings. It's what the scheduler uses when receiving schedules. One can use it directly (no need to instantiate a Scheduler).

require 'rufus-scheduler'

Rufus::Scheduler.parse('1w2d')
  # => 777600.0
Rufus::Scheduler.parse('1.0w1.0d')
  # => 777600.0

Rufus::Scheduler.parse('Sun Nov 18 16:01:00 2012').strftime('%c')
  # => 'Sun Nov 18 16:01:00 2012'

Rufus::Scheduler.parse('Sun Nov 18 16:01:00 2012 Europe/Berlin').strftime('%c %z')
  # => 'Sun Nov 18 15:01:00 2012 +0000'

Rufus::Scheduler.parse(0.1)
  # => 0.1

Rufus::Scheduler.parse('* * * * *')
  # => #<Fugit::Cron:0x00007fb7a3045508
  #      @original="* * * * *", @cron_s=nil,
  #      @seconds=[0], @minutes=nil, @hours=nil, @monthdays=nil, @months=nil,
  #      @weekdays=nil, @zone=nil, @timezone=nil>

It returns a number when the input is a duration and a Fugit::Cron instance when the input is a cron string.

It will raise an ArgumentError if it can't parse the input.

Beyond .parse, there are also .parse_cron and .parse_duration, for finer granularity.
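
For example:

require 'rufus-scheduler'

Rufus::Scheduler.parse_duration('1h30m')
  # => 5400.0
Rufus::Scheduler.parse_cron('0 12 * * *')
  # => a Fugit::Cron instance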

There is an interesting helper method named .to_duration_hash:

require 'rufus-scheduler'

Rufus::Scheduler.to_duration_hash(60)
  # => { :m => 1 }
Rufus::Scheduler.to_duration_hash(62.127)
  # => { :m => 1, :s => 2, :ms => 127 }

Rufus::Scheduler.to_duration_hash(62.127, drop_seconds: true)
  # => { :m => 1 }

cronline notations specific to rufus-scheduler

first Monday, last Sunday et al

To schedule something at noon every first Monday of the month:

scheduler.cron('00 12 * * mon#1') do
  # ...
end

To schedule something at noon the last Sunday of every month:

scheduler.cron('00 12 * * sun#-1') do
  # ...
end
#
# OR
#
scheduler.cron('00 12 * * sun#L') do
  # ...
end

Such cronlines can be tested with scripts like:

require 'rufus-scheduler'

Time.now
  # => 2013-10-26 07:07:08 +0900
Rufus::Scheduler.parse('* * * * mon#1').next_time.to_s
  # => 2013-11-04 00:00:00 +0900

L (last day of month)

L can be used in the "day" slot:

In this example, the cronline is supposed to trigger every last day of the month at noon:

require 'rufus-scheduler'
Time.now
  # => 2013-10-26 07:22:09 +0900
Rufus::Scheduler.parse('00 12 L * *').next_time.to_s
  # => 2013-10-31 12:00:00 +0900

negative day (x days before the end of the month)

It's OK to pass negative values in the "day" slot:

scheduler.cron '0 0 -5 * *' do
  # do it at 00h00 5 days before the end of the month...
end

Negative ranges (-10--5: 10 days before the end of the month to 5 days before the end of the month) are OK, but mixed positive / negative ranges will raise an ArgumentError.

Negative ranges with increments (-10---2/2) are accepted as well.

Descending day ranges are not accepted (10-8 or -8--10 for example).
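
For instance, a negative range in the "day" slot (a sketch based on the rules above):

scheduler.cron '0 0 -10--5 * *' do
  # at 00h00, from 10 days to 5 days before the end of the month
end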

a note about timezones

Cron schedules and at schedules support the specification of a timezone.

scheduler.cron '0 22 * * 1-5 America/Chicago' do
  # the job...
end

scheduler.at '2013-12-12 14:00 Pacific/Samoa' do
  puts "it's tea time!"
end

# or even

Rufus::Scheduler.parse("2013-12-12 14:00 Pacific/Saipan")
  # => #<Rufus::Scheduler::ZoTime:0x007fb424abf4e8 @seconds=1386820800.0, @zone=#<TZInfo::DataTimezone: Pacific/Saipan>, @time=nil>

I get "zotime.rb:41:in `initialize': cannot determine timezone from nil"

For when you see an error like:

rufus-scheduler/lib/rufus/scheduler/zotime.rb:41:
  in `initialize':
    cannot determine timezone from nil (etz:nil,tnz:"中国标准时间",tzid:nil)
      (ArgumentError)
    from rufus-scheduler/lib/rufus/scheduler/zotime.rb:198:in `new'
    from rufus-scheduler/lib/rufus/scheduler/zotime.rb:198:in `now'
    from rufus-scheduler/lib/rufus/scheduler.rb:561:in `start'
    ...

It may happen on Windows or on systems that poorly hint to Ruby which timezone to use. It can usually be solved by explicitly setting ENV['TZ'] before instantiating the scheduler:

ENV['TZ'] = 'Asia/Shanghai'
scheduler = Rufus::Scheduler.new
scheduler.every '2s' do
  puts "#{Time.now} Hello #{ENV['TZ']}!"
end

On Rails you might want to try with:

ENV['TZ'] = Time.zone.name # Rails only
scheduler = Rufus::Scheduler.new
scheduler.every '2s' do
  puts "#{Time.now} Hello #{ENV['TZ']}!"
end

(Hat tip to Alexander in gh-230)

Rails sets its timezone under config/application.rb.

Rufus-Scheduler 3.3.3 detects the presence of Rails and uses its timezone setting (tested with Rails 4), so setting ENV['TZ'] should not be necessary.

The value can be determined thanks to https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.

Use a "continent/city" identifier (for example "Asia/Shanghai"). Do not use an abbreviation (not "CST") and do not use a local time zone name (not "中国标准时间" nor "Eastern Standard Time" which, for instance, points to a time zone in America and to another one in Australia...).

If the error persists (and especially on Windows), try to add the tzinfo-data to your Gemfile, as in:

gem 'tzinfo-data'

or by manually requiring it before requiring rufus-scheduler (if you don't use Bundler):

require 'tzinfo/data'
require 'rufus-scheduler'

so Rails?

Yes, I know, all of the above is boring and you're only looking for a snippet to paste in your Ruby-on-Rails application to schedule...

Here is an example initializer:

#
# config/initializers/scheduler.rb

require 'rufus-scheduler'

# Let's use the rufus-scheduler singleton
#
s = Rufus::Scheduler.singleton


# Stupid recurrent task...
#
s.every '1m' do

  Rails.logger.info "hello, it's #{Time.now}"
  Rails.logger.flush
end

And now you tell me that this is good, but you want to schedule stuff from your controller.

Maybe:

class ScheController < ApplicationController

  # GET /sche/
  #
  def index

    job_id =
      Rufus::Scheduler.singleton.in '5s' do
        Rails.logger.info "time flies, it's now #{Time.now}"
      end

    render text: "scheduled job #{job_id}"
  end
end

The rufus-scheduler singleton is instantiated in the config/initializers/scheduler.rb file; it's then available throughout the webapp via Rufus::Scheduler.singleton.

Warning: this works well with single-process Ruby servers like Webrick and Thin. Using rufus-scheduler with Passenger or Unicorn requires a bit more knowledge and tuning, gently provided by a bit of googling and reading; see the FAQ above.

avoid scheduling when running the Ruby on Rails console

(Written in reply to gh-186)

If you don't want rufus-scheduler to trigger anything while running the Ruby on Rails console, running for tests/specs, or running from a Rake task, you can insert a conditional return statement before jobs are added to the scheduler instance:

#
# config/initializers/scheduler.rb

require 'rufus-scheduler'

return if defined?(Rails::Console) || Rails.env.test? || File.split($PROGRAM_NAME).last == 'rake'
  #
  # do not schedule when Rails is run from its console, for a test/spec, or
  # from a Rake task

# return if $PROGRAM_NAME.include?('spring')
  #
  # see https://github.com/jmettraux/rufus-scheduler/issues/186

s = Rufus::Scheduler.singleton

s.every '1m' do
  Rails.logger.info "hello, it's #{Time.now}"
  Rails.logger.flush
end

(Beware later versions of Rails where Spring takes care of pre-running the initializers. Running spring stop or disabling Spring might be necessary in some cases for changes to initializers to be taken into account.)

rails server -d

(Written in reply to https://github.com/jmettraux/rufus-scheduler/issues/165 )

There is the handy rails server -d that starts a development Rails as a daemon. The annoying thing is that the scheduler as seen above is started in the main process that then gets forked and daemonized. The rufus-scheduler thread (and any other thread) gets lost, no scheduling happens.

I avoid running -d in development mode and bother about daemonizing only for production deployment.

These are two well crafted articles on process daemonization, please read them:

If, anyway, you need something like rails server -d, why not try bundle exec unicorn -D instead? In my (limited) experience, it worked out of the box (well, had to add gem 'unicorn' to Gemfile first).

executor / reloader

You might benefit from wrapping your scheduled code in the executor or reloader. Read more here: https://guides.rubyonrails.org/threading_and_code_execution.html

support

See "getting help" above.


Author: jmettraux
Source code: https://github.com/jmettraux/rufus-scheduler
License: MIT license
