RustSec Crates - Audit Cargo.lock Files for Dependencies

RustSec Crates 🦀🛡️📦

The RustSec Advisory Database is a repository of security advisories filed against Rust crates published via crates.io.

The advisory database itself can be found at:

https://github.com/RustSec/advisory-db

About this repository

This repository contains a Cargo Workspace with all of the crates maintained by the RustSec project:

(each crate also links to its crates.io page, its documentation and its CI build)

  • cargo-audit: Audit Cargo.lock against the advisory DB
  • cargo-lock: Self-contained Cargo.lock parser
  • cvss: Common Vulnerability Scoring System
  • platforms: Rust platform registry
  • rustsec: Advisory DB client library
  • rustsec-admin: Linter and web site generator

Download Details:
Author: rustsec
Source Code: https://github.com/rustsec/rustsec
License:

#rust #rustlang #staticanalysis

Rufus Scheduler: Job Scheduler for Ruby (at, Cron, in and Every Jobs)

rufus-scheduler

Job scheduler for Ruby (at, cron, in and every jobs).

It uses threads.

Note: are you perhaps looking for the README of rufus-scheduler 2.x? (especially if you're using Dashing, which is stuck on rufus-scheduler 2.0.24)

Quickstart:

# quickstart.rb

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.in '3s' do
  puts 'Hello... Rufus'
end

scheduler.join
  #
  # let the current thread join the scheduler thread
  #
  # (please note that this join should be removed when scheduling
  # in a web application (Rails and friends) initializer)

(run with ruby quickstart.rb)

Various forms of scheduling are supported:

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

# ...

scheduler.in '10d' do
  # do something in 10 days
end

scheduler.at '2030/12/12 23:30:00' do
  # do something at a given point in time
end

scheduler.every '3h' do
  # do something every 3 hours
end
scheduler.every '3h10m' do
  # do something every 3 hours and 10 minutes
end

scheduler.cron '5 0 * * *' do
  # do something every day, five minutes after midnight
  # (see "man 5 crontab" in your terminal)
end

# ...

Rufus-scheduler uses fugit for parsing time strings, et-orbi for pairing time and tzinfo timezones.

non-features

Rufus-scheduler (out of the box) is an in-process, in-memory scheduler. It uses threads.

It does not persist your schedules. When the process is gone and the scheduler instance with it, the schedules are gone.

A rufus-scheduler instance will go on scheduling while it is present among the objects in a Ruby process. To make it stop scheduling you have to call its #shutdown method.

related and similar gems

  • Whenever - let cron call back your Ruby code, trusted and reliable cron drives your schedule
  • ruby-clock - a clock process / job scheduler for Ruby
  • Clockwork - rufus-scheduler inspired gem
  • Crono - an in-Rails cron scheduler
  • PerfectSched - highly available distributed cron built on Sequel and more

(please note: rufus-scheduler is not a cron replacement)

note about the 3.0 line

It's a complete rewrite of rufus-scheduler.

There is no EventMachine-based scheduler anymore.

I don't know what this Ruby thing is, where are my Rails?

I'll drive you right to the tracks.

notable changes:

  • As said, no more EventMachine-based scheduler
  • scheduler.every('100') { will schedule every 100 seconds (previously, it would have been 0.1s). This aligns rufus-scheduler with Ruby's sleep(100)
  • The scheduler isn't catching the whole of Exception anymore, only StandardError
  • The error_handler is #on_error (instead of #on_exception), by default it now prints the details of the error to $stderr (used to be $stdout)
  • Rufus::Scheduler::TimeOutError renamed to Rufus::Scheduler::TimeoutError
  • Introduction of "interval" jobs. Whereas "every" jobs are like "every 10 minutes, do this", interval jobs are like "do that, then wait for 10 minutes, then do that again, and so on"
  • Introduction of a lockfile: true/filename mechanism to prevent multiple schedulers from executing
  • "discard_past" is on by default. If the scheduler (its host) sleeps for 1 hour and a every '10m' job is on, it will trigger once at wakeup, not 6 times (discard_past was false by default in rufus-scheduler 2.x). No intention to re-introduce discard_past: false in 3.0 for now.
  • Introduction of Scheduler #on_pre_trigger and #on_post_trigger callback points

getting help

So you need help. People can help you, but first help them help you, and don't waste their time. Provide a complete description of the issue. If it works on A but not on B and others have to ask you: "so what is different between A and B" you are wasting everyone's time.

"hello", "please" and "thanks" are not swear words.

Go read how to report bugs effectively, twice.

Update: help_help.md might help help you.

on Gitter

You can find help via chat over at https://gitter.im/floraison/fugit. It's the combined chat room for fugit, et-orbi, and rufus-scheduler.

Please be courteous.

issues

Yes, issues can be reported in rufus-scheduler issues, I'd actually prefer bugs in there. If there is nothing wrong with rufus-scheduler, a Stack Overflow question is better.

faq

scheduling

Rufus-scheduler supports five kinds of jobs: in, at, every, interval and cron jobs.

Most of the rufus-scheduler examples show block scheduling, but it's also OK to schedule handler instances or handler classes.

in, at, every, interval, cron

In and at jobs trigger once.

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.in '10d' do
  puts "10 days reminder for review X!"
end

scheduler.at '2014/12/24 2000' do
  puts "merry xmas!"
end

In jobs are scheduled with a time interval; they trigger after that time has elapsed. At jobs are scheduled with a point in time; they trigger when that point in time is reached (better to choose a point in the future).

Every, interval and cron jobs trigger repeatedly.

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every '3h' do
  puts "change the oil filter!"
end

scheduler.interval '2h' do
  puts "thinking..."
  puts sleep(rand * 1000)
  puts "thought."
end

scheduler.cron '00 09 * * *' do
  puts "it's 9am! good morning!"
end

Every jobs try hard to trigger following the frequency they were scheduled with.

Interval jobs trigger, execute and then trigger again after the interval elapsed. (every jobs time between trigger times, interval jobs time between trigger termination and the next trigger start).

Cron jobs are based on the venerable cron utility (man 5 crontab). They trigger following a pattern given in (almost) the same language cron uses.

 

#schedule_x vs #x

schedule_in, schedule_at, schedule_cron, etc will return the new Job instance.

in, at, cron will return the new Job instance's id (a String).

job_id =
  scheduler.in '10d' do
    # ...
  end
job = scheduler.job(job_id)

# versus

job =
  scheduler.schedule_in '10d' do
    # ...
  end

# also

job =
  scheduler.in '10d', job: true do
    # ...
  end

#schedule and #repeat

Sometimes it pays to be less verbose.

The #schedule method schedules an at, in or cron job. It just decides based on its input. It returns the Job instance.

scheduler.schedule '10d' do; end.class
  # => Rufus::Scheduler::InJob

scheduler.schedule '2013/12/12 12:30' do; end.class
  # => Rufus::Scheduler::AtJob

scheduler.schedule '* * * * *' do; end.class
  # => Rufus::Scheduler::CronJob

The #repeat method schedules and returns an EveryJob or a CronJob.

scheduler.repeat '10d' do; end.class
  # => Rufus::Scheduler::EveryJob

scheduler.repeat '* * * * *' do; end.class
  # => Rufus::Scheduler::CronJob

(Yes, no combination here gives back an IntervalJob).

schedule blocks arguments (job, time)

A schedule block may be given 0, 1 or 2 arguments.

The first argument is "job", it's simply the Job instance involved. It might be useful if the job is to be unscheduled for some reason.

scheduler.every '10m' do |job|

  status = determine_pie_status

  if status == 'burnt' || status == 'cooked'
    stop_oven
    takeout_pie
    job.unschedule
  end
end

The second argument is "time", it's the time when the job got cleared for triggering (not Time.now).

Note that time is the time when the job got cleared for triggering. If there are mutexes involved, now = mutex_wait_time + time...

"every" jobs and changing the next_time in-flight

It's OK to change the next_time of an every job in-flight:

scheduler.every '10m' do |job|

  # ...

  status = determine_pie_status

  job.next_time = Time.now + 30 * 60 if status == 'burnt'
    #
    # if burnt, wait 30 minutes for the oven to cool a bit
end

It should work as well with cron jobs, not so with interval jobs whose next_time is computed after their block ends its current run.

scheduling handler instances

It's OK to pass any object, as long as it responds to #call(), when scheduling:

class Handler
  def self.call(job, time)
    p "- Handler called for #{job.id} at #{time}"
  end
end

scheduler.in '10d', Handler

# or

class OtherHandler
  def initialize(name)
    @name = name
  end
  def call(job, time)
    p "* #{time} - Handler #{name.inspect} called for #{job.id}"
  end
end

oh = OtherHandler.new('Doe')

scheduler.every '10m', oh
scheduler.in '3d5m', oh

The call method must accept 2 (job, time), 1 (job) or 0 arguments.

Note that time is the time when the job got cleared for triggering. If there are mutexes involved, now = mutex_wait_time + time...

scheduling handler classes

One can pass a handler class to rufus-scheduler when scheduling. Rufus will instantiate it and that instance will be available via job#handler.

class MyHandler
  attr_reader :count
  def initialize
    @count = 0
  end
  def call(job)
    @count += 1
    puts ". #{self.class} called at #{Time.now} (#{@count})"
  end
end

job = scheduler.schedule_every '35m', MyHandler

job.handler
  # => #<MyHandler:0x000000021034f0>
job.handler.count
  # => 0

If you want to keep that "block feeling":

job_id =
  scheduler.every '10m', Class.new do
    def call(job)
      puts ". hello #{self.inspect} at #{Time.now}"
    end
  end

pause and resume the scheduler

The scheduler can be paused via the #pause and #resume methods. One can determine if the scheduler is currently paused by calling #paused?.

While paused, the scheduler still accepts schedules, but no schedule will get triggered as long as #resume isn't called.
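
For example (a small sketch of the methods just described):

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every '10s' do
  puts "tick #{Time.now}"
end

scheduler.pause
scheduler.paused?
  # => true, schedules are still accepted but nothing triggers

scheduler.resume
scheduler.paused?
  # => false, triggering resumes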

job options

name: string

Sets the name of the job.

scheduler.cron '*/15 8 * * *', name: 'Robert' do |job|
  puts "A, it's #{Time.now} and my name is #{job.name}"
end

job1 =
  scheduler.schedule_cron '*/30 9 * * *', n: 'temporary' do |job|
    puts "B, it's #{Time.now} and my name is #{job.name}"
  end
# ...
job1.name = 'Beowulf'

blocking: true

By default, jobs are triggered in their own, new threads. When blocking: true, the job is triggered in the scheduler thread (a new thread is not created). Yes, while a blocking job is running, the scheduler is not scheduling.
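
A quick sketch:

scheduler.every '10m', blocking: true do
  # this block runs in the scheduler thread itself;
  # while it is running, no other job gets triggered
end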

overlap: false

Since, by default, jobs are triggered in their own new threads, job instances might overlap. For example, a job that takes 10 minutes and is scheduled every 7 minutes will have overlaps.

To prevent overlap, one can set overlap: false. Such a job will not trigger if one of its instances is already running.

The :overlap option is considered before the :mutex option when the scheduler is reviewing jobs for triggering.
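
A sketch of the example above:

scheduler.every '7m', overlap: false do
  # takes around 10 minutes...
  # triggers that would overlap a still-running instance
  # of this job are simply skipped
end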

mutex: mutex_instance / mutex_name / array of mutexes

When a job with a mutex triggers, the job's block is executed with the mutex around it, preventing other jobs with the same mutex from entering (it makes the other jobs wait until it exits the mutex).

This is different from overlap: false, which is, first, limited to instances of the same job, and, second, doesn't make the incoming job instance block/wait but give up.

:mutex accepts a mutex instance or a mutex name (String). It also accepts an array of mutex names / mutex instances. It allows for complex relations between jobs.

Array of mutexes: original idea and implementation by Rainux Luo

Note: creating lots of different mutexes is OK. Rufus-scheduler will place them in its Scheduler#mutexes hash... And they won't get garbage collected.

The :overlap option is considered before the :mutex option when the scheduler is reviewing jobs for triggering.
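
A couple of sketches (the mutex names are arbitrary):

scheduler.every '10m', mutex: 'oven' do
  # only one job holding the 'oven' mutex runs at a time,
  # other 'oven' jobs wait for their turn
end

scheduler.every '5m', mutex: [ 'oven', 'fridge' ] do
  # this job waits until it holds both mutexes
end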

timeout: duration or point in time

It's OK to specify a timeout when scheduling some work. After the time specified, it gets interrupted via a Rufus::Scheduler::TimeoutError.

scheduler.in '10d', timeout: '1d' do
  begin
    # ... do something
  rescue Rufus::Scheduler::TimeoutError
    # ... that something got interrupted after 1 day
  end
end

The :timeout option accepts either a duration (like "1d" or "2w3d") or a point in time (like "2013/12/12 12:00").

:first_at, :first_in, :first, :first_time

This option is for repeat jobs (cron / every) only.

It's used to specify the first time after which the repeat job should trigger for the first time.

In the case of an "every" job, this will be the first time (modulo the scheduler frequency) the job triggers. For a "cron" job as well, the :first will point to the first time the job has to trigger, the following trigger times are then determined by the cron string.

scheduler.every '2d', first_at: Time.now + 10 * 3600 do
  # ... every two days, but start in 10 hours
end

scheduler.every '2d', first_in: '10h' do
  # ... every two days, but start in 10 hours
end

scheduler.cron '00 14 * * *', first_in: '3d' do
  # ... every day at 14h00, but start after 3 * 24 hours
end

:first, :first_at and :first_in all accept a point in time or a duration (number or time string). Use the symbol you think makes your schedule more readable.

Note: it's OK to change the first_at (a Time instance) directly:

job.first_at = Time.now + 10
job.first_at = Rufus::Scheduler.parse('2029-12-12')

The :first option (in all its flavours) accepts a :now or :immediately value. That schedules the first occurrence for immediate triggering. Consider:

require 'rufus-scheduler'

s = Rufus::Scheduler.new

n = Time.now; p [ :scheduled_at, n, n.to_f ]

s.every '3s', first: :now do
  n = Time.now; p [ :in, n, n.to_f ]
end

s.join

that'll output something like:

[:scheduled_at, 2014-01-22 22:21:21 +0900, 1390396881.344438]
[:in, 2014-01-22 22:21:21 +0900, 1390396881.6453865]
[:in, 2014-01-22 22:21:24 +0900, 1390396884.648807]
[:in, 2014-01-22 22:21:27 +0900, 1390396887.651686]
[:in, 2014-01-22 22:21:30 +0900, 1390396890.6571937]
...

:last_at, :last_in, :last

This option is for repeat jobs (cron / every) only.

It indicates the point in time after which the job should unschedule itself.

scheduler.cron '5 23 * * *', last_in: '10d' do
  # ... do something every evening at 23:05 for 10 days
end

scheduler.every '10m', last_at: Time.now + 10 * 3600 do
  # ... do something every 10 minutes for 10 hours
end

scheduler.every '10m', last_in: 10 * 3600 do
  # ... do something every 10 minutes for 10 hours
end

:last, :last_at and :last_in all accept a point in time or a duration (number or time string). Use the symbol you think makes your schedule more readable.

Note: it's OK to change the last_at (nil or a Time instance) directly:

job.last_at = nil
  # remove the "last" bound

job.last_at = Rufus::Scheduler.parse('2029-12-12')
  # set the last bound

times: nb of times (before auto-unscheduling)

One can tell how many times a repeat job (CronJob or EveryJob) is to execute before unscheduling by itself.

scheduler.every '2d', times: 10 do
  # ... do something every two days, but not more than 10 times
end

scheduler.cron '0 23 * * *', times: 31 do
  # ... do something every day at 23:00 but do it no more than 31 times
end

It's OK to assign nil to :times to make sure the repeat job is not limited. It's useful when the :times is determined at scheduling time.

scheduler.cron '0 23 * * *', times: (nolimit ? nil : 10) do
  # ...
end

The value set by :times is accessible in the job. It can be modified anytime.

job =
  scheduler.cron '0 23 * * *' do
    # ...
  end

# later on...

job.times = 10
  # 10 days and it will be over

Job methods

When calling a schedule method, the id (String) of the job is returned. Longer schedule methods return Job instances directly. Calling the shorter schedule methods with the job: true also returns Job instances instead of Job ids (Strings).

  require 'rufus-scheduler'

  scheduler = Rufus::Scheduler.new

  job_id =
    scheduler.in '10d' do
      # ...
    end

  job =
    scheduler.schedule_in '1w' do
      # ...
    end

  job =
    scheduler.in '1w', job: true do
      # ...
    end

Those Job instances have a few interesting methods / properties:

id, job_id

Returns the job id.

job = scheduler.schedule_in('10d') do; end
job.id
  # => "in_1374072446.8923042_0.0_0"

scheduler

Returns the scheduler instance itself.

opts

Returns the options passed at the Job creation.

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.opts
  # => { :tag => 'hello' }

original

Returns the original schedule.

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.original
  # => '10d'

callable, handler

callable() returns the scheduled block (or the call method of the callable object passed in lieu of a block)

handler() returns nil if a block was scheduled and the instance scheduled otherwise.

# when passing a block

job =
  scheduler.schedule_in('10d') do
    # ...
  end

job.handler
  # => nil
job.callable
  # => #<Proc:0x00000001dc6f58@/home/jmettraux/whatever.rb:115>

and

# when passing something else than a block

class MyHandler
  attr_reader :counter
  def initialize
    @counter = 0
  end
  def call(job, time)
    @counter = @counter + 1
  end
end

job = scheduler.schedule_in('10d', MyHandler.new)

job.handler
  # => #<Method: MyHandler#call>
job.callable
  # => #<MyHandler:0x0000000163ae88 @counter=0>

source_location

Added to rufus-scheduler 3.8.0.

Returns the array [ 'path/to/file.rb', 123 ] like Proc#source_location does.

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

job = scheduler.schedule_every('2h') { p Time.now }

p job.source_location
  # ==> [ '/home/jmettraux/rufus-scheduler/test.rb', 6 ]

scheduled_at

Returns the Time instance when the job got created.

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.scheduled_at
  # => 2013-07-17 23:48:54 +0900

last_time

Returns the last time the job triggered (is usually nil for AtJob and InJob).

job = scheduler.schedule_every('10s') do; end

job.scheduled_at
  # => 2013-07-17 23:48:54 +0900
job.last_time
  # => nil (since we've just scheduled it)

# after 10 seconds

job.scheduled_at
  # => 2013-07-17 23:48:54 +0900 (same as above)
job.last_time
  # => 2013-07-17 23:49:04 +0900

previous_time

Returns the previous #next_time

scheduler.every('10s') do |job|
  puts "job scheduled for #{job.previous_time} triggered at #{Time.now}"
  puts "next time will be around #{job.next_time}"
  puts "."
end

last_work_time, mean_work_time

The job keeps track of how long its work was in the last_work_time attribute. For a one time job (in, at) it's probably not very useful.

The attribute mean_work_time contains a computed mean work time. It's recomputed after every run (if it's a repeat job).
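
A small sketch:

job = scheduler.schedule_every('30s') do
  # ...
end

# after a few triggers...

job.last_work_time
  # => duration (in seconds) of the latest run
job.mean_work_time
  # => mean duration (in seconds) of the runs so far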

next_times(n)

Returns an array of EtOrbi::EoTime instances (Time instances with a designated time zone), listing the n next occurrences for this job.

Please note that for "interval" jobs, a mean work time is computed each time and it's used by this #next_times(n) method to approximate the next times beyond the immediate next time.
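
For example:

job = scheduler.schedule_every('10m') do
  # ...
end

job.next_times(3)
  # => array of the 3 next occurrences, as EtOrbi::EoTime instances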

unschedule

Unschedule the job, preventing it from firing again and removing it from the schedule. This doesn't prevent an already running thread for this job from running until its end.

threads

Returns the list of threads currently "hosting" runs of this Job instance.

kill

Interrupts all the work threads currently running for this job instance. They discard their work and are free for their next run (of whatever job).

Note: this doesn't unschedule the Job instance.

Note: if the job is pooled for another run, a free work thread will probably pick up that next run and the job will appear as running again. You'd have to unschedule and kill to make sure the job doesn't run again.

running?

Returns true if there is at least one running Thread hosting a run of this Job instance.

scheduled?

Returns true if the job is scheduled (is due to trigger). For repeat jobs it should return true until the job gets unscheduled. "at" and "in" jobs will respond with false as soon as they start running (execution triggered).

pause, resume, paused?, paused_at

These four methods are only available to CronJob, EveryJob and IntervalJob instances. One can pause or resume such jobs thanks to these methods.

job =
  scheduler.schedule_every('10s') do
    # ...
  end

job.pause
  # => 2013-07-20 01:22:22 +0900
job.paused?
  # => true
job.paused_at
  # => 2013-07-20 01:22:22 +0900

job.resume
  # => nil

tags

Returns the list of tags attached to this Job instance.

By default, returns an empty array.

job = scheduler.schedule_in('10d') do; end
job.tags
  # => []

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.tags
  # => [ 'hello' ]

[]=, [], key?, has_key?, keys, values, and entries

Threads have thread-local variables, similarly Rufus-scheduler jobs have job-local variables. Those are more like a dict with thread-safe access.

job =
  @scheduler.schedule_every '1s' do |job|
    job[:timestamp] = Time.now.to_f
    job[:counter] ||= 0
    job[:counter] += 1
  end

sleep 3.6

job[:counter]
  # => 3

job.key?(:timestamp) # => true
job.has_key?(:timestamp) # => true
job.keys # => [ :timestamp, :counter ]

Locals can be set at schedule time:

job0 =
  @scheduler.schedule_cron '*/15 12 * * *', locals: { a: 0 } do
    # ...
  end
job1 =
  @scheduler.schedule_cron '*/15 13 * * *', l: { a: 1 } do
    # ...
  end

One can fetch the Hash directly with Job#locals. Of course, direct manipulation is not thread-safe.

job.locals.entries.each do |k, v|
  p "#{k}: #{v}"
end

call

Job instances have a #call method. It simply calls the scheduled block or callable immediately.

job =
  @scheduler.schedule_every '10m' do |job|
    # ...
  end

job.call

Warning: the Scheduler#on_error handler is not involved. Error handling is the responsibility of the caller.

If the call has to be rescued by the error handler of the scheduler, call(true) might help:

require 'rufus-scheduler'

s = Rufus::Scheduler.new

def s.on_error(job, err)
  if job
    p [ 'error in scheduled job', job.class, job.original, err.message ]
  else
    p [ 'error while scheduling', err.message ]
  end
rescue
  p $!
end

job =
  s.schedule_in('1d') do
    fail 'again'
  end

job.call(true)
  #
  # true lets the error_handler deal with error in the job call

AtJob and InJob methods

time

Returns when the job will trigger (hopefully).

next_time

An alias for time.

EveryJob, IntervalJob and CronJob methods

next_time

Returns the next time the job will trigger (hopefully).

count

Returns how many times the job fired.

EveryJob methods

frequency

It returns the scheduling frequency. For a job scheduled "every 20s", it's 20.

It's used to determine if the job frequency is higher than the scheduler frequency (it raises an ArgumentError if that is the case).
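
For example (a small sketch):

job = scheduler.schedule_every('20s') do
  # ...
end

job.frequency
  # => 20.0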

IntervalJob methods

interval

Returns the interval scheduled between each execution of the job.

Every jobs use a time duration between each start of their execution, while interval jobs use a time duration between the end of an execution and the start of the next.

CronJob methods

brute_frequency

An expensive method to run, it's brute. It caches its results. By default it runs over the year 2017 (a non-leap year).

  require 'rufus-scheduler'

  Rufus::Scheduler.parse('* * * * *').brute_frequency
    #
    # => #<Fugit::Cron::Frequency:0x00007fdf4520c5e8
    #      @span=31536000.0, @delta_min=60, @delta_max=60,
    #      @occurrences=525600, @span_years=1.0, @yearly_occurrences=525600.0>
      #
      # Occurs 525600 times in a span of 1 year (2017) and 1 day.
      # There are at least 60 seconds between "triggers" and at most 60 seconds.

  Rufus::Scheduler.parse('0 12 * * *').brute_frequency
    # => #<Fugit::Cron::Frequency:0x00007fdf451ec6d0
    #      @span=31536000.0, @delta_min=86400, @delta_max=86400,
    #      @occurrences=365, @span_years=1.0, @yearly_occurrences=365.0>
  Rufus::Scheduler.parse('0 12 * * *').brute_frequency.to_debug_s
    # => "dmin: 1D, dmax: 1D, ocs: 365, spn: 52W1D, spnys: 1, yocs: 365"
      #
      # 365 occurrences, at most 1 day between each, at least 1 day.

The CronJob#frequency method found in rufus-scheduler < 3.5 has been retired.

looking up jobs

Scheduler#job(job_id)

The scheduler #job(job_id) method can be used to look up Job instances.

  require 'rufus-scheduler'

  scheduler = Rufus::Scheduler.new

  job_id =
    scheduler.in '10d' do
      # ...
    end

  # later on...

  job = scheduler.job(job_id)

Scheduler #jobs #at_jobs #in_jobs #every_jobs #interval_jobs and #cron_jobs

Are methods for looking up lists of scheduled Job instances.

Here is an example:

  #
  # let's unschedule all the at jobs

  scheduler.at_jobs.each(&:unschedule)

Scheduler#jobs(tag: / tags: x)

When scheduling a job, one can specify one or more tags attached to the job. These can be used to look up the job later on.

  scheduler.in '10d', tag: 'main_process' do
    # ...
  end
  scheduler.in '10d', tags: [ 'main_process', 'side_dish' ] do
    # ...
  end

  # ...

  jobs = scheduler.jobs(tag: 'main_process')
    # find all the jobs with the 'main_process' tag

  jobs = scheduler.jobs(tags: [ 'main_process', 'side_dish' ])
    # find all the jobs with the 'main_process' AND 'side_dish' tags

Scheduler#running_jobs

Returns the list of Job instances that have currently running instances.

Whereas the other "_jobs" methods scan the scheduled job list, this method scans the thread list to find the jobs. It thus includes jobs that are running but are no longer scheduled (which happens for at and in jobs).
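
For example, one could act on the jobs currently running (a sketch; the 'report' tag is hypothetical):

scheduler.running_jobs.each do |job|
  job.kill if job.tags.include?('report')
end
  #
  # kill the threads of the currently running jobs tagged 'report'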

misc Scheduler methods

Scheduler#unschedule(job_or_job_id)

Unschedule a job given directly or by its id.

Scheduler#shutdown

Shuts down the scheduler, ceasing any scheduling/triggering activity.

Scheduler#shutdown(:wait)

Shuts down the scheduler, waits (blocks) until all the jobs cease running.

Scheduler#shutdown(wait: n)

Shuts down the scheduler, waits (blocks) at most n seconds until all the jobs cease running. (Jobs are killed after n seconds have elapsed).

Scheduler#shutdown(:kill)

Kills all the job (threads) and then shuts the scheduler down. Radical.
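
Side by side, the variants described above (a small recap sketch; in practice one would pick a single call):

scheduler.shutdown
  # stop scheduling, let currently running jobs finish on their own
scheduler.shutdown(:wait)
  # stop scheduling, block until all the jobs are done
scheduler.shutdown(wait: 10)
  # stop scheduling, block at most 10 seconds, then kill the remaining jobs
scheduler.shutdown(:kill)
  # kill the job threads, then shut down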

Scheduler#down?

Returns true if the scheduler has been shut down.

Scheduler#started_at

Returns the Time instance at which the scheduler got started.

Scheduler #uptime / #uptime_s

Returns the number of seconds the scheduler has been running.

#uptime_s returns this count as a String that is easier for humans to grasp, like "3d12m45s123".

Scheduler#join

Lets the current thread join the scheduling thread in rufus-scheduler. The thread comes back when the scheduler gets shut down.

#join is mostly used in standalone scheduling script (or tiny one file examples). Calling #join from a web application initializer will probably hijack the main thread and prevent the web application from being served. Do not put a #join in such a web application initializer file.

Scheduler#threads

Returns all the threads associated with the scheduler, including the scheduler thread itself.

Scheduler#work_threads(query=:all/:active/:vacant)

Lists the work threads associated with the scheduler. The query option defaults to :all.

  • :all : all the work threads
  • :active : all the work threads currently running a Job
  • :vacant : all the work threads currently not running a Job

Note that the main schedule thread will be returned if it is currently running a Job (ie one of those blocking: true jobs).

Scheduler#scheduled?(job_or_job_id)

Returns true if the arg is a currently scheduled job (see Job#scheduled?).

Scheduler#occurrences(time0, time1)

Returns a hash { job => [ t0, t1, ... ] } mapping jobs to their potential trigger time within the [ time0, time1 ] span.

Please note that, for interval jobs, the #mean_work_time is used, so the result is only a prediction.

Scheduler#timeline(time0, time1)

Like #occurrences but returns a list [ [ t0, job0 ], [ t1, job1 ], ... ] of time + job pairs.
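
A quick sketch (plain Time instances are given as bounds):

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every '12h' do
  # ...
end

t0 = Time.now
t1 = t0 + 24 * 3600

scheduler.occurrences(t0, t1)
  # => { job => [ ta, tb ] } roughly two triggers in the next 24 hours

scheduler.timeline(t0, t1)
  # => [ [ ta, job ], [ tb, job ] ] the same information, as time/job pairs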

dealing with job errors

The easy, job-granular way of dealing with errors is to rescue and deal with them immediately. The two next sections show examples. Skip them for explanations on how to deal with errors at the scheduler level.

block jobs

As said, jobs could take care of their errors themselves.

scheduler.every '10m' do
  begin
    # do something that might fail...
  rescue => e
    $stderr.puts '-' * 80
    $stderr.puts e.message
    $stderr.puts e.backtrace
    $stderr.puts '-' * 80
  end
end

callable jobs

Jobs are not limited to blocks; here is how the above would look with a dedicated class.

scheduler.every '10m', Class.new do
  def call(job)
    # do something that might fail...
  rescue => e
    $stderr.puts '-' * 80
    $stderr.puts e.message
    $stderr.puts e.backtrace
    $stderr.puts '-' * 80
  end
end

TODO: talk about callable#on_error (if implemented)

(see scheduling handler instances and scheduling handler classes for more about those "callable jobs")

Rufus::Scheduler#stderr=

By default, rufus-scheduler intercepts all errors (that inherit from StandardError) and dumps abundant details to $stderr.

If, for example, you'd like to divert that flow to another file (descriptor), you can reassign $stderr for the current Ruby process

$stderr = File.open('/var/log/myapplication.log', 'ab')

or, you can limit that reassignment to the scheduler itself

scheduler.stderr = File.open('/var/log/myapplication.log', 'ab')

Rufus::Scheduler#on_error(job, error)

We've just seen that, by default, rufus-scheduler dumps error information to $stderr. If one needs to completely change what happens in case of error, it's OK to overwrite #on_error

def scheduler.on_error(job, error)

  Logger.warn("intercepted error in #{job.id}: #{error.message}")
end

On Rails, the on_error method redefinition might look like:

def scheduler.on_error(job, error)

  Rails.logger.error(
    "err#{error.object_id} rufus-scheduler intercepted #{error.inspect}" +
    " in job #{job.inspect}")
  error.backtrace.each_with_index do |line, i|
    Rails.logger.error(
      "err#{error.object_id} #{i}: #{line}")
  end
end

Callbacks

Rufus::Scheduler #on_pre_trigger and #on_post_trigger callbacks

One can bind callbacks before and after jobs trigger:

s = Rufus::Scheduler.new

def s.on_pre_trigger(job, trigger_time)
  puts "triggering job #{job.id}..."
end

def s.on_post_trigger(job, trigger_time)
  puts "triggered job #{job.id}."
end

s.every '1s' do
  # ...
end

The trigger_time is the time at which the job triggers. It might be a bit before Time.now.

Warning: these two callbacks are executed in the scheduler thread, not in the work threads (the threads where the job execution really happens).

Rufus::Scheduler#around_trigger

One can create an around callback which will wrap a job:

def s.around_trigger(job)
  t = Time.now
  puts "Starting job #{job.id}..."
  yield
  puts "job #{job.id} finished in #{Time.now-t} seconds."
end

The around callback is executed in the work thread (the thread where the job itself runs).

Rufus::Scheduler#on_pre_trigger as a guard

Returning false in on_pre_trigger will prevent the job from triggering. Returning anything else (nil, -1, true, ...) will let the job trigger.

Note: your business logic should go in the scheduled block itself (or the scheduled instance). Don't put business logic in on_pre_trigger. Return false for admin reasons (backend down, etc), not for business reasons that are tied to the job itself.

def s.on_pre_trigger(job, trigger_time)

  return false if Backend.down?

  puts "triggering job #{job.id}..."
end

Rufus::Scheduler.new options

:frequency

By default, rufus-scheduler sleeps 0.300 second between every step. At each step it checks for jobs to trigger and so on.

The :frequency option lets you change that 0.300 second to something else.

scheduler = Rufus::Scheduler.new(frequency: 5)

It's OK to use a time string to specify the frequency.

scheduler = Rufus::Scheduler.new(frequency: '2h10m')
  # this scheduler will sleep 2 hours and 10 minutes between every "step"

Use with care.

lockfile: "mylockfile.txt"

This feature only works on OSes that support the flock (man 2 flock) call.

Starting the scheduler with lockfile: '.rufus-scheduler.lock' will make the scheduler attempt to create and lock the file .rufus-scheduler.lock in the current working directory. If that fails, the scheduler will not start.

The idea is to guarantee only one scheduler (in a group of schedulers sharing the same lockfile) is running.

This is useful in environments where the Ruby process holding the scheduler gets started multiple times.

If the lockfile mechanism here is not sufficient, you can plug your custom mechanism. It's explained in advanced lock schemes below.
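
In code, this is simply:

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new(lockfile: '.rufus-scheduler.lock')
  #
  # if another process already holds the lock on that file,
  # this scheduler will not start (it won't trigger any jobs)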

:scheduler_lock

(since rufus-scheduler 3.0.9)

The scheduler lock is an object that responds to #lock and #unlock. The scheduler calls #lock when starting up. If the answer is false, the scheduler stops its initialization work and won't schedule anything.

Here is a sample of a scheduler lock that only lets the scheduler on host "coffee.example.com" start:

class HostLock
  def initialize(lock_name)
    @lock_name = lock_name
  end
  def lock
    @lock_name == `hostname -f`.strip
  end
  def unlock
    true
  end
end

scheduler =
  Rufus::Scheduler.new(scheduler_lock: HostLock.new('coffee.example.com'))

By default, the scheduler_lock is an instance of Rufus::Scheduler::NullLock, with a #lock that returns true.

:trigger_lock

(since rufus-scheduler 3.0.9)

The trigger lock is an object that responds to #lock. The scheduler calls that method on the job lock right before triggering any job. If the answer is false, the trigger doesn't happen, the job is not done (at least not in this scheduler).

Here is a (stupid) PingLock example, it'll only trigger if an "other host" is not responding to ping. Do not use that in production, you don't want to fork a ping process for each trigger attempt...

class PingLock
  def initialize(other_host)
    @other_host = other_host
  end
  def lock
    ! system("ping -c 1 #{@other_host}")
  end
end

scheduler =
  Rufus::Scheduler.new(trigger_lock: PingLock.new('main.example.com'))

By default, the trigger_lock is an instance of Rufus::Scheduler::NullLock, with a #lock that always returns true.

As explained in advanced lock schemes, another way to tune that behaviour is by overriding the scheduler's #confirm_lock method. (You could also do that with an #on_pre_trigger callback).

:max_work_threads

In rufus-scheduler 2.x, by default, each job triggering received its own, brand new, thread of execution. In rufus-scheduler 3.x, execution happens in a pooled work thread. The max work thread count (the pool size) defaults to 28.

One can set this maximum value when starting the scheduler.

scheduler = Rufus::Scheduler.new(max_work_threads: 77)

It's OK to increase the :max_work_threads of a running scheduler.

scheduler.max_work_threads += 10

Rufus::Scheduler.singleton

Do not want to store a reference to your rufus-scheduler instance? Then Rufus::Scheduler.singleton can help, it returns a singleton instance of the scheduler, initialized the first time this class method is called.

Rufus::Scheduler.singleton.every('10s') { puts "hello, world!" }

It's OK to pass initialization arguments (like :frequency or :max_work_threads) but they will only be taken into account the first time .singleton is called.

Rufus::Scheduler.singleton(max_work_threads: 77)
Rufus::Scheduler.singleton(max_work_threads: 277) # no effect

The .s is a shortcut for .singleton.

Rufus::Scheduler.s.every('10s') { puts "hello, world!" }

advanced lock schemes

As seen above, rufus-scheduler proposes the :lockfile system out of the box. If in a group of schedulers only one is supposed to run, the lockfile mechanism prevents schedulers that have not set/created the lockfile from running.

There are situations where this is not sufficient.

By overriding #lock and #unlock, one can customize how schedulers lock.

This example was provided by Eric Lindvall:

class ZookeptScheduler < Rufus::Scheduler

  def initialize(zookeeper, opts={})
    @zk = zookeeper
    super(opts)
  end

  def lock
    @zk_locker = @zk.exclusive_locker('scheduler')
    @zk_locker.lock # returns true if the lock was acquired, false else
  end

  def unlock
    @zk_locker.unlock
  end

  def confirm_lock
    return false if down?
    @zk_locker.assert!
  rescue ZK::Exceptions::LockAssertionFailedError => e
    # we've lost the lock, shutdown (and return false to at least prevent
    # this job from triggering)
    shutdown
    false
  end
end

This uses a zookeeper to make sure only one scheduler in a group of distributed schedulers runs.

The methods #lock and #unlock are overridden and #confirm_lock is provided, to make sure that the lock is still valid.

The #confirm_lock method is called right before a job triggers (if it is provided). The more generic callback #on_pre_trigger is called right after #confirm_lock.

:scheduler_lock and :trigger_lock

(introduced in rufus-scheduler 3.0.9).

Another way of providing #lock, #unlock and #confirm_lock to a rufus-scheduler is by using the :scheduler_lock and :trigger_lock options.

See :trigger_lock and :scheduler_lock.

The scheduler lock may be used to prevent a scheduler from starting, while a trigger lock prevents individual jobs from triggering (the scheduler goes on scheduling).

One has to be careful with what goes in #confirm_lock or in a trigger lock, as it gets called before each trigger.

Warning: you may think you're heading towards "high availability" by using a trigger lock and having lots of schedulers at hand. It may be so if you limit yourself to scheduling the same set of jobs at scheduler startup. But if you add schedules at runtime, they stay local to their scheduler. There is no magic that propagates the jobs to all the schedulers in your pack.

parsing cronlines and time strings

(Please note that fugit does the heavy-lifting parsing work for rufus-scheduler).

Rufus::Scheduler provides a class method .parse to parse time durations and cron strings. It's what it's using when receiving schedules. One can use it directly (no need to instantiate a Scheduler).

require 'rufus-scheduler'

Rufus::Scheduler.parse('1w2d')
  # => 777600.0
Rufus::Scheduler.parse('1.0w1.0d')
  # => 691200.0

Rufus::Scheduler.parse('Sun Nov 18 16:01:00 2012').strftime('%c')
  # => 'Sun Nov 18 16:01:00 2012'

Rufus::Scheduler.parse('Sun Nov 18 16:01:00 2012 Europe/Berlin').strftime('%c %z')
  # => 'Sun Nov 18 15:01:00 2012 +0000'

Rufus::Scheduler.parse(0.1)
  # => 0.1

Rufus::Scheduler.parse('* * * * *')
  # => #<Fugit::Cron:0x00007fb7a3045508
  #      @original="* * * * *", @cron_s=nil,
  #      @seconds=[0], @minutes=nil, @hours=nil, @monthdays=nil, @months=nil,
  #      @weekdays=nil, @zone=nil, @timezone=nil>

It returns a number when the input is a duration and a Fugit::Cron instance when the input is a cron string.

It will raise an ArgumentError if it can't parse the input.

Beyond .parse, there are also .parse_cron and .parse_duration, for finer granularity.

There is an interesting helper method named .to_duration_hash:

require 'rufus-scheduler'

Rufus::Scheduler.to_duration_hash(60)
  # => { :m => 1 }
Rufus::Scheduler.to_duration_hash(62.127)
  # => { :m => 1, :s => 2, :ms => 127 }

Rufus::Scheduler.to_duration_hash(62.127, drop_seconds: true)
  # => { :m => 1 }

cronline notations specific to rufus-scheduler

first Monday, last Sunday et al

To schedule something at noon every first Monday of the month:

scheduler.cron('00 12 * * mon#1') do
  # ...
end

To schedule something at noon the last Sunday of every month:

scheduler.cron('00 12 * * sun#-1') do
  # ...
end
#
# OR
#
scheduler.cron('00 12 * * sun#L') do
  # ...
end

Such cronlines can be tested with scripts like:

require 'rufus-scheduler'

Time.now
  # => 2013-10-26 07:07:08 +0900
Rufus::Scheduler.parse('* * * * mon#1').next_time.to_s
  # => 2013-11-04 00:00:00 +0900

L (last day of month)

L can be used in the "day" slot:

In this example, the cronline is supposed to trigger every last day of the month at noon:

require 'rufus-scheduler'
Time.now
  # => 2013-10-26 07:22:09 +0900
Rufus::Scheduler.parse('00 12 L * *').next_time.to_s
  # => 2013-10-31 12:00:00 +0900

negative day (x days before the end of the month)

It's OK to pass negative values in the "day" slot:

scheduler.cron '0 0 -5 * *' do
  # do it at 00h00 5 days before the end of the month...
end

Negative ranges (-10--5-: 10 days before the end of the month to 5 days before the end of the month) are OK, but mixed positive / negative ranges will raise an ArgumentError.

Negative ranges with increments (-10---2/2) are accepted as well.

Descending day ranges are not accepted (10-8 or -8--10 for example).

a note about timezones

Cron schedules and at schedules support the specification of a timezone.

scheduler.cron '0 22 * * 1-5 America/Chicago' do
  # the job...
end

scheduler.at '2013-12-12 14:00 Pacific/Samoa' do
  puts "it's tea time!"
end

# or even

Rufus::Scheduler.parse("2013-12-12 14:00 Pacific/Saipan")
  # => #<Rufus::Scheduler::ZoTime:0x007fb424abf4e8 @seconds=1386820800.0, @zone=#<TZInfo::DataTimezone: Pacific/Saipan>, @time=nil>

I get "zotime.rb:41:in `initialize': cannot determine timezone from nil"

For when you see an error like:

rufus-scheduler/lib/rufus/scheduler/zotime.rb:41:
  in `initialize':
    cannot determine timezone from nil (etz:nil,tnz:"中国标准时间",tzid:nil)
      (ArgumentError)
    from rufus-scheduler/lib/rufus/scheduler/zotime.rb:198:in `new'
    from rufus-scheduler/lib/rufus/scheduler/zotime.rb:198:in `now'
    from rufus-scheduler/lib/rufus/scheduler.rb:561:in `start'
    ...

It may happen on Windows or on systems that poorly hint to Ruby which timezone to use. It should be solved by explicitly setting ENV['TZ'] before instantiating the scheduler:

ENV['TZ'] = 'Asia/Shanghai'
scheduler = Rufus::Scheduler.new
scheduler.every '2s' do
  puts "#{Time.now} Hello #{ENV['TZ']}!"
end

On Rails you might want to try with:

ENV['TZ'] = Time.zone.name # Rails only
scheduler = Rufus::Scheduler.new
scheduler.every '2s' do
  puts "#{Time.now} Hello #{ENV['TZ']}!"
end

(Hat tip to Alexander in gh-230)

Rails sets its timezone under config/application.rb.

Rufus-Scheduler 3.3.3 detects the presence of Rails and uses its timezone setting (tested with Rails 4), so setting ENV['TZ'] should not be necessary.

The value can be determined thanks to https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.

Use a "continent/city" identifier (for example "Asia/Shanghai"). Do not use an abbreviation (not "CST") and do not use a local time zone name (not "中国标准时间" nor "Eastern Standard Time" which, for instance, points to a time zone in America and to another one in Australia...).

If the error persists (and especially on Windows), try to add the tzinfo-data to your Gemfile, as in:

gem 'tzinfo-data'

or by manually requiring it before requiring rufus-scheduler (if you don't use Bundler):

require 'tzinfo/data'
require 'rufus-scheduler'

so Rails?

Yes, I know, all of the above is boring and you're only looking for a snippet to paste in your Ruby-on-Rails application to schedule...

Here is an example initializer:

#
# config/initializers/scheduler.rb

require 'rufus-scheduler'

# Let's use the rufus-scheduler singleton
#
s = Rufus::Scheduler.singleton


# Stupid recurrent task...
#
s.every '1m' do

  Rails.logger.info "hello, it's #{Time.now}"
  Rails.logger.flush
end

And now you tell me that this is good, but you want to schedule stuff from your controller.

Maybe:

class ScheController < ApplicationController

  # GET /sche/
  #
  def index

    job_id =
      Rufus::Scheduler.singleton.in '5s' do
        Rails.logger.info "time flies, it's now #{Time.now}"
      end

    render text: "scheduled job #{job_id}"
  end
end

The rufus-scheduler singleton is instantiated in the config/initializers/scheduler.rb file, it's then available throughout the webapp via Rufus::Scheduler.singleton.

Warning: this works well with single-process Ruby servers like Webrick and Thin. Using rufus-scheduler with Passenger or Unicorn requires a bit more knowledge and tuning, gently provided by a bit of googling and reading, see Faq above.

avoid scheduling when running the Ruby on Rails console

(Written in reply to gh-186)

If you don't want rufus-scheduler to trigger anything while running the Ruby on Rails console, running for tests/specs, or running from a Rake task, you can insert a conditional return statement before jobs are added to the scheduler instance:

#
# config/initializers/scheduler.rb

require 'rufus-scheduler'

return if defined?(Rails::Console) || Rails.env.test? || File.split($PROGRAM_NAME).last == 'rake'
  #
  # do not schedule when Rails is run from its console, for a test/spec, or
  # from a Rake task

# return if $PROGRAM_NAME.include?('spring')
  #
  # see https://github.com/jmettraux/rufus-scheduler/issues/186

s = Rufus::Scheduler.singleton

s.every '1m' do
  Rails.logger.info "hello, it's #{Time.now}"
  Rails.logger.flush
end

(Beware later versions of Rails where Spring takes care of pre-running the initializers. Running spring stop or disabling Spring might be necessary in some cases for changes to initializers to be taken into account.)

rails server -d

(Written in reply to https://github.com/jmettraux/rufus-scheduler/issues/165 )

There is the handy rails server -d that starts a development Rails as a daemon. The annoying thing is that the scheduler as seen above is started in the main process that then gets forked and daemonized. The rufus-scheduler thread (and any other thread) gets lost, no scheduling happens.

I avoid running -d in development mode and bother about daemonizing only for production deployment.

These are two well crafted articles on process daemonization, please read them:

If, anyway, you need something like rails server -d, why not try bundle exec unicorn -D instead? In my (limited) experience, it worked out of the box (well, had to add gem 'unicorn' to Gemfile first).

executor / reloader

You might benefit from wrapping your scheduled code in the executor or reloader. Read more here: https://guides.rubyonrails.org/threading_and_code_execution.html

support

see getting help above.


Author: jmettraux
Source code: https://github.com/jmettraux/rufus-scheduler
License: MIT license

#ruby 

Swift Tips: A Collection of Useful Tips for the Swift Language

SwiftTips

The following is a collection of tips I find to be useful when working with the Swift language. More content is available on my Twitter account!

Property Wrappers as Debugging Tools

Property Wrappers allow developers to wrap properties with specific behaviors, that will be seamlessly triggered whenever the properties are accessed.

While their primary use case is to implement business logic within our apps, it's also possible to use Property Wrappers as debugging tools!

For example, we could build a wrapper called @History, that would be added to a property while debugging and would keep track of all the values set to this property.

import Foundation

@propertyWrapper
struct History<Value> {
    private var value: Value
    private(set) var history: [Value] = []

    init(wrappedValue: Value) {
        self.value = wrappedValue
    }
    
    var wrappedValue: Value {
        get { value }

        set {
            history.append(value)
            value = newValue
        }
    }
    
    var projectedValue: Self {
        return self
    }
}

// We can then decorate our business code
// with the `@History` wrapper
struct User {
    @History var name: String = ""
}

var user = User()

// All the existing call sites will still
// compile, without the need for any change
user.name = "John"
user.name = "Jane"

// But now we can also access an history of
// all the previous values!
user.$name.history // ["", "John"]

Localization through String interpolation

Swift 5 gave us the possibility to define our own custom String interpolation methods.

This feature can be used to power many use cases, but there is one that is guaranteed to make sense in most projects: localizing user-facing strings.

import Foundation

extension String.StringInterpolation {
    mutating func appendInterpolation(localized key: String, _ args: CVarArg...) {
        let localized = String(format: NSLocalizedString(key, comment: ""), arguments: args)
        appendLiteral(localized)
    }
}


/*
 Let's assume that this is the content of our Localizable.strings:
 
 "welcome.screen.greetings" = "Hello %@!";
 */

let userName = "John"
print("\(localized: "welcome.screen.greetings", userName)") // Hello John!

Implementing pseudo-inheritance between structs

If you’ve always wanted to use some kind of inheritance mechanism for your structs, Swift 5.1 is going to make you very happy!

Using the new KeyPath-based dynamic member lookup, you can implement some pseudo-inheritance, where a type inherits the API of another one 🎉

(However, be careful, I’m definitely not advocating inheritance as a go-to solution 🙃)

import Foundation

protocol Inherits {
    associatedtype SuperType
    
    var `super`: SuperType { get }
}

extension Inherits {
    subscript<T>(dynamicMember keyPath: KeyPath<SuperType, T>) -> T {
        return self.`super`[keyPath: keyPath]
    }
}

struct Person {
    let name: String
}

@dynamicMemberLookup
struct User: Inherits {
    let `super`: Person
    
    let login: String
    let password: String
}

let user = User(super: Person(name: "John Appleseed"), login: "Johnny", password: "1234")

user.name // "John Appleseed"
user.login // "Johnny"

Composing NSAttributedString through a Function Builder

Swift 5.1 introduced Function Builders: a great tool for building custom DSL syntaxes, like SwiftUI. However, one doesn't need to be building a full-fledged DSL in order to leverage them.

For example, it's possible to write a simple Function Builder, whose job will be to compose together individual instances of NSAttributedString through a nicer syntax than the standard API.

import UIKit

@_functionBuilder
class NSAttributedStringBuilder {
    static func buildBlock(_ components: NSAttributedString...) -> NSAttributedString {
        let result = NSMutableAttributedString(string: "")
        
        return components.reduce(into: result) { (result, current) in result.append(current) }
    }
}

extension NSAttributedString {
    class func composing(@NSAttributedStringBuilder _ parts: () -> NSAttributedString) -> NSAttributedString {
        return parts()
    }
}

let result = NSAttributedString.composing {
    NSAttributedString(string: "Hello",
                       attributes: [.font: UIFont.systemFont(ofSize: 24),
                                    .foregroundColor: UIColor.red])
    NSAttributedString(string: " world!",
                       attributes: [.font: UIFont.systemFont(ofSize: 20),
                                    .foregroundColor: UIColor.orange])
}

Using switch and if as expressions

Contrary to other languages, like Kotlin, Swift does not allow switch and if to be used as expressions. Meaning that the following code is not valid Swift:

let constant = if condition {
                  someValue
               } else {
                  someOtherValue
               }

A common solution to this problem is to wrap the if or switch statement within a closure, that will then be immediately called. While this approach does manage to achieve the desired goal, it makes for a rather poor syntax.

To avoid the ugly trailing () and improve on the readability, you can define a resultOf function, that will serve the exact same purpose, in a more elegant way.

import Foundation

func resultOf<T>(_ code: () -> T) -> T {
    return code()
}

let randomInt = Int.random(in: 0...3)

let spelledOut: String = resultOf {
    switch randomInt {
    case 0:
        return "Zero"
    case 1:
        return "One"
    case 2:
        return "Two"
    case 3:
        return "Three"
    default:
        return "Out of range"
    }
}

print(spelledOut)

Avoiding double negatives within guard statements

A guard statement is a very convenient way for the developer to assert that a condition is met, in order for the execution of the program to keep going.

However, since the body of a guard statement is meant to be executed when the condition evaluates to false, the use of the negation (!) operator within the condition of a guard statement can make the code hard to read, as it becomes a double negative.

A nice trick to avoid such double negatives is to encapsulate the use of the ! operator within a new property or function, whose name does not include a negative.

import Foundation

extension Collection {
    var hasElements: Bool {
        return !isEmpty
    }
}

let array = Bool.random() ? [1, 2, 3] : []

guard array.hasElements else { fatalError("array was empty") }

print(array)

Defining a custom init without losing the compiler-generated one

It's common knowledge for Swift developers that, when you define a struct, the compiler is going to automatically generate a memberwise init for you. That is, unless you also define an init of your own. Because then, the compiler won't generate any memberwise init.

Yet, there are many instances where we might enjoy the opportunity to get both. As it turns out, this goal is quite easy to achieve: you just need to define your own init in an extension rather than inside the type definition itself.

import Foundation

struct Point {
    let x: Int
    let y: Int
}

extension Point {
    init() {
        x = 0
        y = 0
    }
}

let usingDefaultInit = Point(x: 4, y: 3)
let usingCustomInit = Point()

Implementing a namespace through an empty enum

Swift does not really have an out-of-the-box support of namespaces. One could argue that a Swift module can be seen as a namespace, but creating a dedicated Framework for this sole purpose can legitimately be regarded as overkill.

Some developers have taken the habit to use a struct which only contains static fields to implement a namespace. While this does the job, it requires us to remember to implement an empty private init(), because it wouldn't make sense for such a struct to be instantiated.

It's actually possible to take this approach one step further, by replacing the struct with an enum. While it might seem weird to have an enum with no case, it's actually a very idiomatic way to declare a type that cannot be instantiated.

import Foundation

enum NumberFormatterProvider {
    static var currencyFormatter: NumberFormatter {
        let formatter = NumberFormatter()
        formatter.numberStyle = .currency
        formatter.roundingIncrement = 0.01
        return formatter
    }
    
    static var decimalFormatter: NumberFormatter {
        let formatter = NumberFormatter()
        formatter.numberStyle = .decimal
        formatter.decimalSeparator = ","
        return formatter
    }
}

NumberFormatterProvider() // ❌ impossible to instantiate by mistake

NumberFormatterProvider.currencyFormatter.string(from: 2.456) // $2.46
NumberFormatterProvider.decimalFormatter.string(from: 2.456) // 2,456

Using Never to represent impossible code paths

Never is quite a peculiar type in the Swift Standard Library: it is defined as an empty enum enum Never { }.

While this might seem odd at first glance, it actually yields a very interesting property: it makes it a type that cannot be constructed (i.e. it possesses no instances).

This way, Never can be used as a generic parameter to let the compiler know that a particular feature will not be used.

import Foundation

enum Result<Value, Error> {
    case success(value: Value)
    case failure(error: Error)
}

func willAlwaysSucceed(_ completion: @escaping ((Result<String, Never>) -> Void)) {
    completion(.success(value: "Call was successful"))
}

willAlwaysSucceed( { result in
    switch result {
    case .success(let value):
        print(value)
    // the compiler knows that the `failure` case cannot happen
    // so it doesn't require us to handle it.
    }
})

Providing a default value to a Decodable enum

Swift's Codable API does a great job at seamlessly decoding entities from a JSON stream. However, when we integrate web services, we sometimes have to deal with JSON payloads that require behaviors Codable does not provide out of the box.

For instance, we might have a string-based or integer-based enum, and be required to set it to a default value when the data found in the JSON does not match any of its cases.

We might be tempted to implement this via an extensive switch statement over all the possible cases, but there is a much shorter alternative through the initializer init?(rawValue:):

import Foundation

enum State: String, Decodable {
    case active
    case inactive
    case undefined
    
    init(from decoder: Decoder) throws {
        let container = try decoder.singleValueContainer()
        let decodedString = try container.decode(String.self)
        
        self = State(rawValue: decodedString) ?? .undefined
    }
}

let data = """
["active", "inactive", "foo"]
""".data(using: .utf8)!

let decoded = try! JSONDecoder().decode([State].self, from: data)

print(decoded) // [State.active, State.inactive, State.undefined]
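
Since the custom init(from:) lives on the enum itself, the same fallback also kicks in when State is nested inside a larger Decodable type. Here is a minimal sketch, using a hypothetical Device payload:

struct Device: Decodable {
    let name: String
    let state: State
}

let deviceData = """
{ "name": "Sensor", "state": "foo" }
""".data(using: .utf8)!

let device = try! JSONDecoder().decode(Device.self, from: deviceData)

print(device.state) // undefined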

Another lightweight dependency injection through default values for function parameters

Dependency injection boils down to a simple idea: when an object requires a dependency, it shouldn't create it by itself, but should instead be given a function that creates it.

Now the great thing with Swift is that, not only can a function take another function as a parameter, but that parameter can also be given a default value.

When you combine both of those features, you end up with a dependency injection pattern that is both light on boilerplate and type-safe.

import Foundation

protocol Service {
    func call() -> String
}

class ProductionService: Service {
    func call() -> String {
        return "This is the production"
    }
}

class MockService: Service {
    func call() -> String {
        return "This is a mock"
    }
}

typealias Provider<T> = () -> T

class Controller {
    
    let service: Service
    
    init(serviceProvider: Provider<Service> = { return ProductionService() }) {
        self.service = serviceProvider()
    }
    
    func work() {
        print(service.call())
    }
}

let productionController = Controller()
productionController.work() // prints "This is the production"

let mockedController = Controller(serviceProvider: { return MockService() })
mockedController.work() // prints "This is a mock"

Lightweight dependency injection through protocol-oriented programming

Singletons are pretty bad. They make your architecture rigid and tightly coupled, which then results in your code being hard to test and refactor. Instead of using singletons, your code should rely on dependency injection, which is a much more architecturally sound approach.

But singletons are so easy to use, and dependency injection requires us to do extra work. So maybe, for simple situations, we could find an in-between solution?

One possible solution is to rely on one of Swift's best-known features: protocol-oriented programming. Using a protocol, we declare and access our dependency. We then store it in a private singleton, and perform the injection through an extension of said protocol.

This way, our code will indeed be decoupled from its dependency, while at the same time keeping the boilerplate to a minimum.

import Foundation

protocol Formatting {
    var formatter: NumberFormatter { get }
}

private let sharedFormatter: NumberFormatter = {
    let sharedFormatter = NumberFormatter()
    sharedFormatter.numberStyle = .currency
    return sharedFormatter
}()

extension Formatting {
    var formatter: NumberFormatter { return sharedFormatter }
}

class ViewModel: Formatting {
    var displayableAmount: String?
    
    func updateDisplay(to amount: Double) {
        displayableAmount = formatter.string(for: amount)
    }
}

let viewModel = ViewModel()

viewModel.updateDisplay(to: 42000.45)
viewModel.displayableAmount // "$42,000.45"
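
A side note on this pattern: because the dependency is declared as a protocol requirement, any conforming type remains free to provide its own formatter, which then takes precedence over the shared default. Here is a minimal sketch of how a hypothetical test double could take advantage of that:

class TestViewModel: Formatting {
    // this stored property satisfies the requirement directly,
    // so the shared formatter from the extension is never used
    let formatter: NumberFormatter = {
        let formatter = NumberFormatter()
        formatter.numberStyle = .decimal
        return formatter
    }()
    
    var displayableAmount: String?
    
    func updateDisplay(to amount: Double) {
        displayableAmount = formatter.string(for: amount)
    }
}

let testViewModel = TestViewModel()

testViewModel.updateDisplay(to: 42000.45)
testViewModel.displayableAmount // "42,000.45" (locale-dependent)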

Getting rid of overabundant [weak self] and guard

Callbacks are a part of almost all iOS apps, and as frameworks such as RxSwift keep gaining in popularity, they become ever more present in our codebase.

Seasoned Swift developers are aware of the potential memory leaks that @escaping callbacks can produce, so they take great care to always use [weak self] whenever they need to use self inside such a context. And when they need self to be non-optional, they add a guard statement along with it.

Consequently, this syntax of a [weak self] followed by a guard rapidly tends to appear everywhere in the codebase. The good thing is that, through a little protocol-oriented trick, it's actually possible to get rid of this tedious syntax, without losing any of its benefits!

import Foundation
import PlaygroundSupport

PlaygroundPage.current.needsIndefiniteExecution = true

protocol Weakifiable: class { }

extension Weakifiable {
    func weakify(_ code: @escaping (Self) -> Void) -> () -> Void {
        return { [weak self] in
            guard let self = self else { return }
            
            code(self)
        }
    }
    
    func weakify<T>(_ code: @escaping (T, Self) -> Void) -> (T) -> Void {
        return { [weak self] arg in
            guard let self = self else { return }
            
            code(arg, self)
        }
    }
}

extension NSObject: Weakifiable { }

class Producer: NSObject {
    
    deinit {
        print("deinit Producer")
    }
    
    private var handler: (Int) -> Void = { _ in }
    
    func register(handler: @escaping (Int) -> Void) {
        self.handler = handler
        
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0, execute: { self.handler(42) })
    }
}

class Consumer: NSObject {
    
    deinit {
        print("deinit Consumer")
    }
    
    let producer = Producer()
    
    func consume() {
        producer.register(handler: weakify { result, strongSelf in
            strongSelf.handle(result)
        })
    }
    
    private func handle(_ result: Int) {
        print("🎉 \(result)")
    }
}

var consumer: Consumer? = Consumer()

consumer?.consume()

DispatchQueue.main.asyncAfter(deadline: .now() + 2.0, execute: { consumer = nil })

// This code prints:
// 🎉 42
// deinit Consumer
// deinit Producer

Solving callback hell with function composition

Asynchronous functions are a big part of iOS APIs, and most developers are familiar with the challenge they pose when one needs to sequentially call several asynchronous APIs.

This often results in callbacks being nested into one another, a predicament often referred to as callback hell.

Many third-party frameworks are able to tackle this issue, for instance RxSwift or PromiseKit. Yet, for simple instances of the problem, there is no need to use such big guns, as it can actually be solved with simple function composition.

import Foundation

typealias CompletionHandler<Result> = (Result?, Error?) -> Void

infix operator ~>: MultiplicationPrecedence

func ~> <T, U>(_ first: @escaping (CompletionHandler<T>) -> Void, _ second: @escaping (T, CompletionHandler<U>) -> Void) -> (CompletionHandler<U>) -> Void {
    return { completion in
        first({ firstResult, error in
            guard let firstResult = firstResult else { completion(nil, error); return }
            
            second(firstResult, { (secondResult, error) in
                completion(secondResult, error)
            })
        })
    }
}

func ~> <T, U>(_ first: @escaping (CompletionHandler<T>) -> Void, _ transform: @escaping (T) -> U) -> (CompletionHandler<U>) -> Void {
    return { completion in
        first({ result, error in
            guard let result = result else { completion(nil, error); return }
            
            completion(transform(result), nil)
        })
    }
}

func service1(_ completionHandler: CompletionHandler<Int>) {
    completionHandler(42, nil)
}

func service2(arg: String, _ completionHandler: CompletionHandler<String>) {
    completionHandler("🎉 \(arg)", nil)
}

let chainedServices = service1
    ~> { int in return String(int / 2) }
    ~> service2

chainedServices({ result, _ in
    guard let result = result else { return }
    
    print(result) // Prints: 🎉 21
})

Transform an asynchronous function into a synchronous one

Asynchronous functions are a great way to deal with future events without blocking a thread. Yet, there are times where we would like them to behave in exactly such a blocking way.

Think about writing unit tests and using mocked network calls. You will need to add complexity to your test in order to deal with asynchronous functions, whereas synchronous ones would be much easier to manage.

Thanks to Swift's proficiency in the functional paradigm, it is possible to write a function whose job is to take an asynchronous function and transform it into a synchronous one.

import Foundation

func makeSynchrone<A, B>(_ asyncFunction: @escaping (A, (B) -> Void) -> Void) -> (A) -> B {
    return { arg in
        let lock = NSRecursiveLock()
        
        var result: B? = nil
        
        asyncFunction(arg) {
            result = $0
            lock.unlock()
        }
        
        lock.lock()
        
        return result!
    }
}

func myAsyncFunction(arg: Int, completionHandler: (String) -> Void) {
    completionHandler("🎉 \(arg)")
}

let syncFunction = makeSynchrone(myAsyncFunction)

print(syncFunction(42)) // prints 🎉 42

Using KeyPaths instead of closures

Closures are a great way to interact with generic APIs, for instance APIs that allow us to manipulate data structures through generic functions such as filter() or sorted().

The annoying part is that closures tend to clutter your code with many instances of {, } and $0, which can quickly undermine its readability.

A nice alternative for a cleaner syntax is to use a KeyPath instead of a closure, along with an operator that will deal with transforming the provided KeyPath into a closure.

import Foundation

prefix operator ^

prefix func ^ <Element, Attribute>(_ keyPath: KeyPath<Element, Attribute>) -> (Element) -> Attribute {
    return { element in element[keyPath: keyPath] }
}

struct MyData {
    let int: Int
    let string: String
}

let data = [MyData(int: 2, string: "Foo"), MyData(int: 4, string: "Bar")]

data.map(^\.int) // [2, 4]
data.map(^\.string) // ["Foo", "Bar"]

Bringing some type-safety to a userInfo Dictionary

Many iOS APIs still rely on a userInfo Dictionary to handle use-case specific data. This Dictionary usually stores untyped values, and is declared as follows: [String: Any] (or sometimes [AnyHashable: Any]).

Retrieving data from such a structure will involve some conditional casting (via the as? operator), which is prone to both errors and repetitions. Yet, by introducing a custom subscript, it's possible to encapsulate all the tedious logic, and end up with an easier and more robust API.

import Foundation

typealias TypedUserInfoKey<T> = (key: String, type: T.Type)

extension Dictionary where Key == String, Value == Any {
    subscript<T>(_ typedKey: TypedUserInfoKey<T>) -> T? {
        return self[typedKey.key] as? T
    }
}

let userInfo: [String : Any] = ["Foo": 4, "Bar": "forty-two"]

let integerTypedKey = TypedUserInfoKey(key: "Foo", type: Int.self)
let intValue = userInfo[integerTypedKey] // returns 4
type(of: intValue) // returns Int?

let stringTypedKey = TypedUserInfoKey(key: "Bar", type: String.self)
let stringValue = userInfo[stringTypedKey] // returns "forty-two"
type(of: stringValue) // returns String?

Lightweight data-binding for an MVVM implementation

MVVM is a great pattern to separate business logic from presentation logic. The main challenge in making it work is to define a mechanism for the presentation layer to be notified of model updates.

RxSwift is a perfect choice to solve such a problem. Yet, some developers don't feel comfortable with leveraging a third-party library for such a central part of their architecture.

For those situations, it's possible to define a lightweight Variable type that will make the MVVM pattern very easy to use!

import Foundation

class Variable<Value> {
    var value: Value {
        didSet {
            onUpdate?(value)
        }
    }
    
    var onUpdate: ((Value) -> Void)? {
        didSet {
            onUpdate?(value)
        }
    }
    
    init(_ value: Value, _ onUpdate: ((Value) -> Void)? = nil) {
        self.value = value
        self.onUpdate = onUpdate
        self.onUpdate?(value)
    }
}

let variable: Variable<String?> = Variable(nil)

variable.onUpdate = { data in
    if let data = data {
        print(data)
    }
}

variable.value = "Foo"
variable.value = "Bar"

// prints:
// Foo
// Bar
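
To make the MVVM angle a little more concrete, here is a minimal, hypothetical view model / view pairing built on top of Variable:

class GreetingViewModel {
    let greeting = Variable("Hello")
    
    func refresh() {
        greeting.value = "Hello, world!"
    }
}

class GreetingView {
    init(viewModel: GreetingViewModel) {
        viewModel.greeting.onUpdate = { text in
            // in a real app, this would update a UILabel
            print("displaying: \(text)")
        }
    }
}

let greetingViewModel = GreetingViewModel()
let greetingView = GreetingView(viewModel: greetingViewModel)

greetingViewModel.refresh()

// prints:
// displaying: Hello
// displaying: Hello, world!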

Using typealias to its fullest

The keyword typealias allows developers to give a new name to an already existing type. For instance, Swift defines Void as a typealias of (), the empty tuple.

But a lesser-known feature of this mechanism is that it allows us to assign concrete types to generic parameters, or to rename them. This can help make the semantics of generic types much clearer, when used in specific use cases.

import Foundation

enum Either<Left, Right> {
    case left(Left)
    case right(Right)
}

typealias Result<Value> = Either<Value, Error>

typealias IntOrString = Either<Int, String>
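
A short usage sketch (the names are just for illustration): with these typealiases in place, call sites read much more naturally than the raw Either<Left, Right> would.

let parsedInput: IntOrString = .left(42)
let outcome: Result<String> = .left("Everything went fine")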

Writing an interruptible overload of forEach

Iterating through objects via the forEach(_:) method is a great alternative to the classic for loop, as it allows our code to be completely oblivious of the iteration logic. One limitation, however, is that forEach(_:) does not allow us to stop the iteration midway.

Taking inspiration from the Objective-C implementation, we can write an overload that will allow the developer to stop the iteration, if needed.

import Foundation

extension Sequence {
    func forEach(_ body: (Element, _ stop: inout Bool) throws -> Void) rethrows {
        var stop = false
        for element in self {
            try body(element, &stop)
            
            if stop {
                return
            }
        }
    }
}

["Foo", "Bar", "FooBar"].forEach { element, stop in
    print(element)
    stop = (element == "Bar")
}

// Prints:
// Foo
// Bar

Optimizing the use of reduce()

Functional programming is a great way to simplify a codebase. For instance, reduce is an alternative to the classic for loop, without most of the boilerplate. Unfortunately, simplicity often comes at the price of performance.

Consider that you want to remove duplicate values from a Sequence. While reduce() is a perfectly fine way to express this computation, the performance will be suboptimal, because of all the unnecessary Array copying that will happen every time its closure gets called.

That's when reduce(into:_:) comes into play. This version of reduce leverages the capacities of copy-on-write types (such as Array or Dictionary) in order to avoid unnecessary copying, which results in a great performance boost.

import Foundation

func time(averagedExecutions: Int = 1, _ code: () -> Void) {
    let start = Date()
    for _ in 0..<averagedExecutions { code() }
    let end = Date()
    
    let duration = end.timeIntervalSince(start) / Double(averagedExecutions)
    
    print("time: \(duration)")
}

let data = (1...1_000).map { _ in Int(arc4random_uniform(256)) }


// runs in 0.63s
time {
    let noDuplicates: [Int] = data.reduce([], { $0.contains($1) ? $0 : $0 + [$1] })
}

// runs in 0.15s
time {
    let noDuplicates: [Int] = data.reduce(into: [], { if !$0.contains($1) { $0.append($1) } } )
}

Avoiding hardcoded reuse identifiers

UI components such as UITableView and UICollectionView rely on reuse identifiers in order to efficiently recycle the views they display. Often, those reuse identifiers take the form of a static hardcoded String, that will be used for every instance of their class.

Through protocol-oriented programming, it's possible to avoid those hardcoded values, and instead use the name of the type as a reuse identifier.

import Foundation
import UIKit

protocol Reusable {
    static var reuseIdentifier: String { get }
}

extension Reusable {
    static var reuseIdentifier: String {
        return String(describing: self)
    }
}

extension UITableViewCell: Reusable { }

extension UITableView {
    func register<T: UITableViewCell>(_ class: T.Type) {
        register(`class`, forCellReuseIdentifier: T.reuseIdentifier)
    }
    func dequeueReusableCell<T: UITableViewCell>(for indexPath: IndexPath) -> T {
        return dequeueReusableCell(withIdentifier: T.reuseIdentifier, for: indexPath) as! T
    }
}

class MyCell: UITableViewCell { }

let tableView = UITableView()

tableView.register(MyCell.self)
let myCell: MyCell = tableView.dequeueReusableCell(for: [0, 0])

Defining a union type

The C language has a construct called union, that allows a single variable to hold values from different types. While Swift does not provide such a construct, it provides enums with associated values, which allows us to define a type called Either that implements a union of two types.

import Foundation

enum Either<A, B> {
    case left(A)
    case right(B)
    
    func either(ifLeft: ((A) -> Void)? = nil, ifRight: ((B) -> Void)? = nil) {
        switch self {
        case let .left(a):
            ifLeft?(a)
        case let .right(b):
            ifRight?(b)
        }
    }
}

extension Bool { static func random() -> Bool { return arc4random_uniform(2) == 0 } }

var intOrString: Either<Int, String> = Bool.random() ? .left(2) : .right("Foo")

intOrString.either(ifLeft: { print($0 + 1) }, ifRight: { print($0 + "Bar") })

If you're interested by this kind of data structure, I strongly recommend that you learn more about Algebraic Data Types.

Asserting that classes have associated NIBs and vice-versa

Most of the time, when we create a .xib file, we give it the same name as its associated class. From that, if we later refactor our code and rename such a class, we run the risk of forgetting to rename the associated .xib.

While the error will often be easy to catch, if the .xib is used in a remote section of the app, it might go unnoticed for some time. Fortunately, it's possible to build custom test predicates that will assert that 1) for a given class, there exists a .nib with the same name in a given Bundle, and 2) for all the .nibs in a given Bundle, there exists a class with the same name.

import XCTest

public func XCTAssertClassHasNib(_ class: AnyClass, bundle: Bundle, file: StaticString = #file, line: UInt = #line) {
    let associatedNibURL = bundle.url(forResource: String(describing: `class`), withExtension: "nib")
    
    XCTAssertNotNil(associatedNibURL, "Class \"\(`class`)\" has no associated nib file", file: file, line: line)
}

public func XCTAssertNibHaveClasses(_ bundle: Bundle, file: StaticString = #file, line: UInt = #line) {
    guard let bundleName = bundle.infoDictionary?["CFBundleName"] as? String,
        let basePath = bundle.resourcePath,
        let enumerator = FileManager.default.enumerator(at: URL(fileURLWithPath: basePath),
                                                    includingPropertiesForKeys: nil,
                                                    options: [.skipsHiddenFiles, .skipsSubdirectoryDescendants]) else { return }
    
    var nibFilesURLs = [URL]()
    
    for case let fileURL as URL in enumerator {
        if fileURL.pathExtension.uppercased() == "NIB" {
            nibFilesURLs.append(fileURL)
        }
    }
    
    nibFilesURLs.map { $0.lastPathComponent }
        .compactMap { $0.split(separator: ".").first }
        .map { String($0) }
        .forEach {
            let associatedClass: AnyClass? = bundle.classNamed("\(bundleName).\($0)")
            
            XCTAssertNotNil(associatedClass, "File \"\($0).nib\" has no associated class", file: file, line: line)
        }
}

XCTAssertClassHasNib(MyFirstTableViewCell.self, bundle: Bundle(for: AppDelegate.self))
XCTAssertClassHasNib(MySecondTableViewCell.self, bundle: Bundle(for: AppDelegate.self))
        
XCTAssertNibHaveClasses(Bundle(for: AppDelegate.self))

Many thanks Benjamin Lavialle for coming up with the idea behind the second test predicate.

Small footprint type-erasing with functions

Seasoned Swift developers know it: a protocol with associated type (PAT) "can only be used as a generic constraint because it has Self or associated type requirements". When we really need to use a PAT to type a variable, the go-to workaround is to use a type-erased wrapper.

While this solution works perfectly, it requires a fair amount of boilerplate code. In instances where we are only interested in exposing one particular function of the PAT, a shorter approach using function types is possible.

import Foundation
import UIKit

protocol Configurable {
    associatedtype Model
    
    func configure(with model: Model)
}

typealias Configurator<Model> = (Model) -> ()

extension UILabel: Configurable {
    func configure(with model: String) {
        self.text = model
    }
}

let label = UILabel()
let configurator: Configurator<String> = label.configure

configurator("Foo")

label.text // "Foo"

Performing animations sequentially

UIKit exposes a very powerful and simple API to perform view animations. However, this API can become a little bit quirky to use when we want to perform animations sequentially, because it involves nesting closures within one another, which produces notoriously hard-to-maintain code.

Nonetheless, it's possible to define a rather simple class that will expose a much nicer API for this particular use case 👌

import Foundation
import UIKit

class AnimationSequence {
    typealias Animations = () -> Void
    
    private let current: Animations
    private let duration: TimeInterval
    private var next: AnimationSequence? = nil
    
    init(animations: @escaping Animations, duration: TimeInterval) {
        self.current = animations
        self.duration = duration
    }
    
    @discardableResult func append(animations: @escaping Animations, duration: TimeInterval) -> AnimationSequence {
        var lastAnimation = self
        while let nextAnimation = lastAnimation.next {
            lastAnimation = nextAnimation
        }
        lastAnimation.next = AnimationSequence(animations: animations, duration: duration)
        return self
    }
    
    func run() {
        UIView.animate(withDuration: duration, animations: current, completion: { finished in
            if finished, let next = self.next {
                next.run()
            }
        })
    }
}

var firstView = UIView()
var secondView = UIView()

firstView.alpha = 0
secondView.alpha = 0

AnimationSequence(animations: { firstView.alpha = 1.0 }, duration: 1)
            .append(animations: { secondView.alpha = 1.0 }, duration: 0.5)
            .append(animations: { firstView.alpha = 0.0 }, duration: 2.0)
            .run()

Debouncing a function call

Debouncing is a very useful tool when dealing with UI inputs. Consider a search bar, whose content is used to query an API. It wouldn't make sense to perform a request for every character the user is typing, because as soon as a new character is entered, the result of the previous request has become irrelevant.

Instead, our code will perform much better if we "debounce" the API call, meaning that we will wait until some delay has passed, without the input being modified, before actually performing the call.

import Foundation

func debounced(delay: TimeInterval, queue: DispatchQueue = .main, action: @escaping (() -> Void)) -> () -> Void {
    var workItem: DispatchWorkItem?
    
    return {
        workItem?.cancel()
        workItem = DispatchWorkItem(block: action)
        queue.asyncAfter(deadline: .now() + delay, execute: workItem!)
    }
}

let debouncedPrint = debounced(delay: 1.0) { print("Action performed!") }

debouncedPrint()
debouncedPrint()
debouncedPrint()

// After a 1 second delay, this gets
// printed only once to the console:

// Action performed!

Providing useful operators for Optional booleans

When we need to apply the standard boolean operators to Optional booleans, we often end up with a syntax unnecessarily crowded with unwrapping operations. By taking a cue from the world of three-valued logics, we can define a couple of operators that make working with Bool? values much nicer.

import Foundation

func && (lhs: Bool?, rhs: Bool?) -> Bool? {
    switch (lhs, rhs) {
    case (false, _), (_, false):
        return false
    case let (unwrapLhs?, unwrapRhs?):
        return unwrapLhs && unwrapRhs
    default:
        return nil
    }
}

func || (lhs: Bool?, rhs: Bool?) -> Bool? {
    switch (lhs, rhs) {
    case (true, _), (_, true):
        return true
    case let (unwrapLhs?, unwrapRhs?):
        return unwrapLhs || unwrapRhs
    default:
        return nil
    }
}

false && nil // false
true && nil // nil
[true, nil, false].reduce(true, &&) // false

nil || true // true
nil || false // nil
[true, nil, false].reduce(false, ||) // true

Removing duplicate values from a Sequence

Transforming a Sequence in order to remove all the duplicate values it contains is a classic use case. To implement it, one could be tempted to transform the Sequence into a Set, then back to an Array. The downside with this approach is that it will not preserve the order of the sequence, which can definitely be a dealbreaker. Using reduce() it is possible to provide a concise implementation that preserves ordering:

import Foundation

extension Sequence where Element: Equatable {
    func duplicatesRemoved() -> [Element] {
        return reduce([], { $0.contains($1) ? $0 : $0 + [$1] })
    }
}

let data = [2, 5, 2, 3, 6, 5, 2]

data.duplicatesRemoved() // [2, 5, 3, 6]
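
If performance matters, the same order-preserving behavior can also be written with reduce(into:), as discussed in the tip about optimizing reduce() above. A possible variant (the name duplicatesRemovedFaster is just for illustration):

extension Sequence where Element: Equatable {
    func duplicatesRemovedFaster() -> [Element] {
        return reduce(into: []) { result, element in
            if !result.contains(element) { result.append(element) }
        }
    }
}

data.duplicatesRemovedFaster() // [2, 5, 3, 6]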

Shorter syntax to deal with optional strings

Optional strings are very common in Swift code, for instance many objects from UIKit expose the text they display as a String?. Many times you will need to manipulate this data as an unwrapped String, with a default value set to the empty string for nil cases.

While the nil-coalescing operator (i.e. ??) is a perfectly fine way to achieve this goal, defining a computed variable like orEmpty can help a lot in cleaning up the syntax.

import Foundation
import UIKit

extension Optional where Wrapped == String {
    var orEmpty: String {
        switch self {
        case .some(let value):
            return value
        case .none:
            return ""
        }
    }
}

func doesNotWorkWithOptionalString(_ param: String) {
    // do something with `param`
}

let label = UILabel()
label.text = "This is some text."

doesNotWorkWithOptionalString(label.text.orEmpty)

Encapsulating background computation and UI update

Every seasoned iOS developer knows it: objects from UIKit can only be accessed from the main thread. Any attempt to access them from a background thread is a guaranteed crash.

Still, running a costly computation in the background, and then using its result to update the UI, is a common pattern.

In such cases you can rely on asyncUI to encapsulate all the boilerplate code.

import Foundation
import UIKit

func asyncUI<T>(_ computation: @autoclosure @escaping () -> T, qos: DispatchQoS.QoSClass = .userInitiated, _ completion: @escaping (T) -> Void) {
    DispatchQueue.global(qos: qos).async {
        let value = computation()
        DispatchQueue.main.async {
            completion(value)
        }
    }
}

let label = UILabel()

func costlyComputation() -> Int { return (0..<10_000).reduce(0, +) }

asyncUI(costlyComputation()) { value in
    label.text = "\(value)"
}

Retrieving all the necessary data to build a debug view

A debug view, from which any controller of an app can be instantiated and pushed on the navigation stack, has the potential to bring some real value to a development process. A requirement to build such a view is to have a list of all the classes from a given Bundle that inherit from UIViewController. With the following extension, retrieving this list becomes a piece of cake 🍰

import Foundation
import UIKit
import ObjectiveC

extension Bundle {
    func viewControllerTypes() -> [UIViewController.Type] {
        guard let bundlePath = self.executablePath else { return [] }
        
        var size: UInt32 = 0
        var rawClassNames: UnsafeMutablePointer<UnsafePointer<Int8>>!
        var parsedClassNames = [String]()
        
        rawClassNames = objc_copyClassNamesForImage(bundlePath, &size)
        
        for index in 0..<size {
            let className = rawClassNames[Int(index)]
            
            if let name = NSString.init(utf8String:className) as String?,
                NSClassFromString(name) is UIViewController.Type {
                parsedClassNames.append(name)
            }
        }
        
        return parsedClassNames
            .sorted()
            .compactMap { NSClassFromString($0) as? UIViewController.Type }
    }
}

// Fetch all view controller types in UIKit
Bundle(for: UIViewController.self).viewControllerTypes()

I share the credit for this tip with Benoît Caron.

Defining a function to map over dictionaries

Update: as it turns out, map is actually a really bad name for this function, because it does not preserve composition of transformations, a property that is required to fit the definition of a real map function.

Surprisingly enough, the standard library doesn't define a map() function for dictionaries that allows us to map both keys and values into a new Dictionary. Nevertheless, such a function can be helpful, for instance when converting data across different frameworks.

import Foundation

extension Dictionary {
    func map<T: Hashable, U>(_ transform: (Key, Value) throws -> (T, U)) rethrows -> [T: U] {
        var result: [T: U] = [:]
        
        for (key, value) in self {
            let (transformedKey, transformedValue) = try transform(key, value)
            result[transformedKey] = transformedValue
        }
        
        return result
    }
}

let data = [0: 5, 1: 6, 2: 7]
data.map { ("\($0)", $1 * $1) } // ["2": 49, "0": 25, "1": 36]

A shorter syntax to remove nil values

Swift provides the function compactMap(), that can be used to remove nil values from a Sequence of optionals when calling it with an argument that just returns its parameter (i.e. compactMap { $0 }). Still, for such use cases it would be nice to get rid of the trailing closure.

The implementation isn't as straightforward as your usual extension, but once it has been written, the call site definitely gets cleaner 👌

import Foundation

protocol OptionalConvertible {
    associatedtype Wrapped
    func asOptional() -> Wrapped?
}

extension Optional: OptionalConvertible {
    func asOptional() -> Wrapped? {
        return self
    }
}

extension Sequence where Element: OptionalConvertible {
    func compacted() -> [Element.Wrapped] {
        return compactMap { $0.asOptional() }
    }
}

let data = [nil, 1, 2, nil, 3, 5, nil, 8, nil]
data.compacted() // [1, 2, 3, 5, 8]

Dealing with expirable values

It might happen that your code has to deal with values that come with an expiration date. In a game, it could be a score multiplier that will only last for 30 seconds. Or it could be an authentication token for an API, with a 15 minutes lifespan. In both instances you can rely on the type Expirable to encapsulate the expiration logic.

import Foundation

struct Expirable<T> {
    private var innerValue: T
    private(set) var expirationDate: Date
    
    var value: T? {
        return hasExpired() ? nil : innerValue
    }
    
    init(value: T, expirationDate: Date) {
        self.innerValue = value
        self.expirationDate = expirationDate
    }
    
    init(value: T, duration: Double) {
        self.innerValue = value
        self.expirationDate = Date().addingTimeInterval(duration)
    }
    
    func hasExpired() -> Bool {
        return expirationDate < Date()
    }
}

let expirable = Expirable(value: 42, duration: 3)

sleep(2)
expirable.value // 42
sleep(2)
expirable.value // nil

I share the credit for this tip with Benoît Caron.

Using parallelism to speed-up map()

Almost all Apple devices able to run Swift code are powered by a multi-core CPU, so making good use of parallelism is a great way to improve code performance. map() is a perfect candidate for such an optimization, because it is almost trivial to define a parallel implementation.

import Foundation

extension Array {
    func parallelMap<T>(_ transform: (Element) -> T) -> [T] {
        let res = UnsafeMutablePointer<T>.allocate(capacity: count)
        
        DispatchQueue.concurrentPerform(iterations: count) { i in
            res[i] = transform(self[i])
        }
        
        let finalResult = Array<T>(UnsafeBufferPointer(start: res, count: count))
        res.deallocate(capacity: count)
        
        return finalResult
    }
}

let array = (0..<1_000).map { $0 }

func work(_ n: Int) -> Int {
    return (0..<n).reduce(0, +)
}

array.parallelMap { work($0) }

🚨 Make sure to only use parallelMap() when the transform function actually performs some costly computations. Otherwise performance will be systematically slower than using map(), because of the multithreading overhead.
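
When in doubt, it's worth measuring both versions against the actual workload. A quick, ad-hoc comparison might look like this (the timings are obviously machine-dependent):

let start = Date()
_ = array.map { work($0) }
print("map: \(Date().timeIntervalSince(start))s")

let parallelStart = Date()
_ = array.parallelMap { work($0) }
print("parallelMap: \(Date().timeIntervalSince(parallelStart))s")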

Measuring execution time with minimum boilerplate

During development of a feature that performs some heavy computations, it can be helpful to measure just how much time a chunk of code takes to run. The time() function is a nice tool for this purpose, because of how simple it is to add and then to remove when it is no longer needed.

import Foundation

func time(averagedExecutions: Int = 1, _ code: () -> Void) {
    let start = Date()
    for _ in 0..<averagedExecutions { code() }
    let end = Date()
    
    let duration = end.timeIntervalSince(start) / Double(averagedExecutions)
    
    print("time: \(duration)")
}

time {
    (0...10_000).map { $0 * $0 }
}
// time: 0.183973908424377

Running two pieces of code in parallel

Concurrency is definitely one of those topics where the right encapsulation bears the potential to make your life so much easier. For instance, with this piece of code you can easily launch two computations in parallel, and have the results returned in a tuple.

import Foundation

func parallel<T, U>(_ left: @autoclosure () -> T, _ right: @autoclosure () -> U) -> (T, U) {
    var leftRes: T?
    var rightRes: U?
    
    DispatchQueue.concurrentPerform(iterations: 2, execute: { id in
        if id == 0 {
            leftRes = left()
        } else {
            rightRes = right()
        }
    })
    
    return (leftRes!, rightRes!)
}

let values = (1...100_000).map { $0 }

let results = parallel(values.map { $0 * $0 }, values.reduce(0, +))

Making good use of #file, #line and #function

Swift exposes three special variables #file, #line and #function, that are respectively set to the name of the current file, line and function. Those variables become very useful when writing custom logging functions or test predicates.

import Foundation

func log(_ message: String, _ file: String = #file, _ line: Int = #line, _ function: String = #function) {
    print("[\(file):\(line)] \(function) - \(message)")
}

func foo() {
    log("Hello world!")
}

foo() // [MyPlayground.playground:8] foo() - Hello world!

Comparing Optionals through Conditional Conformance

Swift 4.1 has introduced a new feature called Conditional Conformance, which allows a type to implement a protocol only when its generic type also does.

With this addition it becomes easy to let Optional implement Comparable only when Wrapped also implements Comparable:

import Foundation

extension Optional: Comparable where Wrapped: Comparable {
    public static func < (lhs: Optional, rhs: Optional) -> Bool {
        switch (lhs, rhs) {
        case let (lhs?, rhs?):
            return lhs < rhs
        case (nil, _?):
            return true // anything is greater than nil
        case (_?, nil):
            return false // nil is smaller than anything
        case (nil, nil):
            return false // nil is not smaller than itself
        }
    }
}

let data: [Int?] = [8, 4, 3, nil, 12, 4, 2, nil, -5]
data.sorted() // [nil, nil, Optional(-5), Optional(2), Optional(3), Optional(4), Optional(4), Optional(8), Optional(12)]

Safely subscripting a Collection

Any attempt to access an Array beyond its bounds will result in a crash. While it's possible to write conditions such as if index < array.count { array[index] } in order to prevent such crashes, this approach will rapidly become cumbersome.

A great thing is that this condition can be encapsulated in a custom subscript that will work on any Collection:

import Foundation

extension Collection {
    subscript (safe index: Index) -> Element? {
        return indices.contains(index) ? self[index] : nil
    }
}

let data = [1, 3, 4]

data[safe: 1] // Optional(3)
data[safe: 10] // nil

Easier String slicing using ranges

Subscripting a string with a range can be very cumbersome in Swift 4. Let's face it, no one wants to write lines like someString[index(startIndex, offsetBy: 0)..<index(startIndex, offsetBy: 10)] on a regular basis.

Luckily, with the addition of one clever extension, strings can be sliced as easily as arrays 🎉

import Foundation

extension String {
    public subscript(value: CountableClosedRange<Int>) -> Substring {
        get {
            return self[index(startIndex, offsetBy: value.lowerBound)...index(startIndex, offsetBy: value.upperBound)]
        }
    }
    
    public subscript(value: CountableRange<Int>) -> Substring {
        get {
            return self[index(startIndex, offsetBy: value.lowerBound)..<index(startIndex, offsetBy: value.upperBound)]
        }
    }
    
    public subscript(value: PartialRangeUpTo<Int>) -> Substring {
        get {
            return self[..<index(startIndex, offsetBy: value.upperBound)]
        }
    }
    
    public subscript(value: PartialRangeThrough<Int>) -> Substring {
        get {
            return self[...index(startIndex, offsetBy: value.upperBound)]
        }
    }
    
    public subscript(value: PartialRangeFrom<Int>) -> Substring {
        get {
            return self[index(startIndex, offsetBy: value.lowerBound)...]
        }
    }
}

let data = "This is a string!"

data[..<4]  // "This"
data[5..<9] // "is a"
data[10...] // "string!"

Concise syntax for sorting using a KeyPath

By using a KeyPath along with a generic type, a very clean and concise syntax for sorting data can be implemented:

import Foundation

extension Sequence {
    func sorted<T: Comparable>(by attribute: KeyPath<Element, T>) -> [Element] {
        return sorted(by: { $0[keyPath: attribute] < $1[keyPath: attribute] })
    }
}

let data = ["Some", "words", "of", "different", "lengths"]

data.sorted(by: \.count) // ["of", "Some", "words", "lengths", "different"]
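
The same pattern extends naturally to other orderings; for instance, here is a possible descending variant (the name sortedDescending is just for illustration):

extension Sequence {
    func sortedDescending<T: Comparable>(by attribute: KeyPath<Element, T>) -> [Element] {
        return sorted(by: { $0[keyPath: attribute] > $1[keyPath: attribute] })
    }
}

data.sortedDescending(by: \.count) // ["different", "lengths", "words", "Some", "of"]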

If you like this syntax, make sure to checkout KeyPathKit!

Manufacturing cache-efficient versions of pure functions

By capturing a local variable in a returned closure, it is possible to manufacture cache-efficient versions of pure functions. Be careful though, this trick only works with non-recursive functions!

import Foundation

func cached<In: Hashable, Out>(_ f: @escaping (In) -> Out) -> (In) -> Out {
    var cache = [In: Out]()
    
    return { (input: In) -> Out in
        if let cachedValue = cache[input] {
            return cachedValue
        } else {
            let result = f(input)
            cache[input] = result
            return result
        }
    }
}

let cachedCos = cached { (x: Double) in cos(x) }

cachedCos(.pi * 2) // value of cos for 2π is now cached
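
To illustrate the caveat about recursion: when a recursive function is wrapped, its inner recursive calls still go through the original, uncached implementation, so only the outermost call benefits from the cache. A hypothetical example:

func slowFibonacci(_ n: Int) -> Int {
    return n < 2 ? n : slowFibonacci(n - 1) + slowFibonacci(n - 2)
}

let cachedFibonacci = cached(slowFibonacci)

cachedFibonacci(20) // computed the slow way, then stored in the cache
cachedFibonacci(20) // this time, the result comes straight from the cache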

Simplifying complex conditions with pattern matching

When distinguishing between complex boolean conditions, using a switch statement along with pattern matching can be more readable than the classic series of if {} else if {}.

import Foundation

func functionA() { print("A") }
func functionB() { print("B") }
func functionC() { print("C") }

let expr1 = true
let expr2 = false
let expr3 = true

if expr1 && !expr3 {
    functionA()
} else if !expr2 && expr3 {
    functionB()
} else if expr1 && !expr2 && expr3 {
    functionC()
}

switch (expr1, expr2, expr3) {
    
case (true, _, false):
    functionA()
case (_, false, true):
    functionB()
case (true, false, true):
    functionC()
default:
    break
}

Easily generating arrays of data

Using map() on a range makes it easy to generate an array of data.

import Foundation

func randomInt() -> Int { return Int(arc4random()) }

let randomArray = (1...10).map { _ in randomInt() }

Using @autoclosure for cleaner call sites

Using @autoclosure enables the compiler to automatically wrap an argument within a closure, thus allowing for a very clean syntax at call sites.

import UIKit

extension UIView {
    class func animate(withDuration duration: TimeInterval, _ animations: @escaping @autoclosure () -> Void) {
        UIView.animate(withDuration: duration, animations: animations)
    }
}

let view = UIView()

UIView.animate(withDuration: 0.3, view.backgroundColor = .orange)

Observing new and old value with RxSwift

When working with RxSwift, it's very easy to observe both the current and previous value of an observable sequence by simply introducing a shift using skip().

import RxSwift

let values = Observable.of(4, 8, 15, 16, 23, 42)

let newAndOld = Observable.zip(values, values.skip(1)) { (previous: $0, current: $1) }
    .subscribe(onNext: { pair in
        print("current: \(pair.current) - previous: \(pair.previous)")
    })

//current: 8 - previous: 4
//current: 15 - previous: 8
//current: 16 - previous: 15
//current: 23 - previous: 16
//current: 42 - previous: 23

Implicit initialization from literal values

Using protocols such as ExpressibleByStringLiteral, it is possible to provide an init that will automatically be called when a literal value is provided, allowing for a nice and short syntax. This can be very helpful when writing mock or test data.

import Foundation

extension URL: ExpressibleByStringLiteral {
    public init(stringLiteral value: String) {
        self.init(string: value)!
    }
}

let url: URL = "http://www.google.fr"

NSURLConnection.canHandle(URLRequest(url: "http://www.google.fr"))

Achieving systematic validation of data

Through some clever use of Swift private visibility it is possible to define a container that holds any untrusted value (such as a user input) from which the only way to retrieve the value is by making it successfully pass a validation test.

import Foundation

struct Untrusted<T> {
    private(set) var value: T
}

protocol Validator {
    associatedtype T
    static func validation(value: T) -> Bool
}

extension Validator {
    static func validate(untrusted: Untrusted<T>) -> T? {
        if self.validation(value: untrusted.value) {
            return untrusted.value
        } else {
            return nil
        }
    }
}

struct FrenchPhoneNumberValidator: Validator {
    static func validation(value: String) -> Bool {
       return (value.count) == 10 && CharacterSet(charactersIn: value).isSubset(of: CharacterSet.decimalDigits)
    }
}

let validInput = Untrusted(value: "0122334455")
let invalidInput = Untrusted(value: "0123")

FrenchPhoneNumberValidator.validate(untrusted: validInput) // returns "0122334455"
FrenchPhoneNumberValidator.validate(untrusted: invalidInput) // returns nil

Implementing the builder pattern with keypaths

With the addition of keypaths in Swift 4, it is now possible to easily implement the builder pattern, which allows the developer to clearly separate the code that initializes a value from the code that uses it, without the burden of defining a factory method.

import UIKit

protocol With {}

extension With where Self: AnyObject {
    @discardableResult
    func with<T>(_ property: ReferenceWritableKeyPath<Self, T>, setTo value: T) -> Self {
        self[keyPath: property] = value
        return self
    }
}

extension UIView: With {}

let view = UIView()

let label = UILabel()
    .with(\.textColor, setTo: .red)
    .with(\.text, setTo: "Foo")
    .with(\.textAlignment, setTo: .right)
    .with(\.layer.cornerRadius, setTo: 5)

view.addSubview(label)

🚨 The Swift compiler does not perform OS availability checks on properties referenced by keypaths. Any attempt to use a KeyPath for an unavailable property will result in a runtime crash.

I share the credit for this tip with Marion Curtil.

Storing functions rather than values

When a type stores values for the sole purpose of parametrizing its functions, it's possible to store not the values themselves but the function directly, with no discernible difference at the call site.

import Foundation

struct MaxValidator {
    let max: Int
    let strictComparison: Bool
    
    func isValid(_ value: Int) -> Bool {
        return self.strictComparison ? value < self.max : value <= self.max
    }
}

struct MaxValidator2 {
    var isValid: (_ value: Int) -> Bool
    
    init(max: Int, strictComparison: Bool) {
        self.isValid = strictComparison ? { $0 < max } : { $0 <= max }
    }
}

MaxValidator(max: 5, strictComparison: true).isValid(5) // false
MaxValidator2(max: 5, strictComparison: false).isValid(5) // true

Defining operators on function types

Functions are first-class citizens in Swift, so it is perfectly legal to define operators for them.

import Foundation

let firstRange = { (0...3).contains($0) }
let secondRange = { (5...6).contains($0) }

func ||(_ lhs: @escaping (Int) -> Bool, _ rhs: @escaping (Int) -> Bool) -> (Int) -> Bool {
    return { value in
        return lhs(value) || rhs(value)
    }
}

(firstRange || secondRange)(2) // true
(firstRange || secondRange)(4) // false
(firstRange || secondRange)(6) // true

Typealiases for functions

Typealiases are great to express function signatures in a more comprehensive manner, which then enables us to easily define functions that operate on them, resulting in a nice way to write and use some powerful API.

import Foundation

typealias RangeSet = (Int) -> Bool

func union(_ left: @escaping RangeSet, _ right: @escaping RangeSet) -> RangeSet {
    return { left($0) || right($0) }
}

let firstRange = { (0...3).contains($0) }
let secondRange = { (5...6).contains($0) }

let unionRange = union(firstRange, secondRange)

unionRange(2) // true
unionRange(4) // false
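
Following the same pattern, other set operations are easy to add; for instance, here is a possible intersection (the thirdRange example is hypothetical):

func intersection(_ left: @escaping RangeSet, _ right: @escaping RangeSet) -> RangeSet {
    return { left($0) && right($0) }
}

let thirdRange = { (2...5).contains($0) }

let overlappingRange = intersection(firstRange, thirdRange)

overlappingRange(3) // true
overlappingRange(5) // false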

Encapsulating state within a function

By returning a closure that captures a local variable, it's possible to encapsulate a mutable state within a function.

import Foundation

func counterFactory() -> () -> Int {
    var counter = 0
    
    return {
        counter += 1
        return counter
    }
}

let counter = counterFactory()

counter() // returns 1
counter() // returns 2

Generating all cases for an Enum

⚠️ Since Swift 4.2, allCases can now be synthesized at compile-time by simply conforming to the protocol CaseIterable. The implementation below should no longer be used in production code.
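
For reference, here is a minimal sketch of that modern approach, assuming Swift 4.2 or later; the original, pre-4.2 technique follows below.

enum MyModernEnum: CaseIterable { case first; case second; case third; case fourth }

MyModernEnum.allCases // [.first, .second, .third, .fourth]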

Through some clever leveraging of how enums are stored in memory, it is possible to generate an array that contains all the possible cases of an enum. This can prove particularly useful when writing unit tests that consume random data.

import Foundation

enum MyEnum { case first; case second; case third; case fourth }

protocol EnumCollection: Hashable {
    static var allCases: [Self] { get }
}

extension EnumCollection {
    public static var allCases: [Self] {
        var i = 0
        return Array(AnyIterator {
            let next = withUnsafePointer(to: &i) {
                $0.withMemoryRebound(to: Self.self, capacity: 1) { $0.pointee }
            }
            if next.hashValue != i { return nil }
            i += 1
            return next
        })
    }
}

extension MyEnum: EnumCollection { }

MyEnum.allCases // [.first, .second, .third, .fourth]

Using map on optional values

The if-let syntax is a great way to deal with optional values in a safe manner, but at times it can prove to be just a little bit too cumbersome. In such cases, using the Optional.map() function is a nice way to achieve shorter code while retaining safety and readability.

import UIKit

let date: Date? = Date() // or could be nil, doesn't matter
let formatter = DateFormatter()
let label = UILabel()

if let safeDate = date {
    label.text = formatter.string(from: safeDate)
}

label.text = date.map { return formatter.string(from: $0) }

label.text = date.map(formatter.string(from:)) // even shorter, though less readable

Download Details:

Author: vincent-pradeilles
Source code: https://github.com/vincent-pradeilles/swift-tips

License: MIT license
#swift 

I am Developer

1597559012

Multiple File Upload in Laravel 7, 6

In this post, I will show you easy steps for multiple file upload in Laravel 7 and 6.

You will also see how to validate file type and size before uploading to the database in Laravel.

Laravel 7/6 Multiple File Upload

You can easily upload multiple files with validation in a Laravel application using the following steps:

  1. Download Laravel Fresh New Setup
  2. Setup Database Credentials
  3. Generate Migration & Model For File
  4. Make Route For File uploading
  5. Create File Controller & Methods
  6. Create Multiple File Blade View
  7. Run Development Server

https://www.tutsmake.com/laravel-6-multiple-file-upload-with-validation-example/

#laravel multiple file upload validation #multiple file upload in laravel 7 #multiple file upload in laravel 6 #upload multiple files laravel 7 #upload multiple files in laravel 6 #upload multiple files php laravel

How to Install and Correct Dependencies Issues in Ubuntu

What is a Dependency?
A dependency is defined as a file, component, or software package that a program needs to work correctly. Almost every software package we install depends on another piece of code or software to work as expected. Because the overall theme of Linux has always been to have a program do one specific thing, and do it well, many software titles utilize other pieces of software to run correctly.

Introduction
Let’s review what dependencies are and why they are required. We all have, at one point or another, most certainly seen a message from our system when we were installing software regarding “missing dependencies.” This error denotes that a required part of the software package is outdated, unavailable or missing. Let’s review how to address those issues when we come across them on Ubuntu.

#tutorials #apt #apt-cache #apt-get #apt-mark #autoclean #autoremove #cache #clean #cleanall #component #dependencies #dependency errors #dpkg #file #linux #package #ppa #ppa-purge #purge #python #repository #showhold #software #ubuntu #unmet dependencies errors
