Ruby is a multi-platform, open-source, dynamic, object-oriented, interpreted language.

Example Of Ruby on Rails Web App with Julia Link Through ZMQ

This example shows a way to connect a Ruby on Rails web application with Julia through ZMQ.

I had no previous experience with ZMQ, Ruby, Rails, HTML or JavaScript before this, and only basic Julia knowledge. I'm just learning as I go along and I hope this write-up might be helpful to you.

All feedback welcome!

Basically, we create a ZMQ server in Julia that performs a predefined calculation as instructed from a web page. In this case: supply a number and magically multiply it by 3 :)

This example was inspired by a Julia-users forum topic, IJulia, and an excellent samurails blog post on delayed_job. Parts of the code below are taken from that blog.


I wrote this on Ubuntu 14.04. If you are using another platform, you will have to google some installation procedures yourself.

To install Ruby on Rails, I simply use rvm:

curl -sSL | bash -s stable --rails

I prefer working with a PostgreSQL database, but I assume this example will work fine with MySQL or another database. Install PostgreSQL on Ubuntu (I know libpq-dev is necessary, the others I don't know):

sudo apt-get install postgresql-client pgadmin3 postgresql postgresql-contrib libpq-dev

For running the webapp locally you need nodejs installed:

sudo apt-get install nodejs

Of course you need Julia installed; I used v0.3 for this example. In Julia you need to add the ZMQ package with a simple Pkg.add("ZMQ"), and similarly Pkg.add("JSON").

Once you download this example you should install the necessary gems listed in the gem file:

git clone
cd RoR_julia_eg
bundle install

To just run the example, I think you need to initialize and migrate the database:

rake db:create
rake db:migrate

and then see the Run It section below.

Follow along

I'll try to commit the outcome of each rails command and describe it in the commit message. Here's what I did.

First create a new rails app called triple, that will magically triple each input number you give it:

rails new triple --database=postgresql

Include the necessary gems in the Gemfile:

# Gemfile
gem 'simple_form' #for the number input on the web page
gem 'ffi-rzmq'    #for ZMQ client on Rails side
gem 'delayed_job_active_record' #longer calculations need to be run in background
gem 'daemons' #for bin/delayed_job start

then install the dependencies:

cd triple
bundle install

Initiate your database with

rake db:create

You need to configure simple_form with:

rails generate simple_form:install

For delayed_job you need to generate the table to store the job queue and actually execute the migration in the database:

rails generate delayed_job:active_record
rake db:migrate

Now if you start up the server with rails s, there's a welcome screen at http://localhost:3000/.

Now let's create the app. We start by adding a controller.

rails g controller Triples index new

and we set the root for our app:

# triple/config/routes.rb
Rails.application.routes.draw do
  get 'triples/index'
  get 'triples/new'
  root 'triples#index'
end

For the database we will use a Number model with a value and a result field for the input and output (= 3 * input), as well as a boolean field calculated to indicate whether the calculation has finished:

rails g model number value:float result:float calculated:boolean

In the migration file that was created, we add a false default value to the calculated field. That line in triple/db/migrate/20150209001428_create_numbers.rb now becomes:

t.boolean :calculated, default: false

Execute the migration in the database with the usual:

rake db:migrate


A quick note on ZMQ, a high-performance asynchronous and distributed messaging library. That's a mouthful! Basically, it allows you to send messages between different processes, programs or computers; that's how I look at it. And it does most of the dirty work for you. You just open a socket on one end, connect to it somewhere else, and it just works.
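To make the request-reply idea concrete before any real sockets appear, here's a toy in-process stand-in using plain Ruby Queues and a thread. This is only an analogy, not ZMQ, which does the same dance across processes or machines:

```ruby
require 'thread'

# Toy in-process stand-in for a REQ-REP pair: the "server" thread blocks
# waiting for a request and answers it, like a ZMQ REP socket would.
requests, replies = Queue.new, Queue.new

server = Thread.new do
  x = requests.pop      # like ZMQ.recv on the REP socket
  replies << 3 * x      # like ZMQ.send of the reply
end

requests << 5           # like send_string on the REQ socket
puts replies.pop        # 15 -- like recv of the reply
server.join
```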

Here I'll use JSON to structure the message. We'll use the REQ-REP pattern. We'll start a julia server as a REP service and later connect to it from Rails as a REQ.
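The messages themselves are tiny JSON objects. Here's the round trip sketched in plain Ruby with the json stdlib; the field names value and result match the server and client code in this write-up:

```ruby
require 'json'

# The REQ side (Rails) sends the input value as JSON...
request = { value: 14.0 }.to_json
puts request                          # {"value":14.0}

# ...and the REP side (Julia) parses it and answers with the tripled result.
parsed = JSON.parse(request)
reply  = { result: 3 * parsed["value"] }.to_json
puts reply                            # {"result":42.0}
```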

Here's the Julia server (file zmq_server.jl):

using JSON
using ZMQ

const ctx = Context()
sock = Socket(ctx, REP)
ZMQ.bind(sock, "tcp://*:7788")

function triple(x)
    return 3x
end

while true
    println("Server running.")
    msg = JSON.parse(bytestring(ZMQ.recv(sock)))
    @show result = triple(float(msg["value"]))
    ZMQ.send(sock, JSON.json({"result"=>result}))
end

and in the Number model of the web app we will define a calculate method that connects to this server:

require 'ffi-rzmq'
require 'json'

class Number < ActiveRecord::Base
  def calculate
    context = ZMQ::Context.new
    sock = context.socket(ZMQ::REQ)
    # connect to the Julia ZMQ server (port from zmq_server.jl)
    sock.connect('tcp://localhost:7788')

    msg_send = {:value => value}.to_json
    sock.send_string msg_send

    msg_recv = ''
    sock.recv_string(msg_recv)

    result = JSON.parse(msg_recv)["result"]
    update_column :result, result
    update_column :calculated, true
  end
  handle_asynchronously :calculate
end

Notice the handle_asynchronously command from delayed_job, which will run this in the background, non-blocking so there will not be a web page time-out.
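Why bother with a background job at all? Because the web request must return before the calculation finishes. As a toy illustration in plain Ruby threads (not delayed_job, which uses a database-backed queue and a separate worker process):

```ruby
require 'thread'

results = Queue.new

# Kick the slow work (pretend it's the ZMQ round-trip to Julia) off the
# request path; delayed_job achieves the same with a persisted job queue.
worker = Thread.new do
  sleep 0.1          # stand-in for a long calculation
  results << 3 * 14
end

puts "request returned immediately"   # the web response is not blocked
worker.join                           # a real worker process keeps running
puts results.pop                      # 42
```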


Now back to the web app. This is the tricky part (at least for me). This is all in one commit, because I had to figure it out myself and these files are quite interconnected.

Rails apparently follows the Model-view-controller pattern.

Here we will use one model: Number with a single method that is just the client code from the section above.

There is one controller Triples with a few methods:

  • index is the starting point and lists all calculated Numbers so far
  • new is empty
  • calc creates a new Number and initiates the calculation
  • show allows you to show the result
  • status checks whether the calculation is finished and will be used by the javascript

There are a few basic views:

  • index lists the results so far and links to new. This is the root.
  • new handles the user input form that will point to calc
  • calc starts the calculation in Julia and uses a javascript to poll through the status controller method whether it is finished and then links to show
  • show just shows the result and links back to index
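The polling only needs a tiny payload from the status action: has calculated flipped to true yet? Here's a sketch of what such a JSON response could look like (the exact payload shape and the status_payload helper are my assumptions, not copied from the app):

```ruby
require 'json'

# Hypothetical shape of the `status` response the polling script checks:
# it just reports whether the Number's `calculated` flag is set.
def status_payload(number)
  { id: number[:id], calculated: number[:calculated] }.to_json
end

puts status_payload(id: 1, calculated: false)  # {"id":1,"calculated":false}
puts status_payload(id: 1, calculated: true)   # {"id":1,"calculated":true}
```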

Then there is the routes file (config/routes.rb), which I don't fully understand but somehow got to work.

And finally there's the JavaScript (app/javascripts/triples.coffee). I don't know how this works; I just copied it from the blog mentioned in the beginning.

Run it

Start julia server:

../RoR_julia_eg$ julia zmq_server.jl

and while you keep this running, start the delayed_job in another terminal (I couldn't get it working with daemons, so I used rake):

../RoR_julia_eg/triple$ rake jobs:work

And finally in a third terminal start the Rails server

../RoR_julia_eg/triple$ rails s

Surf with your browser to http://localhost:3000/ and voilà, it works!


This is my first attempt at connecting a web app with Julia. I'm happy to see it working. I hope this has been useful for you and again, feedback is a gift, so feel free to tell me what you think by opening an issue, even just for comments.

Download Details:

Author: Ken-B
Source Code: 
License: MIT license

#julia #ruby #rails 

Rupert Beatty


Tabnine-vscode: Visual Studio Code Client for Tabnine


AI assistant for software developers

Code faster with AI code completions

Tabnine main completions 

What’s Tabnine

Tabnine is an AI code assistant that makes you a better developer. Tabnine will increase your development velocity with real-time code completions in all the most popular coding languages and IDEs.

Whether you call it IntelliSense, IntelliCode, autocomplete, AI-assisted code completion, AI-powered code completion, AI copilot, AI code snippets, code suggestion, code prediction, code hinting, or content assist, using Tabnine can massively impact your coding velocity, significantly cutting down your coding time.

Under the hood / Tabnine technology

Tabnine is powered by multiple language-specialized machine learning models that were pre-trained from the ground up on code. All of Tabnine’s AI models are trained on open-source code with permissive licenses. Tabnine's AI completions can be run on a developer's laptop, on a server behind your firewall, or in the cloud.

Complete code privacy

Your code always remains private. 

Tabnine NEVER stores or shares any of your code. Any action that shares your code with the Tabnine servers for the purpose of training team models requires explicit opt-in. Tabnine does not retain any user code beyond the immediate time frame required for training models. Any team model created by Tabnine is only accessible to your team members.

Trained on open-source code with permissive licenses

Tabnine only uses open-source code with permissive licenses for our Public Code trained AI model (MIT, Apache 2.0, BSD-2-Clause, BSD-3-Clause). Whether you’re using Tabnine’s Pro plan or our Starter plan, your code and AI data are NEVER used to train any models other than your own team models.
Learn more

Tabnine Pro - whole line, full function, and natural language to code completions

You’re in control - As you type, Tabnine Pro serves whole-line, full-function, and even natural language to code completions. You can accept your whole line completion or keep typing to get more real-time alternatives that keep adapting to your code context.
Start a Free Tabnine Pro 14-day trial 

Whole line completions - Tabnine serves whole line completions as you code and you can complete an entire line of code with a single keystroke. 

Full-function completions - With just a hint, Tabnine generates your entire function without ever having to exit your editor. 

Natural language to code completions - Describe the script or function you’re looking for, and Tabnine will suggest the right code for you to use. 

Private code repository models - Tabnine Pro offers custom models based on multiple repositories. Connect your GitHub/GitLab/Bitbucket repositories and train your own private AI model to get personalized code completions that match your coding style & patterns.

Start a free trial

Tabnine Enterprise

Everything Tabnine Pro & much more - The perfect solution for businesses with custom needs: 

Automate Remote Knowledge Sharing - Share knowledge effortlessly across countries and time zones. Tabnine learns your code patterns, providing expert guidance to every member of your team at any time of day. 

Improve Code Quality & Consistency - Tabnine improves code consistency across your entire project, suggesting completions that align with your best practices for code that’s easier to read, manage, and maintain. 

Accelerate Developer Onboarding - Tabnine’s AI assistant helps speed new team members through the onboarding process with instant inline coding guidance minimizing the training burden placed on senior developers. 

Reduce Code Review Iterations - Your Tabnine AI assistant will help you get the right code the first time. Tabnine provides code guidance that’s consistent with your team’s best practices, saving costly and frustrating code review iterations. 

Self-hosting - Host Tabnine locally to comply with your business’ security requirements.

Contact us for more information 

Supported languages, frameworks, and IDEs




Something not working the way you hoped? Tabnine Support is always happy to help. Feel free to contact us anytime at 



Download Details:

Author: Codota
Source Code: 
License: MIT license

#swift #javascript #ruby #python #java #bash 



Dap-mode: Emacs Debug Adapter Protocol



Emacs client/library for the Debug Adapter Protocol, a wire protocol for communication between a client and a debug server. It's similar to LSP, but provides integration with debug servers.

Project status

The API is considered unstable until the 1.0 release is out. It is tested against Java, Python, Ruby, Elixir and LLDB (C/C++/Objective-C/Swift).


The main entry points are dap-debug and dap-debug-edit-template. The first one asks for a registered debug template and starts the configuration using the default values for that particular configuration. The latter creates a debug template which could be customized before running. dap-debug-edit-template will prepare a template declaration inside a temporary buffer. You should execute this code using C-M-x for the changes to apply. You should also copy this code into your Emacs configuration if you wish to make it persistent.

dap-mode also provides a hydra with dap-hydra. You can automatically trigger the hydra when the program hits a breakpoint by using the following code.

(add-hook 'dap-stopped-hook
          (lambda (arg) (call-interactively #'dap-hydra)))

Docker usage

You can also use this tool with dockerized debug servers: configure it either with a .dir-locals file or drop an .lsp-docker.yml configuration file (use lsp-docker for general reference). Basically you have one function, dap-docker-register, that performs all the heavy lifting (finding the original debug template, patching it, registering a debug provider, etc.). This function examines a configuration file or falls back to the default configuration (which can be patched using the .dir-locals approach; note that the default configuration doesn’t provide any sane defaults for debugging) and then operates on the combination of the two. This mechanism is the same as in lsp-docker.

Note: currently you cannot use this mode when using a network connection to connect to debuggers (this part is yet to be implemented). Still want to talk to debuggers over the network? In order to do so, have a look at the launch-args patching done by dap-docker--dockerize-start-file-args: you have to somehow assign nil to dap-server-path before it is passed further into session creation.

If you want to stick to a configuration file, take a look at the example below:

    # 'lsp-docker' fields
    - source: "/your/host/source/path" # used both by 'lsp-docker' and 'dap-docker'
      destination: "/your/local/path/inside/a/container" # used both by 'lsp-docker' and 'dap-docker'
    type: docker # only docker is supported
    subtype: image # or 'container'
    name: <docker image or container that has the debugger in> # you can omit this field
    # in this case the 'lsp-docker' ('server' section) image name is used
    enabled: true # you can explicitly disable 'dap-docker' by using 'false'
    provider: <your default language debug provider, double quoted string>
    template: <your default language debug template, double quoted string>
    launch_command: <an explicit command if you want to override a default one provided by the debug provider>
    # e.g. if you have installed a debug server in a different directory, not used with 'container' subtype debuggers




Extending DAP with new Debug servers



  • Daniel Martin - LLDB integration.
  • Kien Nguyen - NodeJS debugger, Edge debuggers, automatic extension installation.
  • Aya Igarashi - Go debugger integration.
  • Nikita Bloshchanevich - launch.json support (+ variable expansion), debugpy support, (with some groundwork by yyoncho) runInTerminal support, various bug fixes.
  • Andrei Mochalov - Docker (debugging in containers) integration.

Download Details:

Author: Emacs-lsp
Source Code: 
License: GPL-3.0 license

#swift #javascript #ruby #java #go #debugger 

Lawrence Lesch


Lsp-mode: Emacs Client/library for The Language Server Protocol

LSP Mode - Language Server Protocol support for Emacs, with support for multiple languages.

Language Server Protocol Support for Emacs

LSP mode


  • ❤️ Community Driven
  • 💎 Fully featured - supports all features in Language Server Protocol v3.14.
  • 🚀 Fast - see performance section.
  • 🌟 Flexible - choose between full-blown IDE with flashy UI or minimal distraction free.
  • ⚙️ Easy to configure - works out of the box and automatically upgrades if additional packages are present.


Client for Language Server Protocol (v3.14). lsp-mode aims to provide IDE-like experience by providing optional integration with the most popular Emacs packages like company, flycheck and projectile.

  • Non-blocking asynchronous calls
  • Real-time Diagnostics/linting via flycheck (recommended) or flymake when Emacs > 26 (requires flymake>=1.0.5)
  • Code completion - company-capf / completion-at-point (note that company-lsp is no longer supported).
  • Hovers - using lsp-ui
  • Code actions - via lsp-execute-code-action, modeline (recommended) or lsp-ui sideline.
  • Code outline - using builtin imenu or helm-imenu
  • Code navigation - using builtin xref, lsp-treemacs tree views or lsp-ui peek functions.
  • Code lens
  • Symbol highlights
  • Formatting
  • Project errors on modeline
  • Debugger - dap-mode
  • Breadcrumb on headerline
  • Helm integration - helm-lsp
  • Ivy integration - lsp-ivy
  • Consult integration - consult-lsp
  • Treemacs integration - lsp-treemacs
  • Semantic tokens as defined by LSP 3.16 (compatible language servers include recent development builds of clangd and rust-analyzer)
  • which-key integration for better discovery
  • iedit
  • dired
  • ido


See also

  • lsp-docker - provide docker image with preconfigured language servers with corresponding emacs configuration.
  • company-box - company frontend with icons.
  • dap-mode - Debugger integration for lsp-mode.
  • eglot - An alternative minimal LSP implementation.
  • which-key - Emacs package that displays available keybindings in popup
  • projectile - Project Interaction Library for Emacs
  • emacs-tree-sitter - Faster, fine-grained code highlighting via tree-sitter.
  • gccemacs - modified Emacs capable of compiling and running Emacs Lisp as native code.


Contributions are very much welcome!

NOTE Documentation for clients is generated from doc comments in the clients themselves (see lsp-doc.el) and some metadata (see lsp-clients.json) so please submit corrections accordingly.

Download Details:

Author: Emacs-lsp
Source Code: 
License: GPL-3.0 license

#typescript #javascript #ruby #python #java 

Hermann Frami


Jets: Ruby on Jets


Ruby and Lambda splat out a baby and that child's name is Jets.

Please watch/star this repo to help grow and support the project.

Upgrading: If you are upgrading Jets, please check on the Upgrading Notes.

What is Ruby on Jets?

Jets is a Ruby Serverless Framework. Jets allows you to create serverless applications with a beautiful language: Ruby. It includes everything required to build an application and deploy it to AWS Lambda.

It is key to understand AWS Lambda and API Gateway to understand Jets conceptually. Jets maps your code to Lambda functions and API Gateway resources.

  • AWS Lambda is Functions as a Service. It allows you to upload and run functions without worrying about the underlying infrastructure.
  • API Gateway is the routing layer for Lambda. It is used to route REST URL endpoints to Lambda functions.

The official documentation is at Ruby on Jets.

Refer to the official docs for more info, but here's a quick intro.

Jets Functions

Jets supports writing AWS Lambda functions with Ruby. You define them in the app/functions folder. A function looks like this:


def lambda_handler(event:, context:)
  puts "hello world"
  {hello: "world"}
end
Here's the function in the Lambda console:

Code Example in AWS Lambda console

Though simple functions are supported by Jets, they do not add as much value as the other ways to write Ruby code with Jets. Classes like Controllers and Jobs add many conveniences and are more powerful to use. We’ll cover them next.

Jets Controllers

A Jets controller handles a web request and renders a response. Here's an example:


class PostsController < ApplicationController
  def index
    # renders Lambda Proxy structure compatible with API Gateway
    render json: {hello: "world", action: "index"}
  end

  def show
    id = params[:id] # params available
    # puts goes to the lambda logs
    puts event # raw lambda event available
    render json: {action: "show", id: id}
  end
end
Helper methods like params provide the parameters from the API Gateway event. The render method renders a Lambda Proxy structure back that API Gateway understands.
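That Lambda Proxy structure is just a hash with statusCode, headers, and a JSON-encoded body. Here's a minimal sketch of building one by hand; the render_json helper name is mine, not a Jets API, but the statusCode/headers/body shape is the contract API Gateway expects back from a Lambda function:

```ruby
require 'json'

# Build an API Gateway Lambda Proxy response by hand, roughly what
# `render json:` produces for you (helper name is hypothetical).
def render_json(data, status: 200)
  {
    "statusCode" => status,
    "headers"    => { "Content-Type" => "application/json" },
    "body"       => JSON.generate(data)
  }
end

response = render_json({hello: "world", action: "index"})
puts response["statusCode"]  # 200
puts response["body"]        # {"hello":"world","action":"index"}
```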

Jets creates Lambda functions for each public method in your controller. Here they are in the Lambda console:

Lambda Functions for each public method in AWS Console

Jets Routing

You connect Lambda functions to API Gateway URL endpoints with a routes file:


Jets.application.routes.draw do
  get  "posts", to: "posts#index"
  get  "posts/new", to: "posts#new"
  get  "posts/:id", to: "posts#show"
  post "posts", to: "posts#create"
  get  "posts/:id/edit", to: "posts#edit"
  put  "posts", to: "posts#update"
  delete  "posts", to: "posts#delete"

  resources :comments # expands to the RESTful routes above

  any "posts/hot", to: "posts#hot" # GET, POST, PUT, etc. requests all work
end

The routes.rb gets translated to API Gateway resources:

API Gateway Resources generated from routes in AWS console

Test your API Gateway endpoints with curl or postman. Note, replace the URL endpoint with the one that is created:

$ curl -s "" | jq .
{
  "hello": "world",
  "action": "index"
}

Jets Jobs

A Jets job handles asynchronous background jobs performed outside of the web request/response cycle. Here's an example:


class HardJob < ApplicationJob
  rate "10 hours" # every 10 hours
  def dig
    puts "done digging"
  end

  cron "0 */12 * * ? *" # every 12 hours
  def lift
    puts "done lifting"
  end
end

HardJob#dig runs every 10 hours and HardJob#lift runs every 12 hours. The rate and cron methods create CloudWatch Event Rules. Example:
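Under the hood, those human-friendly schedules become CloudWatch Events schedule expressions, rate(...) and cron(...). Here's a hypothetical helper (not Jets code) sketching that mapping:

```ruby
# Hypothetical mapping from the DSL arguments above to CloudWatch Events
# schedule expressions; Jets does something along these lines for you.
def schedule_expression(rate: nil, cron: nil)
  if rate
    value, unit = rate.split(' ', 2)
    unit = unit.sub(/s\z/, '') if value == '1'  # CloudWatch wants "1 hour", not "1 hours"
    "rate(#{value} #{unit})"
  elsif cron
    "cron(#{cron})"
  end
end

puts schedule_expression(rate: "10 hours")        # rate(10 hours)
puts schedule_expression(cron: "0 */12 * * ? *")  # cron(0 */12 * * ? *)
```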

CloudWatch Event Rules in AWS Console

Jets Deployment

You can test your application with a local server that mimics API Gateway: Jets Local Server. Once ready, deploying to AWS Lambda is a single command.

jets deploy

After deployment, you can test the Lambda functions with the AWS Lambda console or the CLI.

AWS Lambda Console

Lambda Console

Live Demos

Here are some demos of Jets applications:

Please feel free to add your own example to the jets-examples repo.

Rails Support

Jets Afterburner Mode provides Rails support with little effort. This allows you to run a Rails application on AWS Lambda. Also here's a Tutorial Blog Post: Jets Afterburner: Rails Support.

More Info

For more documentation, check out the official docs: Ruby on Jets. Here's a list of useful links:

Learning Content

Download Details:

Author: Boltops-tools
Source Code: 
License: MIT license

#serverless #ruby #aws #lambda 

Fabiola Auma


The Official ansible RVM Role to install and Manage Your Ruby Versions

What is rvm1-ansible?

It is an Ansible role to install and manage ruby versions using rvm.

Why should you use rvm?

In production it's useful because compiling a new version of ruby can easily take upwards of 10 minutes. That's 10 minutes of your CPU being pegged at 100%.

rvm has pre-compiled binaries for a lot of operating systems. That means you can install ruby in about 1 minute, even on a slow micro instance.

This role even adds the ruby binaries to your system path when doing a system wide install. This allows you to access them as if they were installed without using a version manager while still benefiting from what rvm has to offer.


$ ansible-galaxy install rvm.ruby

Role variables

Below is a list of default values that you can configure:


# Install 1 or more versions of ruby
# The last ruby listed will be set as the default ruby
rvm1_rubies:
  - 'ruby-2.3.1'

# Install the bundler gem
rvm1_bundler_install: True

# Delete a specific version of ruby (ie. ruby-2.1.0)
rvm1_delete_ruby: ''

# Install path for rvm (defaults to single user)
# NOTE: If you are doing a ROOT BASED INSTALL then make sure you
#       set the install path to something like '/usr/local/rvm'
rvm1_install_path: '~/.rvm'

# Add or remove any install flags
# NOTE: If you are doing a ROOT BASED INSTALL then
#       make sure you REMOVE the --user-install flag below
rvm1_install_flags: '--auto-dotfiles  --user-install'

# Add additional ruby install flags

# Set the owner for the rvm directory
# NOTE: If you are doing a ROOT BASED INSTALL then
#       make sure you set rvm1_user to 'root'
rvm1_user: 'ubuntu'

# URL for the latest installer script
rvm1_rvm_latest_installer: ''

# rvm version to use
rvm1_rvm_version: 'stable'

# Check and update rvm, disabling this will force rvm to never update
rvm1_rvm_check_for_updates: True

# GPG key verification, use an empty string if you want to skip this
# Note: Unless you know what you're doing, just keep it as is
#           Identity proof:
#           PGP message:
rvm1_gpg_keys: '409B6B1796C275462A1703113804BB82D39DC0E3'

# The GPG key server
rvm1_gpg_key_server: 'hkp://'

# autolib mode, see
rvm1_autolib_mode: 3

# Symlink binaries to system path
rvm1_symlink: true

Example playbooks


- name: Configure servers with ruby support for single user
  hosts: all
  roles:
    - { role: rvm.ruby,
        tags: ruby,
        rvm1_rubies: ['ruby-2.3.1'],
        rvm1_user: 'ubuntu'
      }

If you need to pass a list of ruby versions, pass it in an array like so.

- name: Configure servers with ruby support system wide
  hosts: all
  roles:
    - { role: rvm.ruby,
        tags: ruby,
        become: yes,
        rvm1_rubies: ['ruby-2.2.5','ruby-2.3.1'],
        rvm1_install_flags: '--auto-dotfiles',     # Remove --user-install from defaults
        rvm1_install_path: /usr/local/rvm,         # Set to system location
        rvm1_user: root                            # Need root account to access system location
      }

rvm1_rubies must be specified as ruby-x.x.x, so if you want ruby 2.2.5, you will need to pass it in an array: rvm1_rubies: ['ruby-2.2.5'].

System wide installation

The above example would set up ruby system wide. It's very important that you run the play as root, because it will need to write to the system location specified by rvm1_install_path.

To the same user as ansible_user

In this case, just override rvm1_install_path; the --user-install flag is set by default:

rvm1_install_flags: '--auto-dotfiles --user-install'
rvm1_install_path: '/home/{{ ansible_user }}/.rvm'

To a user that is not ansible_user

You will need root access here because you will be writing outside the ansible user's home directory. Other than that it's the same as above, except you will supply a different user account:

rvm1_install_flags: '--auto-dotfiles --user-install'
rvm1_install_path: '/home/someuser/.rvm'

Quick notes about rvm1_user

In some cases you may want the rvm folder and its files to be owned by a specific user instead of root. Simply set rvm1_user: 'foo' and when ruby gets installed it will ensure that foo owns the rvm directory.

This would use Ansible's become under the hood. In case of failures (e.g. "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user"), check the Ansible documentation for details and possible solutions.

Upgrading and removing old versions of ruby

A common workflow for upgrading your ruby version would be:

  1. Install the new version
  2. Run your application role so that bundle install re-installs your gems
  3. Delete the previous version of ruby

Leverage ansible's --extra-vars

Just add --extra-vars 'rvm1_delete_ruby=ruby-2.1.0' to the end of your playbook command and that version will be removed.


Potentially, any Linux/Unix system supported by Ansible and satisfying the RVM prerequisites should work.

Compatibility with Linux distributions based on the Debian, Ubuntu or Red Hat families is actively tested.

The continuous integration setup of this project currently covers the following platforms:

  • CentOS 6, 7 and 8
  • Debian 8, 9 and 10
  • Ubuntu 14.04, 16.04, 18.04 and 20.04

Ansible galaxy

You can find it on the official ansible galaxy if you want to rate it.

Download Details:

Author: rvm
Source Code:

License: MIT license

#ansible #ruby 



Ansible: Ruby on Rails Server


Use this ansible playbook to setup a fresh server with the following components:

  • Nginx
  • Puma App Server
  • Certbot (Let's Encrypt)
  • MySQL
  • Memcached
  • Redis
  • Sidekiq
  • Monit (to keep Puma and Sidekiq running)
  • Elasticsearch
  • ruby-install
  • chruby
  • Directories to deploy Rails with Capistrano and Puma App Server (see below)
  • Swapfile (useful for small DO instances)
  • Locales
  • Tools (tmux, vim, htop, git, wget, curl etc.)

Prerequisites & Config

Rename hosts.example to hosts and modify the contents.

Rename group_vars/all.example to group_vars/all and modify the contents.

There are a bunch of things you can set in group_vars/all. Don't forget to add your host address to hosts.

Install Playbook

Run ansible-playbook site.yml -i hosts.

Rails Setup

This is just a loose guideline for what you need to deploy your app with this playbook and server config. Please keep in mind that you need to modify some values depending on your setup (especially passwords and paths!).


Add the following gems to your Gemfile and install via bundle install:

group :development do
  gem 'capistrano', '~> 3.6'
  gem 'capistrano-rails', '~> 1.2'
  gem 'capistrano-chruby'
  gem 'capistrano3-puma'
  gem 'capistrano-sidekiq'
  gem 'capistrano-npm'
end


Add the following lines to your Capfile:

# General
require 'capistrano/rails'
require 'capistrano/bundler'
require 'capistrano/rails/migrations'
require 'capistrano/rails/assets'
require 'capistrano/chruby'
require 'capistrano/npm'

# Puma
require 'capistrano/puma'
install_plugin Capistrano::Puma  # Default puma tasks
install_plugin Capistrano::Puma::Workers  # if you want to control the workers (in cluster mode)
# install_plugin Capistrano::Puma::Jungle # if you need the jungle tasks
install_plugin Capistrano::Puma::Monit  # if you need the monit tasks
install_plugin Capistrano::Puma::Nginx  # if you want to upload a nginx site template

# Sidekiq
require 'capistrano/sidekiq'
require 'capistrano/sidekiq/monit'


Please edit "deploy_app_name", "repo_url", "deploy_to" and "chruby_ruby" (if you've changed the Ruby version in group_vars/all).

Your config/deploy.rb should look similar to this example:

set :application, 'deploy_app_name'
set :repo_url, 'YOUR_GIT_REPO'
set :deploy_to, '/home/deploy/deploy_app_name'
set :chruby_ruby, 'ruby-2.3.3'
set :nginx_use_ssl, true
set :puma_init_active_record, true
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')
set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'public/system')
set :keep_releases, 5


Add the target host:

server 'your_host_address', user: 'deploy', roles: %w{app db web}


Feel free to send feedback or report problems via GitHub issues!

Download Details:

Author: aleks
Source Code:

#ansible #ruby-on-rails 

Elian Harber


Dependabot-core: Dependabot's Update PR Creation Logic


Welcome to the public home of Dependabot. This repository serves 2 purposes:

  1. It houses the source code for Dependabot Core, which is the heart of Dependabot. Dependabot Core handles the logic for updating dependencies on GitHub (including GitHub Enterprise), GitLab, and Azure DevOps. If you want to host your own automated dependency update bot then this repo should give you the tools you need. A reference implementation is available here.
  2. It is the public issue tracker for issues related to Dependabot's updating logic. For issues about Dependabot the service, please contact GitHub support. While the distinction between Dependabot Core and the service can be fuzzy, a good rule of thumb is: if your issue is with the diff that Dependabot created, it belongs here; for most other things, the GitHub support team is best equipped to help you.

Got feedback?

Contributing to Dependabot

Currently, the Dependabot team is not accepting support for new ecosystems. We are prioritising upgrades to already supported ecosystems at this time.

Please refer to the CONTRIBUTING guidelines for more information.

Disclosing security issues

If you believe you have found a security vulnerability in Dependabot please submit the vulnerability to GitHub Security Bug Bounty so that we can resolve the issue before it is disclosed publicly.

What's in this repo?

Dependabot Core is a collection of packages for automating dependency updating in Ruby, JavaScript, Python, PHP, Elixir, Elm, Go, Rust, Java and .NET. It can also update git submodules, Docker files, and Terraform files. Highlights include:

  • Logic to check for the latest version of a dependency that's resolvable given a project's other dependencies
  • Logic to generate updated manifest and lockfiles for a new dependency version
  • Logic to find changelogs, release notes, and commits for a dependency update

Other Dependabot resources

In addition to this library, you may be interested in the dependabot-script repo, which provides a collection of scripts that use this library to update dependencies on GitHub Enterprise, GitLab or Azure DevOps

Cloning the repository

Clone the repository with Git using:

git clone

On Windows this might fail with "Filename too long". To solve this, run the following commands in the cloned Git repository:

  1. git config core.longpaths true
  2. git reset --hard

You can read more about this in the Git for Windows wiki.


To run all of Dependabot Core, you'll need Ruby, Python, PHP, Elixir, Node, Go, Elm, and Rust installed. However, if you just wish to run it for a single language you can get away with just having that language and Ruby.

While you can run Dependabot Core without Docker, we provide a development Dockerfile that bakes in all required dependencies. In most cases this is the best way to work with the project.

Running with Docker

Start by pulling the developer image from the GitHub Container Registry and then start the developer shell:

$ docker pull
$ docker tag dependabot/dependabot-core-development
$ bin/docker-dev-shell
=> running docker development shell
[dependabot-core-dev] ~/dependabot-core $

Dry run script

You can use the "dry-run" script to simulate a dependency update job, printing the diff that would be generated to the terminal. It takes two positional arguments: the package manager and the GitHub repo name (including the account):

$ bin/docker-dev-shell
=> running docker development shell
$ bin/dry-run.rb go_modules rsc/quote
=> fetching dependency files
=> parsing dependency files
=> updating 2 dependencies

Note: If the dependency files are not in the top-level directory, then you must also pass the path to the subdirectory as an argument: --dir /<subdirectory>.

Running the tests

Run the tests by running rspec spec inside each of the packages, e.g.

$ cd go_modules
$ bundle exec rspec spec

Style is enforced by RuboCop. To check for style violations, simply run rubocop in each of the packages, e.g.

$ cd go_modules
$ bundle exec rubocop

Making changes to native helpers

Several Dependabot packages make use of 'native helpers', small executables in their host language.

Changes to these files are not automatically reflected inside the development container.

Once you have made any edits to the helper files, run the appropriate build script to update the installed version with your changes like so:

$ bin/docker-dev-shell
=> running docker development shell
$ bundler/helpers/v1/build
$ bin/dry-run.rb bundler dependabot/demo --dir="/ruby"

Debugging native helpers

When you're making changes to native helpers or debugging a customer issue you often need to peek inside these scripts that run in a separate process.

Print all log statements from native helpers:

DEBUG_HELPERS=true bin/dry-run.rb bundler dependabot/demo --dir="/ruby"

Pause execution to debug a single native helper function:

DEBUG_FUNCTION=parsed_gemfile bin/dry-run.rb bundler dependabot/demo --dir="/ruby"

The function maps to a native helper function name, for example, one of the functions in bundler/helpers/v2/lib/functions.rb.

When this function is executed, a debugger is inserted, pausing execution of the bin/dry-run.rb script. This leaves the current update's tmp directory in place, allowing you to cd into the directory and run the native helper function directly:

 DEBUG_FUNCTION=parsed_gemfile bin/dry-run.rb bundler dependabot/demo --dir="/ruby"
=> fetching dependency files
=> dumping fetched dependency files: ./dry-run/dependabot/demo/ruby
=> parsing dependency files
$ cd /home/dependabot/dependabot-core/tmp/dependabot_TEMP/ruby && echo "{\"function\":\"parsed_gemfile\",\"args\":{\"gemfile_name\":\"Gemfile\",\"lockfile_name\":\"Gemfile.lock\",\"dir\":\"/home/dependabot/dependabot-core/tmp/dependabot_TEMP/ruby\"}}" | BUNDLER_VERSION=1.17.3 BUNDLE_GEMFILE=/opt/bundler/v1/Gemfile GEM_HOME=/opt/bundler/v1/.bundle bundle exec ruby /opt/bundler/v1/run.rb

Copy and run the cd... command:

cd /home/dependabot/dependabot-core/tmp/dependabot_TEMP/ruby && echo "{\"function\":\"parsed_gemfile\",\"args\":{\"gemfile_name\":\"Gemfile\",\"lockfile_name\":\"Gemfile.lock\",\"dir\":\"/home/dependabot/dependabot-core/tmp/dependabot_TEMP/ruby\"}}" | BUNDLER_VERSION=1.17.3 BUNDLE_GEMFILE=/opt/bundler/v1/Gemfile GEM_HOME=/opt/bundler/v1/.bundle bundle exec ruby /opt/bundler/v1/run.rb

This should log out the output of the parsed_gemfile function:

{"result":[{"name":"business","requirement":"~> 1.0.0","groups":["default"],"source":null,"type":"runtime"},{"name":"uk_phone_numbers","requirement":"~> 0.1.0","groups":["default"],"source":null,"type":"runtime"}]}

Edit the native helper function and re-run the above, for example: vi /opt/bundler/v1/lib/functions/file_parser.rb.
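The request/response protocol used above, a JSON object naming a function and its args on stdin and a JSON object under a "result" key on stdout, can be sketched in plain Ruby. This is illustrative only, not the real run.rb; the parsed_gemfile stub below is a stand-in:

```ruby
require "json"

# Minimal sketch of the helper protocol: read a JSON request naming a
# function and its args, dispatch to a registered lambda, and reply with
# a JSON object under the "result" key. The real run.rb does much more
# (error handling, actual Gemfile parsing); "parsed_gemfile" is a stub.
FUNCTIONS = {
  "parsed_gemfile" => ->(args) { "would parse #{args["gemfile_name"]}" }
}

def handle(request_json)
  request = JSON.parse(request_json)
  fn = FUNCTIONS.fetch(request["function"])
  JSON.generate("result" => fn.call(request["args"]))
end

puts handle('{"function":"parsed_gemfile","args":{"gemfile_name":"Gemfile"}}')
# prints {"result":"would parse Gemfile"}
```

This mirrors why the `echo "{...}" | bundle exec ruby run.rb` invocation above works: the helper is just a stdin-to-stdout JSON dispatcher.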

Building the development image from source

The developer shell uses volume mounts to incorporate your local changes to Dependabot's source code. If you need to make changes to the development shell itself, you can rebuild it locally.

Start by building the initial Dependabot Core image, or pull it from the Docker registry.

$ docker pull dependabot/dependabot-core # OR
$ docker build -f Dockerfile -t dependabot/dependabot-core . # This may take a while

Once you have the base Docker image, you can build and run the development container using the docker-dev-shell script. The script will automatically build the container if it's not present and can be forced to rebuild with the --rebuild flag. The image includes all dependencies, and the script runs the image, mounting the local copy of Dependabot Core so changes made locally will be reflected inside the container. This means you can continue to use your editor of choice while running the tests inside the container.

$ bin/docker-dev-shell
=> building image from Dockerfile.development
=> running docker development shell
[dependabot-core-dev] ~/dependabot-core $
[dependabot-core-dev] ~/dependabot-core $ cd go_modules && rspec spec # to run tests for a particular package

Running locally on your computer

To work with Dependabot packages on your local machine you will need Ruby and the package's specific language installed.

For some languages there are additional steps required, please refer to the README file in each package.

Debugging with Visual Studio Code and Docker

There's built-in support for leveraging Visual Studio Code's ability for debugging inside a Docker container. After installing the recommended Remote - Containers extension, simply press Ctrl+Shift+P (⇧⌘P on macOS) and select Remote-Containers: Reopen in Container. You can also access the dropdown by clicking on the green button in the bottom-left corner of the editor. If the development Docker image isn't present on your machine, it will be built automatically. Once that's finished, start the Debug Dry Run configuration (F5) and you'll be prompted to select a package manager and a repository to perform a dry run on. Feel free to place breakpoints on the code.

There is also support to debug individual test runs by running the Debug Tests configuration (F5) and you'll be prompted to select an ecosystem and provide an rspec path.

⚠️ The Clone Repository ... commands of the Remote Containers extension are currently missing some functionality and are therefore not supported. You have to clone the repository manually and use the Reopen in Container or Open Folder in Container... command.


Triggering the jobs that will push the new gems is done by following the steps below.

  • Ensure you have the latest merged changes: git checkout main and git pull
  • Generate an updated CHANGELOG, version.rb, and the rest of the needed commands: bin/bump-version.rb patch
  • Edit the CHANGELOG file and remove any entries that aren't needed
  • Run the commands that were output by running bin/bump-version.rb patch


Dependabot Core is a collection of Ruby packages (gems), which contain the logic for updating dependencies in several languages.


The common package contains all general-purpose/shared functionality. For instance, the code for creating pull requests via GitHub's API lives here, as does most of the logic for handling Git dependencies (as most languages support Git dependencies in one way or another). There are also base classes defined for each of the major concerns required to implement support for a language or package manager.


There is a gem for each package manager or language that Dependabot supports. At a minimum, each of these gems will implement the following classes:

  • FileFetcher: Fetches the relevant dependency files for a project (e.g., the Gemfile and Gemfile.lock). See the README for more details.
  • FileParser: Parses a dependency file and extracts a list of dependencies for a project. See the README for more details.
  • UpdateChecker: Checks whether a given dependency is up-to-date. See the README for more details.
  • FileUpdater: Updates a dependency file to use the latest version of a given dependency. See the README for more details.
  • MetadataFinder: Looks up metadata about a dependency, such as its GitHub URL. See the README for more details.
  • Version: Describes the logic for comparing dependency versions. See the hex Version class for an example.
  • Requirement: Describes the format of a dependency requirement (e.g. >= 1.2.3). See the hex Requirement class for an example.
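Of these concerns, Version and Requirement are the simplest to illustrate. The classes below are hypothetical stand-ins, not the real Dependabot API; they just show how an ecosystem's version logic can lean on Ruby's built-in Gem::Version and Gem::Requirement:

```ruby
# Hypothetical sketch, not the real Dependabot classes: version
# comparison and requirement checks built on Ruby's Gem::Version and
# Gem::Requirement (the real per-ecosystem classes add far more logic).
require "rubygems"

class SketchVersion < Gem::Version
end

class SketchRequirement < Gem::Requirement
end

current = SketchVersion.new("1.0.0")
latest  = SketchVersion.new("1.2.3")
req     = SketchRequirement.new("~> 1.0")

puts latest > current                               # true: an update exists
puts req.satisfied_by?(latest)                      # true: "~> 1.0" allows 1.2.3
puts req.satisfied_by?(SketchVersion.new("2.0.0"))  # false: outside "~> 1.0"
```

An UpdateChecker-style concern then boils down to asking whether the latest resolvable version is greater than the current one and still satisfies the project's requirement.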

The high-level flow looks like this:

Dependabot architecture


This is a "meta" gem that simply depends on all the others. If you want to automatically include support for all languages, you can just include this gem and you'll get all you need.


You can profile a dry-run by passing the --profile flag when running it, or tag an rspec test with :profile. This will generate a stackprof-<datetime>.dump file in the tmp/ folder, and you can generate a flamegraph from this by running: stackprof --d3-flamegraph tmp/stackprof-<data or spec name>.dump > tmp/flamegraph.html.

Why is this public?

As the name suggests, Dependabot Core is the core of Dependabot (the rest of the app is pretty much just a UI and database). If we were paranoid about someone stealing our business then we'd be keeping it under lock and key.

Dependabot Core is public because we're more interested in it having an impact than we are in making a buck from it. We'd love you to use Dependabot so that we can continue to develop it, but if you want to build and host your own version then this library should make doing so a lot easier.

If you use Dependabot Core then we'd love to hear what you build!


We use the License Zero Prosperity Public License, which essentially enshrines the following:

  • If you would like to use Dependabot Core in a non-commercial capacity, such as to host a bot at your workplace, then we give you full permission to do so. In fact, we'd love you to and will help and support you however we can.
  • If you would like to add Dependabot's functionality to your for-profit company's offering then we DO NOT give you permission to use Dependabot Core to do so. Please contact us directly to discuss a partnership or licensing arrangement.

If you make a significant contribution to Dependabot Core then you will be asked to transfer the IP of that contribution to GitHub Inc. so that it can be licensed in the same way as the above.


Dependabot and Dependabot Core started life as Bump and Bump Core, back when Harry and Grey were working at GoCardless. We remain grateful for the help and support of GoCardless in helping make Dependabot possible - if you need to collect recurring payments from Europe, check them out.

Download Details:

Author: Dependabot
Source Code: 
License: View license

#go #golang #javascript #ruby #python 

Dependabot-core: Dependabot's Update PR Creation Logic
Fabiola Auma


Ansible Role GitLab: installs GitLab, A Ruby-based Front-end to Git

Ansible Role: GitLab

Installs GitLab, a Ruby-based front-end to Git, on any RedHat/CentOS or Debian/Ubuntu Linux system.

GitLab's default administrator account details are below; be sure to login immediately after installation and change these credentials!




Role Variables

Available variables are listed below, along with default values (see defaults/main.yml):

gitlab_domain: gitlab
gitlab_external_url: "https://{{ gitlab_domain }}/"

The domain and URL at which the GitLab instance will be accessible. This is set as the external_url configuration setting in gitlab.rb, and if you want to run GitLab on a different port (besides 80/443), you can specify the port here (e.g. https://gitlab:8443/ for port 8443).

gitlab_git_data_dir: "/var/opt/gitlab/git-data"

The gitlab_git_data_dir is the location where all the Git repositories will be stored. You can use a shared drive or any path on the system.

gitlab_backup_path: "/var/opt/gitlab/backups"

The gitlab_backup_path is the location where GitLab backups will be stored.

gitlab_edition: "gitlab-ce"

The edition of GitLab to install. Usually either gitlab-ce (Community Edition) or gitlab-ee (Enterprise Edition).

gitlab_version: ''

If you'd like to install a specific version, set the version here (e.g. 11.4.0-ce.0 for Debian/Ubuntu, or 11.4.0-ce.0.el7 for RedHat/CentOS).

gitlab_config_template: "gitlab.rb.j2"

The gitlab.rb.j2 template packaged with this role is meant to be very generic and serve a variety of use cases. However, many people would like to have a much more customized version, and so you can override this role's default template with your own, adding any additional customizations you need. To do this:

  • Create a templates directory at the same level as your playbook.
  • Create a templates\mygitlab.rb.j2 file (just choose a different name from the default template).
  • Set the variable like: gitlab_config_template: mygitlab.rb.j2 (with the name of your custom template).

SSL Configuration.

gitlab_redirect_http_to_https: true
gitlab_ssl_certificate: "/etc/gitlab/ssl/{{ gitlab_domain }}.crt"
gitlab_ssl_certificate_key: "/etc/gitlab/ssl/{{ gitlab_domain }}.key"

GitLab SSL configuration; tells GitLab to redirect normal http requests to https, and the path to the certificate and key (the default values will work for automatic self-signed certificate creation, if set to true in the variable below).

# SSL Self-signed Certificate Configuration.
gitlab_create_self_signed_cert: true
gitlab_self_signed_cert_subj: "/C=US/ST=Missouri/L=Saint Louis/O=IT/CN={{ gitlab_domain }}"

Whether to create a self-signed certificate for serving GitLab over a secure connection. Set gitlab_self_signed_cert_subj according to your locality and organization.

LetsEncrypt Configuration.

gitlab_letsencrypt_enable: false
gitlab_letsencrypt_contact_emails: [""]
gitlab_letsencrypt_auto_renew_hour: 1
gitlab_letsencrypt_auto_renew_minute: 30
gitlab_letsencrypt_auto_renew_day_of_month: "*/7"
gitlab_letsencrypt_auto_renew: true

GitLab LetsEncrypt configuration; tells GitLab whether to request and use a certificate from LetsEncrypt, if gitlab_letsencrypt_enable is set to true. Multiple contact emails can be configured under gitlab_letsencrypt_contact_emails as a list.

# LDAP Configuration.
gitlab_ldap_enabled: false
gitlab_ldap_host: ""
gitlab_ldap_port: "389"
gitlab_ldap_uid: "sAMAccountName"
gitlab_ldap_method: "plain"
gitlab_ldap_bind_dn: "CN=Username,CN=Users,DC=example,DC=com"
gitlab_ldap_password: "password"
gitlab_ldap_base: "DC=example,DC=com"

GitLab LDAP configuration; if gitlab_ldap_enabled is true, the rest of the configuration will tell GitLab how to connect to an LDAP server for centralized authentication.

gitlab_dependencies:
  - openssh-server
  - postfix
  - curl
  - openssl
  - tzdata

Dependencies required by GitLab for certain functionality, like timezone support or email. You may change this list in your own playbook if, for example, you would like to install exim instead of postfix.

gitlab_time_zone: "UTC"

GitLab timezone.

gitlab_backup_keep_time: "604800"

How long to keep local backups, in seconds (the default of 604800 is seven days). Useful if you don't want backups to fill up your drive!

gitlab_download_validate_certs: true

Controls whether to validate certificates when downloading the GitLab installation repository install script.

# Email configuration.
gitlab_email_enabled: false
gitlab_email_from: ""
gitlab_email_display_name: "Gitlab"
gitlab_email_reply_to: ""

GitLab system mail configuration. Disabled by default; set gitlab_email_enabled to true to enable, and make sure you enter valid from/reply-to values.

# SMTP Configuration
gitlab_smtp_enable: false
gitlab_smtp_address: "smtp.server"
gitlab_smtp_port: "465"
gitlab_smtp_user_name: "smtp user"
gitlab_smtp_password: "smtp password"
gitlab_smtp_domain: ""
gitlab_smtp_authentication: "login"
gitlab_smtp_enable_starttls_auto: true
gitlab_smtp_tls: false
gitlab_smtp_openssl_verify_mode: "none"
gitlab_smtp_ca_path: "/etc/ssl/certs"
gitlab_smtp_ca_file: "/etc/ssl/certs/ca-certificates.crt"

GitLab SMTP configuration; if gitlab_smtp_enable is true, the rest of the configuration will tell GitLab how to send mail using an SMTP server.

gitlab_nginx_listen_port: 8080

If you are running GitLab behind a reverse proxy, you may want to override the listen port to something else.

gitlab_nginx_listen_https: false

If you are running GitLab behind a reverse proxy, you may wish to terminate SSL at another proxy server or load balancer.

gitlab_nginx_ssl_verify_client: ""
gitlab_nginx_ssl_client_certificate: ""

If you want to enable 2-way SSL Client Authentication, set gitlab_nginx_ssl_verify_client and add a path to the client certificate in gitlab_nginx_ssl_client_certificate.

gitlab_default_theme: 2

GitLab includes a number of themes, and you can set the default for all users with this variable. See the included GitLab themes to choose a default.

gitlab_extra_settings:
  - gitlab_rails:
      - key: "trusted_proxies"
        value: "['foo', 'bar']"
      - key: "env"
        type: "plain"
        value: |
          "http_proxy" => "",
          "https_proxy" => "",
          "no_proxy" => "localhost,,"
  - unicorn:
      - key: "worker_processes"
        value: 5
      - key: "pidfile"
        value: "/opt/gitlab/var/unicorn/"

GitLab has many other settings (see the official documentation); you can add them with the special gitlab_extra_settings variable, grouping each setting under its section with the key and value keywords.



Example Playbook

- hosts: servers
  vars_files:
    - vars/main.yml
  roles:
    - { role: geerlingguy.gitlab }

Inside vars/main.yml:

gitlab_external_url: ""



Author Information

This role was created in 2014 by Jeff Geerling, author of Ansible for DevOps.

Download Details:

Author: geerlingguy
Source Code:

License: MIT license

#gitlab #ansible #ruby 

Ansible Role GitLab: installs GitLab, A Ruby-based Front-end to Git

Schemadoc Gem: Document Your Database Schemas Uses SQLite


schemadoc gem - document your database schemas (tables, columns, etc.)

Usage Command Line

The schemadoc gem includes a command line tool named - surprise, surprise - schemadoc. Try:

$ schemadoc --help

resulting in:

schemadoc 1.0.0 - Lets you document your database tables, columns, etc.

Usage: schemadoc [options]
    -o, --output PATH            Output path (default is '.')
    -v, --verbose                Show debug trace

  schemadoc                # defaults to ./schemadoc.yml
  schemadoc football.yml


The schemadoc command line tool requires a configuration file (defaults to ./schemadoc.yml if not passed along).

Database Connection Settings - database Section

Use the database section to configure your database connection settings. Example:

database:
  adapter:  sqlite3
  database: ./football.db

Schema Sections

All other sections are interpreted as database schemas. The first section is the "default" schema, that is, all tables not listed in other schemas will get auto-added to the "default" schema.
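That fallback rule can be sketched in plain Ruby. This is illustrative only, not schemadoc's actual implementation, and the schema and table names below are invented:

```ruby
# Sketch of the "default schema" rule: any table not claimed by a named
# schema is assigned to the first (default) schema. Not schemadoc's real
# code; names are made up for illustration.
def partition_tables(all_tables, schemas)
  claimed      = schemas.values.flatten
  default_name = schemas.keys.first
  result       = schemas.transform_values(&:dup)
  (all_tables - claimed).each { |table| result[default_name] << table }
  result
end

schemas = { "main" => [], "world" => ["countries", "cities"] }
tables  = ["games", "teams", "countries", "cities"]
p partition_tables(tables, schemas)
# the unclaimed "games" and "teams" land in the default "main" schema
```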

Example - schemadoc.yml

## connection spec

database:
  adapter:  sqlite3
  database: ./football.db

## main tables

football:
  name: Football

## world tables

world:
  name: World
  tables:
    - continents
    - countries
    - regions
    - cities
    - places
    - names
    - langs
    - usages

The schemadoc tool writes out two json files:

  • database.json - includes all schemas, tables, columns, etc.
  • symbols.json - includes all symbols from a to z

Examples. See the football.db - database.json, symbols.json or beer.db - database.json, symbols.json live examples.

Reports 'n' Templates

To generate web pages from your json files use a static site generator and a template pack (theme). For example, to use the schemadoc/schemadoc-theme theme, copy your json files into the _data/ folder and rebuild the site (e.g. $ jekyll build). That's it. Enjoy your database schema docu.

Examples. See the football.db or beer.db live examples.

Free Schemadoc Template Packs / Themes


Just install the gem:

$ gem install schemadoc

In addition, install Active Record and your database adapter's gem, such as mysql2:

$ gem install activerecord
$ gem install mysql2


The schemadoc scripts are dedicated to the public domain. Use it as you please with no restrictions whatsoever.

Download Details:

Author: schemadoc
Source Code:

License: CC0-1.0 license

#sqlite #ruby 

Schemadoc Gem: Document Your Database Schemas Uses SQLite
Dexter Goodwin


Semgrep: Lightweight Static analysis for Many Languages


Code scanning at ludicrous speed. 
Find bugs and reachable dependency vulnerabilities in code. 
Enforce your code standards on every commit.

Semgrep is a fast, open-source, static analysis engine for finding bugs, detecting vulnerabilities in third-party dependencies, and enforcing code standards. Get started →.

Semgrep analyzes code locally on your computer or in your build environment: code is never uploaded.

Its rules look like the code you already write; no abstract syntax trees, regex wrestling, or painful DSLs. Here's a quick rule for finding Python print() statements. Run it online in Semgrep’s Playground by clicking the image:

Semgrep rule example for finding Python print() statements

The Semgrep ecosystem includes:

  • Semgrep - The open-source command line tool at the heart of everything (this project).
  • Semgrep Supply Chain - high-signal dependency scanner that detects reachable vulnerabilities in third-party libraries and functions across the SDLC.
  • Semgrep App - Deploy, manage, and monitor Semgrep and Semgrep Supply Chain at scale with free and paid tiers. Integrates with CI providers such as GitHub, GitLab, CircleCI, and more.


  • Semgrep Playground - An online interactive tool for writing and sharing rules.
  • Semgrep Registry - 2,000+ community-driven rules covering security, correctness, and dependency vulnerabilities.

Join hundreds of thousands of other developers and security engineers already using Semgrep at companies like GitLab, Dropbox, Slack, Figma, Shopify, HashiCorp, Snowflake, and Trail of Bits.

Semgrep is developed and commercially supported by r2c, a software security company.

Language support

General availability

C# · Go · Java · JavaScript · JSX · JSON · PHP · Python · Ruby · Scala · TypeScript · TSX

Beta & experimental

See supported languages for the complete list.

Getting started

To install Semgrep use Homebrew or pip, or run without installation via Docker:

# For macOS
$ brew install semgrep

# For Ubuntu/WSL/Linux/macOS
$ python3 -m pip install semgrep

# To try Semgrep without installation run via Docker
$ docker run --rm -v "${PWD}:/src" returntocorp/semgrep semgrep

Once installed, Semgrep can run with single rules or entire rulesets. Visit Docs > Running rules to learn more or try the following:

# Check for Python == where the left and right hand sides are the same (often a bug)
$ semgrep -e '$X == $X' --lang=py path/to/src

# Fetch rules automatically by setting the `--config auto` flag.
# This will fetch rules relevant to your project from Semgrep Registry.
# Your source code is not uploaded.
$ semgrep --config auto

To run Semgrep Supply Chain, contact the Semgrep team. Visit the full documentation to learn more.

Rule examples

Visit Docs > Rule examples for use cases and ideas.

Each use case below is paired with an example Semgrep rule:

  • Ban dangerous APIs: Prevent use of exec
  • Search routes and authentication: Extract Spring routes
  • Enforce the use of secure defaults: Securely set Flask cookies
  • Tainted data flowing into sinks: ExpressJS dataflow into
  • Enforce project best-practices: Use assertEqual for == checks, Always check subprocess calls
  • Codify project-specific knowledge: Verify transactions before making them
  • Audit security hotspots: Finding XSS in Apache Airflow, Hardcoded credentials
  • Audit configuration files: Find S3 ARN uses
  • Migrate from deprecated APIs: DES is deprecated, Deprecated Flask APIs, Deprecated Bokeh APIs
  • Apply automatic fixes: Use listenAndServeTLS


Visit Docs > Extensions to learn about using Semgrep in your editor or pre-commit. When integrated into CI and configured to scan pull requests, Semgrep will only report issues introduced by that pull request; this lets you start using Semgrep without fixing or ignoring pre-existing issues!


Browse the full Semgrep documentation on the website. If you’re new to Semgrep, check out Docs > Getting started or the interactive tutorial.


Using remote configuration from the Registry (like --config=p/ci) reports pseudonymous rule metrics to

Using configs from local files (like --config=xyz.yml) does not enable metrics.

To disable Registry rule metrics, use --metrics=off.

The Semgrep privacy policy describes the principles that guide data-collection decisions and the breakdown of the data that are and are not collected when the metrics are enabled.



To upgrade, run the command below associated with how you installed Semgrep:

# Using Homebrew
$ brew upgrade semgrep

# Using pip
$ python3 -m pip install --upgrade semgrep

# Using Docker
$ docker pull returntocorp/semgrep:latest

Download Details:

Author: Returntocorp
Source Code: 
License: View license

#typescript #javascript #ruby #python #c 

Semgrep: Lightweight Static analysis for Many Languages
Gordon Matlala


Jekyll-gist: Liquid Tag for Displaying GitHub Gists in Jekyll Sites


Liquid tag for displaying GitHub Gists in Jekyll sites: {% gist %}.


Add this line to your application's Gemfile:

gem 'jekyll-gist'

And then execute:

$ bundle

Or install it yourself as:

$ gem install jekyll-gist

Then add the following to your site's _config.yml:

plugins:
  - jekyll-gist

💡 If you are using a Jekyll version less than 3.5.0, use the gems key instead of plugins.


Use the tag as follows in your Jekyll pages, posts and collections:

{% gist c08ee0f2726fd0e3909d %}

This will create the associated script tag:

<script src=""> </script>

You may optionally specify a filename after the gist_id:

{% gist c08ee0f2726fd0e3909d %}

This will produce the correct URL to show just the specified file in your post rather than the entire Gist.
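A sketch of how such a URL can be assembled. This is a hypothetical helper, not jekyll-gist's actual code, and the embed URL shape is an assumption:

```ruby
require "erb"

# Hypothetical helper, not jekyll-gist's implementation. Assumes the
# common gist embed URL shape, with the optional filename passed as a
# ?file= query parameter so only that file is rendered.
def gist_script_url(gist_id, filename = nil)
  url = "https://gist.github.com/#{gist_id}.js"
  url += "?file=#{ERB::Util.url_encode(filename)}" if filename
  url
end

puts gist_script_url("c08ee0f2726fd0e3909d")
puts gist_script_url("c08ee0f2726fd0e3909d", "example.rb")
```

The second call narrows the embed to a single (here, invented) filename; without it the whole Gist is shown.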

Pro-tip: if you provide a personal access token with Gist scope as the environment variable JEKYLL_GITHUB_TOKEN, Jekyll Gist will use the Gist API to speed up site generation.

Disabling noscript support

By default, Jekyll Gist will make an HTTP call per Gist to retrieve the raw content of the Gist. This information is used to propagate noscript tags for search engines and browsers without JavaScript support. If you'd like to disable this feature, for example, to speed up builds locally, add the following to your site's _config.yml:

jekyll_gist:
  noscript: false


  1. Fork it ( )
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

Download Details:

Author: jekyll
Source Code: 
License: MIT license

#jekyll #github #ruby 

Jekyll-gist: Liquid Tag for Displaying GitHub Gists in Jekyll Sites
Gordon Matlala


Tutorials: Codebar's Tutorials

Getting started

This is a GitHub Pages repo, so you can render the pages with Jekyll. First make sure to install the version of Ruby indicated in .ruby-version, as well as the bundler gem. Then:

  1. bundle install, which will install Jekyll
  2. bundle exec jekyll serve --watch
  3. go to http://localhost:4000

(you could also use your favourite Ruby version manager: chruby, rbenv, rvm, etc. See instructions for rvm at the end of this README)

If you are just updating or adding new tutorials, follow steps 1 to 3 only.

If you also want to make changes to the structure of the site (i.e. if you want to modify the site's JavaScript files) and run the tests, you need to install Node (follow the link for installation instructions). Then:

$ npm install
$ gulp

and go to http://localhost:4000/test/specrunner.html to run the tests. Tests should be green.

Gulp is only used for development, not in production. In your local copy of this repo, it will concatenate and minify the files inside the javascripts-dev folder, as well as watch for changes in that folder. The concatenated and minified JS file will be generated inside the javascripts folder. You can push both folders when you are finished with your changes. GitHub pages will then generate the site in production with whatever is inside the javascripts folder.

Getting in Touch

You can go to the general codebar Slack channel here or the dedicated tutorials channel here. Use it to get in touch and chat to other codebar students/coaches, or if you need help.

If you are not on Slack use this link to get an invite.


We encourage you to contribute with your suggestions and corrections. Head to our issues page and open a new issue or help on the existing ones.

General tutorial rule

All tutorials get the students to build something that they are able to show around at the end of the workshop.

All tutorials follow a structure:

  • Objectives - "In this tutorial we are going to look at..."
  • Goals - "By the end of this tutorial you will have..."
  • Then the exercises.
    • Bonus - This is not always required but if you feel there is something that could be added then please include it.
    • Further reading - Again this is not always required but if you feel there was something in the tutorials that could be covered in more depth then please include any good reading materials/videos or extra tutorials.

Repetition is good. A tutorial can contain multiple exercises that ask the students to take similar steps (e.g. for HTTP Requests one exercise introduces GET, another has GET and POST etc).

Explain and get the students to focus on one new thing at a time; presenting students with lots of new content and usage examples at once can be confusing.

Before starting to write a new tutorial please speak with someone from codebar to see whether it is of interest to students.

To add downloadable files to a new or existing tutorial:

  • Add a folder with your exercise files inside the tutorial folder. For example, for JavaScript lesson 3:
├── assets/
├── files/
│   ├── index.html
│   ├── jquery.js
│   ├── script.js
│   └── style.css
  • Add a frontmatter variable `files` to the tutorial page with a list of the files you added, including the folder name:
---
layout: page
title: Introduction to jQuery
files:
  - files/index.html
  - files/jquery.js
  - files/script.js
  - files/style.css
---
  • In the copy of the tutorial, add your link to the files, making it point to just `download`:
[Download the files](download) that you will need to work through the example

And you're done. Commit and push as usual.
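Since Jekyll (which builds this site) reads frontmatter as plain YAML, you can sanity-check what the `files` variable looks like once parsed. This is an illustration only, using Ruby's stdlib YAML parser rather than Jekyll itself; the filenames mirror the example above:

```ruby
require 'yaml'

# The frontmatter from the example, as Jekyll would see it (between the --- lines).
frontmatter = <<~YAML
  layout: page
  title: Introduction to jQuery
  files:
    - files/index.html
    - files/jquery.js
    - files/script.js
    - files/style.css
YAML

page = YAML.safe_load(frontmatter)
page['files']
# => ["files/index.html", "files/jquery.js", "files/script.js", "files/style.css"]
```

Inside the tutorial layout, that list is what gets iterated over to render the download links.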


Another way of installing the project dependencies is via RVM. Follow the quick installation guide and then run:

$ rvm install 2.2.1  # inside `codebar/tutorials` folder
$ rvm gemset use codebar-tutorial --create
$ gem install bundler
$ bundle install
$ jekyll serve  # go to

If you also want to make changes to the JavaScript of the site, you'll need to have Node installed. This can be done with a tool like NVM.

This is the source code for

Download Details:

Author: codebar
Source Code: 
License: CC BY-NC-SA 4.0

#jekyll #javascript #ruby #css #python 

Tutorials: Codebar's Tutorials
Chloe Butler


Resque Perl: Perl Port Of The Original Ruby Library

Resque for perl

Redis-backed library for creating background jobs, placing them on multiple queues, and processing them later.


First you create a Resque instance where you configure the Redis backend and then you can start sending jobs to be done by workers:

use Resque;

my $r = Resque->new( redis => '' );

$r->push( my_queue => {
    class => 'My::Task',
    args => [ 'Hello world!' ]
});
Background jobs can be any Perl module that implements a perform() function. The Resque::Job object is passed as the only argument to this function:

package My::Task;
use strict;
use 5.10.0;

sub perform {
    my $job = shift;
    say $job->args->[0];
}

1;

Finally, you run your jobs by instantiating a Resque::Worker and telling it to listen to one or more queues:

use Resque;

my $w = Resque->new( redis => '' )->worker;
$w->add_queue('my_queue');  # listen on the queue used in the first example
$w->work;                   # blocking loop: fetch jobs and run their perform()


Resque is a Redis-backed library for creating background jobs, placing them on multiple queues, and processing them later.

This library is a perl port of the original Ruby one:

My main goal in doing this port was to use the same backend, so that the system can be managed using Ruby's resque-server webapp.
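For comparison, here is a minimal sketch of the Ruby-side counterpart to the Perl `My::Task` above, following the standard Ruby Resque conventions (a plain class that declares its queue via the `@queue` instance variable and implements a class-level `perform`). The class and queue names here simply mirror the Perl example; no Redis or the resque gem is needed just to define and call the class:

```ruby
# Hypothetical Ruby-side job class matching the Perl example's queue.
# Ruby Resque workers find the queue name by reading @queue off the class.
class MyTask
  @queue = :my_queue  # same queue name the Perl example pushes to

  def self.perform(message)
    puts message
  end
end

# A Ruby worker that dequeued the Perl-enqueued job would effectively call:
MyTask.perform('Hello world!')  # prints "Hello world!"
```

Because both sides agree on the queue name and the job payload, either language can enqueue and either can work the queue.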

As extracted from the original docs, the main features of Resque are:

Resque workers can be distributed between multiple machines, support priorities, are resilient to memory leaks, tell you what they're doing, and expect failure.

Resque queues are persistent; support constant time, atomic push and pop (thanks to Redis); provide visibility into their contents; and store jobs as simple JSON hashes.
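Those "simple JSON hashes" are what make the cross-language port practical: both sides serialize a job as a hash with the class name and the argument list. A sketch of that payload shape, using only Ruby's stdlib (the Redis list key shown in the comment is the conventional Resque naming and is illustrative here; no Redis connection is made):

```ruby
require 'json'

# The job as it sits in Redis: a JSON hash naming the class and its arguments.
payload = JSON.generate('class' => 'My::Task', 'args' => ['Hello world!'])
payload  # => '{"class":"My::Task","args":["Hello world!"]}'

# By convention, Resque pushes this string onto a Redis list named after the
# queue, e.g. "resque:queue:my_queue" — which is why a Perl producer and a
# Ruby consumer (or resque-server) can interoperate on the same backend.
queue_key = 'resque:queue:my_queue'
```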

The Resque frontend tells you what workers are doing, what workers are not doing, what queues you're using, what's in those queues, provides general usage stats, and helps you track failures.

A lot more about Resque can be read on the original blog post:

Download Details:

Author: diegok
Source Code:

#perl #ruby 

Resque Perl: Perl Port Of The Original Ruby Library
Dexter Goodwin


Awesomo: Cool Open Source Projects Written in Various Languages

A.W.E.S.O.M. O is an extensive list of interesting open source projects written in various languages.

If you are interested in Open Source and are considering joining the community of Open Source developers, you might find a project here that suits you.


We have a Telegram channel where we post daily news, announcements and all the open-source goodies we find, so subscribe to us:



Want to add an interesting project?

  • Simply fork this repository.
  • Add the project to the list using similar formatting of other projects.
  • Open a new pull request.

☝️ However, keep in mind that we don't accept mammoth's shit. Only active and interesting projects with good documentation are added. Dead and abandoned projects will be removed.

Want to support us?

Just share this list with your friends on Twitter, Facebook, Medium or somewhere else.


awesomo by @lk-geimfari

To the extent possible under law, the person who associated CC0 with awesomo has waived all copyright and related or neighboring rights to awesomo.

You should have received a copy of the CC0 legalcode along with this work. If not, see

Download Details:

Author: lk-geimfari
Source Code: 
License: CC0-1.0 license

#typescript #ruby #python #rust 

Awesomo: Cool Open Source Projects Written in Various Languages