Chloe Butler


Pdf2gerb: Perl Script Converts PDF Files to Gerber format



Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.

The general workflow is as follows:

  1. Design the PCB using your favorite CAD or drawing software.
  2. Print the top and bottom copper and top silk screen layers to a PDF file.
  3. Run Pdf2Gerb on the PDFs to create Gerber and Excellon files.
  4. Use a Gerber viewer to double-check the output against the original PCB design.
  5. Make adjustments as needed.
  6. Submit the files to a PCB manufacturer.

Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).

See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.

#Pdf2Gerb config settings:
#Put this file in same folder/directory as the main script (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;

use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)

#configurable settings:
#change values here instead of in main file

use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors don't work on Windows; this must be set before the first DebugPrint() call

#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)} ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software.  \nGerber files MAY CONTAIN ERRORS.  Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG

use constant METRIC => FALSE; #set to TRUE for metric units (only affects final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC

use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)

#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1); 

#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
#round or square pads (> 0) and drills (< 0):
    .010, -.001,  #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
    .031, -.014,  #used for vias
    .041, -.020,  #smallest non-filled plated hole
    .051, -.025,
    .056, -.029,  #useful for IC pins
    .070, -.033,
    .075, -.040,  #heavier leads
#    .090, -.043,  #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
    .100, -.046,
    .115, -.052,
    .130, -.061,
    .140, -.067,
    .150, -.079,
    .175, -.088,
    .190, -.093,
    .200, -.100,
    .220, -.110,
    .160, -.125,  #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
    .090, -.040,  #want a .090 pad option, but use dummy hole size
    .065, -.040, #.065 x .065 rect pad
    .035, -.040, #.035 x .065 rect pad
    .001,  #too thin for real traces; use only for board outlines
    .006,  #minimum real trace width; mainly used for text
    .008,  #mainly used for mid-sized text, not traces
    .010,  #minimum recommended trace width for low-current signals
    .015,  #moderate low-voltage current
    .020,  #heavier trace for power, ground (even if a lighter one is adequate)
    .030;  #heavy-current traces; be careful with these ones!
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);

#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size:   parsed PDF diameter:      error:
#  .014                .016                +.002
#  .020                .02267              +.00267
#  .025                .026                +.001
#  .029                .03167              +.00267
#  .033                .036                +.003
#  .040                .04267              +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
    HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
    RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
    SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
    RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
    TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
    REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};

#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
    CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
    CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
    SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
    WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
    RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};

#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches

#line join/cap styles:
use constant
{
    CAP_NONE => 0, #butt (none); line is exact length
    CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
    CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
    CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
#number of elements in each shape type:
use constant
{
    RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
    LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
    CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
    CIRCLE_SHAPELEN => 5, #x, y, radius, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN = #shape name => element count
(
    rect => RECT_SHAPELEN,
    line => LINE_SHAPELEN,
    curve => CURVE_SHAPELEN,
    circle => CIRCLE_SHAPELEN,
);

#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions

# Set this to TRUE if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?

#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes. 
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes

#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12 (points per pixel), this gives 600 dpi resolution (72 / .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches

# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)

# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time

# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const

use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool

my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time

print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load


#use Package::Constants;
#use Exporter qw(import); #

#my $caller = "pdf2gerb::";

#sub cfg
#    my $proto = shift;
#    my $class = ref($proto) || $proto;
#    my $settings =
#    {
#        $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
#    };
#    bless($settings, $class);
#    return $settings;

#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;

#print STDERR "read cfg file\n";

#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #; NOTE: "_OK" skips short/common names

#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }

Download Details:

Author: swannman
Source Code:

License: GPL-3.0 license


Desmond Gerber


Geoip: Native NodeJS Implementation Of MaxMind's GeoIP API


A native NodeJS API for the GeoLite data from MaxMind.


MaxMind provides a set of data files for IP to Geo mapping along with opensource libraries to parse and lookup these data files. One would typically write a wrapper around their C API to get access to this data in other languages (like JavaScript).

GeoIP-lite instead attempts to be a fully native JavaScript library. A converter script converts the CSV files from MaxMind into an internal binary format (note that this is different from the binary data format provided by MaxMind). The geoip module uses this binary file to lookup IP addresses and return the country, region and city that it maps to.
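The core lookup idea behind that internal binary format can be sketched as a binary search over IP blocks sorted by their low bound. The following is an illustration only, not geoip-lite's actual file layout; the sample range reuses the example block shown later in this article:

```javascript
// Illustration of the lookup idea, not geoip-lite's actual binary format:
// keep IP blocks sorted by low bound and binary-search the integer form
// of an IPv4 address.
function lookupRange(ranges, ip) {
  let lo = 0, hi = ranges.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const r = ranges[mid];
    if (ip < r.low) hi = mid - 1;
    else if (ip > r.high) lo = mid + 1;
    else return r;                 // ip falls inside this block
  }
  return null;                     // not covered by any block
}

const sampleRanges = [{ low: 3479298048, high: 3479300095, country: 'US' }];
console.log(lookupRange(sampleRanges, 3479299000).country); // "US"
console.log(lookupRange(sampleRanges, 42));                 // null
```

The real data files store many thousands of such blocks, which is why all lookups can stay in memory and remain fast.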

Both IPv4 and IPv6 addresses are supported; however, since the GeoLite IPv6 database does not currently contain any city or region information, city, region, and postal code lookups are only supported for IPv4.


I was really aiming for a fast JavaScript native implementation for geomapping of IPs. My prime motivator was the fact that it was really hard to get libgeoip built for Mac OSX without using the library from MacPorts.

why geoip-lite

geoip-lite is a fully JavaScript implementation of the MaxMind geoip API. It is not as fully featured as bindings that use libgeoip. By reducing scope, this package is about 40% faster at doing lookups. On average, an IP to Location lookup should take 20 microseconds on a Macbook Pro. IPv4 addresses take about 6 microseconds, while IPv6 addresses take about 30 microseconds.


var geoip = require('geoip-lite');

var ip = ""; // fill in the address you want to look up (e.g. dotted-quad IPv4)
var geo = geoip.lookup(ip);
console.log(geo); // for an address in the block below, this prints:

{ range: [ 3479298048, 3479300095 ],
  country: 'US',
  region: 'TX',
  eu: '0',
  timezone: 'America/Chicago',
  city: 'San Antonio',
  ll: [ 29.4969, -98.4032 ],
  metro: 641,
  area: 1000 }


1. get the library

$ npm install geoip-lite

2. update the datafiles (optional)

Run cd node_modules/geoip-lite && npm run-script updatedb license_key=YOUR_LICENSE_KEY to update the data files (replace YOUR_LICENSE_KEY with the license key obtained from your MaxMind account).

You can create a MaxMind account to obtain a free license key.

NOTE that this requires a lot of RAM. It is known to fail on a Digital Ocean or AWS micro instance. There are no plans to change this; geoip-lite stores all data in RAM in order to be fast.


geoip-lite is completely synchronous. There are no callbacks involved. All blocking file IO is done at startup time, so all runtime calls are executed in-memory and are fast. Startup may take up to 200ms while it reads into memory and indexes data files.

Looking up an IP address

If you have an IP address in dotted quad notation, IPv6 colon notation, or a 32 bit unsigned integer (treated as an IPv4 address), pass it to the lookup method. Note that you should remove any [ and ] around an IPv6 address before passing it to this method.

var geo = geoip.lookup(ip);
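The bracket-stripping mentioned above is a one-liner. stripBrackets below is an illustrative helper, not part of the geoip-lite API:

```javascript
// Illustrative helper (not part of geoip-lite): remove the surrounding
// brackets of an IPv6 literal, e.g. "[2001:db8::1]" -> "2001:db8::1".
function stripBrackets(ip) {
  return ip.replace(/^\[|\]$/g, '');
}

console.log(stripBrackets('[2001:db8::1]')); // "2001:db8::1"
console.log(stripBrackets('8.8.8.8'));       // unchanged: "8.8.8.8"
```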

If the IP address was found, the lookup method returns an object with the following structure:

   range: [ <low bound of IP block>, <high bound of IP block> ],
   country: 'XX',                 // 2 letter ISO-3166-1 country code
   region: 'RR',                  // Up to 3 alphanumeric characters (variable length) as ISO 3166-2 code
                                  // For US states this is the 2 letter state code
                                  // For the United Kingdom this could be "ENG" (England)
                                  // Otherwise a FIPS 10-4 subcountry code
   eu: '0',                       // 1 if the country is a member state of the European Union, 0 otherwise.
   timezone: 'Country/Zone',      // Timezone from IANA Time Zone Database
   city: "City Name",             // This is the full city name
   ll: [<latitude>, <longitude>], // The latitude and longitude of the city
   metro: <metro code>,           // Metro code
   area: <accuracy_radius>        // The approximate accuracy radius (km), around the latitude and longitude

The actual values in the range array depend on whether the IP is IPv4 or IPv6 and should be considered internal to geoip-lite. To get a human-readable format, pass them to geoip.pretty().

If the IP address was not found, lookup returns null.

Pretty printing an IP address

If you have a 32 bit unsigned integer, or a number returned as part of the range array from the lookup method, the pretty method can be used to turn it into a human readable string.

    console.log("The IP is %s", geoip.pretty(ip));

This method returns a string if the input was in a format that geoip-lite can recognise, else it returns the input itself.
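For IPv4, the conversion pretty performs can be illustrated with plain bit arithmetic. This is a sketch of the idea only, not geoip-lite's implementation:

```javascript
// Sketch only: convert a 32-bit unsigned integer to dotted-quad notation,
// roughly what geoip.pretty does for an IPv4 input.
function toDottedQuad(n) {
  return [n >>> 24, (n >>> 16) & 255, (n >>> 8) & 255, n & 255].join('.');
}

// Low bound of the example range shown earlier in this article:
console.log(toDottedQuad(3479298048)); // "207.97.224.0"
```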

Built-in Updater

This package contains an update script that can pull the files from MaxMind and handle the conversion from CSV. An npm script alias has been set up to make this process easy. Please keep in mind that this requires internet access and that MaxMind rate-limits the number of downloads on their servers.

You will need, at minimum, a free MaxMind license key to run the update script.

The package stores checksums of the MaxMind data and, by default, only downloads new data if the checksums have changed.

Ways to update data

#update data if new data is available
npm run-script updatedb license_key=YOUR_LICENSE_KEY

#force update data even if checksums have not changed
npm run-script updatedb-force license_key=YOUR_LICENSE_KEY

You can also run it by doing:

node ./node_modules/geoip-lite/scripts/updatedb.js license_key=YOUR_LICENSE_KEY

Ways to reload data in your app when an update has finished

If you have a server running geoip-lite and you want to reload its geo data after an update finishes, without restarting the server, you can do so programmatically by calling the reload function after scheduled data updates.



Automatically start and stop watching for data updates

You can enable the data watcher to automatically refresh in-memory geo data when a file changes in the data directory.


This tool can be used with npm run-script updatedb to periodically update geo data on a running server.


This package includes the GeoLite database from MaxMind. This database is not the most accurate database available, however it is the best available for free. You can use the commercial GeoIP database from MaxMind with better accuracy by buying a license from MaxMind, and then using the conversion utility to convert it to a format that geoip-lite understands. You will need to use the .csv files from MaxMind for conversion.

Also note that, on occasion, the library may take up to 5 seconds to load into memory. This is largely dependent on how busy your disk is at that time. It can take as little as 200ms on a lightly loaded disk. This is a one-time cost, though, and you make it up at run time with very fast lookups.

Memory usage

A quick test of memory consumption shows that the library uses around 100 MB per process:

    var geoip = require('geoip-lite');
    console.log(process.memoryUsage());
    /* Outputs:
     * {
     *     rss: 126365696,
     *     heapTotal: 10305536,
     *     heapUsed: 5168944,
     *     external: 104347120
     * }
     */

This product includes GeoLite data created by MaxMind, available from


If your use-case requires fewer than 100 queries over the lifetime of your application, or if you need really fast start-up latency, you might want to look into fast-geoip, a package with a compatible API that is optimized for serverless environments and provides faster boot times and lower memory consumption at the expense of longer lookup times.



geoip-lite is Copyright 2011-2018 Philip Tellis and the latest version of the code is available at

Author: Geoip-lite
Source Code: 
License: View license


Abdullah Kozey


C++20 Thread Confinement and Dependency injection Framework.

What is this?

Ichor, in Greek myth the ethereal fluid that is the blood of the gods and immortals, is a C++ framework/middleware for thread confinement and dependency injection.

Ichor informally stands for "Intuitive Compile-time Hoisted Object Resources".

Although it started as a rewrite of the OSGi-based framework Celix and, to a lesser extent, CppMicroservices, Ichor has carved out its own path, as god-fluids are wont to do.

Ichor's greater aim is to bring C++20 to the embedded realm. This means that classic requirements such as the possibility to have 0 dynamic memory allocations, control over scheduling and deterministic execution times are a focus, whilst still allowing much of standard modern C++ development.

Moreover, Ichor attempts to apply Rust's concept of Fearless Concurrency through thread confinement.

Thread confinement? Fearless Concurrency?

Multithreading is hard. Plenty of methods exist that try to make it easier, ranging from the actor model and static analysis (a la Rust) to software transactional memory and traditional manual lock-wrangling.

Thread confinement is one such approach. Instead of having to protect resources, Ichor attempts to make it well-defined on which thread an instance of a C++ class runs, and pushes you to access and modify memory only from that thread. This removes the need to think about atomics and mutexes, unless you use threads not managed by Ichor or otherwise try to circumvent it, in which case you're on your own.
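The confinement idea itself is language-agnostic. As a rough sketch (in JavaScript rather than Ichor's C++ API, with invented names), mutable state is owned by a single event loop and all other code interacts with it only by posting events:

```javascript
// Sketch of thread confinement, not Ichor code: `state` is touched only by
// the owner's drain loop; producers merely enqueue events (functions).
function makeConfined(initial) {
  let state = initial;             // confined: only drain() ever reads/writes this
  const queue = [];
  return {
    post(fn) { queue.push(fn); },  // producers enqueue work...
    drain() {                      // ...the owner applies it serially, in order
      while (queue.length) state = queue.shift()(state);
      return state;
    },
  };
}

const counter = makeConfined(0);
counter.post(function (n) { return n + 1; });
counter.post(function (n) { return n + 1; });
console.log(counter.drain()); // 2
```

Because every mutation is serialized through the owner's queue, no locks are needed; this mirrors, in miniature, how an Ichor event loop owns its services' state.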


The minimal example requires a main function, which initiates at least one event loop, a framework logger, and one service, and quits the program gracefully on ctrl+c.

The realtime example shows a trivial program running with realtime priorities and shows some usage of Ichor priorities.

More examples can be found in the examples directory.

Supported OSes

  • Linux

Currently Unsupported

  • Windows: untested; it may not compile yet, but support should be relatively easy to get up and running
  • Baremetal: might change if someone puts in the effort to make Ichor work with freestanding implementations of C++20
  • Far-future plans for any RTOS that supports C++20, such as VxWorks (Wind River)


Ubuntu 20.04:

sudo add-apt-repository ppa:ubuntu-toolchain-r/ppa
sudo apt update
sudo apt install g++-11 build-essential cmake

Optional Features

Some features are behind feature flags and have their own dependencies.

If using etcd:

sudo apt install libgrpc++-dev libprotobuf-dev

If using Boost.BEAST (Boost 1.70 or newer recommended):

sudo apt install libboost1.71-all-dev libssl-dev


Untested; the latest MSVC should probably work.


Documentation can be found in the docs directory.

Current design focuses

  • Less magic configuration
    • code as configuration, as much as possible bundled in one place
  • As much type-safety as possible, prefer compile errors over run-time errors.
  • Well-defined and managed multi-threading to prevent data races and similar issues
    • Use of an event loop
    • Where multi-threading is desired, provide easy to use abstractions to prevent issues
  • Performance-oriented design in all-parts of the framework / making it easy to get high performance and low latency
  • Usage of memory allocators, enabling 0 heap allocation C++ usage.
  • Fully utilise OOP, RAII and C++20 Concepts to steer users to using the framework correctly
  • Implement business logic in the least amount of code possible
  • Hopefully this culminates in less error-prone code and better time to market

Supported features

The framework provides several core features and optional services behind cmake feature flags:

  • Coroutine-based event loop
  • Event-based message passing
  • Dependency Injection
  • Memory allocators for 0 heap allocations
  • Service lifecycle management (sort of like OSGi-lite services)
  • Data-race-free communication between event loops
  • Http server/client

Optional services:

  • Websocket service through Boost.BEAST
  • HTTP client and server services through Boost.BEAST
  • Spdlog logging service
  • TCP communication service
  • RapidJson serialization services
  • Timer service
  • Partial etcd service


  • EDF scheduling / WCET measurements
  • CMake stuff to include ichor library from external project
  • expand etcd support, currently only simply put/get supported
  • Pubsub interfaces
    • Kafka? Pulsar? Ecal?
  • Shell Commands
  • Tracing interface
    • Opentracing? Jaeger?
  • Docker integration/compilation
  • ...

Preliminary benchmark results

These benchmarks are mainly used to identify bottlenecks, not to showcase the performance of the framework. Proper throughput and latency benchmarks are TBD.

Setup: AMD 3900X, 3600MHz@CL17 RAM, ubuntu 20.04

  • 1 thread inserting ~5 million events and then processing them in ~1,353 ms and ~647 MB memory usage
  • 8 threads inserting ~5 million events and then processing them in ~1,938 ms and ~5,137 MB memory usage
  • 1 thread creating 10,000 services with dependencies in ~3,060 ms and ~42 MB memory usage
  • 8 threads creating 10,000 services with dependencies in ~10,100 ms and ~320 MB memory usage
  • 1 thread starting/stopping 1 service 10,000 times in ~1,470 ms and ~4 MB memory usage
  • 8 threads starting/stopping 1 service 10,000 times in ~3,060 ms and ~6 MB memory usage
  • 1 thread serializing & deserializing 1,000,000 JSON messages in ~380 ms and ~4 MB memory usage
  • 8 threads serializing & deserializing 1,000,000 JSON messages in ~400 ms and ~5 MB memory usage

Realtime example on a vanilla linux:

root# echo 950000 > /proc/sys/kernel/sched_rt_runtime_us #force kernel to fail our deadlines
root# ./ichor_realtime_example 
duration of run 4,076 is 51,298 µs which exceeded maximum of 2,000 µs
duration of run 6,750 is 50,296 µs which exceeded maximum of 2,000 µs
duration of run 9,396 is 50,293 µs which exceeded maximum of 2,000 µs
duration of run 12,055 is 50,296 µs which exceeded maximum of 2,000 µs
duration of run 14,719 is 50,295 µs which exceeded maximum of 2,000 µs
duration of run 17,374 is 50,297 µs which exceeded maximum of 2,000 µs
duration min/max/avg: 349/51,298/371 µs
root# echo 999999 > /proc/sys/kernel/sched_rt_runtime_us
root# ./ichor_realtime_example 
duration min/max/avg: 179/838/204 µs
root# echo -1 > /proc/sys/kernel/sched_rt_runtime_us
root# ./ichor_realtime_example 
duration min/max/avg: 274/368/276 µs

These benchmarks currently lead to the characteristics:

  • Creating services with dependencies has overhead that is likely O(N²)
  • Starting and stopping services has overhead that is likely O(N)
  • Event handling overhead is amortized O(1)
  • Adding more threads does not scale 100% linearly in all cases (pure event creation/handling seems to, otherwise not really)
  • Latency spikes while scheduling in user-space are on the order of hundreds of microseconds, lower if a proper realtime tuning guide is followed (except when the queue goes empty and is forced to sleep)

Help with improving memory usage and the O(N²) behaviour would be appreciated.


Feel free to make issues/pull requests and I'm sometimes online on Discord:

Business inquiries can be sent to michael AT


I want to have a non-voluntary pre-emption scheduler

Currently Ichor uses a mutex when inserting/extracting events from its queue. Because non-voluntary user-space scheduling requires lock-free data-structures, this is not possible yet.

Is it possible to have a completely stack-based allocation while using C++?

Yes, see the realtime example. This example terminates the program whenever there is a dynamic memory allocation, aside from some allowed exceptions like the std::locale allocation.

I'd like to use clang

To my knowledge, clang doesn't have certain C++20 features used in Ichor yet. As soon as clang implements those, I'll work on adding support.

Windows? OS X? VxWorks Wind River? Baremetal?

I don't have a machine with Windows/OS X to program for (and I also don't know if there is much demand for it), so I haven't started on it.

What is necessary to implement before using Ichor on these platforms:

  • Ichor STL functionality, namely the RealtimeMutex and ConditionVariable.
  • Compiler support for C++20 may not be adequate yet.

The same goes for Wind River. Freestanding implementations might be necessary for Baremetal support, but that would stray rather far from my expertise.

Why re-implement parts of the STL?

To add support for memory allocators and achieve 0 dynamic memory allocation support. Particularly std::any and the (read/write) mutexes.

But also because the real-time extensions to mutexes (PTHREAD_MUTEX_ADAPTIVE_NP/PTHREAD_PRIO_INHERIT/PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP) are either not standard extensions or not exposed by the standard library equivalents.

Author: volt-software
Source Code:
License: MIT License



What is Force DAO (FORCE) | What is Force DAO token | What is FORCE token

In this article, we’ll discuss information about the Force DAO project and FORCE token

Force is a protocol and DAO dedicated to producing superior returns by adhering to community-proposed strategies, and rewarding the strategists with powerful incentives.

We leverage high returns from yield-bearing DeFi protocols to provide investors with permissionless, secure, and innovative finance that can’t be stopped. Let the people invest!

Our mission is to identify and exploit alpha across multiple chains and scalability layers, starting with L1 Ethereum.

Our vision is to lead the design and implementation of high performing, institutional-grade DeFi products, from investment strategies to analytics and infrastructure tech.

What characterizes us amongst our peers, is the belief that the future of finance requires:

  • Integrity: Non-anonymous, accountable and transparent.
  • Excellence: Rigorous security, and user-centric design.
  • Empiricism: Combining fundamental and quantitative methods.
  • Community: 100% driven, governed and owned by its members.

Our mandate is to fill this gap.

The Rise of DeFi and the Surge of Retail Investors

On February 5th, 2020, global TVL in DeFi was $997M USD.

Financially active millennials are tech-first and vigilant of traditional finance’s drawbacks. Many believe coming out of 2008, Wall Street took enormous amounts of risk leaving retail investors holding the bag. In the aftermath of the recession, the world struggled to understand why large capital firms were bailed out while families suffered in distress.

Technology has empowered a new generation to become financially autonomous. When retail-grade financial apps such as Robinhood are coupled with the power and reach of the internet, the result is agency.

The Wall Street Bets phenomenon marks the start of a pendulum swing, a pushback against the traditional finance establishment.

This is also the phenomenon behind the rise of DeFi. The search for permissionless, censorship-resistant money gave birth to the need for decentralized financial instruments. In one year, DeFi TVL grew from $997M to $30.1B.

A bet on crypto is a bet on free, sovereign humanity. This is our zeitgeist.

We’re remaking the financial system for ourselves, and inviting you to join us in the fight.


Core Vaults

  • Core Vaults are automated yield aggregators tracking the highest performing pools and farms for BTC, ETH and stablecoins. This set of vaults are maintained by the DAO’s operations team.
  • We seek to initially test our DAO structure with established strategies to secure our foundation. These initial strategies are designed to be low-risk and passive.

Edge Vaults

  • Edge Vaults are next-gen automated yield strategies proposed by community members.
  • Edge Vaults are the production versions of Force Prize winners; the Force Prize is a global competition for DeFi strategies. Prize winners are selected and incubated through the DAO (funding, development, marketing).

Profit Sharing Vaults

  • Profit Sharing Vaults are staking contracts that distribute profits generated from across all vaults and products in the ecosystem, pro rata.
  • To align incentives between strategists, investors and DAO members, profits generated from any Edge Vault are distributed 80% to investors, 17% to profit sharing and 3% to the individual(s) who designed/deployed the strategy. Core Vault profits are distributed 80% to investors and 20% to profit sharing.
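
The splits above can be sketched as a simple allocation function. This is illustrative only; the function and field names are ours, not taken from the protocol's contracts:

```python
def split_profit(profit, is_edge_vault):
    """Split vault profit per the documented ratios.

    Edge Vaults: 80% investors, 17% profit sharing, 3% strategist.
    Core Vaults: 80% investors, 20% profit sharing, no strategist cut.
    """
    if is_edge_vault:
        shares = {"investors": 0.80, "profit_sharing": 0.17, "strategist": 0.03}
    else:
        shares = {"investors": 0.80, "profit_sharing": 0.20, "strategist": 0.00}
    return {k: profit * v for k, v in shares.items()}
```

For example, an Edge Vault that generates 1,000 FORCE of profit would pay 800 to investors, 170 to profit sharing, and 30 to the strategist.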

Force Prize

Force Prize is a large-scale global incentive competition to crowdsource DeFi’s highest performing investment strategies.

We believe solutions can come from anyone, anywhere. Scientists, engineers, entrepreneurs, academics and other innovators with new ideas from all over the world are invited to form teams and compete to win the prize.

We have partnered with Gitcoin and top DeFi protocols to set up prizes in pre-defined themes.


  • Token Rewards: All prize winners earn a baseline number of Force tokens. In the case of themes sponsored by other DeFi protocols, winners are rewarded an additional number of governance tokens.
  • Performance Fees: Earn 3% of all profits generated by your team’s vault or product.


Disclaimer: The Force is strictly a token to govern the DAO and drive the protocol’s direction. It has no monetary value.


As a DAO, our project will be always led by its community. Token holders drive the strategy and direction of the protocol.

The protocol’s governance token is FORCE. The token is designed to be the basis for our token-governed organization, aligning incentives while keeping it 100% decentralized.

  • Symbol: FORCE
  • Supply: 100,000,000
  • Type: ERC-20 (Aragon MiniMe)
  • Address: 0x6807d7f7df53b7739f6438eabd40ab8c262c0aa8


  • 25% — Airdrop: Large-scale Force distribution to reward and invite communities that share our vision and ethos.
  • 35% — Emissions: Incentives for protocol capital providers and to drive governance objectives.
  • 25% — Treasury: Reserves to finance strategies, products, tools, and infrastructure.
  • 10% — Genesis Team: Founder rewards vested linearly over 18 months to ensure the long-term success of the ecosystem.
  • 5% — Early Contributor Program: Milestone-based incentives for early members who help kickstart and operate the DAO.
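
The allocation breakdown above can be sanity-checked with a short sketch (illustrative only; the dictionary keys are ours):

```python
TOTAL_SUPPLY = 100_000_000  # fixed FORCE supply

# Allocation percentages from the breakdown above.
ALLOCATIONS = {
    "airdrop": 0.25,
    "emissions": 0.35,
    "treasury": 0.25,
    "genesis_team": 0.10,
    "early_contributors": 0.05,
}

def allocation_amounts():
    """Return the token amount for each bucket, verifying the split sums to 100%."""
    assert abs(sum(ALLOCATIONS.values()) - 1.0) < 1e-9
    return {k: round(TOTAL_SUPPLY * v) for k, v in ALLOCATIONS.items()}
```

Note that the 25% airdrop bucket matches the 25 million FORCE figure quoted in the airdrop details below.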

Force DAO Airdrop Details

Welcome to the final stretch Jedis 🧙!

The Force DAO team is excited to announce the FORCE token airdrop, which is scheduled to start this coming Saturday, April 3rd at 12pm EST.

Here’s what you need to know:

  • Start Time: Saturday, April 3rd at 12pm EST
  • End Time: Saturday, April 24th at 12pm EST

There will be a 3-day deprecation period starting on April 24th. This means the amount of rewards claimable per user will decrease linearly on every block until the end of the 3-day deprecation period (Tuesday, April 27th). Claim before April 24th at 12pm EST and there is no deprecation on your claim.

Unleashing FORCE Rewards

In preparation for the end of the Public Beta and the full-platform launch, Force DAO is turning on all Staking Pools starting tomorrow at 12pm EST.

A full economic breakdown will be provided separately on our docs. Here’s an overview of our short-term emissions schedule:

  • Beginning Tuesday, March 30th, all funds deposited and staked in a Reward Boost contract will earn a moderate APY, between the 10%-50% range.
  • Upon airdrop start, on Saturday April 3rd, rewards will increase substantially, into the 100%+ APY range.

The former is designed to incentivize platform adoption, while the latter is designed to increase TVL, as new strategies and products are rolled out throughout the next 2 weeks.

We’re excited to unveil the novel financial products we’ve been working on in the cross-chain, indices and lending space.

🪂 Airdrop Distribution

When our team first started this project, there was a shared belief that our tokens shouldn’t be distributed via a public sale or private sale with VCs.

We wanted to attract DeFi’s brightest minds, inviting existing communities with common ethos and vision.

This is still the case, and the reason we chose an airdrop as our primary token distribution method.

With time, we’ve refined this thinking, and we’re excited to share how FORCE will be distributed over the next couple weeks.


This is the first multi-chain airdrop, and it will have two stages. The first stage (and the largest distribution) is for the Ethereum community. The second stage is for EVM-compatible blockchains.

As mentioned in our introductory post, 25 million FORCE tokens of our fixed 100 million supply will be distributed over the next month.

Ethereum Projects

  • Amount: 17,750,000 FORCE
  • Dates: Claimable throughout April 3rd — April 24th
  • Recipients: Participants in the Aave, Alchemix, Badger, Balancer, Curve, Maker DAO, Synthetix, Sushi, Vesper, and Yearn communities.
  • Rationale: Shared ethos and vision. The airdrop selection process will be made available soon after this post.

Force Public Beta

  • Amount: 4,500,000 FORCE
  • Dates: Claimable throughout April 3rd — April 24th
  • Recipients: Public Beta and Light Speed participants.
  • Rationale: Reward early supporters and testers helping bootstrap Force.

Harvest Finance

  • Amount: 250,000 FORCE
  • Recipient: Harvest Finance Multisig
  • Rationale: A gift for allowing us to use their smart contract infrastructure as the basis for our “Yield Aggregation” product. Thank you Harvest.


EVM Compatible Blockchains

  • Amount: 2,500,000 FORCE
  • Target Dates: April 15th — April 24th
  • Recipients: Participants in the BSC, Polygon, xDAI, and Fantom DeFi ecosystems.
  • Rationale: The future of DeFi is chain agnostic, and we want to reward users who’ve used other EVM blockchains for open finance.

For transparency, our team will make the SQL queries and blockchain data feeds used to construct our airdrop list available soon after this announcement is live.

📅 Claiming Period and Deprecation

Eligible recipients can claim their full airdrop funds for up to three weeks. At the end of the three-week claim period, any unclaimed FORCE for each address progressively deprecates over 3 days.

Claim Period

  • Start Time: Saturday, April 3rd at 12pm EST
  • End Time: Saturday, April 24th at 12pm EST

Deprecation Period

  • Start Time: Saturday, April 24th at 12pm EST
  • End Time: Tuesday, April 27th at 12pm EST

Throughout the three-day deprecation period, the amount of claimable rewards is reduced linearly on each block.

Airdrop funds can still be claimed at the deprecated amounts until the end of the deprecation period, at which point no FORCE will remain in unclaimed airdrop accounts.

The reclaimed FORCE airdrop funds will go back to the Force DAO treasury.
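
The linear per-block deprecation described above can be sketched as follows. This is an illustrative model only; the block count assumes ~13-second Ethereum blocks, and the actual contract logic may differ:

```python
# Roughly 3 days of ~13-second Ethereum blocks (an assumption, not
# a figure from Force DAO's contracts).
DEPRECATION_BLOCKS = 3 * 24 * 60 * 60 // 13  # about 19,938 blocks

def claimable_fraction(blocks_elapsed, total_blocks=DEPRECATION_BLOCKS):
    """Fraction of an unclaimed airdrop still claimable.

    Decreases linearly from 1.0 at the start of the deprecation
    period to 0.0 at its end.
    """
    if blocks_elapsed <= 0:
        return 1.0
    if blocks_elapsed >= total_blocks:
        return 0.0
    return 1.0 - blocks_elapsed / total_blocks
```

Halfway through the deprecation period, for example, only 50% of an address's unclaimed FORCE would remain claimable.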

🗳️ Claiming The $FORCE Airdrop

Once the airdrop is live, eligible recipients can check the page in the top-right corner to see their available xFORCE balance.

xFORCE is the “interest-bearing” version of FORCE. It represents your share in the FORCE profit-sharing pool. By airdropping xFORCE, all recipients earn the native FORCE Vault APY from the moment the airdrop is live.

Our team has the community’s best interest at heart. By airdropping xFORCE, users earn interest derived from the performance of our platform’s strategies, which helps offset the costs associated with claiming and withdrawing FORCE.
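
Interest-bearing tokens like xFORCE typically use pool-share accounting: your shares stay constant while the FORCE each share redeems for grows as profits flow in. A generic sketch of that pattern, which we assume here (Force DAO's actual xFORCE contract may differ):

```python
class SharePool:
    """Generic share-based staking pool (the common xToken pattern)."""

    def __init__(self):
        self.total_shares = 0.0  # xFORCE outstanding
        self.balance = 0.0       # FORCE held by the pool

    def deposit(self, amount):
        # First depositor gets shares 1:1; later deposits are priced
        # by the current shares-to-balance ratio.
        if self.total_shares == 0:
            shares = amount
        else:
            shares = amount * self.total_shares / self.balance
        self.total_shares += shares
        self.balance += amount
        return shares

    def add_profit(self, amount):
        # Profit raises the FORCE each xFORCE share redeems for.
        self.balance += amount

    def redeem(self, shares):
        amount = shares * self.balance / self.total_shares
        self.total_shares -= shares
        self.balance -= amount
        return amount
```

Under this model, an airdropped xFORCE balance starts earning the pool's yield immediately, with no staking transaction required.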

To redeem your xFORCE tokens, click the CLAIM button in the popup box indicated below and submit a transaction. This transaction requires an ETH wallet and ETH for gas.

We are thrilled to take this next step with our community. Join in and earn rewards starting tomorrow, and watch them increase to 100%+ APY in the coming weeks.

We’re seeding Force DAO with your participation, and together we’ll build next-gen multi-chain investment strategies!

How and Where to Buy Force DAO (FORCE)?

Force DAO is now live on the Ethereum mainnet. The token address for FORCE is 0x6807d7f7df53b7739f6438eabd40ab8c262c0aa8. Be cautious not to purchase any other token with a different smart contract address (tokens can be easily faked). We strongly advise you to be vigilant and stay safe throughout the launch. Don’t let the excitement get the best of you.

Just be sure you have enough ETH in your wallet to cover the transaction fees.

You will first have to buy one of the major cryptocurrencies, usually Bitcoin (BTC), Ethereum (ETH), Tether (USDT), or Binance Coin (BNB)…

We will use Binance Exchange here as it is one of the largest crypto exchanges that accept fiat deposits.

Once you have finished the KYC process, you will be asked to add a payment method. Here you can either provide a credit/debit card or use a bank transfer, and buy one of the major cryptocurrencies, usually Bitcoin (BTC), Ethereum (ETH), Tether (USDT), or Binance Coin (BNB)…


Step-by-Step Guide: What is Binance | How to Create an Account on Binance (Updated 2021)

Next step

You need a wallet to connect to the SushiSwap decentralized exchange; we use the MetaMask wallet.

If you don’t have a MetaMask wallet, read this article and follow the steps:
What is MetaMask Wallet | How to Create a Wallet and Use It

Connect your MetaMask wallet to the SushiSwap decentralized exchange and buy the FORCE token

Contract: 0x6807d7f7df53b7739f6438eabd40ab8c262c0aa8

The top exchanges for trading the FORCE token are currently SushiSwap and 0x Protocol.

Apart from the exchange(s) above, there are a few popular crypto exchanges with decent daily trading volumes and a huge user base. This ensures you will be able to sell your coins at any time, and the fees will usually be lower. It is suggested that you also register on these exchanges, since once FORCE gets listed there it will attract significant trading volume from their users, which means greater trading opportunities.


Find more information FORCE

Website | Explorer | Explorer 2 | Social Channel | Social Channel 2 | Social Channel 3 | Message Board | Coinmarketcap

🔺DISCLAIMER: Trading cryptocurrency is VERY risky. Make sure that you understand these risks if you are a beginner. The information in this post is my OPINION and not financial advice; it is intended FOR GENERAL INFORMATION PURPOSES ONLY. You are responsible for what you do with your funds.

If you are a beginner, learn about Cryptocurrency in this article ☞ What You Should Know Before Investing in Cryptocurrency - For Beginner

Thanks for visiting and reading this article! Please share it if you liked it!

#bitcoin #cryptocurrency #force #forcedao

What is Force DAO (FORCE) | What is Force DAO token | What is FORCE token