Lawrence Lesch


Hotscript: Type-level Madness

Higher-Order TypeScript (HOTScript)

A lodash-like library for types, with support for type-level lambda functions.

🚧 work in progress 🚧

// prettier-ignore
type res1 = Pipe<
  //  ^? 95
  [1, 2, 3, 4, 3, 4],
  [
    Tuples.Map<Numbers.Add<3>>,
    Tuples.Join<".">,
    Strings.Split<".">,
    Tuples.Map<Strings.ToNumber>,
    Tuples.Map<Numbers.Add<10>>,
    Tuples.Sum
  ]
>;

// This is a type-level "lambda"!
interface Duplicate extends Fn {
  return: [this["arg0"], this["arg0"]];
}

type result1 = Call<Tuples.Map<Duplicate>, [1, 2, 3, 4]>;
//     ^? [[1, 1], [2, 2], [3, 3], [4, 4]]

type result2 = Call<Tuples.FlatMap<Duplicate>, [1, 2, 3, 4]>;
//     ^? [1, 1, 2, 2, 3, 3, 4, 4]

// Let's compose some functions to transform an object type:
type ToAPIPayload<T> = Pipe<
  T,
  [
    Objects.OmitBy<Booleans.Equals<symbol>>,
    Objects.Assign<{ metadata: { newUser: true } }>,
    Objects.SnakeCaseDeep,
    Objects.Assign<{ id: string }>
  ]
>;

type T = ToAPIPayload<{
  id: symbol;
  firstName: string;
  lastName: string;
}>;

// Returns:
type T = {
  id: string;
  metadata: { new_user: true };
  first_name: string;
  last_name: string;
};


  •  Core
    •  Pipe
    •  PipeRight
    •  Call
    •  Apply
    •  PartialApply
    •  Compose
    •  ComposeLeft
  •  Function
    •  ReturnType
    •  Parameters
    •  Parameter n
  •  Tuples
    •  Create
    •  Partition
    •  IsEmpty
    •  Zip
    •  ZipWith
    •  Sort
    •  Head
    •  At
    •  Tail
    •  Last
    •  FlatMap
    •  Find
    •  Sum
    •  Drop n
    •  Take n
    •  TakeWhile
    •  Join separator
    •  Map
    •  Filter
    •  Reduce
    •  ReduceRight
    •  Every
    •  Some
    •  ToUnion
  •  Object
    •  Readonly
    •  Mutable
    •  Required
    •  Partial
    •  ReadonlyDeep
    •  MutableDeep
    •  RequiredDeep
    •  PartialDeep
    •  Update
    •  Record
    •  Keys
    •  Values
    •  AllPaths
    •  Create
    •  Get
    •  FromEntries
    •  Entries
    •  MapValues
    •  MapKeys
    •  GroupBy
    •  Assign
    •  Pick
    •  PickBy
    •  Omit
    •  OmitBy
    •  CamelCase
    •  CamelCaseDeep
    •  SnakeCase
    •  SnakeCaseDeep
    •  KebabCase
    •  KebabCaseDeep
  •  Union
    •  Map
    •  Extract
    •  ExtractBy
    •  Exclude
    •  ExcludeBy
  •  String
    •  Length
    •  TrimLeft
    •  TrimRight
    •  Trim
    •  Join
    •  Replace
    •  Slice
    •  Split
    •  Repeat
    •  StartsWith
    •  EndsWith
    •  ToTuple
    •  ToNumber
    •  ToString
    •  Prepend
    •  Append
    •  Uppercase
    •  Lowercase
    •  Capitalize
    •  Uncapitalize
    •  SnakeCase
    •  CamelCase
    •  KebabCase
    •  Compare
    •  Equal
    •  NotEqual
    •  LessThan
    •  LessThanOrEqual
    •  GreaterThan
    •  GreaterThanOrEqual
  •  Number
    •  Add
    •  Multiply
    •  Subtract
    •  Negate
    •  Power
    •  Div
    •  Mod
    •  Abs
    •  Compare
    •  GreaterThan
    •  GreaterThanOrEqual
    •  LessThan
    •  LessThanOrEqual
  •  Boolean
    •  And
    •  Or
    •  XOr
    •  Not
    •  Extends
    •  Equals
    •  DoesNotExtend

Download Details:

Author: Gvergnaud
Source Code: 


Sheldon Grant


Service Level Agreement Benefits

Introduction to Service Level Agreements

The demand for accurate, real-time data has never been greater for today's data engineering teams, yet data downtime has always been a reality. So, how do we break the cycle and obtain reliable data?

Data teams in the early 2020s, like their software engineering counterparts 20 years ago, experienced a severe conundrum: reliability. Businesses are ingesting more operational and third-party data than ever before. Employees from across the organization, including those on non-data teams, interact with data at all stages of its lifecycle. Simultaneously, data sources, pipelines, and workflows are becoming more complex.

While software engineers have resolved application downtime with specialized fields (such as DevOps and Site Reliability Engineering), frameworks (such as Service Level Agreements, Indicators, and Objectives), and a plethora of acronyms (SRE, SLAs, SLIs, and SLOs, respectively), data teams haven't yet given data downtime the due importance. Now it is up to data teams to do the same: prioritize, standardize, and evaluate data reliability. I believe that data quality or reliability engineering will become its specialization over the next decade, in charge of this crucial business component. In the meantime, let's look at what data reliability SLAs are, why they're essential, as well as how to develop them.

What is a Service Level Agreement?

"Slack's SLA guarantees 99.99 percent uptime. If breached, affected customers receive a service credit."

A Service Level Agreement (SLA) is a method that many businesses use to define and measure the level of service a given vendor, product, or internal team will deliver, as well as the potential remedies if they fail to do so.

As an example, for customers on Plus plans and above, Slack's customer-facing SLA guarantees 99.99 percent uptime every fiscal quarter with no more than 10 hours of scheduled downtime. If they come up short, impacted customers will be given service credits for future use on their accounts.
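To make that guarantee concrete, here is a quick back-of-the-envelope calculation (sketched in Go; the 365.25-day year and the quarter length are assumptions, and scheduled downtime is excluded) of how much unplanned downtime a 99.99 percent quarterly uptime target actually allows:

```go
package main

import "fmt"

func main() {
	// Hours in an average fiscal quarter (365.25 days / 4).
	quarterHours := 24.0 * 365.25 / 4

	// A 99.99% uptime guarantee leaves a 0.01% unplanned-downtime budget.
	budgetHours := quarterHours * (1 - 0.9999)

	fmt.Printf("unplanned downtime budget: %.1f minutes per quarter\n", budgetHours*60)
}
```

That works out to roughly thirteen minutes of unplanned downtime per quarter, which is why breaches are compensated with credits rather than promised never to happen.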


Customers use service level agreements (SLAs) to guarantee that they receive what they paid for from a vendor: a robust, dependable product. Many software teams develop SLAs for internal projects or users instead of end-users.

Importance of data reliability Service Level Agreements for Data Engineers

As an example, consider internal software engineering SLAs. Why bother formalizing SLAs if you don't have a customer urging you to commit to certain thresholds in an agreement? Why not simply rely on everyone to do their best and aim for as close to 100 percent uptime as possible? Would that not be adding extraneous burdensome regulations?

No, not at all. The exercise of defining, complying with, and evaluating the critical characteristics of reliable software can be immensely beneficial while also setting clear expectations for internal stakeholders. SLAs help engineering, product, and business teams think about the bigger picture of their applications and prioritize incoming requests. SLAs provide confidence that different software engineering teams and their stakeholders mean the same thing: caring about the same metrics and sharing a pledge to thoroughly documented requirements.

Setting uptime requirements below 100 percent leaves room for error, and that matters because perfect uptime is simply not feasible: even with the best practices and techniques in place, systems will fail from time to time. With good SLAs, however, engineers will know precisely when and how to intervene when something goes wrong.

Likewise, data teams and their data consumers must categorize, measure, and track the reliability of their data throughout its lifecycle. If these metrics are not strictly established, consumers may make inaccurate assumptions or rely on anecdotal information about the trustworthiness of your data platform. Defining data reliability SLAs helps build trust and strengthen bonds between your data, your data team, and downstream consumers, whether they are your customers or cross-functional teams within your organization. In other words, data SLAs help your organization become more "data-driven" in its approach to data.

SLAs organize and streamline communication, ensuring that your team and stakeholders share a common language and refer to the same metrics. And, because defining SLAs helps your data team quickly identify the business's priority areas, they'll be able to prioritize and respond more rapidly when incidents arise.

What is DQ SLA (Data Quality Service Level Agreement)?

A DQ SLA, like a more traditional SLA, governs the roles and responsibilities of a hardware or software vendor in accordance with regulations and levels of acceptability, as well as realistic expectations for response and restoration when data errors and flaws are identified. DQ SLAs can be defined for any circumstance where a data provider transfers data to a data consumer.

More specifically, a data recipient would specify expectations regarding measurable aspects related to one or more dimensions of data quality (such as completeness, accuracy, consistency, timeliness, and so on) within any business process. The DQ SLA would then include an expected data quality level and even a list of processes to be followed if those expectations are not fulfilled, such as:

  1. The location in the business process flow that the SLA covers.
  2. The critical data elements covered by the SLA.
  3. The data quality dimensions associated with each data element.
  4. Quality expectations for each data element along each of the identified dimensions.
  5. Specified data quality rules that formalize those expectations.
  6. Business consequences of noncompliance with the defined data quality rules.
  7. Methods for determining non-compliance with those expectations.
  8. Acceptance criteria for each measurement.
  9. How and where concerns should be classified, prioritized, and documented.
  10. The individual(s) to be notified if the acceptability thresholds are not met.
  11. Expected resolution or restoration times for the issues.
  12. A method for keeping track of the status of the resolution process.
  13. An escalation tactic and hierarchy for when resolution times are not met.


The DQ SLA is distinctive because it recognizes that data quality issues and resolution are almost always linked to business operations. To benefit from the processes suggested by the definition of a DQ SLA (particularly items 5, 7, 9, and 12), you need systems facilitating those operations, namely:

  1. Management of data quality rules
  2. Monitoring, measurement, and notification
  3. Categorization, prioritization, and tracking of data quality incidents

These concepts are critical in establishing the DQ SLA's goal: data quality control, which is based on the definition of rules based on agreed-upon data quality dimensions.

Suppose it is determined that the information does not meet the defined expectations. In that case, the remediation process can include a variety of tasks, such as writing the non-conforming records to an outlier file, emailing a system administrator or data steward to resolve the issue, running an immediate corrective data quality action, or any combination of these.

How to build Service Level Agreements for Data Platforms?

Creating and adhering to data reliability SLAs is a cohesive and precise exercise.

First, let's go over some terminology. According to Google, service level agreements (SLAs) require clearly defined service level indicators (SLIs), quantitative measures of service quality, and agreed-upon service level objectives (SLOs), the target values or ranges of values that each indicator must meet. Many engineering teams, for example, use availability as an indicator of site reliability and set an objective of maintaining availability of at least 99 percent.

Creating reliability SLAs for data teams typically involves three key steps: defining, measuring, and tracking.

Using SLAs to define Data Reliability

The first phase is to agree on and clearly articulate what reliable data means to your company.

Setting a baseline is a good place to start. Begin by taking stock of your data, how it's being used, and by whom. Examine your data's historical performance to establish a baseline metric for reliability.

You should also solicit feedback from your data consumers on what "reliability" means to them. Even with a thorough knowledge of data lineage, data engineers are frequently isolated from their colleagues' day-to-day workflows and use cases. When developing reliability agreements with internal teams, it is crucial to know how consumers interact with data, what matters most to them, and which potential complications require the most stringent, critical intervention.

Furthermore, you'll want to ensure that all relevant stakeholders (all data leaders or business consumers with a stake in reliability) have assessed and agreed on the descriptions of reliability you're constructing.

You'll be able to set clear, actionable SLAs once you understand

  1. What data you're working with
  2. How it's used, and
  3. Who uses it.

SLIs for measuring Data Reliability

Once you've established a comprehensive understanding and baseline, you can begin to home in on the key metrics that will serve as your service-level reliability indicators.

As a general rule, data SLIs should portray the mutually agreed-upon state of data you defined in step 1, as well as limitations on how data can and cannot be used and a detailed description of data downtime. This may include incomplete, duplicated, or out-of-date data.

Your particular use case will determine your SLIs, but here are a few metrics commonly used to assess data health:

  1. The number of data points associated with a specific data asset (N). Although this may be well outside your control, given that you most likely rely on external data sources, it can still be a significant cause of data downtime and is therefore worth measuring.
  2. Time-to-detection (TTD): This metric quantifies how quickly your team is alerted when an issue arises. This could take weeks or even months if you don't have proper detection and emergency notification strategies in place. Bad data can cause "silent errors," leading to costly issues that influence both your company and your customers.
  3. Time-to-resolution (TTR): This measures how quickly your team was capable of resolving an issue after being notified about it.

Using SLOs to track Data Reliability

Once you've identified key indicators (SLIs) for data reliability, you can set objectives (SLOs): reasonable ranges of data downtime. These SLOs should be realistic given your current situation. For instance, if you include TTD as a metric but are not using automated monitoring tools, your SLO should be more modest than that of a mature organization with extensive data reliability tooling. Agreeing on these objectives makes it easy to create a consistent framework that rates incidents by severity, making it easier to communicate and respond quickly when issues arise.
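As an illustration, a severity-rating helper of the kind described above might look like this in Go; the SEV labels and the TTD thresholds are invented for the example and should be tuned to your own SLOs:

```go
package main

import "fmt"

// severity buckets an incident by its time-to-detection in minutes.
// The thresholds are illustrative, not a standard.
func severity(ttdMinutes float64) string {
	switch {
	case ttdMinutes <= 30:
		return "SEV-3 (within SLO)"
	case ttdMinutes <= 240:
		return "SEV-2 (SLO missed)"
	default:
		return "SEV-1 (SLO badly missed)"
	}
}

func main() {
	fmt.Println(severity(12))  // SEV-3 (within SLO)
	fmt.Println(severity(300)) // SEV-1 (SLO badly missed)
}
```

A shared rule like this is what lets a dashboard or pager policy react consistently instead of relying on judgment calls during an incident.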

Once you've established these priorities and integrated them into your SLAs, you can create a dashboard to track and evaluate progress. Some data teams build ad hoc dashboards, whereas others depend on dedicated data observability options.

What are the challenges of Service Level Agreements in Data Platforms?

Delivering services to millions of customers via data centers involves resource management challenges. The challenges of service level agreements in data platforms are: consumer-driven service management, data processing of risk management, independent resource management, SLA resource allocation using virtualization, measuring the service, and system design and reiteration valuation.

Consumer-Driven Service Management

To satisfy customer requirements, three user-centric objectives are used: receiving feedback from customers, providing reliable communication with customers, and increasing access efficiency to understand each customer's specific needs, all while maintaining the customer's trust. When customer expectations are taken into account while developing a service, those expectations carry over to the service provider.

Data Processing of Risk Management

The Risk Management process includes:

  1. Identifying risk factors and assessing them.
  2. Identifying risk management techniques.
  3. Reviewing the risk management plan.

Grid service customers' service quality conditions necessitate the formation of service level agreements between service providers and customers. Because resources are disrupted and unavailable, service providers must decide whether to continue or reject service level agreement requests.

Independent Resource Management

The data processing center should keep the reservation process going smoothly by managing the existing service requisition, improving the future service requisition, and changing the price for incoming requests. The resource management paradigm maps resource interactions to a platform-independent service level agreements pool. The resource management architecture with the cooperation of computing systems via numerous virtual machines enhances the effectiveness of computational models and the utilization of resources designed for on-demand resource utilization.

SLA Resource Allocation Using Virtualization

Virtual machines with various resource management policies facilitate resource allocation in SLA by meeting the needs of multiple users. An optimal joint multiple resource allocation method is used in the Allocation of Resource Model of Distributed Environment. A resource allocation methodology is introduced to execute user applications for the multi-dimensional resource allocation problem.

Measuring the Service

Various service providers offer various computing services. To design for application and service needs, cloud offerings described in numerous public documents must be assessed for service performance. As part of a service level agreement, service measurement includes the current system's configuration and runtime information metrics.

System Design and Reiteration Valuation

Various sources and consumers with varying service standards are assessed to demonstrate the efficiency of resource management plans. Because resources are transferred, and service requisitions can come from multiple consumers at any stage, it is tedious to evaluate the performance of resource management plans in a repetitive and administrable fashion.


Data SLAs help the organization stay on track. They are a public pledge to others and a bilateral agreement: you agree to continue providing data within specified criteria in exchange for people's participation and awareness. A lot can go wrong in data engineering, and much of it is due to misunderstanding. Documenting your SLA will go a long way toward setting the record straight, allowing you to achieve your primary objective of instilling greater data trust within your organization.

The good news is that when defining metrics, service, and deliverable targets for big data analytics, you don't have to start from scratch, since the technique can be borrowed from the transactional side of your IT work. For many businesses, it's simply a case of examining the service level processes already in place for their transactional applications, then applying those processes to big data and making the required changes to address the distinct features of the big data environment, such as parallel processing and the handling of several types and forms of data.

Original article source at:


Chloe Butler


Pdf2gerb: Perl Script Converts PDF Files to Gerber format



Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.

The general workflow is as follows:

  1. Design the PCB using your favorite CAD or drawing software.
  2. Print the top and bottom copper and top silk screen layers to a PDF file.
  3. Run Pdf2Gerb on the PDFs to create Gerber and Excellon files.
  4. Use a Gerber viewer to double-check the output against the original PCB design.
  5. Make adjustments as needed.
  6. Submit the files to a PCB manufacturer.

Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).

See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.

#Pdf2Gerb config settings:
#Put this file in same folder/directory as itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;

use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)

#configurable settings:
#change values here instead of in main file

use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors no worky on Windows? this must be set < first DebugPrint() call

#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)} ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software.  \nGerber files MAY CONTAIN ERRORS.  Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG

use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC

use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)

#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1); 

#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
    .010, -.001,  #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
    .031, -.014,  #used for vias
    .041, -.020,  #smallest non-filled plated hole
    .051, -.025,
    .056, -.029,  #useful for IC pins
    .070, -.033,
    .075, -.040,  #heavier leads
#    .090, -.043,  #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
    .100, -.046,
    .115, -.052,
    .130, -.061,
    .140, -.067,
    .150, -.079,
    .175, -.088,
    .190, -.093,
    .200, -.100,
    .220, -.110,
    .160, -.125,  #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
    .090, -.040,  #want a .090 pad option, but use dummy hole size
    .065, -.040, #.065 x .065 rect pad
    .035, -.040, #.035 x .065 rect pad
    .001,  #too thin for real traces; use only for board outlines
    .006,  #minimum real trace width; mainly used for text
    .008,  #mainly used for mid-sized text, not traces
    .010,  #minimum recommended trace width for low-current signals
    .015,  #moderate low-voltage current
    .020,  #heavier trace for power, ground (even if a lighter one is adequate)
    .030,  #heavy-current traces; be careful with these ones!
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);

#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size:   parsed PDF diameter:      error:
#  .014                .016                +.002
#  .020                .02267              +.00267
#  .025                .026                +.001
#  .029                .03167              +.00267
#  .033                .036                +.003
#  .040                .04267              +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
    HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
    RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
    SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
    RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
    TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
    REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};

#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
    CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
    CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
    SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
    WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
    RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};

#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches

#line join/cap styles:
use constant
{
    CAP_NONE => 0, #butt (none); line is exact length
    CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
    CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
    CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
#number of elements in each shape type:
use constant
{
    RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
    LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
    CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
    CIRCLE_SHAPELEN => 5, #x, y, 5, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
    rect => RECT_SHAPELEN,
    line => LINE_SHAPELEN,
    curve => CURVE_SHAPELEN,
    circle => CIRCLE_SHAPELEN,
);

#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions

# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?

#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes. 
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes

#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (72 / .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches

# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)

# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time

# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const

use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool

my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time

print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load


#use Package::Constants;
#use Exporter qw(import); #

#my $caller = "pdf2gerb::";

#sub cfg
#    my $proto = shift;
#    my $class = ref($proto) || $proto;
#    my $settings =
#    {
#        $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
#    };
#    bless($settings, $class);
#    return $settings;

#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;

#print STDERR "read cfg file\n";

#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #; NOTE: "_OK" skips short/common names

#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }
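The point-to-inch arithmetic behind INCHES_PER_POINT and the 600-dpi figure in the config above can be sanity-checked outside Perl; here is a small Go sketch (Go used purely for illustration) that reproduces the conversion:

```go
package main

import "fmt"

const (
	inchesPerPoint = 1.0 / 72 // a PDF point is 1/72 inch
	scaleFactor    = 0.12     // the PDF scale factor used by the config above
)

func main() {
	// One rendered pixel is scaleFactor points, i.e. 0.12/72 inch,
	// which works out to 72 / 0.12 = 600 pixels per inch.
	dpi := 1 / (inchesPerPoint * scaleFactor)
	fmt.Printf("resolution: %.0f dpi\n", dpi)
}
```

The same ratio explains the pixel-valued adjustment constants: at 600 dpi, an adjustment of 2.4 pixels corresponds to .004 inch.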

Download Details:

Author: swannman
Source Code:

License: GPL-3.0 license


Rory West


Building AWS DynamoDB CRUD in Golang with Example

This is a simple AWS DynamoDB CRUD example written in Golang. Just bear in mind, some files need improvement: there are hard-coded pieces, duplications, and so on. I had to keep this post as short as possible, so feel free to refactor it.


├── internal
│   ├── domain
│   │   └── error.go
│   ├── pkg
│   │   └── storage
│   │       ├── aws
│   │       │   ├── aws.go
│   │       │   └── user_storage.go
│   │       └── user_storer.go
│   └── user
│       ├── controller.go
│       └── models.go
├── main.go
└── migrations
    └── users.json


First of all, you need to create the users table with the uuid field as its partition key.

$ aws --profile localstack --endpoint-url http://localhost:4566 dynamodb create-table --cli-input-json file://migrations/users.json

{
  "TableName": "users",
  "AttributeDefinitions": [
    {
      "AttributeName": "uuid",
      "AttributeType": "S"
    }
  ],
  "KeySchema": [
    {
      "AttributeName": "uuid",
      "KeyType": "HASH"
    }
  ],
  "ProvisionedThroughput": {
    "ReadCapacityUnits": 5,
    "WriteCapacityUnits": 5
  }
}


Never mind the endpoint paths; a truly RESTful API wouldn't use verb suffixes like /create and /update, but they keep this example easy to follow.

package main

import (
	"log"
	"net/http"
	"time"

	// Adjust these module paths to your own project.
	"yourmodule/internal/pkg/storage/aws"
	"yourmodule/internal/user"
)

func main() {
	// Create a session instance.
	ses, err := aws.New(aws.Config{
		Address: "http://localhost:4566",
		Region:  "eu-west-1",
		Profile: "localstack",
		ID:      "test",
		Secret:  "test",
	})
	if err != nil {
		log.Fatalln(err)
	}

	// Instantiate HTTP app
	usr := user.Controller{
		Storage: aws.NewUserStorage(ses, time.Second*5),
	}

	// Instantiate HTTP router
	rtr := http.NewServeMux()
	rtr.HandleFunc("/api/v1/users/create", usr.Create)
	rtr.HandleFunc("/api/v1/users/find", usr.Find)
	rtr.HandleFunc("/api/v1/users/delete", usr.Delete)
	rtr.HandleFunc("/api/v1/users/update", usr.Update)

	// Start HTTP server
	log.Fatalln(http.ListenAndServe(":8080", rtr))
}


package user

import (
	"encoding/json"
	"net/http"

	"github.com/google/uuid"

	// Adjust these module paths to your own project.
	"yourmodule/internal/domain"
	"yourmodule/internal/pkg/storage"
)

type Controller struct {
	Storage storage.UserStorer
}

// POST /api/v1/users/create
func (c Controller) Create(w http.ResponseWriter, r *http.Request) {
	var req User
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	id := uuid.New().String()
	err := c.Storage.Insert(r.Context(), storage.User{
		UUID:      id,
		Name:      req.Name,
		Level:     req.Level,
		IsBlocked: req.IsBlocked,
		CreatedAt: req.CreatedAt,
		Roles:     req.Roles,
	})
	if err != nil {
		switch err {
		case domain.ErrConflict:
			w.WriteHeader(http.StatusConflict)
		default:
			w.WriteHeader(http.StatusInternalServerError)
		}
		return
	}
	_, _ = w.Write([]byte(id))
}

// GET /api/v1/users/find?id={UUID}
func (c Controller) Find(w http.ResponseWriter, r *http.Request) {
	res, err := c.Storage.Find(r.Context(), r.URL.Query().Get("id"))
	if err != nil {
		switch err {
		case domain.ErrNotFound:
			w.WriteHeader(http.StatusNotFound)
		default:
			w.WriteHeader(http.StatusInternalServerError)
		}
		return
	}
	user := User{
		UUID:      res.UUID,
		Name:      res.Name,
		Level:     res.Level,
		IsBlocked: res.IsBlocked,
		CreatedAt: res.CreatedAt,
		Roles:     res.Roles,
	}
	data, err := json.Marshal(user)
	if err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	_, _ = w.Write(data)
}

// DELETE /api/v1/users/delete?id={UUID}
func (c Controller) Delete(w http.ResponseWriter, r *http.Request) {
	err := c.Storage.Delete(r.Context(), r.URL.Query().Get("id"))
	if err != nil {
		switch err {
		case domain.ErrNotFound:
			w.WriteHeader(http.StatusNotFound)
		default:
			w.WriteHeader(http.StatusInternalServerError)
		}
		return
	}
}

// PATCH /api/v1/users/update?id={UUID}
func (c Controller) Update(w http.ResponseWriter, r *http.Request) {
	var req User
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	err := c.Storage.Update(r.Context(), storage.User{
		UUID:  r.URL.Query().Get("id"),
		Name:  req.Name,
		Level: req.Level,
		Roles: req.Roles,
	})
	if err != nil {
		switch err {
		case domain.ErrNotFound:
			w.WriteHeader(http.StatusNotFound)
		default:
			w.WriteHeader(http.StatusInternalServerError)
		}
		return
	}
}


package user

import "time"

type User struct {
	UUID      string    `json:"uuid"`
	Name      string    `json:"name"`
	Level     int       `json:"level"`
	IsBlocked bool      `json:"is_blocked"`
	CreatedAt time.Time `json:"created_at"`
	Roles     []string  `json:"roles"`
}


package domain

import "errors"

var (
	ErrInternal = errors.New("internal")
	ErrNotFound = errors.New("not found")
	ErrConflict = errors.New("conflict")
)


package storage

import (
	"context"
	"time"
)

type User struct {
	UUID      string    `json:"uuid"`
	Name      string    `json:"name"`
	Level     int       `json:"level"`
	IsBlocked bool      `json:"is_blocked"`
	CreatedAt time.Time `json:"created_at"`
	Roles     []string  `json:"roles"`
}

type UserStorer interface {
	Insert(ctx context.Context, user User) error
	Find(ctx context.Context, uuid string) (User, error)
	Delete(ctx context.Context, uuid string) error
	Update(ctx context.Context, user User) error
}


package aws

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"

	// Adjust these module paths to your own project.
	"yourmodule/internal/domain"
	"yourmodule/internal/pkg/storage"
)

var _ storage.UserStorer = UserStorage{}

type UserStorage struct {
	timeout time.Duration
	client  *dynamodb.DynamoDB
}

func NewUserStorage(session *session.Session, timeout time.Duration) UserStorage {
	return UserStorage{
		timeout: timeout,
		client:  dynamodb.New(session),
	}
}

func (u UserStorage) Insert(ctx context.Context, user storage.User) error {
	ctx, cancel := context.WithTimeout(ctx, u.timeout)
	defer cancel()

	item, err := dynamodbattribute.MarshalMap(user)
	if err != nil {
		return domain.ErrInternal
	}

	input := &dynamodb.PutItemInput{
		TableName: aws.String("users"),
		Item:      item,
		ExpressionAttributeNames: map[string]*string{
			"#uuid": aws.String("uuid"),
		},
		ConditionExpression: aws.String("attribute_not_exists(#uuid)"),
	}

	if _, err := u.client.PutItemWithContext(ctx, input); err != nil {
		if _, ok := err.(*dynamodb.ConditionalCheckFailedException); ok {
			return domain.ErrConflict
		}
		return domain.ErrInternal
	}

	return nil
}

func (u UserStorage) Find(ctx context.Context, uuid string) (storage.User, error) {
	ctx, cancel := context.WithTimeout(ctx, u.timeout)
	defer cancel()

	input := &dynamodb.GetItemInput{
		TableName: aws.String("users"),
		Key: map[string]*dynamodb.AttributeValue{
			"uuid": {S: aws.String(uuid)},
		},
	}

	res, err := u.client.GetItemWithContext(ctx, input)
	if err != nil {
		return storage.User{}, domain.ErrInternal
	}
	if res.Item == nil {
		return storage.User{}, domain.ErrNotFound
	}

	var user storage.User
	if err := dynamodbattribute.UnmarshalMap(res.Item, &user); err != nil {
		return storage.User{}, domain.ErrInternal
	}

	return user, nil
}

func (u UserStorage) Delete(ctx context.Context, uuid string) error {
	ctx, cancel := context.WithTimeout(ctx, u.timeout)
	defer cancel()

	input := &dynamodb.DeleteItemInput{
		TableName: aws.String("users"),
		Key: map[string]*dynamodb.AttributeValue{
			"uuid": {S: aws.String(uuid)},
		},
	}

	if _, err := u.client.DeleteItemWithContext(ctx, input); err != nil {
		return domain.ErrInternal
	}

	return nil
}

func (u UserStorage) Update(ctx context.Context, user storage.User) error {
	ctx, cancel := context.WithTimeout(ctx, u.timeout)
	defer cancel()

	roles := make([]*dynamodb.AttributeValue, len(user.Roles))
	for i, role := range user.Roles {
		roles[i] = &dynamodb.AttributeValue{S: aws.String(role)}
	}

	input := &dynamodb.UpdateItemInput{
		TableName: aws.String("users"),
		Key: map[string]*dynamodb.AttributeValue{
			"uuid": {S: aws.String(user.UUID)},
		},
		ExpressionAttributeNames: map[string]*string{
			"#name":  aws.String("name"),
			"#level": aws.String("level"),
			"#roles": aws.String("roles"),
		},
		ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
			":name":  {S: aws.String(user.Name)},
			":level": {N: aws.String(fmt.Sprint(user.Level))},
			":roles": {L: roles},
		},
		UpdateExpression: aws.String("set #name = :name, #level = :level, #roles = :roles"),
		ReturnValues:     aws.String("UPDATED_NEW"),
	}

	if _, err := u.client.UpdateItemWithContext(ctx, input); err != nil {
		return domain.ErrInternal
	}

	return nil
}


package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

type Config struct {
	Address string
	Region  string
	Profile string
	ID      string
	Secret  string
}

func New(config Config) (*session.Session, error) {
	return session.NewSessionWithOptions(
		session.Options{
			Config: aws.Config{
				Credentials:      credentials.NewStaticCredentials(config.ID, config.Secret, ""),
				Region:           aws.String(config.Region),
				Endpoint:         aws.String(config.Address),
				S3ForcePathStyle: aws.Bool(true),
			},
			Profile: config.Profile,
		},
	)
}


# Create
curl --location --request POST 'http://localhost:8080/api/v1/users/create' \
--data-raw '{
    "name": "inanzzz",
    "level": 3,
    "is_blocked": false,
    "created_at": "2020-01-31T23:59:00Z",
    "roles": ["accounts", "admin"]
}'

# Find
curl --location --request GET 'http://localhost:8080/api/v1/users/find?id=80638f40-d248-49be-90ce-88d5b1b4ecd4'

# Delete
curl --location --request DELETE 'http://localhost:8080/api/v1/users/delete?id=80638f40-d248-49be-90ce-88d5b1b4ecd4'

# Update
curl --location --request PATCH 'http://localhost:8080/api/v1/users/update?id=347ac592-b024-4001-9d1b-925abe10c236' \
--data-raw '{
    "name": "inanzzz",
    "level": 1,
    "roles": ["accounts", "admin"]
}'

Happy Coding!!!

#aws #golang #dynamodb #crud
