The Serverless Framework helps you develop and deploy your AWS Lambda functions, along with the AWS infrastructure resources they require. It’s a CLI that offers structure, automation and best practices out-of-the-box, allowing you to focus on building sophisticated, event-driven, serverless architectures, comprised of Functions and Events.
The Serverless Framework is different from other application frameworks because:
Here are the Framework’s main concepts and how they pertain to AWS and Lambda…
A Function is an AWS Lambda function. It’s an independent unit of deployment, like a microservice. It’s merely code, deployed in the cloud, that is most often written to perform a single job such as:
You can perform multiple jobs in your code, but we don’t recommend doing that without good reason. Separation of concerns is best and the Framework is designed to help you easily develop and deploy Functions, as well as manage lots of them.
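For example, a minimal AWS Lambda handler doing one such job might look like this (Python shown; the file name, function name and response are illustrative and not part of the Framework itself):
# handler.py - a minimal Lambda handler (illustrative names)
def users_create(event, context):
    # perform one job, e.g. record the new user, then return an HTTP-style response
    return {"statusCode": 200, "body": "user created"}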
Anything that triggers an AWS Lambda Function to execute is regarded by the Framework as an Event. Events are infrastructure events on AWS such as:
When you define an event for your AWS Lambda functions in the Serverless Framework, the Framework will automatically create any infrastructure necessary for that event (e.g., an API Gateway endpoint) and configure your AWS Lambda Functions to listen to it.
Resources are AWS infrastructure components which your Functions use such as:
The Serverless Framework not only deploys your Functions and the Events that trigger them, but it also deploys the AWS infrastructure components your Functions depend upon.
A Service is the Framework's unit of organization. You can think of it as a project file, though you can have multiple services for a single application. It's where you define your Functions, the Events that trigger them, and the Resources your Functions use, all in one file entitled serverless.yml (or serverless.json or serverless.js). It looks like this:
# serverless.yml

service: users

functions: # Your "Functions"
  usersCreate:
    events: # The "Events" that trigger this function
      - http: post users/create
  usersDelete:
    events:
      - http: delete users/delete

resources: # The "Resources" your "Functions" use. Raw AWS CloudFormation goes in here.
When you deploy with the Framework by running serverless deploy, everything in serverless.yml is deployed at once.
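For illustration, here is a minimal sketch of what could go under the resources property, a hypothetical DynamoDB table written in raw CloudFormation (the table name and properties are made up):
# serverless.yml
resources: # Raw AWS CloudFormation
  Resources:
    UsersTable: # hypothetical resource
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: users-table
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST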
You can overwrite or extend the functionality of the Framework using Plugins. Every serverless.yml can contain a plugins: property, which features multiple plugins.
# serverless.yml
plugins:
- serverless-offline
- serverless-secrets
Install via pip:
$ pip install pytumblr
Install from source:
$ git clone https://github.com/tumblr/pytumblr.git
$ cd pytumblr
$ python setup.py install
A pytumblr.TumblrRestClient is the object you'll make all of your calls to the Tumblr API through. Creating one is this easy:
client = pytumblr.TumblrRestClient(
'<consumer_key>',
'<consumer_secret>',
'<oauth_token>',
'<oauth_secret>',
)
client.info() # Grabs the current user information
Two easy ways to get your credentials are:
* the interactive_console.py tool (if you already have a consumer key & secret)

client.info() # get information about the authenticating user
client.dashboard() # get the dashboard for the authenticating user
client.likes() # get the likes for the authenticating user
client.following() # get the blogs followed by the authenticating user
client.follow('codingjester.tumblr.com') # follow a blog
client.unfollow('codingjester.tumblr.com') # unfollow a blog
client.like(id, reblogkey) # like a post
client.unlike(id, reblogkey) # unlike a post
client.blog_info(blogName) # get information about a blog
client.posts(blogName, **params) # get posts for a blog
client.avatar(blogName) # get the avatar for a blog
client.blog_likes(blogName) # get the likes on a blog
client.followers(blogName) # get the followers of a blog
client.blog_following(blogName) # get the publicly exposed blogs that [blogName] follows
client.queue(blogName) # get the queue for a given blog
client.submission(blogName) # get the submissions for a given blog
Creating posts
PyTumblr lets you create all of the various post types that Tumblr supports. When using these types, there are a few default options that can be used with any post type.
The default supported types are described below.
We'll show examples throughout of these default examples while showcasing all the specific post types.
Creating a photo post
Creating a photo post supports a bunch of different options plus the described default options:
* caption - a string, the user supplied caption
* link - a string, the "click-through" url for the photo
* source - a string, the url for the photo you want to use (use this or the data parameter)
* data - a list or string, a list of filepaths or a single file path for multipart file upload
#Creates a photo post using a source URL
client.create_photo(blogName, state="published", tags=["testing", "ok"],
source="https://68.media.tumblr.com/b965fbb2e501610a29d80ffb6fb3e1ad/tumblr_n55vdeTse11rn1906o1_500.jpg")
#Creates a photo post using a local filepath
client.create_photo(blogName, state="queue", tags=["testing", "ok"],
tweet="Woah this is an incredible sweet post [URL]",
data="/Users/johnb/path/to/my/image.jpg")
#Creates a photoset post using several local filepaths
client.create_photo(blogName, state="draft", tags=["jb is cool"], format="markdown",
data=["/Users/johnb/path/to/my/image.jpg", "/Users/johnb/Pictures/kittens.jpg"],
caption="## Mega sweet kittens")
Creating a text post
Creating a text post supports the same options as default and just two other parameters:
* title - a string, the optional title for the post. Supports markdown or html
* body - a string, the body of the post. Supports markdown or html
#Creating a text post
client.create_text(blogName, state="published", slug="testing-text-posts", title="Testing", body="testing1 2 3 4")
Creating a quote post
Creating a quote post supports the same options as default and two other parameters:
* quote - a string, the full text of the quote. Supports markdown or html
* source - a string, the cited source. HTML supported
#Creating a quote post
client.create_quote(blogName, state="queue", quote="I am the Walrus", source="Ringo")
Creating a link post
#Create a link post
client.create_link(blogName, title="I like to search things, you should too.", url="https://duckduckgo.com",
description="Search is pretty cool when a duck does it.")
Creating a chat post
Creating a chat post supports the same options as default and two other parameters:
* title - a string, the title of the chat post
* conversation - a string, the text of the conversation/chat, with dialogue labels (no html)
#Create a chat post
chat = """John: Testing can be fun!
Renee: Testing is tedious and so are you.
John: Aw.
"""
client.create_chat(blogName, title="Renee just doesn't understand.", conversation=chat, tags=["renee", "testing"])
Creating an audio post
Creating an audio post allows for all default options and has three other parameters. The only thing to keep in mind while dealing with audio posts is to make sure that you use either the external_url parameter or data. You cannot use both at the same time.
* caption - a string, the caption for your post
* external_url - a string, the url of the site that hosts the audio file
* data - a string, the filepath of the audio file you want to upload to Tumblr
#Creating an audio file
client.create_audio(blogName, caption="Rock out.", data="/Users/johnb/Music/my/new/sweet/album.mp3")
#lets use soundcloud!
client.create_audio(blogName, caption="Mega rock out.", external_url="https://soundcloud.com/skrillex/sets/recess")
Creating a video post
Creating a video post allows for all default options and has three other parameters. Like the other post types, it has some restrictions: you cannot use the embed and data parameters at the same time.
* caption - a string, the caption for your post
* embed - a string, the HTML embed code for the video
* data - a string, the path of the file you want to upload
#Creating an upload from YouTube
client.create_video(blogName, caption="Jon Snow. Mega ridiculous sword.",
embed="http://www.youtube.com/watch?v=40pUYLacrj4")
#Creating a video post from local file
client.create_video(blogName, caption="testing", data="/Users/johnb/testing/ok/blah.mov")
Editing a post
Updating a post requires knowing what type of post you're updating. You can supply any of the options given above for that post type.
client.edit_post(blogName, id=post_id, type="text", title="Updated")
client.edit_post(blogName, id=post_id, type="photo", data="/Users/johnb/mega/awesome.jpg")
Reblogging a Post
Reblogging a post just requires knowing the post id and the reblog key, which is supplied in the JSON of any post object.
client.reblog(blogName, id=125356, reblog_key="reblog_key")
Deleting a post
Deleting just requires that you own the post and have the post id
client.delete_post(blogName, 123456) # Deletes your post :(
A note on tags: when passing tags as params, please pass them as a list (not a comma-separated string):
client.create_text(blogName, tags=['hello', 'world'], ...)
Getting notes for a post
In order to get the notes for a post, you need to have the post id and the blog that it is on.
data = client.notes(blogName, id='123456')
The results include a timestamp you can use to make future calls.
data = client.notes(blogName, id='123456', before_timestamp=data["_links"]["next"]["query_params"]["before_timestamp"])
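As a sketch, you could follow that link repeatedly to collect every note; this assumes the response keeps returning a 'notes' list and a '_links' / 'next' entry until the last page:
# Sketch: page through all notes for a post using before_timestamp
all_notes = []
data = client.notes(blogName, id='123456')
while True:
    all_notes.extend(data.get('notes', []))
    next_link = data.get('_links', {}).get('next')
    if not next_link:
        break  # no more pages
    data = client.notes(blogName, id='123456',
                        before_timestamp=next_link['query_params']['before_timestamp'])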
# get posts with a given tag
client.tagged(tag, **params)
This client comes with a nice interactive console to run you through the OAuth process, grab your tokens (and store them for future use).
You'll need pyyaml installed to run it, but then it's just:
$ python interactive-console.py
and away you go! Tokens are stored in ~/.tumblr and are also shared by other Tumblr API clients like the Ruby client.
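If you want to reuse those stored tokens in your own scripts, a minimal sketch could look like this (it assumes the file is YAML and uses the four key names shown, which may differ in your version):
# Sketch: build a client from the credentials stored by the interactive console (assumed keys)
import os
import yaml
import pytumblr

with open(os.path.expanduser('~/.tumblr')) as f:
    creds = yaml.safe_load(f)

client = pytumblr.TumblrRestClient(
    creds['consumer_key'],
    creds['consumer_secret'],
    creds['oauth_token'],
    creds['oauth_token_secret'],
)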
The tests (and coverage reports) are run with nose, like this:
python setup.py test
Author: tumblr
Source Code: https://github.com/tumblr/pytumblr
License: Apache-2.0 license
Perl script converts PDF files to Gerber format
Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.
The general workflow is as follows:
Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).
See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.
#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;
use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)
##############################################################################################
#configurable settings:
#change values here instead of in main pdf2gerb.pl file
use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors no worky on Windows? this must be set < first DebugPrint() call
#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software. \nGerber files MAY CONTAIN ERRORS. Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG
use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC
use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)
#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1);
#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
.010, -.001, #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
.031, -.014, #used for vias
.041, -.020, #smallest non-filled plated hole
.051, -.025,
.056, -.029, #useful for IC pins
.070, -.033,
.075, -.040, #heavier leads
# .090, -.043, #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
.100, -.046,
.115, -.052,
.130, -.061,
.140, -.067,
.150, -.079,
.175, -.088,
.190, -.093,
.200, -.100,
.220, -.110,
.160, -.125, #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
.090, -.040, #want a .090 pad option, but use dummy hole size
.065, -.040, #.065 x .065 rect pad
.035, -.040, #.035 x .065 rect pad
#traces:
.001, #too thin for real traces; use only for board outlines
.006, #minimum real trace width; mainly used for text
.008, #mainly used for mid-sized text, not traces
.010, #minimum recommended trace width for low-current signals
.012,
.015, #moderate low-voltage current
.020, #heavier trace for power, ground (even if a lighter one is adequate)
.025,
.030, #heavy-current traces; be careful with these ones!
.040,
.050,
.060,
.080,
.100,
.120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);
#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size: parsed PDF diameter: error:
# .014 .016 +.002
# .020 .02267 +.00267
# .025 .026 +.001
# .029 .03167 +.00267
# .033 .036 +.003
# .040 .04267 +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};
#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
CIRCLE_ADJUST_MINX => 0,
CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
CIRCLE_ADJUST_MAXY => 0,
SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};
#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches
#line join/cap styles:
use constant
{
CAP_NONE => 0, #butt (none); line is exact length
CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
#number of elements in each shape type:
use constant
{
RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
CIRCLE_SHAPELEN => 5, #x, y, 5, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
rect => RECT_SHAPELEN,
line => LINE_SHAPELEN,
curve => CURVE_SHAPELEN,
circle => CIRCLE_SHAPELEN,
);
#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions
# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?
#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes.
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes
#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (1 / (1/72 * .12) = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches
# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)
# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time
# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const
use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool
my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time
print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load
#############################################################################################
#junk/experiment:
#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html
#my $caller = "pdf2gerb::";
#sub cfg
#{
# my $proto = shift;
# my $class = ref($proto) || $proto;
# my $settings =
# {
# $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
# };
# bless($settings, $class);
# return $settings;
#}
#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;
#print STDERR "read cfg file\n";
#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names
#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }
Author: swannman
Source Code: https://github.com/swannman/pdf2gerb
License: GPL-3.0 license
Humidifier is a ruby tool for managing AWS CloudFormation stacks. You can use it to build and manage stacks programmatically or you can use it as a command line tool to manage stacks through configuration files.
Add this line to your application's Gemfile:
gem 'humidifier'
And then execute:
$ bundle
Or install it yourself as:
$ gem install humidifier
Stacks are represented by the Humidifier::Stack class. You can set any of the top-level JSON attributes (such as name and description) through the initializer.
Resources are represented by an exact mapping from AWS resource names to Humidifier resource names (e.g. AWS::EC2::Instance becomes Humidifier::EC2::Instance). Resources have accessors for each JSON attribute. Each attribute can also be set through the initialize, update, and update_attribute methods.
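For instance, a rough sketch of those three ways to set an attribute (the prop name follows the usual underscored form of the CloudFormation property InstanceType; the exact update_attribute signature shown here is an assumption):
# Sketch only: equivalent ways to set a property on a resource
instance = Humidifier::EC2::Instance.new(instance_type: 't2.micro') # via initialize
instance.update(instance_type: 't2.small')                          # via update
instance.update_attribute(:instance_type, 't2.medium')              # via update_attribute
instance.instance_type                                               # accessor, now "t2.medium"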
The below example will create a stack with two resources, a load balancer and an auto scaling group. It then deploys the new stack and pauses execution until the stack is finished being created.
stack = Humidifier::Stack.new(name: 'Example-Stack')

stack.add(
  'LoadBalancer',
  Humidifier::ElasticLoadBalancing::LoadBalancer.new(
    scheme: 'internal',
    listeners: [
      {
        load_balancer_port: 80,
        protocol: 'http',
        instance_port: 80,
        instance_protocol: 'http'
      }
    ]
  )
)

stack.add(
  'AutoScalingGroup',
  Humidifier::AutoScaling::AutoScalingGroup.new(
    min_size: '1',
    max_size: '20',
    availability_zones: ['us-east-1a'],
    load_balancer_names: [Humidifier.ref('LoadBalancer')]
  )
)
stack.deploy_and_wait
Once stacks have the appropriate resources, you can query AWS to handle all stack CRUD operations. The operations themselves are intuitively named (i.e. #create, #update, #delete). There are also convenience methods for validating a stack body (#valid?), checking the existence of a stack (#exists?), and creating or updating based on existence (#deploy).
There are additionally four functions on Humidifier::Stack that support waiting for execution in AWS to finish. They are blocking versions of their non-blocking counterparts and are named after them: #create_and_wait, #update_and_wait, #delete_and_wait, and #deploy_and_wait.
You can use CFN intrinsic functions and references using Humidifier.fn.[name] and Humidifier.ref. They will build appropriate structures that know how to be dumped to CFN syntax.
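A small sketch of both helpers (the 'SharedAZ' export name is hypothetical, 'LoadBalancer' is the logical name from the example above, and the exact hashes returned may differ slightly):
# Sketch: reference structures dumped to CFN syntax via #to_cf
Humidifier.ref('LoadBalancer').to_cf          # => e.g. { "Ref" => "LoadBalancer" }
Humidifier.fn.import_value('SharedAZ').to_cf  # => e.g. { "Fn::ImportValue" => "SharedAZ" }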
Instead of immediately pushing your changes to CloudFormation, Humidifier also supports change sets. Change sets are a powerful feature that allow you to see the changes that will be made before you make them. To read more about change sets see the announcement article. To use them in Humidifier, Humidifier::Stack has the #create_change_set and #deploy_change_set methods. The #create_change_set method will create a change set on the stack. The #deploy_change_set method will create a change set if the stack currently exists, and otherwise will create the stack.
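Continuing the sketch with the stack built in the earlier example:
# Sketch: preview changes before applying them
stack.create_change_set  # creates a change set on the existing stack
stack.deploy_change_set  # creates the stack if it doesn't exist, otherwise creates a change set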
To see the template body, you can check the #to_cf method on stacks, resources, fns, and refs. All of them will output a hash of what will be uploaded (except the stack, which will output a string representation).
Humidifier itself contains a registry of all possible resources that it supports. You can access it with Humidifier::registry, which is a hash of AWS resource names pointing to the corresponding classes.
Resources have an ::aws_name method to see how AWS references them. They also contain a ::props method that returns a hash mapping the name that Humidifier uses to reference each prop to the appropriate prop object.
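For instance, a quick sketch of poking at the registry (the prop names in the last comment are illustrative):
# Sketch: look up a resource class and inspect its metadata
klass = Humidifier.registry['AWS::EC2::Instance'] # => Humidifier::EC2::Instance
klass.aws_name                                    # => "AWS::EC2::Instance"
klass.props.keys.first(3)                         # e.g. ["availability_zone", "image_id", "instance_type"]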
When templates are especially large (larger than 51,200 bytes), they cannot be uploaded directly through the AWS SDK. You can configure Humidifier to seamlessly upload the templates to S3 and reference them using an S3 URL instead by:
Humidifier.configure do |config|
  config.s3_bucket = 'my.s3.bucket'
  config.s3_prefix = 'my-prefix/' # optional
end
You can force a stack to upload its template to S3 regardless of the size of the template. This is a useful option if you're going to be deploying multiple copies of a template or if you want a backup. You can set this option on a per-stack basis:
stack.deploy(force_upload: true)
or globally, by setting the configuration option:
Humidifier.configure do |config|
  config.force_upload = true
end
Humidifier can also be used as a CLI for managing resources through configuration files. For a step-by-step guide, read on, but if you'd like to see a working example, check out the example directory.
To get started, build a ruby script (for example humidifier) that executes the Humidifier::CLI class, like so:
#!/usr/bin/env ruby
require 'humidifier'

Humidifier.configure do |config|
  # optional, defaults to the current working directory, so that all of the
  # directories from the location that you run the CLI are assumed to contain
  # resource specifications
  config.stack_path = 'stacks'

  # optional, a default prefix to use before deploying to AWS
  config.stack_prefix = 'humidifier-'

  # specifies that `users.yml` files contain specifications for `AWS::IAM::User`
  # resources
  config.map :users, to: 'IAM::User'
end

Humidifier::CLI.start(ARGV)
Inside of the stacks directory configured above, create a subdirectory for each CloudFormation stack that you want to deploy. With the above configuration, we can create YAML files in the form of users.yml for each stack, which will specify IAM users to create. The file format looks like the below:
EngUser:
  path: /humidifier/
  user_name: EngUser
  groups:
    - Engineering
    - Testing
    - Deployment

AdminUser:
  path: /humidifier/
  user_name: AdminUser
  groups:
    - Management
    - Administration
The top-level keys are the logical resource names that will be displayed in the CloudFormation screen. They point to a map of key/value pairs that will be passed on to humidifier. Any humidifier (and therefore any CloudFormation) attribute may be specified. For more information on CloudFormation templates and which attributes may be specified, see both the humidifier docs and the CloudFormation docs.
Oftentimes, specifying these attributes can become repetitive, e.g., each user should automatically receive the same "path" attribute. Other times, you may want custom logic to execute depending on which AWS environment you're running in. Finally, you may want to reference resources in the same or other stacks.
Humidifier's solution for this is to allow customized "mapper" classes to take the user-provided attributes and transform them into the attributes that CloudFormation expects. Consider the following example for mapping a user:
class UserMapper < Humidifier::Config::Mapper
  GROUPS = {
    'eng' => %w[Engineering Testing Deployment],
    'admin' => %w[Management Administration]
  }

  defaults do |logical_name|
    { path: '/humidifier/', user_name: logical_name }
  end

  attribute :group do |group|
    groups = GROUPS[group]
    groups.any? ? { groups: GROUPS[group] } : {}
  end
end

Humidifier.configure do |config|
  config.map :users, to: 'IAM::User', using: UserMapper
end
This means that by default, all entries in the users.yml files will get a /humidifier/ path, the user_name attribute will be set based on the logical name that was provided for the resource, and you can additionally specify a group attribute, even though it is not native to CloudFormation. With this group attribute, it will actually map to the groups attribute that CloudFormation expects.
With this new mapper in place, we can simplify our YAML file to:
EngUser:
  group: eng

AdminUser:
  group: admin
Now that you've configured your CLI, your resources, and your mappers, you can use the CLI to display, validate, and deploy your infrastructure to CloudFormation. Run your script without any arguments to get the help message and explanations for each command.
Each command has an --aws-profile (or -p) option for specifying which profile to authenticate against when querying AWS. You should ensure that this profile has the correct permissions for creating whatever resources are going to be part of your stack. You can also rely on the AWS_* environment variables, or the EC2 instance profile if you're deploying from an instance. For more information, see the AWS docs under the "Configuration" section.
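For example (the profile name my-profile and the stack name users are placeholders for your own):
$ ./humidifier display users --aws-profile my-profile
$ ./humidifier deploy users -p my-profile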
Below is the list of commands and some of their options.
change [?stack]
Creates a change set for either the specified stack or all stacks in the repo. The change set represents the changes between what is currently deployed versus the resources represented by the configuration.
deploy [?stack] [*parameters]
Creates or updates (depending on if the stack already exists) one or all stacks in the repo.
The deploy command also allows a --prefix command line argument that will override the default prefix (if one is configured) for the stack that is being deployed. This is especially useful when you're deploying multiple copies of the same stack (for instance, multiple autoscaling groups) that have different purposes or semantically mean newer versions of resources.
display [stack] [?pattern]
Displays the specified stack in JSON format on the command line. If you optionally pass a pattern argument, it will filter the resources down to just ones whose names match the given pattern.
stacks
Displays the names of all of the stacks that humidifier is managing.
upgrade
Downloads the latest CloudFormation resource specification. Periodically AWS will update the file that humidifier is based on, in which case the attributes of the resources that were changed could change. This gem usually stays relatively in sync, but if you need to use the latest specs and this gem has not yet released a new version containing them, then you can run this command to download the latest specs onto your system.
upload [?stack]
Upload one or all stacks in the repo to S3 for reference later. Note that this must be combined with the humidifier s3_bucket configuration option.
validate [?stack]
Validate that one or all stacks in the repo are properly configured and using values that CloudFormation understands.
version
Output the version of Humidifier as well as the version of the CloudFormation resource specification that you are using.
CloudFormation template parameters can be specified by having a special parameters.yml file in your stack directory. This file should contain a YAML-encoded object whose keys are the names of the parameters and whose values are the parameter configuration (using the same underscore paradigm as humidifier resources for specifying configuration).
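As an illustrative sketch (the parameter names and properties below are hypothetical, using the underscored form of the CloudFormation parameter attributes):
# stacks/foobar/parameters.yml
Param1:
  type: String
  description: Hypothetical first parameter
Param2:
  type: String
  default: Bar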
You can pass values to the CLI deploy command after the stack name on the command line as in:
humidifier deploy foobar Param1=Foo Param2=Bar
Those parameters will get passed in as values when the stack is deployed.
A couple of convenient shortcuts are built into humidifier so that writing templates and mappers can both be more concise.
There are a lot of properties in the AWS CloudFormation resource specification that are simply pointers to other entities within the AWS ecosystem. For example, an AWS::EC2::VPCGatewayAttachment entity has a VpcId property that represents the ID of the associated AWS::EC2::VPC.
Because this pattern is so common, humidifier detects all properties ending in Id and allows you to specify them without the suffix. If you choose to use this format, humidifier will automatically turn that value into a CloudFormation resource reference.
A lot of the time, mappers that you create will not be overly complicated, especially if you're using automatic id properties. So, the config.map method optionally takes a block, and allows you to specify the mapper inline. This is recommended for mappers that aren't complicated enough to warrant their own class (for instance, for testing purposes). An example of this using the UserMapper from above is below:
Humidifier.configure do |config|
  config.map :users, to: 'IAM::User' do
    GROUPS = {
      'eng' => %w[Engineering Testing Deployment],
      'admin' => %w[Management Administration]
    }

    defaults do |logical_name|
      { path: '/humidifier/', user_name: logical_name }
    end

    attribute :group do |group|
      groups = GROUPS[group]
      groups.any? ? { groups: GROUPS[group] } : {}
    end
  end
end
AWS allows cross-stack references through the intrinsic Fn::ImportValue function. You can take advantage of this with humidifier by using the export: true option on resources in your stacks. For instance, if in one stack you have a subnet that you need to reference in another, you could (stacks/vpc/subnets.yml):
ProductionPrivateSubnet2a:
  vpc: ProductionVPC
  cidr_block: 10.0.0.0/19
  availability_zone: us-west-2a
  export: true

ProductionPrivateSubnet2b:
  vpc: ProductionVPC
  cidr_block: 10.0.64.0/19
  availability_zone: us-west-2b
  export: true

ProductionPrivateSubnet2c:
  vpc: ProductionVPC
  cidr_block: 10.0.128.0/19
  availability_zone: us-west-2c
  export: true
And then in another stack, you could reference those values (stacks/rds/db_subnets_groups.yml):
ProductionDBSubnetGroup:
  db_subnet_group_description: Production DB private subnet group
  subnets:
    - ProductionPrivateSubnet2a
    - ProductionPrivateSubnet2b
    - ProductionPrivateSubnet2c
Within the configuration, you would specify to use the Fn::ImportValue function like so:
Humidifier.configure do |config|
  config.stack_path = 'stacks'
  config.map :subnets, to: 'EC2::Subnet'

  config.map :db_subnet_groups, to: 'RDS::DBSubnetGroup' do
    attribute :subnets do |subnet_names|
      subnet_ids =
        subnet_names.map do |subnet_name|
          Humidifier.fn.import_value(subnet_name)
        end

      { subnet_ids: subnet_ids }
    end
  end
end
If you specify export: true, it will by default export a reference to the resource listed in the stack. You can also choose to export a different attribute by specifying the attribute as the value to export. For example, if we were creating instance profiles and wanted to export the Arn so that it could be referenced by an instance later, we could:
APIRoleInstanceProfile:
  depends_on: APIRole
  roles:
    - APIRole
  export: Arn
To get started, ensure you have ruby installed, version 2.4 or later. From there, install the bundler gem (gem install bundler) and then run bundle install in the root of the repository.
The default rake task runs the tests. Styling is governed by rubocop. The docs are generated with yard. To run all three of these, run:
$ bundle exec rake
$ bundle exec rubocop
$ bundle exec rake yard
The specs pulled from the CFN docs are saved to CloudFormationResourceSpecification.json. You can update it by running bundle exec rake specs. This script will pull down the latest resource specification to be used with Humidifier.
Bug reports and pull requests are welcome on GitHub at https://github.com/kddnewton/humidifier.
The gem is available as open source under the terms of the MIT License.
Author: kddnewton
Source code: https://github.com/kddnewton/humidifier
License: MIT license
Serverless M (or Serverless Modular) is a plugin for the serverless framework. This plugin helps you in managing multiple serverless projects with a single serverless.yml file. It gives you super-charged CLI options that you can use to create new features, build them in a single file and deploy them all in parallel.
Currently this plugin is tested for the below stack only
Make sure you have the serverless CLI installed
# Install serverless globally
$ npm install serverless -g
To start the serverless modular project locally you can either start with es5 or es6 templates or add it as a plugin
# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es6 --path myModularService
# Step 2. Change directory
$ cd myModularService
# Step 3. Create a package.json file
$ npm init
# Step 4. Install dependencies
$ npm i serverless-modular serverless-webpack webpack --save-dev
# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es5 --path myModularService
# Step 2. Change directory
$ cd myModularService
# Step 3. Create a package.json file
$ npm init
# Step 4. Install dependencies
$ npm i serverless-modular --save-dev
If you don't want to use the templates above, you can just add the plugin to your existing project:
plugins:
- serverless-modular
Now you are all set to start building your serverless modular functions.
The serverless CLI can be accessed by
# Serverless Modular CLI
$ serverless modular
# shorthand
$ sls m
Serverless Modular CLI is based on the following main commands:
sls m init
sls m feature
sls m function
sls m build
sls m deploy
sls m init
The serverless init command helps in creating a basic .gitignore that is useful for serverless modular.
The basic .gitignore for serverless modular looks like this:
#node_modules
node_modules
#sm main functions
sm.functions.yml
#serverless file generated by build
src/**/serverless.yml
#main serverless directories generated for sls deploy
.serverless
#feature serverless directories generated sls deploy
src/**/.serverless
#serverless logs file generated for main sls deploy
.sm.log
#serverless logs file generated for feature sls deploy
src/**/.sm.log
#Webpack config copied in each feature
src/**/webpack.config.js
The feature command helps in building new features for your project
This command comes with three options
--name: Specify the name you want for your feature
--remove: set value to true if you want to remove the feature
--basePath: Specify the basePath you want for your feature. This base path should be unique across all features; it helps in running offline with the offline plugin and for API Gateway.
options | shortcut | required | values | default value |
---|---|---|---|---|
--name | -n | ✅ | string | N/A |
--remove | -r | ❎ | true, false | false |
--basePath | -p | ❎ | string | same as name |
Creating a basic feature
# Creating a jedi feature
$ sls m feature -n jedi
Creating a feature with different base path
# A feature with different base path
$ sls m feature -n jedi -p tatooine
Deleting a feature
# Anakin is going to delete the jedi feature
$ sls m feature -n jedi -r true
The function command helps in adding new function to a feature
This command comes with four options
--name: Specify the name you want for your function
--feature: Specify the name of the existing feature
--path: Specify the path for the HTTP endpoint; helps in running offline with the offline plugin and for API Gateway
--method: Specify the HTTP method; helps in running offline with the offline plugin and for API Gateway
options | shortcut | required | values | default value |
---|---|---|---|---|
--name | -n | ✅ | string | N/A |
--feature | -f | ✅ | string | N/A |
--path | -p | ❎ | string | same as name |
--method | -m | ❎ | string | 'GET' |
Creating a basic function
# Creating a cloak function for jedi feature
$ sls m function -n cloak -f jedi
Creating a basic function with different path and method
# Creating a cloak function for jedi feature with custom path and HTTP method
$ sls m function -n cloak -f jedi -p powers -m POST
The build command helps in building the project for local or global scope
This command comes with two options
--scope: Specify the scope of the build, use this with "--feature" tag
--feature: Specify the name of the existing feature you want to build
options | shortcut | required | values | default value |
---|---|---|---|---|
--scope | -s | ❎ | string | local |
--feature | -f | ❎ | string | N/A |
Saving build Config in serverless.yml
You can also save config in serverless.yml file
custom:
  smConfig:
    build:
      scope: local
all feature build (local scope)
# Building all local features
$ sls m build
Single feature build (local scope)
# Building a single feature
$ sls m build -f jedi -s local
All features build global scope
# Building all features with global scope
$ sls m build -s global
The deploy command helps in deploying serverless projects to AWS (it uses the sls deploy command).
This command comes with four options
--sm-parallel: Specify if you want to deploy in parallel (will only run in parallel when doing multiple deployments)
--sm-scope: Specify if you want to deploy local features or global
--sm-features: Specify the local features you want to deploy (comma separated if multiple)
options | shortcut | required | values | default value |
---|---|---|---|---|
--sm-parallel | ❎ | ❎ | true, false | true |
--sm-scope | ❎ | ❎ | local, global | local |
--sm-features | ❎ | ❎ | string | N/A |
--sm-ignore-build | ❎ | ❎ | string | false |
Saving deploy Config in serverless.yml
You can also save config in serverless.yml file
custom:
  smConfig:
    deploy:
      scope: local
      parallel: true
      ignoreBuild: true
Deploy all features locally
# deploy all local features
$ sls m deploy
Deploy all features globally
# deploy all global features
$ sls m deploy --sm-scope global
Deploy single feature
# deploy a single feature
$ sls m deploy --sm-features jedi
Deploy Multiple features
# deploy multiple features
$ sls m deploy --sm-features jedi,sith,dark_side
Deploy Multiple features in sequence
# deploy multiple features in sequence
$ sls m deploy --sm-features jedi,sith,dark_side --sm-parallel false
Author: aa2kb
Source Code: https://github.com/aa2kb/serverless-modular
License: MIT license
Serverless Framework: Deploy on Scaleway Functions
The Scaleway functions plugin for Serverless Framework allows users to deploy their functions and containers to Scaleway Functions with a simple serverless deploy.
Serverless Framework handles everything from creating namespaces to function/code deployment by calling API endpoints under the hood.
Install the Serverless CLI globally (npm install serverless -g). Let's work in ~/my-srvless-projects:
# mkdir ~/my-srvless-projects
# cd ~/my-srvless-projects
The easiest way to create a project is to use one of our templates. The list of templates is here
Let's use python3
serverless create --template-url https://github.com/scaleway/serverless-scaleway-functions/tree/master/examples/python3 --path myService
Once it's done, we can install mandatory node packages used by serverless
cd mypython3functions
npm i
Note: these packages are only used by serverless, they are not shipped with your functions.
Your functions are defined in the serverless.yml file created:
service: scaleway-python3
configValidationMode: off
useDotenv: true
provider:
  name: scaleway
  runtime: python310
  # Global Environment variables - used in every function
  env:
    test: test
  # Storing credentials in this file is strongly not recommended for security reasons, please refer to README.md about best practices
  scwToken: <scw-token>
  scwProject: <scw-project-id>
  # region in which the deployment will happen (default: fr-par)
  scwRegion: <scw-region>
plugins:
  - serverless-scaleway-functions
package:
  patterns:
    - '!node_modules/**'
    - '!.gitignore'
    - '!.git/**'
functions:
  first:
    handler: handler.py
    # Local environment variables - used only in given function
    env:
      local: local
Note: provider.name and plugins MUST NOT be changed; they enable us to use the scaleway provider.
This file contains the configuration of one namespace containing one or more functions (in this example, only one) of the same runtime (here python3).
The different parameters are:
- service: your namespace name
- useDotenv: load environment variables from .env files (default: false), read Security and secret management
- configValidationMode: configuration validation: 'error' (fatal error), 'warn' (logged to the output) or 'off' (default: warn)
- provider.runtime: the runtime of your functions (check the supported runtimes above)
- provider.env: environment variables attached to your namespace are injected into all your namespace functions
- provider.secret: secret environment variables attached to your namespace are injected into all your namespace functions, see this example project
- scwToken: Scaleway token you got in prerequisites
- scwProject: Scaleway org id you got in prerequisites
- scwRegion: Scaleway region in which the deployment will take place (default: fr-par)
- package.patterns: usually you don't need to configure it; lets you include/exclude directories to/from the deployment
- functions: configuration of your functions. It's a yml dictionary, with the key being the function name
  - handler (Required): file or function which will be executed. See the next section for runtime-specific handlers
  - env (Optional): environment variables specific to the current function
  - secret (Optional): secret environment variables specific to the current function, see this example project
  - minScale (Optional): how many function instances we keep running (default: 0)
  - maxScale (Optional): maximum number of instances this function can scale to (default: 20)
  - memoryLimit: RAM allocated to the function instances. See the introduction for the list of supported values
  - timeout: the maximum duration in seconds that the request will wait to be served before it times out (default: 300 seconds)
  - runtime (Optional): runtime of the function, if you need to deploy multiple functions with different runtimes in your Serverless Project. If absent, provider.runtime will be used to deploy the function, see this example project
  - events (Optional): list of events to trigger your functions (e.g. trigger a function based on a schedule with CRONJobs). See the events section below
  - custom_domains (Optional): list of custom domains, refer to the Custom Domain Documentation

Your configuration file may contain sensitive data; your project ID and your token must not be shared and must not be committed in VCS.
To keep your information safe and be able to share or commit your serverless.yml file, you should remove your credentials from the file. Then you can use a .env file and keep it secret.
To use a .env file, you can modify your serverless.yml file as follows:
# This will allow the plugin to read your .env file
useDotenv: true

provider:
  name: scaleway
  runtime: node16
  scwToken: ${env:SCW_SECRET_KEY}
  scwProject: ${env:SCW_DEFAULT_PROJECT_ID}
  scwRegion: ${env:SCW_REGION}
And then create a .env file next to your serverless.yml file, containing the following values:
SCW_SECRET_KEY=XXX
SCW_DEFAULT_PROJECT_ID=XXX
SCW_REGION=fr-par
You can use this pattern to hide your secrets (for example a connection string to a database or an S3 bucket).
Based on the chosen runtime, the handler variable of the function might vary.
Node has two module systems: CommonJS modules and ECMAScript (ES) modules. By default, Node treats your code files as CommonJS modules; however, ES modules have also been available since the release of the node16 runtime on Scaleway Serverless Functions. ES modules give you a more modern way to re-use your code.
According to the official documentation, to use ES modules you can specify the module type in package.json, as in the following example:
...
"type": "module",
...
This then enables you to write your code for ES modules:
export {handle};
function handle (event, context, cb) {
return {
body: process.version,
statusCode: 200,
};
};
The use of ES modules is encouraged, since they are more efficient and make setup and debugging much easier.
Note that using "type": "module" or "type": "commonjs" in your package.json will enable/disable some features in the Node runtime. For a comprehensive list of differences, please refer to the official documentation; the following is a summary only:
- commonjs is used as the default value
- commonjs allows you to use require/module.exports (synchronous code loading, it basically copies all file contents)
- module allows you to use import/export ES6 instructions (asynchronous loading, more optimized as it imports only the pieces of code you need)
Path to your handler file (from serverless.yml): omit ./ and ../, and add the exported function to use as a handler:
- src
  - handlers
    - firstHandler.js  => module.exports.myFirstHandler = ...
    - secondHandler.js => module.exports.mySecondHandler = ...
- serverless.yml
In serverless.yml:
provider:
  # ...
  runtime: node16
functions:
  first:
    handler: src/handlers/firstHandler.myFirstHandler
  second:
    handler: src/handlers/secondHandler.mySecondHandler
Similar to node, path to handler file src/testing/handler.py:
- src
  - handlers
    - firstHandler.py  => def my_first_handler
    - secondHandler.py => def my_second_handler
- serverless.yml
In serverless.yml:
provider:
  # ...
  runtime: python310 # or python37, python38, python39
functions:
  first:
    handler: src/handlers/firstHandler.my_first_handler
  second:
    handler: src/handlers/secondHandler.my_second_handler
Path to your handler's package, for example if I have the following structure:
- src
  - testing
    - handler.go -> package main in src/testing subdirectory
  - second
    - handler.go -> package main in src/second subdirectory
- serverless.yml
- handler.go -> package main at the root of project
Your serverless.yml functions should look something like this:
provider:
  # ...
  runtime: go118
functions:
  main:
    handler: "."
  testing:
    handler: src/testing
  second:
    handler: src/second
With events, you may link your functions with specific triggers, which might include CRON Schedule (Time based), MQTT Queues (publishing on a topic will trigger the function), or S3 Object update (uploading an object will trigger the function).
Note that we do not include HTTP triggers in our event types, as an HTTP endpoint is created for every function. Triggers are just a new way to trigger your Function, but you will always be able to execute your code via HTTP.
Here is a list of supported triggers on Scaleway Serverless, and the configuration parameters required to deploy them:
- rate: CRON Schedule (UNIX Format) on which your function will be executed
- input: key-value mapping to define arguments that will be passed into your function's event object during execution
To link a Trigger to your function, you may define a key events in your function:
functions:
  handler: myHandler.handle
  events:
    # "events" is a list of triggers, the first key being the type of trigger.
    - schedule:
        # CRON Job Schedule (UNIX Format)
        rate: '1 * * * *'
        # Input variables are passed in your function's event during execution
        input:
          key: value
          key2: value2
You may link Events to your Containers too (see the section Managing containers below for more information on how to deploy containers):
custom:
  containers:
    mycontainer:
      directory: my-directory
      # Events key
      events:
        - schedule:
            rate: '1 * * * *'
            input:
              key: value
              key2: value2
You may refer to the following examples:
Custom domains allow users to use their own domains.
For domain configuration, please refer to the Scaleway documentation.
Example of integration with the Serverless Framework:
functions:
  first:
    handler: handler.handle
    # Local environment variables - used only in given function
    env:
      local: local
    custom_domains:
      - func1.scaleway.com
      - func2.scaleway.com
Note: As your domain must have a record pointing to your function hostname, you should deploy your function once to read its hostname. Custom Domains configuration will be available after the first deploy.
Note: Serverless Framework will consider the configuration file as the source of truth.
If you create a domain with other tools (Scaleway's Console, CLI or API), you must add the created domain to your serverless configuration file. Otherwise it will be deleted, as Serverless Framework gives priority to its own configuration.
Requirements: You need to have Docker installed to be able to build and push your image to your Scaleway registry.
You must define your containers inside the custom.containers field in your serverless.yml manifest. Each container must specify the relative path of its application directory (containing the Dockerfile, and all files related to the application to deploy):
custom:
  containers:
    mycontainer:
      directory: my-container-directory
      # port: 8080
      # Environment only available in this container
      env:
        MY_VARIABLE: "my-value"
Here is an example of the files you should have; the directory containing your Dockerfile and scripts is my-container-directory.
.
├── my-container-directory
│ ├── Dockerfile
│ ├── requirements.txt
│ ├── server.py
│ └── (...)
├── node_modules
│ ├── serverless-scaleway-functions
│ └── (...)
├── package-lock.json
├── package.json
└── serverless.yml
Scaleway's platform will automatically inject a PORT environment variable on which your server should be listening for incoming traffic. By default, this PORT is 8080. You may change the port in your serverless.yml.
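For example, to listen on a different port (the value below is illustrative):
custom:
  containers:
    mycontainer:
      directory: my-container-directory
      port: 8081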
You may use the container example to get started.
The serverless logs command lets you watch the logs of a specific function or container.
Pass the function or container name you want to fetch the logs for with --function:
serverless logs --function <function_or_container_name>
The serverless info command gives you information about your current deployment state in JSON format.
(You MUST use this library if you plan to develop with Golang.)
This plugin is mainly developed and maintained by the Scaleway Serverless Team, but you are free to open issues or discuss with us on our Community Slack Channels #serverless-containers and #serverless-functions.
Author: Scaleway
Source Code: https://github.com/scaleway/serverless-scaleway-functions
License: MIT license