In this article, we talk about how to create an Alexa Skill deployment pipeline, with step-by-step example code.
We already discussed what CodePipeline and CodeBuild are in the previous article, so let's not dwell on that and start creating our pipeline.
As we saw in the previous post, we developed an entire pipeline for an Alexa Skill using CircleCI. Now we are going to build the same pipeline, but with GitHub Actions, the new continuous integration tool provided by GitHub, in order to understand how it works and see the differences from the CI/CD platform we used previously.
We are also going to use ASK CLI v2, along with the Alexa Skill file structure introduced by this second version.
Here are the technologies used in this project:
The Alexa Skills Kit Command Line Interface (ASK CLI) is a tool for managing our Alexa Skills and their related resources, such as AWS Lambda functions. With the ASK CLI, we have access to the Skill Management API, which allows us to manage Alexa Skills through the command line.
If you want to create a skill with ASK CLI v2, follow the steps described in the official Amazon Alexa documentation.
We are going to use this tool to perform some steps in our pipeline.
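For instance, pipeline steps can invoke ASK CLI commands like the following (a rough sketch; real steps would add flags, profiles, and credentials):
$ ask new # scaffold a new skill project from a template
$ ask deploy # deploy the skill manifest, interaction model, and code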
Let’s DevOps!
#github #alexa #alexa skills #continuous integration #alexa app development #alexa skills development #alexa skill #alexa skill development #alexa skills developer #github actions
Install via pip:
$ pip install pytumblr
Install from source:
$ git clone https://github.com/tumblr/pytumblr.git
$ cd pytumblr
$ python setup.py install
A pytumblr.TumblrRestClient is the object you'll make all of your calls to the Tumblr API through. Creating one is this easy:
client = pytumblr.TumblrRestClient(
'<consumer_key>',
'<consumer_secret>',
'<oauth_token>',
'<oauth_secret>',
)
client.info() # Grabs the current user information
Two easy ways to get your credentials are the interactive_console.py tool (if you already have a consumer key & secret) and the Tumblr API console at https://api.tumblr.com/console.
client.info() # get information about the authenticating user
client.dashboard() # get the dashboard for the authenticating user
client.likes() # get the likes for the authenticating user
client.following() # get the blogs followed by the authenticating user
client.follow('codingjester.tumblr.com') # follow a blog
client.unfollow('codingjester.tumblr.com') # unfollow a blog
client.like(id, reblogkey) # like a post
client.unlike(id, reblogkey) # unlike a post
client.blog_info(blogName) # get information about a blog
client.posts(blogName, **params) # get posts for a blog
client.avatar(blogName) # get the avatar for a blog
client.blog_likes(blogName) # get the likes on a blog
client.followers(blogName) # get the followers of a blog
client.blog_following(blogName) # get the publicly exposed blogs that [blogName] follows
client.queue(blogName) # get the queue for a given blog
client.submission(blogName) # get the submissions for a given blog
Creating posts
PyTumblr lets you create all of the post types that Tumblr supports. All post types accept a few default options:
* state - a string, the state of the post: published, draft, queue, or private
* tags - a list of tags you want applied to the post
* tweet - a string, the customized tweet that you want
* date - a string, the GMT date and time of the post
* format - a string, the format of the post: html or markdown
* slug - a string, a short text summary used at the end of the post URL
We'll show examples of these default options throughout while showcasing all the specific post types.
Creating a photo post
Creating a photo post supports a bunch of different options plus the default options described above:
* caption - a string, the user-supplied caption
* link - a string, the "click-through" URL for the photo
* source - a string, the URL of the photo you want to use (use this or the data parameter)
* data - a list or string, a list of filepaths or a single filepath for multipart file upload
#Creates a photo post using a source URL
client.create_photo(blogName, state="published", tags=["testing", "ok"],
source="https://68.media.tumblr.com/b965fbb2e501610a29d80ffb6fb3e1ad/tumblr_n55vdeTse11rn1906o1_500.jpg")
#Creates a photo post using a local filepath
client.create_photo(blogName, state="queue", tags=["testing", "ok"],
tweet="Woah this is an incredible sweet post [URL]",
data="/Users/johnb/path/to/my/image.jpg")
#Creates a photoset post using several local filepaths
client.create_photo(blogName, state="draft", tags=["jb is cool"], format="markdown",
data=["/Users/johnb/path/to/my/image.jpg", "/Users/johnb/Pictures/kittens.jpg"],
caption="## Mega sweet kittens")
Creating a text post
Creating a text post supports the same options as default and two other parameters:
* title - a string, the optional title for the post. Supports markdown or html
* body - a string, the body of the post. Supports markdown or html
#Creating a text post
client.create_text(blogName, state="published", slug="testing-text-posts", title="Testing", body="testing1 2 3 4")
Creating a quote post
Creating a quote post supports the same options as default and two other parameters:
* quote - a string, the full text of the quote. Supports markdown or html
* source - a string, the cited source. HTML supported
#Creating a quote post
client.create_quote(blogName, state="queue", quote="I am the Walrus", source="Ringo")
Creating a link post
Creating a link post supports the same options as default plus three other parameters, all strings, as used below: title, url, and description.
#Create a link post
client.create_link(blogName, title="I like to search things, you should too.", url="https://duckduckgo.com",
description="Search is pretty cool when a duck does it.")
Creating a chat post
Creating a chat post supports the same options as default and two other parameters:
* title - a string, the title of the chat post
* conversation - a string, the text of the conversation/chat, with dialog labels (no html)
#Create a chat post
chat = """John: Testing can be fun!
Renee: Testing is tedious and so are you.
John: Aw.
"""
client.create_chat(blogName, title="Renee just doesn't understand.", conversation=chat, tags=["renee", "testing"])
Creating an audio post
Creating an audio post allows for all default options and has three other parameters. The only thing to keep in mind with audio posts is that you must use either the external_url parameter or data; you cannot use both at the same time.
* caption - a string, the caption for your post
* external_url - a string, the URL of the site that hosts the audio file
* data - a string, the filepath of the audio file you want to upload to Tumblr
#Creating an audio post from a local file
client.create_audio(blogName, caption="Rock out.", data="/Users/johnb/Music/my/new/sweet/album.mp3")
#let's use soundcloud!
client.create_audio(blogName, caption="Mega rock out.", external_url="https://soundcloud.com/skrillex/sets/recess")
Creating a video post
Creating a video post allows for all default options and has three other options. Like the other post types, it has some restrictions: you cannot use the embed and data parameters at the same time.
* caption - a string, the caption for your post
* embed - a string, the HTML embed code for the video
* data - a string, the path of the file you want to upload
#Creating a video post from a YouTube embed
client.create_video(blogName, caption="Jon Snow. Mega ridiculous sword.",
embed="http://www.youtube.com/watch?v=40pUYLacrj4")
#Creating a video post from local file
client.create_video(blogName, caption="testing", data="/Users/johnb/testing/ok/blah.mov")
Editing a post
Updating a post requires knowing what type of post you're updating. You can supply any of the options given above for that post type.
client.edit_post(blogName, id=post_id, type="text", title="Updated")
client.edit_post(blogName, id=post_id, type="photo", data="/Users/johnb/mega/awesome.jpg")
Reblogging a Post
Reblogging a post just requires knowing the post id and the reblog key, which is supplied in the JSON of any post object.
client.reblog(blogName, id=125356, reblog_key="reblog_key")
Deleting a post
Deleting a post just requires that you own the post and have the post id.
client.delete_post(blogName, 123456) # Deletes your post :(
A note on tags: when passing tags as params, pass them as a list (not a comma-separated string):
client.create_text(blogName, tags=['hello', 'world'], ...)
Getting notes for a post
In order to get the notes for a post, you need to have the post id and the blog that it is on.
data = client.notes(blogName, id='123456')
The results include a timestamp you can use to make future calls.
data = client.notes(blogName, id='123456', before_timestamp=data["_links"]["next"]["query_params"]["before_timestamp"])
Getting posts with a given tag
client.tagged(tag, **params) # get posts with a given tag
This client comes with a nice interactive console to walk you through the OAuth process and grab your tokens (storing them for future use).
You'll need pyyaml installed to run it, but then it's just:
$ python interactive_console.py
and away you go! Tokens are stored in ~/.tumblr and are also shared by other Tumblr API clients like the Ruby client.
The tests (and coverage reports) are run with nose, like this:
$ python setup.py test
Author: tumblr
Source Code: https://github.com/tumblr/pytumblr
License: Apache-2.0 license
At this point, we have our Alexa Skill properly dockerized. As we are not yet going to package all the software components together (Alexa Skill + MongoDB), in this fourth step we will set up all the Kubernetes objects for our Alexa Skill using MongoDB Atlas.
Here are the technologies used in this project:
#docker #kubernetes #nginx #alexa #alexa skills #alexa skills development #alexa skill #alexa skill development #alexa skills developer
It is always good practice in the world of programming to develop things that are reusable, so that anyone can integrate what has been developed and quickly start using it.
This is the philosophy behind a GitHub Action. Small individual and reusable tasks that we can combine to create jobs and customize our GitHub Actions workflows.
Here are the technologies used in this project:
The Alexa Skills Kit Command Line Interface (ASK CLI) is a tool for managing our Alexa Skills and their related resources, such as AWS Lambda functions. With the ASK CLI, we have access to the Skill Management API, which allows us to manage Alexa Skills through the command line.
GitHub Actions helps us to automate tasks within the software development lifecycle. GitHub Actions is event-driven, which means that we can run a series of commands after a specific event has occurred. For example, whenever someone creates a pull request for a repository, we can automatically run a pipeline on GitHub Actions.
An event automatically triggers the workflow, which contains one or more jobs. The jobs then use steps to control the order in which actions are executed. These actions are the commands that automate certain processes.
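For example, a minimal workflow file might look like the sketch below (the file name, trigger, and step contents are illustrative assumptions, not taken from a real project):
# .github/workflows/ci.yml
name: ci
on: push # the event that triggers the workflow
jobs:
  build: # a single job
    runs-on: ubuntu-latest
    steps: # steps run in order
      - uses: actions/checkout@v2 # a reusable action
      - run: echo "Hello from GitHub Actions" # a command step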
#github #alexa #alexa skills #continuous integration #alexa app development #alexa skills development #alexa skill #alexa skill development #alexa skills developer #github actions
Humidifier is a Ruby tool for managing AWS CloudFormation stacks. You can use it to build and manage stacks programmatically, or as a command line tool to manage stacks through configuration files.
Add this line to your application's Gemfile:
gem 'humidifier'
And then execute:
$ bundle
Or install it yourself as:
$ gem install humidifier
Stacks are represented by the Humidifier::Stack class. You can set any of the top-level JSON attributes (such as name and description) through the initializer.
Resources are represented by an exact mapping from AWS resource names to Humidifier resource names (e.g. AWS::EC2::Instance becomes Humidifier::EC2::Instance). Resources have accessors for each JSON attribute. Each attribute can also be set through the initialize, update, and update_attribute methods.
The below example will create a stack with two resources, a load balancer and an auto scaling group. It then deploys the new stack and pauses execution until the stack has finished being created.
stack = Humidifier::Stack.new(name: 'Example-Stack')
stack.add(
'LoadBalancer',
Humidifier::ElasticLoadBalancing::LoadBalancer.new(
scheme: 'internal',
listeners: [
{
load_balancer_port: 80,
protocol: 'http',
instance_port: 80,
instance_protocol: 'http'
}
]
)
)
stack.add(
'AutoScalingGroup',
Humidifier::AutoScaling::AutoScalingGroup.new(
min_size: '1',
max_size: '20',
availability_zones: ['us-east-1a'],
load_balancer_names: [Humidifier.ref('LoadBalancer')]
)
)
stack.deploy_and_wait
Once stacks have the appropriate resources, you can query AWS to handle all stack CRUD operations. The operations themselves are intuitively named (i.e. #create, #update, #delete). There are also convenience methods for validating a stack body (#valid?), checking the existence of a stack (#exists?), and creating or updating based on existence (#deploy).
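A minimal sketch of these operations, reusing the stack object from the example above:
stack.valid?  # ask CloudFormation to validate the template body
stack.exists? # check whether the stack already exists in AWS
stack.deploy  # create the stack, or update it if it already exists
stack.delete  # tear the stack down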
There are additionally four methods on Humidifier::Stack that block until execution in AWS has finished, each named after its non-blocking counterpart: #create_and_wait, #update_and_wait, #delete_and_wait, and #deploy_and_wait.
You can use CFN intrinsic functions and references using Humidifier.fn.[name] and Humidifier.ref. They will build appropriate structures that know how to be dumped to CFN syntax.
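For instance (both helpers appear elsewhere in this README; the logical names are illustrative):
Humidifier.ref('LoadBalancer')           # dumps as a CFN Ref to the LoadBalancer resource
Humidifier.fn.import_value('SubnetName') # dumps as a CFN Fn::ImportValue call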
Instead of immediately pushing your changes to CloudFormation, Humidifier also supports change sets. Change sets are a powerful feature that allow you to see the changes that will be made before you make them. To read more about change sets, see the announcement article. To use them in Humidifier, Humidifier::Stack has the #create_change_set and #deploy_change_set methods. The #create_change_set method will create a change set on the stack. The #deploy_change_set method will create a change set if the stack currently exists, and otherwise will create the stack.
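A minimal sketch, again reusing the stack from above:
stack.create_change_set # stage a change set on the existing stack for review
stack.deploy_change_set # create a change set if the stack exists, otherwise create the stack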
To see the template body, you can check the #to_cf method on stacks, resources, fns, and refs. All of them will output a hash of what will be uploaded (except the stack, which will output a string representation).
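For example:
puts stack.to_cf                     # the full template as a string
Humidifier.ref('LoadBalancer').to_cf # a hash representing the reference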
Humidifier itself contains a registry of all possible resources that it supports. You can access it with Humidifier::registry, which is a hash of AWS resource names pointing to the corresponding classes.
Resources have an ::aws_name method to see how AWS references them. They also contain a ::props method that returns a hash mapping each name that Humidifier uses for a prop to the appropriate prop object.
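A quick sketch of poking around the registry (the resource chosen is illustrative):
Humidifier::registry['AWS::EC2::Instance'] # => Humidifier::EC2::Instance
Humidifier::EC2::Instance.aws_name         # => 'AWS::EC2::Instance'
Humidifier::EC2::Instance.props            # hash of humidifier prop names to prop objects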
When templates are especially large (larger than 51,200 bytes), they cannot be uploaded directly through the AWS SDK. You can configure Humidifier to seamlessly upload the templates to S3 and reference them using an S3 URL instead:
Humidifier.configure do |config|
config.s3_bucket = 'my.s3.bucket'
config.s3_prefix = 'my-prefix/' # optional
end
You can force a stack to upload its template to S3 regardless of the size of the template. This is a useful option if you're going to be deploying multiple copies of a template or if you want a backup. You can set this option on a per-stack basis:
stack.deploy(force_upload: true)
or globally, by setting the configuration option:
Humidifier.configure do |config|
config.force_upload = true
end
Humidifier can also be used as a CLI for managing resources through configuration files. For a step-by-step guide, read on, but if you'd like to see a working example, check out the example directory.
To get started, build a ruby script (for example humidifier) that executes the Humidifier::CLI class, like so:
#!/usr/bin/env ruby
require 'humidifier'
Humidifier.configure do |config|
# optional, defaults to the current working directory, so that all of the
# directories from the location that you run the CLI are assumed to contain
# resource specifications
config.stack_path = 'stacks'
# optional, a default prefix to use before deploying to AWS
config.stack_prefix = 'humidifier-'
# specifies that `users.yml` files contain specifications for `AWS::IAM::User`
# resources
config.map :users, to: 'IAM::User'
end
Humidifier::CLI.start(ARGV)
Inside of the stacks directory configured above, create a subdirectory for each CloudFormation stack that you want to deploy. With the above configuration, we can create YAML files in the form of users.yml for each stack, which will specify IAM users to create. The file format looks like the below:
EngUser:
path: /humidifier/
user_name: EngUser
groups:
- Engineering
- Testing
- Deployment
AdminUser:
path: /humidifier/
user_name: AdminUser
groups:
- Management
- Administration
The top-level keys are the logical resource names that will be displayed in the CloudFormation screen. They point to a map of key/value pairs that will be passed on to humidifier. Any humidifier (and therefore any CloudFormation) attribute may be specified. For more information on CloudFormation templates and which attributes may be specified, see both the humidifier docs and the CloudFormation docs.
Oftentimes, specifying these attributes can become repetitive, e.g., each user should automatically receive the same "path" attribute. Other times, you may want custom logic to execute depending on which AWS environment you're running in. Finally, you may want to reference resources in the same or other stacks.
Humidifier's solution for this is to allow customized "mapper" classes to take the user-provided attributes and transform them into the attributes that CloudFormation expects. Consider the following example for mapping a user:
class UserMapper < Humidifier::Config::Mapper
GROUPS = {
'eng' => %w[Engineering Testing Deployment],
'admin' => %w[Management Administration]
}
defaults do |logical_name|
{ path: '/humidifier/', user_name: logical_name }
end
attribute :group do |group|
groups = GROUPS[group]
groups.any? ? { groups: GROUPS[group] } : {}
end
end
Humidifier.configure do |config|
config.map :users, to: 'IAM::User', using: UserMapper
end
This means that by default, all entries in the users.yml files will get a /humidifier/ path, the user_name attribute will be set based on the logical name provided for the resource, and you can additionally specify a group attribute, even though it is not native to CloudFormation. The group attribute will be mapped to the groups attribute that CloudFormation expects.
With this new mapper in place, we can simplify our YAML file to:
EngUser:
group: eng
AdminUser:
group: admin
Now that you've configured your CLI, your resources, and your mappers, you can use the CLI to display, validate, and deploy your infrastructure to CloudFormation. Run your script without any arguments to get the help message and explanations for each command.
Each command has an --aws-profile (or -p) option for specifying which profile to authenticate against when querying AWS. You should ensure that this profile has the correct permissions for creating whatever resources are going to be part of your stack. You can also rely on the AWS_* environment variables, or the EC2 instance profile if you're deploying from an instance. For more information, see the AWS docs under the "Configuration" section.
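For example (the stack and profile names here are hypothetical):
$ humidifier deploy users --aws-profile production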
Below is the list of commands and some of their options.
change [?stack]
Creates a change set for either the specified stack or all stacks in the repo. The change set represents the changes between what is currently deployed versus the resources represented by the configuration.
deploy [?stack] [*parameters]
Creates or updates (depending on if the stack already exists) one or all stacks in the repo.
The deploy command also allows a --prefix command line argument that will override the default prefix (if one is configured) for the stack being deployed. This is especially useful when you're deploying multiple copies of the same stack (for instance, multiple autoscaling groups) that have different purposes or semantically represent newer versions of resources.
display [stack] [?pattern]
Displays the specified stack in JSON format on the command line. If you optionally pass a pattern argument, it will filter the resources down to just ones whose names match the given pattern.
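For example, to show only the resources of the users stack whose names match "Eng" (hypothetical names):
$ humidifier display users Eng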
stacks
Displays the names of all of the stacks that humidifier is managing.
upgrade
Downloads the latest CloudFormation resource specification. Periodically AWS will update the file that humidifier is based on, in which case the attributes of the changed resources could change. This gem usually stays relatively in sync, but if you need the latest specs and this gem has not yet released a new version containing them, you can run this command to download them onto your system.
upload [?stack]
Upload one or all stacks in the repo to S3 for reference later. Note that this must be combined with the humidifier s3_bucket configuration option.
validate [?stack]
Validate that one or all stacks in the repo are properly configured and using values that CloudFormation understands.
version
Output the version of Humidifier as well as the version of the CloudFormation resource specification that you are using.
CloudFormation template parameters can be specified by having a special parameters.yml file in your stack directory. This file should contain a YAML-encoded object whose keys are the names of the parameters and whose values are the parameter configuration (using the same underscore paradigm as humidifier resources for specifying configuration).
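A minimal sketch of such a file (the parameter name and attributes are illustrative):
Param1:
  type: String
  default: Foo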
You can pass values to the CLI deploy command after the stack name on the command line as in:
humidifier deploy foobar Param1=Foo Param2=Bar
Those parameters will get passed in as values when the stack is deployed.
A couple of convenient shortcuts are built into humidifier so that writing templates and mappers can both be more concise.
There are a lot of properties in the AWS CloudFormation resource specification that are simply pointers to other entities within the AWS ecosystem. For example, an AWS::EC2::VPCGatewayAttachment entity has a VpcId property that represents the ID of the associated AWS::EC2::VPC.
Because this pattern is so common, humidifier detects all properties ending in Id and allows you to specify them without the suffix. If you choose to use this format, humidifier will automatically turn that value into a CloudFormation resource reference.
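For example, a stack file could contain the following (a hypothetical sketch building on the VPCGatewayAttachment example above):
GatewayAttachment:
  vpc: ProductionVPC              # expanded to the VpcId property as a resource reference
  internet_gateway: ProductionIGW # likewise expanded to InternetGatewayId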
A lot of the time, mappers that you create will not be overly complicated, especially if you're using automatic id properties. So, the config.map method optionally takes a block, allowing you to specify the mapper inline. This is recommended for mappers that aren't complicated enough to warrant their own class (for instance, for testing purposes). An example of this using the UserMapper from above is below:
Humidifier.configure do |config|
config.map :users, to: 'IAM::User' do
GROUPS = {
'eng' => %w[Engineering Testing Deployment],
'admin' => %w[Management Administration]
}
defaults do |logical_name|
{ path: '/humidifier/', user_name: logical_name }
end
attribute :group do |group|
groups = GROUPS[group]
groups.any? ? { groups: GROUPS[group] } : {}
end
end
end
AWS allows cross-stack references through the intrinsic Fn::ImportValue function. You can take advantage of this with humidifier by using the export: true option on resources in your stacks. For instance, if in one stack you have a subnet that you need to reference in another, you could write (stacks/vpc/subnets.yml):
ProductionPrivateSubnet2a:
vpc: ProductionVPC
cidr_block: 10.0.0.0/19
availability_zone: us-west-2a
export: true
ProductionPrivateSubnet2b:
vpc: ProductionVPC
cidr_block: 10.0.64.0/19
availability_zone: us-west-2b
export: true
ProductionPrivateSubnet2c:
vpc: ProductionVPC
cidr_block: 10.0.128.0/19
availability_zone: us-west-2c
export: true
And then in another stack, you could reference those values (stacks/rds/db_subnets_groups.yml):
ProductionDBSubnetGroup:
db_subnet_group_description: Production DB private subnet group
subnets:
- ProductionPrivateSubnet2a
- ProductionPrivateSubnet2b
- ProductionPrivateSubnet2c
Within the configuration, you would specify to use the Fn::ImportValue function like so:
Humidifier.configure do |config|
config.stack_path = 'stacks'
config.map :subnets, to: 'EC2::Subnet'
config.map :db_subnet_groups, to: 'RDS::DBSubnetGroup' do
attribute :subnets do |subnet_names|
subnet_ids =
subnet_names.map do |subnet_name|
Humidifier.fn.import_value(subnet_name)
end
{ subnet_ids: subnet_ids }
end
end
end
If you specify export: true, it will by default export a reference to the resource listed in the stack. You can also choose to export a different attribute by specifying that attribute as the value of export. For example, if we were creating instance profiles and wanted to export the Arn so that it could be referenced by an instance later, we could write:
APIRoleInstanceProfile:
depends_on: APIRole
roles:
- APIRole
export: Arn
To get started, ensure you have ruby installed, version 2.4 or later. From there, install the bundler gem (gem install bundler) and then run bundle install in the root of the repository.
The default rake task runs the tests. Styling is governed by rubocop. The docs are generated with yard. To run all three of these, run:
$ bundle exec rake
$ bundle exec rubocop
$ bundle exec rake yard
The specs pulled from the CFN docs are saved to CloudFormationResourceSpecification.json. You can update them by running bundle exec rake specs. This script will pull down the latest resource specification to be used with Humidifier.
Bug reports and pull requests are welcome on GitHub at https://github.com/kddnewton/humidifier.
The gem is available as open source under the terms of the MIT License.
Author: kddnewton
Source code: https://github.com/kddnewton/humidifier
License: MIT license