In this tutorial, we will implement a custom skill for Amazon Alexa using Node.js, npm, and AWS Lambda. The skill itself is a basic Hello World example, but by the end you will be able to create a custom Alexa skill, implement its functionality in Node.js, and run it both from your local computer and from AWS.
This tutorial contains material from several different resources, which are listed in the Resources section.
To get started, install the ASK SDK:
npm install --save ask-sdk
First create the request handlers needed to handle the different types of incoming requests to your skill.
The following code example shows how to configure a handler to be invoked when the skill receives a LaunchRequest. The LaunchRequest event occurs when the skill is invoked without a specific intent.
Create a file called index.js and paste in the following code.
const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
  },
  handle(handlerInput) {
    const speechText = 'Welcome to the Alexa Skills Kit, you can say hello!';
    return handlerInput.responseBuilder
      .speak(speechText)
      .reprompt(speechText)
      .withSimpleCard('Hello World', speechText)
      .getResponse();
  }
};
The canHandle function returns true if the incoming request is a LaunchRequest. The handle function generates and returns a basic greeting response.
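For reference, the part of the request envelope that canHandle inspects looks roughly like this (heavily trimmed for illustration; a real envelope also carries version, session, and context fields):
{
  "request": {
    "type": "LaunchRequest",
    "requestId": "...",
    "locale": "en-US"
  }
}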
The following code example shows how to configure a handler to be invoked when the skill receives the HelloWorldIntent.
Paste the following code into your index.js file, after the previous handler.
const HelloWorldIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'HelloWorldIntent';
  },
  handle(handlerInput) {
    const speechText = 'Hello World!';
    return handlerInput.responseBuilder
      .speak(speechText)
      .withSimpleCard('Hello World', speechText)
      .getResponse();
  }
};
The canHandle function detects if the incoming request is an IntentRequest, and returns true if the intent name is HelloWorldIntent. The handle function generates and returns a basic “Hello world” response.
The following code example shows how to configure a handler to be invoked when the skill receives the built-in intent AMAZON.HelpIntent.
Paste the following code into your index.js file, after the previous handler.
const HelpIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'AMAZON.HelpIntent';
  },
  handle(handlerInput) {
    const speechText = 'You can say hello to me!';
    return handlerInput.responseBuilder
      .speak(speechText)
      .reprompt(speechText)
      .withSimpleCard('Hello World', speechText)
      .getResponse();
  }
};
Similar to the previous handler, this handler matches an IntentRequest with the expected intent name. Basic help instructions are returned.
The CancelAndStopIntentHandler is similar to the HelpIntent handler, as it is also triggered by built-in intents. The following example uses a single handler to respond to two different intents, AMAZON.CancelIntent and AMAZON.StopIntent.
Paste the following code into your index.js file, after the previous handler.
const CancelAndStopIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && (handlerInput.requestEnvelope.request.intent.name === 'AMAZON.CancelIntent'
        || handlerInput.requestEnvelope.request.intent.name === 'AMAZON.StopIntent');
  },
  handle(handlerInput) {
    const speechText = 'Goodbye!';
    return handlerInput.responseBuilder
      .speak(speechText)
      .withSimpleCard('Hello World', speechText)
      .getResponse();
  }
};
The response to both intents is the same, so having a single handler reduces repetitive code.
Although you cannot return a response with any speech, card, or directives after receiving a SessionEndedRequest, the SessionEndedRequestHandler is a good place to put your cleanup logic.
Paste the following code into your index.js file, after the previous handler.
const SessionEndedRequestHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'SessionEndedRequest';
  },
  handle(handlerInput) {
    // any cleanup logic goes here
    return handlerInput.responseBuilder.getResponse();
  }
};
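As an example of such cleanup logic, a SessionEndedRequest carries a reason field (for example USER_INITIATED or ERROR) that you may want to log before the session closes. A minimal sketch of what the handle function could look like:
handle(handlerInput) {
  // log why the session ended before doing any other cleanup
  console.log(`Session ended with reason: ${handlerInput.requestEnvelope.request.reason}`);
  return handlerInput.responseBuilder.getResponse();
}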
The ASK SDK v2 for Node.js brings improved support for error handling, making it easier for your skill to ensure a smooth user experience. The error handler is a good place for error-handling logic, such as responding to unhandled requests or API service timeouts. The following example adds a catch-all error handler to your skill to ensure it returns a meaningful message for all errors.
Paste the following code into your index.js file, after the previous handler.
const ErrorHandler = {
  canHandle() {
    return true;
  },
  handle(handlerInput, error) {
    console.log(`Error handled: ${error.message}`);
    return handlerInput.responseBuilder
      .speak('Sorry, I can\'t understand the command. Please say again.')
      .reprompt('Sorry, I can\'t understand the command. Please say again.')
      .getResponse();
  },
};
The Lambda handler is the entry point for your AWS Lambda function. The following code example creates a Lambda handler function to route all inbound requests to your skill. The Lambda handler function creates an SDK Skill instance configured with the request handlers that you just created.
Add the following code at the beginning of your index.js file, before the handlers you created earlier. (Remember, this is the ASK SDK v2!)
'use strict';
const Alexa = require('ask-sdk-core');
// use 'ask-sdk' if the standard SDK module is installed

////////////////////////////////
// Code for the handlers here //
////////////////////////////////

exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(
    LaunchRequestHandler,
    HelloWorldIntentHandler,
    HelpIntentHandler,
    CancelAndStopIntentHandler,
    SessionEndedRequestHandler)
  .addErrorHandlers(ErrorHandler) // register the catch-all error handler defined above
  .lambda();
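Because exports.handler ends up being a plain function of (event, context, callback), you can smoke-test it locally before deploying. A minimal sketch, assuming you have saved a sample LaunchRequest envelope as event.json (a hypothetical file name):
// test-local.js -- hypothetical local smoke test, not part of the deployed skill
const { handler } = require('./index');
const event = require('./event.json'); // sample LaunchRequest envelope

handler(event, {}, (err, response) => {
  if (err) return console.error(err);
  console.log(JSON.stringify(response, null, 2));
});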
Select the Invocation option from the sidebar and enter “greeter” for the Skill Invocation Name.
Next, add an intent called HelloWorldIntent to the interaction model. Click the Add button under the Intents section of the Interaction Model.
Leave “Create custom intent” selected, enter “HelloWorldIntent” for the intent name, and create the intent. On the intent detail page, add some sample utterances that users can say to invoke the intent. For this example, you can use these:
say hello
say hello world
hello
say hi
say hi world
hi
how are you
Since AMAZON.CancelIntent, AMAZON.HelpIntent, and AMAZON.StopIntent are built-in Alexa intents, you do not need to provide sample utterances for them.
The Developer Console also allows you to edit the entire skill model in JSON format. Select JSON Editor from the sidebar. For this sample, you can use the following JSON schema.
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "greeter",
      "intents": [
        {
          "name": "AMAZON.CancelIntent",
          "samples": []
        },
        {
          "name": "AMAZON.HelpIntent",
          "samples": []
        },
        {
          "name": "AMAZON.StopIntent",
          "samples": []
        },
        {
          "name": "HelloWorldIntent",
          "slots": [],
          "samples": [
            "how are you",
            "hi",
            "say hi world",
            "say hi",
            "hello",
            "say hello world",
            "say hello"
          ]
        }
      ],
      "types": []
    }
  }
}
Once you are done editing the interaction model, be sure to save and build the model.
We will use an npm package called alexa-skill-local for this part. In order to start your application you should also create an asl-config.json file in your root directory. The config file must have the following format (if you are not sure of the “stage”, in most cases it is “development”):
{
  "skillId": "your_skill_id_here",
  "stage": "stage_of_the_skill"
}
You can find your skill ID on the Alexa Console page; it looks like amzn1.ask.skill.6f2f04b5-abba-3f47-9fc9-0sbba79b1535. Use Node.js v8.x.x to run it. You can install alexa-skill-local globally (recommended) or in your project directory (in that case you may want to run it from npm scripts in package.json).
$ npm install -g alexa-skill-local
Usage
Run the following command. When prompted, open http://localhost:3001 in your browser. Log in with Amazon to grant alexa-skill-local access to update your skill’s endpoint.
$ alexa-skill-local
After that, follow the instructions on the console.
Configure the endpoint for the skill. Under Endpoint, select HTTPS and paste in the URL provided on the command line. For the SSL certificate type, select “My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority.”
The rest of the settings can be left at their default values. Click Save Endpoints.
Once you have your skill ID, you can add the Alexa Skills Kit trigger to your Lambda function.
With the skill code complete, you can create the skill package. To prepare the skill for upload to AWS Lambda, create a zip file that contains the skill file plus the node_modules folder. Make sure to compress all project files directly, NOT the project folder.
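For example, on macOS or Linux you could run the following from inside the project directory (skill.zip is just an illustrative name), so that the files land at the root of the archive rather than inside a folder:
$ zip -r skill.zip index.js package.json node_modules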
Once you’ve created your AWS Lambda function and configured “Alexa Skills Kit” as a trigger, upload the .zip file produced in the previous step and leave the handler as the default index.handler. Finally, copy the ARN for your AWS Lambda function, because you’ll need it when configuring your skill in the Amazon Developer console.
Configure the endpoint for the skill. Under Endpoint select AWS Lambda ARN and paste in the ARN of the function you created previously. The rest of the settings can be left at their default values. Click Save Endpoints.
At this point you can test the skill. Click Test in the top navigation to go to the Test page, and make sure that the “Test is enabled for this skill” option is enabled. You can use the Test page to simulate requests, in both text and voice form.
Use the invocation name along with one of the sample utterances we just configured as a guide. For example, “tell greeter to say hello” should result in your skill responding with “Hello world”. You should also be able to go to the Alexa app (on your phone or at https://alexa.amazon.com) and see your skill listed under Your Skills. From there, you can enable the skill on your account and test it from an Alexa-enabled device.
You should track your bot’s performance on a regular basis and keep an eye on how it is doing if you want it to hit the charts.
Finally, connect your skill to Botanalytics to get free analytics for your Amazon Alexa skills. You cannot improve what you don’t measure, right? You can use our official Node.js library to easily integrate Botanalytics.
Thanks for reading!
#nodejs #alexa tutorials #aws lambda #alexa skills
As we saw in the previous post, we developed an entire pipeline for an Alexa Skill using CircleCI. Now we are going to build the same pipeline using GitHub Actions, the new continuous integration tool provided by GitHub, in order to understand how it works and see the differences compared to the CI/CD platform we used before.
In turn, we are going to use the ASK CLI v2, and we will also use the Alexa Skill file structure provided by this new second version.
Here are the technologies used in this project:
The Alexa Skills Kit Command Line Interface (ASK CLI) is a tool for us to manage our Alexa Skills and its related resources, such as AWS Lambda functions. With the ASK CLI, we have access to the Skill Management API, which allows us to manage Alexa Skills through the command line.
If you want to create a skill with ASK CLI v2, follow the steps described in the official Amazon Alexa documentation.
We are going to use this tool to perform some steps in our pipeline.
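As a rough sketch of where we are heading (the workflow name and secret names here are illustrative, not the final pipeline), a minimal GitHub Actions workflow that installs the ASK CLI and deploys the skill could look like this:
# .github/workflows/deploy.yml -- illustrative sketch
name: deploy
on: [push]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: '12'
      - run: npm install -g ask-cli
      # the ASK CLI can authenticate in CI through environment variables,
      # stored here as repository secrets (assumed names)
      - run: ask deploy
        env:
          ASK_ACCESS_TOKEN: ${{ secrets.ASK_ACCESS_TOKEN }}
          ASK_REFRESH_TOKEN: ${{ secrets.ASK_REFRESH_TOKEN }}
          ASK_VENDOR_ID: ${{ secrets.ASK_VENDOR_ID }}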
Let’s DevOps!
#github #alexa #alexa skills #continuous integration #alexa app development #alexa skills development #alexa skill #alexa skill development #alexa skills developer #github actions
Not babashka. Node.js babashka!?
Ad-hoc CLJS scripting on Node.js.
Experimental. Please report issues here.
Nbb's main goal is to make it easy to get started with ad hoc CLJS scripting on Node.js.
Additional goals and features are:
Nbb requires Node.js v12 or newer.
CLJS code is evaluated through SCI, the same interpreter that powers babashka. Because SCI works with advanced compilation, the bundle size, especially when combined with other dependencies, is smaller than what you get with self-hosted CLJS. That makes startup faster. The trade-off is that execution is less performant and that only a subset of CLJS is available (e.g. no deftype, yet).
Install nbb from NPM:
$ npm install nbb -g
Omit -g for a local install.
Try out an expression:
$ nbb -e '(+ 1 2 3)'
6
And then install some other NPM libraries to use in the script. E.g.:
$ npm install csv-parse shelljs zx
Create a script which uses the NPM libraries:
(ns script
  (:require ["csv-parse/lib/sync$default" :as csv-parse]
            ["fs" :as fs]
            ["path" :as path]
            ["shelljs$default" :as sh]
            ["term-size$default" :as term-size]
            ["zx$default" :as zx]
            ["zx$fs" :as zxfs]
            [nbb.core :refer [*file*]]))
(prn (path/resolve "."))
(prn (term-size))
(println (count (str (fs/readFileSync *file*))))
(prn (sh/ls "."))
(prn (csv-parse "foo,bar"))
(prn (zxfs/existsSync *file*))
(zx/$ #js ["ls"])
Call the script:
$ nbb script.cljs
"/private/tmp/test-script"
#js {:columns 216, :rows 47}
510
#js ["node_modules" "package-lock.json" "package.json" "script.cljs"]
#js [#js ["foo" "bar"]]
true
$ ls
node_modules
package-lock.json
package.json
script.cljs
Nbb has first-class support for macros: you can define them right inside your .cljs file, like you are used to from JVM Clojure. Consider the plet macro to make working with promises more palatable:
(defmacro plet
  [bindings & body]
  (let [binding-pairs (reverse (partition 2 bindings))
        body (cons 'do body)]
    (reduce (fn [body [sym expr]]
              (let [expr (list '.resolve 'js/Promise expr)]
                (list '.then expr (list 'clojure.core/fn (vector sym)
                                        body))))
            body
            binding-pairs)))
Using this macro we can make async code look more like sync code. Consider this puppeteer example:
(-> (.launch puppeteer)
    (.then (fn [browser]
             (-> (.newPage browser)
                 (.then (fn [page]
                          (-> (.goto page "https://clojure.org")
                              (.then #(.screenshot page #js{:path "screenshot.png"}))
                              (.catch #(js/console.log %))
                              (.then #(.close browser)))))))))
Using plet this becomes:
(plet [browser (.launch puppeteer)
       page (.newPage browser)
       _ (.goto page "https://clojure.org")
       _ (-> (.screenshot page #js{:path "screenshot.png"})
             (.catch #(js/console.log %)))]
  (.close browser))
See the puppeteer example for the full code.
Since v0.0.36, nbb includes promesa, which is a library to deal with promises. The above plet macro is similar to promesa.core/let.
$ time nbb -e '(+ 1 2 3)'
6
nbb -e '(+ 1 2 3)' 0.17s user 0.02s system 109% cpu 0.168 total
The baseline startup time for a script is about 170ms on my laptop. When invoked via npx this adds another 300ms or so, so for faster startup, either use a globally installed nbb or use $(npm bin)/nbb script.cljs to bypass npx.
Nbb does not depend on any NPM dependencies. All NPM libraries loaded by a script are resolved relative to that script. When using the Reagent module, React is resolved in the same way as any other NPM library.
To load .cljs files from local paths or dependencies, you can use the --classpath argument. The current dir is added to the classpath automatically. So if there is a file foo/bar.cljs relative to your current dir, then you can load it via (:require [foo.bar :as fb]). Note that nbb uses the same naming conventions for namespaces and directories as other Clojure tools: foo-bar in the namespace name becomes foo_bar in the directory name.
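For instance (the file and namespace names here are just illustrative), a file saved as foo/my_lib.cljs declaring the namespace foo.my-lib can be loaded like this:
;; foo/my_lib.cljs -- note the underscore in the file name
(ns foo.my-lib)
(defn greet [] (println "hello from foo.my-lib"))

;; script.cljs -- note the dash in the namespace name
(ns script
  (:require [foo.my-lib :as ml]))
(ml/greet)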
To load dependencies from the Clojure ecosystem, you can use the Clojure CLI or babashka to download them and produce a classpath:
$ classpath="$(clojure -A:nbb -Spath -Sdeps '{:aliases {:nbb {:replace-deps {com.github.seancorfield/honeysql {:git/tag "v2.0.0-rc5" :git/sha "01c3a55"}}}}}')"
and then feed it to the --classpath argument:
$ nbb --classpath "$classpath" -e "(require '[honey.sql :as sql]) (sql/format {:select :foo :from :bar :where [:= :baz 2]})"
["SELECT foo FROM bar WHERE baz = ?" 2]
Currently nbb only reads from directories, not jar files, so you are encouraged to use git libs. Support for .jar files will be added later.
The name of the file that is currently being executed is available via nbb.core/*file* or on the metadata of vars:
(ns foo
  (:require [nbb.core :refer [*file*]]))
(prn *file*) ;; "/private/tmp/foo.cljs"
(defn f [])
(prn (:file (meta #'f))) ;; "/private/tmp/foo.cljs"
Nbb includes reagent.core, which will be lazily loaded when required. You can use this together with ink to create a TUI application:
$ npm install ink
ink-demo.cljs:
(ns ink-demo
  (:require ["ink" :refer [render Text]]
            [reagent.core :as r]))

(defonce state (r/atom 0))

(doseq [n (range 1 11)]
  (js/setTimeout #(swap! state inc) (* n 500)))

(defn hello []
  [:> Text {:color "green"} "Hello, world! " @state])

(render (r/as-element [hello]))
Working with callbacks and promises can become tedious. Since nbb v0.0.36 the promesa.core namespace is included, with the let and do! macros. An example:
(ns prom
  (:require [promesa.core :as p]))

(defn sleep [ms]
  (js/Promise.
   (fn [resolve _]
     (js/setTimeout resolve ms))))

(defn do-stuff
  []
  (p/do!
   (println "Doing stuff which takes a while")
   (sleep 1000)
   1))

(p/let [a (do-stuff)
        b (inc a)
        c (do-stuff)
        d (+ b c)]
  (prn d))
$ nbb prom.cljs
Doing stuff which takes a while
Doing stuff which takes a while
3
Also see API docs.
Since nbb v0.0.75 applied-science/js-interop is available:
(ns example
  (:require [applied-science.js-interop :as j]))
(def o (j/lit {:a 1 :b 2 :c {:d 1}}))
(prn (j/select-keys o [:a :b])) ;; #js {:a 1, :b 2}
(prn (j/get-in o [:c :d])) ;; 1
Most of this library is supported in nbb, except the following:
destructuring using :syms
property access using the .-x notation; in nbb, you must use keywords
notation. In nbb, you must use keywords.See the example of what is currently supported.
See the examples directory for small examples.
Also check out these projects built with nbb:
See API documentation.
See this gist on how to convert an nbb script or project to shadow-cljs.
Prerequisites:
To build:
bb release
Run bb tasks for more project-related tasks.
Download Details:
Author: borkdude
Official Website: https://github.com/borkdude/nbb
License: EPL-1.0
#node #javascript
In the previous steps, we got our Alexa Skill properly dockerized. As we are not yet going to package all the software components (Alexa Skill + MongoDB) together, in this fourth step we will set up all the Kubernetes objects for our Alexa Skill using MongoDB Atlas, as the sketch below illustrates.
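As a taste of what those objects look like (the image name, port, and secret names below are placeholders, not the project's actual values), a minimal Deployment for the skill's backend might be sketched like this:
# deployment.yaml -- illustrative placeholder values
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alexa-skill
spec:
  replicas: 2
  selector:
    matchLabels:
      app: alexa-skill
  template:
    metadata:
      labels:
        app: alexa-skill
    spec:
      containers:
        - name: alexa-skill
          image: your-registry/alexa-skill:latest # placeholder image
          ports:
            - containerPort: 3000 # assumed port of the skill's HTTP endpoint
          env:
            - name: MONGODB_URI # hypothetical env var for the MongoDB Atlas connection string
              valueFrom:
                secretKeyRef:
                  name: mongodb-atlas
                  key: uri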
Here are the technologies used in this project:
#docker #kubernetes #nginx #alexa #alexa skills #alexa skills development #alexa skill #alexa skill development #alexa skills developer
If you look at the backend technology used by today’s most popular apps, there is one thing you will find common among them: the use of Node.js. Yes, Node.js is that effective and successful.
If you wish to have a strong backend for efficient app performance, then have Node.js at the backend.
WebClues Infotech offers experienced and expert professionals at different levels for your app development needs. So hire a dedicated Node.js developer from WebClues Infotech who matches your experience and expertise requirements.
So what are you waiting for? Get your app developed with strong performance parameters by WebClues Infotech.
For inquiry click here: https://www.webcluesinfotech.com/hire-nodejs-developer/
Book Free Interview: https://bit.ly/3dDShFg
#hire dedicated node.js developers #hire node.js developers #hire top dedicated node.js developers #hire node.js developers in usa & india #hire node js development company #hire the best node.js developers & programmers
It is always good practice in the world of programming to try to develop things that are reusable, so anyone can integrate what has been developed and quickly start using it.
This is the philosophy behind a GitHub Action. Small individual and reusable tasks that we can combine to create jobs and customize our GitHub Actions workflows.
Here are the technologies used in this project:
The Alexa Skills Kit Command Line Interface (ASK CLI) is a tool for us to manage our Alexa Skills and its related resources, such as AWS Lambda functions. With the ASK CLI, we have access to the Skill Management API, which allows us to manage Alexa Skills through the command line.
GitHub Actions helps us to automate tasks within the software development lifecycle. GitHub Actions is event-driven, which means that we can run a series of commands after a specific event has occurred. For example, whenever someone creates a pull request for a repository, we can automatically run a pipeline on GitHub Actions.
An event automatically triggers the workflow, which contains one or more jobs. Then the jobs use steps to control the order in which the actions are executed. These actions are the commands that automate certain processes, as the small example below shows.
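A tiny, purely illustrative workflow showing that event → workflow → job → steps structure (the names are made up):
# .github/workflows/example.yml -- illustrative
name: example-workflow
on: pull_request # the event that triggers the workflow
jobs:
  build: # a job
    runs-on: ubuntu-latest
    steps: # steps control the order in which actions run
      - uses: actions/checkout@v2 # a reusable action
      - run: echo "Triggered by a pull request"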
#github #alexa #alexa skills #continuous integration #alexa app development #alexa skills development #alexa skill #alexa skill development #alexa skills developer #github actions