Toby Rogers

A guide to NPM commands and concepts

Given Node.js’ module ecosystem, one could argue that NPM is the bread and butter of any Node project. In fact, one could even go as far as to say that NPM is one of the most important tools Node.js developers have at their disposal. After all, they use it every day to manage the packages their projects depend on.

Having said that, it’s surprising how little many developers actually know about NPM beyond the fact that it can, indeed, install packages.

Package Management

We all know you can install packages with NPM, but what exactly does that mean? A package is essentially a folder containing the code you need, and you can install it either locally or globally.

Local installation

A local install means you’re literally downloading the files into your project’s folder. Inside it, you’ll find a directory you didn’t create, called “node_modules”. Because of this simple mechanism, the local folder can potentially grow quite big.


That being said, normally you can just ignore the folder, and let Node.js take care of it for you.

To perform a local install all you have to do is:

$ npm install [package-name]

You can even add the --save flag so the package name and version are saved into your package.json file (note that since npm 5, this is the default behavior). And this is important, crucial even, because when working as part of a team you don’t distribute the node_modules folder, nor add it to your version control system (be it Git, SVN or whatever you’re using). Instead, you simply share the package.json file and let your teammates run npm install themselves. This is much faster and easier to maintain than sharing a whole folder that can grow to contain gigabytes of data.
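If Git is your version control system, a single line in your .gitignore keeps the folder out of the repository:

```
node_modules/
```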

Here is how a simple package.json file might look:
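Something along these lines, with illustrative package names and versions:

```json
{
  "name": "my-project",
  "version": "1.0.0",
  "description": "A sample project",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}
```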

Yours might look a bit different, depending on which packages you’ve installed or which fields you need (there are many others not shown in the sample above).

Global installation

You can also install packages globally, which means Node.js will be able to access them from any project that needs them. The catch? Global packages aren’t added to the package.json file, which makes sense. So why would you install global packages?

One of the many great things you can do with Node.js and NPM is build what people usually call “binaries”, which are simply scripts that can be installed globally and thus be accessible from anywhere on your machine. That means you can create command line tools and use NPM to install them!

Without having to go too far, packages such as ExpressJS (one of the most popular Web frameworks for Node.js) or mocha (a very popular testing library) also come with executable binaries you can use. Mocha, for example, can be installed globally, giving you a CLI tool called “mocha” anywhere on your system, or locally, so your project’s tests run against the version pinned in its package.json.
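Wiring up your own binary comes down to the bin field in package.json; here is a hedged sketch, with illustrative package and file names:

```json
{
  "name": "my-cli-tool",
  "version": "1.0.0",
  "bin": {
    "my-cli-tool": "./cli.js"
  }
}
```

The referenced cli.js would start with a `#!/usr/bin/env node` shebang line so the operating system knows to run it with Node.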

Global packages are exposed through a symlink (or shortcut) placed inside a common binaries folder, which has to be on your PATH environment variable for the commands to work.

Classic commands with NPM

The install command is but one of the many you can use with NPM. In fact, leaving aside the almost 60 different commands (yep, you read that right!) that I’m going to briefly cover in a second, NPM also allows you to create your own custom commands in case the built-in ones aren’t enough for you.
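Those custom commands are defined in the scripts section of your package.json and invoked with npm run <name> (npm start and npm test are well-known shorthands). A small illustrative sketch, where the tools referenced are hypothetical choices:

```json
{
  "scripts": {
    "start": "node index.js",
    "test": "mocha",
    "lint": "eslint ."
  }
}
```

With these entries, npm run lint would invoke eslint against the project.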

Here is the list of the most common commands, taken from the official documentation:

  • access: Sets access level on published packages, restricting or enabling access to others aside from its author. Example: $ npm access public
  • adduser: Adds a user account to the registry (by default, the registry is npm’s registry, but you can specify a custom one). Example: $ npm adduser, after which you’ll be prompted for the user’s credentials (username and password) and email.
  • audit: Runs a security audit on your installed dependencies, making sure no known vulnerabilities affect them (and, by extension, your project). You can even run npm audit fix to automatically fix any problems found during the audit.
  • bin: Shows NPM’s bin folder for the current project.
  • bugs: Opens the package’s list of bugs in a new browser window. The interesting bit about this command is that it tries to guess the package’s current bug tracker and, once it finds it, launches the browser pointed there.
  • cache: Although not normally used by developers, this command allows them to clear, verify or add something to NPM’s cache. That cache stores HTTP request information and extra package data. Normally this is handled directly by NPM and works transparently to devs, but if you see some strange behavior, especially when switching between different packages and different versions of them, it might be a good idea to try clearing the cache (just to be on the safe side).
  • ci: Pretty much the same as npm install but meant to be used in automated environments (such as a Continuous Integration process). This command is more strict than install and makes sure the installation is always clean (it automatically deletes the node_modules folder if it’s present).
  • completion: Enables Tab Completion for npm and its sub commands. Read the full documentation for more details.
  • config: Allows you to set, get and edit the configuration options for NPM.
  • dedupe: Attempts to reduce duplication of dependencies by traversing the dependency tree and moving duplicated entries as far up the hierarchy as possible. This is especially useful when your application starts to grow and incorporate a growing number of modules. Using this command is definitely optional, but it will provide a considerable reduction during installation times (most useful on CI/CD environments) if you have a lot of dependencies.
  • deprecate: Adds a deprecation warning on the library’s registry for a particular version (or range of versions).
  • dist-tag: Helps manage tags for a particular package. Tags can act as version aliases, helping identify versions without having to remember the numbers. For example, by default the latest tag points at the most recent version of every library, so you can simply run npm install library-name@latest and NPM will know which version of the library to download.
  • docs: Just like bugs this command attempts to guess where the official documentation for the package is, and opens that URL in a local browser.
  • doctor: Performs a set of pre-defined checks to make sure the system where NPM is being executed from has the minimum requirements ready: the node and git commands are accessible and executable, the node_modules folders (both local and global) are writable by NPM, the registry or any custom version of it is accessible and finally, that the NPM cache exists and it’s working.
  • help-search/help: Help will display the documentation page for a given term, and if no results are found, help-search will perform a full-text search on NPM’s markdown help files and display a list of relevant results.
  • hook: Allows you to configure new NPM hooks, which in turn will notify custom URLs when changes are made to packages of interest. For example, you can get notified when a new version of ExpressJS is released by typing $ npm hook add express http://your-url.com/new-express-version-endpoint and, in turn, do anything you like with that information (such as auto-updating your dependencies).
  • init: Helps initialize a project by asking a series of questions (name, version, author and so on). At the end, a brand-new package.json file is created with that information. You can also provide a custom initializer to tailor the process to your particular stack.
  • install: Installs a new package. You can specify where the package is located and its format (i.e. you can provide just a name so NPM looks it up in the main registry, or the path to a tarball where you’ve downloaded the package to install). You can also specify the version to install if you don’t want the latest to be installed every time you run this command (especially useful in automated environments, such as CI/CD).
  • ls: Lists all the installed packages for the current project. You can make it list global packages or locally installed ones. In either case, it’ll list not only the names and versions visible in the package.json file, but it will also list their dependencies and their versions.
  • outdated: Checks for outdated packages in your project. It’ll provide you with a report of the installed packages, their current version, the version your package.json file is expecting and the latest version published in the main registry.
  • owner: Allows you to manage package owners. This is important if you’re either a library owner or maintainer, but not if you’re just limited to consuming packages.
  • ping: Pings the currently configured main npm registry and tests the authentication as well. This is only useful if you’re having trouble downloading or installing packages, and it will only help you troubleshoot part of the problem, but it’s worth remembering nevertheless.
  • prefix: Displays the current prefix, that is, the path to the closest folder containing a package.json file. With the -g flag, you get the actual place where global packages are installed.
  • publish: Enables developers to share their modules with others publicly or privately by the use of groups and organizations.

These are either the most common or most useful NPM commands available to you, but there are still more than 10 extra commands for you to review, so I’d recommend bookmarking the documentation and making a note to go back and check them out!

Publishing my own packages

The last bit of NPM knowledge I wanted to impart is how easy it is to actually share your work with others. In the previous list, the very last command was publish, which basically allows you to do just that, but here I want to give you a bit more detail.

Preparing your project’s metadata

NPM’s registry is essentially a huge package search engine, capable of hosting everything so you don’t have to, while indexing every bit of metadata it can get about your work in order to help others find your modules as quickly as possible.

In other words, make sure your package.json is properly set up. These are the main fields of interest for you (and others!) when you start sharing packages:

  • Name: The most obvious and common field of the list, and one that you’ve probably already set up when you created the package.json file to keep track of your dependencies. Just be mindful of it, and add it if you haven’t already.
  • Description: Again, quick and easy to understand. That being said, this is where you want to both describe your package so others can quickly understand what they’re getting when installing it, and include the important keywords in the description so the search engine can find it quickly as well. It’s a balance between the needs of the developers trying to find your package and the engine trying to index it correctly.
  • Tags: Simply put, a comma-separated list of keywords. That being said, these tags are very important once you start publishing packages, because on NPM’s main site they act as categories you can easily browse. Neglecting to add this property to your package.json prevents developers from finding your work through navigation.
  • Private: Unless you’re publishing content for you and you alone, you’ll want to set this property to false as soon as you can, otherwise NPM won’t publish your package at all and no one will be able to find it.
  • Bugs: If you’re hosting your content somewhere such as GitHub, where there is public issue tracking, set this property to the right URL. This helps NPM show a link and display the number of currently open issues right on the package’s page.
  • Repository: Another property that is not strictly required, but if you add it, NPM will be able to show extra information such as a link to the repo, activity, and a list of collaborators, just to name a few.
  • Homepage: Like the previous one, NPM will display a separate link to this URL if present. This is especially relevant when your code lives at one URL (such as a GitHub repo) and a website dedicated to your module lives at another.
  • License: Used to display the actual license you’ve set up for your project. It’ll appear in a different, more prominent way if you add it as part of your package.json file. You can also just mention it in your readme.md, but adding it here gives NPM extra knowledge about your project.
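Putting it together, here is a hedged sketch of these fields in package.json. All names and URLs below are illustrative, and note that the tags discussed above live in the keywords array:

```json
{
  "name": "my-library",
  "description": "A sample library for doing useful things",
  "keywords": ["sample", "library", "utilities"],
  "private": false,
  "bugs": {
    "url": "https://github.com/your-username/my-library/issues"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/your-username/my-library.git"
  },
  "homepage": "https://my-library-docs.example.com",
  "license": "MIT"
}
```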

By providing the metadata mentioned above, NPM is able to showcase and highlight that data for developers to see. Take, as an example, the package page for Winston, a fantastic logging library: notice how many links and extra details are shown thanks to the metadata added by its team.

Writing a nice documentation

This step shouldn’t be optional, but it is. I say shouldn’t, of course, because if you’re trying to publish a module that is meant to be used by other developers, you need to provide good documentation.

You can’t really expect your tool to be “trivial to use” or “easy to understand and figure out” without it. The point of NPM’s registry is to provide others with pre-made tools that help them solve problems they don’t want, or don’t have the time, to solve themselves. Failing to provide a simple set of instructions and explanations keeps them from even wanting to try your tool.

With that being said, NPM’s main site takes a cue from GitHub, in the sense that it also looks for a file called readme.md in the root of your project’s directory. If present, it’ll turn your markdown documentation into a nice homepage for the package.

So there really is no excuse when it comes to writing the basic documentation others will need: do it in the readme.md and you’ll have it available in two places at once.

Actually publishing your package

After coding, setting up the right data in your package.json and writing a useful readme.md file, you’re ready to publish.

To perform this, you’ll have to do two things:

  1. Log in to your NPM account (assuming you’ve created one on their website) using the npm CLI.
  2. Publish your code.

That’s it: two steps and you’re done. To log in, simply type:

$ npm login

That’ll prompt you to enter your credentials, and once you’ve successfully logged in, you can type:

$ npm publish

Remember to do this from within your project’s folder, otherwise the second command will fail.

Also, remember that the name of your package is taken from the name property in your package.json file, not from the folder’s name (they usually coincide, but that doesn’t mean a thing). So if you get a name-already-taken error (which can happen given the number of packages available in NPM), that’s where you’ll have to make the change; publishing under a scope (e.g. @your-username/package-name) is another way around collisions.

Conclusion

Thanks for reading, and I hope that by now you’ve managed to understand the complexity and beauty of NPM. It is not just a simple tool for installing packages; you can do a lot more with it if you take the time to check the documentation.

Let me know down in the comments if you were aware of everything I just mentioned and if I missed something else you’re currently using NPM for, I’d love to know!





Amazon Rekognition Video Analyzer Written in Opencv

Create a Serverless Pipeline for Video Frame Analysis and Alerting

Introduction

Imagine being able to capture live video streams, identify objects using deep learning, and then trigger actions or notifications based on the identified objects -- all with low latency and without a single server to manage.

This is exactly what this project is going to help you accomplish with AWS. You will be able to setup and run a live video capture, analysis, and alerting solution prototype.

The prototype was conceived to address a specific use case, which is alerting based on a live video feed from an IP security camera. At a high level, the solution works as follows. A camera surveils a particular area, streaming video over the network to a video capture client. The client samples video frames and sends them over to AWS, where they are analyzed and stored along with metadata. If certain objects are detected in the analyzed video frames, SMS alerts are sent out. Once a person receives an SMS alert, they will likely want to know what caused it. For that, sampled video frames can be monitored with low latency using a web-based user interface.

Here's the prototype's conceptual architecture:

Architecture

Let's go through the steps necessary to get this prototype up and running. If you are starting from scratch and are not familiar with Python, completing all steps can take a few hours.

Preparing your development environment

Here’s a high-level checklist of what you need to do to setup your development environment.

  1. Sign up for an AWS account if you haven't already and create an Administrator User. The steps are published here.
  2. Ensure that you have Python 2.7+ and Pip on your machine. Instructions for that vary based on your operating system and OS version.
  3. Create a Python virtual environment for the project with Virtualenv. This helps keep project’s python dependencies neatly isolated from your Operating System’s default python installation. Once you’ve created a virtual python environment, activate it before moving on with the following steps.
  4. Use Pip to install AWS CLI. Configure the AWS CLI. It is recommended that the access keys you configure are associated with an IAM User who has full access to the following:
  • Amazon S3
  • Amazon DynamoDB
  • Amazon Kinesis
  • AWS Lambda
  • Amazon CloudWatch and CloudWatch Logs
  • AWS CloudFormation
  • Amazon Rekognition
  • Amazon SNS
  • Amazon API Gateway
  • Creating IAM Roles

The IAM User can be the Administrator User you created in Step 1.

5.   Make sure you choose a region where all of the above services are available. Regions us-east-1 (N. Virginia), us-west-2 (Oregon), and eu-west-1 (Ireland) fulfill this criterion. Visit this page to learn more about service availability in AWS regions.

6.   Use Pip to install Open CV 3 python dependencies and then compile, build, and install Open CV 3 (required by Video Cap clients). You can follow this guide to get Open CV 3 up and running on OS X Sierra with Python 2.7. There's another guide for Open CV 3 and Python 3.5 on OS X Sierra. Other guides exist as well for Windows and Raspberry Pi.

7.   Use Pip to install Boto3. Boto is the Amazon Web Services (AWS) SDK for Python, which allows Python developers to write software that makes use of Amazon services like S3 and EC2. Boto provides an easy to use, object-oriented API as well as low-level direct access to AWS services.

8.   Use Pip to install Pynt. Pynt enables you to write project build scripts in Python.

9.   Clone this GitHub repository. Choose a directory path for your project that does not contain spaces (I'll refer to the full path to this directory as <path-to-project-dir>).

10.   Use Pip to install pytz. Pytz is needed for timezone calculations. Use the following commands:

pip install pytz # Install pytz in your virtual python env

pip install pytz -t <path-to-project-dir>/lambda/imageprocessor/ # Install pytz to be packaged and deployed with the Image Processor lambda function

Finally, obtain an IP camera. If you don’t have an IP camera, you can use your smartphone with an IP camera app. This is useful in case you want to test things out before investing in an IP camera. Also, you can simply use your laptop’s built-in camera or a connected USB camera. If you use an IP camera, make sure your camera is connected to the same Local Area Network as the Video Capture client.

Configuring the project

In this section, I list every configuration file, parameters within it, and parameter default values. The build commands detailed later extract the majority of their parameters from these configuration files. Also, the prototype's two AWS Lambda functions - Image Processor and Frame Fetcher - extract parameters at runtime from imageprocessor-params.json and framefetcher-params.json respectively.

NOTE: Do not remove any of the attributes already specified in these files.

NOTE: You must set the value of any parameter that has the tag NO-DEFAULT

config/global-params.json

Specifies “global” build configuration parameters. It is read by multiple build scripts.

{
    "StackName" : "video-analyzer-stack"
}

Parameters:

  • StackName - The name of the stack to be created in your AWS account.

config/cfn-params.json

Specifies and overrides default values of AWS CloudFormation parameters defined in the template (located at aws-infra/aws-infra-cfn.yaml). This file is read by a number of build scripts, including createstack, deploylambda, and webui.

{
    "SourceS3BucketParameter" : "<NO-DEFAULT>",
    "ImageProcessorSourceS3KeyParameter" : "src/lambda_imageprocessor.zip",
    "FrameFetcherSourceS3KeyParameter" : "src/lambda_framefetcher.zip",

    "FrameS3BucketNameParameter" : "<NO-DEFAULT>",

    "FrameFetcherApiResourcePathPart" : "enrichedframe",
    "ApiGatewayRestApiNameParameter" : "VidAnalyzerRestApi",
    "ApiGatewayStageNameParameter": "development",
    "ApiGatewayUsagePlanNameParameter" : "development-plan"
}

Parameters:

SourceS3BucketParameter - The Amazon S3 bucket to which your AWS Lambda function packages (.zip files) will be deployed. If a bucket with such a name does not exist, the deploylambda build command will create it for you with appropriate permissions. AWS CloudFormation will access this bucket to retrieve the .zip files for Image Processor and Frame Fetcher AWS Lambda functions.

ImageProcessorSourceS3KeyParameter - The Amazon S3 key under which the Image Processor function .zip file will be stored.

FrameFetcherSourceS3KeyParameter - The Amazon S3 key under which the Frame Fetcher function .zip file will be stored.

FrameS3BucketNameParameter - The Amazon S3 bucket that will be used for storing video frame images. There must not be an existing S3 bucket with the same name.

FrameFetcherApiResourcePathPart - The name of the Frame Fetcher API resource path part in the API Gateway URL.

ApiGatewayRestApiNameParameter - The name of the API Gateway REST API to be created by AWS CloudFormation.

ApiGatewayStageNameParameter - The name of the API Gateway stage to be created by AWS CloudFormation.

ApiGatewayUsagePlanNameParameter - The name of the API Gateway usage plan to be created by AWS CloudFormation.

config/imageprocessor-params.json

Specifies configuration parameters to be used at run-time by the Image Processor lambda function. This file is packaged along with the Image Processor lambda function code in a single .zip file using the packagelambda build script.

{
    "s3_bucket" : "<NO-DEFAULT>",
    "s3_key_frames_root" : "frames/",

    "ddb_table" : "EnrichedFrame",

    "rekog_max_labels" : 123,
    "rekog_min_conf" : 50.0,

    "label_watch_list" : ["Human", "Pet", "Bag", "Toy"],
    "label_watch_min_conf" : 90.0,
    "label_watch_phone_num" : "",
    "label_watch_sns_topic_arn" : "",
    "timezone" : "US/Eastern"
}

s3_bucket - The Amazon S3 bucket in which Image Processor will store captured video frame images. The value specified here must match the value specified for the FrameS3BucketNameParameter parameter in the cfn-params.json file.

s3_key_frames_root - The Amazon S3 key prefix that will be prepended to the keys of all stored video frame images.

ddb_table - The Amazon DynamoDB table in which Image Processor will store video frame metadata. The default value, EnrichedFrame, matches the default value of the AWS CloudFormation template parameter DDBTableNameParameter in the aws-infra/aws-infra-cfn.yaml template file.

rekog_max_labels - The maximum number of labels that Amazon Rekognition can return to Image Processor.

rekog_min_conf - The minimum confidence required for a label identified by Amazon Rekognition. Any labels with confidence below this value will not be returned to Image Processor.

label_watch_list - A list of labels to watch for. If any of the labels specified in this parameter are returned by Amazon Rekognition, an SMS alert will be sent via Amazon SNS. The label's confidence must exceed label_watch_min_conf.

label_watch_min_conf - The minimum confidence required for a label to trigger a Watch List alert.

label_watch_phone_num - The mobile phone number to which a Watch List SMS alert will be sent. Does not have a default value. You must configure a valid phone number adhering to the E.164 format (e.g. +1404XXXYYYY) for the Watch List feature to become active.
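As an aside, a simple regular expression can sanity-check the E.164 format before you paste a number into the config file. This is an illustrative sketch, not an official validator, and `looks_like_e164` is a hypothetical helper name:

```python
import re

# E.164: a "+", one non-zero country-code digit, then up to 14 more digits.
E164_PATTERN = re.compile(r"^\+[1-9]\d{1,14}$")

def looks_like_e164(phone_num):
    """Return True if phone_num superficially matches the E.164 format."""
    return bool(E164_PATTERN.match(phone_num))
```

For instance, "+14045551234" passes the check, while a number missing the leading "+" does not.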

label_watch_sns_topic_arn - The SNS topic ARN to which you want Watch List alert messages to be sent. The alert message contains a notification text in addition to a JSON formatted list of Watch List labels found. This can be used to publish alerts to any SNS subscribers, such as Amazon SQS queues.

timezone - The timezone used to report time and date in SMS alerts. By default, it is "US/Eastern". See this list of country codes, names, continents, capitals, and pytz timezones.
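The prototype relies on pytz for this conversion; purely as an illustration of what it looks like, here is the same idea using the standard library's zoneinfo module (Python 3.9+) as a stand-in:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib stand-in for pytz (Python 3.9+)

def to_local(utc_dt, tz_name="US/Eastern"):
    """Convert an aware UTC datetime to the configured reporting timezone."""
    return utc_dt.astimezone(ZoneInfo(tz_name))

# Noon UTC in mid-January is 7:00 AM Eastern (UTC-5 during standard time).
stamp = to_local(datetime(2023, 1, 15, 12, 0, tzinfo=timezone.utc))
```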

config/framefetcher-params.json

Specifies configuration parameters to be used at run-time by the Frame Fetcher lambda function. This file is packaged along with the Frame Fetcher lambda function code in a single .zip file using the packagelambda build script.

{
    "s3_pre_signed_url_expiry" : 1800,

    "ddb_table" : "EnrichedFrame",
    "ddb_gsi_name" : "processed_year_month-processed_timestamp-index",

    "fetch_horizon_hrs" : 24,
    "fetch_limit" : 3
}

s3_pre_signed_url_expiry - Frame Fetcher returns video frame metadata. Along with the returned metadata, Frame Fetcher generates and returns a pre-signed URL for every video frame. Using a pre-signed URL, a client (such as the Web UI) can securely access the JPEG image associated with a particular frame. By default, the pre-signed URLs expire in 30 minutes.

ddb_table - The Amazon DynamoDB table from which Frame Fetcher will fetch video frame metadata. The default value, EnrichedFrame, matches the default value of the AWS CloudFormation template parameter DDBTableNameParameter in the aws-infra/aws-infra-cfn.yaml template file.

ddb_gsi_name - The name of the Amazon DynamoDB Global Secondary Index that Frame Fetcher will use to query frame metadata. The default value matches the default value of the AWS CloudFormation template parameter DDBGlobalSecondaryIndexNameParameter in the aws-infra/aws-infra-cfn.yaml template file.

fetch_horizon_hrs - Frame Fetcher will exclude any video frames that were ingested prior to the point in the past represented by (time now - fetch_horizon_hrs).
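As a sketch of that arithmetic (the function name is illustrative, not part of the prototype's code):

```python
from datetime import datetime, timedelta, timezone

def fetch_cutoff(now, fetch_horizon_hrs=24):
    """Frames ingested before this instant are excluded from the response."""
    return now - timedelta(hours=fetch_horizon_hrs)

now = datetime(2023, 6, 1, 12, 0, tzinfo=timezone.utc)
cutoff = fetch_cutoff(now)  # 24 hours earlier: 2023-05-31 12:00 UTC
```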

fetch_limit - The maximum number of video frame metadata items that Frame Fetcher will retrieve from Amazon DynamoDB.

Building the prototype

Common interactions with the project have been simplified for you. Using pynt, the following tasks are automated with simple commands:

  • Creating, deleting, and updating the AWS infrastructure stack with AWS CloudFormation
  • Packaging lambda code into .zip files and deploying them into an Amazon S3 bucket
  • Running the video capture client to stream from a built-in laptop webcam or a USB camera
  • Running the video capture client to stream from an IP camera (MJPEG stream)
  • Building a simple web user interface (Web UI)
  • Running a lightweight local HTTP server to serve the Web UI for development and demo purposes

For a list of all available tasks, enter the following command in the root directory of this project:

pynt -l

The output is the list of build commands available to you.

Build commands are implemented as python scripts in the file build.py. The scripts use the AWS Python SDK (Boto) under the hood. They are documented in the following section.

Prior to using these build commands, you must configure the project. Configuration parameters are split across JSON-formatted files located under the config/ directory. Configuration parameters are described in detail in an earlier section.

Build commands

This section describes important build commands and how to use them. If you want to use these commands right away to build the prototype, you may skip to the section titled "Deploy and run the prototype".

The packagelambda build command

Run this command to package the prototype's AWS Lambda functions and their dependencies (Image Processor and Frame Fetcher) into separate .zip packages (one per function). The deployment packages are created under the build/ directory.

pynt packagelambda # Package both functions and their dependencies into zip files.

pynt packagelambda[framefetcher] # Package only Frame Fetcher.

Currently, only Image Processor requires an external dependency, pytz. If you add features to Image Processor or Frame Fetcher that require external dependencies, you should install the dependencies using Pip by issuing the following command.

pip install <module-name> -t <path-to-project-dir>/lambda/<lambda-function-dir>

For example, let's say you want to perform image processing in the Image Processor Lambda function. You may decide on using the Pillow image processing library. To ensure Pillow is packaged with your Lambda function in one .zip file, issue the following command:

pip install Pillow -t <path-to-project-dir>/lambda/imageprocessor #Install Pillow dependency

You can find more details on installing AWS Lambda dependencies here.

The deploylambda build command

Run this command before you run createstack. The deploylambda command uploads the Image Processor and Frame Fetcher .zip packages to Amazon S3 for pickup by AWS CloudFormation while creating the prototype's stack. This command parses the deployment Amazon S3 bucket name and key names from the cfn-params.json file. If the bucket does not exist, the script will create it. This bucket must be in the same AWS region as the AWS CloudFormation stack, or else the stack creation will fail. Without parameters, the command deploys the .zip packages of both Image Processor and Frame Fetcher. You can specify either "imageprocessor" or "framefetcher" as a parameter between square brackets to deploy an individual function.

Here are sample command invocations.

pynt deploylambda # Deploy both functions to Amazon S3.

pynt deploylambda[framefetcher] # Deploy only Frame Fetcher to Amazon S3.

The createstack build command

The createstack command creates the prototype's AWS CloudFormation stack behind the scenes by invoking the create_stack() API. The AWS CloudFormation template used is located at aws-infra/aws-infra-cfn.yaml under the project’s root directory. The prototype's stack requires a number of parameters to be successfully created. The createstack script reads parameters from both global-params.json and cfn-params.json configuration files. The script then passes those parameters to the create_stack() call.

Note that you must first package and deploy the Image Processor and Frame Fetcher functions to Amazon S3 using the packagelambda and deploylambda commands (documented above) for the AWS CloudFormation stack creation to succeed.

You can issue the command as follows:

pynt createstack

Stack creation should take only a couple of minutes. At any time, you can check on the prototype's stack status either through the AWS CloudFormation console or by issuing the following command.

pynt stackstatus

Congratulations! You’ve just created the prototype's entire architecture in your AWS account.

The deletestack build command

The deletestack command, once issued, does a few things. First, it empties the Amazon S3 bucket used to store video frame images. Next, it calls the AWS CloudFormation delete_stack() API to delete the prototype's stack from your account. Finally, it removes any unneeded resources not deleted by the stack (for example, the prototype's API Gateway Usage Plan resource).

You can issue the deletestack command as follows.

pynt deletestack

As with createstack, you can monitor the progress of stack deletion using the stackstatus build command.

The deletedata build command

The deletedata command, once issued, first empties the Amazon S3 bucket used to store video frame images, and then deletes all items in the DynamoDB table used to store frame metadata.

Use this command to clear all previously ingested video frames and associated metadata. The command will ask for confirmation [Y/N] before proceeding with deletion.

You can issue the deletedata command as follows.

pynt deletedata

The stackstatus build command

The stackstatus command will query AWS CloudFormation for the status of the prototype's stack. This command is most useful for quickly checking that the prototype is up and running (i.e. status is "CREATE_COMPLETE" or "UPDATE_COMPLETE") and ready to serve requests from the Web UI.

You can issue the command as follows.

pynt stackstatus # Get the prototype's Stack Status

The webui build command

Run this command when the prototype's stack has been created (using createstack). The webui command “builds” the Web UI through which you can monitor incoming captured video frames. First, the script copies the webui/ directory verbatim into the project’s build/ directory. Next, the script generates an apigw.js file which contains the API Gateway base URL and the API key to be used by Web UI for invoking the Fetch Frames function deployed in AWS Lambda. This file is created in the Web UI build directory.

You can issue the Web UI build command as follows.

pynt webui

The webuiserver build command

The webuiserver command starts a local, lightweight, Python-based HTTP server on your machine to serve the Web UI from the build/web-ui/ directory. Use this command to serve the prototype's Web UI for development and demonstration purposes. You can specify the server’s port as a pynt task parameter, between square brackets.

Here’s a sample invocation of the command.

pynt webuiserver # Starts lightweight HTTP Server on port 8080.

The videocaptureip and videocapture build commands

The videocaptureip command fires up the MJPEG-based video capture client (source code under the client/ directory). This command accepts, as parameters, an MJPEG stream URL and an optional frame capture rate. The capture rate is defined as one captured frame out of every X frames. Captured frames are packaged, serialized, and sent to the Kinesis Frame Stream. The video capture client for IP cameras uses OpenCV 3 to do simple image-processing operations on captured frame images – mainly image rotation.

Here’s a sample command invocation.

pynt videocaptureip["http://192.168.0.2/video",20] # Captures 1 frame every 20.

On the other hand, the videocapture command (without the trailing 'ip') fires up a video capture client that captures frames from a camera attached to the machine on which it runs. If you run this command on your laptop, for instance, the client will attempt to access its built-in video camera. This video capture client relies on OpenCV 3 to capture video from physically connected cameras. Captured frames are packaged, serialized, and sent to the Kinesis Frame Stream.

Here’s a sample invocation.

pynt videocapture[20] # Captures one frame every 20.
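The “one frame every X” capture-rate policy both clients share can be sketched as a simple modulus filter. This is illustrative only; the real clients read frames with OpenCV and ship them to the Kinesis Frame Stream.

```python
def sample_frames(frames, capture_rate):
    """Yield one frame out of every `capture_rate` frames, mirroring the
    capture-rate parameter described above. (Illustrative sketch only.)"""
    for index, frame in enumerate(frames):
        if index % capture_rate == 0:
            yield frame

# With a rate of 20, frame indices 0, 20, 40, 60, 80 are captured out of 100:
captured = list(sample_frames(range(100), 20))
```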

Deploy and run the prototype

In this section, we are going to use the project's build commands to deploy and run the prototype in your AWS account. We’ll use the commands to create the prototype's AWS CloudFormation stack, build and serve the Web UI, and run the Video Cap client.

Prepare your development environment, and ensure configuration parameters are set as you wish.

On your machine, in a command-line terminal, change into the root directory of the project. Activate your virtual Python environment. Then, enter the following commands:

$ pynt packagelambda #First, package code & configuration files into .zip files

#Command output without errors

$ pynt deploylambda #Second, deploy your lambda code to Amazon S3

#Command output without errors

$ pynt createstack #Now, create the prototype's CloudFormation stack

#Command output without errors

$ pynt webui #Build the Web UI

#Command output without errors
  • On your machine, in a separate command line terminal:
$ pynt webuiserver #Start the Web UI server on port 8080 by default
  • In your browser, access http://localhost:8080 to access the prototype's Web UI. You should see a screen similar to this:

Empty Web UI

Now turn on your IP camera or launch the app on your smartphone. Ensure that your camera is accepting connections for streaming MJPEG video over HTTP, and identify the local URL for accessing that stream.

Then, in a terminal window at the root directory of the project, issue this command:

$ pynt videocaptureip["<your-ip-cam-mjpeg-url>",<capture-rate>]
  • Or, if you don’t have an IP camera and would like to use a built-in camera:
$ pynt videocapture[<frame-capture-rate>]
  • A few seconds after you execute this step, the dashed area in the Web UI will auto-populate with captured frames, side by side with the labels recognized in them.

When you are done

After you are done experimenting with the prototype, perform the following steps to avoid unwanted costs.

  • Terminate the video capture client(s) (press Ctrl+C in the command-line terminal where each one is running)
  • Close all open Web UI browser windows or tabs.
  • Execute the pynt deletestack command (see docs above)
  • After you run deletestack, visit the AWS CloudFormation console to double-check the stack is deleted.
  • Ensure that Amazon S3 buckets and objects within them are deleted.

Remember, you can always set up the entire prototype again with a few simple commands.

License

Licensed under the Amazon Software License.

A copy of the License is located at

http://aws.amazon.com/asl/

The AWS CloudFormation Stack (optional read)

Let’s quickly go through the stack that AWS CloudFormation sets up in your account based on the template. AWS CloudFormation uses as much parallelism as possible while creating resources. As a result, some resources may be created in an order different than what I’m going to describe here.

First, AWS CloudFormation creates the IAM roles necessary to allow AWS services to interact with one another. This includes the following.

ImageProcessorLambdaExecutionRole – a role to be assumed by the Image Processor lambda function. It allows full access to Amazon DynamoDB, Amazon S3, Amazon SNS, and AWS CloudWatch Logs. The role also allows read-only access to Amazon Kinesis and Amazon Rekognition. For simplicity, only managed AWS role permission policies are used.

FrameFetcherLambdaExecutionRole – a role to be assumed by the Frame Fetcher lambda function. It allows full access to Amazon S3, Amazon DynamoDB, and AWS CloudWatch Logs. For simplicity, only managed AWS permission policies are used.

In parallel, AWS CloudFormation creates the Amazon S3 bucket to be used to store the captured video frame images. It also creates the Kinesis Frame Stream to receive captured video frame images from the Video Cap client.

Next, the Image Processor lambda function is created in addition to an AWS Lambda Event Source Mapping to allow Amazon Kinesis to trigger Image Processor once new captured video frames are available.

The Frame Fetcher lambda function is also created. Frame Fetcher is a simple lambda function that responds to a GET request by returning the latest list of frames, in descending order by processing timestamp, up to a configurable number of hours, called the “fetch horizon” (check the framefetcher-params.json file for more run-time configuration parameters). Necessary AWS Lambda Permissions are also created to permit Amazon API Gateway to invoke the Frame Fetcher lambda function.

AWS CloudFormation also creates the DynamoDB table where Enriched Frame metadata is stored by the Image Processor lambda function, as described in the architecture overview section of this post. A Global Secondary Index (GSI) is also created, to be used by the Frame Fetcher lambda function in fetching Enriched Frame metadata in descending order by time of capture.

Finally, AWS CloudFormation creates the Amazon API Gateway resources necessary to allow the Web UI to securely invoke the Frame Fetcher lambda function with a GET request to a public API Gateway URL.

The following API Gateway resources are created.

  • A REST API named “RtRekogRestAPI” by default.
  • An API Gateway resource with a path part set to “enrichedframe” by default.
  • A GET API Gateway method associated with the “enrichedframe” resource. This method is configured with Lambda proxy integration with the Frame Fetcher lambda function (learn more about AWS API Gateway proxy integration here). The method is also configured such that an API key is required.
  • An OPTIONS API Gateway method associated with the “enrichedframe” resource. This method’s purpose is to enable Cross-Origin Resource Sharing (CORS). Enabling CORS allows the Web UI to make Ajax requests to the Frame Fetcher API Gateway URL. Note that the Frame Fetcher lambda function must, itself, also return the Access-Control-Allow-Origin CORS header in its HTTP response.
  • A “development” API Gateway deployment to allow the invocation of the prototype's API over the Internet.
  • A “development” API Gateway stage for the API deployment along with an API Gateway usage plan named “development-plan” by default.
  • An API Gateway API key, named “DevApiKey” by default. The key is associated with the “development” stage and “development-plan” usage plan.

All defaults can be overridden in the cfn-params.json configuration file. That’s it for the prototype's AWS CloudFormation stack! This stack was designed primarily for development/demo purposes, especially in how the Amazon API Gateway resources are set up.

FAQ

Q: Why is this project titled "amazon-rekognition-video-analyzer" despite the security-focused use case?

A: Although this prototype was conceived to address the security monitoring and alerting use case, you can use the prototype's architecture and code as a starting point to address a wide variety of use cases involving low-latency analysis of live video frames with Amazon Rekognition.

Download Details:
Author: aws-samples
Source Code: https://github.com/aws-samples/amazon-rekognition-video-analyzer
License: View license

#opencv  #python #aws 

Modesto Bailey

1596739800

NPM Install and NPM CI: In What Aspect They Differ

Node.js web development has achieved such huge acclaim all over the world largely because of its vast ecosystem of libraries, known as NPM modules. It is the largest software package registry in the world, with over 500,000 packages. A Command Line Interface (CLI) for npm ships with every Node.js installation, which allows developers to work with packages locally on their machines.

The idea of npm modules brought real technical advances to package management: reusable components, easy installation via an online repository, and version and dependency management.

In general, NPM is the default package manager for every Node.js development project. Npm eases the process of installing and updating dependencies. A dependency's page on npmjs even provides you with the installation command, so you can simply copy and paste it into the terminal to initiate the installation.

All npm users also have the advantage of a newer install command called npm ci (i.e., npm continuous integration). This command provides enormous improvements to both the performance and reliability of default builds for continuous integration processes. In turn, it enables a consistent and fast experience for developers using continuous integration in their workflow.

npm install reads package.json to generate a list of dependencies and uses package-lock.json to know the exact versions of those dependencies to install. If a dependency is missing from package-lock.json, npm install will resolve it and add it there.

The npm ci (continuous integration) command, by contrast, installs dependencies from package-lock.json directly and uses package.json only to verify that no mismatched versions exist. If any dependency versions mismatch, it will exit with an error.
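The difference shows up in the guard a CI script typically needs, since npm ci refuses to run without a lockfile. A minimal sketch (run in a scratch directory; it only checks for the lockfile and prints which command would be appropriate):

```shell
cd "$(mktemp -d)"   # scratch directory for the demo
check() {
  if [ -f package-lock.json ]; then
    echo "lockfile found: 'npm ci' installs the exact pinned versions"
  else
    echo "no lockfile: 'npm install' resolves versions and writes package-lock.json"
  fi
}
check                    # no lockfile yet, so 'npm install' is the only option
touch package-lock.json  # simulate a committed lockfile
check                    # now 'npm ci' gives a clean, reproducible install
```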

#npm-install #npm-ci #npm #node-package-manager

Toby Rogers

1565169825

A guide to NPM commands and concepts

Given Node.js’ module ecosystem, one could argue that NPM is the bread and butter of any Node project. In fact, one could even go as far as to say that NPM is one of the most important tools Node.js developers have under their communal belts. After all, they use it every day to manage the packages their projects use.

Having said that, one could also say that it’s quite sad how little developers actually know about NPM, other than it can, indeed, install packages.

Package Management

We all know you can install packages with NPM, but what exactly does that mean? A package is basically a folder containing the code you need and you can either install it locally or globally.

Local installation

A local install means you’re literally downloading the files into your project’s folder. Inside it, you’ll find a directory you didn’t create, called “node_modules”. Because of this simple mechanism, this local folder can potentially grow quite big.


That being said, normally you can just ignore the folder, and let Node.js take care of it for you.

To perform a local install all you have to do is:

$ npm install [package-name]

You can even add the --save flag so the package name and version are saved into your package.json file (as of npm 5, this is the default behavior). And this is important (crucial, even), because when working as part of a team, you do not distribute or add the node_modules folder to your version control system (be it Git, SVN, or whatever you’re using). Instead, you simply share the package.json file and let your teammates run npm install by themselves. This is much faster and easier to maintain than sharing a whole folder, which can grow to contain gigabytes of data.
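With Git, keeping node_modules out of the repository is a one-line ignore rule. A sketch, run in a scratch directory standing in for a project:

```shell
cd "$(mktemp -d)"                 # scratch directory standing in for a project
echo "node_modules/" >> .gitignore
cat .gitignore                    # the folder is now ignored; share package.json instead
```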

Here is what a simple package.json file looks like:
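An illustrative example (the package names, versions, and fields are made up; yours will differ):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "description": "A sample project",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}
```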

Yours might change a bit, depending on which packages you’ve installed, or which fields of the file (there are many others I didn’t use in the sample above) you need.

Global installation

You can also install packages globally, which means Node.js will be able to access them from any project you might need them. The problem? Global packages aren’t added to the package.json file, which makes sense. So why would you install global packages?

One of the many great things you can do with Node.js and NPM, is build what people usually call “binaries”, which are simply scripts that can be installed globally and thus, be accessible from anywhere on your box. That means you can create command line tools and use NPM to install them!

Without having to go too far, packages such as ExpressJS (one of the most popular web frameworks for Node.js) or mocha (a very popular testing library) also come with executable binaries you can use. For example, mocha requires you to install it both globally and locally in order to have a CLI tool called “mocha” available and the ability to run tests on your local project.

Global packages create a symlink (or shortcut) inside a general binaries folder, which needs to be part of your PATH environment variable.
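You can check where that folder is and whether it is on your PATH. A sketch (the directory below is an assumption purely for illustration; npm prefix -g, or npm bin -g on older npm versions, prints the real one on your system):

```shell
dir=/usr/local/bin        # assumed global bin directory; 'npm prefix -g' shows yours
PATH="$dir:$PATH"         # prepend it for this demo so the check below succeeds
case ":$PATH:" in
  *":$dir:"*) echo "on PATH" ;;
  *)          echo "not on PATH" ;;
esac
```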

Classic commands with NPM

The install command is but one of the many you can use with NPM. In fact, among the almost 60 different commands (yep, you read that right!), I’m going to briefly cover the most common ones in a second. NPM also allows you to create your own custom commands in case the built-in ones aren’t enough for you.

Here is the list of the most common commands, taken from the official documentation:

  • access: Sets access level on published packages, restricting or enabling access to others aside from its author. Example: $ npm access public
  • adduser: Adds a user account to the registry (by default, the registry is npm’s registry, but you can specify a custom one). Example: $ npm adduser, after which the user credentials (username and password) and email are entered when prompted.
  • audit: Runs a security audit on your installed dependencies, making sure no known vulnerabilities are affecting them (and by extension, your project). You can even use npm audit fix to automatically fix any problems found during this audit.
  • bin: Shows NPM’s bin folder for the current project.
  • bugs: Opens the list of bugs inside a new browser window. The interesting bit about this command, is that it tries to guess the current bug tracker for the package and once it finds it, then it’ll launch a new browser window.
  • cache: Although not normally used by developers, this command allows them to either clear, verify or add something to NPM’s cache. In that cache HTTP request information and extra package data is stored. Normally this is handled directly by NPM and works transparently to devs, but if you see some strange behavior, especially when switching between different packages and different versions of them, it might be a good idea to try and clear the cache (just to be on the safe side).
  • ci: Pretty much the same as npm install but meant to be used in automated environments (such as a Continuous Integration process). This command is more strict than install and makes sure the installation is always clean (it automatically deletes the node_modules folder if it’s present).
  • completion: Enables Tab Completion for npm and its sub commands. Read the full documentation for more details.
  • config: Allows you to set, get and edit the configuration options for NPM.
  • dedupe: Attempts to reduce duplication of dependencies by traversing the dependency tree and moving duplicated entries as far up the hierarchy as possible. This is especially useful when your application starts to grow and incorporate a growing number of modules. Using this command is definitely optional, but it will provide a considerable reduction during installation times (most useful on CI/CD environments) if you have a lot of dependencies.
  • deprecate: Adds a deprecation warning on the library’s registry for a particular version (or range of versions).
  • dist-tag: Helps manage tags for a particular package. Tags can act as version aliases to help identify versions without having to remember the numbers. For example, by default the latest tag is used for the last version of all libraries, and you can simply run npm install library-name@latest and NPM will understand which version of the library to download.
  • docs: Just like bugs this command attempts to guess where the official documentation for the package is, and opens that URL in a local browser.
  • doctor: Performs a set of pre-defined checks to make sure the system where NPM is being executed from has the minimum requirements ready: the node and git commands are accessible and executable, the node_modules folders (both local and global) are writable by NPM, the registry or any custom version of it is accessible and finally, that the NPM cache exists and it’s working.
  • help-search/help: Help will display the documentation page for a given term, and if no results are found, help-search will perform a full-text search on NPM’s markdown help files and display a list of relevant results.
  • hook: Allows you to configure new NPM hooks, which in turn will notify custom URLs when changes are made to packages of interest. For example, you can get notified when a new version of ExpressJS is released by typing: $npm hook add express http://your-url.com/new-express-version-endpoint and in turn, you can do anything you like with that information (such as auto-updating your dependencies).
  • init: Helps to initialize a project by asking a series of questions such as name, version, author and so on. At the end, a brand new package.json file is created with that information. You also have the ability to provide a custom initializer to customize the process to your particular stack.
  • install: Installs a new package. You can specify where the package is located, and its format (i.e., you can provide only a name so it’ll be looked up in the main registry, or the path to a tarball file where you’ve downloaded the package to install). You can also specify the version to install if you don’t want the latest to be installed every time you run this command (especially useful in automated environments, such as CI/CD).
  • ls: Lists all the installed packages for the current project. You can make it list global packages or locally installed ones. In either case, it’ll list not only the names and versions visible in the package.json file, but it will also list their dependencies and their versions.
  • outdated: Checks for outdated packages in your project. It’ll provide you with a report of the installed packages, their current version, the version your package.json file is expecting and the latest version published in the main registry.
  • owner: Allows you to manage package owners. This is important if you’re either a library owner or maintainer, but not if you’re just limited to consuming packages.
  • ping: Pings the currently configured main npm registry and tests the authentication as well. This is only useful if you’re having issues downloading or installing any package. It will only help you troubleshoot part of the problem, but it’s worth remembering nevertheless.
  • prefix: Displays the current prefix, or in other words, the path to the closest folder with a package.json file inside it. You can use the -g flag and you’ll get the actual place where the global packages are installed.
  • publish: Enables developers to share their modules with others publicly or privately by the use of groups and organizations.

These are either the most common or most useful NPM commands available to you, but there are still more than 10 extra commands for you to review, so I’d recommend you bookmark their documentation and make a note to go back and double-check it!

Publishing my own packages

The last bit of NPM knowledge I wanted to impart on you was how easy it is to actually share your work with others. In the previous list, the very last command was the publish one, which basically allows you to do just that, but here I want to give you a bit more detail.

Preparing your project’s metadata

NPM’s registry is essentially a huge package search engine, capable of both hosting everything so you don’t have to, and indexing every bit of metadata it can get on your work in order to help others find your modules as quickly as possible.

In other words, make sure your package.json is properly set up. These are the main fields of interest for you (and others!) to look into when sharing packages:

  • Name: This is the most obvious and common one from the list, and one that you’ve probably already set up when you created the package.json file to keep track of your dependencies. Just be mindful of it and add it if you haven’t already.
  • Description: Again, a quick and easy-to-understand one. That being said, here is where you want to do both: describe your package so others can quickly understand what they’re getting when installing it, and add as many important keywords inside the description as you can so the search engine knows how to find it quickly as well. It’s a balance between the needs of the developers trying to find your package and the engine trying to correctly index it first.
  • Tags: This is, simply put, a list of keywords (the keywords property in package.json). That being said, these tags are very important once you start publishing packages, because on NPM’s main site they act as categories you can easily browse. So neglecting to add this property to your package.json prevents developers from finding your work through navigation.
  • Private: Unless you’re just publishing content for you and you alone, you’ll want to set this property to false as soon as you can, otherwise no one will be able to find your modules through keyword search.
  • Bugs: This makes sure that if you’re hosting your content somewhere such as GitHub, where there is public issue tracking, you set this property to the right URL. This’ll help NPM show a link and display the number of currently open issues right there on the package’s page.
  • Repository: Another property that is not strictly required, but if you add it, NPM will be able to show extra information such as a link to it, activity, list of collaborators, just to name a few.
  • Homepage: Like the previous one, it’ll help NPM display a separate link to this URL if present. This is especially relevant when you have your code in one URL (such as a Github repo) and a particular website dedicated to your module in another URL.
  • License: This is used to display the actual license you’ve set up on your project. It’ll appear in a different and more prominent way if you add it as part of your package.json file. You can also just mention it in your readme.md, but adding it here will provide extra knowledge about your project to NPM.
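Put together, the fields above might look like this in a package.json (all values are made up; note that the “Tags” item corresponds to the keywords field):

```json
{
  "name": "my-logging-lib",
  "description": "A tiny structured logger for Node.js",
  "keywords": ["logging", "logger", "structured"],
  "private": false,
  "bugs": { "url": "https://github.com/someuser/my-logging-lib/issues" },
  "repository": { "type": "git", "url": "https://github.com/someuser/my-logging-lib.git" },
  "homepage": "https://my-logging-lib.example.com",
  "license": "MIT"
}
```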

By providing the metadata I mentioned above, NPM is able to showcase that data and highlight it for developers to see. Take the following example, the package page for Winston, a fantastic logging library:

Notice how many links and extra bits and details have been added thanks to the metadata added by its team.

Writing a nice documentation

This step shouldn’t be optional, but technically it is. I say shouldn’t, of course, because if you’re trying to publish a module that is meant to be used by other developers, you need to provide good documentation.

You can’t really expect your tool to be “trivial to use” or “easy to understand and figure out”. The point of NPM’s registry is to provide others with pre-made tools that’ll help them solve problems they don’t want, nor have the time, to solve by themselves. So failing to provide a simple set of instructions and explanations will keep them from actually wanting to try and use your tool.

With that being said, NPM’s main site takes a cue from Github in the sense that they also look for a file called readme.md in the root of your project’s directory. If present, they’ll turn your markdown documentation into a nice homepage as you can see in the above screenshot.

So there really isn’t any excuse when it comes to writing the basic documentation others will need. Just do it in the readme.md and you’ll have it available in two places at once.

Actually publishing your package

After coding, setting up the right amount of data into your package.json and writing a useful readme.md file, you’re ready to publish.

To perform this, you’ll have to do two things:

  1. Log in to your NPM account (assuming you’ve created one using their website) using the npm CLI.
  2. Publish your code.

That’s it, two steps, and you’re done. To log in, simply type:

$ npm login

That’ll prompt you to enter your credentials, and once you’ve successfully logged in, you can type:

$ npm publish

Remember to do this from within your project’s folder, otherwise the second command will fail.

Also, remember that the name of your package is given by the name property in your package.json file and not by the folder’s name (they usually coincide, but that doesn’t mean a thing). So if you’re getting a repeated-name error (which can happen given the number of packages available in NPM), that is where you’ll have to make the change.

Conclusion

Thanks for reading, and I hope that by now you’ve managed to understand the complexity and beauty of NPM. It is not just a simple tool for installing packages; you can do a lot more with it if you take the time to check the documentation.

Let me know down in the comments if you were aware of everything I just mentioned and if I missed something else you’re currently using NPM for, I’d love to know!




#node-js #npm #web-development

Trystan Doyle

1593008507

Up your npm game with these 4 practices

If you don’t know what npm is, then you should probably read about it before reading this article. This article touches on recommendations and advanced concepts for those experienced with npm. If you’re not, don’t worry, it’s not that complicated. I can recommend reading this article to get you started.

#npm #npm-package #node-package-manager #npm-weekly #up #programming

Examples of the dig command in Linux

Dig Command Line Options and Examples
Here are the frequently used command-line options and examples of the dig command.
1. Basic Dig Command
A basic dig command accepts a domain name as its command-line parameter and prints the address (A) record.
2. Query a Specific DNS Server
By default, the dig command queries the DNS server configured on your system. For example, Linux systems keep the default DNS entry in /etc/resolv.conf.
3. Print a Short Answer
Use the +short command-line option to print the result in short form. This is especially useful in shell scripting and other automation tasks.
4. Print a Detailed but Specific Result
Use +noall with +answer to print output that is detailed but specific: only the answer section is printed, including a few more details, as the result.

#linux commands #command #dig #dig command #useful examples #linux