Most of the features described in this post have been deprecated, and some are now forbidden by ECMAScript. They are still commonly found in many libraries, so it is worth learning them.
The AWS IoT Device SDK for Embedded C (C-SDK) is a collection of C source files under the MIT open source license that can be used in embedded applications to securely connect IoT devices to AWS IoT Core. It contains MQTT client, HTTP client, JSON Parser, AWS IoT Device Shadow, AWS IoT Jobs, and AWS IoT Device Defender libraries. This SDK is distributed in source form, and can be built into customer firmware along with application code, other libraries, and an operating system (OS) of your choice. These libraries depend only on standard C libraries, so they can be ported to various OSs, from embedded Real Time Operating Systems (RTOS) to Linux/Mac/Windows. You can find sample usage of C-SDK libraries on POSIX systems using OpenSSL (e.g. Linux demos in this repository), and on FreeRTOS using mbedTLS (e.g. FreeRTOS demos in the FreeRTOS repository).
For the latest release of C-SDK, please see the section for Releases and Documentation.
C-SDK includes libraries that are part of the FreeRTOS 202012.01 LTS release. Learn more about the FreeRTOS 202012.01 LTS libraries by clicking here.
The C-SDK libraries are licensed under the MIT open source license.
C-SDK simplifies access to various AWS IoT services. C-SDK has been tested to work with AWS IoT Core and an open source MQTT broker to ensure interoperability. The AWS IoT Device Shadow, AWS IoT Jobs, and AWS IoT Device Defender libraries are flexible to work with any MQTT client and JSON parser. The MQTT client and JSON parser libraries are offered as choices without being tightly coupled with the rest of the SDK. C-SDK contains the following libraries:
The coreMQTT library provides the ability to establish an MQTT connection with a broker over a customer-implemented transport layer, which can either be a secure channel like a TLS session (mutually authenticated or server-only authentication) or a non-secure channel like a plaintext TCP connection. This MQTT connection can be used for performing publish operations to MQTT topics and subscribing to MQTT topics. The library provides a mechanism to register customer-defined callbacks for receiving incoming PUBLISH, acknowledgement and keep-alive response events from the broker. The library has been refactored for memory optimization and is compliant with the MQTT 3.1.1 standard. It has no dependencies on any additional libraries other than the standard C library, a customer-implemented network transport interface, and optionally a customer-implemented platform time function. The refactored design embraces different use-cases, ranging from resource-constrained platforms using only QoS 0 MQTT PUBLISH messages to resource-rich platforms using QoS 2 MQTT PUBLISH over TLS connections.
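As a rough sketch of how these pieces fit together, the following hypothetical QoS 0 publish assumes an MQTTContext_t that has already been initialized with MQTT_Init() and connected with MQTT_Connect(); the transport setup and error handling are omitted, and the topic and payload are made-up examples:

/* Minimal coreMQTT QoS 0 publish sketch; not taken from the SDK demos. */
#include <string.h>
#include "core_mqtt.h"

static void publishSensorReading( MQTTContext_t * pMqttContext )
{
    MQTTPublishInfo_t publishInfo = { 0 };

    publishInfo.qos = MQTTQoS0;
    publishInfo.pTopicName = "sensors/temperature";   /* Example topic. */
    publishInfo.topicNameLength = ( uint16_t ) strlen( publishInfo.pTopicName );
    publishInfo.pPayload = "25.5";                    /* Example payload. */
    publishInfo.payloadLength = strlen( "25.5" );

    /* Packet IDs only matter for QoS > 0, but MQTT_Publish() takes one regardless. */
    ( void ) MQTT_Publish( pMqttContext, &publishInfo, MQTT_GetPacketId( pMqttContext ) );

    /* Let the library process incoming acknowledgement and keep-alive packets.
     * (Signature per the coreMQTT versions bundled with these SDK releases.) */
    ( void ) MQTT_ProcessLoop( pMqttContext, 100U );
}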
See memory requirements for the latest release here.
The coreHTTP library provides the ability to establish an HTTP connection with a server over a customer-implemented transport layer, which can either be a secure channel like a TLS session (mutually authenticated or server-only authentication) or a non-secure channel like a plaintext TCP connection. The HTTP connection can be used to make "GET" (including range requests), "PUT", "POST" and "HEAD" requests. The library provides a mechanism to register a customer-defined callback for receiving parsed header fields in an HTTP response. The library has been refactored for memory optimization, and is a client implementation of a subset of the HTTP/1.1 standard.
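As an illustration, a bare-bones GET request with coreHTTP might look like the sketch below; the transport interface is assumed to be connected already, and the host, path, and buffer size are placeholder values:

/* Minimal coreHTTP GET sketch; host, path, and buffer size are placeholders. */
#include <string.h>
#include "core_http_client.h"

static uint8_t userBuffer[ 1024 ]; /* Holds request headers, then the response. */

static HTTPStatus_t sendGetRequest( const TransportInterface_t * pTransport )
{
    HTTPRequestInfo_t requestInfo = { 0 };
    HTTPRequestHeaders_t requestHeaders = { 0 };
    HTTPResponse_t response = { 0 };
    HTTPStatus_t status;

    requestInfo.pMethod = HTTP_METHOD_GET;
    requestInfo.methodLen = strlen( HTTP_METHOD_GET );
    requestInfo.pHost = "example.com";
    requestInfo.hostLen = strlen( "example.com" );
    requestInfo.pPath = "/index.html";
    requestInfo.pathLen = strlen( "/index.html" );

    requestHeaders.pBuffer = userBuffer;
    requestHeaders.bufferLen = sizeof( userBuffer );
    response.pBuffer = userBuffer;
    response.bufferLen = sizeof( userBuffer );

    status = HTTPClient_InitializeRequestHeaders( &requestHeaders, &requestInfo );

    if( status == HTTPSuccess )
    {
        /* A GET carries no request body, hence the NULL body and zero length. */
        status = HTTPClient_Send( pTransport, &requestHeaders, NULL, 0, &response, 0 );
    }

    return status;
}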
See memory requirements for the latest release here.
The coreJSON library is a JSON parser that strictly enforces the ECMA-404 JSON standard. It provides a function to validate a JSON document, and a function to search for a key and return its value. A search can descend into nested structures using a compound query key. JSON document validation also checks for illegal UTF-8 encodings and illegal Unicode escape sequences.
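A small sketch of the two entry points, using a made-up document and the compound query key user.name to reach into the nested object (signatures follow the coreJSON versions bundled with recent releases):

/* coreJSON validate-then-search sketch with an illustrative document. */
#include <stdio.h>
#include <string.h>
#include "core_json.h"

int main( void )
{
    char buffer[] = "{\"user\": {\"name\": \"device-42\", \"id\": 7}}";
    size_t bufferLength = strlen( buffer );
    char query[] = "user.name";
    char * value;
    size_t valueLength;
    JSONStatus_t result;

    result = JSON_Validate( buffer, bufferLength );

    if( result == JSONSuccess )
    {
        result = JSON_Search( buffer, bufferLength, query, strlen( query ),
                              &value, &valueLength );
    }

    if( result == JSONSuccess )
    {
        printf( "user.name -> %.*s\n", ( int ) valueLength, value ); /* device-42 */
    }

    return 0;
}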
See memory requirements for the latest release here.
The corePKCS11 library is an implementation of the PKCS #11 interface (API) that makes it easier to develop applications that rely on cryptographic operations. Only a subset of the PKCS #11 v2.4 standard has been implemented, with a focus on operations involving asymmetric keys, random number generation, and hashing.
The Cryptoki or PKCS #11 standard defines a platform-independent API to manage and use cryptographic tokens. The name, "PKCS #11", is used interchangeably to refer to the API itself and the standard which defines it.
The PKCS #11 API is useful for writing software without taking a dependency on any particular implementation or hardware. By writing against the PKCS #11 standard interface, code can be used interchangeably with multiple algorithms, implementations and hardware.
Generally, vendors of secure cryptoprocessors, such as a Trusted Platform Module (TPM), Hardware Security Module (HSM), Secure Element, or any other type of secure hardware enclave, distribute a PKCS #11 implementation with the hardware. The purpose of the corePKCS11 mock is therefore to provide a PKCS #11 implementation that allows for rapid prototyping and development before switching to a cryptoprocessor-specific PKCS #11 implementation in production devices.
Since the PKCS #11 interface is defined as part of the PKCS #11 specification, replacing corePKCS11 with another implementation should require little porting effort, as the interface will not change. The system tests distributed in the corePKCS11 repository can be leveraged to verify that the behavior of a different implementation is similar to that of corePKCS11.
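As a sketch of what calling code looks like against the standard interface, the snippet below obtains the module's function list and generates random bytes; session handling is reduced to the essentials, return codes are ignored, and slot 0 is an assumption:

/* Minimal PKCS #11 sketch: every Cryptoki module exports C_GetFunctionList. */
#include "core_pkcs11.h"

void generateRandomBytes( void )
{
    CK_FUNCTION_LIST_PTR pFunctionList = NULL;
    CK_SESSION_HANDLE session = CK_INVALID_HANDLE;
    CK_BYTE randomBytes[ 16 ] = { 0 };
    CK_SLOT_ID slotId = 0; /* Assumes the token of interest is in slot 0. */

    ( void ) C_GetFunctionList( &pFunctionList );
    ( void ) pFunctionList->C_Initialize( NULL );
    ( void ) pFunctionList->C_OpenSession( slotId, CKF_SERIAL_SESSION, NULL, NULL, &session );

    ( void ) pFunctionList->C_GenerateRandom( session, randomBytes, sizeof( randomBytes ) );

    ( void ) pFunctionList->C_CloseSession( session );
    ( void ) pFunctionList->C_Finalize( NULL );
}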
See memory requirements for the latest release here.
The AWS IoT Device Shadow library enables you to store and retrieve the current state of one or more shadows of every registered device. A device’s shadow is a persistent, virtual representation of your device that you can interact with from AWS IoT Core even if the device is offline. The device state captured in its "shadow" is represented as a JSON document. The device can send commands over MQTT to get, update and delete its latest state, as well as receive notifications over MQTT about changes in its state. The device’s shadow(s) are uniquely identified by the name of the corresponding "thing", a representation of a specific device or logical entity on the AWS Cloud. See Managing Devices with AWS IoT for more information on IoT "things". This library supports named shadows, a feature of the AWS IoT Device Shadow service that allows you to create multiple shadows for a single IoT device. More details about AWS IoT Device Shadow can be found in the AWS IoT documentation.
The AWS IoT Device Shadow library has no dependencies on additional libraries other than the standard C library. It also doesn’t have any platform dependencies, such as threading or synchronization. It can be used with any MQTT library and any JSON library (see demos with coreMQTT and coreJSON).
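For a concrete picture of the JSON involved, a device typically reports its state by publishing a document like the following to the shadow's update topic ($aws/things/<thingName>/shadow/update for the classic shadow); the powerOn field and client token are illustrative, written as a C string constant in the same style as the metrics example later in this document:

/* Illustrative shadow update document reporting a new device state. */
#define SHADOW_REPORTED_JSON    \
    "{"                         \
    "\"state\":{"               \
    "\"reported\":{"            \
    "\"powerOn\":1"             \
    "}"                         \
    "},"                        \
    "\"clientToken\":\"1234\""  \
    "}"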
See memory requirements for the latest release here.
The AWS IoT Jobs library enables you to interact with the AWS IoT Jobs service which notifies one or more connected devices of a pending “Job”. A Job can be used to manage your fleet of devices, update firmware and security certificates on your devices, or perform administrative tasks such as restarting devices and performing diagnostics. For documentation of the service, please see the AWS IoT Developer Guide. Interactions with the Jobs service use the MQTT protocol. This library provides an API to compose and recognize the MQTT topic strings used by the Jobs service.
The AWS IoT Jobs library has no dependencies on additional libraries other than the standard C library. It also doesn’t have any platform dependencies, such as threading or synchronization. It can be used with any MQTT library and any JSON library (see demos with libmosquitto and coreJSON).
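For orientation, the Jobs MQTT topics follow a documented pattern; the sketch below composes the start-next topic by hand purely to show its shape, whereas a real application should use this library's API to build and match these strings:

/* Illustrative only: shows the documented shape of a Jobs service topic. */
#include <stdio.h>

void printStartNextTopic( const char * pThingName )
{
    char topic[ 256 ];

    ( void ) snprintf( topic, sizeof( topic ),
                       "$aws/things/%s/jobs/start-next", pThingName );

    printf( "%s\n", topic ); /* e.g. $aws/things/myThing/jobs/start-next */
}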
See memory requirements for the latest release here.
The AWS IoT Device Defender library enables you to interact with the AWS IoT Device Defender service to continuously monitor security metrics from devices for deviations from what you have defined as appropriate behavior for each device. If something doesn’t look right, AWS IoT Device Defender sends out an alert so you can take action to remediate the issue. More details about Device Defender can be found in AWS IoT Device Defender documentation. This library supports custom metrics, a feature that helps you monitor operational health metrics that are unique to your fleet or use case. For example, you can define a new metric to monitor the memory usage or CPU usage on your devices.
The AWS IoT Device Defender library has no dependencies on additional libraries other than the standard C library. It also doesn’t have any platform dependencies, such as threading or synchronization. It can be used with any MQTT library and any JSON library (see demos with coreMQTT and coreJSON).
See memory requirements for the latest release here.
The AWS IoT Over-the-air Update (OTA) library enables you to manage the notification of a newly available update, download the update, and perform cryptographic verification of the firmware update. Using the OTA library, you can logically separate firmware updates from the application running on your devices. You can also use the library to send other files (e.g. images, certificates) to one or more devices registered with AWS IoT. More details about OTA library can be found in AWS IoT Over-the-air Update documentation.
Besides the standard C library, the AWS IoT Over-the-air Update library depends on coreJSON for parsing the JSON job document and on tinyCBOR for decoding encoded data streams. It can be used with any MQTT library, HTTP library, and operating system (e.g. Linux, FreeRTOS) (see demos with coreMQTT and coreHTTP over Linux).
See memory requirements for the latest release here.
The AWS IoT Fleet Provisioning library enables you to interact with the AWS IoT Fleet Provisioning MQTT APIs in order to provision IoT devices without preexisting device certificates. With AWS IoT Fleet Provisioning, devices can securely receive unique device certificates from AWS IoT when they connect for the first time. For an overview of all provisioning options offered by AWS IoT, see the device provisioning documentation. For details about Fleet Provisioning, refer to the AWS IoT Fleet Provisioning documentation.
See memory requirements for the latest release here.
The AWS SigV4 library enables you to sign HTTP requests using the Signature Version 4 signing process. Signature Version 4 (SigV4) is the process of adding authentication information to HTTP requests to AWS services. For security, most requests to AWS must be signed with an access key, which consists of an access key ID and a secret access key.
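For reference, a SigV4-signed request ends up carrying an Authorization header of the documented shape sketched below; the credential scope and signature here are placeholders, since the library computes the real values:

/* Illustrative only: the documented shape of a SigV4 Authorization header. */
#include <stdio.h>

void printAuthorizationHeaderShape( void )
{
    printf( "Authorization: AWS4-HMAC-SHA256 "
            "Credential=AKIDEXAMPLE/20210810/us-east-1/s3/aws4_request, "
            "SignedHeaders=host;x-amz-date, "
            "Signature=<computed-hex-signature>\n" );
}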
See memory requirements for the latest release here.
The backoffAlgorithm library is a utility library that calculates the backoff period for retrying network operations (such as a failed network connection with a server) using an exponential backoff with jitter algorithm. This library uses the "Full Jitter" strategy for the exponential backoff with jitter algorithm. More information about the algorithm can be found in the Exponential Backoff and Jitter AWS blog.
Exponential backoff with jitter is typically used when retrying a failed connection or network request to a server. It helps to mitigate failed network operations caused by network congestion or high server load by spreading out retry requests across the devices attempting them. In addition, in an environment with poor connectivity, a client can get disconnected at any time; a backoff strategy also helps the client conserve battery by not repeatedly attempting reconnections when they are unlikely to succeed.
The backoffAlgorithm library has no dependencies on libraries other than the standard C library.
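A sketch of a typical retry loop with this library follows; connectToServer() and sleepMs() are hypothetical application- and platform-supplied functions, and the random number source for the jitter is passed in by the caller:

/* backoffAlgorithm retry-loop sketch; base delay 500 ms, cap 5000 ms, 5 attempts. */
#include <stdbool.h>
#include <stdint.h>
#include "backoff_algorithm.h"

/* Hypothetical functions supplied by the application and platform. */
extern bool connectToServer( void );
extern void sleepMs( uint16_t ms );

bool connectWithRetries( uint32_t ( * nextRandom )( void ) )
{
    BackoffAlgorithmContext_t retryParams;
    BackoffAlgorithmStatus_t retryStatus = BackoffAlgorithmSuccess;
    uint16_t nextBackoffMs = 0;
    bool connected = false;

    BackoffAlgorithm_InitializeParams( &retryParams, 500U, 5000U, 5U );

    do
    {
        connected = connectToServer();

        if( !connected )
        {
            /* "Full Jitter": the delay is drawn from [0, min(cap, base * 2^attempt)]. */
            retryStatus = BackoffAlgorithm_GetNextBackoff( &retryParams, nextRandom(), &nextBackoffMs );
            sleepMs( nextBackoffMs );
        }
    } while( !connected && ( retryStatus == BackoffAlgorithmSuccess ) );

    return connected;
}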
See memory requirements for the latest release here.
When establishing a connection with AWS IoT, users can optionally report the Operating System, Hardware Platform and MQTT client version information of their device to AWS. This information can help AWS IoT provide faster issue resolution and technical support. If users want to report this information, they can send a specially formatted string (see below) in the username field of the MQTT CONNECT packet.
Format
The format of the username string with metrics is:
<Actual_Username>?SDK=<OS_Name>&Version=<OS_Version>&Platform=<Hardware_Platform>&MQTTLib=<MQTT_Library_name>@<MQTT_Library_version>
where the placeholders carry the actual username along with the device's OS name, OS version, hardware platform, and MQTT library name and version.
Example
/* Username string:
* iotuser?SDK=Ubuntu&Version=20.10&Platform=RaspberryPi&MQTTLib=coremqtt@1.1.0
*/
#define OS_NAME "Ubuntu"
#define OS_VERSION "20.10"
#define HARDWARE_PLATFORM_NAME "RaspberryPi"
#define MQTT_LIB "coremqtt@1.1.0"
#define USERNAME_STRING "iotuser?SDK=" OS_NAME "&Version=" OS_VERSION "&Platform=" HARDWARE_PLATFORM_NAME "&MQTTLib=" MQTT_LIB
#define USERNAME_STRING_LENGTH ( ( uint16_t ) ( sizeof( USERNAME_STRING ) - 1 ) )
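/* The remaining connectInfo members (client identifier, keep-alive interval,
 * etc.) must also be initialized before calling MQTT_Connect(). */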
MQTTConnectInfo_t connectInfo;
connectInfo.pUserName = USERNAME_STRING;
connectInfo.userNameLength = USERNAME_STRING_LENGTH;
mqttStatus = MQTT_Connect( pMqttContext, &connectInfo, NULL, CONNACK_RECV_TIMEOUT_MS, pSessionPresent );
C-SDK releases will now follow a date-based versioning scheme with the format YYYYMM.NN, where YYYYMM is the year and month of the release and NN is the release number within that month (starting at 00).
For example, a second release in June 2021 would be 202106.01. Although the SDK releases have moved to date-based versioning, each library within the SDK will still retain semantic versioning. In semantic versioning, the version number itself (X.Y.Z) indicates whether the release is a major, minor, or point release. You can use the semantic version of a library to assess the scope and impact of a new release on your application.
All of the released versions of the C-SDK libraries are available as git tags. For example, the last release of the v3 SDK version is available at tag 3.1.2.
API documentation of 202108.00 release
This release introduces the refactored AWS IoT Fleet Provisioning library and the new AWS SigV4 library.
Additionally, this release brings minor version updates in the AWS IoT Over-the-Air Update and corePKCS11 libraries.
API documentation of 202103.00 release
This release includes a major update to the APIs of the AWS IoT Over-the-air Update library.
Additionally, the AWS IoT Device Shadow library introduces a minor update by adding support for named shadows, a feature of the AWS IoT Device Shadow service that allows you to create multiple shadows for a single IoT device. The AWS IoT Jobs library introduces a minor update with macros for the $next job ID and compile-time generation of topic strings. The AWS IoT Device Defender library introduces a minor update that adds macros to the API for the custom metrics feature of the AWS IoT Device Defender service. corePKCS11 also introduces a patch update by removing the pkcs11configPAL_DESTROY_SUPPORTED config and the mbedTLS platform abstraction layer of DestroyObject. Lastly, no code changes are introduced for backoffAlgorithm, coreHTTP, coreMQTT, and coreJSON; however, patch updates are made to improve documentation and CI.
API documentation of 202012.01 release
This release includes the AWS IoT Over-the-air Update (Release Candidate), backoffAlgorithm, and PKCS #11 libraries. Additionally, there is a major update to the coreJSON and coreHTTP APIs. All libraries continue to undergo code quality checks (e.g. MISRA-C compliance) and Coverity static analysis. In addition, all libraries except AWS IoT Over-the-air Update and backoffAlgorithm undergo validation of memory safety with the C Bounded Model Checker (CBMC) automated reasoning tool.
API documentation of 202011.00 release
This release includes refactored HTTP client, AWS IoT Device Defender, and AWS IoT Jobs libraries. Additionally, there is a major update to the coreJSON API. All libraries continue to undergo code quality checks (e.g. MISRA-C compliance), Coverity static analysis, and validation of memory safety with the C Bounded Model Checker (CBMC) automated reasoning tool.
API documentation of 202009.00 release
This release includes refactored MQTT, JSON Parser, and AWS IoT Device Shadow libraries for optimized memory usage and modularity. These libraries are included in the SDK via Git submoduling. These libraries have gone through code quality checks including verification that no function has a GNU Complexity score over 8, and checks against deviations from mandatory rules in the MISRA coding standard. Deviations from the MISRA C:2012 guidelines are documented under MISRA Deviations. These libraries have also undergone both static code analysis from Coverity static analysis, and validation of memory safety and data structure invariance through the CBMC automated reasoning tool.
If you are upgrading from v3.x API of the C-SDK to the 202009.00 release, please refer to Migration guide from v3.1.2 to 202009.00 and newer releases. If you are using the C-SDK v4_beta_deprecated branch, note that we will continue to maintain this branch for critical bug fixes and security patches but will not add new features to it. See the C-SDK v4_beta_deprecated branch README for additional details.
Details available here.
All libraries depend on the ISO C90 standard library and additionally on the stdint.h header for fixed-width integers, including uint8_t, int8_t, uint16_t, uint32_t and int32_t, and constant macros like UINT16_MAX. If your platform does not provide stdint.h, definitions of the mentioned fixed-width integer types will be required for porting any C-SDK library to your platform.
Guide for porting coreMQTT library to your platform is available here.
Guide for porting coreHTTP library is available here.
Guide for porting AWS IoT Device Shadow library is available here.
Guide for porting AWS IoT Device Defender library is available here.
Guide for porting OTA library to your platform is available here.
Migration guide for MQTT library is available here.
Migration guide for Shadow library is available here.
Migration guide for Jobs library is available here.
The main branch hosts the continuous development of the AWS IoT Embedded C SDK (C-SDK) libraries. Please be aware that the development at the tip of the main branch is continuously in progress and may have bugs. Consider using the tagged releases of the C-SDK for production-ready software.
The v4_beta_deprecated branch contains a beta version of the C-SDK libraries, which is now deprecated. This branch was earlier named v4_beta and was renamed to v4_beta_deprecated. The libraries in this branch will not be released. However, critical bugs will be fixed and tested. No new features will be added to this branch.
This repository uses Git Submodules to bring in the C-SDK libraries (e.g. MQTT) and third-party dependencies (e.g. mbedtls for the POSIX platform transport layer). Note: If you download the ZIP file provided by the GitHub UI, you will not get the contents of the submodules (the ZIP file is also not a valid git repository). If you download from the 202012.00 Release Page, you will get the entire repository (including the submodules) in the ZIP file, aws-iot-device-sdk-embedded-c-202012.00.zip. To clone the latest commit to the main branch using HTTPS:
git clone --recurse-submodules https://github.com/aws/aws-iot-device-sdk-embedded-C.git
Using SSH:
git clone --recurse-submodules git@github.com:aws/aws-iot-device-sdk-embedded-C.git
If you have downloaded the repo without using the --recurse-submodules argument, you need to run:
git submodule update --init --recursive
When building with CMake, submodules are also recursively cloned automatically. However, -DBUILD_CLONE_SUBMODULES=0 can be passed as a CMake flag to disable this functionality. This is useful when you'd like to build with CMake while using a different commit from a submodule.
The libraries in this SDK are not dependent on any operating system. However, the demos for the libraries in this SDK are built and tested on a Linux platform. The demos build with CMake, a cross-platform build tool.
stdint.h is required for fixed-width integer types that include uint8_t, int8_t, uint16_t, uint32_t and int32_t, and constant macros like UINT16_MAX, while stdbool.h is required for boolean parameters in coreMQTT. For compilers that do not provide these header files, coreMQTT provides the files stdint.readme and stdbool.readme, which can be renamed to stdint.h and stdbool.h, respectively, to provide the required type definitions.
Build Dependencies
The following table shows libraries that need to be installed in your system to run certain demos. If a dependency is not installed and cannot be built from source, demos that require that dependency will be excluded from the default all target.
Dependency | Version | Usage
---|---|---
OpenSSL | 1.1.0 or later | All TLS demos and tests with the exception of PKCS #11
Mosquitto Client | 1.4.10 or later | AWS IoT Jobs Mosquitto demo
You need to set up an AWS account and access the AWS IoT console to run the AWS IoT Device Shadow library, AWS IoT Device Defender library, AWS IoT Jobs library, AWS IoT OTA library, and coreHTTP S3 download demos. The AWS account can also be used to run the MQTT mutual auth demo against the AWS IoT broker. Note that running the AWS IoT Device Defender, AWS IoT Jobs, and AWS IoT Device Shadow library demos requires the setup of a Thing resource for the device running the demo.
The MQTT Mutual Authentication and AWS IoT Shadow demos include example AWS IoT policy documents to run each respective demo with AWS IoT. You may use the MQTT Mutual auth and Shadow example policies by replacing [AWS_REGION] and [AWS_ACCOUNT_ID] with the strings of your region and account identifier. While the IoT Thing name and MQTT client identifier do not need to match for the demos to run, the example policies keep the Thing name and client identifier identical, as per AWS IoT best practices.
It can be very helpful to also have the AWS Command Line Interface tooling installed.
You can pass the following configuration settings as command line options in order to run the mutual auth demos. Make sure to run the following command in the root directory of the C-SDK:
## optionally find your-aws-iot-endpoint from the command line
aws iot describe-endpoint --endpoint-type iot:Data-ATS
cmake -S . -Bbuild \
-DAWS_IOT_ENDPOINT="<your-aws-iot-endpoint>" -DCLIENT_CERT_PATH="<your-client-certificate-path>" -DCLIENT_PRIVATE_KEY_PATH="<your-client-private-key-path>"
In order to set these configurations manually, edit demo_config.h in demos/mqtt/mqtt_demo_mutual_auth/ and demos/http/http_demo_mutual_auth/ to #define the following:
- AWS_IOT_ENDPOINT to your custom endpoint. This is found on the Settings page of the AWS IoT Console and has a format of ABCDEFG1234567.iot.<aws-region>.amazonaws.com, where <aws-region> can be an AWS region like us-east-2. It can also be retrieved with the AWS CLI command aws iot describe-endpoint --endpoint-type iot:Data-ATS.
- CLIENT_CERT_PATH to the path of the client certificate downloaded when setting up the device certificate in AWS IoT Account Setup.
- CLIENT_PRIVATE_KEY_PATH to the path of the private key downloaded when setting up the device certificate in AWS IoT Account Setup.
It is possible to configure ROOT_CA_CERT_PATH to any PEM-encoded Root CA Certificate. However, this is optional because CMake will download and set it to AmazonRootCA1.pem when unspecified.
To build the AWS IoT Device Defender and AWS IoT Device Shadow demos, you can pass the following configuration settings as command line options. Make sure to run the following command in the root directory of the C-SDK:
cmake -S . -Bbuild -DAWS_IOT_ENDPOINT="<your-aws-iot-endpoint>" -DROOT_CA_CERT_PATH="<your-path-to-amazon-root-ca>" -DCLIENT_CERT_PATH="<your-client-certificate-path>" -DCLIENT_PRIVATE_KEY_PATH="<your-client-private-key-path>" -DTHING_NAME="<your-registered-thing-name>"
An Amazon Root CA certificate can be downloaded from here.
In order to set these configurations manually, edit demo_config.h in the demo folder to #define the following:
- AWS_IOT_ENDPOINT to your custom endpoint. This is found on the Settings page of the AWS IoT Console and has a format of ABCDEFG1234567.iot.us-east-2.amazonaws.com.
- ROOT_CA_CERT_PATH to the path of the root CA certificate downloaded when setting up the device certificate in AWS IoT Account Setup.
- CLIENT_CERT_PATH to the path of the client certificate downloaded when setting up the device certificate in AWS IoT Account Setup.
- CLIENT_PRIVATE_KEY_PATH to the path of the private key downloaded when setting up the device certificate in AWS IoT Account Setup.
- THING_NAME to the name of the Thing created in AWS IoT Account Setup.
To build the AWS IoT Fleet Provisioning Demo, you can pass the following configuration settings as command line options. Make sure to run the following command in the root directory of the C-SDK:
cmake -S . -Bbuild -DAWS_IOT_ENDPOINT="<your-aws-iot-endpoint>" -DROOT_CA_CERT_PATH="<your-path-to-amazon-root-ca>" -DCLAIM_CERT_PATH="<your-claim-certificate-path>" -DCLAIM_PRIVATE_KEY_PATH="<your-claim-private-key-path>" -DPROVISIONING_TEMPLATE_NAME="<your-template-name>" -DDEVICE_SERIAL_NUMBER="<your-serial-number>"
An Amazon Root CA certificate can be downloaded from here.
To create a provisioning template and claim credentials, sign into your AWS account and visit here. Make sure to enable the "Use the AWS IoT registry to manage your device fleet" option. Once you have created the template and credentials, modify the claim certificate's policy to match the sample policy.
In order to set these configurations manually, edit demo_config.h in the demo folder to #define the following:
- AWS_IOT_ENDPOINT to your custom endpoint. This is found on the Settings page of the AWS IoT Console and has a format of ABCDEFG1234567.iot.us-east-2.amazonaws.com.
- ROOT_CA_CERT_PATH to the path of the root CA certificate downloaded when setting up the device certificate in AWS IoT Account Setup.
- CLAIM_CERT_PATH to the path of the claim certificate downloaded when setting up the template and claim credentials.
- CLAIM_PRIVATE_KEY_PATH to the path of the private key downloaded when setting up the template and claim credentials.
- PROVISIONING_TEMPLATE_NAME to the name of the provisioning template created.
- DEVICE_SERIAL_NUMBER to an arbitrary string representing a device identifier.
You can pass the following configuration settings as command line options in order to run the S3 demos. Make sure to run the following command in the root directory of the C-SDK:
cmake -S . -Bbuild -DS3_PRESIGNED_GET_URL="s3-get-url" -DS3_PRESIGNED_PUT_URL="s3-put-url"
S3_PRESIGNED_PUT_URL is only needed for the S3 upload demo.
In order to set these configurations manually, edit demo_config.h in demos/http/http_demo_s3_download_multithreaded and demos/http/http_demo_s3_upload to #define the following:
- S3_PRESIGNED_GET_URL to an S3 presigned URL with GET access.
- S3_PRESIGNED_PUT_URL to an S3 presigned URL with PUT access.
You can generate the presigned URLs using demos/http/common/src/presigned_urls_gen.py. More info can be found here.
Refer to demos/http/http_demo_s3_download/README.md for the steps needed to configure and run the S3 Download HTTP demo using the SigV4 library, which generates the authorization HTTP header needed to authenticate the HTTP requests sent to S3.
apt install curl libmosquitto-dev
If the platform does not contain the libmosquitto library, the demo will build the library from source. libmosquitto 1.4.10, or any later version of the first major release, is required to run this demo.
The following creates a job that specifies a Linux Kernel link for downloading.
aws iot create-job \
--job-id 'job_1' \
--targets arn:aws:iot:us-west-2:<account-id>:thing/<thing-name> \
--document '{"url":"https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.8.5.tar.xz"}'
After you build and run the initial executable, you will have to create another executable and schedule an OTA update job with this image:
- Update APP_VERSION_BUILD in demos/ota/ota_demo_core_[mqtt/http]/demo_config.h to a different version than what is running.
- Build the updated demo application into a different build directory, say build-dir-2.
- Rename the new demo executable so it can be distinguished from the running one: mv ota_demo_core_mqtt ota_demo_core_mqtt2
- Create an OTA update job in the AWS IoT console, uploading the new binary build-dir-2/bin/ota_demo2 and setting the path on the device to the absolute path of the executable, e.g. /home/ubuntu/aws-iot-device-sdk-embedded-C-staging/build-dir/bin/ota_demo_core_mqtt2.
- Run the initial executable with sudo ./ota_demo_core_mqtt or sudo ./ota_demo_core_http.
- Once the update has been downloaded, make the new executable runnable and run it:
chmod 775 ota_demo_core_mqtt2
sudo ./ota_demo_core_mqtt2
Before building the demos, ensure you have installed the prerequisite software. On Ubuntu 18.04 and 20.04, gcc, cmake, and OpenSSL can be installed with:
sudo apt install build-essential cmake libssl-dev
cmake -S . -Bbuild && cd build
make help | grep demo
The available demo targets are:
defender_demo
http_demo_basic_tls
http_demo_mutual_auth
http_demo_plaintext
http_demo_s3_download
http_demo_s3_download_multithreaded
http_demo_s3_upload
jobs_demo_mosquitto
mqtt_demo_basic_tls
mqtt_demo_mutual_auth
mqtt_demo_plaintext
mqtt_demo_serializer
mqtt_demo_subscription_manager
ota_demo_core_http
ota_demo_core_mqtt
pkcs11_demo_management_and_rng
pkcs11_demo_mechanisms_and_digests
pkcs11_demo_objects
pkcs11_demo_sign_and_verify
shadow_demo_main
Replace demo_name with your desired demo, then build it: make demo_name. Then go to the build/bin directory and run any demo executables from there.
To build all configured demos, run:
cmake -S . -Bbuild && cd build
make
Then go to the build/bin directory and run any demo executables from there.
The corePKCS11 demos do not require any AWS IoT resources setup, and are standalone. The demos build upon each other to introduce concepts in PKCS #11 sequentially. Below is the recommended order:
1. pkcs11_demo_management_and_rng
2. pkcs11_demo_mechanisms_and_digests
3. pkcs11_demo_objects
4. pkcs11_demo_sign_and_verify
Note that pkcs11_demo_sign_and_verify requires the key pair generated by pkcs11_demo_objects to be in the directory the demo is executed from.
Install Docker:
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
Installing Mosquitto to run MQTT demos locally
The following instructions have been tested on an Ubuntu 18.04 environment with Docker and OpenSSL installed.
Download the official Docker image for Mosquitto 1.6.14. This version is deliberately chosen so that the Docker container can load certificates from the host system. Any version after 1.6.14 will drop privileges as soon as the configuration file has been read (before TLS certificates are loaded).
docker pull eclipse-mosquitto:1.6.14
If a Mosquitto broker with TLS communication needs to be run, ignore this step and proceed to the next step. A Mosquitto broker with plain text communication can be run by executing the command below.
docker run -it -p 1883:1883 --name mosquitto-plain-text eclipse-mosquitto:1.6.14
Set BROKER_ENDPOINT, defined in demos/mqtt/mqtt_demo_plaintext/demo_config.h, to localhost.
Ignore the remaining steps unless a Mosquitto broker with TLS communication also needs to be run.
For TLS communication with Mosquitto broker, server and CA credentials need to be created. Use OpenSSL commands to generate the credentials for the Mosquitto server.
# Generate CA key and certificate. Provide the Subject field information as appropriate for CA certificate.
openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:2048 -keyout ca.key -out ca.crt
# Generate server key and certificate. Provide the Subject field information as appropriate for Server certificate. Make sure the Common Name (CN) field is different from the root CA certificate.
openssl req -nodes -sha256 -new -keyout server.key -out server.csr
# Sign with the CA cert.
openssl x509 -req -sha256 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 365
Note: Make sure to use different Common Name (CN) details for the CA and server certificates; the SSL handshake fails if both certificates use exactly the same Common Name (CN).
Create a mosquitto.conf file that uses port 8883 (for TLS communication) and provides the paths to the generated credentials:
port 8883
cafile /mosquitto/config/ca.crt
certfile /mosquitto/config/server.crt
keyfile /mosquitto/config/server.key
# Use this option for TLS mutual authentication (where client will provide CA signed certificate)
#require_certificate true
tls_version tlsv1.2
#use_identity_as_username true
Run the docker container from the local directory containing the generated credential and mosquitto.conf files.
docker run -it -p 8883:8883 -v $(pwd):/mosquitto/config/ --name mosquitto-basic-tls eclipse-mosquitto:1.6.14
Update demos/mqtt/mqtt_demo_basic_tls/demo_config.h as follows:
- Set BROKER_ENDPOINT to localhost.
- Set ROOT_CA_CERT_PATH to the absolute path of the CA certificate created in step 4 for the local Mosquitto server.
Installing httpbin to run HTTP demos locally
Run httpbin through port 80:
docker pull kennethreitz/httpbin
docker run -p 80:80 kennethreitz/httpbin
SERVER_HOST, defined in demos/http/http_demo_plaintext/demo_config.h, can now be set to localhost.
To run http_demo_basic_tls, download ngrok in order to create an HTTPS tunnel to the httpbin server currently hosted on port 80:
./ngrok http 80 # May have to use ./ngrok.exe depending on OS or filename of the executable
ngrok will provide an https link that can be substituted in demos/http/http_demo_basic_tls/demo_config.h and has a format of https://ABCDEFG12345.ngrok.io.
Set SERVER_HOST in demos/http/http_demo_basic_tls/demo_config.h to the https link provided by ngrok, without https:// preceding it.
You must also download the Root CA certificate provided by the ngrok https link and set ROOT_CA_CERT_PATH in demos/http/http_demo_basic_tls/demo_config.h to the file path of the downloaded certificate.
The C-SDK libraries and platform abstractions can be installed to a file system through CMake. To do so, run the following command in the root directory of the C-SDK. Note that installation is not required to run any of the demos.
cmake -S . -Bbuild -DBUILD_DEMOS=0 -DBUILD_TESTS=0
cd build
sudo make install
Note that because make install will automatically build the all target, it may be useful to disable building demos and tests with -DBUILD_DEMOS=0 -DBUILD_TESTS=0 unless they have already been configured. Super-user permissions may be needed if installing to a system include or system library path.
To install only a subset of all libraries, pass -DINSTALL_LIBS with only the libraries you need. By default, all libraries will be installed, but you may exclude any library that you don't need from this list:
-DINSTALL_LIBS="DEFENDER;SHADOW;JOBS;OTA;OTA_HTTP;OTA_MQTT;BACKOFF_ALGORITHM;HTTP;JSON;MQTT;PKCS"
By default, the install path will be in the project directory of the SDK. You can also set -DINSTALL_TO_SYSTEM=1 to install to the system path for headers and libraries in your OS (e.g. /usr/local/include and /usr/local/lib for Linux).
Upon entering make install, the location of each library will be specified first, followed by the location of all installed headers:
-- Installing: /usr/local/lib/libaws_iot_defender.so
-- Installing: /usr/local/lib/libaws_iot_shadow.so
...
-- Installing: /usr/local/include/aws/defender.h
-- Installing: /usr/local/include/aws/defender_config_defaults.h
-- Installing: /usr/local/include/aws/shadow.h
-- Installing: /usr/local/include/aws/shadow_config_defaults.h
You may also set an installation path of your choice by passing the following flags through CMake. Make sure to run the following command in the root directory of the C-SDK:
cmake -S . -Bbuild -DBUILD_DEMOS=0 -DBUILD_TESTS=0 \
-DCSDK_HEADER_INSTALL_PATH="/header/path" -DCSDK_LIB_INSTALL_PATH="/lib/path"
cd build
sudo make install
POSIX platform abstractions are used together with the C-SDK libraries in the demos. By default, these abstractions are also installed, but they can be excluded by passing the flag -DINSTALL_PLATFORM_ABSTRACTIONS=0.
Lastly, a custom config path for any specific library can also be specified through the following CMake flags, allowing libraries to be compiled with a config of your choice:
-DDEFENDER_CUSTOM_CONFIG_DIR="defender-config-directory"
-DSHADOW_CUSTOM_CONFIG_DIR="shadow-config-directory"
-DJOBS_CUSTOM_CONFIG_DIR="jobs-config-directory"
-DOTA_CUSTOM_CONFIG_DIR="ota-config-directory"
-DHTTP_CUSTOM_CONFIG_DIR="http-config-directory"
-DJSON_CUSTOM_CONFIG_DIR="json-config-directory"
-DMQTT_CUSTOM_CONFIG_DIR="mqtt-config-directory"
-DPKCS_CUSTOM_CONFIG_DIR="pkcs-config-directory"
Note that the file name of the header should not be included in the directory.
Note: For pre-generated documentation, please visit Releases and Documentation section.
The Doxygen references were created using Doxygen version 1.9.2. To generate the Doxygen pages, use the provided Python script at tools/doxygen/generate_docs.py. Please ensure that each of the library submodules under libraries/standard/ and libraries/aws/ is cloned before using this script.
cd <CSDK_ROOT>
git submodule update --init --recursive --checkout
python3 tools/doxygen/generate_docs.py
The generated documentation landing page is located at docs/doxygen/output/html/index.html.
Author: aws
Source code: https://github.com/aws/aws-iot-device-sdk-embedded-C
License: MIT license
Pdf2Gerb is a Perl script that converts PDF files to Gerber format.
Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.
The general workflow is as follows: create PDFs of the board layers from your PCB design software (directly or via a PDF print driver), run Pdf2Gerb on those PDFs to produce the Gerber and Excellon files, and check the output in a Gerber viewer before submitting it to the PCB manufacturer.
Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).
See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.
#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;
use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)
##############################################################################################
#configurable settings:
#change values here instead of in main pdf2gerb.pl file
use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors no worky on Windows? this must be set < first DebugPrint() call
#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software. \nGerber files MAY CONTAIN ERRORS. Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG
use constant METRIC => FALSE; #set to TRUE for metric units (only affects final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC
use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)
#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1);
#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
.010, -.001, #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
.031, -.014, #used for vias
.041, -.020, #smallest non-filled plated hole
.051, -.025,
.056, -.029, #useful for IC pins
.070, -.033,
.075, -.040, #heavier leads
# .090, -.043, #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
.100, -.046,
.115, -.052,
.130, -.061,
.140, -.067,
.150, -.079,
.175, -.088,
.190, -.093,
.200, -.100,
.220, -.110,
.160, -.125, #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
.090, -.040, #want a .090 pad option, but use dummy hole size
.065, -.040, #.065 x .065 rect pad
.035, -.040, #.035 x .065 rect pad
#traces:
.001, #too thin for real traces; use only for board outlines
.006, #minimum real trace width; mainly used for text
.008, #mainly used for mid-sized text, not traces
.010, #minimum recommended trace width for low-current signals
.012,
.015, #moderate low-voltage current
.020, #heavier trace for power, ground (even if a lighter one is adequate)
.025,
.030, #heavy-current traces; be careful with these ones!
.040,
.050,
.060,
.080,
.100,
.120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);
#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size: parsed PDF diameter: error:
# .014 .016 +.002
# .020 .02267 +.00267
# .025 .026 +.001
# .029 .03167 +.00267
# .033 .036 +.003
# .040 .04267 +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};
#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
CIRCLE_ADJUST_MINX => 0,
CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
CIRCLE_ADJUST_MAXY => 0,
SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};
#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches
#line join/cap styles:
use constant
{
CAP_NONE => 0, #butt (none); line is exact length
CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
#number of elements in each shape type:
use constant
{
RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
CIRCLE_SHAPELEN => 5, #x, y, 5, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
rect => RECT_SHAPELEN,
line => LINE_SHAPELEN,
curve => CURVE_SHAPELEN,
circle => CIRCLE_SHAPELEN,
);
#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions
# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?
#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes.
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes
#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (72 / .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches
# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)
# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time
# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const
use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool
my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time
print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load
#############################################################################################
#junk/experiment:
#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html
#my $caller = "pdf2gerb::";
#sub cfg
#{
# my $proto = shift;
# my $class = ref($proto) || $proto;
# my $settings =
# {
# $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
# };
# bless($settings, $class);
# return $settings;
#}
#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;
#print STDERR "read cfg file\n";
#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names
#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }
Author: swannman
Source Code: https://github.com/swannman/pdf2gerb
License: GPL-3.0 license
I’m a huge fan of automation when the scenario allows for it. Maybe you need to keep track of guest information when they RSVP to your event, or maybe you need to monitor and react to feeds of data. These are two of many possible scenarios where you probably wouldn’t want to do things manually.
There are quite a few tools that are designed to automate your life. Some of the popular tools include IFTTT, Zapier, and Automate. The idea behind these services is that given a trigger, you can do a series of events.
In this tutorial, we’re going to see how to collect Twitter data with Zapier, store it in MongoDB using a Realm webhook function, and then run aggregations on it using the MongoDB query language (MQL).
There are a few requirements that must be met prior to starting this tutorial:
There is a Zapier free tier, but because we plan to use webhooks, which are premium in Zapier, a paid account is necessary. To consume data from Twitter in Zapier, a Twitter account is necessary, even if we plan to consume data that isn’t related to our account. This data will be stored in MongoDB, so a cluster with properly configured IP access and user permissions is required.
You can get started with MongoDB Atlas by launching a free M0 cluster, no credit card required.
While not necessary to create a database and collection prior to use, we’ll be using a zapier database and a tweets collection throughout the scope of this tutorial.
Since the plan is to store tweets from Twitter within MongoDB and then create queries to make sense of it, we should probably get an understanding of the data prior to trying to work with it.
We’ll be using the “Search Mention” functionality within Zapier for Twitter. Essentially, it allows us to provide a Twitter query and trigger an automation when the data is found. More on that soon.
As a result, we’ll end up with the following raw data:
{
"created_at": "Tue Feb 02 20:31:58 +0000 2021",
"id": "1356701917603238000",
"id_str": "1356701917603237888",
"full_text": "In case anyone is interested in learning about how to work with streaming data using Node.js, I wrote a tutorial about it on the @MongoDB Developer Hub. https://t.co/Dxt80lD8xj #javascript",
"truncated": false,
"display_text_range": [0, 188],
"metadata": {
"iso_language_code": "en",
"result_type": "recent"
},
"source": "<a href='https://about.twitter.com/products/tweetdeck' rel='nofollow'>TweetDeck</a>",
"in_reply_to_status_id": null,
"in_reply_to_status_id_str": null,
"in_reply_to_user_id": null,
"in_reply_to_user_id_str": null,
"in_reply_to_screen_name": null,
"user": {
"id": "227546834",
"id_str": "227546834",
"name": "Nic Raboy",
"screen_name": "nraboy",
"location": "Tracy, CA",
"description": "Advocate of modern web and mobile development technologies. I write tutorials and speak at events to make app development easier to understand. I work @MongoDB.",
"url": "https://t.co/mRqzaKrmvm",
"entities": {
"url": {
"urls": [
{
"url": "https://t.co/mRqzaKrmvm",
"expanded_url": "https://www.thepolyglotdeveloper.com",
"display_url": "thepolyglotdeveloper.com",
"indices": [0, 23]
}
]
},
"description": {
"urls": ""
}
},
"protected": false,
"followers_count": 4599,
"friends_count": 551,
"listed_count": 265,
"created_at": "Fri Dec 17 03:33:03 +0000 2010",
"favourites_count": 4550,
"verified": false
},
"lang": "en",
"url": "https://twitter.com/227546834/status/1356701917603237888",
"text": "In case anyone is interested in learning about how to work with streaming data using Node.js, I wrote a tutorial about it on the @MongoDB Developer Hub. https://t.co/Dxt80lD8xj #javascript"
}
The data we have access to is probably more than we need. However, it really depends on what you’re interested in. For this example, we’ll be storing the following within MongoDB:
{
"created_at": "Tue Feb 02 20:31:58 +0000 2021",
"user": {
"screen_name": "nraboy",
"location": "Tracy, CA",
"followers_count": 4599,
"friends_count": 551
},
"text": "In case anyone is interested in learning about how to work with streaming data using Node.js, I wrote a tutorial about it on the @MongoDB Developer Hub. https://t.co/Dxt80lD8xj #javascript"
}
Without getting too far ahead of ourselves, our analysis will be based on the followers_count and the location of the user. We want to be able to make sense of where our users are and give priority to users that meet a certain followers threshold.
Before we start connecting Zapier and MongoDB, we need to develop the middleware that will be responsible for receiving tweet data from Zapier.
Remember, you’ll need to have a properly configured MongoDB Atlas cluster.
We need to create a Realm application. Within the MongoDB Atlas dashboard, click the Realm tab.
For simplicity, we’re going to want to create a new application. Click the Create a New App button and proceed to fill in the information about your application.
From the Realm Dashboard, click the 3rd Party Services tab.
We’re going to want to create an HTTP service. The name doesn’t matter, but it might make sense to name it Twitter based on what we’re planning to do.
Because we plan to work with tweet data, it makes sense to call our webhook function tweet, but the name doesn’t truly matter.
With the exception of the HTTP Method, the defaults are fine for this webhook. We want the method to be POST because we plan to create data with this particular webhook function. Make note of the Webhook URL because it will be used when we connect Zapier.
The next step is to open the Function Editor so we can add some logic behind this function. Add the following JavaScript code:
exports = function (payload, response) {
const tweet = EJSON.parse(payload.body.text());
const collection = context.services.get("mongodb-atlas").db("zapier").collection("tweets");
return collection.insertOne(tweet);
};
In the above code, we are taking the request payload, getting a handle to the tweets collection within the zapier database, and then doing an insert operation to store the data in the payload.
There are a few things to note in the above code:
When we call our function, a new document should be created within MongoDB.
By default, the function will not deploy when saving. After saving, make sure to review and deploy the changes through the notification at the top of the browser window.
So, we know the data we’ll be working with and we have a MongoDB Realm webhook function that is ready for receiving data. Now, we need to bring everything together with Zapier.
For clarity, new Twitter matches will be our trigger in Zapier, and the webhook function will be our event.
Within Zapier, choose to create a new “Zap,” which is an automation. The trigger needs to be a Search Mention in Twitter, which means that when a new Tweet is detected using a search query, our events happen.
For this example, we’re going to use the following Twitter search query:
url:developer.mongodb.com -filter:retweets filter:safe lang:en -from:mongodb -from:realm
The above query says that we are looking for tweets that include a URL to developer.mongodb.com. The URL doesn’t need to match exactly as long as the domain matches. The query also says that we aren’t interested in retweets. We only want original tweets, they have to be in English, and they have to be detected as safe for work.
In addition to the mentioned search criteria, we are also excluding tweets that originate from one of the MongoDB accounts.
In theory, the above search query could be used to see what people are saying about the MongoDB Developer Hub.
With the trigger in place, we need to identify the next stage of the automation pipeline. The next stage is taking the data from the trigger and sending it to our Realm webhook function.
As the event, make sure to choose Webhooks by Zapier and specify a POST request. From here, you’ll be prompted to enter your Realm webhook URL and the method, which should be POST. Realm is expecting the payload to be JSON, so it is important to select JSON within Zapier.
We have the option to choose which data from the previous automation stage to pass to our webhook. Select the fields you’re interested in and save your automation.
The data I chose to send looks like this:
{
"created_at": "Tue Feb 02 20:31:58 +0000 2021",
"username": "nraboy",
"location": "Tracy, CA",
"follower_count": "4599",
"following_count": "551",
"message": "In case anyone is interested in learning about how to work with streaming data using Node.js, I wrote a tutorial about it on the @MongoDB Developer Hub. https://t.co/Dxt80lD8xj #javascript"
}
The fields do not match the original fields brought in by Twitter. It is because I chose to map them to what made sense for me.
When deploying the Zap, anytime a tweet is found that matches our query, it will be saved into our MongoDB cluster.
With tweet data populating in MongoDB, it’s time to start querying it to make sense of it. In this fictional example, we want to know what people are saying about our Developer Hub and how popular these individuals are.
To do this, we’re going to want to make use of an aggregation pipeline within MongoDB.
Take the following, for example:
[
{
"$addFields": {
"follower_count": {
"$toInt": "$follower_count"
},
"following_count": {
"$toInt": "$following_count"
}
}
}, {
"$match": {
"follower_count": {
"$gt": 1000
}
}
}, {
"$group": {
"_id": {
"location": "$location"
},
"location": {
"$sum": 1
}
}
}
]
There are three stages in the above aggregation pipeline.
We want to understand the follower data for the individual who made the tweet, but that data comes into MongoDB as a string rather than an integer. The first stage of the pipeline takes the follower_count and following_count fields and converts them from string to integer. In reality, we are using $addFields to create new fields, but because they have the same name as existing fields, the existing fields are replaced.
The next stage is where we want to identify people with more than 1,000 followers as a person of interest. While people with fewer followers might be saying great things, in this example, we don’t care.
After we’ve filtered out people by their follower count, we do a group based on their location. It might be valuable for us to know where in the world people are talking about MongoDB. We might want to know where our target audience exists.
The aggregation pipeline we chose to use can be executed with any of the MongoDB drivers, through the MongoDB Atlas dashboard, or through the CLI.
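For example, a minimal sketch with the MongoDB C driver (libmongoc) might look like the following; the connection string is a placeholder, and error handling is kept to a minimum:

/* Runs a condensed version of the tutorial's pipeline against zapier.tweets
 * with libmongoc; the connection string below is a placeholder. */
#include <mongoc/mongoc.h>
#include <stdio.h>

int main( void )
{
    const char * pipelineJson =
        "{ \"pipeline\": ["
        "  { \"$addFields\": { \"follower_count\": { \"$toInt\": \"$follower_count\" } } },"
        "  { \"$match\": { \"follower_count\": { \"$gt\": 1000 } } },"
        "  { \"$group\": { \"_id\": { \"location\": \"$location\" }, \"location\": { \"$sum\": 1 } } }"
        "] }";
    bson_error_t error;
    const bson_t * doc;

    mongoc_init();

    mongoc_client_t * client = mongoc_client_new( "mongodb+srv://user:password@cluster.example.mongodb.net" );
    mongoc_collection_t * tweets = mongoc_client_get_collection( client, "zapier", "tweets" );

    bson_t * pipeline = bson_new_from_json( ( const uint8_t * ) pipelineJson, -1, &error );
    mongoc_cursor_t * cursor = mongoc_collection_aggregate( tweets, MONGOC_QUERY_NONE, pipeline, NULL, NULL );

    while( mongoc_cursor_next( cursor, &doc ) )
    {
        char * str = bson_as_canonical_extended_json( doc, NULL );
        printf( "%s\n", str );
        bson_free( str );
    }

    mongoc_cursor_destroy( cursor );
    bson_destroy( pipeline );
    mongoc_collection_destroy( tweets );
    mongoc_client_destroy( client );
    mongoc_cleanup();
    return 0;
}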
You just saw how to use Zapier with MongoDB to automate certain tasks and store the results as documents within the NoSQL database. In this example, we chose to store Twitter data that matched certain criteria, later to be analyzed with an aggregation pipeline. The automations and analysis options that you can do are quite limitless.
If you enjoyed this tutorial and want to get engaged with more content and like-minded developers, check out the MongoDB Community.
This content first appeared on MongoDB.
Original article source at: https://www.thepolyglotdeveloper.com/