tmap is a very fast visualization library for large, high-dimensional data sets. Currently, tmap is available for Python. tmap's graph layouts are based on the OGDF library.
#visualization
This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.
Install
npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator
Usage
This plugin relies on your serverless yml file and on the serverless-offline plugin.
plugins:
- serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
- serverless-appsync-simulator
- serverless-offline
Note: Order is important. serverless-appsync-simulator must go before serverless-offline.
To start the simulator, run the following command:
sls offline start
You should see in the logs something like:
...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...
Configuration
Put options under custom.appsync-simulator in your serverless.yml file.
| option | default | description |
| --- | --- | --- |
| apiKey | 0123456789 | When using API_KEY as authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the Lambda function handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function. |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function. |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function. |
| functions | {} | A mapping of external functions, used to provide invoke URLs for external functions. |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local; otherwise, the port is taken from the dynamodb-local config. |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB. |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB. |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS. |
| dynamoDb.* | | You can add every configuration accepted by the DynamoDB SDK. |
| rds.dbName | | Name of the database. |
| rds.dbHost | | Database host. |
| rds.dbDialect | | Database dialect. Possible values: mysql, postgres. |
| rds.dbUsername | | Database username. |
| rds.dbPassword | | Database password. |
| rds.dbPort | | Database port. |
| watch | *.graphql, *.vtl | Array of glob patterns to watch for hot-reloading. |
Example:
custom:
  appsync-simulator:
    location: '.webpack/service' # use webpack build directory
    dynamoDb:
      endpoint: 'http://my-custom-dynamo:8000'
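The options documented in the table above can also be combined into a single block. A broader sketch follows; the values are placeholders for illustration, not defaults you must copy:

custom:
  appsync-simulator:
    apiKey: '0123456789'           # key expected when authenticationType is API_KEY
    port: 20002                    # AppSync operations port
    wsPort: 20003                  # AppSync subscriptions port
    location: '.webpack/service'   # where the lambda handlers are located
    dynamoDb:
      endpoint: 'http://my-custom-dynamo:8000'
    watch:
      - '*.graphql'
      - '*.vtl'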
Hot-reloading
By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? - this is a Serverless Framework limitation). You will need to restart the simulator each time you change yml files.
Hot-reloading relies on watchman. Make sure it is installed on your system.
You can change the files being watched with the watch option, which is then passed to watchman as the match expression.
e.g.
custom:
  appsync-simulator:
    watch:
      - ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
      - "*.graphql" # => string like this is equivalent to `["match", "*.graphql"]`
Or you can opt out of hot-reloading by leaving an empty array or setting the option to false.
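For example, a minimal sketch that disables hot-reloading entirely:

custom:
  appsync-simulator:
    watch: []   # nothing is watched; restart the simulator manually after changes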
Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that tool instead.
Resource CloudFormation functions resolution
This plugin supports some resource resolution from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other CFN functions such as Fn::Join, Fn::Sub, etc.
Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported CFN functions, refer to its documentation.
You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.
provider:
  environment:
    BUCKET_NAME:
      Ref: MyBucket # resolves to `my-bucket-name`

resources:
  Resources:
    MyDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: myTable
        ...
    MyBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket-name
        ...

# in your appsync config
dataSources:
  - type: AMAZON_DYNAMODB
    name: dynamosource
    config:
      tableName:
        Ref: MyDbTable # resolves to `myTable`
Sometimes, some references cannot be resolved because they come from a CloudFormation Output; or you might want to use mocked values in your local environment.
In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options.

refMap takes a mapping of resource name to value pairs
getAttMap takes a mapping of resource name to attribute/value pairs
importValueMap takes a mapping of import name to value pairs

Example:
custom:
  appsync-simulator:
    refMap:
      # Override `MyDbTable` resolution from the previous example.
      MyDbTable: 'mock-myTable'
    getAttMap:
      # define ElasticSearchInstance DomainName
      ElasticSearchInstance:
        DomainEndpoint: 'localhost:9200'
    importValueMap:
      other-service-api-url: 'https://other.api.url.com/graphql'
# in your appsync config
dataSources:
  - type: AMAZON_ELASTICSEARCH
    name: elasticsource
    config:
      # endpoint resolves as 'https://localhost:9200'
      endpoint:
        Fn::Join:
          - ''
          - - https://
            - Fn::GetAtt:
                - ElasticSearchInstance
                - DomainEndpoint
In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name.
This notation can be used with all mocks: refMap, getAttMap and importValueMap.
provider:
  environment:
    FINISH_ACTIVITY_FUNCTION_ARN:
      Fn::ImportValue: other-service-api-${self:provider.stage}-url

custom:
  serverless-appsync-simulator:
    importValueMap:
      - key: other-service-api-${self:provider.stage}-url
        value: 'https://other.api.url.com/graphql'
This plugin only tries to resolve the following parts of the yml tree:
provider.environment
functions[*].environment
custom.appSync
If you have the need of resolving others, feel free to open an issue and explain your use case.
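For instance, a function-level environment variable is resolved the same way as a provider-level one. A minimal sketch (the function name and handler here are hypothetical, for illustration only):

functions:
  myFunction:                 # hypothetical function
    handler: handler.main
    environment:
      TABLE_NAME:
        Ref: MyDbTable        # resolved with the same rules as provider.environment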
For now, the supported resources to be automatically resolved by Ref: are:
Feel free to open a PR or an issue to extend them as well.
External functions
When a function is not defined within the current serverless file, you can still call it by providing an invoke url, which should point to a REST method. Make sure you specify "get" or "post" for the method. Default is "get", but you probably want "post".
custom:
  appsync-simulator:
    functions:
      addUser:
        url: http://localhost:3016/2015-03-31/functions/addUser/invocations
        method: post
      addPost:
        url: https://jsonplaceholder.typicode.com/posts
        method: post
Supported Resolver types
This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.
From AWS Amplify:
Implemented by this plugin
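The VTL samples below are written for SQL-backed data sources. This first request template converts camelCase input keys to snake_case column names, maps booleans to 1/0, and issues an INSERT followed by a SELECT of the newly inserted row: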
#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#set( $discard = $cols.add("$toSnake") )
#if( $util.isBoolean($ctx.args.input[$entry]) )
#if( $ctx.args.input[$entry] )
#set( $discard = $vals.add("1") )
#else
#set( $discard = $vals.add("0") )
#end
#else
#set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
#end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
#set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
#set( $colStr = "($colStr)" )
#end
{
"version": "2018-05-29",
"statements": ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}
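The next request template builds an UPDATE statement from the input fields using the same camelCase-to-snake_case conversion, then re-selects the updated row: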
#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $cur = $ctx.args.input[$entry] )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#if( $util.isBoolean($cur) )
#if( $cur )
#set ( $cur = "1" )
#else
#set ( $cur = "0" )
#end
#end
#if ( $util.isNullOrEmpty($update) )
#set($update = "$toSnake$equals'$cur'" )
#else
#set($update = "$update,$toSnake$equals'$cur'" )
#end
#end
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}
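A soft delete can be expressed the same way, setting a deleted_at timestamp instead of removing the row: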
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}
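The following response template reads the second SQL statement result, rewrites timestamptz columns as ISO 8601 strings, and converts snake_case column names back to camelCase before returning the row as JSON: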
#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
#set ($index = $index + 1)
#if ( $column["typeName"] == "timestamptz" )
#set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
#set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
#set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
$util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
#end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
#set ( $s = $mapKey.split("_") )
#set ( $camelCase="" )
#set ( $isFirst=true )
#foreach($entry in $s)
#if ( $isFirst )
#set ( $first = $entry.substring(0,1) )
#else
#set ( $first = $entry.substring(0,1).toUpperCase() )
#end
#set ( $isFirst=false )
#set ( $stringLength = $entry.length() )
#set ( $remaining = $entry.substring(1, $stringLength) )
#set ( $camelCase = "$camelCase$first$remaining" )
#end
$util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)
Variable map support is limited and does not differentiate between number and string data types; please inject them directly if needed. null, true, and false values will be escaped properly.
{
  "version": "2018-05-29",
  "statements": [
    "UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
    "SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
  ],
  "variableMap": {
    ":ID": $ctx.args.id
    ## ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
  }
}
Requires
Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator
License: MIT License
If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.
In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.
#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition
Using data to inform decisions is essential to product management, or anything really. And thankfully, we aren’t short of it. Any online application generates an abundance of data and it’s up to us to collect it and then make sense of it.
Google Data Studio helps us understand the meaning behind data, enabling us to build beautiful visualizations and dashboards that transform data into stories. If it wasn’t already, data literacy is as much a fundamental skill as learning to read or write. Or it certainly will be.
Nothing is more powerful than data democracy, where anyone in your organization can regularly make decisions informed with data. As part of enabling this, we need to be able to visualize data in a way that brings it to life and makes it more accessible. I’ve recently been learning how to do this and wanted to share some of the cool ways you can do this in Google Data Studio.
#google-data-studio #blending-data #dashboard #data-visualization #creating-visualizations #how-to-visualize-data #data-analysis #data-visualisation
The opportunities big data offers also come with very real challenges that many organizations are facing today. Often, it’s finding the most cost-effective, scalable way to store and process boundless volumes of data in multiple formats that come from a growing number of sources. Then organizations need the analytical capabilities and flexibility to turn this data into insights that can meet their specific business objectives.
This Refcard dives into how a data lake helps tackle these challenges at both ends — from its enhanced architecture that’s designed for efficient data ingestion, storage, and management to its advanced analytics functionality and performance flexibility. You’ll also explore key benefits and common use cases.
As technology continues to evolve with new data sources, such as IoT sensors and social media churning out large volumes of data, there has never been a better time to discuss the possibilities and challenges of managing such data for varying analytical insights. In this Refcard, we dig deep into how data lakes solve the problem of storing and processing enormous amounts of data. While doing so, we also explore the benefits of data lakes, their use cases, and how they differ from data warehouses (DWHs).
#big data #data analytics #data analysis #business analytics #data warehouse #data storage #data lake #data lake architecture #data lake governance #data lake management
Data visualization is a fundamental ingredient of data science. It helps us understand the data better by providing insights. We also use data visualization to deliver the results or findings.
Python, being the predominant choice of programming language in the data science ecosystem, offers a rich selection of data visualization libraries. In this article, we will do a practical comparison of 3 popular ones.
The libraries we will cover are Seaborn, Altair, and Plotly. The examples will consist of 3 fundamental data visualization types which are scatter plot, histogram, and line plot.
We will do the comparison by creating the same visualizations with all 3 libraries. We will be using the Melbourne housing dataset available on Kaggle for the examples.
#data-visualization #python #data-science #programming #clash of python data visualization libraries #libraries