Imbalanced data refers to datasets where the number of observations per class is not equally distributed: often there is a majority class that accounts for a much larger share of the dataset and minority classes that don't have enough examples.
Small training sets also suffer from not having enough examples. Both problems are very common in real-world applications, but luckily there are several ways to overcome them. This article will walk through many different techniques and perspectives for combating imbalanced data. In particular, you will learn about:
Imbalanced data is a common problem in data science. From image classification to fraud detection or medical diagnosis, data scientists face imbalanced datasets. Having an imbalanced dataset decreases the sensitivity of the model towards minority classes. Let's illustrate this with some simple math:
Imagine you have 10,000 lung X-ray images and only 100 of them are diagnosed with pneumonia, an infectious disease that inflames the air sacs in one or both lungs and fills them with fluid. If you train a model that predicts every example as healthy, you will get 99% accuracy. Wow, how awesome is that? Wrong: you just killed many people with your model, because it never catches a single pneumonia case.
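To make the math concrete, here is a minimal sketch (assuming NumPy and scikit-learn, with a made-up label array standing in for the X-ray dataset) that scores an "always healthy" baseline; accuracy looks great while recall on the pneumonia class is zero:

import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical labels: 10,000 X-rays, 100 of them pneumonia (label 1)
y_true = np.array([1] * 100 + [0] * 9900)

# A "model" that predicts every patient as healthy (label 0)
y_pred = np.zeros_like(y_true)

print(accuracy_score(y_true, y_pred))  # 0.99 -- looks impressive
print(recall_score(y_true, y_pred))    # 0.0  -- misses every pneumonia case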
#resampling #tutorial #imbalanced-data #machine-learning #data-science
This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.
Install
npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator
Usage
This plugin relies on your serverless yml file and on the serverless-offline plugin.
plugins:
- serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
- serverless-appsync-simulator
- serverless-offline
Note: Order is important. serverless-appsync-simulator must go before serverless-offline.
To start the simulator, run the following command:
sls offline start
You should see in the logs something like:
...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...
Configuration
Put options under custom.appsync-simulator in your serverless.yml file.
| option | default | description |
| --- | --- | --- |
| apiKey | 0123456789 | When using API_KEY as authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda functions handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function |
| functions | {} | A mapping of external functions for providing an invoke url for external functions |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local. Otherwise, the port is taken from the dynamodb-local config |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | You can add every configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values: mysql or postgres |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | ["*.graphql", "*.vtl"] | Array of glob patterns to watch for hot-reloading. |
Example:
custom:
appsync-simulator:
location: '.webpack/service' # use webpack build directory
dynamoDb:
endpoint: 'http://my-custom-dynamo:8000'
Hot-reloading
By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? This is a Serverless Framework limitation). You will need to restart the simulator each time you change your yml files.
Hot-reloading relies on watchman. Make sure it is installed on your system.
You can change the files being watched with the watch option, which is then passed to watchman as the match expression.
e.g.
custom:
appsync-simulator:
watch:
- ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
- "*.graphql" # => string like this is equivalent to `["match", "*.graphql"]`
Or you can opt out by leaving an empty array or setting the option to false.
Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that instead.
Resource CloudFormation functions resolution
This plugin supports resource resolution via the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other CloudFormation functions such as Fn::Join, Fn::Sub, etc.
Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported CloudFormation functions, refer to its documentation.
You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.
provider:
environment:
BUCKET_NAME:
Ref: MyBucket # resolves to `my-bucket-name`
resources:
Resources:
MyDbTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: myTable
...
MyBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: my-bucket-name
...
# in your appsync config
dataSources:
- type: AMAZON_DYNAMODB
name: dynamosource
config:
tableName:
Ref: MyDbTable # resolves to `myTable`
Sometimes, some references cannot be resolved, as they come from an Output from CloudFormation; or you might want to use mocked values in your local environment.
In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options.
- refMap takes a mapping of resource name to value pairs
- getAttMap takes a mapping of resource name to attribute/values pairs
- importValueMap takes a mapping of import name to values pairs

Example:
custom:
appsync-simulator:
refMap:
# Override `MyDbTable` resolution from the previous example.
MyDbTable: 'mock-myTable'
getAttMap:
# define ElasticSearchInstance DomainName
ElasticSearchInstance:
DomainEndpoint: 'localhost:9200'
importValueMap:
other-service-api-url: 'https://other.api.url.com/graphql'
# in your appsync config
dataSources:
- type: AMAZON_ELASTICSEARCH
name: elasticsource
config:
      # endpoint resolves as 'https://localhost:9200'
endpoint:
Fn::Join:
- ''
- - https://
- Fn::GetAtt:
- ElasticSearchInstance
- DomainEndpoint
In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name.
This notation can be used with all mocks: refMap, getAttMap and importValueMap.
provider:
environment:
FINISH_ACTIVITY_FUNCTION_ARN:
Fn::ImportValue: other-service-api-${self:provider.stage}-url
custom:
serverless-appsync-simulator:
importValueMap:
- key: other-service-api-${self:provider.stage}-url
value: 'https://other.api.url.com/graphql'
This plugin only tries to resolve the following parts of the yml tree:
provider.environment
functions[*].environment
custom.appSync
If you have the need of resolving others, feel free to open an issue and explain your use case.
For now, the supported resources that are automatically resolved by Ref: are:
Feel free to open a PR or an issue to extend them as well.
External functions
When a function is not defined within the current serverless file, you can still call it by providing an invoke url which should point to a REST method. Make sure you specify "get" or "post" for the method. The default is "get", but you probably want "post".
custom:
appsync-simulator:
functions:
addUser:
url: http://localhost:3016/2015-03-31/functions/addUser/invocations
method: post
addPost:
url: https://jsonplaceholder.typicode.com/posts
method: post
Supported Resolver types
This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.
From AWS Amplify:
Implemented by this plugin
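## Example request mapping template below: builds an INSERT statement from $ctx.args.input, converting camelCase keys to snake_case columns and boolean values to 1/0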
#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#set( $discard = $cols.add("$toSnake") )
#if( $util.isBoolean($ctx.args.input[$entry]) )
#if( $ctx.args.input[$entry] )
#set( $discard = $vals.add("1") )
#else
#set( $discard = $vals.add("0") )
#end
#else
#set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
#end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
#set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
#set( $colStr = "($colStr)" )
#end
{
"version": "2018-05-29",
"statements": ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}
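## Example request mapping template below: builds the SET clause of an UPDATE statement from $ctx.args.input, converting camelCase keys to snake_case and booleans to 1/0, then re-selects the row by id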
#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $cur = $ctx.args.input[$entry] )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#if( $util.isBoolean($cur) )
#if( $cur )
#set ( $cur = "1" )
#else
#set ( $cur = "0" )
#end
#end
#if ( $util.isNullOrEmpty($update) )
#set($update = "$toSnake$equals'$cur'" )
#else
#set($update = "$update,$toSnake$equals'$cur'" )
#end
#end
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}
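## Example request mapping template below: soft-deletes a row by setting deleted_at to NOW(), then re-selects it by id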
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}
#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
#set ($index = $index + 1)
#if ( $column["typeName"] == "timestamptz" )
#set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
#set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
#set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
$util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
#end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
#set ( $s = $mapKey.split("_") )
#set ( $camelCase="" )
#set ( $isFirst=true )
#foreach($entry in $s)
#if ( $isFirst )
#set ( $first = $entry.substring(0,1) )
#else
#set ( $first = $entry.substring(0,1).toUpperCase() )
#end
#set ( $isFirst=false )
#set ( $stringLength = $entry.length() )
#set ( $remaining = $entry.substring(1, $stringLength) )
#set ( $camelCase = "$camelCase$first$remaining" )
#end
$util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)
Variable map support is limited and does not differentiate between number and string data types; please inject them directly if needed.
null, true, and false values will be escaped properly.
{
"version": "2018-05-29",
"statements": [
"UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
"SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
],
variableMap: {
":ID": $ctx.args.id,
## ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
}
}
Requires
Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator
License: MIT License
If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.
In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.
#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition
Data has become a catchall expression for organizations. The amount of data flowing into organizations through ever-expanding channels is staggering. In the last two years, more data has been created than in all of prior history. The speed at which businesses move today, combined with the sheer volume of data created by the digitized world, requires new ways to derive value from information.
The expressions "Big Data" and "Small Data" have become popular in the last five to ten years. However, it's not always clear what both of these terms mean or how they help us better understand our customers.
Big Data is data generated in countless ways, for example through transactions, clicks, radio-frequency identification (RFID) readers, financial data, sensors, and an increasing number of IoT-connected devices. Small Data, on the other hand, is the data we gather through primary research. It isn't only gathered from qualitative research (in-home ethnographies, online communities, focus groups, etc.) but also from quantitative survey research. It's where we ask or observe people directly to reveal their attitudes, motivations, and values.
#big data #latest news #leveraging the power of big data and small data #big data and small data #small data #big data
The opportunities big data offers also come with very real challenges that many organizations are facing today. Often, it’s finding the most cost-effective, scalable way to store and process boundless volumes of data in multiple formats that come from a growing number of sources. Then organizations need the analytical capabilities and flexibility to turn this data into insights that can meet their specific business objectives.
This Refcard dives into how a data lake helps tackle these challenges at both ends — from its enhanced architecture that’s designed for efficient data ingestion, storage, and management to its advanced analytics functionality and performance flexibility. You’ll also explore key benefits and common use cases.
As technology continues to evolve with new data sources, such as IoT sensors and social media churning out large volumes of data, there has never been a better time to discuss the possibilities and challenges of managing such data for varying analytical insights. In this Refcard, we dig deep into how data lakes solve the problem of storing and processing enormous amounts of data. While doing so, we also explore the benefits of data lakes, their use cases, and how they differ from data warehouses (DWHs).
This is a preview of the Getting Started With Data Lakes Refcard. To read the entire Refcard, please download the PDF from the link above.
#big data #data analytics #data analysis #business analytics #data warehouse #data storage #data lake #data lake architecture #data lake governance #data lake management
It is commonly known that Big Data is mainly defined by the 3 Vs: variety, velocity, and volume.
**VOLUME:** The amount of data is huge.
**VARIETY:** It contains multiple forms of data.
**VELOCITY:** Huge amounts of streaming data are continuously analyzed in near real-time.
But there is something more that differentiates big data from small data. Let’s have a look over these:
**GOAL:** Small data helps accomplish a single task by analyzing the data, whereas in Big Data the goal evolves and can be redirected toward unexpected situations. We may start with one specific goal, but over time it changes.
**LOCATION:** Small data is usually located in one place, in a single file, either in a database or on a local PC. Big data is spread across multiple servers in the cloud, in multiple locations.
**STRUCTURE:** Big data can be semi-structured or unstructured and come from many sources, compared to structured small data that sits in a single table.
**DATA PREPARATION:** Small data is usually prepared by end users for their own specific goals, so the person entering the data knows how it will be used and what to get from it. On the other hand, big data is prepared by a group of people who may not be the end users, so far more coordination is required to process the data.
#data-analysis #big-data #data-science #small data vs big data #small data