
# A Set Of Protocols for Arithmetic, Statistics and Logical Operations

Arithmosophi - Arithmosoϕ

Arithmosophi is a set of missing protocols that simplify arithmetic and statistics on generic objects or functions. Just as `Equatable` defines the `==` operator, `Addable` defines the `+` operator.

``````protocol Addable {
    func + (lhs: Self, rhs: Self) -> Self
}

[1, 2, 3, 4].sum               // 1 + 2 + 3 + 4
[0, 1, 2, 3, 4].average       // 2
[13, 2.4, 3, 4].varianceSample``````
• As you might guess, `Substractable` defines the `-` operator, `Multiplicatable` defines the `*` operator, etc. All of them are defined in Arithmosophi.swift

## Generic functions

Take a look at the `sumOf` function

``````func sumOf<T where T: Addable, T: Initializable>(input: [T]) -> T {
    return reduce(input, T()) { $0 + $1 }
}``````

Arrays of `Int`, `Double`, and even `String` can be passed as arguments to this function: any `Addable` objects.

No need to implement one function for `Double`, one for `Float`, one more for `Int`, and so on.

`sumOf` and `productOf` functions are available in Arithmosophi.swift
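To see why this works, here is a minimal self-contained sketch of the idea in current Swift syntax (the protocol names match the library, but the conformances and modern signatures shown here are assumptions):

```swift
protocol Initializable { init() }
protocol Addable { static func + (lhs: Self, rhs: Self) -> Self }

// Standard types already provide `+` and `init()`, so conformance is empty.
extension Int: Addable, Initializable {}
extension Double: Addable, Initializable {}
extension String: Addable, Initializable {}

func sumOf<T: Addable & Initializable>(_ input: [T]) -> T {
    return input.reduce(T()) { $0 + $1 }
}

sumOf([1, 2, 3, 4])    // 10
sumOf([1.5, 2.5])      // 4.0
sumOf(["a", "b", "c"]) // "abc"
```

One generic function covers every conforming type: numbers sum, strings concatenate.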

## CollectionType

This framework contains some useful extensions on `CollectionType`

``````[1, 2, 3, 4].sum     // 1 + 2 + 3 + 4
[1, 2, 3, 4].product // 1 * 2 * 3 * 4

["a","b","c","d"].sum       // "abcd", same as joinWithSeparator("")
[["a","b"],["c"],["d"]].sum // ["a","b","c","d"], same as flatMap { $0 }``````

### Average

with MesosOros.swift

Computes arithmetic average/mean

``[1, 2, 3, 4].average //  (1 + 2 + 3 + 4) / 4``

A type is `Averagable` if it can be divided by an `Int` and defines an operator to do so

``func /(lhs: Self, rhs: Int) -> Self``

All arithmetic types conform to this protocol, so you can get an average for any `CollectionType` of them

P.S. You can conform to this protocol and `Addable` to make a custom average.
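For instance, a hypothetical 2D point type only needs component-wise `+` and division by `Int` to become averagable (a self-contained sketch; the real protocol declarations live in Arithmosophi and MesosOros):

```swift
struct Point {
    var x = 0.0, y = 0.0
}

// Addable requirement: component-wise addition.
func + (lhs: Point, rhs: Point) -> Point {
    return Point(x: lhs.x + rhs.x, y: lhs.y + rhs.y)
}

// Averagable requirement: division by the element count.
func / (lhs: Point, rhs: Int) -> Point {
    return Point(x: lhs.x / Double(rhs), y: lhs.y / Double(rhs))
}

let points = [Point(x: 0, y: 0), Point(x: 4, y: 2)]
let total = points.reduce(Point(), +)
let center = total / points.count // Point(x: 2.0, y: 1.0)
```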

### Median

with MesosOros.swift

Get the median value from the array

• Returns the average of the two middle values if there is an even number of elements in the `CollectionType`.
``[1, 11, 19, 4, -7].median // 4``
• Returns the lower of the two middle values if there is an even number of elements in the `CollectionType`.
``[1.0, 11, 19.5, 4, 12, -7].medianLow // 4``
• Returns the higher of the two middle values if there is an even number of elements in the `CollectionType`.
``[1, 11, 19, 4, 12, -7].medianHigh // 11``

### Variance

with Sigma.swift

Computes variance.

``[1.0, 11, 19.5, 4, 12, -7].varianceSample``
``[1.0, 11, 19.5, 4, 12, -7].variancePopulation``
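The difference between the two is only the divisor; here is a self-contained sketch of the distinction (not the library's implementation):

```swift
let data: [Double] = [1.0, 11, 19.5, 4, 12, -7]
let n = Double(data.count)
let mean = data.reduce(0, +) / n // 6.75

// Sum of squared deviations from the mean.
let squaredDiffs = data
    .map { ($0 - mean) * ($0 - mean) }
    .reduce(0, +)

let varianceSample = squaredDiffs / (n - 1) // divides by n - 1 -> 87.575
let variancePopulation = squaredDiffs / n   // divides by n
```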

### Standard deviation

with Sigma.swift

Computes standard deviation.

``[1.0, 11, 19.5, 4, 12, -7].standardDeviationSample``
``[1.0, 11, 19.5, 4, 12, -7].standardDeviationPopulation``

### Skewness

with Sigma.swift

Computes skewness.

``[1.0, 11, 19.5, 4, 12, -7].skewness // or .moment.skewness``

### Kurtosis

with Sigma.swift

Computes kurtosis.

``[1.0, 11, 19.5, 4, 12, -7].kurtosis // or .moment.kurtosis``

### Covariance

with Sigma.swift

Computes covariance with another `CollectionType`

``[1, 2, 3.5, 3.7, 8, 12].covarianceSample([0.5, 1, 2.1, 3.4, 3.4, 4])``
• population covariance
``[1, 2, 3.5, 3.7, 8, 12].covariancePopulation([0.5, 1, 2.1, 3.4, 3.4, 4])``
• Pearson product-moment correlation coefficient
``[1, 2, 3.5, 3.7, 8, 12].pearson([0.5, 1, 2.1, 3.4, 3.4, 4])``

## Complex

with Complex.swift. `Complex` is a struct of two `ArithmeticType` values: the real and the imaginary component.

``````var complex = Complex(real: 12, imaginary: 9)
complex = 12 + 9.i``````

You can apply operations on it (`+`, `-`, `*`, `/`, `++`, `--`, prefix `-`)

``````let result = complex + 8 // Complex(real: 20, imaginary: 9)

Complex(real: 12, imaginary: 9) + Complex(real: 8, imaginary: 1)
// Complex(real: 20, imaginary: 10)
``````
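Multiplication, for example, follows the textbook rule. A self-contained sketch with `Double` components (the library's generic struct may differ):

```swift
struct Complex {
    var real = 0.0
    var imaginary = 0.0
}

// (a + bi)(c + di) = (ac - bd) + (ad + bc)i
func * (lhs: Complex, rhs: Complex) -> Complex {
    return Complex(real: lhs.real * rhs.real - lhs.imaginary * rhs.imaginary,
                   imaginary: lhs.real * rhs.imaginary + lhs.imaginary * rhs.real)
}

let product = Complex(real: 12, imaginary: 9) * Complex(real: 8, imaginary: 1)
// Complex(real: 87.0, imaginary: 84.0)
```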

## Object attributes

The power of these simple arithmetic protocols is unleashed when using operators

If we implement a box object containing a generic `T` value

``````class Box<T> {
    var value: T
}``````

we can define some operators on it in a generic way, as we can with `Equatable` or `Comparable`

``````func +=<T where T: Addable> (inout box: Box<T>, addend: T) {
    box.value = box.value + addend
}
func -=<T where T: Substractable> (inout box: Box<T>, subtrahend: T) {
    box.value = box.value - subtrahend
}``````

How to use these operators:

``````var myInt = Box<Int>(5)
myInt += 37``````

For a full example, see the Prephirence file from the Prephirences framework, or the sample Box.swift

### Optional trick

For an optional attribute you can use `Initializable`, or any protocol which defines a way to get a default value

``````class Box<T> {
    var value: T?
}

func +=<T where T: Addable, T: Initializable> (inout box: Box<T>, addend: T) {
    box.value = (box.value ?? T()) + addend
}``````

## Logical operations

`LogicalOperationsType` is a missing protocol for `Bool`, inspired by `BitwiseOperationsType` (or `IntegerArithmeticType`)

The purpose is the same: implement functions without knowing the base type

You can, for instance, implement your own `Boolean` enum and conform to the protocol

``````enum Boolean: LogicalOperationsType { case True, False }

func && (left: Boolean, @autoclosure right: () -> Boolean) -> Boolean {
    switch left {
    case .False: return .False
    case .True:  return right()
    }
}
...``````

then create only one operator on `Box` for `Bool`, `Boolean`, and any other `LogicalOperationsType`

``````func &&=<T: LogicalOperationsType> (inout box: Box<T>, @autoclosure right: () -> T) {
    box.value = box.value && right()
}``````

Take a look at the more complex enum `Optional`, which also implements `LogicalOperationsType`

## Geometry

with `Arithmos` (number) & `Statheros` (constant)

`Arithmos` and `Statheros` add, respectively, mathematical functions and constants for `Double`, `Float` and `CGFloat`, allowing you to implement generic functions without worrying about the concrete type

``````func distance<T: Arithmos>(#x: T, y: T) -> T {
    return x.hypot(y)
}

func radiansFromDegrees<T where T: Multiplicable, T: Dividable, T: Arithmos, T: Statheros>(degrees: T) -> T {
    return degrees * T.PI / T(180.0)
}``````

Take a look at Geometry.swift for more examples

## Setup

### Using CocoaPods

``pod 'Arithmosophi'``

Not interested in the full framework? Install a subset with:

``````pod 'Arithmosophi/Core' # Arithmosophi.swift
pod 'Arithmosophi/Logical' # LogicalOperationsType.swift
pod 'Arithmosophi/Complex' # Complex.swift
pod 'Arithmosophi/MesosOros' # MesosOros.swift
pod 'Arithmosophi/Arithmos' # Arithmos.swift
pod 'Arithmosophi/Sigma' # Sigma.swift
pod 'Arithmosophi/Statheros' # Statheros.swift

pod 'Arithmosophi/Samples' # Samples/*.swift (not installed by default)``````

Add `use_frameworks!` to the end of the `Podfile`.

#### Make your own framework dependent

In your podspec file:

``s.dependency 'Arithmosophi'``

or depend only on the targets you want:

``````s.dependency 'Arithmosophi/Core'
s.dependency 'Arithmosophi/Logical'``````

### Using Xcode

Author: phimage
Source Code: https://github.com/phimage/Arithmosophi


## A Simple Wrapper Around Amplify AppSync Simulator

This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.

## Install

``````npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator
``````

## Usage

This plugin relies on your serverless yml file and on the `serverless-offline` plugin.

``````plugins:
- serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
- serverless-appsync-simulator
- serverless-offline
``````

Note: Order matters. `serverless-appsync-simulator` must go before `serverless-offline`

To start the simulator, run the following command:

``````sls offline start
``````

You should see in the logs something like:

``````...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...
``````

## Configuration

Put options under `custom.appsync-simulator` in your `serverless.yml` file

| option | default | description |
| ------ | ------- | ----------- |
| apiKey | `0123456789` | When using `API_KEY` as the authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, this value is used as a starting point, and each subsequent API gets a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, this value is used as a starting point, and each subsequent API gets a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda function handlers. |
| refMap | {} | A mapping of resource resolutions for the `Ref` function |
| getAttMap | {} | A mapping of resource resolutions for the `GetAtt` function |
| importValueMap | {} | A mapping of resource resolutions for the `ImportValue` function |
| functions | {} | A mapping of external functions, providing the invoke url for each external function |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local. Otherwise, the port is taken from the dynamodb-local conf |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | Any other configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values: mysql \| postgres |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | `[*.graphql, *.vtl]` | Array of glob patterns to watch for hot-reloading. |

Example:

``````custom:
  appsync-simulator:
    location: '.webpack/service' # use webpack build directory
    dynamoDb:
      endpoint: 'http://my-custom-dynamo:8000'
``````

By default, the simulator will hot-reload when changes to `*.graphql` or `*.vtl` files are detected. Changes to `*.yml` files are not supported (yet? This is a Serverless Framework limitation). You will need to restart the simulator each time you change your yml files.

You can change the files being watched with the `watch` option, which is then passed to watchman as the match expression.

e.g.

``````custom:
  appsync-simulator:
    watch:
      - ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
      - "*.graphql"                                 # => a string is equivalent to `["match", "*.graphql"]`
``````

Or you can opt out by leaving an empty array or setting the option to `false`.
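For example, to disable file watching entirely (a sketch of the opt-out described above):

```yaml
custom:
  appsync-simulator:
    watch: false # or an empty array: []
```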

Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that tool instead.

## Resource CloudFormation functions resolution

This plugin supports resolution of some resources from the `Ref`, `Fn::GetAtt` and `Fn::ImportValue` functions in your yaml file. It also supports some other CFN functions such as `Fn::Join`, `Fn::Sub`, etc.

Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported CFN functions, refer to its documentation

## Basic usage

You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.

``````provider:
  environment:
    BUCKET_NAME:
      Ref: MyBucket # resolves to `my-bucket-name`

resources:
  Resources:
    MyDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: myTable
        ...
    MyBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket-name
        ...

dataSources:
  - type: AMAZON_DYNAMODB
    name: dynamosource
    config:
      tableName:
        Ref: MyDbTable # resolves to `myTable`
``````

## Override (or mock) values

Sometimes, some references cannot be resolved, as they come from an Output of another CloudFormation stack; or you might want to use mocked values in your local environment.

In those cases, you can define (or override) those values using the `refMap`, `getAttMap` and `importValueMap` options.

• `refMap` takes a mapping of resource name to value pairs
• `getAttMap` takes a mapping of resource name to attribute/values pairs
• `importValueMap` takes a mapping of import name to values pairs

Example:

``````custom:
  appsync-simulator:
    refMap:
      # Override `MyDbTable` resolution from the previous example.
      MyDbTable: 'mock-myTable'
    getAttMap:
      # define ElasticSearchInstance DomainName
      ElasticSearchInstance:
        DomainEndpoint: 'localhost:9200'
    importValueMap:
      other-service-api-url: 'https://other.api.url.com/graphql'

dataSources:
  - type: AMAZON_ELASTICSEARCH
    name: elasticsource
    config:
      # endpoint resolves as 'http://localhost:9200'
      endpoint:
        Fn::Join:
          - ''
          - - https://
            - Fn::GetAtt:
                - ElasticSearchInstance
                - DomainEndpoint
``````

### Key-value mock notation

In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (`${self:provider.stage}`) in the import name.

This notation can be used with all mocks: `refMap`, `getAttMap` and `importValueMap`

``````provider:
  environment:
    FINISH_ACTIVITY_FUNCTION_ARN:
      Fn::ImportValue: other-service-api-${self:provider.stage}-url

custom:
  serverless-appsync-simulator:
    importValueMap:
      - key: other-service-api-${self:provider.stage}-url
        value: 'https://other.api.url.com/graphql'
``````

## Limitations

This plugin only tries to resolve the following parts of the yml tree:

• `provider.environment`
• `functions[*].environment`
• `custom.appSync`

If you have the need of resolving others, feel free to open an issue and explain your use case.

For now, the supported resources automatically resolved by `Ref:` are:

• DynamoDb tables
• S3 Buckets

Feel free to open a PR or an issue to extend them as well.

## External functions

When a function is not defined within the current serverless file, you can still call it by providing an invoke url, which should point to a REST method. Make sure you specify "get" or "post" for the method. The default is "get", but you probably want "post".

``````custom:
  appsync-simulator:
    functions:
      myExternalFunction: # placeholder name for the external function
        url: https://jsonplaceholder.typicode.com/posts
        method: post
``````

## Supported Resolver types

This plugin supports resolvers implemented by `amplify-appsync-simulator`, as well as custom resolvers.

From AWS Amplify:

• NONE
• AWS_LAMBDA
• AMAZON_DYNAMODB
• PIPELINE

Implemented by this plugin:

• AMAZON_ELASTIC_SEARCH
• HTTP
• RELATIONAL_DATABASE

## Relational Database

### Sample VTL for a create mutation

``````#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
  #set( $regex = "([a-z])([A-Z]+)" )
  #set( $replacement = "$1_$2" )
  #set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
  #set( $discard = $cols.add("$toSnake") )
  #if( $util.isBoolean($ctx.args.input[$entry]) )
    #if( $ctx.args.input[$entry] )
      #set( $discard = $vals.add("1") )
    #else
      #set( $discard = $vals.add("0") )
    #end
  #else
    #set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
  #end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
  #set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
  #set( $colStr = "($colStr)" )
#end
{
  "version": "2018-05-29",
  "statements": ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}
``````

### Sample VTL for an update mutation

``````#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
  #set( $cur = $ctx.args.input[$entry] )
  #set( $regex = "([a-z])([A-Z]+)" )
  #set( $replacement = "$1_$2" )
  #set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
  #if( $util.isBoolean($cur) )
    #if( $cur )
      #set ( $cur = "1" )
    #else
      #set ( $cur = "0" )
    #end
  #end
  #if ( $util.isNullOrEmpty($update) )
    #set( $update = "$toSnake$equals'$cur'" )
  #else
    #set( $update = "$update,$toSnake$equals'$cur'" )
  #end
#end
{
  "version": "2018-05-29",
  "statements": ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}
``````

### Sample resolver for delete mutation

``````{
  "version": "2018-05-29",
  "statements": ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}
``````

### Sample mutation response VTL with support for handling AWSDateTime

``````#set ( $index = -1 )
#set ( $result = $util.parseJson($ctx.result) )
#foreach ( $column in $meta )
  #set ( $index = $index + 1 )
  #if ( $column["typeName"] == "timestamptz" )
    #set ( $time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
    #set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
    #set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
    $util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
  #end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach( $mapKey in $res.keySet() )
  #set ( $s = $mapKey.split("_") )
  #set ( $camelCase = "" )
  #set ( $isFirst = true )
  #foreach( $entry in $s )
    #if ( $isFirst )
      #set ( $first = $entry.substring(0,1) )
    #else
      #set ( $first = $entry.substring(0,1).toUpperCase() )
    #end
    #set ( $isFirst = false )
    #set ( $stringLength = $entry.length() )
    #set ( $remaining = $entry.substring(1, $stringLength) )
    #set ( $camelCase = "$camelCase$first$remaining" )
  #end
  $util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)
``````

### Using Variable Map

Variable map support is limited and does not differentiate between number and string data types; inject values directly if needed.

The values `null`, `true`, and `false` will be escaped properly.

``````{
  "version": "2018-05-29",
  "statements": [
    "UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
    "SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
  ],
  variableMap: {
    ":ID": $ctx.args.id,
##  ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
  }
}``````


Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator


## Serverless APIGateway Service Proxy


This Serverless Framework plugin supports the AWS service proxy integration feature of API Gateway. You can directly connect API Gateway to AWS services without Lambda.

## Install

Run `serverless plugin install` in your Serverless project.

``````serverless plugin install -n serverless-apigateway-service-proxy
``````

## Supported AWS services

Here is the list of services this plugin currently supports. It may expand to other services in the future; pull requests are welcome if you are interested in one.

• Kinesis Streams
• SQS
• S3
• SNS
• DynamoDB
• EventBridge

## How to use

Define the settings of the AWS services you want to integrate under `custom > apiGatewayServiceProxies` and run `serverless deploy`.

### Kinesis

Sample syntax for Kinesis proxy in `serverless.yml`.

``````custom:
  apiGatewayServiceProxies:
    - kinesis: # partitionkey is set to the API Gateway requestId by default
        path: /kinesis
        method: post
        streamName: { Ref: 'YourStream' }
        cors: true
    - kinesis:
        path: /kinesis
        method: post
        partitionKey: 'hardcodedkey' # use a static partitionkey
        streamName: { Ref: 'YourStream' }
        cors: true
    - kinesis:
        path: /kinesis/{myKey} # use a path parameter
        method: post
        partitionKey:
          pathParam: myKey
        streamName: { Ref: 'YourStream' }
        cors: true
    - kinesis:
        path: /kinesis
        method: post
        partitionKey:
          bodyParam: data.myKey # use a body parameter
        streamName: { Ref: 'YourStream' }
        cors: true
    - kinesis:
        path: /kinesis
        method: post
        partitionKey:
          queryStringParam: myKey # use a query string param
        streamName: { Ref: 'YourStream' }
        cors: true
    - kinesis: # PutRecords
        path: /kinesis
        method: post
        action: PutRecords
        streamName: { Ref: 'YourStream' }
        cors: true

resources:
  Resources:
    YourStream:
      Type: AWS::Kinesis::Stream
      Properties:
        ShardCount: 1
``````

Sample request after deploying.

``````curl https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/kinesis -d '{"message": "some data"}'  -H 'Content-Type:application/json'
``````

### SQS

Sample syntax for SQS proxy in `serverless.yml`.

``````custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /sqs
        method: post
        queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
        cors: true

resources:
  Resources:
    SQSQueue:
      Type: 'AWS::SQS::Queue'
``````

Sample request after deploying.

``````curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/sqs -d '{"message": "testtest"}' -H 'Content-Type:application/json'
``````

#### Customizing request parameters

If you'd like to pass additional data to the integration request, you can do so by including your custom API Gateway request parameters in `serverless.yml` like so:

``````custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /queue
        method: post
        queueName: !GetAtt MyQueue.QueueName
        cors: true

        requestParameters:
          'integration.request.querystring.MessageAttribute.1.Name': "'cognitoIdentityId'"
          'integration.request.querystring.MessageAttribute.1.Value.StringValue': 'context.identity.cognitoIdentityId'
          'integration.request.querystring.MessageAttribute.1.Value.DataType': "'String'"
          'integration.request.querystring.MessageAttribute.2.Name': "'cognitoAuthenticationProvider'"
          'integration.request.querystring.MessageAttribute.2.Value.StringValue': 'context.identity.cognitoAuthenticationProvider'
          'integration.request.querystring.MessageAttribute.2.Value.DataType': "'String'"
``````

The alternative way to pass `MessageAttribute` parameters is via a request body mapping template.

#### Customizing responses

Simplified response template customization

You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be `application/json`.

``````custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /queue
        method: post
        queueName: !GetAtt MyQueue.QueueName
        cors: true
        response:
          template:
            # `success` is used when the integration response is 200
            success: |-
              { "message": "accepted" }
            # `clientError` is used when the integration response is 400
            clientError: |-
              { "message": "there is an error in your request" }
            # `serverError` is used when the integration response is 500
            serverError: |-
              { "message": "there was an error handling your request" }
``````

Full response customization

If you want more control over the integration response, you can provide an array of objects for the `response` value:

``````custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /queue
        method: post
        queueName: !GetAtt MyQueue.QueueName
        cors: true
        response:
          - statusCode: 200
            selectionPattern: '2\d{2}'
            responseParameters: {}
            responseTemplates:
              application/json: |-
                { "message": "accepted" }
``````

The object keys correspond to the API Gateway integration response object.

### S3

Sample syntax for S3 proxy in `serverless.yml`.

``````custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3
        method: post
        action: PutObject
        bucket:
          Ref: S3Bucket
        key: static-key.json # use a static key
        cors: true

    - s3:
        path: /s3/{myKey} # use a path param
        method: get
        action: GetObject
        bucket:
          Ref: S3Bucket
        key:
          pathParam: myKey
        cors: true

    - s3:
        path: /s3
        method: delete
        action: DeleteObject
        bucket:
          Ref: S3Bucket
        key:
          queryStringParam: key # use a query string param
        cors: true

resources:
  Resources:
    S3Bucket:
      Type: 'AWS::S3::Bucket'
``````

Sample request after deploying.

``````curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/s3 -d '{"message": "testtest"}' -H 'Content-Type:application/json'
``````

#### Customizing request parameters

Similar to the SQS support, you can customize the default request parameters in `serverless.yml` like so:

``````custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3
        method: post
        action: PutObject
        bucket:
          Ref: S3Bucket
        cors: true

        requestParameters:
          # if requestParameters has a 'integration.request.path.object' property you should remove the key setting
          'integration.request.path.object': 'context.requestId'
``````

#### Customizing request templates

If you'd like to use custom API Gateway request templates, you can do so like this:

``````custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3
        method: get
        action: GetObject
        bucket:
          Ref: S3Bucket
        request:
          template:
            application/json: |
              #set ($context.requestOverride.path.object = $specialStuff.replaceAll('_', '-'))
              {}
``````

Note that if the client does not provide a `Content-Type` header in the request, API Gateway defaults to `application/json`.

#### Customize the Path Override in API Gateway

This new customization parameter lets the user set a custom Path Override in API Gateway instead of the default `{bucket}/{object}`. The parameter is optional; if not set, it falls back to `{bucket}/{object}`. The Path Override automatically prepends `{bucket}/`.

Please keep in mind that `key` or `path.object` still needs to be set at the moment (this may become optional later on).

Usage (with two path parameters, folder and file, and a fixed file extension):

``````custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3/{folder}/{file}
        method: get
        action: GetObject
        pathOverride: '{folder}/{file}.xml'
        bucket:
          Ref: S3Bucket
        cors: true

        requestParameters:
          # if requestParameters has a 'integration.request.path.object' property you should remove the key setting
          'integration.request.path.folder': 'method.request.path.folder'
          'integration.request.path.file': 'method.request.path.file'
          'integration.request.path.object': 'context.requestId'
``````

This will result in API Gateway setting the Path Override attribute to `{bucket}/{folder}/{file}.xml`. For example, if you navigate to the API Gateway endpoint `/language/en`, it will fetch the file in S3 from `{bucket}/language/en.xml`.

You can use a greedy path for deeper folders

The aforementioned example can also be shortened by a greedy approach. Thanks to @taylorreece for mentioning this.

``````custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3/{myPath+}
        method: get
        action: GetObject
        pathOverride: '{myPath}.xml'
        bucket:
          Ref: S3Bucket
        cors: true

        requestParameters:
          # if requestParameters has a 'integration.request.path.object' property you should remove the key setting
          'integration.request.path.myPath': 'method.request.path.myPath'
          'integration.request.path.object': 'context.requestId'
``````

This will translate, for example, `/s3/a/b/c` to `a/b/c.xml`.

#### Customizing responses

You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be `application/json`.

``````custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3
        method: post
        action: PutObject
        bucket:
          Ref: S3Bucket
        key: static-key.json
        response:
          template:
            # `success` is used when the integration response is 200
            success: |-
              { "message": "accepted" }
            # `clientError` is used when the integration response is 400
            clientError: |-
              { "message": "there is an error in your request" }
            # `serverError` is used when the integration response is 500
            serverError: |-
              { "message": "there was an error handling your request" }
``````

### SNS

Sample syntax for SNS proxy in `serverless.yml`.

``````custom:
  apiGatewayServiceProxies:
    - sns:
        path: /sns
        method: post
        topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
        cors: true

resources:
  Resources:
    SNSTopic:
      Type: AWS::SNS::Topic
``````

Sample request after deploying.

``````curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/sns -d '{"message": "testtest"}' -H 'Content-Type:application/json'
``````

#### Customizing responses

Simplified response template customization

You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be `application/json`.

``````custom:
  apiGatewayServiceProxies:
    - sns:
        path: /sns
        method: post
        topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
        cors: true
        response:
          template:
            # `success` is used when the integration response is 200
            success: |-
              { "message": "accepted" }
            # `clientError` is used when the integration response is 400
            clientError: |-
              { "message": "there is an error in your request" }
            # `serverError` is used when the integration response is 500
            serverError: |-
              { "message": "there was an error handling your request" }
``````

Full response customization

If you want more control over the integration response, you can provide an array of objects for the `response` value:

``````custom:
  apiGatewayServiceProxies:
    - sns:
        path: /sns
        method: post
        topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
        cors: true
        response:
          - statusCode: 200
            selectionPattern: '2\d{2}'
            responseParameters: {}
            responseTemplates:
              application/json: |-
                { "message": "accepted" }
``````

The object keys correspond to the API Gateway integration response object.

Content Handling and Pass Through Behaviour customization

If you want to work with binary data, you can specify `contentHandling` and `passThrough` inside the `request` object.

``````custom:
  apiGatewayServiceProxies:
    - sns:
        path: /sns
        method: post
        topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
        request:
          contentHandling: CONVERT_TO_TEXT
          passThrough: WHEN_NO_TEMPLATES
``````

The allowed values correspond to the API Gateway method integration options ContentHandling and PassthroughBehavior.

### DynamoDB

Sample syntax for DynamoDB proxy in `serverless.yml`. Currently, the supported DynamoDB Operations are `PutItem`, `GetItem` and `DeleteItem`.

``````custom:
  apiGatewayServiceProxies:
    - dynamodb:
        path: /dynamodb/{id}/{sort}
        method: put
        tableName: { Ref: 'YourTable' }
        hashKey: # set pathParam or queryStringParam as a partitionkey
          pathParam: id
          attributeType: S
        rangeKey: # required if also using a sort key; set pathParam or queryStringParam
          pathParam: sort
          attributeType: S
        action: PutItem # specify the action you want on the table
        condition: attribute_not_exists(Id) # optional Condition Expression for the table
        cors: true
    - dynamodb:
        path: /dynamodb
        method: get
        tableName: { Ref: 'YourTable' }
        hashKey:
          queryStringParam: id # use a query string parameter
          attributeType: S
        rangeKey:
          queryStringParam: sort
          attributeType: S
        action: GetItem
        cors: true
    - dynamodb:
        path: /dynamodb/{id}
        method: delete
        tableName: { Ref: 'YourTable' }
        hashKey:
          pathParam: id
          attributeType: S
        action: DeleteItem
        cors: true

resources:
  Resources:
    YourTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: YourTable
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
          - AttributeName: sort
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
          - AttributeName: sort
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
``````

Sample request after deploying.

``````curl -XPUT https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/dynamodb/<hashKey>/<sortkey> \
-H 'Content-Type:application/json'
``````

### EventBridge

Sample syntax for EventBridge proxy in `serverless.yml`.

``````custom:
  apiGatewayServiceProxies:
    - eventbridge: # source and detailType are hardcoded; detail defaults to POST body
        path: /eventbridge
        method: post
        source: 'hardcoded_source'
        detailType: 'hardcoded_detailType'
        eventBusName: { Ref: 'YourBus' }
        cors: true
    - eventbridge: # source and detailType as path parameters
        path: /eventbridge/{detailTypeKey}/{sourceKey}
        method: post
        detailType:
          pathParam: detailTypeKey
        source:
          pathParam: sourceKey
        eventBusName: { Ref: 'YourBus' }
        cors: true
    - eventbridge: # source, detail, and detailType as body parameters
        path: /eventbridge/{detailTypeKey}/{sourceKey}
        method: post
        detailType:
          bodyParam: data.detailType
        source:
          bodyParam: data.source
        detail:
          bodyParam: data.detail
        eventBusName: { Ref: 'YourBus' }
        cors: true

resources:
  Resources:
    YourBus:
      Type: AWS::Events::EventBus
      Properties:
        Name: YourEventBus
``````

Sample request after deploying.

``````curl https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/eventbridge -d '{"message": "some data"}'  -H 'Content-Type:application/json'
``````

## Common API Gateway features

### Enabling CORS

To set CORS configurations for your HTTP endpoints, simply modify your event configurations as follows:

``````custom:
  apiGatewayServiceProxies:
    - kinesis:
        path: /kinesis
        method: post
        streamName: { Ref: 'YourStream' }
        cors: true
``````

Setting cors to true assumes a default configuration which is equivalent to:

``````custom:
  apiGatewayServiceProxies:
    - kinesis:
        path: /kinesis
        method: post
        streamName: { Ref: 'YourStream' }
        cors:
          origin: '*'
          headers:
            - Content-Type
            - X-Amz-Date
            - Authorization
            - X-Api-Key
            - X-Amz-Security-Token
            - X-Amz-User-Agent
          allowCredentials: false
``````

Configuring the cors property sets the Access-Control-Allow-Origin, Access-Control-Allow-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Credentials headers in the CORS preflight response. To enable the Access-Control-Max-Age preflight response header, set the maxAge property in the cors object:

``````custom:
  apiGatewayServiceProxies:
    - kinesis:
        path: /kinesis
        method: post
        streamName: { Ref: 'YourStream' }
        cors:
          origin: '*'
          maxAge: 86400
``````

If you are using CloudFront or another CDN in front of your API Gateway, you may want to set up a Cache-Control header so that OPTIONS requests are cached, avoiding the additional hop.

To enable the Cache-Control header on preflight response, set the cacheControl property in the cors object:

``````custom:
  apiGatewayServiceProxies:
    - kinesis:
        path: /kinesis
        method: post
        streamName: { Ref: 'YourStream' }
        cors:
          origin: '*'
          headers:
            - Content-Type
            - X-Amz-Date
            - Authorization
            - X-Api-Key
            - X-Amz-Security-Token
            - X-Amz-User-Agent
          allowCredentials: false
          cacheControl: 'max-age=600, s-maxage=600, proxy-revalidate' # Caches on browser and proxy for 10 minutes and doesn't allow proxy to serve out-of-date content
``````

You can pass in any supported authorization type:

``````custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /sqs
        method: post
        queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
        cors: true

        # optional - defaults to 'NONE'
        authorizationType: 'AWS_IAM' # can be one of ['NONE', 'AWS_IAM', 'CUSTOM', 'COGNITO_USER_POOLS']

        # when using 'CUSTOM' authorization type, one should specify authorizerId
        # authorizerId: { Ref: 'AuthorizerLogicalId' }
        # when using 'COGNITO_USER_POOLS' authorization type, one can specify a list of authorization scopes
        # authorizationScopes: ['scope1', 'scope2']

resources:
  Resources:
    SQSQueue:
      Type: 'AWS::SQS::Queue'
``````

Source: AWS::ApiGateway::Method docs

### Enabling API Token Authentication

You can indicate whether the method requires clients to submit a valid API key using `private` flag:

``````custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /sqs
        method: post
        queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
        cors: true
        private: true

resources:
  Resources:
    SQSQueue:
      Type: 'AWS::SQS::Queue'
``````

which is the same syntax used in Serverless framework.

Source: AWS::ApiGateway::Method docs

### Using a Custom IAM Role

By default, the plugin will generate a role with the required permissions for each service type that is configured.

You can configure your own role by setting the `roleArn` attribute:

``````custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /sqs
        method: post
        queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
        cors: true
        roleArn: # Optional. A default role is created when not configured
          Fn::GetAtt: [CustomS3Role, Arn]

resources:
  Resources:
    SQSQueue:
      Type: 'AWS::SQS::Queue'
    CustomS3Role:
      # Custom Role definition
      Type: 'AWS::IAM::Role'
``````

### Customizing API Gateway parameters

The plugin allows one to specify which parameters the API Gateway method accepts.

A common use case is to pass custom data to the integration request:

``````custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /sqs
        method: post
        queueName: { 'Fn::GetAtt': ['SqsQueue', 'QueueName'] }
        cors: true
        acceptParameters:
          'method.request.header.Custom-Header': true
        requestParameters:
          'integration.request.querystring.MessageAttribute.1.Name': "'Custom-Header'"
          'integration.request.querystring.MessageAttribute.1.Value.StringValue': 'method.request.header.Custom-Header'
          'integration.request.querystring.MessageAttribute.1.Value.DataType': "'String'"

resources:
  Resources:
    SqsQueue:
      Type: 'AWS::SQS::Queue'
``````

Any published SQS message will have the `Custom-Header` value added as a message attribute.

### Customizing request body mapping templates

#### Kinesis

If you'd like to add content types or customize the default templates, you can do so by including your custom API Gateway request mapping template in `serverless.yml` like so:

``````# Required for using Fn::Sub
plugins:
  - serverless-cloudformation-sub-variables

custom:
  apiGatewayServiceProxies:
    - kinesis:
        path: /kinesis
        method: post
        streamName: { Ref: 'MyStream' }
        request:
          template:
            text/plain:
              Fn::Sub:
                - |
                  #set($msgBody = $util.parseJson($input.body))
                  #set($msgId = $msgBody.MessageId)
                  {
                      "Data": "$util.base64Encode($input.body)",
                      "PartitionKey": "$msgId",
                      "StreamName": "#{MyStreamArn}"
                  }
                - MyStreamArn:
                    Fn::GetAtt: [MyStream, Arn]
``````

It is important that the mapping template returns a valid `application/json` string.

#### SQS

Customizing SQS request templates requires us to force all requests to use an `application/x-www-form-urlencoded` style body. The plugin sets the `Content-Type` header to `application/x-www-form-urlencoded` for you, but API Gateway will still look for the template under the `application/json` request template type, so that is where you need to configure your request body in `serverless.yml`:

``````custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /sqs
        method: post
        queueName: { 'Fn::GetAtt': ['SqsQueue', 'QueueName'] }
        request:
          template:
            application/json: |-
              #set ($body = $util.parseJson($input.body))
              Action=SendMessage##
              &MessageGroupId=$util.urlEncode($body.event_type)##
              &MessageDeduplicationId=$util.urlEncode($body.event_id)##
              &MessageAttribute.1.Name=$util.urlEncode("X-Custom-Signature")##
              &MessageAttribute.1.Value.DataType=String##
              &MessageAttribute.1.Value.StringValue=$util.urlEncode($input.params("X-Custom-Signature"))##
              &MessageBody=$util.urlEncode($input.body)
``````

Note that the `##` at the end of each line is an empty comment. In VTL this has the effect of stripping the newline from the end of the line (as it is commented out), which makes API Gateway read all the lines in the template as one line.

Be careful when mixing additional `requestParameters` into your SQS endpoint as you may overwrite the `integration.request.header.Content-Type` and stop the request template from being parsed correctly. You may also unintentionally create conflicts between parameters passed using `requestParameters` and those in your request template. Typically you should only use the request template if you need to manipulate the incoming request body in some way.

Your custom template must also set the `Action` and `MessageBody` parameters, as these will not be added for you by the plugin.

When using a custom request body, headers sent by a client will no longer be passed through to the SQS queue (`PassthroughBehavior` is automatically set to `NEVER`). You will need to pass through headers sent by the client explicitly in the request body. Also, any custom querystring parameters in the `requestParameters` array will be ignored. These also need to be added via the custom request body.

#### SNS

Similar to the Kinesis support, you can customize the default request mapping templates in `serverless.yml` like so:

``````# Required for using Fn::Sub
plugins:
  - serverless-cloudformation-sub-variables

custom:
  apiGatewayServiceProxies:
    - sns:
        path: /sns
        method: post
        topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
        request:
          template:
            application/json:
              Fn::Sub:
                - "Action=Publish&Message=$util.urlEncode('This is a fixed message')&TopicArn=$util.urlEncode('#{MyTopicArn}')"
                - MyTopicArn: { Ref: SNSTopic }
``````

It is important that the mapping template returns a valid `application/x-www-form-urlencoded` string.

### Custom response body mapping templates

You can customize the response body by providing mapping templates for success, server errors (5xx) and client errors (4xx).

Templates must be in JSON format. If a template isn't provided, the integration response will be returned as-is to the client.

#### Kinesis Example

``````custom:
  apiGatewayServiceProxies:
    - kinesis:
        path: /kinesis
        method: post
        streamName: { Ref: 'MyStream' }
        response:
          template:
            success: |
              {
                "success": true
              }
            serverError: |
              {
                "success": false,
                "errorMessage": "Server Error"
              }
            clientError: |
              {
                "success": false,
                "errorMessage": "Client Error"
              }
``````

Author: Serverless-operations
Source Code: https://github.com/serverless-operations/serverless-apigateway-service-proxy


## How to Use Bash Set Command

Bash has many environment variables for various purposes. The set command of Bash is used to modify or display the attributes and parameters of the shell environment. This command has many options for performing different types of tasks. The uses of the set command for various purposes are described in this tutorial.

## Syntax

set [options] [arguments]

This command can be used with different options and arguments for different purposes. If no option or argument is used with this command, the shell variables are printed. A minus sign (-) before an option enables that option, and a plus sign (+) before an option disables it.

## Exit Values of Set Command

Three exit values can be returned by this command, which are mentioned in the following:

1. Zero (0) is returned when the task completes successfully.
2. One (1) is returned when an invalid argument is supplied.
3. One (1) is returned when an expected argument is missing.

## Different Options of Set Command

The purposes of the most commonly used options of the set command, each of which is demonstrated in the examples below, are the following:

• -a: Exports every variable that is defined or modified after this option is set.
• -C: Prevents an existing file from being overwritten by output redirection (noclobber).
• -e: Exits the script immediately when a command returns a non-zero status.
• -f: Disables filename expansion (globbing).
• -u: Treats the expansion of an unset variable as an error.
• -x: Prints each command before executing it (debugging mode).

## Different Examples of the Set Command

The uses of set command with different options are shown in this part of this tutorial.

Example 1: Using the Set Command with -a Option

Create a Bash file with the following script that enables the `set -a` option and initializes three variables named $v1, $v2, and $v3. Because of `-a`, these variables are exported and can be accessed after the script is executed in the current shell.

``````#!/bin/bash
#Enable the -a option to export all defined variables
set -a
#Initialize three variables
v1=78
v2=50
v3=35``````

Run the script in the current shell using the following command (if the script runs in a child shell instead, the variables are lost when that shell exits):

``$ source set1.bash``

Read the values of the variable using the “echo” command:

``$ echo $v1 $v2 $v3``

The following output appears after executing the previous commands:
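The export behaviour of `-a` can also be verified from a child process; the following is a minimal sketch (the variable names follow the example above):

```shell
#!/bin/bash
# With set -a, every plain assignment is automatically exported,
# so child processes can see it; set +a turns auto-export back off.
set -a
v1=78
set +a
v2=50   # not exported: a child shell will not see it

child_view=$(bash -c 'echo "$v1:$v2"')
echo "$child_view"   # prints "78:" - only v1 reached the child
```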

Example 2: Using the Set Command with -C Option

Run the “cat” command to create a text file named testfile.txt. Next, run the `set -C` command to disable the overwriting feature. Then, run the “cat” command again on the same file to check whether overwriting is now blocked.

``````$ cat > testfile.txt
$ set -C
$ cat > testfile.txt``````

The following output appears after executing the previous commands:
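The same noclobber behaviour can be sketched in a script, using a temporary file so nothing in the current directory is touched:

```shell
#!/bin/bash
# set -C (noclobber) makes ">" fail when the target file already exists;
# ">|" forces the overwrite anyway.
f=$(mktemp)          # mktemp creates the file, so it already exists
set -C
if ! echo "second" 2>/dev/null > "$f"; then
    blocked="yes"    # the plain ">" redirection was refused
fi
echo "forced" >| "$f"
content=$(cat "$f")
echo "$blocked $content"   # prints "yes forced"
rm -f "$f"
```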

Example 3: Using the Set Command with -x Option

Create a Bash file with the following script that declares a numeric array of 6 elements. The values of the array are printed using a for loop.

``````#!/bin/bash
#Declare an array
arr=(67 3 90 56 2 80)
#Iterate over the array values
for value in ${arr[@]}
do
  echo $value
done``````

Execute the previous script by the following command:

``$ bash set3.bash``

Enable the debugging option using the following command:

``$ set -x``

The following output appears after executing the provided commands:
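Since the trace output of `-x` is written to stderr with a `+ ` prefix, it can be captured and inspected; a small sketch:

```shell
#!/bin/bash
# set -x writes each command to stderr with a "+ " prefix before running it.
# Capture stderr of a traced child shell and discard its normal stdout.
trace=$(bash -c 'set -x; n=5; echo "$n"' 2>&1 1>/dev/null)
echo "$trace"
```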

Example 4: Using the Set Command with -e Option

Create a Bash file with the following script that reads a non-existing file with the “cat” command before and after running the `set -e` command.

``````#!/bin/bash
#Read a non-existing file without setting set -e
cat myfile.txt
#Set the set command with -e option
set -e
#Read a non-existing file after setting set -e
cat myfile.txt``````

The following output appears after executing the provided commands. The first error message is shown because the file does not exist in the current location, and execution continues to the next command. But after the `set -e` command runs, execution stops as soon as the second error message is displayed.
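The difference is easy to see by running the same failing command in two child shells, one with `-e` and one without; a minimal sketch:

```shell
#!/bin/bash
# Without set -e the shell continues past a failing command;
# with set -e it exits as soon as the command fails.
without=$(bash -c 'false; echo "kept going"')
with=$(bash -c 'set -e; false; echo "never printed"') || true
echo "without: $without"
echo "with: [$with]"   # empty: the -e shell exited at "false"
```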

Example 5: Using the Set Command with -u Option

Create a Bash file with the following script that initializes a variable, then prints an initialized and an uninitialized variable before and after running the `set -u` command.

``````#!/bin/bash
#Assign value to a variable
strvar="Bash Programming"
printf "$strvar $intvar\n"
#Set the set command with -u option
set -u
#Assign value to a variable
strvar="Bash Programming"
printf "\n$strvar $intvar\n"``````

The following output appears after executing the previous script. Here, the error is printed for the uninitialized variable:
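A compact way to see the `-u` behaviour side by side (the name `__unset_demo` is just an illustrative variable that is never set):

```shell
#!/bin/bash
# By default an unset variable expands to an empty string; with set -u
# the expansion becomes a fatal "unbound variable" error instead.
default=$(bash -c 'echo "[$__unset_demo]"')   # prints "[]"
if ! bash -c 'set -u; echo "[$__unset_demo]"' >/dev/null 2>&1; then
    status="error"                            # the -u shell failed
fi
echo "$default $status"
```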

Example 6: Using the Set Command with -f Option

Run the following command to print the list of all text files of the current location:

``$ ls *.txt``

Run the following command to disable the globbing:

``$ set -f``

Run the following command again to print the list of all text files of the current location:

``$ ls *.txt``

The following output appears after executing the previous commands. Based on the output, the “ls *.txt” command no longer matched the files after the `set -f` command was set:
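The globbing switch can also be demonstrated inside a script, using a throwaway directory:

```shell
#!/bin/bash
# set -f disables filename expansion (globbing), so *.txt stays literal.
dir=$(mktemp -d)
cd "$dir"
touch a.txt b.txt

globbed=$(echo *.txt)   # globbing on: expands to the matching files
set -f
literal=$(echo *.txt)   # globbing off: the pattern is kept as-is
set +f

echo "$globbed / $literal"   # prints "a.txt b.txt / *.txt"
cd / && rm -rf "$dir"
```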

Example 7: Split the String Using the Set Command with Variable

Create a Bash file with the following script that splits a string value on spaces using the `set --` command with a variable. The split values are printed afterward.

``````#!/bin/bash
#Define a string variable
myvar="Learn bash programming"
#Use the set command with -- and the variable
set -- $myvar
#Print the split values
printf "$1\n$2\n$3\n"``````

The following output appears after executing the previous script. The string value is divided into three parts based on the space that is printed:
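The positional parameters created by `set --` can also be counted and inspected individually; a short sketch:

```shell
#!/bin/bash
# "set -- $string" word-splits the string into the positional parameters.
myvar="Learn bash programming"
set -- $myvar
count=$#                 # number of resulting words
first=$1; second=$2; third=$3
echo "count: $count"     # count: 3
echo "$first|$second|$third"
```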

## Conclusion

This tutorial showed the uses of the different options of the “set” command through multiple examples covering its basic usage.

Original article source at: https://linuxhint.com/


## Ternary operator in Python?

1. Ternary Operator in Python

What is a ternary operator? The ternary operator is a conditional expression: it evaluates a condition and yields one of two values depending on whether the condition is true or false. It is the shortest way to write an if-else statement, replacing a multiline if-else block with a single line.

syntax (in C-like languages): condition ? value_if_true : value_if_false

Python has no `?:` operator; its equivalent conditional expression is written:

value_if_true if condition else value_if_false

condition: a boolean expression that evaluates to true or false.

value_if_true: the value produced if the condition evaluates to true.

value_if_false: the value produced if the condition evaluates to false.

Here are some examples of how to use the ternary operator (conditional expression) in Python.

A brief description of the examples: take two variables a and b, where a is 10 and b is 20, and find the minimum using a ternary expression in one line of code ( **min = a if a < b else b** ): if a is less than b, a is used, otherwise b. The second example works the same way, and the third example checks whether a number is even or odd.
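The examples described above can be sketched as follows, including the dict and lambda variants of the same idea (all names are illustrative):

```python
# Python's conditional expression: value_if_true if condition else value_if_false
a, b = 10, 20

minimum = a if a < b else b                 # 10, since a < b
parity = "even" if a % 2 == 0 else "odd"    # "even", since 10 % 2 == 0

# Two common variants of the same idea: a dict lookup and a lambda.
minimum_dict = {True: a, False: b}[a < b]
minimum_lambda = (lambda x, y: x if x < y else y)(a, b)

print(minimum, parity, minimum_dict, minimum_lambda)  # 10 even 10 10
```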



## What Is Face Recognition? Facial Recognition with Python and OpenCV

In this article, we will learn what face recognition is and how it differs from face detection. We will go briefly over the theory of face recognition and then jump to the coding section. By the end of this article, you will be able to make a face recognition program that recognizes faces in images as well as in a live webcam feed.

## What is Face Detection?

In computer vision, one essential problem we are trying to solve is automatically detecting objects in an image without human intervention. Face detection is such a problem: detecting human faces in an image. There may be slight differences between human faces, but overall it is safe to say that certain features are associated with all of them. There are various face detection algorithms, but the Viola-Jones algorithm is one of the oldest methods still in use today, and we will use it later in this article. You can go through the Viola-Jones algorithm after completing this article, as I'll link it at the end.

Face detection is usually the first step towards many face-related technologies, such as face recognition or verification. However, face detection can have very useful applications. The most successful application of face detection would probably be photo taking. When you take a photo of your friends, the face detection algorithm built into your digital camera detects where the faces are and adjusts the focus accordingly.


## What is Face Recognition?

Now that we are successful in making algorithms that can detect faces, can we also recognise whose faces they are?

Face recognition is a method of identifying or verifying the identity of an individual using their face. There are various algorithms that can do face recognition but their accuracy might vary. Here I am going to describe how we do face recognition using deep learning.

So now let us understand how we recognise faces using deep learning. We make use of face embedding in which each face is converted into a vector and this technique is called deep metric learning. Let me further divide this process into three simple steps for easy understanding:

Face Detection: The very first task we perform is detecting faces in the image or video stream. Now that we know the exact location/coordinates of face, we extract this face for further processing ahead.

Feature Extraction: Now that we have cropped the face out of the image, we extract features from it. Here we are going to use face embeddings to extract the features out of the face. A neural network takes an image of the person’s face as input and outputs a vector which represents the most important features of a face. In machine learning, this vector is called embedding and thus we call this vector as face embedding. Now how does this help in recognizing faces of different persons?

While training the neural network, the network learns to output similar vectors for faces that look similar. For example, if I have multiple images of faces within different timespan, of course, some of the features of my face might change but not up to much extent. So in this case the vectors associated with the faces are similar or in short, they are very close in the vector space. Take a look at the below diagram for a rough idea:

Now after training the network, the network learns to output vectors that are closer to each other(similar) for faces of the same person(looking similar). The above vectors now transform into:

We are not going to train such a network here as it takes a significant amount of data and computation power to train such networks. We will use a pre-trained network trained by Davis King on a dataset of ~3 million images. The network outputs a vector of 128 numbers which represent the most important features of a face.

Now that we know how this network works, let us see how we use this network on our own data. We pass all the images in our data to this pre-trained network to get the respective embeddings and save these embeddings in a file for the next step.

Comparing faces: Now that we have face embeddings for every face in our data saved in a file, the next step is to recognise a new image that is not in our data. So the first step is to compute the face embedding for the new image using the same network we used above, and then compare this embedding with the rest of the embeddings we have. We recognise the face if the generated embedding is close (similar) to any other embedding, as shown below:

So we passed two images, one of the images is of Vladimir Putin and other of George W. Bush. In our example above, we did not save the embeddings for Putin but we saved the embeddings of Bush. Thus when we compared the two new embeddings with the existing ones, the vector for Bush is closer to the other face embeddings of Bush whereas the face embeddings of Putin are not closer to any other embedding and thus the program cannot recognise him.
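The comparison step can be sketched with plain NumPy. The 4-dimensional vectors and the names below are toy values (real embeddings are 128-dimensional), and 0.6 is the distance threshold commonly used with dlib's embeddings:

```python
import numpy as np

# Toy "face embeddings" for two known people (illustrative values only).
known = {
    "bush":  np.array([0.9, 0.1, 0.0, 0.2]),
    "putin": np.array([0.0, 0.8, 0.7, 0.1]),
}
new_face = np.array([0.85, 0.15, 0.05, 0.25])  # embedding of an unseen photo

# Euclidean distance to every stored embedding; the closest one wins
# if it falls under the match threshold.
distances = {name: float(np.linalg.norm(vec - new_face)) for name, vec in known.items()}
best = min(distances, key=distances.get)
match = best if distances[best] < 0.6 else "unknown"
print(match)  # bush
```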

## What is OpenCV

In the field of Artificial Intelligence, computer vision is one of the most interesting and challenging tasks. Computer vision acts as a bridge between computer software and the visual world around us, allowing software to understand and learn from its surroundings. For example: determining a fruit based on its color, shape and size. This task can be very easy for the human brain, but in the computer vision pipeline we first gather the data, then perform the data-processing activities, and then train the model to distinguish between the fruits based on their size, shape and color.

Currently, various packages are available for machine learning, deep learning and computer vision tasks. OpenCV is an open-source library well suited to such activities. It is supported by various programming languages such as R and Python, and it runs on most platforms, including Windows, Linux and macOS.

To know more about how face recognition works on opencv, check out the free course on face recognition in opencv.

• OpenCV is an open-source library and is free of cost.
• Compared to other libraries, it is fast since it is written in C/C++.
• It works better on systems with less RAM.
• It supports most operating systems, such as Windows, Linux and macOS.

Installation:

Here we will be focusing on installing OpenCV for python only. We can install OpenCV using pip or conda(for anaconda environment).

1. Using pip:

Using pip, the installation process of openCV can be done by using the following command in the command prompt.

pip install opencv-python

2. Anaconda:

If you are using anaconda environment, either you can execute the above code in anaconda prompt or you can execute the following code in anaconda prompt.

conda install -c conda-forge opencv

## Face Recognition using Python

In this section, we shall implement face recognition using OpenCV and Python. First, let us see the libraries we will need and how to install them:

• OpenCV
• dlib
• Face_recognition

OpenCV is an image and video processing library and is used for image and video analysis, like facial detection, license plate reading, photo editing, advanced robotic vision, optical character recognition, and a whole lot more.

The dlib library, maintained by Davis King, contains an implementation of “deep metric learning” which is used to construct the face embeddings for the actual recognition process.

The face_recognition  library, created by Adam Geitgey, wraps around dlib’s facial recognition functionality, and this library is super easy to work with and we will be using this in our code. Remember to install dlib library first before you install face_recognition.

To install OpenCV, type in the command prompt:

pip install opencv-python

I have tried various ways to install dlib on Windows, but the easiest of all of them is via Anaconda. First, install Anaconda (here is a guide to install it) and then use this command in your command prompt:

conda install -c conda-forge dlib

Next, to install face_recognition, type in the command prompt:

pip install face_recognition

Now that we have all the dependencies installed, let us start coding. We will have to create three files: the first will take our dataset and extract a face embedding for each face using dlib, saving those embeddings to a file. The second will compare new faces with the saved embeddings to recognise faces in images, and the third will do the same on a live webcam feed.

#### Extracting features from Face

First, you need to get a dataset, or even create one of your own. Just make sure to arrange all the images in folders, with each folder containing images of just one person.

Next, save the dataset in the same folder where you are going to create the script. Now here is the code:
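A minimal sketch of such an encoding script, assuming a `dataset/` folder with one sub-folder per person and the opencv-python and face_recognition packages installed (folder and file names are illustrative):

```python
# Minimal sketch: walk dataset/<person>/<image> folders, compute one
# 128-d embedding per detected face, and pickle them to "face_enc".
import os
import pickle

import cv2
import face_recognition

known_encodings, known_names = [], []

for person in os.listdir("dataset"):
    person_dir = os.path.join("dataset", person)
    for filename in os.listdir(person_dir):
        image = cv2.imread(os.path.join(person_dir, filename))
        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # dlib expects RGB
        boxes = face_recognition.face_locations(rgb, model="hog")
        for encoding in face_recognition.face_encodings(rgb, boxes):
            known_encodings.append(encoding)
            known_names.append(person)

with open("face_enc", "wb") as f:
    pickle.dump({"encodings": known_encodings, "names": known_names}, f)
```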

Now that we have stored the embedding in a file named “face_enc”, we can use them to recognise faces in images or live video stream.

#### Face Recognition in Live webcam Feed

Here is the script to recognise faces on a live webcam feed:
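A minimal sketch of such a webcam script, assuming the embeddings were saved to “face_enc” as above and using OpenCV's bundled Haar cascade for detection:

```python
# Minimal sketch: detect faces with a Haar cascade, then match each face
# against the embeddings pickled to "face_enc". Press "q" to quit.
import pickle

import cv2
import face_recognition

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
data = pickle.loads(open("face_enc", "rb").read())

video = cv2.VideoCapture(0)
while True:
    ret, frame = video.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    rects = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Haar returns (x, y, w, h); face_recognition wants (top, right, bottom, left)
    boxes = [(y, x + w, y + h, x) for (x, y, w, h) in rects]

    for box, encoding in zip(boxes, face_recognition.face_encodings(rgb, boxes)):
        matches = face_recognition.compare_faces(data["encodings"], encoding)
        name = "Unknown"
        if True in matches:
            # Vote: the known name matched most often wins.
            counts = {}
            for i, matched in enumerate(matches):
                if matched:
                    counts[data["names"][i]] = counts.get(data["names"][i], 0) + 1
            name = max(counts, key=counts.get)
        top, right, bottom, left = box
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
        cv2.putText(frame, name, (left, top - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()
```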

Although in the example above we have used a Haar cascade to detect faces, you can also use face_recognition.face_locations to detect faces, as we did in the previous script.

#### Face Recognition in Images

The script for detecting and recognising faces in images is almost the same as what you saw above. Try it yourself: load the image with cv2.imread instead of reading webcam frames, and the rest of the logic stays unchanged.
