Terraform module that creates almost all supported AWS Lambda resources, and takes care of building and packaging the required Lambda dependencies for functions and layers.
This Terraform module is part of the serverless.tf framework, which aims to simplify all operations when working with serverless in Terraform, alongside companion modules such as HTTP API Gateway (see the examples there).
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda1"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../src/lambda-function1"
tags = {
Name = "my-lambda1"
}
}
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "lambda-with-layer"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
publish = true
source_path = "../src/lambda-function1"
store_on_s3 = true
s3_bucket = "my-bucket-id-with-lambda-builds"
layers = [
module.lambda_layer_s3.lambda_layer_arn,
]
environment_variables = {
Serverless = "Terraform"
}
tags = {
Module = "lambda-with-layer"
}
}
module "lambda_layer_s3" {
source = "terraform-aws-modules/lambda/aws"
create_layer = true
layer_name = "lambda-layer-s3"
description = "My amazing lambda layer (deployed from S3)"
compatible_runtimes = ["python3.8"]
source_path = "../src/lambda-layer"
store_on_s3 = true
s3_bucket = "my-bucket-id-with-lambda-builds"
}
module "lambda_function_existing_package_local" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda-existing-package-local"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
create_package = false
local_existing_package = "../existing_package.zip"
}
If you want to manage function code and infrastructure resources (such as IAM permissions, policies, events, etc.) in separate flows (e.g., different repositories, teams, CI/CD pipelines), disable source code tracking to turn off deployments (and rollbacks) through the module by setting `ignore_source_code_hash = true` and deploying a dummy function.
Once the infrastructure and the dummy function are deployed, you can use an external tool (e.g., the AWS CLI) to update the source code of the function, while continuing to use this module via Terraform to manage the infrastructure.
Be aware that changes in the `local_existing_package` value may trigger a deployment via Terraform.
module "lambda_function_externally_managed_package" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda-externally-managed-package"
description = "My lambda function code is deployed separately"
handler = "index.lambda_handler"
runtime = "python3.8"
create_package = false
local_existing_package = "./lambda_functions/code.zip"
ignore_source_code_hash = true
}
Note that this module does not copy prebuilt packages into an S3 bucket. It can only store packages it builds itself, either locally or in an S3 bucket.
locals {
my_function_source = "../path/to/package.zip"
}
resource "aws_s3_bucket" "builds" {
bucket = "my-builds"
acl = "private"
}
resource "aws_s3_object" "my_function" {
bucket = aws_s3_bucket.builds.id
key = "${filemd5(local.my_function_source)}.zip"
source = local.my_function_source
}
module "lambda_function_existing_package_s3" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda-existing-package-local"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
create_package = false
s3_existing_package = {
bucket = aws_s3_bucket.builds.id
key = aws_s3_object.my_function.id
}
}
module "lambda_function_container_image" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda-existing-package-local"
description = "My awesome lambda function"
create_package = false
image_uri = "132367819851.dkr.ecr.eu-west-1.amazonaws.com/complete-cow:1.0"
package_type = "Image"
}
module "lambda_layer_local" {
source = "terraform-aws-modules/lambda/aws"
create_layer = true
layer_name = "my-layer-local"
description = "My amazing lambda layer (deployed from local)"
compatible_runtimes = ["python3.8"]
source_path = "../fixtures/python3.8-app1"
}
module "lambda_layer_s3" {
source = "terraform-aws-modules/lambda/aws"
create_layer = true
layer_name = "my-layer-s3"
description = "My amazing lambda layer (deployed from S3)"
compatible_runtimes = ["python3.8"]
source_path = "../fixtures/python3.8-app1"
store_on_s3 = true
s3_bucket = "my-bucket-id-with-lambda-builds"
}
Make sure you deploy Lambda@Edge functions into the US East (N. Virginia) region (`us-east-1`). See Requirements and Restrictions on Lambda Functions.
module "lambda_at_edge" {
source = "terraform-aws-modules/lambda/aws"
lambda_at_edge = true
function_name = "my-lambda-at-edge"
description = "My awesome lambda@edge function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../fixtures/python3.8-app1"
tags = {
Module = "lambda-at-edge"
}
}
module "lambda_function_in_vpc" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda-in-vpc"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../fixtures/python3.8-app1"
vpc_subnet_ids = module.vpc.intra_subnets
vpc_security_group_ids = [module.vpc.default_security_group_id]
attach_network_policy = true
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "my-vpc"
cidr = "10.10.0.0/16"
# Specify at least one of: intra_subnets, private_subnets, or public_subnets
azs = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
intra_subnets = ["10.10.101.0/24", "10.10.102.0/24", "10.10.103.0/24"]
}
There are six supported ways to attach IAM policies to the IAM role used by the Lambda Function:
- `policy_json` - JSON string or heredoc, when `attach_policy_json = true`.
- `policy_jsons` - List of JSON strings or heredocs, when `attach_policy_jsons = true` and `number_of_policy_jsons > 0`.
- `policy` - ARN of an existing IAM policy, when `attach_policy = true`.
- `policies` - List of ARNs of existing IAM policies, when `attach_policies = true` and `number_of_policies > 0`.
- `policy_statements` - Map of maps to define IAM statements which will be generated as an IAM policy. Requires `attach_policy_statements = true`. See examples/complete for more information.
- `assume_role_policy_statements` - Map of maps to define IAM statements which will be generated as an IAM policy for assuming the Lambda Function role (trust relationship). See examples/complete for more information.
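For instance, here is a minimal sketch combining two of these options, using the argument names from the inputs table below (the policy content and statement names are illustrative):
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"
  # ... function configuration omitted for brevity

  attach_policy_json = true
  policy_json = <<-EOT
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["xray:GetSamplingStatisticSummaries"],
          "Resource": ["*"]
        }
      ]
    }
  EOT

  attach_policy_statements = true
  policy_statements = {
    dynamodb = {
      effect    = "Allow",
      actions   = ["dynamodb:BatchWriteItem"],
      resources = ["arn:aws:dynamodb:eu-west-1:052212379155:table/Test"]
    }
  }
}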
Lambda Permissions should be specified to allow certain resources to invoke a Lambda Function:
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
# ...omitted for brevity
allowed_triggers = {
APIGatewayAny = {
service = "apigateway"
source_arn = "arn:aws:execute-api:eu-west-1:135367859851:aqnku8akd0/*/*/*"
},
APIGatewayDevPost = {
service = "apigateway"
source_arn = "arn:aws:execute-api:eu-west-1:135367859851:aqnku8akd0/dev/POST/*"
},
OneRule = {
principal = "events.amazonaws.com"
source_arn = "arn:aws:events:eu-west-1:135367859851:rule/RunDaily"
}
}
}
Sometimes you need a way to create resources conditionally, but Terraform does not allow the use of `count` inside a `module` block, so the solution is to specify `create` arguments:
module "lambda" {
source = "terraform-aws-modules/lambda/aws"
create = false # to disable all resources
create_package = false # to control build package process
create_function = false # to control creation of the Lambda Function and related resources
create_layer = false # to control creation of the Lambda Layer and related resources
create_role = false # to control creation of the IAM role and policies required for Lambda Function
attach_cloudwatch_logs_policy = false
attach_dead_letter_policy = false
attach_network_policy = false
attach_tracing_policy = false
attach_async_event_policy = false
# ... omitted
}
This is one of the most complicated parts handled by the module, and normally you don't have to know the internals.
`package.py` is the Python script that does it. Make sure Python 3.6 or newer is installed. The main functions of the script are to generate a filename for the zip archive based on the content of the files, verify whether the zip archive has already been created, and create the zip archive only when necessary (during `apply`, not `plan`).
The hash of a zip archive created from the same file content is always identical, which prevents unnecessary force-updates of the Lambda resources unless the content changes. If you need different filenames for the same content, you can specify the extra string argument `hash_extra`.
When calling this module multiple times in one execution to create packages with the same `source_path`, the zip archives will be corrupted due to concurrent writes into the same file. There are two solutions: set different values for `hash_extra` to create different archives, or create the package once outside (using this module) and then pass the `local_existing_package` argument to create the other Lambda resources.
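For example, here is a sketch of two functions built from the same `source_path` and disambiguated with `hash_extra` (module and function names are illustrative):
module "lambda_one" {
  source        = "terraform-aws-modules/lambda/aws"
  function_name = "function-one"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"
  source_path   = "../src/shared-code"
  hash_extra    = "function-one" # ensures a distinct package filename
}
module "lambda_two" {
  source        = "terraform-aws-modules/lambda/aws"
  function_name = "function-two"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"
  source_path   = "../src/shared-code"
  hash_extra    = "function-two"
}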
Building and packaging have historically been hard to debug (especially with Terraform), so we made an effort to make it easier for users to see debug info. There are four different debug levels: `DEBUG` - to see only what is happening during the planning phase and how the zip file content is filtered when patterns are applied; `DEBUG2` - to see more logging output; `DEBUG3` - to see all logging values; `DUMP_ENV` - to see all logging values and environment variables (be careful sharing your environment variables, as they may contain secrets!).
You can specify the debug level like this:
export TF_LAMBDA_PACKAGE_LOG_LEVEL=DEBUG2
terraform apply
You can enable comments in heredoc strings in `patterns`, which can be helpful in some situations. To do this, set this environment variable:
export TF_LAMBDA_PACKAGE_PATTERN_COMMENTS=true
terraform apply
You can specify `source_path` in a variety of ways to achieve the desired flexibility when building deployment packages locally or in Docker. You can use absolute or relative paths. If you have placed Terraform files in subdirectories, note that relative paths are specified from the directory where `terraform plan` is run, not from the location of your Terraform file.
Note that, when building locally, files are not copied anywhere from the source directories when making packages; we use fast Python regular expressions to find matching files and directories, which makes packaging very fast and easy to understand.
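If you want a path that does not depend on where `terraform plan` is run, you can anchor it to the directory of the calling module with the standard `path.module` reference (a general Terraform pattern, not specific to this module; the directory name is illustrative):
source_path = "${path.module}/../src/lambda-function1"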
When `source_path` is set to a string, the content of that path will be used to create the deployment package as-is:
source_path = "src/function1"
When `source_path` is set to a list of directories, the content of each will be taken and one archive will be created.
This is the most complete way of creating a deployment package from multiple sources with multiple dependencies. This example shows some of the available options (see examples/build-package for more):
source_path = [
"src/main-source",
"src/another-source/index.py",
{
path = "src/function1-dep",
patterns = [
"!.*/.*\\.txt", # Skip all txt files recursively
]
}, {
path = "src/python3.8-app1",
pip_requirements = true,
pip_tmp_dir = "/tmp/dir/location"
prefix_in_zip = "foo/bar1",
}, {
path = "src/python3.8-app2",
pip_requirements = "requirements-large.txt",
patterns = [
"!vendor/colorful-0.5.4.dist-info/RECORD",
"!vendor/colorful-.+.dist-info/.*",
"!vendor/colorful/__pycache__/?.*",
]
}, {
path = "src/nodejs14.x-app1",
npm_requirements = true,
npm_tmp_dir = "/tmp/dir/location"
prefix_in_zip = "foo/bar1",
}, {
path = "src/python3.8-app3",
commands = [
"npm install",
":zip"
],
patterns = [
"!.*/.*\\.txt", # Skip all txt files recursively
"node_modules/.+", # Include all node_modules
],
}, {
path = "src/python3.8-app3",
commands = ["go build"],
patterns = <<END
bin/.*
abc/def/.*
END
}
]
A few notes:
- When the source path references a folder and the runtime begins with `python` or `nodejs`, the build process will automatically build Python and Node.js dependencies if a `requirements.txt` or `package.json` file is found in the source folder. If you want to customize this behavior, please use the object notation as explained below.
- All arguments except `path` are optional.
- `patterns` - List of Python regexes that filenames should satisfy. The default value is "include everything", which is equal to `patterns = [".*"]`. This can also be specified as a multiline heredoc string (no comments allowed). Some examples of valid patterns:
  !.*/.*\.txt        # Filter all txt files recursively
  node_modules/.*    # Include empty dir or with a content if it exists
  node_modules/.+    # Include full non empty node_modules dir with its content
  node_modules/      # Include node_modules itself without its content
                     # It's also a way to include an empty dir if it exists
  node_modules       # Include a file or an existing dir only
  !abc/.*            # Filter out everything in an abc folder
  abc/def/.*         # Re-include everything in abc/def sub folder
  !abc/def/hgk/.*    # Filter out again in abc/def/hgk sub folder
- `commands` - List of commands to run. If specified, this argument overrides `pip_requirements` and `npm_requirements`. `:zip [source] [destination]` is a special command which zips the content of the current working directory (first argument) and places it inside the archive at the given path (second argument).
- `pip_requirements` - Controls whether to execute `pip install`. Set to `false` to disable this feature, or `true` to run `pip install` with the `requirements.txt` found in `path`. Or set it to another filename that you want to use instead.
- `pip_tmp_dir` - Sets the base directory used to create the temporary directory for pip installs. Can be useful for Docker-in-Docker builds.
- `npm_requirements` - Controls whether to execute `npm install`. Set to `false` to disable this feature, or `true` to run `npm install` with the `package.json` found in `path`. Or set it to another filename that you want to use instead.
- `npm_tmp_dir` - Sets the base directory used to create the temporary directory for npm installs. Can be useful for Docker-in-Docker builds.
- `prefix_in_zip` - If specified, will be used as a prefix inside the zip archive. By default, everything installs into the root of the zip archive.

If your Lambda Function or Layer uses some dependencies, you can build them in Docker and have them included in the deployment package. Here is how you can do it:
build_in_docker = true
docker_file = "src/python3.8-app1/docker/Dockerfile"
docker_build_root = "src/python3.8-app1/docker"
docker_image = "public.ecr.aws/sam/build-python3.8"
runtime = "python3.8" # Setting runtime is required when building package in Docker and Lambda Layer resource.
Using this module, you can install dependencies from private hosts. To do this, you need to forward your SSH agent:
docker_with_ssh_agent = true
Note that, by default, the `docker_image` used comes from the registry `public.ecr.aws/sam/` and will be based on the `runtime` that you specify. In other words, if you specify a runtime of `python3.8` and do not specify `docker_image`, then `docker_image` will resolve to `public.ecr.aws/sam/build-python3.8`. This ensures that, by default, the `runtime` is available in the Docker container.
If you override `docker_image`, be sure to keep the image in sync with your `runtime`. During the plan phase, when using Docker, there is no check that the `runtime` is available to build the package. That means that if you use an image that does not have the runtime, the plan will still succeed, but the apply will fail.
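For example, a sketch of overriding the image while keeping it aligned with the runtime (the image tag is illustrative):
build_in_docker = true
runtime         = "python3.8"
docker_image    = "public.ecr.aws/sam/build-python3.8:latest" # the image must provide the runtime above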
To add flexibility when building in Docker, you can pass any number of additional options you require (see the Docker run reference for available options):
docker_additional_options = [
"-e", "MY_ENV_VAR='My environment variable value'",
"-v", "/local:/docker-vol",
]
To override the Docker entrypoint when building in Docker, set `docker_entrypoint`:
docker_entrypoint = "/entrypoint/entrypoint.sh"
The entrypoint must map to a path within your container, so you need to either build your own image that contains the entrypoint or map it to a file on the host by mounting a volume (see Passing additional Docker options).
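For example, a sketch that mounts a host directory containing the entrypoint into the build container (the paths are hypothetical):
build_in_docker           = true
docker_entrypoint         = "/entrypoint/entrypoint.sh"
docker_additional_options = ["-v", "${path.module}/entrypoint:/entrypoint:ro"] # makes the script visible at /entrypoint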
By default, this module creates a deployment package and uses it to create or update the Lambda Function or Lambda Layer.
Sometimes you may want to separate the build of the deployment package (e.g., to compile and install dependencies) from the deployment of the package, as two separate steps.
When creating the archive locally outside of this module, you need to set `create_package = false` and the argument `local_existing_package = "existing_package.zip"`. Alternatively, you may prefer to keep your deployment packages in an S3 bucket and provide a reference to them like this:
create_package = false
s3_existing_package = {
bucket = "my-bucket-with-lambda-builds"
key = "existing_package.zip"
}
This can be implemented in two steps: download the file locally using curl, and pass the path to the deployment package via the `local_existing_package` argument.
locals {
package_url = "https://raw.githubusercontent.com/terraform-aws-modules/terraform-aws-lambda/master/examples/fixtures/python3.8-zip/existing_package.zip"
downloaded = "downloaded_package_${md5(local.package_url)}.zip"
}
resource "null_resource" "download_package" {
triggers = {
downloaded = local.downloaded
}
provisioner "local-exec" {
command = "curl -L -o ${local.downloaded} ${local.package_url}"
}
}
data "null_data_source" "downloaded_package" {
inputs = {
id = null_resource.download_package.id
filename = local.downloaded
}
}
module "lambda_function_existing_package_from_remote_url" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda-existing-package-local"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
create_package = false
local_existing_package = data.null_data_source.downloaded_package.outputs["filename"]
}
AWS SAM CLI is an open-source tool that helps developers initialize, build, test, and deploy serverless applications. Currently, the SAM CLI only supports CloudFormation applications, but the SAM CLI team is working on a feature to extend its testing capabilities to Terraform applications (check this GitHub issue to stay updated on upcoming releases and the features included in each release of the Terraform support feature).
SAM CLI provides two ways of testing: local testing and testing on-cloud (Accelerate).
Using SAM CLI, you can invoke the Lambda functions defined in the Terraform application locally using the `sam local invoke` command, providing the function's Terraform address or function name, and setting `--hook-name` to `terraform` to tell SAM CLI that the underlying project is a Terraform application.
You can execute the `sam local invoke` command from your Terraform application root directory as follows:
sam local invoke --hook-name terraform module.hello_world_function.aws_lambda_function.this[0]
You can also pass an event to your Lambda function, or overwrite its environment variables. Check here for more information.
You can also invoke your Lambda function in debugging mode, and step through its source code locally in your preferred editor. Check here for more information.
You can use AWS SAM CLI to quickly test your application on your AWS development account. Using SAM Accelerate, you can develop your Lambda functions locally, and once you save your updates, SAM CLI will update your development account with the updated Lambda functions. You can then test them in the cloud, and if there is any bug, you can quickly update the code and SAM CLI will take care of pushing it to the cloud. Check here for more information about SAM Accelerate.
You can execute the `sam sync` command from your Terraform application root directory as follows:
sam sync --hook-name terraform --watch
Typically, the Lambda Function resource updates when the source code changes. If `publish = true` is specified, a new Lambda Function version will also be created.
A published Lambda Function can be invoked using either a version number or `$LATEST`. This is the simplest way of deployment, which does not require any additional tool or service.
In order to do controlled deployments (rolling, canary, rollbacks) of Lambda Functions, we need to use Lambda Function aliases.
In simple terms, a Lambda alias is like a pointer either to one version of a Lambda Function (when deployment is complete), or to two weighted versions of a Lambda Function (during a rolling or canary deployment).
One Lambda Function can be used in multiple aliases. Using aliases gives fine-grained control over which version is deployed when you have multiple environments.
There is an alias module, which simplifies working with aliases (create, manage configurations, updates, etc.). See examples/alias for various use-cases of how aliases can be configured and used.
There is a deploy module, which creates the resources required to do deployments using AWS CodeDeploy. It also creates the deployment and waits for completion. See examples/deploy for a complete end-to-end build/update/deploy process.
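As an illustration, here is a minimal sketch wiring the alias module to a function created by this module (the alias name is illustrative; check examples/alias for the exact argument set):
module "alias" {
  source = "terraform-aws-modules/lambda/aws//modules/alias"

  name          = "current"
  function_name = module.lambda_function.lambda_function_name

  # Point the alias at the latest published version of the function
  function_version = module.lambda_function.lambda_function_version
}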
Terraform Cloud, Terraform Enterprise, and many other SaaS offerings for running Terraform do not have Python pre-installed on their workers. You will need to provide an alternative Docker image with Python installed to be able to use this module there.
Q1: Why is the deployment package not recreated every time I change something? Or why is the deployment package recreated every time even though the content has not changed?
Answer: There can be several reasons, related to concurrent executions or to the content hash. Sometimes a change has happened inside a dependency which is not used when calculating the content hash. Or multiple packages are being created at the same time from the same sources. You can force recreation by setting a distinct value for `hash_extra`.
Q2: How can I force the deployment package to be recreated?
Answer: Delete the existing zip archive from the `builds` directory, or make a change in your source code. If there is no zip archive for the current content hash, it will be recreated during `terraform apply`.
Q3: What does `null_resource.archive[0] must be replaced` mean?
Answer: This probably means that a zip archive has been deployed but is currently absent locally, so it has to be recreated locally. When you run into this issue during a CI/CD process (where the workspace is clean) or from multiple workspaces, you can set the environment variable `TF_RECREATE_MISSING_LAMBDA_PACKAGE=false` or pass `recreate_missing_package = false` as a parameter to the module, and run `terraform apply`.
Q4: What does this error mean - `"We currently do not support adding policies for $LATEST."`?
Answer: When the Lambda function is created with `publish = true`, the version is automatically incremented and a qualified identifier (version number) becomes available and will be used when setting Lambda permissions.
When `publish = false` (the default), only the unqualified identifier (`$LATEST`) is available, which leads to the error.
The solution is to either disable the creation of Lambda permissions for the current version by setting `create_current_version_allowed_triggers = false`, or to enable publishing of the Lambda function (`publish = true`).
Examples by the users of this module
No modules.
Name | Description | Type | Default | Required |
---|---|---|---|---|
allowed_triggers | Map of allowed triggers to create Lambda permissions | map(any) | {} | no |
architectures | Instruction set architecture for your Lambda function. Valid values are ["x86_64"] and ["arm64"]. | list(string) | null | no |
artifacts_dir | Directory name where artifacts should be stored | string | "builds" | no |
assume_role_policy_statements | Map of dynamic policy statements for assuming Lambda Function role (trust relationship) | any | {} | no |
attach_async_event_policy | Controls whether async event policy should be added to IAM role for Lambda Function | bool | false | no |
attach_cloudwatch_logs_policy | Controls whether CloudWatch Logs policy should be added to IAM role for Lambda Function | bool | true | no |
attach_dead_letter_policy | Controls whether SNS/SQS dead letter notification policy should be added to IAM role for Lambda Function | bool | false | no |
attach_network_policy | Controls whether VPC/network policy should be added to IAM role for Lambda Function | bool | false | no |
attach_policies | Controls whether list of policies should be added to IAM role for Lambda Function | bool | false | no |
attach_policy | Controls whether policy should be added to IAM role for Lambda Function | bool | false | no |
attach_policy_json | Controls whether policy_json should be added to IAM role for Lambda Function | bool | false | no |
attach_policy_jsons | Controls whether policy_jsons should be added to IAM role for Lambda Function | bool | false | no |
attach_policy_statements | Controls whether policy_statements should be added to IAM role for Lambda Function | bool | false | no |
attach_tracing_policy | Controls whether X-Ray tracing policy should be added to IAM role for Lambda Function | bool | false | no |
authorization_type | The type of authentication that the Lambda Function URL uses. Set to 'AWS_IAM' to restrict access to authenticated IAM users only. Set to 'NONE' to bypass IAM authentication and create a public endpoint. | string | "NONE" | no |
build_in_docker | Whether to build dependencies in Docker | bool | false | no |
cloudwatch_logs_kms_key_id | The ARN of the KMS Key to use when encrypting log data. | string | null | no |
cloudwatch_logs_retention_in_days | Specifies the number of days you want to retain log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | number | null | no |
cloudwatch_logs_tags | A map of tags to assign to the resource. | map(string) | {} | no |
code_signing_config_arn | Amazon Resource Name (ARN) for a Code Signing Configuration | string | null | no |
compatible_architectures | A list of Architectures Lambda layer is compatible with. Currently x86_64 and arm64 can be specified. | list(string) | null | no |
compatible_runtimes | A list of Runtimes this layer is compatible with. Up to 5 runtimes can be specified. | list(string) | [] | no |
cors | CORS settings to be used by the Lambda Function URL | any | {} | no |
create | Controls whether resources should be created | bool | true | no |
create_async_event_config | Controls whether async event configuration for Lambda Function/Alias should be created | bool | false | no |
create_current_version_allowed_triggers | Whether to allow triggers on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources) | bool | true | no |
create_current_version_async_event_config | Whether to allow async event configuration on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources) | bool | true | no |
create_function | Controls whether Lambda Function resource should be created | bool | true | no |
create_lambda_function_url | Controls whether the Lambda Function URL resource should be created | bool | false | no |
create_layer | Controls whether Lambda Layer resource should be created | bool | false | no |
create_package | Controls whether Lambda package should be created | bool | true | no |
create_role | Controls whether IAM role for Lambda Function should be created | bool | true | no |
create_unqualified_alias_allowed_triggers | Whether to allow triggers on unqualified alias pointing to $LATEST version | bool | true | no |
create_unqualified_alias_async_event_config | Whether to allow async event configuration on unqualified alias pointing to $LATEST version | bool | true | no |
create_unqualified_alias_lambda_function_url | Whether to use unqualified alias pointing to $LATEST version in Lambda Function URL | bool | true | no |
dead_letter_target_arn | The ARN of an SNS topic or SQS queue to notify when an invocation fails. | string | null | no |
description | Description of your Lambda Function (or Layer) | string | "" | no |
destination_on_failure | Amazon Resource Name (ARN) of the destination resource for failed asynchronous invocations | string | null | no |
destination_on_success | Amazon Resource Name (ARN) of the destination resource for successful asynchronous invocations | string | null | no |
docker_additional_options | Additional options to pass to the docker run command (e.g. to set environment variables, volumes, etc.) | list(string) | [] | no |
docker_build_root | Root dir where to build in Docker | string | "" | no |
docker_entrypoint | Path to the Docker entrypoint to use | string | null | no |
docker_file | Path to a Dockerfile when building in Docker | string | "" | no |
docker_image | Docker image to use for the build | string | "" | no |
docker_pip_cache | Whether to mount a shared pip cache folder into docker environment or not | any | null | no |
docker_with_ssh_agent | Whether to pass SSH_AUTH_SOCK into docker environment or not | bool | false | no |
environment_variables | A map that defines environment variables for the Lambda Function. | map(string) | {} | no |
ephemeral_storage_size | Amount of ephemeral storage (/tmp) in MB your Lambda Function can use at runtime. Valid value between 512 MB to 10,240 MB (10 GB). | number | 512 | no |
event_source_mapping | Map of event source mapping | any | {} | no |
file_system_arn | The Amazon Resource Name (ARN) of the Amazon EFS Access Point that provides access to the file system. | string | null | no |
file_system_local_mount_path | The path where the function can access the file system, starting with /mnt/. | string | null | no |
function_name | A unique name for your Lambda Function | string | "" | no |
handler | Lambda Function entrypoint in your code | string | "" | no |
hash_extra | The string to add into hashing function. Useful when building same source path for different functions. | string | "" | no |
ignore_source_code_hash | Whether to ignore changes to the function's source code hash. Set to true if you manage infrastructure and code deployments separately. | bool | false | no |
image_config_command | The CMD for the docker image | list(string) | [] | no |
image_config_entry_point | The ENTRYPOINT for the docker image | list(string) | [] | no |
image_config_working_directory | The working directory for the docker image | string | null | no |
image_uri | The ECR image URI containing the function's deployment package. | string | null | no |
kms_key_arn | The ARN of KMS key to use by your Lambda Function | string | null | no |
lambda_at_edge | Set this to true if using Lambda@Edge, to enable publishing, limit the timeout, and allow edgelambda.amazonaws.com to invoke the function | bool | false | no |
lambda_role | IAM role ARN attached to the Lambda Function. This governs both who / what can invoke your Lambda Function, as well as what resources your Lambda Function has access to. See Lambda Permission Model for more details. | string | "" | no |
layer_name | Name of Lambda Layer to create | string | "" | no |
layer_skip_destroy | Whether to retain the old version of a previously deployed Lambda Layer. | bool | false | no |
layers | List of Lambda Layer Version ARNs (maximum of 5) to attach to your Lambda Function. | list(string) | null | no |
license_info | License info for your Lambda Layer. E.g., MIT or the full URL of a license. | string | "" | no |
local_existing_package | The absolute path to an existing zip-file to use | string | null | no |
maximum_event_age_in_seconds | Maximum age of a request that Lambda sends to a function for processing in seconds. Valid values between 60 and 21600. | number | null | no |
maximum_retry_attempts | Maximum number of times to retry when the function returns an error. Valid values between 0 and 2. Defaults to 2. | number | null | no |
memory_size | Amount of memory in MB your Lambda Function can use at runtime. Valid value between 128 MB to 10,240 MB (10 GB), in 64 MB increments. | number | 128 | no |
number_of_policies | Number of policies to attach to IAM role for Lambda Function | number | 0 | no |
number_of_policy_jsons | Number of policies JSON to attach to IAM role for Lambda Function | number | 0 | no |
package_type | The Lambda deployment package type. Valid options: Zip or Image | string | "Zip" | no |
policies | List of policy statements ARN to attach to Lambda Function role | list(string) | [] | no |
policy | An additional policy document ARN to attach to the Lambda Function role | string | null | no |
policy_json | An additional policy document as JSON to attach to the Lambda Function role | string | null | no |
policy_jsons | List of additional policy documents as JSON to attach to Lambda Function role | list(string) | [] | no |
policy_name | IAM policy name. It overrides the default value, which is the same as role_name | string | null | no |
policy_path | Path of policies that should be added to IAM role for Lambda Function | string | null | no |
policy_statements | Map of dynamic policy statements to attach to Lambda Function role | any | {} | no |
provisioned_concurrent_executions | Amount of capacity to allocate. Set to 1 or greater to enable, or set to 0 to disable provisioned concurrency. | number | -1 | no |
publish | Whether to publish creation/change as new Lambda Function Version. | bool | false | no |
putin_khuylo | Do you agree that Putin doesn't respect Ukrainian sovereignty and territorial integrity? More info: https://en.wikipedia.org/wiki/Putin_khuylo! | bool | true | no |
recreate_missing_package | Whether to recreate missing Lambda package if it is missing locally or not | bool | true | no |
reserved_concurrent_executions | The amount of reserved concurrent executions for this Lambda Function. A value of 0 disables Lambda Function from being triggered and -1 removes any concurrency limitations. Defaults to Unreserved Concurrency Limits -1. | number | -1 | no |
role_description | Description of IAM role to use for Lambda Function | string | null | no |
role_force_detach_policies | Specifies to force detaching any policies the IAM role has before destroying it. | bool | true | no |
role_name | Name of IAM role to use for Lambda Function | string | null | no |
role_path | Path of IAM role to use for Lambda Function | string | null | no |
role_permissions_boundary | The ARN of the policy that is used to set the permissions boundary for the IAM role used by Lambda Function | string | null | no |
role_tags | A map of tags to assign to IAM role | map(string) | {} | no |
runtime | Lambda Function runtime | string | "" | no |
s3_acl | The canned ACL to apply. Valid values are private, public-read, public-read-write, aws-exec-read, authenticated-read, bucket-owner-read, and bucket-owner-full-control. Defaults to private. | string | "private" | no |
s3_bucket | S3 bucket to store artifacts | string | null | no |
s3_existing_package | The S3 bucket object with keys bucket, key, version pointing to an existing zip-file to use | map(string) | null | no |
s3_object_storage_class | Specifies the desired Storage Class for the artifact uploaded to S3. Can be either STANDARD, REDUCED_REDUNDANCY, ONEZONE_IA, INTELLIGENT_TIERING, or STANDARD_IA. | string | "ONEZONE_IA" | no |
s3_object_tags | A map of tags to assign to S3 bucket object. | map(string) | {} | no |
s3_object_tags_only | Set to true to not merge tags with s3_object_tags. Useful to avoid breaching S3 Object 10 tag limit. | bool | false | no |
s3_prefix | Directory name where artifacts should be stored in the S3 bucket. If unset, the path from artifacts_dir is used | string | null | no |
s3_server_side_encryption | Specifies server-side encryption of the object in S3. Valid values are "AES256" and "aws:kms". | string | null | no |
source_path | The absolute path to a local file or directory containing your Lambda source code | any | null | no |
store_on_s3 | Whether to store produced artifacts on S3 or locally. | bool | false | no |
tags | A map of tags to assign to resources. | map(string) | {} | no |
timeout | The amount of time your Lambda Function has to run in seconds. | number | 3 | no |
tracing_mode | Tracing mode of the Lambda Function. Valid value can be either PassThrough or Active. | string | null | no |
trusted_entities | List of additional trusted entities for assuming Lambda Function role (trust relationship) | any | [] | no |
use_existing_cloudwatch_log_group | Whether to use an existing CloudWatch log group or create new | bool | false | no |
vpc_security_group_ids | List of security group ids when Lambda Function should run in the VPC. | list(string) | null | no |
vpc_subnet_ids | List of subnet ids when Lambda Function should run in the VPC. Usually private or intra subnets. | list(string) | null | no |
Name | Description |
---|---|
lambda_cloudwatch_log_group_arn | The ARN of the Cloudwatch Log Group |
lambda_cloudwatch_log_group_name | The name of the Cloudwatch Log Group |
lambda_event_source_mapping_function_arn | The ARN of the Lambda function the event source mapping is sending events to |
lambda_event_source_mapping_state | The state of the event source mapping |
lambda_event_source_mapping_state_transition_reason | The reason the event source mapping is in its current state |
lambda_event_source_mapping_uuid | The UUID of the created event source mapping |
lambda_function_arn | The ARN of the Lambda Function |
lambda_function_arn_static | The static ARN of the Lambda Function. Use this to avoid cycle errors between resources (e.g., Step Functions) |
lambda_function_invoke_arn | The Invoke ARN of the Lambda Function |
lambda_function_kms_key_arn | The ARN for the KMS encryption key of Lambda Function |
lambda_function_last_modified | The date Lambda Function resource was last modified |
lambda_function_name | The name of the Lambda Function |
lambda_function_qualified_arn | The ARN identifying your Lambda Function Version |
lambda_function_signing_job_arn | ARN of the signing job |
lambda_function_signing_profile_version_arn | ARN of the signing profile version |
lambda_function_source_code_hash | Base64-encoded representation of raw SHA-256 sum of the zip file |
lambda_function_source_code_size | The size in bytes of the function .zip file |
lambda_function_url | The URL of the Lambda Function URL |
lambda_function_url_id | The Lambda Function URL generated id |
lambda_function_version | Latest published version of Lambda Function |
lambda_layer_arn | The ARN of the Lambda Layer with version |
lambda_layer_created_date | The date Lambda Layer resource was created |
lambda_layer_layer_arn | The ARN of the Lambda Layer without version |
lambda_layer_source_code_size | The size in bytes of the Lambda Layer .zip file |
lambda_layer_version | The Lambda Layer version |
lambda_role_arn | The ARN of the IAM role created for the Lambda Function |
lambda_role_name | The name of the IAM role created for the Lambda Function |
lambda_role_unique_id | The unique id of the IAM role created for the Lambda Function |
local_filename | The filename of zip archive deployed (if deployment was from local) |
s3_object | The map with S3 object data of zip archive deployed (if deployment was from S3) |
During development that involves modifying Python files, use tox to run the unit tests:
tox
This will try to run the unit tests with each supported Python version, reporting errors for Python versions that are not installed locally.
If you only want to test against your main python version:
tox -e py
You can also pass additional positional arguments to pytest, which is used to run the tests, e.g. to make the output verbose:
tox -e py -- -vvv
Author: Terraform-aws-modules
Source Code: https://github.com/terraform-aws-modules/terraform-aws-lambda
License: Apache-2.0 license
PHP will show the Module “imagick” is already loaded message when you try to load the `php_imagick` extension more than once.
The full message is as follows:
PHP Warning: Module "imagick" is already loaded in Unknown on line 0
Many people think this warning comes from their PHP code because of the “line 0” part.
But this is actually a PHP configuration issue, so you can't solve this warning by looking at your source code.
Here are the steps required to fix the issue:
You can find the location of your `php.ini` file by calling the `phpinfo()` function and looking for the “Loaded Configuration File” entry.
You need to open the file location in your Explorer window.
Once you open the `php.ini` file, check whether any of the following lines exist:
extension=imagick.so
; or
extension=php_imagick.dll
; or
extension=imagick
You need to make sure that only one of the lines above is active and comment out the rest.
For example, if you already have `imagick.so`, then you need to comment out the `php_imagick.dll` line as follows:
extension=imagick.so
; extension=php_imagick.dll
This way, the `imagick` extension won't be loaded twice.
I recommend you comment out the first `imagick` line that you found, then restart your Apache server.
This time, the warning message should disappear.
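To double-check from the command line, you can confirm which configuration file PHP loads and that imagick appears only once in the module list (a quick sketch for Unix-like shells; on Windows, run the php commands without grep):
php --ini | grep "Loaded Configuration File"
php -m | grep -i imagick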
And that’s how you solve the PHP Warning: Module “imagick” is already loaded in Unknown on line 0.
Original article source at: https://sebhastian.com/
Learn how to read file content or ask for user input using the NodeJS readline module.
The `readline` module in NodeJS provides a way to read a data stream from a file or to ask your user for input.
To use the module, you need to import it into your JavaScript file as follows:
const readline = require('readline');
Next, you need to write the code for receiving user input or reading file content, depending on your requirement. Let's look at how you can receive user input first.
You can see the full code for this tutorial on GitHub.
To receive user input using the `readline` module, you need to create an `Interface` instance that is connected to an input stream. You create the `Interface` using the `readline.createInterface()` method, passing the `input` and `output` options as an object argument.
Here's an example of creating the readline interface:
const readline = require('readline');
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
rl.question('What is your name? ', (answer) => {
console.log(`Oh, so your name is ${answer}`);
});
Both `process.stdin` and `process.stdout` mean that input and output are connected to the terminal.
Once created, the interface will be active until you close it by triggering the `'close'` event, which can be triggered from the code in your JavaScript file, or by using the CTRL + D or CTRL + C commands.
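For instance, a minimal sketch of listening for the 'close' event to run cleanup code:
rl.on('close', () => {
  console.log('Interface closed');
});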
To ask for user input, you need to call the `question()` method on the `Interface` instance, which is assigned to the `rl` variable in the code above.
The `question()` method receives these parameters:
- the `string` question you want to ask your user
- an optional `options` object, where you can pass an `'abort'` signal
- a `callback` function to execute when the answer is received, which is passed the `answer`

You can skip the `options` object and pass the `callback` function as the second parameter.
Here's how you use the `question()` method:
const readline = require('readline');
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
rl.question('What is your name? ', (answer) => {
console.log(`Oh, so your name is ${answer}`);
});
Finally, you can close the `rl` interface by calling the `rl.close()` method inside the callback function:
const readline = require('readline');
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
rl.question('What is your name? ', (answer) => {
console.log(`Oh, so your name is ${answer}`);
console.log("Closing the interface");
rl.close();
});
Save the file as `ask.js`, then run the script using NodeJS like this:
$ node ask.js
What is your name? Nathan
Oh, so your name is Nathan
Closing the interface
$
And that's how you can ask for user input using the NodeJS `readline` module.
You can also use `AbortController` from NodeJS to add a timer to your question and cancel it when a certain amount of time has passed.
But please be aware that `AbortController` is only available in NodeJS version 15 and up. And even then, the feature is still experimental.
The following question will be aborted when no answer is given within 5 seconds of the prompt. The code has been tested to work on NodeJS version 16.3.0 and up:
const readline = require("readline");
const ac = new AbortController();
const signal = ac.signal;
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
rl.question("What is your name? ", { signal }, (answer) => {
console.log(`Oh, so your name is ${answer}`);
console.log("Closing the console");
process.exit();
});
signal.addEventListener(
"abort",
() => {
console.log("The name question timed out!");
},
{ once: true }
);
setTimeout(() => {
ac.abort();
process.exit();
}, 5000); // 5 seconds
The `rl.close()` method is replaced with the `process.exit()` method because the readline interface would still wait for the `abort` signal to close the interface if you used the `rl.close()` method.
To read file content using the `readline` module, you create the same kind of interface using the `createInterface()` method, but instead of connecting the `input` option to the `process.stdin` object, you connect it to the `fs.createReadStream()` method.
Take a look at the following example:
const readline = require("readline");
const fs = require("fs");
const path = "./file.txt";
const rl = readline.createInterface({
input: fs.createReadStream(path),
});
Next, you need to add an event listener to the `'line'` event, which is triggered every time an end-of-line input (`\n`, `\r`, or `\r\n`) is received from the `input` stream.
You listen for the `'line'` event by using the `rl.on()` method:
rl.on("line", function (input) {
console.log(input);
});
Finally, let's test the code by creating a new file named `file.txt` with the following content:
Roses are red
Violets are blue
Sunflowers are yellow
Pick the flower you like the most :)
Then, in the same folder, create a file named `read.js` to read the `file.txt` content:
const readline = require("readline");
const fs = require("fs");
const path = "./file.txt";
const rl = readline.createInterface({
input: fs.createReadStream(path),
});
let lineno = 1;
rl.on("line", function (input) {
console.log("Line number " + lineno + ": " + input);
lineno++;
});
Then execute the script using NodeJS. You’ll have the output as shown below:
$ node read.js
Line number 1: Roses are red
Line number 2: Violets are blue
Line number 3: Sunflowers are yellow
Line number 4: Pick the flower you like the most :)
$
The full code for this tutorial can be found on GitHub.
For more information, you can visit the NodeJS `readline` module documentation.
The JavaScript `readline` module is provided by NodeJS so you can create an interactive command-line program that receives user input or reads file content.
This module is only available in NodeJS, so you can't use it from the browser. If you want to read file content from the browser, you need to use the `FileReader` class provided by the browser.
See also: Using `FileReader` to read a CSV file from the browser.
When you need to take user input from the browser, you can use the HTML `<form>` and `<input>` elements or the JavaScript `prompt()` function.
Original article source at: https://sebhastian.com/
Maybe you have a module `X` that depends on module `Y`, and you want `using X` to pull in all of the symbols from `Y`. Maybe you have an outer module `A` with an inner module `B`, and you want to export all of the symbols in `B` from `A`. It would be nice to have this functionality built into Julia, but we have yet to reach an agreement on what it should look like (see JuliaLang/julia#1986). This short macro is a stopgap until we have a better solution.
`@reexport using <modules>` calls `using <modules>` and also re-exports their symbols:
module Y
...
end
module Z
...
end
module X
using Reexport
@reexport using Y
# all of Y's exported symbols available here
@reexport using Z: x, y
# Z's x and y symbols available here
end
using X
# all of Y's exported symbols and Z's x and y also available here
`@reexport import <module>.<name>` or `@reexport import <module>: <name>` exports `<name>` from `<module>` after importing it.
module Y
...
end
module Z
...
end
module X
using Reexport
@reexport import Y
# Only `Y` itself is available here
@reexport import Z: x, y
# Z's x and y symbols available here
end
using X
# Y (but not its exported names) and Z's x and y are available here.
`@reexport module <modulename> ... end` defines `module <modulename>` and also re-exports its symbols:
module A
using Reexport
@reexport module B
...
end
# all of B's exported symbols available here
end
using A
# all of B's exported symbols available here
`@reexport @another_macro <import or using expression>` first expands `@another_macro` on the expression, making `@reexport` compatible with other macros.
`@reexport begin ... end` will apply the reexport macro to every expression in the block.
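For example, a small sketch of the block form (the module name and the imported standard-library modules are just illustrative choices):
module C
using Reexport
@reexport begin
    using LinearAlgebra
    using Statistics
end
end
using C
# all exported symbols of LinearAlgebra and Statistics available here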
Author: Simonster
Source Code: https://github.com/simonster/Reexport.jl
License: View license
This package exports a macro `@from`, which can be used to import objects from files.
The hope is that you will never have to write `include` again.
FromFile is a Julia language package. To install FromFile, open Julia's interactive session (known as the REPL), press the ] key to enter package mode, then type the following command:
pkg> add FromFile
Objects in other files may be imported in the following way:
# file1.jl
import FromFile: @from
@from "file2.jl" import foo
bar() = foo()
#file2.jl
foo() = println("hi")
File systems may be navigated: `@from "../folder/file.jl" import foo`.
The usual import syntax is supported; the only difference is that the objects are looked up in the requested file: `@from "file.jl" using MyModule`; `@from "file.jl" import MyModule: foo`; `@from "file.jl" import foo, bar`.
Using `@from` to access a file multiple times (for example, calling `@from "file.jl" import foo` in multiple files) will access the same objects each time; i.e., without the duplication issues that `include("file.jl")` would introduce.
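A sketch illustrating this (file names are illustrative); both consumers receive the very same foo object, not two copies:
# consumer_a.jl
import FromFile: @from
@from "file2.jl" import foo

# consumer_b.jl
import FromFile: @from
@from "file2.jl" import foo   # same object as in consumer_a.jl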
FromFile.jl is a draft implementation of this specification for improving import systems, as discussed in issue #4600.
FromFile will (besides its programmatic benefits, like the removal of spooky action at a distance) help you keep track of which objects are defined in which file. Converting over to FromFile may mean untangling your existing project and figuring out exactly what you defined where. To print out which file defines which object, you can execute the following bash snippet at the root of your project:
for f in $(find . -name '*.jl'); do echo $f && cat $f | vim - -nes -c '%s/#.*//ge' -c '%s/"""\_.\{-}"""//ge' -c '%v/^\S\+/d_' -c '%g/^\(end\|@from\|using\|export\|import\|include\|begin\|let\)\>/d_' -c '%g/.*/exe "norm >>"' -c ':%p' -c ':q!' | tail -n +2; done | less
This should (hopefully) print out each file, and the objects defined in that file. It's not perfect, but it'll help you get 90% of the way there.
Author: Roger-luo
Source Code: https://github.com/Roger-luo/FromFile.jl
License: MIT license
A Sketch module for creating a complex UI with a webview. The API mimics the BrowserWindow API of Electron.
To use this module in your Sketch plugin, you need a bundler utility like skpm; add this module as a dependency:
npm install -S sketch-module-web-view
You can also use the with-webview skpm template to have a solid base to start your project with a webview:
skpm create my-plugin-name --template=skpm/with-webview
The version 2.x is only compatible with Sketch >= 51. If you need compatibility with previous versions of Sketch, use the version 1.x
import BrowserWindow from 'sketch-module-web-view'
export default function () {
const options = {
identifier: 'unique.id',
}
const browserWindow = new BrowserWindow(options)
browserWindow.loadURL(require('./my-screen.html'))
}
Author: skpm
Source Code: https://github.com/skpm/sketch-module-web-view
License: MIT license
GLPK.jl is a wrapper for the GNU Linear Programming Kit (GLPK) library.
The wrapper has two components: a thin wrapper around the complete C API, and an interface to MathOptInterface.
The C API can be accessed via the `GLPK.glp_XXX` functions, where the names and arguments are identical to the C API. See the `/tests` folder for inspiration.
Install GLPK using `Pkg.add`:
import Pkg; Pkg.add("GLPK")
In addition to installing the GLPK.jl package, this will also download and install the GLPK binaries. (You do not need to install GLPK separately.)
To use a custom binary, read the Custom solver binaries section of the JuMP documentation.
To use GLPK with JuMP, use GLPK.Optimizer
:
using JuMP, GLPK
model = Model(GLPK.Optimizer)
set_optimizer_attribute(model, "tm_lim", 60 * 1_000)
set_optimizer_attribute(model, "msg_lev", GLPK.GLP_MSG_OFF)
If the model is primal or dual infeasible, GLPK will attempt to find a certificate of infeasibility. This can be expensive, particularly if you do not intend to use the certificate. If this is the case, use:
model = Model() do
return GLPK.Optimizer(want_infeasibility_certificates = false)
end
Here is an example using GLPK's solver-specific callbacks.
using JuMP, GLPK, Test
model = Model(GLPK.Optimizer)
@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)
reasons = UInt8[]
function my_callback_function(cb_data)
reason = GLPK.glp_ios_reason(cb_data.tree)
push!(reasons, reason)
if reason != GLPK.GLP_IROWGEN
return
end
x_val = callback_value(cb_data, x)
y_val = callback_value(cb_data, y)
if y_val - x_val > 1 + 1e-6
con = @build_constraint(y - x <= 1)
MOI.submit(model, MOI.LazyConstraint(cb_data), con)
elseif y_val + x_val > 3 + 1e-6
con = @build_constraint(y + x <= 3)
MOI.submit(model, MOI.LazyConstraint(cb_data), con)
end
end
MOI.set(model, GLPK.CallbackFunction(), my_callback_function)
optimize!(model)
@test termination_status(model) == MOI.OPTIMAL
@test primal_status(model) == MOI.FEASIBLE_POINT
@test value(x) == 1
@test value(y) == 2
@show reasons
Author: jump-dev
Source Code: https://github.com/jump-dev/GLPK.jl
License: Unknown, GPL-3.0 licenses found
KDSP.jl provides:
- digital filtering
- correlation and convolution
- FFTs provided by the FFTW interface
- FFT utilities
- periodogram estimation
- window functions
- common DSP mathematics
- auxiliary functions
Author: Kofron
Source Code: https://github.com/kofron/KDSP.jl
License: View license
`nwidart/laravel-modules` is a Laravel package created to manage your large Laravel app using modules. A module is like a Laravel package: it has some views, controllers, or models. This package is supported and tested in Laravel 9.
This package is a re-published, re-organised and maintained version of pingpong/modules, which isn't maintained anymore. This package is used in AsgardCMS.
With one big added bonus that the original package didn't have: tests.
Find out why you should use this package in the article: Writing modular applications with laravel-modules.
To install through Composer, run the following command:
composer require nwidart/laravel-modules
The package will automatically register a service provider and alias.
Optionally, publish the package's configuration file by running:
php artisan vendor:publish --provider="Nwidart\Modules\LaravelModulesServiceProvider"
By default, the module classes are not loaded automatically. You can autoload your modules using `psr-4`. For example:
{
  "autoload": {
    "psr-4": {
      "App\\": "app/",
      "Modules\\": "Modules/",
      "Database\\Factories\\": "database/factories/",
      "Database\\Seeders\\": "database/seeders/"
    }
  }
}
Tip: don't forget to run `composer dump-autoload` afterwards.
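Once installed, you can scaffold your first module with the package's generator command (the module name Blog is just an example):
php artisan module:make Blog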
You'll find installation instructions and full documentation on https://docs.laravelmodules.com/.
Nicolas Widart is a freelance web developer specialising in the Laravel framework. View all his packages on his website.
Laravel | laravel-modules |
---|---|
5.4 | ^1.0 |
5.5 | ^2.0 |
5.6 | ^3.0 |
5.7 | ^4.0 |
5.8 | ^5.0 |
6.0 | ^6.0 |
7.0 | ^7.0 |
8.0 | ^8.0 |
9.0 | ^9.0 |
Author: nWidart
Source Code: https://github.com/nWidart/laravel-modules
License: MIT license
A global proxy for Go modules. See: https://goproxy.io
It invokes the local go command to answer requests.
The default cacheDir is GOPATH; you can set it yourself according to your situation.
git clone https://github.com/goproxyio/goproxy.git
cd goproxy
make
./bin/goproxy -listen=0.0.0.0:80 -cacheDir=/tmp/test
If you run `go get -v pkg` on the proxy machine, you should set a new GOPATH which is different from the proxy's GOPATH, or it may deadlock. See the file test/get_test.sh.
./bin/goproxy -listen=0.0.0.0:80 -proxy https://goproxy.io
Use the -proxy flag to switch to "Router mode", which implements a route filter for routing private modules and public modules.
                           direct
              +----------------------------------> private repo
              |
         match|pattern
              |
          +---+---+            +----------+
go get +->|goproxy|+---------->|goproxy.io|+---> golang.org/x/net
          +-------+            +----------+
          router mode           proxy mode
In Router mode, use the -exclude flag to set a pattern; requests matching the module path go direct to the repo. Patterns are matched against the full path specified, not only the host component.
./bin/goproxy -listen=0.0.0.0:80 -cacheDir=/tmp/test -proxy https://goproxy.io -exclude "*.corp.example.com,rsc.io/private"
docker run -d -p80:8081 goproxy/goproxy
Use the -v flag to persist the proxy module data (change cacheDir to your own dir):
docker run -d -p80:8081 -v cacheDir:/go goproxy/goproxy
docker-compose up
Set
export GOPROXY=http://localhost
to enable your goproxy, and
export GOPROXY=direct
to disable it.
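Go 1.13 and newer also accept a comma-separated proxy list, so you can fall back to a direct fetch for modules the proxy reports as not found, for example:
export GOPROXY=http://localhost,direct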
Author: Goproxyio
Source Code: https://github.com/goproxyio/goproxy
License: MIT license
1665122880
1-Click Upgrade
Upgrade to the latest version of PrestaShop in a few clicks, thanks to this automated method. This module is compatible with all PrestaShop 1.6 & 1.7 versions.
Prerequisites
For older PHP versions, see previous releases of the module (e.g. v1.6.8). Note that they are unsupported, and we strongly recommend you upgrade your PHP version.
Installation
All versions can be found in the releases list.
Clone the repository (git clone https://github.com/PrestaShop/autoupgrade.git) or download the source code. You can also download a release's Source code archive (ex. v4.4.1). If you download a source code archive, you need to extract the file, rename the extracted folder to autoupgrade, and run composer install to install the dependencies.
Running an upgrade on PrestaShop
Upgrading a shop can be done via the back office page of the module or via the command line.
The upgrade can be automated by calling cli-upgrade.php. The following parameters are mandatory (the default action is UpgradeNow; other values are available):
$ php cli-upgrade.php --dir=admin-dev --channel=major
Rollback a shop
If an error occurs during the upgrade process, a rollback will be suggested. In case you have lost the page in your back office, note that it can also be triggered via the CLI.
The rollback can be automated by calling cli-rollback.php. The following parameters are mandatory (backups are stored in <admin>/autoupgrade/backup/):
$ php cli-rollback.php --dir=admin-dev --backup=V1.7.5.1_20190502-191341-22e883bd
Documentation
Documentation is hosted on devdocs.prestashop.com.
Contributing
PrestaShop modules are open source extensions to the PrestaShop e-commerce platform. Everyone is welcome and even encouraged to contribute with their own improvements!
Just make sure to follow our contribution guidelines.
You can report issues with this module in the main PrestaShop repository. Click here to report an issue.
Author: PrestaShop
Source Code: https://github.com/PrestaShop/autoupgrade
License: AFL-3.0 license
1664209080
> Pkg.clone("https://github.com/vshesh/Sexpr.jl.git")
$ julia -e 'import Sexpr; Sexpr.main()' --
usage: Sexpr.jl [-i] [-c] [-l LINES] [-o OUTPUT] [-e EXTENSION] [-h]
[files...]
A program to port clojure-like s-expression syntax to and from
julia. By default, this program takes clojure syntax and outputs
the julia version. Use -i to flip direction.
positional arguments:
files If given one file and no output directory,
will dump to stdout. If given a directory or
multiple files, eg "sjulia file1 dir file2",
an output directory must be specified with
-o/--output where the files will go.
optional arguments:
-i, --invert take julia code and print out s-expression
code instead
-c, --cat cat all the input from STDIN rather than read
from file. Ignores all positional args to the
program.
-l, --lines LINES how many blank lines should exist between top
level forms, default 1 (type: Int64, default:
1)
-o, --output OUTPUT where to write out files if there are multiple
positional arguments to the file. If this is
empty, and there are >1 argument, the program
will throw an error.
-e, --extension EXTENSION
add an extension that qualifies as a lisp file
(can use multiple times). Defaults: clj, cljs,
cl, lisp, wisp, hy.
-h, --help show this help message and exit
$ julia -e 'import Sexpr; Sexpr.main()' -- -o test/output/ test/programs/
# will transpile all .clj files in test/programs and dump them into test/output.
This project aims to make s-expression syntax interoperable with julia's own Expr objects.
If you've seen LispSyntax.jl, it's a similar idea, but IMHO this project does a bit more: it lets you transpile file to file rather than just read in a program, and it also transpiles in the other direction, so you can convert your julia files (minus a few special forms that aren't supported yet) into clojure syntax. This makes it possible to go from julia to python (again, not that anyone needed another route, given pycall) via Hylang, or to JS via WispJS. The benefit here is that the awkward macro syntax in both of those languages is avoided (Hy makes you wrap everything in HyModel objects yourself, which is ridiculous, and WispJS's module system is broken because it is Javascript, so resolving variable names does not work properly).
The final goal is to use interoperability to do a macroexpand
operation on the input clj syntax. So you would be able to give a folder of clj files, and a temp folder with jl files would be created, then each file would be read in and macroexpanded, converted back to clj syntax, and written out to a third folder. Unfortunately, it's necessary to write the jl files out as an intermediary step, because they need to be able to find each other to resolve imports. Alternatively, you could write the clj files as jl files with the macro @clj_str
, but that makes your whole file a string, which breaks most syntax highlighters, which can be annoying.
I know that you're probably thinking "why?" - it was mostly a project for me to learn Julia and muck around with its internals. I learned quite a bit, so mission accomplished! CLJS has self-hosting now, which means they will hopefully have a js-only package soon. However, dealing with the google closure compiler and leiningen's java/jvm dependencies is a larger problem to be solved, and until then I still consider it unwieldy, so there's still some practical use to be had here.
Effectively, this is just the reader portion of implementing a lisp - Julia does everything else using its inbuilt mechanisms.
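To give a feel for the mapping, here is a small input/output pair; it is assembled by hand from the syntax notes below, so treat the exact output formatting as an illustration rather than verified transpiler output:
(defn add1 [x] (+ x 1))
would transpile to something like:
function add1(x)
  x + 1
end
The supported forms, in detail: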
nil translates to julia's nothing. They work exactly the same.
true -> true and false -> false. No surprises at all there.
number constants compile to either Int64 or Float64 types. 3/5 -> 3//5 in Julia.
character: any atom starting with a \ is a character. \newline, \space, \tab, \formfeed, \backspace, \return work for escapes. \xyz -> \x
string: any sequence of characters inside double quotes.
keyword: basically a symbol that starts with a :. In julia, these are confusingly called symbols, and symbols are called variables.
symbol: any identifier for a variable. A / or . character is converted to a . in julia. Eg, module/function becomes module.function as an identifier. This should be relatively consistent with clojure semantics. *+?!-_':>< are all allowed inside a symbol. :: in a symbol identifier compiles to a type: x::Int compiles to (:: x Int), and ::T1::T2 compiles to a union, x::T1::T2 -> (:: x T1 T2).
'(a b c) is a list - if not quoted, it's evaluated and transpiled.
[a b c] is a vector - transpiles to a julia array.
{a b c d} is a map - transpiles to a julia Dict(a => b, c => d) form.
#{a b c} is a set, which can map to Set() in julia.
and/&& (what you expect this to be) - needs to be a special form because of short circuiting. Julia defines the and and or forms this way on purpose.
or/|| (again, what you expect), see above.
x[i] family (getting/setting/slicing arrays): (aget x 1) -> Expr(:ref, :x, 1) -> x[1]; (aget x 1 2 4 5) -> Expr(:ref, :x, 1, 2, 4, 5) -> x[1, 2, 4, 5]; (aget x (: 1 3)) -> x[1:3]; (aget x (: 6)) -> x[6:end] (preferred); (aget x (: 6 :end)) -> x[6:end] (not preferred).
(:: x Int) -> x::Int. The :: form defines types. (:: x Int Int64) -> x::Union{Int, Int64} - there's an auto-union if many types are defined.
(curly Array Int64) -> Array{Int64} allows parameterized types.
(.b a x y) -> a.b(x,y) is the dot call form.
(. a b c d) -> a.b.c.d is the dot access form. ((. a b) x y) is equivalent to (.b a x y).
(module M ... end) creates a module. This is visually annoying, since you indent your whole file by two spaces just for this call to module; however, I haven't figured out any better way to do this - the other option is to make #module M a special hash dispatch that wraps the whole file, but... meh, I don't consider this a high enough priority.
(import|using X y z a b) - contrary to my expectations, this will give you import X.y.a.b. There will be a separate import statement for each function/file you want to use. (import X [y z a]) will expand to import X.y; import X.z; import X.a instead. This should shorten the writing. Ideally this should become a system macro (in a system.clj file that I define) called import* or something.
(export a b c) -> export a, b, c. It makes sense from julia's point of view, since modules are flat things, and you only ever have one level of definitions to export.
() or '() is the empty list.
(do exprs...) does each expression and returns the results of the last one.
(if test true-case false-case?) standard if, evaluates form #2 and branches.
(let [var1 value1 var2 value2...] exprs...) binds pairs of variables to their values, then evaluates exprs in an implicit do.
(fn name? [params...] exprs...) defines a function. Anonymous functions compile to julia's -> form. Eg: (fn [x] x) -> (x) -> x.
(defn name docstring? [params...] exprs...) named defined function.
(def var expr) defines a variable.
throw is a function already in julia, so there's no special form dedicated to it.
include is a function already in julia, so there's no dedicated special form for it.
TODOS
(fn [& rest]); defmulti and related (does this even mean anything given julia's multiple-dispatch?); deftype -> type in Julia.
(@m x y z) is how to call a macro - prepend its name with @. There is unfortunately no way around this, since julia requires this distinction, and for me to resolve what things are macros without it would involve writing an entire compiler. To keep it simple, I'm leaving this requirement in place. Since @x means deref in clojure, I might choose to use a different symbol to denote macrocall in the future, maybe μ or something. Another idea is abusing # dispatch so `#macro (html [:div "helloworld"])` calls the next form as a macro rather than a regular function. The hash dispatch one seems worse, though.
defmacro defines a macro, as expected. There is also Sexpr.rehydrate(), which will translate the expression back to julia's native AST.
quote or ' gives a literal list of the following expression. 'x is equal to :x in Julia, but in order to stop the gensym pass from running you actually have to do esc(:x) to get the equivalent. I'm unclear as of yet how the translation should work to get the desired results, so right now quote and syntax-quote do the same thing, which needs to be changed.
syntax-quote or the backtick character. The :() quoting form in julia is actually a syntax quote. It also has an auto-gensym (which can be a pain to get around if you want to return the original name without obfuscation).
unquote or ~ is $ in julia inside expressions. It should evaluate the variable that's given to the macro and use the evaluated value.
unquote-splice or ~@ unquotes, and also expands the form by one layer into the form that's being returned. Ie, (f ~@x) is the same as :(f($x...)) in julia.
Author: VShesh
Source Code: https://github.com/vshesh/Sexpr.jl
1628066280
For many businesses, testimonials are one of the key arguments to get new clients. That means paying a bit of extra attention to testimonials on your website will never go to waste. Within Divi, there are many different ways to share testimonials, using the Divi Testimonial Module for instance. But if you’re looking for a more interactive approach, you’re going to love this tutorial. We’re going to show you how to create custom testimonial tabs inside Divi. Once someone hovers the Blurb Module at the left, a corresponding testimonial will appear on the right. The transition effects in this design are seamless too, which helps you give that extra feel of customization to your website. You’ll be able to download the JSON file for free as well!
#divi #module #blurb #json
1625992440
This video explains how to use the Webdriver module from Testcontainers together with Selenide. With this setup, you'll start the web driver (e.g. chromedriver) inside a Docker container and won't need any further setup on your machine.
Find more information about Selenide here: https://selenide.org/index.html
The source code for this example is available on GitHub: https://github.com/rieckpil/blog-tutorials/tree/master/write-concise-web-tests-with-selenide
More testing related content is available here: https://rieckpil.de/category/testing/
» Want to become an expert for testing Spring Boot applications? Join the Testing Spring Boot Applications Masterclass: https://rieckpil.de/testing-spring-boot-applications-masterclass/
» Join the 14 Days Free Email Course on Testing Java Applications to Accelerate Your Testing Success: https://rieckpil.de/getting-started-with-testing-java-applications-email-course/
#selenide #webdriver #module
1624615200
Back in 2014, researchers at Google (and other research institutions) published a paper that introduced a novel convolutional neural network architecture that was, at the time, one of the largest and most efficient deep neural networks.
The novel architecture was the Inception Network, and a variant of this network, called GoogLeNet, went on to achieve state-of-the-art performance in the classification task of the ImageNet Large Scale Visual Recognition Challenge 2014 (ILSVRC14).
We've come a long way since 2014, and so has the deep learning field. By 2020, several deep learning architectures achieved or exceeded human-level performance in classification and object detection tasks.
However, the innovations and improvements within current convolutional neural networks have their roots set in their predecessors.
_What to expect:_ This article explores the integral component of the Inception Network: the Inception Module.
Who is this article for?
Deep learning practitioners of all levels can follow the contents and information presented in this article with relative ease. There are definitions and clear explanations wherever technical terms are introduced.
Happy Reading.
An inception network is a deep neural network with an architectural design that consists of repeating components referred to as Inception modules.
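To make the idea concrete before the formal walkthrough, here is a minimal sketch of such a module in Julia's Flux library. The four parallel branches (1x1, 3x3, 5x5, and pooled) follow the well-known GoogLeNet design, but the function name and the filter counts are illustrative assumptions, not code from the paper:
using Flux

# A naive Inception-style module: four parallel branches over the same input,
# concatenated along the channel dimension (dim 3 in Flux's WHCN layout).
# Filter counts below are illustrative, not taken from the paper.
function inception_sketch(in_ch)
    return Parallel(
        (ys...) -> cat(ys...; dims = 3),             # merge branch outputs channel-wise
        Conv((1, 1), in_ch => 64, relu),             # 1x1 convolution branch
        Chain(Conv((1, 1), in_ch => 96, relu),       # 1x1 reduction, then 3x3
              Conv((3, 3), 96 => 128, relu; pad = 1)),
        Chain(Conv((1, 1), in_ch => 16, relu),       # 1x1 reduction, then 5x5
              Conv((5, 5), 16 => 32, relu; pad = 2)),
        Chain(MaxPool((3, 3); stride = 1, pad = 1),  # 3x3 max-pool, then 1x1 projection
              Conv((1, 1), in_ch => 32, relu)),
    )
end

m = inception_sketch(192)
x = rand(Float32, 28, 28, 192, 1)  # one 28x28 feature map with 192 channels
size(m(x))                         # (28, 28, 256, 1): 64 + 128 + 32 + 32 output channels
Because every branch preserves the spatial size, the module can be stacked repeatedly, which is exactly the "repeating components" property mentioned above.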
As mentioned earlier, this article focuses on the technical details of the inception module.
Before diving into the technical details of the Inception module, here is some more information as to what this article entails.
#machine-learning #computer-vision #data-science #deep-learning #module