AWS Lambda Terraform module

Terraform module, which creates almost all supported AWS Lambda resources, as well as taking care of building and packaging required Lambda dependencies for functions and layers.

This Terraform module is part of the serverless.tf framework, which aims to simplify all operations when working with serverless in Terraform:

  1. Build and install dependencies - read more. Requires Python 3.6 or newer.
  2. Create, store, and use deployment packages - read more.
  3. Create, update, and publish AWS Lambda Function and Lambda Layer - see usage.
  4. Create static and dynamic aliases for AWS Lambda Function - see usage, see modules/alias.
  5. Do complex deployments (eg, rolling, canary, rollbacks, triggers) - read more, see modules/deploy.
  6. Use AWS SAM CLI to test Lambda Function - read more.


Features

  • Build dependencies for your Lambda Function and Layer.
  • Support builds locally and in Docker (with or without SSH agent support for private builds).
  • Create deployment package or deploy existing (previously built package) from local, from S3, from URL, or from AWS ECR repository.
  • Store deployment packages locally or in the S3 bucket.
  • Support almost all features of Lambda resources (function, layer, alias, etc.)
  • Lambda@Edge
  • Conditional creation for many types of resources.
  • Control execution of nearly any step in the process - build, package, store package, deploy, update.
  • Control nearly all aspects of Lambda resources (provisioned concurrency, VPC, EFS, dead-letter notification, tracing, async events, event source mapping, IAM role, IAM policies, and more).
  • Support integration with other serverless.tf modules like HTTP API Gateway (see examples there).


Usage

Lambda Function (store package locally)

module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda1"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  source_path = "../src/lambda-function1"

  tags = {
    Name = "my-lambda1"
  }
}

Lambda Function and Lambda Layer (store packages on S3)

module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "lambda-with-layer"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"
  publish       = true

  source_path = "../src/lambda-function1"

  store_on_s3 = true
  s3_bucket   = "my-bucket-id-with-lambda-builds"

  layers = [
    module.lambda_layer_s3.lambda_layer_arn
  ]

  environment_variables = {
    Serverless = "Terraform"
  }

  tags = {
    Module = "lambda-with-layer"
  }
}

module "lambda_layer_s3" {
  source = "terraform-aws-modules/lambda/aws"

  create_layer = true

  layer_name          = "lambda-layer-s3"
  description         = "My amazing lambda layer (deployed from S3)"
  compatible_runtimes = ["python3.8"]

  source_path = "../src/lambda-layer"

  store_on_s3 = true
  s3_bucket   = "my-bucket-id-with-lambda-builds"
}

Lambda Functions with existing package (prebuilt) stored locally

module "lambda_function_existing_package_local" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-existing-package-local"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  create_package         = false
  local_existing_package = "../existing_package.zip"
}

Lambda Function or Lambda Layer with the deployable artifact maintained separately from the infrastructure

You may want to manage function code and infrastructure resources (such as IAM permissions, policies, events, etc.) in separate flows (e.g., different repositories, teams, CI/CD pipelines).

In this case, disable source code tracking to turn off deployments (and rollbacks) via the module by setting ignore_source_code_hash = true, and deploy a dummy function.

Once the infrastructure and the dummy function are deployed, you can use an external tool to update the source code of the function (eg, using AWS CLI) and keep using this module via Terraform to manage the infrastructure.

Be aware that changes in local_existing_package value may trigger deployment via Terraform.
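For illustration, updating the function code outside of Terraform with the AWS CLI could look like this (the function name and package path are hypothetical):

```sh
aws lambda update-function-code \
  --function-name my-lambda-externally-managed-package \
  --zip-file fileb://lambda_functions/code.zip
```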

module "lambda_function_externally_managed_package" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-externally-managed-package"
  description   = "My lambda function code is deployed separately"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  create_package         = false
  local_existing_package = "./lambda_functions/code.zip"

  ignore_source_code_hash = true
}

Lambda Function with existing package (prebuilt) stored in S3 bucket

Note that this module does not copy prebuilt packages into an S3 bucket. It can only store packages that it builds itself, either locally or in an S3 bucket.

locals {
  my_function_source = "../path/to/package.zip"
}

resource "aws_s3_bucket" "builds" {
  bucket = "my-builds"
  acl    = "private"
}

resource "aws_s3_object" "my_function" {
  bucket = aws_s3_bucket.builds.id
  key    = "${filemd5(local.my_function_source)}.zip"
  source = local.my_function_source
}

module "lambda_function_existing_package_s3" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-existing-package-s3"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  create_package      = false
  s3_existing_package = {
    bucket = aws_s3_bucket.builds.id
    key    = aws_s3_object.my_function.id
  }
}

Lambda Functions from Container Image stored on AWS ECR

module "lambda_function_container_image" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-from-container-image"
  description   = "My awesome lambda function"

  create_package = false

  image_uri    = "132367819851.dkr.ecr.eu-west-1.amazonaws.com/complete-cow:1.0"
  package_type = "Image"
}

Lambda Layers (store packages locally and on S3)

module "lambda_layer_local" {
  source = "terraform-aws-modules/lambda/aws"

  create_layer = true

  layer_name          = "my-layer-local"
  description         = "My amazing lambda layer (deployed from local)"
  compatible_runtimes = ["python3.8"]

  source_path = "../fixtures/python3.8-app1"
}

module "lambda_layer_s3" {
  source = "terraform-aws-modules/lambda/aws"

  create_layer = true

  layer_name          = "my-layer-s3"
  description         = "My amazing lambda layer (deployed from S3)"
  compatible_runtimes = ["python3.8"]

  source_path = "../fixtures/python3.8-app1"

  store_on_s3 = true
  s3_bucket   = "my-bucket-id-with-lambda-builds"
}


Lambda@Edge

Make sure you deploy Lambda@Edge functions into the US East (N. Virginia) region (us-east-1). See Requirements and Restrictions on Lambda Functions.

module "lambda_at_edge" {
  source = "terraform-aws-modules/lambda/aws"

  lambda_at_edge = true

  function_name = "my-lambda-at-edge"
  description   = "My awesome lambda@edge function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  source_path = "../fixtures/python3.8-app1"

  tags = {
    Module = "lambda-at-edge"
  }
}

Lambda Function in VPC

module "lambda_function_in_vpc" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-in-vpc"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  source_path = "../fixtures/python3.8-app1"

  vpc_subnet_ids         = module.vpc.intra_subnets
  vpc_security_group_ids = [module.vpc.default_security_group_id]
  attach_network_policy  = true
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16" # example CIDR; use your own

  # Specify at least one of: intra_subnets, private_subnets, or public_subnets
  azs           = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  intra_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

Additional IAM policies for Lambda Functions

There are 6 supported ways to attach IAM policies to the IAM role used by the Lambda Function:

  1. policy_json - JSON string or heredoc, when attach_policy_json = true.
  2. policy_jsons - List of JSON strings or heredoc, when attach_policy_jsons = true and number_of_policy_jsons > 0.
  3. policy - ARN of existing IAM policy, when attach_policy = true.
  4. policies - List of ARNs of existing IAM policies, when attach_policies = true and number_of_policies > 0.
  5. policy_statements - Map of maps to define IAM statements which will be generated as IAM policy. Requires attach_policy_statements = true. See examples/complete for more information.
  6. assume_role_policy_statements - Map of maps to define IAM statements which will be generated as IAM policy for assuming Lambda Function role (trust relationship). See examples/complete for more information.
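As an illustration, attaching an inline JSON policy could look like this (the policy statement itself is a hypothetical example):

```hcl
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  # ... function arguments omitted ...

  attach_policy_json = true
  policy_json = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["xray:GetSamplingStatisticSummaries"]
      Resource = ["*"]
    }]
  })
}
```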

Lambda Permissions for allowed triggers

Lambda Permissions should be specified to allow certain resources to invoke Lambda Function.

module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  # ...omitted for brevity

  allowed_triggers = {
    APIGatewayAny = {
      service    = "apigateway"
      source_arn = "arn:aws:execute-api:eu-west-1:135367859851:aqnku8akd0/*/*/*"
    }
    APIGatewayDevPost = {
      service    = "apigateway"
      source_arn = "arn:aws:execute-api:eu-west-1:135367859851:aqnku8akd0/dev/POST/*"
    }
    OneRule = {
      principal  = "events.amazonaws.com"
      source_arn = "arn:aws:events:eu-west-1:135367859851:rule/RunDaily"
    }
  }
}

Conditional creation

Sometimes you need a way to create resources conditionally, but Terraform does not allow the usage of count inside a module block, so the solution is to specify create arguments.

module "lambda" {
  source = "terraform-aws-modules/lambda/aws"

  create = false # to disable all resources

  create_package  = false  # to control build package process
  create_function = false  # to control creation of the Lambda Function and related resources
  create_layer    = false  # to control creation of the Lambda Layer and related resources
  create_role     = false  # to control creation of the IAM role and policies required for Lambda Function

  attach_cloudwatch_logs_policy = false
  attach_dead_letter_policy     = false
  attach_network_policy         = false
  attach_tracing_policy         = false
  attach_async_event_policy     = false

  # ... omitted
}

How does building and packaging work?

This is one of the most complicated parts handled by the module, and normally you don't have to know the internals.

package.py is the Python script which does it. Make sure Python 3.6 or newer is installed. The main functions of the script are to generate a filename for the zip-archive based on the content of the files, verify whether the zip-archive has already been created, and create the zip-archive only when it is necessary (during apply, not plan).

The hash of a zip-archive created from the same file content is always identical, which prevents unnecessary force-updates of the Lambda resources unless the content changes. If you need different filenames for the same content, you can specify the extra string argument hash_extra.

When calling this module multiple times in one execution to create packages with the same source_path, zip-archives will be corrupted due to concurrent writes into the same file. There are two solutions - set different values for hash_extra to create different archives, or create the package once outside (using this module) and then pass the local_existing_package argument to create the other Lambda resources.
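The idea behind the content-addressed naming can be sketched like this (a simplified illustration in plain Python, not the module's actual package.py implementation):

```python
import hashlib

def content_hash(files, hash_extra=""):
    """Return a stable hash over (relative path, content) pairs.
    Identical inputs always yield the identical archive name, so
    Lambda resources are not force-updated unless content changes."""
    h = hashlib.sha256()
    for name, content in sorted(files):
        h.update(name.encode())    # the relative path matters...
        h.update(content)          # ...and so does the file content
    h.update(hash_extra.encode())  # hash_extra disambiguates archives
    return h.hexdigest()
```

Because the hash is deterministic, two runs over the same sources pick the same archive filename, and a different hash_extra yields a different one.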


Debug

Building and packaging have historically been hard to debug (especially with Terraform), so we made an effort to make it easier for users to see debug info. There are several debug levels:

  • DEBUG - to see only what is happening during the planning phase and how the zip file content is filtered when patterns are applied
  • DEBUG2 - to see more logging output
  • DEBUG3 - to see all logging values
  • DUMP_ENV - to see all logging values and environment variables (be careful sharing your environment variables as they may contain secrets!)

You can specify the debug level like this:

TF_LAMBDA_PACKAGE_LOG_LEVEL=DEBUG2 terraform apply

You can enable comments in heredoc strings in patterns, which can be helpful in some situations. To do this, set the corresponding environment variable before running:

terraform apply

Build Dependencies

You can specify source_path in a variety of ways to achieve desired flexibility when building deployment packages locally or in Docker. You can use absolute or relative paths. If you have placed Terraform files in subdirectories, note that relative paths are specified from the directory where terraform plan is run, not from the location of your Terraform file.

Note that, when building locally, files are not copied anywhere from the source directories when making packages; fast Python regular expressions are used to find matching files and directories, which makes packaging very fast and easy to understand.

Simple build from single directory

When source_path is set to a string, the content of that path will be used to create deployment package as-is:

source_path = "src/function1"

Static build from multiple source directories

When source_path is set to a list of directories the content of each will be taken and one archive will be created.
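For instance (directory names are hypothetical):

```hcl
source_path = [
  "src/function1",
  "src/function1-shared-libs",
]
```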

Combine various options for extreme flexibility

This is the most complete way of creating a deployment package from multiple sources with multiple dependencies. This example shows some of the available options (see examples/build-package for more):

source_path = [
  {
    path     = "src/function1-dep",
    patterns = [
      "!.*/.*\\.txt", # Skip all txt files recursively
    ]
  }, {
    path             = "src/python3.8-app1",
    pip_requirements = true,
    pip_tmp_dir      = "/tmp/dir/location",
    prefix_in_zip    = "foo/bar1",
  }, {
    path             = "src/python3.8-app2",
    pip_requirements = "requirements-large.txt",
    patterns = [
      # ... patterns omitted ...
    ]
  }, {
    path             = "src/nodejs14.x-app1",
    npm_requirements = true,
    npm_tmp_dir      = "/tmp/dir/location",
    prefix_in_zip    = "foo/bar1",
  }, {
    path     = "src/python3.8-app3",
    commands = [
      "npm install",
      # ... more commands omitted ...
    ],
    patterns = [
      "!.*/.*\\.txt",    # Skip all txt files recursively
      "node_modules/.+", # Include all node_modules
    ]
  }, {
    path     = "src/python3.8-app3",
    commands = ["go build"],
    patterns = <<END
      ...
    END
  },
]

A few notes:

  • If you specify a source path as a string that references a folder and the runtime begins with python or nodejs, the build process will automatically build python and nodejs dependencies if a requirements.txt or package.json file is found in the source folder. If you want to customize this behavior, use the object notation as shown in the example above.
  • All arguments except path are optional.
  • patterns - List of Python regexes which filenames should satisfy. The default value is "include everything", which is equal to patterns = [".*"]. This can also be specified as a multiline heredoc string (no comments allowed). Some examples of valid patterns:
    !.*/.*\.txt        # Filter all txt files recursively
    node_modules/.*    # Include empty dir or with a content if it exists
    node_modules/.+    # Include full non empty node_modules dir with its content
    node_modules/      # Include node_modules itself without its content
                       # It's also a way to include an empty dir if it exists
    node_modules       # Include a file or an existing dir only

    !abc/.*            # Filter out everything in an abc folder
    abc/def/.*         # Re-include everything in abc/def sub folder
    !abc/def/hgk/.*    # Filter out again in abc/def/hgk sub folder
  • commands - List of commands to run. If specified, this argument overrides pip_requirements and npm_requirements.
    • :zip [source] [destination] is a special command which zips the content of the current working directory (first argument) and places it inside the archive at the given path (second argument).
  • pip_requirements - Controls whether to execute pip install. Set to false to disable this feature, true to run pip install with requirements.txt found in path. Or set to another filename which you want to use instead.
  • pip_tmp_dir - Set the base directory to make the temporary directory for pip installs. Can be useful for Docker in Docker builds.
  • npm_requirements - Controls whether to execute npm install. Set to false to disable this feature, true to run npm install with package.json found in path. Or set to another filename which you want to use instead.
  • npm_tmp_dir - Set the base directory to make the temporary directory for npm installs. Can be useful for Docker in Docker builds.
  • prefix_in_zip - If specified, will be used as a prefix inside zip-archive. By default, everything installs into the root of zip-archive.
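The ordered include/exclude semantics of patterns can be sketched like this (a simplified illustration, not the module's actual implementation; file names are hypothetical):

```python
import re

def apply_patterns(files, patterns):
    """Walk the ordered pattern list: plain regexes re-include
    matching paths, '!'-prefixed regexes exclude them. Later
    rules win, so an exclude can be partially re-included."""
    selected = set(files)  # default is "include everything" (".*")
    for raw in patterns:
        negate = raw.startswith("!")
        regex = re.compile(raw[1:] if negate else raw)
        for f in files:
            if regex.fullmatch(f):
                (selected.discard if negate else selected.add)(f)
    return sorted(selected)
```

For example, excluding all txt files recursively and then re-including a subfolder keeps only the re-included match.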

Building in Docker

If your Lambda Function or Layer uses dependencies, you can build them in Docker and have them included in the deployment package. Here is how you can do it:

build_in_docker   = true
docker_file       = "src/python3.8-app1/docker/Dockerfile"
docker_build_root = "src/python3.8-app1/docker"
docker_image      = "public.ecr.aws/sam/build-python3.8"
runtime           = "python3.8"    # Setting the runtime is required when building the package in Docker and for the Lambda Layer resource.

Using this module, you can install dependencies from private hosts. To do this, you need to forward the SSH agent:

docker_with_ssh_agent = true

Note that by default, the docker_image used comes from the registry public.ecr.aws/sam/, and will be based on the runtime that you specify. In other words, if you specify a runtime of python3.8 and do not specify docker_image, then the docker_image will resolve to public.ecr.aws/sam/build-python3.8. This ensures that by default the runtime is available in the docker container.

If you override docker_image, be sure to keep the image in sync with your runtime. During the plan phase, when using docker, there is no check that the runtime is available to build the package. That means that if you use an image that does not have the runtime, the plan will still succeed, but then the apply will fail.

Passing additional Docker options

To add flexibility when building in docker, you can pass any number of additional options that you require (see Docker run reference for available options):

  docker_additional_options = [
    "-e", "MY_ENV_VAR='My environment variable value'",
    "-v", "/local:/docker-vol",
  ]

Overriding Docker Entrypoint

To override the docker entrypoint when building in docker, set docker_entrypoint:

  docker_entrypoint = "/entrypoint/entrypoint.sh"

The entrypoint must map to a path within your container, so you need to either build your own image that contains the entrypoint or map it to a file on the host by mounting a volume (see Passing additional Docker options).
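For example, combining the two options to mount a custom entrypoint from the host (paths are hypothetical):

```hcl
docker_entrypoint         = "/entrypoint/entrypoint.sh"
docker_additional_options = ["-v", "/local/entrypoint-dir:/entrypoint"]
```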

Deployment package - Create or use existing

By default, this module creates deployment package and uses it to create or update Lambda Function or Lambda Layer.

Sometimes, you may want to separate the build of the deployment package (eg, to compile and install dependencies) from the deployment of the package into two separate steps.

When creating the archive locally outside of this module, you need to set create_package = false and the argument local_existing_package = "existing_package.zip". Alternatively, you may prefer to keep your deployment packages in an S3 bucket and provide a reference to them like this:

  create_package      = false
  s3_existing_package = {
    bucket = "my-bucket-with-lambda-builds"
    key    = "existing_package.zip"

Using deployment package from remote URL

This can be implemented in two steps: download the file locally using curl, and pass the path to the deployment package via the local_existing_package argument.

locals {
  package_url = "https://raw.githubusercontent.com/terraform-aws-modules/terraform-aws-lambda/master/examples/fixtures/python3.8-zip/existing_package.zip"
  downloaded  = "downloaded_package_${md5(local.package_url)}.zip"
}

resource "null_resource" "download_package" {
  triggers = {
    downloaded = local.downloaded
  }

  provisioner "local-exec" {
    command = "curl -L -o ${local.downloaded} ${local.package_url}"
  }
}

data "null_data_source" "downloaded_package" {
  inputs = {
    id       = null_resource.download_package.id
    filename = local.downloaded
  }
}

module "lambda_function_existing_package_from_remote_url" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-existing-package-from-remote-url"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  create_package         = false
  local_existing_package = data.null_data_source.downloaded_package.outputs["filename"]
}

How to use AWS SAM CLI to test Lambda Function?

AWS SAM CLI is an open source tool that helps developers initialize, build, test, and deploy serverless applications. Currently, the SAM CLI tool only supports CloudFormation applications, but the SAM CLI team is working on a feature to extend the testing capabilities to support Terraform applications (check this GitHub issue to stay updated about upcoming releases and the features included in each release for the Terraform support feature).

SAM CLI provides two ways of testing: local testing and testing on-cloud (Accelerate).

Local Testing

Using SAM CLI, you can invoke the Lambda functions defined in the Terraform application locally using the sam local invoke command, providing the function's Terraform address or function name, and setting --hook-name to terraform to tell SAM CLI that the underlying project is a Terraform application.

You can execute the sam local invoke command from your Terraform application root directory as follows:

sam local invoke --hook-name terraform module.hello_world_function.aws_lambda_function.this[0] 

You can also pass an event to your lambda function, or overwrite its environment variables. Check here for more information.

You can also invoke your lambda function in debugging mode, and step-through your lambda function source code locally in your preferred editor. Check here for more information.

Testing on-cloud (Accelerate)

You can use AWS SAM CLI to quickly test your application in your AWS development account. Using SAM Accelerate, you can develop your Lambda functions locally, and once you save your updates, SAM CLI will update your development account with the updated Lambda functions. This lets you test in the cloud; if there is a bug, you can quickly update the code and SAM CLI will take care of pushing it to the cloud. Check here for more information about SAM Accelerate.

You can execute the sam sync command from your Terraform application root directory as follows:

sam sync --hook-name terraform --watch 

How to deploy and manage Lambda Functions?

Simple deployments

Typically, the Lambda Function resource updates when the source code changes. If publish = true is specified, a new Lambda Function version will also be created.

A published Lambda Function can be invoked either by version number or via $LATEST. This is the simplest way of deploying, and it does not require any additional tool or service.
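For illustration, invoking a specific published version with the AWS CLI might look like this (the function name and version number are hypothetical):

```sh
aws lambda invoke --function-name my-lambda1 --qualifier 2 response.json
```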

Controlled deployments (rolling, canary, rollbacks)

In order to do controlled deployments (rolling, canary, rollbacks) of Lambda Functions we need to use Lambda Function aliases.

In simple terms, a Lambda alias is like a pointer either to one version of a Lambda Function (when deployment is complete), or to two weighted versions of a Lambda Function (during a rolling or canary deployment).
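Under the hood, a weighted alias corresponds to the aws_lambda_alias resource with a routing_config block; for example (function name and version numbers are illustrative):

```hcl
resource "aws_lambda_alias" "current" {
  name             = "current"
  function_name    = "my-lambda1"
  function_version = "2" # stable version

  routing_config {
    additional_version_weights = {
      "3" = 0.1 # send 10% of traffic to the new version
    }
  }
}
```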

One Lambda Function can be used in multiple aliases. Using aliases gives you fine-grained control over which version is deployed when you have multiple environments.

There is the alias module, which simplifies working with aliases (create, manage configurations, updates, etc). See examples/alias for various use-cases of how aliases can be configured and used.

There is the deploy module, which creates the resources required to do deployments using AWS CodeDeploy. It also creates the deployment and waits for completion. See examples/deploy for the complete end-to-end build/update/deploy process.

Terraform CI/CD

Terraform Cloud, Terraform Enterprise, and many other SaaS for running Terraform do not have Python pre-installed on the workers. You will need to provide an alternative Docker image with Python installed to be able to use this module there.


FAQ

Q1: Why is the deployment package not recreated every time I change something? Or, why is the deployment package recreated every time even though the content has not changed?

Answer: There can be several reasons, related to concurrent executions or to the content hash. Sometimes changes have happened inside a dependency which is not used in calculating the content hash. Or multiple packages are being created at the same time from the same sources. You can force a rebuild by setting hash_extra to a distinct value.

Q2: How to force recreate deployment package?

Answer: Delete an existing zip-archive from the builds directory, or make a change in your source code. If there is no zip-archive for the current content hash, it will be recreated during terraform apply.

Q3: null_resource.archive[0] must be replaced

Answer: This probably means that the zip-archive has been deployed, but is currently absent locally, so it has to be recreated locally. When you run into this issue during a CI/CD process (where the workspace is clean) or from multiple workspaces, you can set the environment variable TF_RECREATE_MISSING_LAMBDA_PACKAGE=false or pass recreate_missing_package = false as a parameter to the module and run terraform apply.
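For example, in a clean CI/CD workspace you might run:

```sh
TF_RECREATE_MISSING_LAMBDA_PACKAGE=false terraform apply
```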

Q4: What does this error mean - "We currently do not support adding policies for $LATEST." ?

Answer: When the Lambda function is created with publish = true, a new version is automatically published, and a qualified identifier (version number) becomes available and will be used when setting Lambda permissions.

When publish = false (default), only the unqualified identifier ($LATEST) is available, which leads to the error.

The solution is to either disable the creation of Lambda permissions for the current version by setting create_current_version_allowed_triggers = false, or to enable publishing of the Lambda function (publish = true).


Notes

  1. Creation of Lambda Functions and Lambda Layers is very similar and both support the same features (building from source path, using existing package, storing package locally or on S3)
  2. Check out this Awesome list of AWS Lambda Layers


Examples

  • Complete - Create Lambda resources in various combinations with all supported features.
  • Container Image - Create a Docker image with a platform specified in the Dockerfile (using docker provider), push it to AWS ECR, and create Lambda function from it.
  • Build and Package - Build and create deployment packages in various ways.
  • Alias - Create static and dynamic aliases in various ways.
  • Deploy - Complete end-to-end build/update/deploy process using AWS CodeDeploy.
  • Async Invocations - Create Lambda Function with async event configuration (with SQS, SNS, and EventBridge integration).
  • With VPC - Create Lambda Function with VPC.
  • With VPC and VPC Endpoint for S3 - Create Lambda Function with VPC and VPC Endpoint for S3.
  • With EFS - Create Lambda Function with Elastic File System attached (Terraform 0.13+ is recommended).
  • Multiple regions - Create the same Lambda Function in multiple regions with non-conflicting IAM roles and policies.
  • Event Source Mapping - Create Lambda Function with event source mapping configuration (SQS, DynamoDB, Amazon MQ, and Kinesis).
  • Triggers - Create Lambda Function with some triggers (eg, Cloudwatch Events, EventBridge).
  • Code Signing - Create Lambda Function with code signing configuration.

Examples by the users of this module


Requirements

| Name | Version |
|------|---------|
| terraform | >= 0.13.1 |
| aws | >= 4.9 |
| external | >= 1.0 |
| local | >= 1.0 |
| null | >= 2.0 |


Providers

| Name | Version |
|------|---------|
| aws | >= 4.9 |
| external | >= 1.0 |
| local | >= 1.0 |
| null | >= 2.0 |


Modules

No modules.


Resources

| Name | Type |
|------|------|
| aws_arn.log_group_arn | data source |
| aws_caller_identity.current | data source |
| aws_cloudwatch_log_group.lambda | data source |
| aws_iam_policy.tracing | data source |
| aws_iam_policy.vpc | data source |
| aws_iam_policy_document.additional_inline | data source |
| aws_iam_policy_document.assume_role | data source |
| aws_iam_policy_document.async | data source |
| aws_iam_policy_document.dead_letter | data source |
| aws_iam_policy_document.logs | data source |
| aws_partition.current | data source |
| aws_region.current | data source |
| external_external.archive_prepare | data source |


Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| allowed_triggers | Map of allowed triggers to create Lambda permissions | map(any) | {} | no |
| architectures | Instruction set architecture for your Lambda function. Valid values are ["x86_64"] and ["arm64"]. | list(string) | null | no |
| artifacts_dir | Directory name where artifacts should be stored | string | "builds" | no |
| assume_role_policy_statements | Map of dynamic policy statements for assuming Lambda Function role (trust relationship) | any | {} | no |
| attach_async_event_policy | Controls whether async event policy should be added to IAM role for Lambda Function | bool | false | no |
| attach_cloudwatch_logs_policy | Controls whether CloudWatch Logs policy should be added to IAM role for Lambda Function | bool | true | no |
| attach_dead_letter_policy | Controls whether SNS/SQS dead letter notification policy should be added to IAM role for Lambda Function | bool | false | no |
| attach_network_policy | Controls whether VPC/network policy should be added to IAM role for Lambda Function | bool | false | no |
| attach_policies | Controls whether list of policies should be added to IAM role for Lambda Function | bool | false | no |
| attach_policy | Controls whether policy should be added to IAM role for Lambda Function | bool | false | no |
| attach_policy_json | Controls whether policy_json should be added to IAM role for Lambda Function | bool | false | no |
| attach_policy_jsons | Controls whether policy_jsons should be added to IAM role for Lambda Function | bool | false | no |
| attach_policy_statements | Controls whether policy_statements should be added to IAM role for Lambda Function | bool | false | no |
| attach_tracing_policy | Controls whether X-Ray tracing policy should be added to IAM role for Lambda Function | bool | false | no |
| authorization_type | The type of authentication that the Lambda Function URL uses. Set to 'AWS_IAM' to restrict access to authenticated IAM users only. Set to 'NONE' to bypass IAM authentication and create a public endpoint. | string | "NONE" | no |
| build_in_docker | Whether to build dependencies in Docker | bool | false | no |
| cloudwatch_logs_kms_key_id | The ARN of the KMS Key to use when encrypting log data. | string | null | no |
| cloudwatch_logs_retention_in_days | Specifies the number of days you want to retain log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | number | null | no |
| cloudwatch_logs_tags | A map of tags to assign to the resource. | map(string) | {} | no |
| code_signing_config_arn | Amazon Resource Name (ARN) for a Code Signing Configuration | string | null | no |
| compatible_architectures | A list of Architectures Lambda layer is compatible with. Currently x86_64 and arm64 can be specified. | list(string) | null | no |
| compatible_runtimes | A list of Runtimes this layer is compatible with. Up to 5 runtimes can be specified. | list(string) | [] | no |
| cors | CORS settings to be used by the Lambda Function URL | any | {} | no |
| create | Controls whether resources should be created | bool | true | no |
| create_async_event_config | Controls whether async event configuration for Lambda Function/Alias should be created | bool | false | no |
| create_current_version_allowed_triggers | Whether to allow triggers on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources) | bool | true | no |
| create_current_version_async_event_config | Whether to allow async event configuration on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources) | bool | true | no |
| create_function | Controls whether Lambda Function resource should be created | bool | true | no |
| create_lambda_function_url | Controls whether the Lambda Function URL resource should be created | bool | false | no |
| create_layer | Controls whether Lambda Layer resource should be created | bool | false | no |
| create_package | Controls whether Lambda package should be created | bool | true | no |
| create_role | Controls whether IAM role for Lambda Function should be created | bool | true | no |
| create_unqualified_alias_allowed_triggers | Whether to allow triggers on unqualified alias pointing to $LATEST version | bool | true | no |
| create_unqualified_alias_async_event_config | Whether to allow async event configuration on unqualified alias pointing to $LATEST version | bool | true | no |
| create_unqualified_alias_lambda_function_url | Whether to use unqualified alias pointing to $LATEST version in Lambda Function URL | bool | true | no |
| dead_letter_target_arn | The ARN of an SNS topic or SQS queue to notify when an invocation fails. | string | null | no |
| description | Description of your Lambda Function (or Layer) | string | "" | no |
| destination_on_failure | Amazon Resource Name (ARN) of the destination resource for failed asynchronous invocations | string | null | no |
| destination_on_success | Amazon Resource Name (ARN) of the destination resource for successful asynchronous invocations | string | null | no |
| docker_additional_options | Additional options to pass to the docker run command (e.g. to set environment variables, volumes, etc.) | list(string) | [] | no |
| docker_build_root | Root dir where to build in Docker | string | "" | no |
| docker_entrypoint | Path to the Docker entrypoint to use | string | null | no |
| docker_file | Path to a Dockerfile when building in Docker | string | "" | no |
| docker_image | Docker image to use for the build | string | "" | no |
| docker_pip_cache | Whether to mount a shared pip cache folder into docker environment or not | any | null | no |
| docker_with_ssh_agent | Whether to pass SSH_AUTH_SOCK into docker environment or not | bool | false | no |
| environment_variables | A map that defines environment variables for the Lambda Function. | map(string) | {} | no |
| ephemeral_storage_size | Amount of ephemeral storage (/tmp) in MB your Lambda Function can use at runtime. Valid value between 512 MB to 10,240 MB (10 GB). | number | 512 | no |
| event_source_mapping | Map of event source mapping | any | {} | no |
| file_system_arn | The Amazon Resource Name (ARN) of the Amazon EFS Access Point that provides access to the file system. | string | null | no |
file_system_local_mount_pathThe path where the function can access the file system, starting with /mnt/.stringnullno
function_nameA unique name for your Lambda Functionstring""no
handlerLambda Function entrypoint in your codestring""no
hash_extraThe string to add into hashing function. Useful when building same source path for different functions.string""no
ignore_source_code_hashWhether to ignore changes to the function's source code hash. Set to true if you manage infrastructure and code deployments separately.boolfalseno
image_config_commandThe CMD for the docker imagelist(string)[]no
image_config_entry_pointThe ENTRYPOINT for the docker imagelist(string)[]no
image_config_working_directoryThe working directory for the docker imagestringnullno
image_uriThe ECR image URI containing the function's deployment package.stringnullno
kms_key_arnThe ARN of KMS key to use by your Lambda Functionstringnullno
lambda_at_edgeSet this to true if using Lambda@Edge, to enable publishing, limit the timeout, and allow edgelambda.amazonaws.com to invoke the functionboolfalseno
lambda_roleIAM role ARN attached to the Lambda Function. This governs both who / what can invoke your Lambda Function, as well as what resources our Lambda Function has access to. See Lambda Permission Model for more details.string""no
layer_nameName of Lambda Layer to createstring""no
layer_skip_destroyWhether to retain the old version of a previously deployed Lambda Layer.boolfalseno
layersList of Lambda Layer Version ARNs (maximum of 5) to attach to your Lambda Function.list(string)nullno
license_infoLicense info for your Lambda Layer. Eg, MIT or full url of a license.string""no
local_existing_packageThe absolute path to an existing zip-file to usestringnullno
maximum_event_age_in_secondsMaximum age of a request that Lambda sends to a function for processing in seconds. Valid values between 60 and 21600.numbernullno
maximum_retry_attemptsMaximum number of times to retry when the function returns an error. Valid values between 0 and 2. Defaults to 2.numbernullno
memory_sizeAmount of memory in MB your Lambda Function can use at runtime. Valid value between 128 MB to 10,240 MB (10 GB), in 64 MB increments.number128no
number_of_policiesNumber of policies to attach to IAM role for Lambda Functionnumber0no
number_of_policy_jsonsNumber of policies JSON to attach to IAM role for Lambda Functionnumber0no
package_typeThe Lambda deployment package type. Valid options: Zip or Imagestring"Zip"no
policiesList of policy statements ARN to attach to Lambda Function rolelist(string)[]no
policyAn additional policy document ARN to attach to the Lambda Function rolestringnullno
policy_jsonAn additional policy document as JSON to attach to the Lambda Function rolestringnullno
policy_jsonsList of additional policy documents as JSON to attach to Lambda Function rolelist(string)[]no
policy_nameIAM policy name. It override the default value, which is the same as role_namestringnullno
policy_pathPath of policies to that should be added to IAM role for Lambda Functionstringnullno
policy_statementsMap of dynamic policy statements to attach to Lambda Function roleany{}no
provisioned_concurrent_executionsAmount of capacity to allocate. Set to 1 or greater to enable, or set to 0 to disable provisioned concurrency.number-1no
publishWhether to publish creation/change as new Lambda Function Version.boolfalseno
putin_khuyloDo you agree that Putin doesn't respect Ukrainian sovereignty and territorial integrity? More info: https://en.wikipedia.org/wiki/Putin_khuylo!booltrueno
recreate_missing_packageWhether to recreate missing Lambda package if it is missing locally or notbooltrueno
reserved_concurrent_executionsThe amount of reserved concurrent executions for this Lambda Function. A value of 0 disables Lambda Function from being triggered and -1 removes any concurrency limitations. Defaults to Unreserved Concurrency Limits -1.number-1no
role_descriptionDescription of IAM role to use for Lambda Functionstringnullno
role_force_detach_policiesSpecifies to force detaching any policies the IAM role has before destroying it.booltrueno
role_nameName of IAM role to use for Lambda Functionstringnullno
role_pathPath of IAM role to use for Lambda Functionstringnullno
role_permissions_boundaryThe ARN of the policy that is used to set the permissions boundary for the IAM role used by Lambda Functionstringnullno
role_tagsA map of tags to assign to IAM rolemap(string){}no
runtimeLambda Function runtimestring""no
s3_aclThe canned ACL to apply. Valid values are private, public-read, public-read-write, aws-exec-read, authenticated-read, bucket-owner-read, and bucket-owner-full-control. Defaults to private.string"private"no
s3_bucketS3 bucket to store artifactsstringnullno
s3_existing_packageThe S3 bucket object with keys bucket, key, version pointing to an existing zip-file to usemap(string)nullno
s3_object_storage_classSpecifies the desired Storage Class for the artifact uploaded to S3. Can be either STANDARD, REDUCED_REDUNDANCY, ONEZONE_IA, INTELLIGENT_TIERING, or STANDARD_IA.string"ONEZONE_IA"no
s3_object_tagsA map of tags to assign to S3 bucket object.map(string){}no
s3_object_tags_onlySet to true to not merge tags with s3_object_tags. Useful to avoid breaching S3 Object 10 tag limit.boolfalseno
s3_prefixDirectory name where artifacts should be stored in the S3 bucket. If unset, the path from artifacts_dir is usedstringnullno
s3_server_side_encryptionSpecifies server-side encryption of the object in S3. Valid values are "AES256" and "aws:kms".stringnullno
source_pathThe absolute path to a local file or directory containing your Lambda source codeanynullno
store_on_s3Whether to store produced artifacts on S3 or locally.boolfalseno
tagsA map of tags to assign to resources.map(string){}no
timeoutThe amount of time your Lambda Function has to run in seconds.number3no
tracing_modeTracing mode of the Lambda Function. Valid value can be either PassThrough or Active.stringnullno
trusted_entitiesList of additional trusted entities for assuming Lambda Function role (trust relationship)any[]no
use_existing_cloudwatch_log_groupWhether to use an existing CloudWatch log group or create newboolfalseno
vpc_security_group_idsList of security group ids when Lambda Function should run in the VPC.list(string)nullno
vpc_subnet_idsList of subnet ids when Lambda Function should run in the VPC. Usually private or intra subnets.list(string)nullno
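To tie several of these inputs together, here is a hypothetical configuration (the function name, bucket name, and values are illustrative examples, not defaults):

```hcl
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda1"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"
  timeout       = 30
  memory_size   = 256

  # Build the package from local sources and keep artifacts in S3
  source_path = "../src/lambda-function1"
  store_on_s3 = true
  s3_bucket   = "my-bucket-with-lambda-builds"

  environment_variables = {
    Serverless = "Terraform"
  }

  tags = {
    Module = "lambda"
  }
}
```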


Outputs

| Name | Description |
|------|-------------|
| `lambda_cloudwatch_log_group_arn` | The ARN of the CloudWatch Log Group |
| `lambda_cloudwatch_log_group_name` | The name of the CloudWatch Log Group |
| `lambda_event_source_mapping_function_arn` | The ARN of the Lambda function the event source mapping is sending events to |
| `lambda_event_source_mapping_state` | The state of the event source mapping |
| `lambda_event_source_mapping_state_transition_reason` | The reason the event source mapping is in its current state |
| `lambda_event_source_mapping_uuid` | The UUID of the created event source mapping |
| `lambda_function_arn` | The ARN of the Lambda Function |
| `lambda_function_arn_static` | The static ARN of the Lambda Function. Use this to avoid cycle errors between resources (e.g., Step Functions) |
| `lambda_function_invoke_arn` | The Invoke ARN of the Lambda Function |
| `lambda_function_kms_key_arn` | The ARN for the KMS encryption key of Lambda Function |
| `lambda_function_last_modified` | The date the Lambda Function resource was last modified |
| `lambda_function_name` | The name of the Lambda Function |
| `lambda_function_qualified_arn` | The ARN identifying your Lambda Function Version |
| `lambda_function_signing_job_arn` | ARN of the signing job |
| `lambda_function_signing_profile_version_arn` | ARN of the signing profile version |
| `lambda_function_source_code_hash` | Base64-encoded representation of the raw SHA-256 sum of the zip file |
| `lambda_function_source_code_size` | The size in bytes of the function .zip file |
| `lambda_function_url` | The URL of the Lambda Function URL |
| `lambda_function_url_id` | The generated id of the Lambda Function URL |
| `lambda_function_version` | Latest published version of Lambda Function |
| `lambda_layer_arn` | The ARN of the Lambda Layer with version |
| `lambda_layer_created_date` | The date the Lambda Layer resource was created |
| `lambda_layer_layer_arn` | The ARN of the Lambda Layer without version |
| `lambda_layer_source_code_size` | The size in bytes of the Lambda Layer .zip file |
| `lambda_layer_version` | The Lambda Layer version |
| `lambda_role_arn` | The ARN of the IAM role created for the Lambda Function |
| `lambda_role_name` | The name of the IAM role created for the Lambda Function |
| `lambda_role_unique_id` | The unique id of the IAM role created for the Lambda Function |
| `local_filename` | The filename of the zip archive deployed (if deployment was from local) |
| `s3_object` | The map with S3 object data of the zip archive deployed (if deployment was from S3) |
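These outputs can be referenced from other resources in the same configuration. A sketch (the SNS topic ARN and resource names are illustrative):

```hcl
module "lambda_function" {
  source        = "terraform-aws-modules/lambda/aws"
  function_name = "my-lambda1"
  # ...
}

# Subscribe the function to a pre-existing (illustrative) SNS topic
resource "aws_sns_topic_subscription" "lambda" {
  topic_arn = "arn:aws:sns:eu-west-1:123456789012:my-topic"
  protocol  = "lambda"
  endpoint  = module.lambda_function.lambda_function_arn
}

output "lambda_name" {
  value = module.lambda_function.lambda_function_name
}
```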



During development involving modifying python files, use tox to run unit tests:

tox

This will try to run unit tests with each supported python version, reporting errors for python versions which are not installed locally.

If you only want to test against your main python version:

tox -e py

You can also pass additional positional arguments to pytest, which is used to run the tests, e.g. to make the output verbose:

tox -e py -- -vvv

Additional information for users from Russia and Belarus

Download Details:

Author: Terraform-aws-modules
Source Code: https://github.com/terraform-aws-modules/terraform-aws-lambda 
License: Apache-2.0 license


How to Solve The Issue PHP Warning: Module 'imagick' Is Already Loaded

Solve PHP Warning: Module 'imagick' is already loaded

PHP will show the Module “imagick” is already loaded message when you try to load the php_imagick extension more than once.

The full message is as follows:

PHP Warning:  Module "imagick" is already loaded in Unknown on line 0

Many people think this warning comes from their PHP code because of the “line 0” part.

But this is actually a PHP configuration issue, so you can’t solve this warning by looking at your source code.

Here are the steps required to fix the issue:

Step #1: Find your php.ini location

You can find the location of your php.ini file by calling the phpinfo() function as shown below:

php.ini path example

You need to open the file location in your Explorer window.

Step #2: Find and comment the imagick extension line

Once you open the php.ini file, search if any of the following lines exist:

extension=imagick
; or
extension=imagick.so
; or
extension=php_imagick.dll

You need to make sure that only one of the lines above is active and comment out the rest.

For example, if you already have imagick.so, then you need to comment the php_imagick.dll line as follows:

; extension=php_imagick.dll

This way, the imagick extension won’t be loaded twice.

I recommend you comment out the first imagick line that you found, then restart your Apache server.

This time, the warning message should disappear.

And that’s how you solve the PHP Warning: Module “imagick” is already loaded in Unknown on line 0.

Original article source at: https://sebhastian.com/


How to Read Content Or Ask for input with Readline Module in Node

NodeJS - How to read content or ask for input with readline module.

Learn how to read file content or ask for user input using NodeJS readline module

The readline module in NodeJS provides you with a way to read a data stream from a file or ask your user for input.

To use the module, you need to import it to your JavaScript file as follows:

const readline = require('readline');

Next, you need to write the code for receiving user input or reading file content, depending on your requirement. Let’s look at how you can receive user input first.

You can see the full code for this tutorial in GitHub

Using readline to receive user input

To receive user input using readline module, you need to create an Interface instance that is connected to an input stream. You create the Interface using readline.createInterface() method, while passing the input and output options as an object argument.

Here’s an example of creating the readline interface:

const readline = require('readline');

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

rl.question('What is your name? ', (answer) => {
  console.log(`Oh, so your name is ${answer}`);
});

Both process.stdin and process.stdout mean the input and output are connected to the terminal.

Once created, the interface will be active until you close it by triggering the 'close' event, which you can do from your JavaScript code by calling rl.close(), or by pressing CTRL + D or CTRL + C.

To ask for user input, you need to call the question() method from the Interface instance, which is assigned to rl variable on the code above.

The question() method receives up to three parameters:

  • The string question you want to ask your user
  • An optional options object where you can pass an 'abort' signal
  • The callback function to execute when the answer is received, passing the answer to the function

You can skip the options object and pass the callback function as the second parameter.

Here’s how you use the question() method:

const readline = require('readline');

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

rl.question('What is your name? ', (answer) => {
  console.log(`Oh, so your name is ${answer}`);
});

Finally, you can close the rl interface by calling the rl.close() method inside the callback function:

const readline = require('readline');

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

rl.question('What is your name? ', (answer) => {
  console.log(`Oh, so your name is ${answer}`);
  console.log("Closing the interface");
  rl.close();
});

Save the file as ask.js, then call the script using NodeJS like this:

$ node ask.js
What is your name? Nathan
Oh, so your name is Nathan
Closing the interface

And that’s how you can ask for user input using NodeJS readline module.

You can also use the AbortController() from NodeJS to add a timer to your question and cancel it when a certain amount of time has passed.

But please be aware that AbortController is only available in NodeJS version 15 and up. And even then, the feature is still experimental.

The following question will be aborted when no answer was given in 5 seconds after the prompt. The code has been tested to work on NodeJS version 16.3.0 and up:

const readline = require("readline");
const ac = new AbortController();
const signal = ac.signal;

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

rl.question("What is your name? ", { signal }, (answer) => {
  console.log(`Oh, so your name is ${answer}`);
  console.log("Closing the console");
  process.exit();
});

signal.addEventListener(
  "abort",
  () => {
    console.log("The name question timed out!");
  },
  { once: true }
);

setTimeout(() => {
  ac.abort();
}, 5000); // 5 seconds

The rl.close() method is replaced with process.exit() method because the readline interface will still wait for the abort signal to close the interface if you use the rl.close() method.

Using readline to read file content

To read file content using the readline module, you need to create the same interface using the createInterface() method, but instead of connecting the input option to process.stdin, you connect it to the fs.createReadStream() method.

Take a look at the following example:

const readline = require("readline");
const fs = require("fs");

const path = "./file.txt";

const rl = readline.createInterface({
  input: fs.createReadStream(path),
});

Next, you need to add an event listener to the 'line' event, which is triggered every time an end-of-line input (\n, \r, or \r\n) is received from the input stream.

You listen for the 'line' event by using rl.on() method:

rl.on("line", function (input) {
  console.log(input);
});

Finally, let’s test the code by creating a new file named file.txt with the following content:

Roses are red
Violets are blue
Sunflowers are yellow
Pick the flower you like the most :)

Then in the same folder, create a file named read.js to read the file.txt content:

const readline = require("readline");
const fs = require("fs");

const path = "./file.txt";

const rl = readline.createInterface({
  input: fs.createReadStream(path),
});

let lineno = 1;
rl.on("line", function (input) {
  console.log("Line number " + lineno + ": " + input);
  lineno++;
});

Then execute the script using NodeJS. You’ll have the output as shown below:

$ node read.js
Line number 1: Roses are red
Line number 2: Violets are blue
Line number 3: Sunflowers are yellow
Line number 4: Pick the flower you like the most :)

The full code for this tutorial can be found on GitHub.

For more information, you can visit NodeJS readline module documentation


The JavaScript readline module is a module provided by NodeJS so you can create an interactive command-line program that receives user input or reads file content.

This module is only available in NodeJS, so you can’t use it from the browser. If you want to read file content from the browser, you need to use the FileReader class provided by the browser.

See also: Using FileReader to read a CSV file from the browser tutorial.

When you need to take user inputs from the browser, you can use HTML <form> and <input> elements or the JavaScript prompt() function.

Original article source at: https://sebhastian.com/


Reexport.jl: Julia Macro for Re-exporting one Module From another



Maybe you have a module X that depends on module Y and you want using X to pull in all of the symbols from Y. Maybe you have an outer module A with an inner module B, and you want to export all of the symbols in B from A. It would be nice to have this functionality built into Julia, but we have yet to reach an agreement on what it should look like (see JuliaLang/julia#1986). This short macro is a stopgap until we have a better solution.


@reexport using <modules> calls using <modules> and also re-exports their symbols:

module Y
    ...
end

module Z
    ...
end

module X
    using Reexport
    @reexport using Y
    # all of Y's exported symbols available here
    @reexport using Z: x, y
    # Z's x and y symbols available here
end

using X
# all of Y's exported symbols and Z's x and y also available here

@reexport import <module>.<name> or @reexport import <module>: <name> exports <name> from <module> after importing it.

module Y
    ...
end

module Z
    ...
end

module X
    using Reexport
    @reexport import Y
    # Only `Y` itself is available here
    @reexport import Z: x, y
    # Z's x and y symbols available here
end

using X
# Y (but not its exported names) and Z's x and y are available here.

@reexport module <modulename> ... end defines module <modulename> and also re-exports its symbols:

module A
    using Reexport
    @reexport module B
        ...
    end
    # all of B's exported symbols available here
end

using A
# all of B's exported symbols available here

@reexport @another_macro <import or using expression> first expands @another_macro on the expression, making @reexport compatible with other macros.

@reexport begin ... end will apply the reexport macro to every expression in the block.

Download Details:

Author: Simonster
Source Code: https://github.com/simonster/Reexport.jl 
License: View license


Julia Enhancement Proposal for Implicit Per File Module in Julia


This package exports a macro @from, which can be used to import objects from files.

The hope is that you will never have to write include again.


FromFile is a Julia Language package. To install FromFile, please open Julia's interactive session (known as the REPL), press the ] key to enter package mode, then type the following command:

pkg> add FromFile


Objects in other files may be imported in the following way:

# file1.jl
import FromFile: @from
@from "file2.jl" import foo

bar() = foo()

# file2.jl
foo() = println("hi")

File systems may be navigated: @from "../folder/file.jl" import foo

The usual import syntax is supported; the only difference is that the objects are looked up in the file requested: @from "file.jl" using MyModule; @from "file.jl" import MyModule: foo; @from "file.jl" import foo, bar.

Using @from to access a file multiple times (for example calling @from "file.jl" import foo in multiple files) will access the same objects each time; i.e. without the duplication issues that include("file.jl") would introduce.


FromFile.jl is a draft implementation of this specification, for improving import systems as discussed in Issue 4600.


FromFile will (besides its programmatic benefits like the removal of spooky action at a distance) help you keep track of what objects are defined in what file. When converting over to FromFile that may mean untangling your existing project, and figuring out exactly what you defined where. To print out which file defines which object, you can execute the following bash snippet at the root of your project:

for f in $(find . -name '*.jl'); do echo $f && cat $f | vim - -nes -c '%s/#.*//ge' -c '%s/"""\_.\{-}"""//ge' -c '%v/^\S\+/d_' -c '%g/^\(end\|@from\|using\|export\|import\|include\|begin\|let\)\>/d_' -c '%g/.*/exe "norm >>"' -c ':%p' -c ':q!' | tail -n +2; done | less

This should (hopefully) print out each file, and the objects defined in that file. It's not perfect, but it'll help you get 90% of the way there.

Download Details:

Author: Roger-luo
Source Code: https://github.com/Roger-luo/FromFile.jl 
License: MIT license 

Monty Boehm


A Sketch Module for Creating a Complex UI with a Webview


A Sketch module for creating a complex UI with a webview. The API mimics the BrowserWindow API of Electron.


To use this module in your Sketch plugin you need a bundler utility like skpm and add it as a dependency:

npm install -S sketch-module-web-view

You can also use the with-webview skpm template to have a solid base to start your project with a webview:

skpm create my-plugin-name --template=skpm/with-webview

Version 2.x is only compatible with Sketch >= 51. If you need compatibility with previous versions of Sketch, use version 1.x.


import BrowserWindow from 'sketch-module-web-view'

export default function () {
  const options = {
    identifier: 'unique.id',
  }

  const browserWindow = new BrowserWindow(options)
}



API References

Download Details:

Author: skpm
Source Code: https://github.com/skpm/sketch-module-web-view 
License: MIT license


GLPK.jl: GLPK Wrapper Module for Julia


GLPK.jl is a wrapper for the GNU Linear Programming Kit library.

The wrapper has two components:

  • a thin wrapper around the complete C API
  • an interface to MathOptInterface

The C API can be accessed via GLPK.glp_XXX functions, where the names and arguments are identical to the C API. See the /tests folder for inspiration.


Install GLPK using Pkg.add:

import Pkg; Pkg.add("GLPK")

In addition to installing the GLPK.jl package, this will also download and install the GLPK binaries. (You do not need to install GLPK separately.)

To use a custom binary, read the Custom solver binaries section of the JuMP documentation.

Use with JuMP

To use GLPK with JuMP, use GLPK.Optimizer:

using JuMP, GLPK
model = Model(GLPK.Optimizer)
set_optimizer_attribute(model, "tm_lim", 60 * 1_000)
set_optimizer_attribute(model, "msg_lev", GLPK.GLP_MSG_OFF)

If the model is primal or dual infeasible, GLPK will attempt to find a certificate of infeasibility. This can be expensive, particularly if you do not intend to use the certificate. If this is the case, use:

model = Model() do
    return GLPK.Optimizer(want_infeasibility_certificates = false)
end


Here is an example using GLPK's solver-specific callbacks.

using JuMP, GLPK, Test

model = Model(GLPK.Optimizer)
@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)
reasons = UInt8[]
function my_callback_function(cb_data)
    reason = GLPK.glp_ios_reason(cb_data.tree)
    push!(reasons, reason)
    if reason != GLPK.GLP_IROWGEN
        return
    end
    x_val = callback_value(cb_data, x)
    y_val = callback_value(cb_data, y)
    if y_val - x_val > 1 + 1e-6
        con = @build_constraint(y - x <= 1)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    elseif y_val + x_val > 3 + 1e-6
        con = @build_constraint(y + x <= 3)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    end
end
MOI.set(model, GLPK.CallbackFunction(), my_callback_function)
optimize!(model)
@test termination_status(model) == MOI.OPTIMAL
@test primal_status(model) == MOI.FEASIBLE_POINT
@test value(x) == 1
@test value(y) == 2
@show reasons

Download Details:

Author: jump-dev
Source Code: https://github.com/jump-dev/GLPK.jl 
License: Unknown, GPL-3.0 licenses found


KDSP.jl: DSP Module for Julia

DSP.jl provides a number of common DSP routines in Julia. So far, the following functions are implemented:

digital filtering:

  • filt

correlation and convolution:

  • conv
  • conv2
  • deconv
  • xcorr

FFTs provided by FFTW interface:

  • bfft
  • bfftn
  • brfft
  • brfftn
  • fft
  • fft2
  • fft3
  • fftn
  • ifft
  • ifft2
  • ifft3
  • ifftn
  • irfft
  • irfftn
  • rfft
  • rfftn

FFT utilities:

  • fftshift
  • ifftshift

periodogram estimation:

  • periodogram
  • welch_pgram
  • bartlett_pgram

window functions:

  • rect
  • hanning
  • hamming
  • tukey
  • cosine
  • lanczos
  • triang
  • bartlett
  • gaussian
  • bartlett_hann
  • blackman
  • kaiser

common DSP mathematics:

  • sinc

auxiliary functions:

  • arraysplit

Download Details:

Author: Kofron
Source Code: https://github.com/kofron/KDSP.jl 
License: View license


Laravel-modules: Module Management in Laravel


nwidart/laravel-modules is a Laravel package created to manage your large Laravel app using modules. A module is like a Laravel package; it can have views, controllers, or models. This package is supported and tested in Laravel 9.

This package is a re-published, re-organised and maintained version of pingpong/modules, which isn't maintained anymore. This package is used in AsgardCMS.

With one big added bonus that the original package didn't have: tests.

Find out why you should use this package in the article: Writing modular applications with laravel-modules.


To install through Composer, run the following command:

composer require nwidart/laravel-modules

The package will automatically register a service provider and alias.

Optionally, publish the package's configuration file by running:

php artisan vendor:publish --provider="Nwidart\Modules\LaravelModulesServiceProvider"


By default, the module classes are not loaded automatically. You can autoload your modules using psr-4. For example:

{
  "autoload": {
    "psr-4": {
      "App\\": "app/",
      "Modules\\": "Modules/",
      "Database\\Factories\\": "database/factories/",
      "Database\\Seeders\\": "database/seeders/"
    }
  }
}


Tip: don't forget to run composer dump-autoload afterwards.


You'll find installation instructions and full documentation on https://docs.laravelmodules.com/.


About Nicolas Widart

Nicolas Widart is a freelance web developer specialising in the Laravel framework. View all his packages on his website.


Download Details:

Author: nWidart
Source Code: https://github.com/nWidart/laravel-modules 
License: MIT license


Goproxy: A Global Proxy for Go Modules


A global proxy for go modules. see: https://goproxy.io


It invokes the local go command to answer requests.
The default cacheDir is GOPATH, you can set it up by yourself according to the situation.


git clone https://github.com/goproxyio/goproxy.git
cd goproxy


Proxy mode

./bin/goproxy -listen= -cacheDir=/tmp/test

If you run go get -v pkg on the proxy machine, you should set a new GOPATH which is different from the old GOPATH, or it may deadlock. See the file test/get_test.sh.

Router mode

./bin/goproxy -listen= -proxy https://goproxy.io

Use the -proxy flag to switch to "Router mode", which implements a route filter for routing private or public modules.

                      +----------------------------------> private repo
                  +---+---+           +----------+
go get  +-------> |goproxy| +-------> |goproxy.io| +---> golang.org/x/net
                  +-------+           +----------+
                 router mode           proxy mode

In Router mode, use the -exclude flag to set a pattern; requests matching the pattern go directly to the repo that matches the module path. Patterns are matched against the full path specified, not only the host component.

./bin/goproxy -listen= -cacheDir=/tmp/test -proxy https://goproxy.io -exclude "*.corp.example.com,rsc.io/private"

Use docker image

docker run -d -p80:8081 goproxy/goproxy

Use the -v flag to persist the proxy module data (change cacheDir to your own dir):

docker run -d -p80:8081 -v cacheDir:/go goproxy/goproxy

Docker Compose

docker-compose up


  1. set export GOPROXY=http://localhost to enable your goproxy.
  2. set export GOPROXY=direct to disable it.

Download Details:

Author: Goproxyio
Source Code: https://github.com/goproxyio/goproxy 
License: MIT license


Autoupgrade: Upgrade Module for PrestaShop

1-Click Upgrade


Upgrade to the latest version of PrestaShop in a few clicks, thanks to this automated method. This module is compatible with all PrestaShop 1.6 and 1.7 versions.


  • PrestaShop 1.6 or 1.7
  • PHP 5.6+

For older PHP versions, see previous releases of the module (e.g. v1.6.8). Note that they are unsupported and we strongly recommend upgrading your PHP version.


All versions can be found in the releases list.

Create a module from source code

  • Clone (git clone https://github.com/PrestaShop/autoupgrade.git) or download the source code. You can also download a release's Source code (e.g. v4.4.1). If you download a source code archive, you need to extract it and rename the extracted folder to autoupgrade.
  • Enter the autoupgrade folder and run the command composer install (requires Composer).
  • Create a new zip file of the autoupgrade folder.
  • You can now upload it on your modules page.

Running an upgrade on PrestaShop

Upgrading a shop can be done via:

  • the configuration page of the module (access from your BO module page)
  • in command line by calling the file cli-upgrade.php

Command line parameters

Upgrade can be automated by calling cli-upgrade.php. The following parameters are available:

  • --dir: Tells where the admin directory is.
  • --channel: Selects what upgrade to run (minor, major etc.)
  • --action: Advanced users only. Sets the step you want to start from (Default: UpgradeNow, other values available).
$ php cli-upgrade.php --dir=admin-dev --channel=major

Rollback a shop

If an error occurs during the upgrade process, a rollback will be suggested. If you have lost the page in your back office, note that it can also be triggered via the CLI.

Command line parameters

Rollback can be automated by calling cli-rollback.php. The following parameters are mandatory:

  • --dir: Tells where the admin directory is.
  • --backup: Selects the backup to restore (it can be found in the folder <admin>/autoupgrade/backup/)
$ php cli-rollback.php  --dir=admin-dev --backup=V1.7.5.1_20190502-191341-22e883bd


Documentation is hosted on devdocs.prestashop.com.


PrestaShop modules are open source extensions to the PrestaShop e-commerce platform. Everyone is welcome and even encouraged to contribute with their own improvements!

Just make sure to follow our contribution guidelines.

Reporting issues

You can report issues with this module in the main PrestaShop repository. Click here to report an issue.

Download Details:

Author: PrestaShop
Source Code: https://github.com/PrestaShop/autoupgrade 
License: AFL-3.0 license

#php #upgrade #module 

Autoupgrade: Upgrade Module for PrestaShop

Sexpr.jl: Julia <3 Clojure + Macroexpansion

S-Julia - an s-expression to Julia converter.


> Pkg.clone("https://github.com/vshesh/Sexpr.jl.git")

$ julia -e 'import Sexpr; Sexpr.main()' --
usage: Sexpr.jl [-i] [-c] [-l LINES] [-o OUTPUT] [-e EXTENSION] [-h]
A program to port clojure-like s-expression syntax to and from
julia. By default, this program takes clojure syntax and outputs
the julia version. Use -i to flip direction.

positional arguments:
  files                 If given one file and no output directory,
                        will dump to stdout. If given a directory or
                        multiple files, eg "sjulia file1 dir file2",
                        an output directory must be specified with
                        -o/--output where the files will go.
optional arguments:
  -i, --invert          take julia code and print out s-expression
                        code instead
  -c, --cat             cat all the input from STDIN rather than read
                        from file. Ignores all positional args to the
  -l, --lines LINES     how many blank lines should exist between top
                        level forms, default 1 (type: Int64, default: 1)
  -o, --output OUTPUT   where to write out files if there are multiple
                        positional arguments to the file. If this is
                        empty, and there are >1 argument, the program
                        will throw an error.
  -e, --extension EXTENSION
                        add an extension that qualifies as a lisp file
                        (can use multiple times). Defaults: clj, cljs,
                        cl, lisp, wisp, hy.
  -h, --help            show this help message and exit
$ julia -e 'import Sexpr; Sexpr.main()' -- -o test/output/ test/programs/
# will transpile all .clj files in test/programs and dump them into test/output.


This project aims to make s-expression syntax interoperable with julia's own Expr objects.

If you've seen LispSyntax.jl, it's a similar idea, but IMHO this project does a bit more, such as letting you transpile file-to-file rather than just read in a program, and also transpile back, so you can convert your julia files (minus a few special forms that aren't supported yet) into clojure syntax. This makes it possible to go from julia to python (again, not that anyone needed another route, given pycall) via Hylang, or to JS via WispJS. The benefit here is that the awkward macro syntax in both of those languages is avoided (Hy makes you wrap everything in HyModel objects yourself, which is ridiculous, and WispJS's module system is broken because it's JavaScript, so resolving variable names doesn't work properly).

The final goal is to use interoperability to do a macroexpand operation on the input clj syntax. So you would be able to give a folder of clj files, and a temp folder with jl files would be created, then each file would be read in and macroexpanded, converted back to clj syntax, and written out to a third folder. Unfortunately, it's necessary to write the jl files out as an intermediary step, because they need to be able to find each other to resolve imports. Alternatively, you could write the clj files as jl files with the macro @clj_str, but that makes your whole file a string, which breaks most syntax highlighters, which can be annoying.

I know that you're probably thinking "why?" and it was mostly a project for me to learn Julia and muck around with its internals. I learned quite a bit, so mission accomplished! CLJS has self-hosting now, which means they will hopefully have a js-only package soon. However, dealing with the Google Closure compiler and Leiningen's java/jvm dependencies is a larger problem to be solved, and until then I still consider it unwieldy, so there's still some practical use to be had here.

Effectively, this is just the reader portion of implementing a lisp - Julia does everything else using its inbuilt mechanisms.
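As a rough illustration of what "the reader portion" means, here is a minimal s-expression reader in Python (illustrative only; it is not this package's API and skips vectors, maps, quoting, and all the syntax described below):

```python
def tokenize(src):
    # Pad parentheses with spaces so split() isolates them as tokens.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Consume and return one form from the token stream.
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return form
    try:
        return int(tok)  # numeric atom
    except ValueError:
        return tok       # symbol, kept as a plain string

print(parse(tokenize("(+ 1 (* 2 3))")))  # ['+', 1, ['*', 2, 3]]
```

A real reader then hands a tree like this to the host language; in Sexpr.jl's case, the tree is translated into Julia Expr objects and Julia does the rest.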

Syntax Overview


  • nil translates to julia's nothing. They work exactly the same.
  • true -> true and false -> false. No surprises at all there.
  • number constants compile to either Int64 or Float64 types.
    • rational constants also supported, so 3/5 -> 3//5 in Julia.
  • character any atom starting with a \ is a character.
    • \newline, \space, \tab, \formfeed, \backspace, \return for escapes
    • unicode/octal support still needs to be handled.
    • non-strict: giving a longer literal silently yields just the first character right now. This is probably not the best long-term strategy. Eg, \xyz -> \x
  • string is any sequence of characters inside double quotes.
    • multiline strings are allowed, but padding is not subtracted (yet).
  • keyword basically a symbol that starts with a :. In julia, these are confusingly called symbols, and symbols are called variables.
    • keywords cannot have a / or . in them anywhere.
    • in clojure keywords with two colons are resolved in the current namespace, that behavior is not the same here. Everything just compiles to a normal symbol in julia, so no namespacing. There are probably issues here, I just don't know what they are.
  • symbol which is any identifier for a variable.
    • any / or . character is converted to a . in julia. Eg, module/function becomes module.function as an identifier. This should be relatively consistent with clojure semantics.
    • clojure is more lenient about symbol characters than julia. In order to get around this limitation, the default is to output unicode characters where regular ones won't suffice. so *+?!-_':>< are all allowed inside a symbol.
      • TODO make the option to use escaped ascii-only names available. (ugly, but avoids having to use unicode, which is a pain depending on how your unicode extension is defined).
      • :: in a symbol identifier compiles to a type. eg, x::Int compiles to (:: x Int)
      • ::T1::T2 compiles to a union like x::T1::T2 -> (:: x T1 T2)


  • '(a b c) a list - if not quoted, it's evaluated and transpiled.
    • quoted lists evaluate to tuples, as of now.
  • [a b c] a vector - transpiles to a julia array.
  • {a b c d} a map - transpiles to a julia Dict(a => b, c=> d) form.
  • TODO #{a b c} a set, which can map to Set() in julia.

Julia Special Forms

  • Short Circuit
    • and/&& (what you expect this to be) - needs to be a special form because of short circuiting. Julia defines the and and or forms this way on purpose.
    • or/|| (again, what you expect), see above.
  • x[i] family (getting/setting/slicing arrays)
    • (aget x 1) -> Expr(:ref, :x, 1) -> x[1].
    • (aget x 1 2 4 5) -> Expr(:ref, :x, 1, 2, 4, 5) -> x[1, 2, 4, 5]
    • (aget x (: 1 3)) -> x[1:3]
    • (aget x (: 6)) -> x[6:end] (preferred)
    • (aget x (: 6 :end)) -> x[6:end] (not preferred)
  • Typing
    • (:: x Int) -> x::Int The :: form defines types.
      • (:: x Int Int64) -> x::Union{Int, Int64} there's auto-union if many types are defined.
      • only useful for function and type definitions.
    • (curly Array Int64) -> Array{Int64} will allow parameterized types.
    • (.b a x y) -> a.b(x,y) is the dot call form.
    • (. a b c d) -> a.b.c.d is the dot access form.
      • note that ((. a b) x y) is equivalent to (.b a x y).
  • Modules and Import
    • (module M ... end) creates a module. This is visually annoying, since you indent your whole file by two spaces just for this call to module; however, I haven't figured out a better way to do this - the other option is to make #module M a special hash dispatch that wraps the whole file, but... meh, I don't consider this a high enough priority.
    • (import|using X y z a b) contrary to my expectations, this will give you import X.y.a.b. There will be a separate import statement for each function/file you want to use.
      • TODO make this cartesian productable, so (import X [y z a]) will expand to import X.y; import X.z; import X.a instead. This should shorten the writing. Ideally should make this a system macro (in a system.clj file that I define) and call it import* or something.
    • (export a b c) -> export a, b, c. It makes sense from julia's point of view, since modules are flat things, and you only ever have one level of definitions to export.

Special Forms

()/'() or empty list.

  • For now, this compiles to an empty tuple. In some lisps this is equivalent to nil (eg Common Lisp) but in Clojure it's not, so I'm following that convention.

(do exprs...) does each expression and returns the results of the last one.

(if test true-case false-case?) standard if, evaluates form #2 and branches.

(let [var1 value1 var2 value2...] exprs...) binds pairs of variables to their values, then evaluates exprs in an implicit do.

(fn name? [params...] exprs...) defines a function.

  • a function with no name and only one expr in the body will be converted to a -> form. Eg: (fn [x] x) -> (x) -> x.

(defn name docstring? [params...] exprs...) named defined function.

  • docstrings are ignored right now.

(def var expr) defines a variable.

throw is a function already in julia, so there's no special form dedicated to it.

include is a function already in julia, so there's no dedicated special form for it.


Not yet implemented:

  • loop/recur (this doesn't have a julia equivalent),
  • try/catch/finally
  • for vars in expr do... (useful for lazy iterators)
  • destructuring and rest param like (fn [& rest])
  • defmulti and related (does this even mean anything given julia's multiple-dispatch?)
  • deftype -> type in Julia.

Macro Forms

  • (@m x y z) how to call a macro - prepend its name with @. There is unfortunately no way around this, since julia requires this distinction, and for me to resolve which things are macros without it would involve writing an entire compiler. To keep it simple, I'm leaving this requirement in place.
    • TODO Since @x means deref in clojure, I might choose to use a different symbol to denote macrocall in the future, maybe μ or something. Another idea is abusing # dispatch so #macro (html [:div "helloworld"]) calls the next form as a macro rather than a regular function. The hash dispatch one seems worse, though.
  • defmacro defines a macro, as expected.
    • The way that macros work right now is that the macro definition is passed a clojure s-expression to work with. This is not the same as being passed a julia equivalent.
    • the macro output should again be a clojure expression, which has to be translated by the reader into a julia expression. This means that whatever program you write has to include the reader module of this project in order to produce the desired output.
    • Every macro will end in a call to Sexpr.rehydrate() which will translate the expression back to julia's native AST.
  • quote or ' gives a literal list of the following expression.
    • The quote form doesn't properly escape symbols yet. Eg, 'x is equal to :x in Julia, but in order to stop the gensym pass from running you actually have to do esc(:x) to get the equivalent. I'm unclear as of yet how the translation should work to get the desired results, so right now quote and syntax-quote do the same thing, which needs to be changed.
    • you can get around this yourself by putting esc calls in the right places, it will compile down to a function call in the code.
  • syntax-quote or the backtick character. The :() quoting form in julia is actually a syntax quote. It also has an auto-gensym (which can be a pain to get around if you want to return the original name without obfuscation).
  • unquote or ~ is $ in julia inside expressions. It should evaluate the variable that's given to the macro and use the evaluated value.
  • unquote-splice or ~@ unquotes, and also expands the form by one layer into the form that's being returned. Ie, (f ~@x) is the same as :(f($x...)) in julia.

Download Details:

Author: VShesh
Source Code: https://github.com/vshesh/Sexpr.jl 

#julia  #clojure 

Sexpr.jl: Julia <3 Clojure + Macroexpansion

How to Create Custom Testimonial Tabs with Divi

For many businesses, testimonials are one of the key arguments to get new clients. That means paying a bit of extra attention to testimonials on your website will never go to waste. Within Divi, there are many different ways to share testimonials, using the Divi Testimonial Module for instance. But if you’re looking for a more interactive approach, you’re going to love this tutorial. We’re going to show you how to create custom testimonial tabs inside Divi. Once someone hovers the Blurb Module at the left, a corresponding testimonial will appear on the right. The transition effects in this design are seamless too, which helps you give that extra feel of customization to your website. You’ll be able to download the JSON file for free as well!

#divi #module #blurb #json

How to Create Custom Testimonial Tabs with Divi

Use the Testcontainers Webdriver Module together with Selenide

This video explains how to use the Webdriver module from Testcontainers together with Selenide. With this setup, you’ll start the web driver (e.g. chromedriver) inside a Docker container and don’t need any further setup on your machine.

Find more information about Selenide here: https://selenide.org/index.html

The source code for this example is available on GitHub: https://github.com/rieckpil/blog-tutorials/tree/master/write-concise-web-tests-with-selenide

More testing related content is available here: https://rieckpil.de/category/testing/

» Want to become an expert for testing Spring Boot applications? Join the Testing Spring Boot Applications Masterclass: https://rieckpil.de/testing-spring-boot-applications-masterclass/

» Join the 14 Days Free Email Course on Testing Java Applications to Accelerate Your Testing Success: https://rieckpil.de/getting-started-with-testing-java-applications-email-course/

#selenide #webdriver #module

Use the Testcontainers Webdriver Module together with Selenide
Vaughn Sauer


Deep Learning: Understand The Inception Module


Back in 2014, researchers at Google (and other research institutions) published a paper that introduced a novel deep learning convolutional neural network architecture that was, at the time, the largest and most efficient of its kind.

The novel architecture was the Inception network, and a variant of this network, called GoogLeNet, went on to achieve state-of-the-art performance in the image classification task of the ImageNet Large Scale Visual Recognition Challenge 2014 (ILSVRC14).

We’ve come a long way since 2014, and so has the deep learning field. As of 2020, several deep learning architectures achieve and even exceed human-level performance in classification and object detection tasks.

However, the innovations and improvements within current convolutional neural networks have their roots set in their predecessors.

What to expect: This article explores the integral component of the Inception network: the Inception module.

Who is this article for?

Deep learning practitioners of all levels can follow the contents and information presented in this article with relative ease. There are definitions and clear explanations wherever technical terms are introduced.

Happy Reading.

Inception Network

An inception network is a deep neural network with an architectural design that consists of repeating components referred to as Inception modules.

As mentioned earlier, this article focuses on the technical details of the inception module.

Before diving into the technical introduction of the Inception module, here is some more information about what this article entails.

  • The origin of the name given to the Inception module.
  • Background on the main concepts and ideas that inspired the architectural design of the Inception module.
  • Technical details of the individual components of the Inception module.
  • Illustrations that depict the architectural details and internal structure of the Inception module.
  • Calculations to derive the number of multiplication operations within individual Inception module components.
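The kind of multiplication-count calculation listed last can be sketched in Python. This uses the widely cited example of replacing a direct 5x5 convolution on a 28x28x192 input with a 1x1 "bottleneck" reduction; the shapes are illustrative assumptions, not taken from this article:

```python
def conv_multiplies(h, w, c_in, c_out, k):
    # Multiplications for a k x k convolution producing an
    # h x w x c_out output from a c_in-channel input
    # (stride 1, "same" padding assumed).
    return h * w * c_out * (k * k * c_in)

# Direct 5x5 convolution: 28x28x192 input -> 28x28x32 output.
naive = conv_multiplies(28, 28, 192, 32, 5)

# Same output via a 1x1 reduction to 16 channels, then the 5x5.
reduced = (conv_multiplies(28, 28, 192, 16, 1)
           + conv_multiplies(28, 28, 16, 32, 5))

print(naive)    # 120422400 (~120M multiplications)
print(reduced)  # 12443648  (~12.4M, roughly a 10x saving)
```

This roughly tenfold saving is why the Inception module places 1x1 convolutions in front of its larger filters.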

#machine-learning #computer-vision #data-science #deep-learning #module

Deep Learning: Understand The Inception Module