A serverless plugin to manage configurations for a micro-service stack.
outputs
- This downloads this service's outputs to a file in /PROJECT_ROOT/.serverless/stack-outputs.json and updates the config file in S3.
outputs download
- This downloads the existing combined stack config file from S3.
npm install --save serverless-plugin-stack-config
Add the plugin to your serverless.yml like the following:
NOTE: The script and backup properties are optional.
provider:
...
plugins:
- serverless-plugin-stack-config
custom:
stack-config:
script: scripts/transform.js
backup:
s3:
key: config/stack-config.json
bucket: ${self:service}-${opt:env}
shallow: true
functions:
...
resources:
...
You can now supply a script to transform the stack outputs before they are saved to file.
For example, you could rename outputs or create new ones from the values received.
// scripts/transform.js
module.exports = async function transform(serverless, stackOutputs) {
// rename
stackOutputs.TrackingServiceEndpoint = stackOutputs.ServiceEndpoint;
// delete
delete stackOutputs.ServerlessDeploymentBucketName;
delete stackOutputs.ServiceEndpoint;
// return updated stack
return stackOutputs;
}
serverless outputs --stage dev --region eu-west-1
serverless outputs download --stage dev --region eu-west-1
# with save directory location
serverless outputs download --stage dev --region eu-west-1 --path .
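The downloaded stack-outputs.json file is plain JSON keyed by output name. A hypothetical example is shown below; the output names match those used in the transform script above, but the values are illustrative only:

```json
{
  "ServiceEndpoint": "https://abc123.execute-api.eu-west-1.amazonaws.com/dev",
  "ServerlessDeploymentBucketName": "my-service-dev-serverlessdeploymentbucket"
}
```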
If you deploy several applications at the same time, some data loss could occur if multiple stacks update the config file in S3 concurrently.
Author: Rawphp
Source Code: https://github.com/rawphp/serverless-plugin-stack-config
License: MIT license
I consider myself an active StackOverflow user, although my activity varies depending on my daily workload. I enjoy answering questions with the angular tag, and I always try to create a working example to prove the correctness of my answers.
To create an Angular demo I usually use Plunker, StackBlitz or even JSFiddle. I like all of them, but when I run into errors I want a more usable tool to understand what's going on.
Many people who ask questions on StackOverflow don't want to isolate the problem and prepare a minimal reproduction, so they usually post all of their code in the question. They also tend to be imprecise and make a lot of mistakes in template syntax. To avoid wasting time investigating where an error comes from, I tried to create a tool that helps me quickly find what causes the problem.
Angular demo runner
Online angular editor for building demo.
ng-run.com
Let me show what I mean…
There are template parser errors that can easily be caught by StackBlitz.
It gives me some information, but I want the error to be highlighted.
Serverless M (or Serverless Modular) is a plugin for the Serverless Framework. This plugin helps you manage multiple serverless projects with a single serverless.yml file. It gives you supercharged CLI options that you can use to create new features, build them in a single file and deploy them all in parallel.
Currently this plugin is tested for the below stack only
Make sure you have the serverless CLI installed
# Install serverless globally
$ npm install serverless -g
To start the serverless modular project locally, you can either start with the ES5 or ES6 template or add it as a plugin.
# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es6 --path myModularService
# Step 2. Change directory
$ cd myModularService
# Step 3. Create a package.json file
$ npm init
# Step 4. Install dependencies
$ npm i serverless-modular serverless-webpack webpack --save-dev
# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es5 --path myModularService
# Step 2. Change directory
$ cd myModularService
# Step 3. Create a package.json file
$ npm init
# Step 4. Install dependencies
$ npm i serverless-modular --save-dev
If you don't want to use the templates above, you can just add the plugin to your existing project:
plugins:
- serverless-modular
Now you are all set to start building your serverless modular functions.
The serverless CLI can be accessed by
# Serverless Modular CLI
$ serverless modular
# shorthand
$ sls m
Serverless Modular CLI is based on 5 main commands:
sls m init
sls m feature
sls m function
sls m build
sls m deploy
sls m init
The serverless init command helps in creating a basic .gitignore that is useful for serverless modular.
The basic .gitignore for serverless modular looks like this:
#node_modules
node_modules
#sm main functions
sm.functions.yml
#serverless file generated by build
src/**/serverless.yml
#main serverless directories generated for sls deploy
.serverless
#feature serverless directories generated sls deploy
src/**/.serverless
#serverless logs file generated for main sls deploy
.sm.log
#serverless logs file generated for feature sls deploy
src/**/.sm.log
#Webpack config copied in each feature
src/**/webpack.config.js
The feature command helps in building new features for your project
This command comes with three options
--name: Specify the name you want for your feature
--remove: set value to true if you want to remove the feature
--basePath: Specify the base path you want for your feature; this base path should be unique across all features. It helps when running offline with the offline plugin and for API Gateway
options | shortcut | required | values | default value |
---|---|---|---|---|
--name | -n | ✅ | string | N/A |
--remove | -r | ❎ | true, false | false |
--basePath | -p | ❎ | string | same as name |
Creating a basic feature
# Creating a jedi feature
$ sls m feature -n jedi
Creating a feature with different base path
# A feature with different base path
$ sls m feature -n jedi -p tatooine
Deleting a feature
# Anakin is going to delete the jedi feature
$ sls m feature -n jedi -r true
The function command helps in adding a new function to a feature
This command comes with four options
--name: Specify the name you want for your function
--feature: Specify the name of the existing feature
--path: Specify the path for the HTTP endpoint; helps when running offline with the offline plugin and for API Gateway
--method: Specify the HTTP method; helps when running offline with the offline plugin and for API Gateway
options | shortcut | required | values | default value |
---|---|---|---|---|
--name | -n | ✅ | string | N/A |
--feature | -f | ✅ | string | N/A |
--path | -p | ❎ | string | same as name |
--method | -m | ❎ | string | 'GET' |
Creating a basic function
# Creating a cloak function for jedi feature
$ sls m function -n cloak -f jedi
Creating a basic function with different path and method
# Creating a cloak function for jedi feature with custom path and HTTP method
$ sls m function -n cloak -f jedi -p powers -m POST
The build command helps in building the project for local or global scope
This command comes with two options
--scope: Specify the scope of the build; use this with the "--feature" option
--feature: Specify the name of the existing feature you want to build
options | shortcut | required | values | default value |
---|---|---|---|---|
--scope | -s | ❎ | string | local |
--feature | -f | ❎ | string | N/A |
Saving build Config in serverless.yml
You can also save config in serverless.yml file
custom:
smConfig:
build:
scope: local
All features build (local scope)
# Building all local features
$ sls m build
Single feature build (local scope)
# Building a single feature
$ sls m build -f jedi -s local
All features build global scope
# Building all features with global scope
$ sls m build -s global
The deploy command helps in deploying serverless projects to AWS (it uses the sls deploy command).
This command comes with four options
--sm-parallel: Specify whether you want to deploy in parallel (will only run in parallel when doing multiple deployments)
--sm-scope: Specify whether you want to deploy local features or global
--sm-features: Specify the local features you want to deploy (comma separated if multiple)
--sm-ignore-build: Specify whether you want to skip the build step before deploying
options | shortcut | required | values | default value |
---|---|---|---|---|
--sm-parallel | ❎ | ❎ | true, false | true |
--sm-scope | ❎ | ❎ | local, global | local |
--sm-features | ❎ | ❎ | string | N/A |
--sm-ignore-build | ❎ | ❎ | string | false |
Saving deploy Config in serverless.yml
You can also save config in serverless.yml file
custom:
smConfig:
deploy:
scope: local
parallel: true
ignoreBuild: true
Deploy all features locally
# deploy all local features
$ sls m deploy
Deploy all features globally
# deploy all global features
$ sls m deploy --sm-scope global
Deploy single feature
# deploy a single feature
$ sls m deploy --sm-features jedi
Deploy Multiple features
# deploy multiple features
$ sls m deploy --sm-features jedi,sith,dark_side
Deploy Multiple features in sequence
# deploy multiple features in sequence
$ sls m deploy --sm-features jedi,sith,dark_side --sm-parallel false
Author: aa2kb
Source Code: https://github.com/aa2kb/serverless-modular
License: MIT license
Asset Sync
Synchronises Assets between Rails and S3.
Asset Sync is built to run with the new Rails Asset Pipeline feature introduced in Rails 3.1. After you run bundle exec rake assets:precompile your assets will be synchronised to your S3 bucket, optionally deleting unused files and only uploading the files it needs to.
This was initially built and is intended to work on Heroku but can work on any platform.
Upgraded from 1.x? Read UPGRADING.md
Since 2.x, Asset Sync depends on the gem fog-core instead of fog. This is because fog includes many unused storage provider gems as its dependencies. Asset Sync has no idea which provider will be used, so you are responsible for bundling the right gem for the provider.
In your Gemfile:
gem "asset_sync"
gem "fog-aws"
Or, to use Azure Blob storage, configure as follows.
gem "asset_sync"
gem "gitlab-fog-azure-rm"
# This gem seems unmaintained
# gem "fog-azure-rm"
To use Backblaze B2, insert these.
gem "asset_sync"
gem "fog-backblaze"
It's possible to improve assets:precompile time if you are using Rails 3.2.x; the main cost is compilation of non-digest assets.
turbo-sprockets-rails3 solves this by only compiling digest assets, cutting compile time in half.
NOTE: It will be deprecated in Rails 4, as sprockets-rails has been extracted out of Rails and will only compile digest assets by default.
Configure config/environments/production.rb to use Amazon S3 as the asset host and ensure precompiling is enabled.
#config/environments/production.rb
config.action_controller.asset_host = "//#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com"
Or, to use Google Cloud Storage, configure as follows.
#config/environments/production.rb
config.action_controller.asset_host = "//#{ENV['FOG_DIRECTORY']}.storage.googleapis.com"
Or, to use Azure Blob storage, configure as follows.
#config/environments/production.rb
config.action_controller.asset_host = "//#{ENV['AZURE_STORAGE_ACCOUNT_NAME']}.blob.core.windows.net/#{ENV['FOG_DIRECTORY']}"
Or, to use Backblaze B2, configure as follows.
#config/environments/production.rb
config.action_controller.asset_host = "//f000.backblazeb2.com/file/#{ENV['FOG_DIRECTORY']}"
On HTTPS: the exclusion of any protocol in the asset host declaration above will allow browsers to choose the transport mechanism on the fly. So if your application is available under both HTTP and HTTPS the assets will be served to match.
The only caveat with this is that your S3 bucket name must not contain any periods: mydomain.com.s3.amazonaws.com, for example, would not work under HTTPS, because Amazon's SSL certificate would interpret the bucket name not as a subdomain of s3.amazonaws.com but as a multi-level subdomain. To avoid this, don't use a period in your bucket name, or switch to the other style of S3 URL:
config.action_controller.asset_host = "//s3.amazonaws.com/#{ENV['FOG_DIRECTORY']}"
Or, to use Google Cloud Storage, configure as follows.
config.action_controller.asset_host = "//storage.googleapis.com/#{ENV['FOG_DIRECTORY']}"
Or, to use Azure Blob storage, configure as follows.
#config/environments/production.rb
config.action_controller.asset_host = "//#{ENV['AZURE_STORAGE_ACCOUNT_NAME']}.blob.core.windows.net/#{ENV['FOG_DIRECTORY']}"
On a non-default S3 bucket region: if your bucket is in a region other than the default US Standard (us-east-1), you must use the first style of URL (//#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com), or Amazon will return a 301 Moved Permanently when assets are requested. Note the caveat above about bucket names and periods.
If you wish to have your assets sync to a sub-folder of your bucket instead of the root, add the following to your production.rb file:
# store assets in a 'folder' instead of bucket root
config.assets.prefix = "/production/assets"
Also, ensure the following are defined (in production.rb or application.rb)
Additionally, if you depend on any configuration that is setup in your initializers
you will need to ensure that
AssetSync supports the following methods of configuration.
Using the Built-in Initializer is the default method and is supposed to be used with environment variables. It's the recommended approach for deployments on Heroku.
If you need more control over configuration you will want to use a custom rails initializer.
Configuration using a YAML file (a common strategy for Capistrano deployments) is also supported.
The recommended way to configure asset_sync is via environment variables; however, it's up to you, and it will work fine if you hard-code them too. The main reason environment variables are recommended is so your access keys are not checked into version control.
The Built-in Initializer will configure AssetSync based on the contents of your environment variables.
Add your configuration details to Heroku:
heroku config:add AWS_ACCESS_KEY_ID=xxxx
heroku config:add AWS_SECRET_ACCESS_KEY=xxxx
heroku config:add FOG_DIRECTORY=xxxx
heroku config:add FOG_PROVIDER=AWS
# and optionally:
heroku config:add FOG_REGION=eu-west-1
heroku config:add ASSET_SYNC_GZIP_COMPRESSION=true
heroku config:add ASSET_SYNC_MANIFEST=true
heroku config:add ASSET_SYNC_EXISTING_REMOTE_FILES=keep
Or add them on a traditional Unix system:
export AWS_ACCESS_KEY_ID=xxxx
export AWS_SECRET_ACCESS_KEY=xxxx
export FOG_DIRECTORY=xxxx
Rackspace configuration is also supported
heroku config:add RACKSPACE_USERNAME=xxxx
heroku config:add RACKSPACE_API_KEY=xxxx
heroku config:add FOG_DIRECTORY=xxxx
heroku config:add FOG_PROVIDER=Rackspace
Google Storage Cloud configuration is supported as well. The preferred option is using the GCS JSON API which requires that you create an appropriate service account, generate the signatures and make them accessible to asset sync at the prescribed location
heroku config:add FOG_PROVIDER=Google
heroku config:add GOOGLE_PROJECT=xxxx
heroku config:add GOOGLE_JSON_KEY_LOCATION=xxxx
heroku config:add FOG_DIRECTORY=xxxx
If using the S3 API the following config is required
heroku config:add FOG_PROVIDER=Google
heroku config:add GOOGLE_STORAGE_ACCESS_KEY_ID=xxxx
heroku config:add GOOGLE_STORAGE_SECRET_ACCESS_KEY=xxxx
heroku config:add FOG_DIRECTORY=xxxx
The Built-in Initializer also sets the AssetSync default for existing_remote_files to keep.
If you want to enable some of the advanced configuration options you will want to create your own initializer.
Run the included generator to create a starting point.
rails g asset_sync:install --provider=Rackspace
rails g asset_sync:install --provider=AWS
rails g asset_sync:install --provider=AzureRM
rails g asset_sync:install --provider=Backblaze
The generator will create a Rails initializer at config/initializers/asset_sync.rb.
AssetSync.configure do |config|
config.fog_provider = 'AWS'
config.fog_directory = ENV['FOG_DIRECTORY']
config.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
config.aws_session_token = ENV['AWS_SESSION_TOKEN'] if ENV.key?('AWS_SESSION_TOKEN')
# Don't delete files from the store
# config.existing_remote_files = 'keep'
#
# Increase upload performance by configuring your region
# config.fog_region = 'eu-west-1'
#
# Set `public` option when uploading file depending on value,
# Setting to "default" makes asset sync skip setting the option
# Possible values: true, false, "default" (default: true)
# config.fog_public = true
#
# Change AWS signature version. Default is 4
# config.aws_signature_version = 4
#
# Change canned ACL of uploaded object. Default is unset. Will override fog_public if set.
# Choose from: private | public-read | public-read-write | aws-exec-read |
# authenticated-read | bucket-owner-read | bucket-owner-full-control
# config.aws_acl = nil
#
# Change host option in fog (only if you need to)
# config.fog_host = 's3.amazonaws.com'
#
# Change port option in fog (only if you need to)
# config.fog_port = "9000"
#
# Use http instead of https.
# config.fog_scheme = 'http'
#
# Automatically replace files with their equivalent gzip compressed version
# config.gzip_compression = true
#
# Use the Rails generated 'manifest.yml' file to produce the list of files to
# upload instead of searching the assets directory.
# config.manifest = true
#
# Upload the manifest file also.
# config.include_manifest = false
#
# Upload files concurrently
# config.concurrent_uploads = false
#
# Number of threads when concurrent_uploads is enabled
# config.concurrent_uploads_max_threads = 10
#
# Path to cache file to skip scanning remote
# config.remote_file_list_cache_file_path = './.asset_sync_remote_file_list_cache.json'
#
# Fail silently. Useful for environments such as Heroku
# config.fail_silently = true
#
# Log silently. Default is `true`. But you can set it to false if more logging messages are preferred.
# Logging messages are sent to `STDOUT` when `log_silently` is falsy
# config.log_silently = true
#
# Allow custom assets to be cacheable. Note: The base filename will be matched
# If you have an asset with name `app.0b1a4cd3.js`, only `app.0b1a4cd3` will need to be matched
# only one of `cache_asset_regexp` or `cache_asset_regexps` is allowed.
# config.cache_asset_regexp = /\.[a-f0-9]{8}$/i
# config.cache_asset_regexps = [ /\.[a-f0-9]{8}$/i, /\.[a-f0-9]{20}$/i ]
end
Run the included generator to create a starting point.
rails g asset_sync:install --use-yml --provider=Rackspace
rails g asset_sync:install --use-yml --provider=AWS
rails g asset_sync:install --use-yml --provider=AzureRM
rails g asset_sync:install --use-yml --provider=Backblaze
The generator will create a YAML file at config/asset_sync.yml.
defaults: &defaults
fog_provider: "AWS"
fog_directory: "rails-app-assets"
aws_access_key_id: "<%= ENV['AWS_ACCESS_KEY_ID'] %>"
aws_secret_access_key: "<%= ENV['AWS_SECRET_ACCESS_KEY'] %>"
# To use AWS reduced redundancy storage.
# aws_reduced_redundancy: true
#
# You may need to specify what region your storage bucket is in
# fog_region: "eu-west-1"
#
# Change AWS signature version. Default is 4
# aws_signature_version: 4
#
# Change canned ACL of uploaded object. Default is unset. Will override fog_public if set.
# Choose from: private | public-read | public-read-write | aws-exec-read |
# authenticated-read | bucket-owner-read | bucket-owner-full-control
# aws_acl: null
#
# Change host option in fog (only if you need to)
# fog_host: "s3.amazonaws.com"
#
# Use http instead of https. Default should be "https" (at least for fog-aws)
# fog_scheme: "http"
existing_remote_files: keep # Existing pre-compiled assets on S3 will be kept
# To delete existing remote files.
# existing_remote_files: delete
# To ignore existing remote files and overwrite.
# existing_remote_files: ignore
# Automatically replace files with their equivalent gzip compressed version
# gzip_compression: true
# Fail silently. Useful for environments such as Heroku
# fail_silently: true
# Always upload. Useful if you want to overwrite specific remote assets regardless of their existence
# eg: Static files in public often reference non-fingerprinted application.css
# note: You will still need to expire them from the CDN's edge cache locations
# always_upload: ['application.js', 'application.css', !ruby/regexp '/application-/\d{32}\.css/']
# Ignored files. Useful if there are some files that are created dynamically on the server and you don't want to upload on deploy.
# ignored_files: ['ignore_me.js', !ruby/regexp '/ignore_some/\d{32}\.css/']
# Allow custom assets to be cacheable. Note: The base filename will be matched
# If you have an asset with name "app.0b1a4cd3.js", only "app.0b1a4cd3" will need to be matched
# cache_asset_regexps: ['cache_me.js', !ruby/regexp '/cache_some\.\d{8}\.css/']
development:
<<: *defaults
test:
<<: *defaults
production:
<<: *defaults
Most AssetSync configuration can be modified directly using environment variables with the Built-in initializer. e.g.
AssetSync.config.fog_provider == ENV['FOG_PROVIDER']
Simply upcase the Ruby attribute names to get the equivalent environment variable to set. The only exception to that rule is the internal AssetSync config variables, which must be prepended with ASSET_SYNC_*
e.g.
AssetSync.config.gzip_compression == ENV['ASSET_SYNC_GZIP_COMPRESSION']
existing_remote_files: ('keep', 'delete', 'ignore') what to do with previously precompiled files. default: 'keep'
gzip_compression: (true, false) when enabled, will automatically replace files that have a gzip compressed equivalent with the compressed version. default: 'false'
manifest: (true, false) when enabled, will use the manifest.yml generated by Rails to get the list of local files to upload instead of searching the assets directory. experimental. default: 'false'
include_manifest: (true, false) when enabled, will upload the manifest.yml generated by Rails as well. default: 'false'
concurrent_uploads: (true, false) when enabled, will upload the files in different threads, which greatly improves the upload speed. default: 'false'
concurrent_uploads_max_threads: number of threads used when concurrent_uploads is enabled. default: 10
remote_file_list_cache_file_path: path to a cache file used to skip scanning the remote bucket. default: nil
enabled: (true, false) when false, will disable asset sync. default: 'true' (enabled)
ignored_files: (e.g. ['ignore_me.js', %r(ignore_some/\d{32}\.css)]) useful if some files are created dynamically on the server and you don't want to upload them on deploy. default: []
cache_asset_regexps: (e.g. ['cache_me.js', %r(cache_some\.\d{8}\.css)]) useful if some files are added to the sprockets assets list and need to be set as 'Cacheable' on the upload server. Only Rails-compiled regexps are matched internally. default: []
Config Method add_local_file_paths
Adding local files by providing a block:
AssetSync.configure do |config|
# The block should return an array of file paths
config.add_local_file_paths do
# Any code that returns paths of local asset files to be uploaded
# Like Webpacker
public_root = Rails.root.join("public")
Dir.chdir(public_root) do
packs_dir = Webpacker.config.public_output_path.relative_path_from(public_root)
Dir[File.join(packs_dir, '/**/**')]
end
end
end
The blocks are run when local files are being scanned and uploaded
Config Method file_ext_to_mime_type_overrides
It's reported that mime-types 3.x returns application/ecmascript instead of application/javascript.
Such a change of MIME type might cause some CDNs to disable asset compression.
So this gem defines a default override that maps the file extension js to application/javascript.
To customize the overrides:
AssetSync.configure do |config|
# Clear the default overrides
config.file_ext_to_mime_type_overrides.clear
# Add/Edit overrides
# Will call `#to_s` for inputs
config.file_ext_to_mime_type_overrides.add(:js, :"application/x-javascript")
end
When using the JSON API, supply GOOGLE_PROJECT and GOOGLE_JSON_KEY_LOCATION (see the Heroku examples above).
When using the S3 API, supply GOOGLE_STORAGE_ACCESS_KEY_ID and GOOGLE_STORAGE_SECRET_ACCESS_KEY instead.
For Rackspace London, the auth URL is https://lon.identity.api.rackspacecloud.com/v2.0
If you are using anything other than the US buckets with S3 then you'll want to set the region. For example with an EU bucket you could set the following environment variable.
heroku config:add FOG_REGION=eu-west-1
Or via a custom initializer
AssetSync.configure do |config|
# ...
config.fog_region = 'eu-west-1'
end
Or via YAML
production:
  # ...
  fog_region: 'eu-west-1'
Amazon has switched to the more secure IAM User security policy model. When generating a user and policy for asset_sync you must ensure the policy has the following permissions, or you'll see the error:
Expected(200) <=> Actual(403 Forbidden)
IAM User Policy example with the minimum required permissions (replace bucket_name with your bucket):
{
"Statement": [
{
"Action": "s3:ListBucket",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucket_name"
},
{
"Action": "s3:PutObject*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucket_name/*"
}
]
}
If you want to use IAM roles, you must set config.aws_iam_roles = true in your initializer.
AssetSync.configure do |config|
# ...
config.aws_iam_roles = true
end
With the gzip_compression option enabled, when uploading your assets, if a file has a gzip compressed equivalent we will replace that asset with the compressed version and set the correct headers for S3 to serve it. For example, if you have a file master.css and it was compressed to master.css.gz, we will upload the .gz file to S3 in place of the uncompressed file.
If the compressed file is actually larger than the uncompressed file we will ignore this rule and upload the standard uncompressed version.
With the fail_silently option enabled, when running rake assets:precompile, AssetSync will never throw an error due to missing configuration variables.
With the new user_env_compile feature of Heroku (see above), this is no longer required or recommended. Yet it was added for the following reasons:
With Rails 3.1 on the Heroku cedar stack, the deployment process automatically runs rake assets:precompile. If you are using ENV-variable-style configuration, then due to the way Heroku compiles slugs, an error will be raised by asset_sync because the environment is not available. This causes Heroku to install the rails31_enable_runtime_asset_compilation plugin, which is not necessary when using asset_sync and also massively slows down the first incoming requests to your app.
To prevent this part of the deploy from failing (asset_sync raising a config error) but carry on as normal, set fail_silently to true in your configuration and ensure to run heroku run rake assets:precompile after deploy.
A rake task is included within the asset_sync gem to perform the sync:
namespace :assets do
desc "Synchronize assets to S3"
task :sync => :environment do
AssetSync.sync
end
end
If AssetSync.config.run_on_precompile is true (the default), then assets will be uploaded to S3 automatically after the assets:precompile rake task is invoked:
if Rake::Task.task_defined?("assets:precompile:nondigest")
Rake::Task["assets:precompile:nondigest"].enhance do
Rake::Task["assets:sync"].invoke if defined?(AssetSync) && AssetSync.config.run_on_precompile
end
else
Rake::Task["assets:precompile"].enhance do
Rake::Task["assets:sync"].invoke if defined?(AssetSync) && AssetSync.config.run_on_precompile
end
end
You can disable this behavior by setting AssetSync.config.run_on_precompile = false.
You can use the gem with any Rack application, but you must specify two additional options: prefix and public_path.
AssetSync.configure do |config|
config.fog_provider = 'AWS'
config.fog_directory = ENV['FOG_DIRECTORY']
config.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
config.prefix = 'assets'
# Can be a `Pathname` or `String`
# Will be converted into a `Pathname`
# If relative, will be converted into an absolute path
# via `::Rails.root` or `::Dir.pwd`
config.public_path = Pathname('./public')
end
Then manually call AssetSync.sync at the end of your asset precompilation task.
namespace :assets do
desc 'Precompile assets'
task :precompile do
target = Pathname('./public/assets')
manifest = Sprockets::Manifest.new(sprockets, './public/assets/manifest.json')
sprockets.each_logical_path do |logical_path|
if (!File.extname(logical_path).in?(['.js', '.css']) || logical_path =~ /application\.(css|js)$/) && asset = sprockets.find_asset(logical_path)
filename = target.join(logical_path)
FileUtils.mkpath(filename.dirname)
puts "Write asset: #{logical_path}"
asset.write_to(filename)
manifest.compile(logical_path)
end
end
AssetSync.sync
end
end
If you use Webpacker, disable run_on_precompile so asset_sync can be attached to the Webpacker rake task instead:
AssetSync.configure do |config|
# Disable automatic run on precompile in order to attach to webpacker rake task
config.run_on_precompile = false
# The block should return an array of file paths
config.add_local_file_paths do
# Support webpacker assets
public_root = Rails.root.join("public")
Dir.chdir(public_root) do
packs_dir = Webpacker.config.public_output_path.relative_path_from(public_root)
Dir[File.join(packs_dir, '/**/**')]
end
end
end
Then add an asset_sync.rake file in your lib/tasks directory that enhances the correct task; otherwise asset_sync runs before webpacker:compile does:
if defined?(AssetSync)
Rake::Task['webpacker:compile'].enhance do
Rake::Task["assets:sync"].invoke
end
end
When adding local files from outside the normal Rails assets directory, the uploading part works; however, checking whether an asset was previously uploaded does not, because asset_sync only fetches the files under the assets directory in the remote bucket. This means additional time is spent uploading the same assets again on every precompilation.
Make sure you have a .env file with these details:
# for AWS provider
AWS_ACCESS_KEY_ID=<yourkeyid>
AWS_SECRET_ACCESS_KEY=<yoursecretkey>
FOG_DIRECTORY=<yourbucket>
FOG_REGION=<youbucketregion>
# for AzureRM provider
AZURE_STORAGE_ACCOUNT_NAME=<youraccountname>
AZURE_STORAGE_ACCESS_KEY=<youraccesskey>
FOG_DIRECTORY=<yourcontainer>
FOG_REGION=<yourcontainerregion>
Make sure the bucket has read/write permissions. Then, to run the tests:
foreman run rake
Inspired by:
MIT License. Copyright 2011-2013 Rumble Labs Ltd. rumblelabs.com
Author: AssetSync
Source code: https://github.com/AssetSync/asset_sync
License: MIT license
Serverless Import Config Plugin
Split your serverless.yaml config file into smaller modules and import them.
By using this plugin you can build your serverless config from smaller parts separated by functionalities. Imported config is merged, so all keys are supported and lists are concatenated (without duplicates).
It works by importing YAML files by path or from a node module, which is especially useful in multi-package repositories.
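The merge semantics described above (recursive key merge, list concatenation without duplicates) can be sketched as follows. This is an illustrative sketch, not the plugin's actual implementation, and the config objects are hypothetical:

```javascript
// Sketch of the merge semantics: objects merge recursively, lists are
// concatenated without duplicates, and on a scalar conflict the main
// serverless.yml value takes precedence over the imported one.
function mergeConfig(main, imported) {
  if (Array.isArray(main) && Array.isArray(imported)) {
    // concatenate lists, dropping duplicate entries
    return [...new Set([...imported, ...main])];
  }
  if (main && imported && typeof main === 'object' && typeof imported === 'object') {
    const result = { ...imported };
    for (const key of Object.keys(main)) {
      result[key] = key in imported ? mergeConfig(main[key], imported[key]) : main[key];
    }
    return result;
  }
  return main; // main config wins on conflicting scalars
}

// Hypothetical example: one imported module config plus the main serverless.yml
const imported = { plugins: ['serverless-offline'], provider: { runtime: 'nodejs14.x' } };
const main = { plugins: ['serverless-offline', 'serverless-webpack'], provider: { region: 'eu-west-1' } };
console.log(JSON.stringify(mergeConfig(main, imported)));
```

Note that keys present only in the imported file survive the merge, which is what allows each module to contribute its own functions and resources.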
Install with npm:
npm install --save-dev serverless-import-config-plugin
And then add the plugin to your serverless.yml file:
plugins:
  - serverless-import-config-plugin
Specify config files to import in the custom.import list:
custom:
  import:
    - ./path/to/serverless.yml # path to a YAML file with serverless config
    - ./path/to/dir # directory where a serverless.yml can be found
    - module-name # node module where a serverless.yml can be found
    - '@myproject/users-api' # monorepo package with a serverless.yml config file
    - module-name/custom-serverless.yml # path to a custom config file of a node module
custom.import can also be a string when only one file needs to be imported:
custom:
  import: '@myproject/users-api'
All function handler paths are automatically prefixed by the imported config directory.
functions:
  postOrder:
    handler: functions/postOrder.handler # relative to the imported config
For other fields you need to use the ${dirname} variable manually. ${dirname} points to the directory of the imported config file.
custom:
  webpack:
    webpackConfig: ${dirname}/webpack.config.js
In case you want to customize the imported config in a more dynamic way, provide it as a JavaScript file (serverless.js).
module.exports = ({ name, schema }) => ({
  provider: {
    iamRoleStatements: [
      // ...
    ],
  },
  // ...
})
You can pass arguments to the imported file using the module and inputs fields:
custom:
  import:
    - module: '@myproject/aws-dynamodb' # can also be a path to a js file
      inputs:
        name: custom-table
        schema:
          # ...
Author: KrysKruk
Source Code: https://github.com/KrysKruk/serverless-import-config-plugin
License: MIT license