小泉 亮介
I Didn't Understand What AWS Config Conformance Packs Can Do, So I Drew Pictures to Figure It Out

Good evening, 千葉(幸) here.

How are you doing these days, when everything seems to demand conformance of one kind or another?

I recently had the chance to learn about AWS Config conformance packs. The official blog describes them as follows:

Conformance packs let you create packages of compliance rules. A pack bundles AWS Config rules and remediation actions into a single entity, making large-scale deployment easier. You can deploy packs across your entire organization and share them with customers and other users. Conformance packs also simplify compliance reporting: you can now report at the conformance-pack level and then drill down to individual rules and resources as before. All existing reporting features continue to work as-is, and you can logically group compliance sets managed by different teams, different compliance regimes, or different governance regimes.

I see.

…In other words… what does that even mean?

That was my reaction, so I decided to draw some pictures. When in doubt, you just draw it out, right?

Table of Contents

  • Table of Contents
  • Drawing AWS Config Rules
  • AWS Config Rule Triggers
  • AWS Config Managed Rules
  • AWS Config Custom Rules
  • Drawing AWS Config Remediation Actions
  • Drawing AWS Config Conformance Packs
  • Notifications from AWS Config
  • Stream Notifications via the Delivery Channel
  • Notifications via Events Rules
  • Wrapping Up

Drawing AWS Config Rules

The building blocks of a conformance pack are AWS Config rules and remediation actions, so let's start by getting a handle on those.
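
(Before drawing anything, a concrete supplement: a conformance pack ultimately boils down to a template of Config rules and remediation actions deployed as a single unit. Here is a minimal, hedged sketch of deploying one with the Ruby SDK's aws-sdk-configservice gem; the pack name, bucket, and region are placeholder assumptions, not values from this article.)

require "aws-sdk-configservice"  # gem "aws-sdk-configservice"

# A conformance pack is a CloudFormation-style template of Config rules
# (and, optionally, remediation actions) deployed as one unit.
template = <<~YAML
  Resources:
    S3PublicReadProhibited:
      Type: AWS::Config::ConfigRule
      Properties:
        ConfigRuleName: s3-bucket-public-read-prohibited
        Source:
          Owner: AWS
          SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
YAML

config = Aws::ConfigService::Client.new(region: "ap-northeast-1") # placeholder region
config.put_conformance_pack(
  conformance_pack_name: "sample-pack",         # placeholder name
  template_body: template,
  delivery_s3_bucket: "my-conformance-bucket"   # placeholder; required by earlier API versions
)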

AWS Documentation

The following image represents an AWS Config rule.

#aws #config #conformance packs


Asset Sync: Synchronises Assets Between Rails and S3

Asset Sync

Synchronises Assets between Rails and S3.

Asset Sync is built to run with the new Rails Asset Pipeline feature introduced in Rails 3.1. After you run bundle exec rake assets:precompile, your assets will be synchronised to your S3 bucket, optionally deleting unused files and only uploading the files it needs to.

This was initially built and is intended to work on Heroku but can work on any platform.

Upgrading?

Upgraded from 1.x? Read UPGRADING.md

Installation

Since 2.x, Asset Sync depends on the gem fog-core instead of fog, because fog pulls in many unused storage-provider gems as dependencies.

Asset Sync has no idea which provider will be used, so you are responsible for bundling the right gem for the provider you use.

In your Gemfile:

gem "asset_sync"
gem "fog-aws"

Or, to use Azure Blob storage, configure it like this:

gem "asset_sync"
gem "gitlab-fog-azure-rm"

# This gem seems unmaintained
# gem "fog-azure-rm"

To use Backblaze B2, add these:

gem "asset_sync"
gem "fog-backblaze"

Extended Installation (Faster sync with turbosprockets)

It's possible to improve assets:precompile time if you are using Rails 3.2.x; the main source of slowness is the compilation of non-digest assets.

turbo-sprockets-rails3 solves this by only compiling digest assets, thus cutting compile time in half.

NOTE: This will be deprecated in Rails 4, as sprockets-rails has been extracted out of Rails and will only compile digest assets by default.

Configuration

Rails

Configure config/environments/production.rb to use Amazon S3 as the asset host and ensure precompiling is enabled.

  #config/environments/production.rb
  config.action_controller.asset_host = "//#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com"

Or, to use Google Storage Cloud, configure it like this:

  #config/environments/production.rb
  config.action_controller.asset_host = "//#{ENV['FOG_DIRECTORY']}.storage.googleapis.com"

Or, to use Azure Blob storage, configure it like this:

  #config/environments/production.rb
  config.action_controller.asset_host = "//#{ENV['AZURE_STORAGE_ACCOUNT_NAME']}.blob.core.windows.net/#{ENV['FOG_DIRECTORY']}"

Or, to use Backblaze B2, configure it like this:

  #config/environments/production.rb
  config.action_controller.asset_host = "//f000.backblazeb2.com/file/#{ENV['FOG_DIRECTORY']}"

On HTTPS: the exclusion of any protocol in the asset host declaration above will allow browsers to choose the transport mechanism on the fly. So if your application is available under both HTTP and HTTPS the assets will be served to match.

The only caveat with this is that your S3 bucket name must not contain any periods: mydomain.com.s3.amazonaws.com, for example, would not work under HTTPS, because Amazon's SSL certificate would treat the bucket name not as a subdomain of s3.amazonaws.com but as a multi-level subdomain. To avoid this, don't use periods in your bucket name, or switch to the other style of S3 URL.

  config.action_controller.asset_host = "//s3.amazonaws.com/#{ENV['FOG_DIRECTORY']}"

Or, to use Google Storage Cloud, configure it like this:

  config.action_controller.asset_host = "//storage.googleapis.com/#{ENV['FOG_DIRECTORY']}"

Or, to use Azure Blob storage, configure it like this:

  #config/environments/production.rb
  config.action_controller.asset_host = "//#{ENV['AZURE_STORAGE_ACCOUNT_NAME']}.blob.core.windows.net/#{ENV['FOG_DIRECTORY']}"

On a non-default S3 bucket region: If your bucket is in a region other than the default US Standard (us-east-1), you must use the first style of URL //#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com, or Amazon will return a 301 Moved Permanently when assets are requested. Note the caveat above about bucket names and periods.

If you wish to have your assets sync to a sub-folder of your bucket instead of the bucket root, add the following to your production.rb file:

  # store assets in a 'folder' instead of bucket root
  config.assets.prefix = "/production/assets"

Also, ensure the following are defined (in production.rb or application.rb):

  • config.assets.digest is set to true.
  • config.assets.enabled is set to true.

Additionally, if you depend on any configuration that is set up in your initializers, you will need to ensure that:

  • config.assets.initialize_on_precompile is set to true
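
Put together, a minimal sketch of these settings (names as documented above; adjust to your app):

  # config/environments/production.rb
  config.assets.enabled = true
  config.assets.digest = true
  # Only needed if asset compilation depends on configuration from your initializers
  config.assets.initialize_on_precompile = true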

AssetSync

AssetSync supports the following methods of configuration.

Using the Built-in Initializer is the default method and is intended to be used with environment variables. It's the recommended approach for deployments on Heroku.

If you need more control over configuration you will want to use a custom rails initializer.

Configuration using a YAML file (a common strategy for Capistrano deployments) is also supported.

The recommended way to configure asset_sync is with environment variables, but it's up to you; it will work fine if you hard-code the values too. The main reason environment variables are recommended is so that your access keys are not checked into version control.

Built-in Initializer (Environment Variables)

The Built-in Initializer will configure AssetSync based on the contents of your environment variables.

Add your configuration details to heroku

heroku config:add AWS_ACCESS_KEY_ID=xxxx
heroku config:add AWS_SECRET_ACCESS_KEY=xxxx
heroku config:add FOG_DIRECTORY=xxxx
heroku config:add FOG_PROVIDER=AWS
# and optionally:
heroku config:add FOG_REGION=eu-west-1
heroku config:add ASSET_SYNC_GZIP_COMPRESSION=true
heroku config:add ASSET_SYNC_MANIFEST=true
heroku config:add ASSET_SYNC_EXISTING_REMOTE_FILES=keep

Or add to a traditional unix system

export AWS_ACCESS_KEY_ID=xxxx
export AWS_SECRET_ACCESS_KEY=xxxx
export FOG_DIRECTORY=xxxx

Rackspace configuration is also supported

heroku config:add RACKSPACE_USERNAME=xxxx
heroku config:add RACKSPACE_API_KEY=xxxx
heroku config:add FOG_DIRECTORY=xxxx
heroku config:add FOG_PROVIDER=Rackspace

Google Storage Cloud configuration is supported as well. The preferred option is using the GCS JSON API, which requires that you create an appropriate service account, generate the signatures, and make them accessible to asset_sync at the prescribed location.

heroku config:add FOG_PROVIDER=Google
heroku config:add GOOGLE_PROJECT=xxxx
heroku config:add GOOGLE_JSON_KEY_LOCATION=xxxx
heroku config:add FOG_DIRECTORY=xxxx

If using the S3 API, the following config is required:

heroku config:add FOG_PROVIDER=Google
heroku config:add GOOGLE_STORAGE_ACCESS_KEY_ID=xxxx
heroku config:add GOOGLE_STORAGE_SECRET_ACCESS_KEY=xxxx
heroku config:add FOG_DIRECTORY=xxxx

The Built-in Initializer also sets the AssetSync default for existing_remote_files to keep.

Custom Rails Initializer (config/initializers/asset_sync.rb)

If you want to enable some of the advanced configuration options you will want to create your own initializer.

Run the included generator to create a starting point.

rails g asset_sync:install --provider=Rackspace
rails g asset_sync:install --provider=AWS
rails g asset_sync:install --provider=AzureRM
rails g asset_sync:install --provider=Backblaze

The generator will create a Rails initializer at config/initializers/asset_sync.rb.

AssetSync.configure do |config|
  config.fog_provider = 'AWS'
  config.fog_directory = ENV['FOG_DIRECTORY']
  config.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
  config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
  config.aws_session_token = ENV['AWS_SESSION_TOKEN'] if ENV.key?('AWS_SESSION_TOKEN')

  # Don't delete files from the store
  # config.existing_remote_files = 'keep'
  #
  # Increase upload performance by configuring your region
  # config.fog_region = 'eu-west-1'
  #
  # Set `public` option when uploading file depending on value,
  # Setting to "default" makes asset sync skip setting the option
  # Possible values: true, false, "default" (default: true)
  # config.fog_public = true
  #
  # Change AWS signature version. Default is 4
  # config.aws_signature_version = 4
  #
  # Change canned ACL of uploaded object. Default is unset. Will override fog_public if set.
  # Choose from: private | public-read | public-read-write | aws-exec-read |
  #              authenticated-read | bucket-owner-read | bucket-owner-full-control 
  # config.aws_acl = nil 
  #
  # Change host option in fog (only if you need to)
  # config.fog_host = 's3.amazonaws.com'
  #
  # Change port option in fog (only if you need to)
  # config.fog_port = "9000"
  #
  # Use http instead of https.
  # config.fog_scheme = 'http'
  #
  # Automatically replace files with their equivalent gzip compressed version
  # config.gzip_compression = true
  #
  # Use the Rails generated 'manifest.yml' file to produce the list of files to
  # upload instead of searching the assets directory.
  # config.manifest = true
  #
  # Upload the manifest file also.
  # config.include_manifest = false
  #
  # Upload files concurrently
  # config.concurrent_uploads = false
  #
  # Number of threads when concurrent_uploads is enabled
  # config.concurrent_uploads_max_threads = 10
  #
  # Path to cache file to skip scanning remote
  # config.remote_file_list_cache_file_path = './.asset_sync_remote_file_list_cache.json'
  #
  # Fail silently.  Useful for environments such as Heroku
  # config.fail_silently = true
  #
  # Log silently. Default is `true`. But you can set it to false if more logging message are preferred.
  # Logging messages are sent to `STDOUT` when `log_silently` is falsy
  # config.log_silently = true
  #
  # Allow custom assets to be cacheable. Note: The base filename will be matched
  # If you have an asset with name `app.0b1a4cd3.js`, only `app.0b1a4cd3` will need to be matched
  # only one of `cache_asset_regexp` or `cache_asset_regexps` is allowed.
  # config.cache_asset_regexp = /\.[a-f0-9]{8}$/i
  # config.cache_asset_regexps = [ /\.[a-f0-9]{8}$/i, /\.[a-f0-9]{20}$/i ]
end

YAML (config/asset_sync.yml)

Run the included generator to create a starting point.

rails g asset_sync:install --use-yml --provider=Rackspace
rails g asset_sync:install --use-yml --provider=AWS
rails g asset_sync:install --use-yml --provider=AzureRM
rails g asset_sync:install --use-yml --provider=Backblaze

The generator will create a YAML file at config/asset_sync.yml.

defaults: &defaults
  fog_provider: "AWS"
  fog_directory: "rails-app-assets"
  aws_access_key_id: "<%= ENV['AWS_ACCESS_KEY_ID'] %>"
  aws_secret_access_key: "<%= ENV['AWS_SECRET_ACCESS_KEY'] %>"

  # To use AWS reduced redundancy storage.
  # aws_reduced_redundancy: true
  #
  # You may need to specify what region your storage bucket is in
  # fog_region: "eu-west-1"
  #
  # Change AWS signature version. Default is 4
  # aws_signature_version: 4
  #
  # Change canned ACL of uploaded object. Default is unset. Will override fog_public if set.
  # Choose from: private | public-read | public-read-write | aws-exec-read |
  #              authenticated-read | bucket-owner-read | bucket-owner-full-control 
  # aws_acl: null
  #
  # Change host option in fog (only if you need to)
  # fog_host: "s3.amazonaws.com"
  #
  # Use http instead of https. Default should be "https" (at least for fog-aws)
  # fog_scheme: "http"

  existing_remote_files: keep # Existing pre-compiled assets on S3 will be kept
  # To delete existing remote files.
  # existing_remote_files: delete
  # To ignore existing remote files and overwrite.
  # existing_remote_files: ignore
  # Automatically replace files with their equivalent gzip compressed version
  # gzip_compression: true
  # Fail silently.  Useful for environments such as Heroku
  # fail_silently: true
  # Always upload. Useful if you want to overwrite specific remote assets regardless of their existence
  #  eg: Static files in public often reference non-fingerprinted application.css
  #  note: You will still need to expire them from the CDN's edge cache locations
  # always_upload: ['application.js', 'application.css', !ruby/regexp '/application-/\d{32}\.css/']
  # Ignored files. Useful if there are some files that are created dynamically on the server and you don't want to upload on deploy.
  # ignored_files: ['ignore_me.js', !ruby/regexp '/ignore_some/\d{32}\.css/']
  # Allow custom assets to be cacheable. Note: The base filename will be matched
  # If you have an asset with name "app.0b1a4cd3.js", only "app.0b1a4cd3" will need to be matched
  # cache_asset_regexps: ['cache_me.js', !ruby/regexp '/cache_some\.\d{8}\.css/']

development:
  <<: *defaults

test:
  <<: *defaults

production:
  <<: *defaults

Available Configuration Options

Most AssetSync configuration can be modified directly using environment variables with the Built-in Initializer, e.g.

AssetSync.config.fog_provider == ENV['FOG_PROVIDER']

Simply upcase the Ruby attribute names to get the equivalent environment variable to set. The only exceptions to that rule are the internal AssetSync config variables, which must be prefixed with ASSET_SYNC_, e.g.

AssetSync.config.gzip_compression == ENV['ASSET_SYNC_GZIP_COMPRESSION']

AssetSync (optional)

  • existing_remote_files: ('keep', 'delete', 'ignore') what to do with previously precompiled files. default: 'keep'
  • gzip_compression: (true, false) when enabled, will automatically replace files that have a gzip compressed equivalent with the compressed version. default: 'false'
  • manifest: (true, false) when enabled, will use the manifest.yml generated by Rails to get the list of local files to upload instead of searching the assets directory. experimental. default: 'false'
  • include_manifest: (true, false) when enabled, will upload the manifest.yml generated by Rails. default: 'false'
  • concurrent_uploads: (true, false) when enabled, will upload the files in multiple threads, which greatly improves upload speed. default: 'false'
  • concurrent_uploads_max_threads: when concurrent_uploads is enabled, this determines the number of threads that will be created. default: 10
  • remote_file_list_cache_file_path: if present, use this path to cache the remote file list and skip scanning the remote bucket. default: nil
  • enabled: (true, false) when false, will disable asset sync. default: 'true' (enabled)
  • ignored_files: an array of files to ignore, e.g. ['ignore_me.js', %r(ignore_some/\d{32}\.css)]. Useful if there are files that are created dynamically on the server and that you don't want to upload on deploy. default: []
  • cache_asset_regexps: an array of files to add cache headers to, e.g. ['cache_me.js', %r(cache_some\.\d{8}\.css)]. Useful if there are files added to the sprockets assets list that need to be set as 'Cacheable' on the remote server. Only Rails-compiled regexps are matched internally. default: []
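
For example, a hedged initializer sketch enabling the concurrency and cache options described above (the values shown are illustrative, not recommendations):

AssetSync.configure do |config|
  # ... provider settings as shown earlier ...
  config.concurrent_uploads = true              # upload in multiple threads
  config.concurrent_uploads_max_threads = 10    # size of the thread pool
  config.remote_file_list_cache_file_path = './.asset_sync_remote_file_list_cache.json'
end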

Config Method add_local_file_paths

Adding local files by providing a block:

AssetSync.configure do |config|
  # The block should return an array of file paths
  config.add_local_file_paths do
    # Any code that returns paths of local asset files to be uploaded
    # Like Webpacker
    public_root = Rails.root.join("public")
    Dir.chdir(public_root) do
      packs_dir = Webpacker.config.public_output_path.relative_path_from(public_root)
      Dir[File.join(packs_dir, '/**/**')]
    end
  end
end

The blocks are run when local files are being scanned and uploaded.

Config Method file_ext_to_mime_type_overrides

It's reported that mime-types 3.x returns application/ecmascript instead of application/javascript.
Such a change of MIME type might cause some CDNs to disable asset compression.
So this gem defines a default override mapping the file extension js to application/javascript.

To customize the overrides:

AssetSync.configure do |config|
  # Clear the default overrides
  config.file_ext_to_mime_type_overrides.clear

  # Add/Edit overrides
  # Will call `#to_s` for inputs
  config.file_ext_to_mime_type_overrides.add(:js, :"application/x-javascript")
end


Fog (Required)

  • fog_provider: your storage provider AWS (S3) or Rackspace (Cloud Files) or Google (Google Storage) or AzureRM (Azure Blob) or Backblaze (Backblaze B2)
  • fog_directory: your bucket name

Fog (Optional)

  • fog_region: the region your storage bucket is in e.g. eu-west-1 (AWS), ord (Rackspace), japanwest (Azure Blob)
  • fog_path_style: To use buckets with dot in names, check fog/fog#2381 (comment)

AWS

  • aws_access_key_id: your Amazon S3 access key
  • aws_secret_access_key: your Amazon S3 access secret
  • aws_acl: set canned ACL of uploaded object, will override fog_public if set

Rackspace

  • rackspace_username: your Rackspace username
  • rackspace_api_key: your Rackspace API Key.

Google Storage

When using the JSON API

  • google_project: your Google Cloud Project name where the Google Cloud Storage bucket resides
  • google_json_key_location: path to the location of the service account key. The service account key must be a JSON type key

When using the S3 API

  • google_storage_access_key_id: your Google Storage access key
  • google_storage_secret_access_key: your Google Storage access secret

Azure Blob

  • azure_storage_account_name: your Azure Blob access key
  • azure_storage_access_key: your Azure Blob access secret

Backblaze B2

  • b2_key_id: Your Backblaze B2 key ID
  • b2_key_token: Your Backblaze B2 key token
  • b2_bucket_id: Your Backblaze B2 bucket ID

Rackspace (Optional)

  • rackspace_auth_url: Rackspace auth URL, for Rackspace London use: https://lon.identity.api.rackspacecloud.com/v2.0

Amazon S3 Multiple Region Support

If you are using anything other than the US buckets with S3 then you'll want to set the region. For example with an EU bucket you could set the following environment variable.

heroku config:add FOG_REGION=eu-west-1

Or via a custom initializer

AssetSync.configure do |config|
  # ...
  config.fog_region = 'eu-west-1'
end

Or via YAML

production:
  # ...
  fog_region: 'eu-west-1'

Amazon (AWS) IAM Users

Amazon has switched to the more secure IAM User security policy model. When generating a user & policy for asset_sync you must ensure the policy has the following permissions, or you'll see the error:

Expected(200) <=> Actual(403 Forbidden)

IAM User Policy Example with minimum required permissions (replace bucket_name with your bucket):

{
  "Statement": [
    {
      "Action": "s3:ListBucket",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket_name"
    },
    {
      "Action": "s3:PutObject*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket_name/*"
    }
  ]
}

If you want to use IAM roles you must set config.aws_iam_roles = true in your initializers.

AssetSync.configure do |config|
  # ...
  config.aws_iam_roles = true
end

Automatic gzip compression

With the gzip_compression option enabled, when uploading your assets, if a file has a gzip compressed equivalent we will replace that asset with the compressed version and set the correct headers for S3 to serve it. For example, if you have a file master.css and it was compressed to master.css.gz, we will upload the .gz file to S3 in place of the uncompressed file.

If the compressed file is actually larger than the uncompressed file we will ignore this rule and upload the standard uncompressed version.
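
A minimal sketch of that decision logic (not the gem's actual code; upload_candidate is a hypothetical helper):

def upload_candidate(path)
  gz = "#{path}.gz"
  # Prefer the gzipped file only when it exists and is actually smaller.
  if File.exist?(gz) && File.size(gz) < File.size(path)
    { file: gz, headers: { "Content-Encoding" => "gzip" } }
  else
    { file: path, headers: {} }
  end
end

upload_candidate("public/assets/master.css")
# => chooses master.css.gz with Content-Encoding: gzip when it is smaller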

Fail Silently

With the fail_silently option enabled, when running rake assets:precompile AssetSync will never throw an error due to missing configuration variables.

With the new user_env_compile feature of Heroku (see above), this is no longer required or recommended, but it was added for the following reasons:

With Rails 3.1 on the Heroku cedar stack, the deployment process automatically runs rake assets:precompile. If you are using ENV-variable-style configuration then, due to the way Heroku compiles slugs, asset_sync will raise an error because the environment is not available. This causes Heroku to install the rails31_enable_runtime_asset_compilation plugin, which is not necessary when using asset_sync and also massively slows down the first incoming requests to your app.

To prevent this part of the deploy from failing (asset_sync raising a config error) while carrying on as normal, set fail_silently to true in your configuration and ensure you run heroku run rake assets:precompile after deploy.
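
In initializer form, that is (a minimal sketch using the documented option):

AssetSync.configure do |config|
  # ... provider settings ...
  config.fail_silently = true
end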

Rake Task

A rake task is included within the asset_sync gem to perform the sync:

  namespace :assets do
    desc "Synchronize assets to S3"
    task :sync => :environment do
      AssetSync.sync
    end
  end

If AssetSync.config.run_on_precompile is true (default), then assets will be uploaded to S3 automatically after the assets:precompile rake task is invoked:

  if Rake::Task.task_defined?("assets:precompile:nondigest")
    Rake::Task["assets:precompile:nondigest"].enhance do
      Rake::Task["assets:sync"].invoke if defined?(AssetSync) && AssetSync.config.run_on_precompile
    end
  else
    Rake::Task["assets:precompile"].enhance do
      Rake::Task["assets:sync"].invoke if defined?(AssetSync) && AssetSync.config.run_on_precompile
    end
  end

You can disable this behavior by setting AssetSync.config.run_on_precompile = false.

Sinatra/Rack Support

You can use the gem with any Rack application, but you must specify two additional options: prefix and public_path.

AssetSync.configure do |config|
  config.fog_provider = 'AWS'
  config.fog_directory = ENV['FOG_DIRECTORY']
  config.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
  config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
  config.prefix = 'assets'
  # Can be a `Pathname` or `String`
  # Will be converted into a `Pathname`
  # If relative, will be converted into an absolute path
  # via `::Rails.root` or `::Dir.pwd`
  config.public_path = Pathname('./public')
end

Then manually call AssetSync.sync at the end of your asset precompilation task.

namespace :assets do
  desc 'Precompile assets'
  task :precompile do
    target = Pathname('./public/assets')
    manifest = Sprockets::Manifest.new(sprockets, './public/assets/manifest.json')

    sprockets.each_logical_path do |logical_path|
      if (!File.extname(logical_path).in?(['.js', '.css']) || logical_path =~ /application\.(css|js)$/) && asset = sprockets.find_asset(logical_path)
        filename = target.join(logical_path)
        FileUtils.mkpath(filename.dirname)
        puts "Write asset: #{filename}"
        asset.write_to(filename)
        manifest.compile(logical_path)
      end
    end

    AssetSync.sync
  end
end

Webpacker (> 2.0) support

  1. Add webpacker files and disable run_on_precompile:
AssetSync.configure do |config|
  # Disable automatic run on precompile in order to attach to webpacker rake task
  config.run_on_precompile = false
  # The block should return an array of file paths
  config.add_local_file_paths do
    # Support webpacker assets
    public_root = Rails.root.join("public")
    Dir.chdir(public_root) do
      packs_dir = Webpacker.config.public_output_path.relative_path_from(public_root)
      Dir[File.join(packs_dir, '/**/**')]
    end
  end
end
  2. Add an asset_sync.rake file in your lib/tasks directory that enhances the correct task; otherwise asset_sync runs before webpacker:compile does:
if defined?(AssetSync)
  Rake::Task['webpacker:compile'].enhance do
    Rake::Task["assets:sync"].invoke
  end
end

Caveat

By adding local files outside the normal Rails assets directory, the uploading part works; however, checking whether an asset was previously uploaded does not, because asset_sync only fetches the files under the assets directory in the remote bucket. This means additional time spent uploading the same assets again on every precompilation.

Running the specs

Make sure you have a .env file with these details:-

# for AWS provider
AWS_ACCESS_KEY_ID=<yourkeyid>
AWS_SECRET_ACCESS_KEY=<yoursecretkey>
FOG_DIRECTORY=<yourbucket>
FOG_REGION=<yourbucketregion>

# for AzureRM provider
AZURE_STORAGE_ACCOUNT_NAME=<youraccountname>
AZURE_STORAGE_ACCESS_KEY=<youraccesskey>
FOG_DIRECTORY=<yourcontainer>
FOG_REGION=<yourcontainerregion>

Make sure the bucket has read/write permissions. Then to run the tests:-

foreman run rake

Todo

  1. Add some before and after filters for deleting and uploading
  2. Support more cloud storage providers
  3. Better test coverage
  4. Add rake tasks to clean old assets from a bucket

Credits

Inspired by:

License

MIT License. Copyright 2011-2013 Rumble Labs Ltd. rumblelabs.com


Author: AssetSync
Source code: https://github.com/AssetSync/asset_sync
License:

#ruby   #ruby-on-rails 

Seamus Quitzon

AWS Cost Allocation Tags and Cost Reduction

Bob had just arrived in the office for his first day of work as the newly hired chief technical officer when he was called into a conference room by the president, Martha, who immediately introduced him to the head of accounting, Amanda. They exchanged pleasantries, and then Martha got right down to business:

“Bob, we have several teams here developing software applications on Amazon and our bill is very high. We think it’s unnecessarily high, and we’d like you to look into it and bring it under control.”

Martha placed a screenshot of the Amazon Web Services (AWS) billing report on the table and pointed to it.

“This is a problem for us: We don’t know what we’re spending this money on, and we need to see more detail.”

Amanda chimed in, “Bob, look, we have financial dimensions that we use for reporting purposes, and I can provide you with some guidance regarding some information we’d really like to see such that the reports that are ultimately produced mirror these dimensions — if you can do this, it would really help us internally.”

“Bob, we can’t stress how important this is right now. These projects are becoming very expensive for our business,” Martha reiterated.

“How many projects do we have?” Bob inquired.

“We have four projects in total: two in the aviation division and two in the energy division. If it matters, the aviation division has 75 developers and the energy division has 25 developers,” the CEO responded.

Bob understood the problem and responded, “I’ll see what I can do and have some ideas. I might not be able to give you retrospective insight, but going forward, we should be able to get a better idea of what’s going on and start to bring the cost down.”

The meeting ended with Bob heading to find his desk. Cost allocation tags should help us, he thought to himself as he looked for someone who might know where his office was.

#aws #aws cloud #node js #cost optimization #aws cli #well architected framework #aws cost report #cost control #aws cost #aws tags


Christa Stehr

How To Unite AWS KMS with Serverless Application Model (SAM)

The Basics

AWS KMS is a Key Management Service that lets you create cryptographic keys that you can use to encrypt and decrypt data, and also other keys. You can read more about it here.

Important points about Keys

Please note that the customer master keys (CMKs) generated can only be used to encrypt small amounts of data, such as passwords or RSA keys. You can use AWS KMS CMKs to generate, encrypt, and decrypt data keys. However, AWS KMS does not store, manage, or track your data keys, or perform cryptographic operations with them.

You must use and manage data keys outside of AWS KMS. The KMS API uses AWS KMS CMKs in its encryption operations, and they cannot accept more than 4 KB (4096 bytes) of data. To encrypt application data, use the server-side encryption features of an AWS service, or a client-side encryption library such as the AWS Encryption SDK or the Amazon S3 encryption client.
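
The article's own code is a SAM template, but the data-key workflow is easier to see in a few lines of SDK code. Here is a minimal, hedged Ruby sketch of envelope encryption using the aws-sdk-kms gem; the region and key alias are placeholder assumptions, not values from the article:

require "aws-sdk-kms"  # gem "aws-sdk-kms"
require "openssl"

kms    = Aws::KMS::Client.new(region: "us-east-1") # placeholder region
key_id = "alias/my-app-key"                        # placeholder CMK alias

# A CMK encrypts at most 4 KB directly, so for application data we
# generate a data key and encrypt locally (envelope encryption).
data_key = kms.generate_data_key(key_id: key_id, key_spec: "AES_256")

cipher = OpenSSL::Cipher.new("aes-256-gcm").encrypt
cipher.key = data_key.plaintext          # 32-byte plaintext data key
iv         = cipher.random_iv
ciphertext = cipher.update("a password or other payload") + cipher.final
auth_tag   = cipher.auth_tag

# Persist ciphertext, iv, auth_tag, and the *encrypted* data key only;
# discard the plaintext key. KMS recovers it later for decryption:
plain_key = kms.decrypt(ciphertext_blob: data_key.ciphertext_blob).plaintext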

Scenario

We want to create signup and login forms for a website.

Passwords should be encrypted and stored in DynamoDB database.

What do we need?

  1. A KMS key to encrypt and decrypt data
  2. A DynamoDB table to store passwords
  3. Lambda functions and APIs to process the login and sign-up forms
  4. Sign-up/login forms in HTML

Let's implement it as a Serverless Application Model (SAM) template!

Let's first create the key that we will use to encrypt and decrypt passwords.

KmsKey:
    Type: AWS::KMS::Key
    Properties: 
      Description: CMK for encrypting and decrypting
      KeyPolicy:
        Version: '2012-10-17'
        Id: key-default-1
        Statement:
        - Sid: Enable IAM User Permissions
          Effect: Allow
          Principal:
            AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
          Action: kms:*
          Resource: '*'
        - Sid: Allow administration of the key
          Effect: Allow
          Principal:
            AWS: !Sub arn:aws:iam::${AWS::AccountId}:user/${KeyAdmin}
          Action:
          - kms:Create*
          - kms:Describe*
          - kms:Enable*
          - kms:List*
          - kms:Put*
          - kms:Update*
          - kms:Revoke*
          - kms:Disable*
          - kms:Get*
          - kms:Delete*
          - kms:ScheduleKeyDeletion
          - kms:CancelKeyDeletion
          Resource: '*'
        - Sid: Allow use of the key
          Effect: Allow
          Principal:
            AWS: !Sub arn:aws:iam::${AWS::AccountId}:user/${KeyUser}
          Action:
          - kms:DescribeKey
          - kms:Encrypt
          - kms:Decrypt
          - kms:ReEncrypt*
          - kms:GenerateDataKey
          - kms:GenerateDataKeyWithoutPlaintext
          Resource: '*'

The important thing in the above snippet is the KeyPolicy. KMS requires a Key Administrator and a Key User. As a best practice, your Key Administrator and Key User should be two separate users in your organisation. We are allowing all permissions to the root user.

So if your Key Administrator leaves the organisation, the root user will still be able to delete this key. As you can see, KeyAdmin can manage the key but not use it, and KeyUser can only use the key. ${KeyAdmin} and ${KeyUser} are parameters in the SAM template.

You will be asked to provide values for these parameters during sam deploy.

#aws #serverless #aws-sam #aws-key-management-service #aws-certification #aws-api-gateway #tutorial-for-beginners #aws-blogs