Default configurations of Fortinet’s FortiGate VPN appliance could open organizations to man-in-the-middle (MitM) attacks, according to researchers, where threat actors could intercept important data.
According to the SAM IoT Security Lab, the FortiGate SSL-VPN client only verifies that the certificate used for client authentication was issued by Fortinet or another trusted certificate authority.
“Therefore, an attacker can easily present a certificate issued to a different FortiGate router without raising any flags, and implement a man-in-the-middle attack,” researchers wrote, in an analysis on Thursday.
They added, “An attacker can actually use this to inject his own traffic, and essentially communicate with any internal device in the business, including point of sales, sensitive data centers, etc. This is a major security breach, that can lead to severe data exposure.”
A Shodan search turned up more than 230,000 vulnerable FortiGate appliances using the VPN functionality, researchers found. Out of those, a full 88 percent, or more than 200,000 businesses, are using the default configuration and can be easily breached in an MitM attack.
According to SAM, in a typical SSL certificate verification process, the client can connect to a server only after verifying that the certificate’s Server Name field matches the actual name of the server that the client is attempting to connect to; that the certificate validity date has not passed; that the digital signature is correct; and that the certificate was issued by an authority that the client trusts.
In the case of the FortiGate router, it uses a self-signed, default SSL certificate, and it uses the router’s serial number to denote the server for the certificate – it does not, according to SAM, verify that the actual server name parameter matches.
“This leaves Fortinet with enough information to verify the certificate was issued to the same server the client is trying to connect to, if it were to verify the serial number,” according to researchers. “However, Fortinet’s client does not verify the Server Name at all. In fact, any certificate will be accepted, so long as it is valid.”
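For contrast, here is a minimal Python sketch of the standard verification flow described above, which a stock TLS client performs by default (the hostname is a placeholder):
import socket, ssl

# A stock TLS client performs the checks the researchers list: trusted CA,
# validity dates, signature, and the Server Name match.
ctx = ssl.create_default_context()   # CA validation on, check_hostname=True

with socket.create_connection(("vpn.example.com", 443)) as sock:
    # Raises ssl.SSLCertVerificationError on a name mismatch --
    # the exact check the FortiGate client reportedly skips.
    with ctx.wrap_socket(sock, server_hostname="vpn.example.com") as tls:
        print(tls.version())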
SAM published a proof-of-concept (PoC) showing how an attacker could easily re-route traffic to a malicious server that presents his or her own certificate, and then decrypt the traffic.
“We decrypt the traffic of the Fortinet SSL-VPN client and extract the user’s password and [one-time password],” researchers explained.
#iot #vulnerabilities #web security #authentication #certificate #default configuration #fortigate #fortinet #man in the middle attack #small and medium sized businesses #ssl vpn #vpn
Asset Sync
Synchronises Assets between Rails and S3.
Asset Sync is built to run with the new Rails Asset Pipeline feature introduced in Rails 3.1. After you run bundle exec rake assets:precompile your assets will be synchronised to your S3 bucket, optionally deleting unused files and only uploading the files it needs to.
This was initially built and is intended to work on Heroku but can work on any platform.
Upgraded from 1.x? Read UPGRADING.md
Since 2.x, Asset Sync depends on the gem fog-core instead of fog, because fog pulls in many unused storage-provider gems as dependencies. Asset Sync has no idea which provider will be used, so you are responsible for bundling the right gem for that provider.
In your Gemfile:
gem "asset_sync"
gem "fog-aws"
Or, to use Azure Blob storage, configure it like this:
gem "asset_sync"
gem "gitlab-fog-azure-rm"
# This gem seems unmaintained
# gem "fog-azure-rm"
To use Backblaze B2, add these gems:
gem "asset_sync"
gem "fog-backblaze"
If you are using Rails 3.2.x, it's possible to improve assets:precompile time; the main cost is the compilation of non-digest assets.
turbo-sprockets-rails3 solves this by compiling only digest assets, cutting compile time in half.
NOTE: It will be deprecated in Rails 4, as sprockets-rails has been extracted out of Rails and will only compile digest assets by default.
Configure config/environments/production.rb to use Amazon S3 as the asset host and ensure precompiling is enabled.
#config/environments/production.rb
config.action_controller.asset_host = "//#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com"
Or, to use Google Storage Cloud, configure it like this:
#config/environments/production.rb
config.action_controller.asset_host = "//#{ENV['FOG_DIRECTORY']}.storage.googleapis.com"
Or, to use Azure Blob storage, configure it like this:
#config/environments/production.rb
config.action_controller.asset_host = "//#{ENV['AZURE_STORAGE_ACCOUNT_NAME']}.blob.core.windows.net/#{ENV['FOG_DIRECTORY']}"
Or, to use Backblaze B2, configure it like this:
#config/environments/production.rb
config.action_controller.asset_host = "//f000.backblazeb2.com/file/#{ENV['FOG_DIRECTORY']}"
On HTTPS: the exclusion of any protocol in the asset host declaration above will allow browsers to choose the transport mechanism on the fly. So if your application is available under both HTTP and HTTPS the assets will be served to match.
The only caveat is that your S3 bucket name must not contain any periods: mydomain.com.s3.amazonaws.com, for example, would not work under HTTPS, because Amazon's SSL certificate would treat the bucket name not as a subdomain of s3.amazonaws.com but as a multi-level subdomain. To avoid this, don't use a period in your bucket name, or switch to the other style of S3 URL.
config.action_controller.asset_host = "//s3.amazonaws.com/#{ENV['FOG_DIRECTORY']}"
Or, to use Google Storage Cloud, configure it like this:
config.action_controller.asset_host = "//storage.googleapis.com/#{ENV['FOG_DIRECTORY']}"
Or, to use Azure Blob storage, configure it like this:
#config/environments/production.rb
config.action_controller.asset_host = "//#{ENV['AZURE_STORAGE_ACCOUNT_NAME']}.blob.core.windows.net/#{ENV['FOG_DIRECTORY']}"
On a non-default S3 bucket region: if your bucket is set to a region other than the default US Standard (us-east-1), you must use the first style of URL //#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com or Amazon will return a 301 Moved Permanently when assets are requested. Note the caveat above about bucket names and periods.
If you wish to have your assets sync to a sub-folder of your bucket instead of into the root, add the following to your production.rb file:
# store assets in a 'folder' instead of bucket root
config.assets.prefix = "/production/assets"
Also, ensure the following are defined (in production.rb or application.rb).
Additionally, if you depend on any configuration that is set up in your initializers, you will need to ensure that it is available when assets:precompile runs.
AssetSync supports the following methods of configuration.
Using the Built-in Initializer is the default method and is supposed to be used with environment variables. It's the recommended approach for deployments on Heroku.
If you need more control over configuration you will want to use a custom rails initializer.
Configuration using a YAML file (a common strategy for Capistrano deployments) is also supported.
The recommended way to configure asset_sync is with environment variables, though it's up to you; it will work fine if you hard-code them too. The main reason environment variables are recommended is so your access keys are not checked into version control.
The Built-in Initializer will configure AssetSync based on the contents of your environment variables.
Add your configuration details to Heroku:
heroku config:add AWS_ACCESS_KEY_ID=xxxx
heroku config:add AWS_SECRET_ACCESS_KEY=xxxx
heroku config:add FOG_DIRECTORY=xxxx
heroku config:add FOG_PROVIDER=AWS
# and optionally:
heroku config:add FOG_REGION=eu-west-1
heroku config:add ASSET_SYNC_GZIP_COMPRESSION=true
heroku config:add ASSET_SYNC_MANIFEST=true
heroku config:add ASSET_SYNC_EXISTING_REMOTE_FILES=keep
Or export them on a traditional Unix system:
export AWS_ACCESS_KEY_ID=xxxx
export AWS_SECRET_ACCESS_KEY=xxxx
export FOG_DIRECTORY=xxxx
Rackspace configuration is also supported
heroku config:add RACKSPACE_USERNAME=xxxx
heroku config:add RACKSPACE_API_KEY=xxxx
heroku config:add FOG_DIRECTORY=xxxx
heroku config:add FOG_PROVIDER=Rackspace
Google Storage Cloud configuration is supported as well. The preferred option is using the GCS JSON API which requires that you create an appropriate service account, generate the signatures and make them accessible to asset sync at the prescribed location
heroku config:add FOG_PROVIDER=Google
heroku config:add GOOGLE_PROJECT=xxxx
heroku config:add GOOGLE_JSON_KEY_LOCATION=xxxx
heroku config:add FOG_DIRECTORY=xxxx
If using the S3 API, the following config is required:
heroku config:add FOG_PROVIDER=Google
heroku config:add GOOGLE_STORAGE_ACCESS_KEY_ID=xxxx
heroku config:add GOOGLE_STORAGE_SECRET_ACCESS_KEY=xxxx
heroku config:add FOG_DIRECTORY=xxxx
The Built-in Initializer also sets the AssetSync default for existing_remote_files to keep.
If you want to enable some of the advanced configuration options you will want to create your own initializer.
Run the included Rake task to generate a starting point.
rails g asset_sync:install --provider=Rackspace
rails g asset_sync:install --provider=AWS
rails g asset_sync:install --provider=AzureRM
rails g asset_sync:install --provider=Backblaze
The generator will create a Rails initializer at config/initializers/asset_sync.rb.
AssetSync.configure do |config|
config.fog_provider = 'AWS'
config.fog_directory = ENV['FOG_DIRECTORY']
config.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
config.aws_session_token = ENV['AWS_SESSION_TOKEN'] if ENV.key?('AWS_SESSION_TOKEN')
# Don't delete files from the store
# config.existing_remote_files = 'keep'
#
# Increase upload performance by configuring your region
# config.fog_region = 'eu-west-1'
#
# Set `public` option when uploading file depending on value,
# Setting to "default" makes asset sync skip setting the option
# Possible values: true, false, "default" (default: true)
# config.fog_public = true
#
# Change AWS signature version. Default is 4
# config.aws_signature_version = 4
#
# Change canned ACL of uploaded object. Default is unset. Will override fog_public if set.
# Choose from: private | public-read | public-read-write | aws-exec-read |
# authenticated-read | bucket-owner-read | bucket-owner-full-control
# config.aws_acl = nil
#
# Change host option in fog (only if you need to)
# config.fog_host = 's3.amazonaws.com'
#
# Change port option in fog (only if you need to)
# config.fog_port = "9000"
#
# Use http instead of https.
# config.fog_scheme = 'http'
#
# Automatically replace files with their equivalent gzip compressed version
# config.gzip_compression = true
#
# Use the Rails generated 'manifest.yml' file to produce the list of files to
# upload instead of searching the assets directory.
# config.manifest = true
#
# Upload the manifest file also.
# config.include_manifest = false
#
# Upload files concurrently
# config.concurrent_uploads = false
#
# Number of threads when concurrent_uploads is enabled
# config.concurrent_uploads_max_threads = 10
#
# Path to cache file to skip scanning remote
# config.remote_file_list_cache_file_path = './.asset_sync_remote_file_list_cache.json'
#
# Fail silently. Useful for environments such as Heroku
# config.fail_silently = true
#
# Log silently. Default is `true`. But you can set it to false if more logging messages are preferred.
# Logging messages are sent to `STDOUT` when `log_silently` is falsy
# config.log_silently = true
#
# Allow custom assets to be cacheable. Note: The base filename will be matched
# If you have an asset with name `app.0b1a4cd3.js`, only `app.0b1a4cd3` will need to be matched
# only one of `cache_asset_regexp` or `cache_asset_regexps` is allowed.
# config.cache_asset_regexp = /\.[a-f0-9]{8}$/i
# config.cache_asset_regexps = [ /\.[a-f0-9]{8}$/i, /\.[a-f0-9]{20}$/i ]
end
Run the included Rake task to generate a starting point.
rails g asset_sync:install --use-yml --provider=Rackspace
rails g asset_sync:install --use-yml --provider=AWS
rails g asset_sync:install --use-yml --provider=AzureRM
rails g asset_sync:install --use-yml --provider=Backblaze
The generator will create a YAML file at config/asset_sync.yml.
defaults: &defaults
fog_provider: "AWS"
fog_directory: "rails-app-assets"
aws_access_key_id: "<%= ENV['AWS_ACCESS_KEY_ID'] %>"
aws_secret_access_key: "<%= ENV['AWS_SECRET_ACCESS_KEY'] %>"
# To use AWS reduced redundancy storage.
# aws_reduced_redundancy: true
#
# You may need to specify what region your storage bucket is in
# fog_region: "eu-west-1"
#
# Change AWS signature version. Default is 4
# aws_signature_version: 4
#
# Change canned ACL of uploaded object. Default is unset. Will override fog_public if set.
# Choose from: private | public-read | public-read-write | aws-exec-read |
# authenticated-read | bucket-owner-read | bucket-owner-full-control
# aws_acl: null
#
# Change host option in fog (only if you need to)
# fog_host: "s3.amazonaws.com"
#
# Use http instead of https. Default should be "https" (at least for fog-aws)
# fog_scheme: "http"
existing_remote_files: keep # Existing pre-compiled assets on S3 will be kept
# To delete existing remote files.
# existing_remote_files: delete
# To ignore existing remote files and overwrite.
# existing_remote_files: ignore
# Automatically replace files with their equivalent gzip compressed version
# gzip_compression: true
# Fail silently. Useful for environments such as Heroku
# fail_silently: true
# Always upload. Useful if you want to overwrite specific remote assets regardless of their existence
# eg: Static files in public often reference non-fingerprinted application.css
# note: You will still need to expire them from the CDN's edge cache locations
# always_upload: ['application.js', 'application.css', !ruby/regexp '/application-/\d{32}\.css/']
# Ignored files. Useful if there are some files that are created dynamically on the server and you don't want to upload on deploy.
# ignored_files: ['ignore_me.js', !ruby/regexp '/ignore_some/\d{32}\.css/']
# Allow custom assets to be cacheable. Note: The base filename will be matched
# If you have an asset with name "app.0b1a4cd3.js", only "app.0b1a4cd3" will need to be matched
# cache_asset_regexps: ['cache_me.js', !ruby/regexp '/cache_some\.\d{8}\.css/']
development:
<<: *defaults
test:
<<: *defaults
production:
<<: *defaults
Most AssetSync configuration can be modified directly using environment variables with the Built-in initializer. e.g.
AssetSync.config.fog_provider == ENV['FOG_PROVIDER']
Simply upcase the Ruby attribute names to get the equivalent environment variable to set. The only exception to that rule is the internal AssetSync config variables, which must be prepended with ASSET_SYNC_*,
e.g.
AssetSync.config.gzip_compression == ENV['ASSET_SYNC_GZIP_COMPRESSION']
- existing_remote_files: ('keep', 'delete', 'ignore') what to do with previously precompiled files. default: 'keep'
- gzip_compression: (true, false) when enabled, will automatically replace files that have a gzip compressed equivalent with the compressed version. default: 'false'
- manifest: (true, false) when enabled, will use the manifest.yml generated by Rails to produce the list of local files to upload instead of searching the assets directory. experimental. default: 'false'
- include_manifest: (true, false) when enabled, will also upload the manifest.yml generated by Rails. default: 'false'
- concurrent_uploads: (true, false) when enabled, will upload the files in different threads, which greatly improves the upload speed. default: 'false'
- concurrent_uploads_max_threads: the number of threads used when concurrent_uploads is enabled. default: 10
- remote_file_list_cache_file_path: path to a cache file used to skip scanning the remote bucket. default: nil
- enabled: (true, false) when false, will disable asset sync. default: 'true' (enabled)
- ignored_files: e.g. ['ignore_me.js', %r(ignore_some/\d{32}\.css)]. Useful if there are some files that are created dynamically on the server and you don't want to upload them on deploy. default: []
- cache_asset_regexps: e.g. ['cache_me.js', %r(cache_some\.\d{8}\.css)]. Useful if there are some files that are added to the sprockets assets list and need to be set as 'Cacheable' on the upload server. Only Rails-compiled regexps are matched internally. default: []
Config Method add_local_file_paths
Adding local files by providing a block:
AssetSync.configure do |config|
# The block should return an array of file paths
config.add_local_file_paths do
# Any code that returns paths of local asset files to be uploaded
# Like Webpacker
public_root = Rails.root.join("public")
Dir.chdir(public_root) do
packs_dir = Webpacker.config.public_output_path.relative_path_from(public_root)
Dir[File.join(packs_dir, '/**/**')]
end
end
end
The blocks are run when local files are being scanned and uploaded
Config Method file_ext_to_mime_type_overrides
It's been reported that mime-types 3.x returns application/ecmascript instead of application/javascript. Such a change of MIME type might cause some CDNs to disable asset compression. So this gem defines a default override mapping the file extension js to application/javascript.
To customize the overrides:
AssetSync.configure do |config|
# Clear the default overrides
config.file_ext_to_mime_type_overrides.clear
# Add/Edit overrides
# Will call `#to_s` for inputs
config.file_ext_to_mime_type_overrides.add(:js, :"application/x-javascript")
end
The required credential options differ depending on whether you use the GCS JSON API or the S3 API (see the Heroku config examples above). For Rackspace, an alternative auth endpoint can be used, e.g. https://lon.identity.api.rackspacecloud.com/v2.0 for London.
If you are using anything other than the US buckets with S3 then you'll want to set the region. For example with an EU bucket you could set the following environment variable.
heroku config:add FOG_REGION=eu-west-1
Or via a custom initializer
AssetSync.configure do |config|
# ...
config.fog_region = 'eu-west-1'
end
Or via YAML
production:
  # ...
  fog_region: 'eu-west-1'
Amazon has switched to the more secure IAM User security policy model. When generating a user & policy for asset_sync you must ensure the policy has the following permissions, or you'll see the error:
Expected(200) <=> Actual(403 Forbidden)
IAM User Policy Example with minimum required permissions (replace bucket_name with your bucket):
{
"Statement": [
{
"Action": "s3:ListBucket",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucket_name"
},
{
"Action": "s3:PutObject*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucket_name/*"
}
]
}
If you want to use IAM roles you must set config.aws_iam_roles = true in your initializers.
AssetSync.configure do |config|
# ...
config.aws_iam_roles = true
end
With the gzip_compression option enabled, when uploading your assets, if a file has a gzip compressed equivalent we will replace that asset with the compressed version and set the correct headers for S3 to serve it. For example, if you have a file master.css and it was compressed to master.css.gz, we will upload the .gz file to S3 in place of the uncompressed file.
If the compressed file is actually larger than the uncompressed file, we will ignore this rule and upload the standard uncompressed version.
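As a minimal illustrative sketch of that rule (not asset_sync's actual internals; the path is a placeholder):
def file_to_upload(path)
  gz = "#{path}.gz"
  # Upload the .gz in place of the original only when it is really smaller,
  # and tell S3 to serve it with the matching Content-Encoding header.
  if File.exist?(gz) && File.size(gz) < File.size(path)
    [gz, { "Content-Encoding" => "gzip" }]
  else
    [path, {}]
  end
end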
With the fail_silently option enabled, AssetSync will never throw an error due to missing configuration variables when running rake assets:precompile.
With the new user_env_compile feature of Heroku (see above), this is no longer required or recommended. It was added for the following reasons:
With Rails 3.1 on the Heroku cedar stack, the deployment process automatically runs rake assets:precompile. If you are using ENV-variable-style configuration, then due to the way Heroku compiles slugs, asset_sync will raise an error because the environment is not available. This causes Heroku to install the rails31_enable_runtime_asset_compilation plugin, which is not necessary when using asset_sync and also massively slows down the first incoming requests to your app.
To prevent this part of the deploy from failing (asset_sync raising a config error) but carry on as normal, set fail_silently to true in your configuration and ensure you run heroku run rake assets:precompile after deploy.
A rake task is included within the asset_sync gem to perform the sync:
namespace :assets do
desc "Synchronize assets to S3"
task :sync => :environment do
AssetSync.sync
end
end
If AssetSync.config.run_on_precompile is true (default), then assets will be uploaded to S3 automatically after the assets:precompile rake task is invoked:
if Rake::Task.task_defined?("assets:precompile:nondigest")
Rake::Task["assets:precompile:nondigest"].enhance do
Rake::Task["assets:sync"].invoke if defined?(AssetSync) && AssetSync.config.run_on_precompile
end
else
Rake::Task["assets:precompile"].enhance do
Rake::Task["assets:sync"].invoke if defined?(AssetSync) && AssetSync.config.run_on_precompile
end
end
You can disable this behavior by setting AssetSync.config.run_on_precompile = false.
You can use the gem with any Rack application, but you must specify two additional options: prefix and public_path.
AssetSync.configure do |config|
config.fog_provider = 'AWS'
config.fog_directory = ENV['FOG_DIRECTORY']
config.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
config.prefix = 'assets'
# Can be a `Pathname` or `String`
# Will be converted into a `Pathname`
# If relative, will be converted into an absolute path
# via `::Rails.root` or `::Dir.pwd`
config.public_path = Pathname('./public')
end
Then manually call AssetSync.sync at the end of your asset precompilation task.
namespace :assets do
desc 'Precompile assets'
task :precompile do
target = Pathname('./public/assets')
manifest = Sprockets::Manifest.new(sprockets, './public/assets/manifest.json')
sprockets.each_logical_path do |logical_path|
if (!File.extname(logical_path).in?(['.js', '.css']) || logical_path =~ /application\.(css|js)$/) && asset = sprockets.find_asset(logical_path)
filename = target.join(logical_path)
FileUtils.mkpath(filename.dirname)
puts "Write asset: #{filename}"
asset.write_to(filename)
manifest.compile(logical_path)
end
end
AssetSync.sync
end
end
To hook asset_sync into webpacker instead, disable run_on_precompile:
AssetSync.configure do |config|
# Disable automatic run on precompile in order to attach to webpacker rake task
config.run_on_precompile = false
# The block should return an array of file paths
config.add_local_file_paths do
# Support webpacker assets
public_root = Rails.root.join("public")
Dir.chdir(public_root) do
packs_dir = Webpacker.config.public_output_path.relative_path_from(public_root)
Dir[File.join(packs_dir, '/**/**')]
end
end
end
Then add an asset_sync.rake in your lib/tasks directory that enhances the correct task; otherwise asset_sync runs before webpacker:compile does:
if defined?(AssetSync)
Rake::Task['webpacker:compile'].enhance do
Rake::Task["assets:sync"].invoke
end
end
When adding local files from outside the normal Rails assets directory, the uploading part works, but checking whether an asset was previously uploaded does not, because asset_sync only fetches the files in the assets directory on the remote bucket. This means additional time is spent uploading the same assets again on every precompilation.
Make sure you have a .env file with these details:
# for AWS provider
AWS_ACCESS_KEY_ID=<yourkeyid>
AWS_SECRET_ACCESS_KEY=<yoursecretkey>
FOG_DIRECTORY=<yourbucket>
FOG_REGION=<youbucketregion>
# for AzureRM provider
AZURE_STORAGE_ACCOUNT_NAME=<youraccountname>
AZURE_STORAGE_ACCESS_KEY=<youraccesskey>
FOG_DIRECTORY=<yourcontainer>
FOG_REGION=<yourcontainerregion>
Make sure the bucket has read/write permissions. Then, to run the tests:
foreman run rake
Inspired by:
MIT License. Copyright 2011-2013 Rumble Labs Ltd. rumblelabs.com
Author: AssetSync
Source code: https://github.com/AssetSync/asset_sync
License: MIT
Flexible and powerful Vue components for Stripe. It's a glue layer between Stripe.js and the Vue component lifecycle.
Quickstart
# npm
npm i vue-stripe-elements-plus --save-dev
# yarn
yarn add vue-stripe-elements-plus --dev
<script src="https://js.stripe.com/v3/"></script>
Alternatively, you can load Stripe library dynamically. Just make sure it's ready before your components mount.
Create card
<template>
<div class="payment-simple">
<StripeElements
:stripe-key="stripeKey"
:instance-options="instanceOptions"
:elements-options="elementsOptions"
#default="{ elements }" // attention: important part!
ref="elms"
>
<StripeElement
type="card"
:elements="elements"
:options="cardOptions"
ref="card"
/>
</StripeElements>
<button @click="pay" type="button">Pay</button>
</div>
</template>
<script>
import { StripeElements, StripeElement } from 'vue-stripe-elements-plus'
export default {
name: 'PaymentSimple',
components: {
StripeElements,
StripeElement
},
data () {
return {
stripeKey: 'pk_test_TYooMQauvdEDq54NiTphI7jx', // test key, don't hardcode
instanceOptions: {
// https://stripe.com/docs/js/initializing#init_stripe_js-options
},
elementsOptions: {
// https://stripe.com/docs/js/elements_object/create#stripe_elements-options
},
cardOptions: {
// reactive
// remember about Vue 2 reactivity limitations when dealing with options
value: {
postalCode: ''
}
// https://stripe.com/docs/stripe.js#element-options
}
}
},
methods: {
pay () {
// ref in template
const groupComponent = this.$refs.elms
const cardComponent = this.$refs.card
// Get stripe element
const cardElement = cardComponent.stripeElement
// Access instance methods, e.g. createToken()
groupComponent.instance.createToken(cardElement).then(result => {
// Handle result.error or result.token
})
}
}
}
</script>
Create multiple elements
<StripeElements
:stripe-key="stripeKey"
:instance-options="instanceOptions"
:elements-options="elementsOptions"
#default="{ elements }" // attention: important part!
>
<StripeElement
type="cardNumber"
:elements="elements"
:options="cardNumberOptions"
/>
<StripeElement
type="postalCode"
:elements="elements"
:options="postalCodeOptions"
/>
</StripeElements>
You can even create multiple groups, don't ask me why. It's possible.
<StripeElements
:stripe-key="stripeKey1"
:instance-options="instanceOptions1"
:elements-options="elementsOptions1"
#default="{ elements }" // attention: important part!
>
<StripeElement
:elements="elements"
:options="cardOptions"
/>
</StripeElements>
<StripeElements
:stripe-key="stripeKey2"
:instance-options="instanceOptions2"
:elements-options="elementsOptions2"
#default="{ elements }" // attention: important part!
>
<StripeElement
type="iban"
:elements="elements"
:options="ibanOptions"
/>
</StripeElements>
Styles
No base style included. Main reason: overriding it isn't fun. Style as you wish via element options: see details.
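For example, a hypothetical cardOptions object passing Stripe's style keys through to the element (values here are placeholders):
// Hypothetical element options: the style object is passed straight to Stripe.
const cardOptions = {
  style: {
    base: {
      color: '#32325d',
      fontSize: '16px',
      '::placeholder': { color: '#aab7c4' }
    },
    invalid: { color: '#fa755a' }
  }
}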
API Reference
Think of it as an individual group of elements. It creates the Stripe instance and the elements object.
import { StripeElements } from 'vue-stripe-elements-plus'
// https://stripe.com/docs/js/initializing#init_stripe_js-options
stripeKey: {
type: String,
required: true,
},
// https://stripe.com/docs/js/elements_object/create#stripe_elements-options
instanceOptions: {
type: Object,
default: () => ({}),
},
// https://stripe.com/docs/stripe.js#element-options
elementsOptions: {
type: Object,
default: () => ({}),
},
You can access instance and elements by adding a ref to the StripeElements component.
// data of StripeElements.vue
instance: {},
elements: {},
Elegant solution for props. Really handy, because you can make instance and elements available to all children without adding extra code.
<!-- Isn't it cool? I really like it! -->
<StripeElements #default="{elements, instance}">
<StripeElement :elements="elements" />
<CustomComponent :instance="instance" />
</StripeElements>
Universal and type agnostic component. Create any element supported by Stripe.
// elements object
// https://stripe.com/docs/js/elements_object/create
elements: {
type: Object,
required: true,
},
// type of the element
// https://stripe.com/docs/js/elements_object/create_element?type=card
type: {
type: String,
default: () => 'card',
},
// element options
// https://stripe.com/docs/js/elements_object/create_element?type=card#elements_create-options
options: {
type: [Object, undefined],
},
stripeElement
domElement
Element options are reactive. Recommendation: don't use v-model on StripeElement; instead, pass the value via options.
data() {
return {
elementOptions: {
value: {
postalCode: ''
}
}
}
},
methods: {
changePostalCode() {
// will update stripe element automatically
this.elementOptions.value.postalCode = '12345'
}
}
The following events are emitted on StripeElement:
<StripeElement
:elements="elements"
@blur="doSomething"
/>
In case you like the manual gearbox, check stripeElements.js for details.
import { initStripe, createElements, createElement } from 'vue-stripe-elements-plus'
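A rough sketch of what manual usage could look like, assuming signatures of the form initStripe(key, options), createElements(instance, options) and createElement(elements, type, options); verify against stripeElements.js before relying on it:
// Assumed signatures; check stripeElements.js for the real ones.
import { initStripe, createElements, createElement } from 'vue-stripe-elements-plus'

const stripe = initStripe('pk_test_TYooMQauvdEDq54NiTphI7jx', {}) // Stripe instance
const elements = createElements(stripe, {})                       // elements object
const card = createElement(elements, 'card', { value: { postalCode: '' } })
card.mount('#card') // standard Stripe.js element API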
Download Details:
Author: ectoflow
Official Website: https://github.com/ectoflow/vue-stripe-elements
License: MIT
#vue #stripe
In this tutorial, we will cover SSH port forwarding in Linux. This is a function of the SSH utility that Linux administrators use to create encrypted and secure relays across different systems.
You can use SSH port forwarding (SSH tunneling) to create a secure connection between two or more systems. Applications can then use these tunnels to transmit data.
Your data is only as secure as its encryption, which is why SSH port forwarding is a popular mechanism to use. Read on to find out more and see how to set up SSH port forwarding on your own systems.
To put it simply, SSH port forwarding involves establishing an SSH tunnel between two or more systems and then configuring the systems to transmit a specified type of traffic through that connection.
There are a few different things you can do with this: local forwarding, remote forwarding, and dynamic port forwarding. Each configuration requires its own steps to set up, so we will go over each of them later in the tutorial.
Local port forwarding is used to make an external resource available on the local network. An SSH tunnel is established to a remote system, and traffic from the local network can use that tunnel to transmit data back and forth, accessing the remote system and network as if it was a part of the local network.
Remote port forwarding is the exact opposite. An SSH tunnel is established, but the remote system is able to access your local network.
Dynamic port forwarding sets up a SOCKS proxy server. You can configure applications to connect to the proxy and transmit all data through it. The most common use for this is for private web browsing or to make your connection seemingly originate from a different country or location.
You can use SSH port forwarding to set up a virtual private network (VPN). You’ll need an extra program for this called sshuttle. We cover the details later in the tutorial.
Since SSH creates encrypted connections, this is an ideal solution if you have applications that transmit data in plaintext or use an unencrypted protocol. This holds especially true for legacy applications.
It’s also popular to use it to connect to a local network from the outside—for example, an employee using SSH tunnels to connect to a company’s intranet.
You may be thinking this sounds like a VPN. The two are similar, but SSH tunnels are created for specific traffic, whereas VPNs are more for establishing general connections.
SSH port forwarding will allow you to access remote resources by just establishing an SSH tunnel. The only requirement is that you have SSH access to the remote system and, ideally, public key authentication configured for password-less SSHing.
Technically, you can specify as many port forwarding sessions as you’d like. There are 65,535 ports, and you are able to forward any of them that you want.
When forwarding traffic, be cognizant of the services that use certain ports. For example, port 80 is reserved for HTTP. So you would only want to forward traffic on port 80 if you intend to forward web requests.
The port you forward on your local system doesn’t have to match that of the remote server. For example, you can forward port 8080 on localhost to port 80 on the remote host.
If you don’t care what port you are using on the local system, select one between 2,000 and 10,000 since these are rarely used ports. Smaller numbers are typically reserved for certain protocols.
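For example, a sketch forwarding local port 8080 to port 80 on the remote machine itself (the hostname is a placeholder; note that localhost here is resolved from the remote server's point of view):
$ ssh -L 8080:localhost:80 user@hostname.com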
Local forwarding involves forwarding a port from the client system to a server. It allows you to configure a port on your system so that all connections to that port will get forwarded through the SSH tunnel.
Use the -L switch in your ssh command to specify local port forwarding. The general syntax of the command is like this:
$ ssh -L local_port:remote_ip:remote_port user@hostname.com
Check out the example below:
$ ssh -L 80:example1.com:80 example2.com
This command forwards connections to port 80 on the local system through the SSH tunnel to example2.com, which relays them to example1.com on port 80. Any user on this system who opens a web browser against the local forwarded port will, in the background, have their request sent on to example1.com via example2.com and see that site's content.
Such a command is useful when configuring external access to a company intranet or other private network resources.
To see if your port forwarding is working correctly, you can use the netcat command. On the client machine (the system where you ran the ssh -L command), type the netcat command with this syntax:
$ nc -v remote_ip port_number
If the port is forwarded and data is able to traverse the connection successfully, Netcat will return with a success message, and if it doesn’t work, the connection will time out.
If you’re having trouble getting the port forwarding to work, make sure you’re able to ssh into the remote server normally and that you have configured the ports correctly. Also, verify that the connection isn’t being blocked by a firewall.
Autossh is a tool that can be used to create persistent SSH tunnels. The only prerequisite is that you need to have public key authentication configured between your systems unless you want to be prompted for a password every time the connection dies and is re-established.
Autossh may not be installed by default on your system, but you can quickly install it using apt, yum, or whatever package manager your distribution uses.
$ sudo apt-get install autossh
The autossh command is going to look pretty much identical to the ssh command we ran earlier.
$ autossh -L 80:example1.com:80 example2.com
Autossh will make sure that tunnels are automatically re-established in case they close because of inactivity, remote machine rebooting, network connection being lost, etc.
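A common variant worth knowing (standard autossh/OpenSSH flags, reusing the example hosts above) disables autossh's extra monitoring port in favor of SSH keepalives:
$ autossh -M 0 -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 80:example1.com:80 example2.com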
Remote port forwarding is used to give remote machine access to your system. For example, if you want a service on your local computer to be accessible by a system(s) on your company’s private network, you could configure remote port forwarding to accomplish that.
To set this up, issue an ssh command with the following syntax:
$ ssh -R remote_port:local_ip:local_port user@hostname.com
If you have a local web server on your computer and would like to grant access to it from a remote network, you could forward port 8080 (common http alternative port) on the remote system to port 80 (http port) on your local system.
$ ssh -R 8080:localhost:80 geek@likegeeks.com
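One caveat: by default, sshd binds remote forwards to the loopback interface only. If other machines on the remote network should reach the forwarded port, the server's sshd_config needs (assuming you control that server):
# /etc/ssh/sshd_config on the remote server
GatewayPorts yes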
SSH dynamic port forwarding will make SSH act as a SOCKS proxy server. Rather than forwarding traffic on a specific port (the way local and remote port forwarding do), this will forward traffic across a range of ports.
If you have ever used a proxy server to visit a blocked website or view location-restricted content (like viewing stuff on Netflix that isn’t available in your country), you probably used a SOCKS server.
It also provides privacy, since you can route your traffic through a SOCKS server with dynamic port forwarding and prevent anyone from snooping log files to see your network traffic (websites visited, etc.).
To set up dynamic port forwarding, use the ssh command with the following syntax:
$ ssh -D local_port user@hostname.com
So, if we wanted to forward traffic on port 1234 to our SSH server:
$ ssh -D 1234 geek@likegeeks.com
Once you’ve established this connection, you can configure applications to route traffic through it. For example, on your web browser:
Type the loopback address (127.0.0.1) and the port you configured for dynamic port forwarding, and all traffic will be forwarded through the SSH tunnel to the remote host (in our example, the likegeeks.com SSH server).
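You can also test the SOCKS proxy from the command line, for example with curl's built-in SOCKS support (the URL is a placeholder):
$ curl --socks5-hostname 127.0.0.1:1234 https://example.com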
For local port forwarding, if you’d like to set up more than one port to be forwarded to a remote host, you just need to specify each rule with a new -L switch each time. The command syntax is like this:
$ ssh -L local_port_1:remote_ip:remote_port_1 -L local_port_2:remote_ip:remote_port2 user@hostname.com
For example, if you want to forward ports 8080 and 4430 to 192.168.1.1 ports 80 and 443 (HTTP and HTTPS), respectively, you would use this command:
$ ssh -L 8080:192.168.1.1:80 -L 4430:192.168.1.1:443 user@hostname.com
For remote port forwarding, you can set up more than one port to be forwarded by specifying each new rule with the -R switch. The command syntax is like this:
$ ssh -R remote_port_1:local_ip:local_port_1 -R remote_port_2:local_ip:local_port_2 user@hostname.com
You can see what SSH tunnels are currently established with the lsof command.
$ lsof -i | egrep '\<ssh\>'
In the output, you can see how many SSH tunnels are established. Add the -n flag to have IP addresses listed instead of resolving the hostnames.
$ lsof -i -n | egrep '\<ssh\>'
By default, SSH port forwarding is pretty open. You can freely create local, remote, and dynamic port forwards as you please.
But if you don’t trust some of the SSH users on your system, or you’d just like to enhance security in general, you can put some limitations on SSH port forwarding.
There are a couple of different settings you can configure inside the sshd_config file to put limitations on port forwarding. To configure this file, edit it with vi, nano, or your favorite text editor:
$ sudo vi /etc/ssh/sshd_config
PermitOpen can be used to specify the destinations to which port forwarding is allowed. If you only want to allow forwarding to certain IP addresses or hostnames, use this directive. The syntax is as follows:
PermitOpen host:port
PermitOpen IPv4_addr:port
PermitOpen [IPv6_addr]:port
AllowTCPForwarding can be used to turn SSH port forwarding on or off or specify what type of SSH port forwarding is permitted. Possible configurations are:
AllowTCPForwarding yes #default setting
AllowTCPForwarding no #prevent all SSH port forwarding
AllowTCPForwarding local #allow only local SSH port forwarding
AllowTCPForwarding remote #allow only remote SSH port forwarding
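These directives can also be scoped to particular users with a Match block; for example (the username is a placeholder):
# /etc/ssh/sshd_config
Match User untrusted-user
    AllowTcpForwarding no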
To see more information about these options, you can check out the man page:
$ man sshd_config
The only real problem that arises with SSH port forwarding is that there is usually a bit of latency. You probably won’t notice this as an issue if you’re doing something minor, like accessing text files or small databases.
The problem becomes more apparent when doing network-intensive activities, especially if you have port forwarding set up as a SOCKS proxy server.
The latency arises because SSH is tunneling TCP over TCP, which is a terribly inefficient way to transfer data and results in slower network speeds.
You could use a VPN to prevent the issue, but if you are determined to stick with SSH tunnels, there is a program called sshuttle that corrects the issue. Ubuntu and Debian-based distributions can install it with apt-get:
$ sudo apt-get install sshuttle
If the package manager on your distribution doesn’t have sshuttle in its repository, you can clone it from GitHub:
$ git clone https://github.com/sshuttle/sshuttle.git
$ cd sshuttle
$ ./setup.py install
Setting up a tunnel with sshuttle is different from the normal ssh command. To set up a tunnel that forwards all traffic (akin to a VPN):
$ sudo sshuttle -r user@remote_ip -x remote_ip 0/0 -vv
Break the connection with a ctrl+c key combination in the terminal. Alternatively, to run the sshuttle command as a daemon, add the -D switch to your command.
Want to make sure that the connection was established and the internet sees you at the new IP address? You can run this curl command:
$ curl ipinfo.io
Original article source at: https://likegeeks.com/
Module to enable rate limit per service in Netflix Zuul.
There are five built-in rate limit approaches:
Note: it is possible to combine Authenticated User, Request Origin, URL, ROLE and Request Method by just adding multiple values to the list.
Note: if you are using Spring Boot version 1.5.x you MUST use Spring Cloud Zuul RateLimit version 1.7.x. Please take a look at Maven Central and pick the latest artifact in this version line.
Add the dependency to your pom.xml:
<dependency>
<groupId>com.marcosbarbero.cloud</groupId>
<artifactId>spring-cloud-zuul-ratelimit</artifactId>
<version>${latest-version}</version>
</dependency>
Add the following dependency according to the chosen data storage:
Redis
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
Consul
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-consul</artifactId>
</dependency>
Spring Data JPA
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
This implementation also requires a database table; below you can find a sample script:
CREATE TABLE rate (
rate_key VARCHAR(255) NOT NULL,
remaining BIGINT,
remaining_quota BIGINT,
reset BIGINT,
expiration TIMESTAMP,
PRIMARY KEY(rate_key)
);
Bucket4j JCache
<dependency>
<groupId>com.github.vladimir-bukhtoyarov</groupId>
<artifactId>bucket4j-core</artifactId>
</dependency>
<dependency>
<groupId>com.github.vladimir-bukhtoyarov</groupId>
<artifactId>bucket4j-jcache</artifactId>
</dependency>
<dependency>
<groupId>javax.cache</groupId>
<artifactId>cache-api</artifactId>
</dependency>
Bucket4j Hazelcast (depends on Bucket4j JCache)
<dependency>
<groupId>com.github.vladimir-bukhtoyarov</groupId>
<artifactId>bucket4j-hazelcast</artifactId>
</dependency>
<dependency>
<groupId>com.hazelcast</groupId>
<artifactId>hazelcast</artifactId>
</dependency>
Bucket4j Infinispan (depends on Bucket4j JCache)
<dependency>
<groupId>com.github.vladimir-bukhtoyarov</groupId>
<artifactId>bucket4j-infinispan</artifactId>
</dependency>
<dependency>
<groupId>org.infinispan</groupId>
<artifactId>infinispan-core</artifactId>
</dependency>
Bucket4j Ignite (depends on Bucket4j JCache)
<dependency>
<groupId>com.github.vladimir-bukhtoyarov</groupId>
<artifactId>bucket4j-ignite</artifactId>
</dependency>
<dependency>
<groupId>org.apache.ignite</groupId>
<artifactId>ignite-core</artifactId>
</dependency>
Sample YAML configuration
zuul:
ratelimit:
key-prefix: your-prefix
enabled: true
repository: REDIS
behind-proxy: true
add-response-headers: true
deny-request:
response-status-code: 404 #default value is 403 (FORBIDDEN)
origins:
- 200.187.10.25
- somedomain.com
default-policy-list: #optional - will apply unless specific policy exists
- limit: 10 #optional - request number limit per refresh interval window
quota: 1000 #optional - request time limit per refresh interval window (in seconds)
refresh-interval: 60 #default value (in seconds)
type: #optional
- user
- origin
- url
- http_method
policy-list:
myServiceId:
- limit: 10 #optional - request number limit per refresh interval window
quota: 1000 #optional - request time limit per refresh interval window (in seconds)
refresh-interval: 60 #default value (in seconds)
type: #optional
- user
- origin
- url
- type: #optional value for each type
- user=anonymous
- origin=somemachine.com
- url=/api #url prefix
- role=user
- http_method=get #case insensitive
- http_header=customHeader
- type:
- url_pattern=/api/*/payment
Sample Properties configuration
zuul.ratelimit.enabled=true
zuul.ratelimit.key-prefix=your-prefix
zuul.ratelimit.repository=REDIS
zuul.ratelimit.behind-proxy=true
zuul.ratelimit.add-response-headers=true
zuul.ratelimit.deny-request.response-status-code=404
zuul.ratelimit.deny-request.origins[0]=200.187.10.25
zuul.ratelimit.deny-request.origins[1]=somedomain.com
zuul.ratelimit.default-policy-list[0].limit=10
zuul.ratelimit.default-policy-list[0].quota=1000
zuul.ratelimit.default-policy-list[0].refresh-interval=60
# Adding multiple rate limit type
zuul.ratelimit.default-policy-list[0].type[0]=user
zuul.ratelimit.default-policy-list[0].type[1]=origin
zuul.ratelimit.default-policy-list[0].type[2]=url
zuul.ratelimit.default-policy-list[0].type[3]=http_method
# Adding the first rate limit policy to "myServiceId"
zuul.ratelimit.policy-list.myServiceId[0].limit=10
zuul.ratelimit.policy-list.myServiceId[0].quota=1000
zuul.ratelimit.policy-list.myServiceId[0].refresh-interval=60
zuul.ratelimit.policy-list.myServiceId[0].type[0]=user
zuul.ratelimit.policy-list.myServiceId[0].type[1]=origin
zuul.ratelimit.policy-list.myServiceId[0].type[2]=url
# Adding the second rate limit policy to "myServiceId"
zuul.ratelimit.policy-list.myServiceId[1].type[0]=user=anonymous
zuul.ratelimit.policy-list.myServiceId[1].type[1]=origin=somemachine.com
zuul.ratelimit.policy-list.myServiceId[1].type[2]=url_pattern=/api/*/payment
zuul.ratelimit.policy-list.myServiceId[1].type[3]=role=user
zuul.ratelimit.policy-list.myServiceId[1].type[4]=http_method=get
zuul.ratelimit.policy-list.myServiceId[1].type[5]=http_header=customHeader
Both 'quota' and 'refresh-interval' can be expressed with Spring Boot’s duration formats:
A regular long representation (using seconds as the default unit)
The standard ISO-8601 format used by java.time.Duration (e.g. PT30M means 30 minutes)
A more readable format where the value and the unit are coupled (e.g. 10s means 10 seconds)
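For illustration, here is the same setting written with each of the three formats (values are arbitrary):
zuul:
  ratelimit:
    default-policy-list:
      - limit: 10
        refresh-interval: 60    # plain long, interpreted as seconds
      # refresh-interval: PT1M  # ISO-8601
      # refresh-interval: 1m    # value coupled with its unit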
There are eight implementations provided:
Implementation | Data Storage |
---|---|
ConsulRateLimiter | Consul |
RedisRateLimiter | Redis |
SpringDataRateLimiter | Spring Data |
Bucket4jJCacheRateLimiter | Bucket4j |
Bucket4jHazelcastRateLimiter | Bucket4j |
Bucket4jIgniteRateLimiter | Bucket4j |
Bucket4jInfinispanRateLimiter | Bucket4j |
Bucket4j implementations require the relevant bean with @Qualifier("RateLimit"):
- JCache: javax.cache.Cache
- Hazelcast: com.hazelcast.map.IMap
- Ignite: org.apache.ignite.IgniteCache
- Infinispan: org.infinispan.functional.ReadWriteMap
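For instance, a hypothetical Hazelcast wiring (the map name and the generic types are assumptions; the @Qualifier is the part the module requires):
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RateLimitConfig {

    // Expose the qualified IMap for the Bucket4j Hazelcast repository.
    // The map name "rate-limit" and String/byte[] generics are assumptions.
    @Bean
    @Qualifier("RateLimit")
    public IMap<String, byte[]> rateLimitMap(HazelcastInstance hazelcast) {
        return hazelcast.getMap("rate-limit");
    }
}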
Property namespace: zuul.ratelimit
Property name | Values | Default Value |
---|---|---|
enabled | true/false | false |
behind-proxy | true/false | false |
response-headers | NONE, STANDARD, VERBOSE | VERBOSE |
key-prefix | String | ${spring.application.name:rate-limit-application} |
repository | CONSUL, REDIS, JPA, BUCKET4J_JCACHE, BUCKET4J_HAZELCAST, BUCKET4J_INFINISPAN, BUCKET4J_IGNITE | - |
deny-request | DenyRequest | - |
default-policy-list | List of Policy | - |
policy-list | Map of Lists of Policy | - |
postFilterOrder | int | FilterConstants.SEND_RESPONSE_FILTER_ORDER - 10 |
preFilterOrder | int | FilterConstants.FORM_BODY_WRAPPER_FILTER_ORDER |
Deny Request properties
Property name | Values | Default Value |
---|---|---|
origins | list of origins to have the access denied | - |
response-status-code | the http status code to be returned on a denied request | 403 (FORBIDDEN) |
Policy properties:
Property name | Values | Default Value |
---|---|---|
limit | number of requests | - |
quota | time of requests | - |
refresh-interval | seconds | 60 |
type | [ORIGIN, USER, URL, URL_PATTERN, ROLE, HTTP_METHOD, HTTP_HEADER] | [] |
breakOnMatch | true/false | false |
This section details how to add custom implementations.
If the application needs to control the key strategy beyond the options offered by the type property, it can do so by creating a custom RateLimitKeyGenerator bean implementation, adding further qualifiers or something entirely different:
@Bean
public RateLimitKeyGenerator ratelimitKeyGenerator(RateLimitProperties properties, RateLimitUtils rateLimitUtils) {
return new DefaultRateLimitKeyGenerator(properties, rateLimitUtils) {
@Override
public String key(HttpServletRequest request, Route route, RateLimitProperties.Policy policy) {
return super.key(request, route, policy) + ":" + request.getMethod();
}
};
}
This framework uses third-party applications to control rate limit access, and those libraries are outside its control. If one of the third-party applications fails, the framework handles the failure in the DefaultRateLimiterErrorHandler class, which logs the error.
If there is a need to handle the errors differently, it can be achieved by defining a custom RateLimiterErrorHandler bean, e.g.:
@Bean
public RateLimiterErrorHandler rateLimitErrorHandler() {
return new DefaultRateLimiterErrorHandler() {
@Override
public void handleSaveError(String key, Exception e) {
// custom code
}
@Override
public void handleFetchError(String key, Exception e) {
// custom code
}
@Override
public void handleError(String msg, Exception e) {
// custom code
}
};
}
If the application needs to be notified when a rate limit access is exceeded, it can listen for the RateLimitExceededEvent event:
@EventListener
public void observe(RateLimitExceededEvent event) {
// custom code
}
Spring Cloud Zuul Rate Limit is released under the non-restrictive Apache 2.0 license and follows a very standard GitHub development process, using the GitHub tracker for issues and merging pull requests into master. If you want to contribute even something trivial, please do not hesitate, but follow the guidelines below.
Download Details:
Author: marcosbarbero
Source Code: https://github.com/marcosbarbero/spring-cloud-zuul-ratelimit
License: Apache-2.0 License