You Must Set These Goals For The Year 2022

2021 has been a year of challenges. All of us have faced many ups and downs, but now it is time to plan something fantastic for 2022. Things are becoming more manageable. What you have to do now is set goals for yourself. Make sure your plans match your interests and are based on ground realities. Do not wait for 2022 to arrive before planning your goals; start thinking about 2022 now. No one wants to spend more years like 2021. There are many advantages to setting goals for your life. A life without a goal is a life without purpose. Your goals help keep you motivated throughout your life. A goal keeps daily life from feeling dull because you have a purpose. Whenever people demotivate you, your goal activates your energy and keeps you focused on your path (McFarland, 2021). 

Without planning and setting goals, it is self-evident that you will get stuck at many points. Proper planning paves your way and helps you anticipate different circumstances. It is essential to build flexibility into your plan: because things change over time, you need multiple routes to your goal. So plan your goal very carefully. By having a goal, you make your present and future valuable. Keep checking whether your goal is still relevant to today's demands; if you find something better, update your goal list. The best thing about a goal is that it improves your work performance. Here are some recommended goals for 2022.

Savings Goal

Last year proved the importance of savings. It is evident now how suddenly unexpected circumstances can arise, and without any savings, such times become hard to survive. To start saving, open an account at a bank. Set an amount that suits your budget and deposit it monthly. Beyond that fixed amount, you can also deposit any spare money at the end of the month. Whenever unusual circumstances arise, these savings will help you get through them. You can also invest this money in a business. 

To Explore the World

In 2020 and 2021, the pandemic caused many problems for travelling. As things are getting better, make travelling one of your goals for 2022. Travel and explore new places; it will help broaden your horizons. Travelling doesn’t always mean going abroad. You can also travel within your own country. There are many reasons behind suggesting a travelling goal. It helps the mind relax and promotes positive thinking. Viewing nature makes you happy. You get a chance to step away from your daily work and from the people you face every day. Going through the same routine daily can drain your energy, while travelling gives you a chance to meet new people. You can travel by car, train, or aeroplane. It helps you restore your energy. Even if you travel once every six months, the happiness of travelling stays in your mind for a long time. Whenever you feel down while working, you can relive the moment.

As recommended by a dissertation writing service, save your moments in the form of pictures and ignite your pleasure. As a student or job holder, you have to work daily and face a hectic routine. As a student, you work on assignments; as a job holder, you deal with different projects. Every human needs a break to release stress. Another reason for suggesting a travelling goal is that it helps you experience different cultures. Even if you travel within your own country, you encounter a variety of cultures. All countries have minorities, and these minorities have their own cultures that differ from the national one. So even without going abroad, you can experience other cultures through travelling. 

Career Goals

Everyone should work on their career. After graduation, the first and foremost thing is to decide on a career path. So for 2022, establishing a career is one of the best goals. If you have the same goal, start planning for it right now. Make a list of companies that need experts in your field. For example, if you are an engineer, list engineering consultancies and see where you would fit as a fresh graduate. Learn more about the vacancies and research the role of each listed position; the company will only hire you if you have the knowledge for that role. Mark important application dates and deadlines on your calendar. Check out the vacancies and note the months in which the companies do new hiring (Monteiro et al., 2021).

Also, if you are interested in managing your own business, start planning it right now. As an inexperienced person, you cannot expect success initially; to generate profit, you need to have command of the work. Start growing your social and professional network and ask for helpful tips and tricks. Weigh the advantages and disadvantages of each step. Start collecting money for investment and all related aspects. Make plans and strategies, and work on the objectives that you set for yourself. 

Health Goals

The years 2020 and 2021 have taught us all the importance of health. Nothing is more important than one’s health; without it, you cannot enjoy life. A sick person only faces negativity during sickness. You should plan to focus on your health before anything else. My health goal for 2022 includes a balance between work and personal life. If you share this goal, plan some short breaks for your mental as well as physical health. You can go for a check-up, say once every month or two.

A Simple Wrapper Around Amplify AppSync Simulator

This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.

Install

npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator

Usage

This plugin relies on your serverless yml file and on the serverless-offline plugin.

plugins:
  - serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
  - serverless-appsync-simulator
  - serverless-offline

Note: Order is important. serverless-appsync-simulator must go before serverless-offline.

To start the simulator, run the following command:

sls offline start

You should see in the logs something like:

...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...

Configuration

Put options under custom.appsync-simulator in your serverless.yml file

| option | default | description |
| ------ | ------- | ----------- |
| apiKey | 0123456789 | When using API_KEY as authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda functions handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function |
| functions | {} | A mapping of external functions, providing an invoke url for each |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local. Otherwise, the port is taken from the dynamodb-local conf |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | You can add every configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values: mysql, postgres |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | *.graphql, *.vtl | Array of glob patterns to watch for hot-reloading. |

Example:

custom:
  appsync-simulator:
    location: '.webpack/service' # use webpack build directory
    dynamoDb:
      endpoint: 'http://my-custom-dynamo:8000'

Hot-reloading

By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? - this is a Serverless Framework limitation). You will need to restart the simulator each time you change your yml files.

Hot-reloading relies on watchman. Make sure it is installed on your system.

You can change the files being watched with the watch option, which is then passed to watchman as the match expression.

e.g.

custom:
  appsync-simulator:
    watch:
      - ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
      - "*.graphql"                                 # => string like this is equivalent to `["match", "*.graphql"]`

Or you can opt out by providing an empty array, or by setting the option to false.
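
For example, to switch hot-reloading off entirely (a minimal sketch; the rest of your config is unchanged):

custom:
  appsync-simulator:
    watch: false # disable file watching and hot-reloading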

Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that instead.

Resource CloudFormation functions resolution

This plugin supports some resource resolution from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other Cfn functions such as Fn::Join, Fn::Sub, etc.

Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported cfn functions, refer to the documentation

Basic usage

You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.

provider:
  environment:
    BUCKET_NAME:
      Ref: MyBucket # resolves to `my-bucket-name`

resources:
  Resources:
    MyDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: myTable
      ...
    MyBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket-name
    ...

# in your appsync config
dataSources:
  - type: AMAZON_DYNAMODB
    name: dynamosource
    config:
      tableName:
        Ref: MyDbTable # resolves to `myTable`

Override (or mock) values

Sometimes, some references cannot be resolved, as they come from an Output from CloudFormation; or you might want to use mocked values in your local environment.

In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options.

  • refMap takes a mapping of resource name to value pairs
  • getAttMap takes a mapping of resource name to attribute/values pairs
  • importValueMap takes a mapping of import name to values pairs

Example:

custom:
  appsync-simulator:
    refMap:
      # Override `MyDbTable` resolution from the previous example.
      MyDbTable: 'mock-myTable'
    getAttMap:
      # define ElasticSearchInstance DomainName
      ElasticSearchInstance:
        DomainEndpoint: 'localhost:9200'
    importValueMap:
      other-service-api-url: 'https://other.api.url.com/graphql'

# in your appsync config
dataSources:
  - type: AMAZON_ELASTICSEARCH
    name: elasticsource
    config:
      # endpoint resolves as 'http://localhost:9200'
      endpoint:
        Fn::Join:
          - ''
          - - https://
            - Fn::GetAtt:
                - ElasticSearchInstance
                - DomainEndpoint

Key-value mock notation

In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name.

This notation can be used with all mocks - refMap, getAttMap and importValueMap

provider:
  environment:
    FINISH_ACTIVITY_FUNCTION_ARN:
      Fn::ImportValue: other-service-api-${self:provider.stage}-url

custom:
  serverless-appsync-simulator:
    importValueMap:
      - key: other-service-api-${self:provider.stage}-url
        value: 'https://other.api.url.com/graphql'

Limitations

This plugin only tries to resolve the following parts of the yml tree:

  • provider.environment
  • functions[*].environment
  • custom.appSync

If you have the need of resolving others, feel free to open an issue and explain your use case.

For now, the supported resources to be automatically resolved by Ref: are:

  • DynamoDB tables
  • S3 Buckets

Feel free to open a PR or an issue to extend them as well.

External functions

When a function is not defined within the current serverless file, you can still call it by providing an invoke url, which should point to a REST method. Make sure you specify "get" or "post" for the method. Default is "get", but you probably want "post".

custom:
  appsync-simulator:
    functions:
      addUser:
        url: http://localhost:3016/2015-03-31/functions/addUser/invocations
        method: post
      addPost:
        url: https://jsonplaceholder.typicode.com/posts
        method: post

Supported Resolver types

This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.

From AWS Amplify:

  • NONE
  • AWS_LAMBDA
  • AMAZON_DYNAMODB
  • PIPELINE

Implemented by this plugin

  • AMAZON_ELASTIC_SEARCH
  • HTTP
  • RELATIONAL_DATABASE

Relational Database

Sample VTL for a create mutation

#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
  #set( $regex = "([a-z])([A-Z]+)")
  #set( $replacement = "$1_$2")
  #set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
  #set( $discard = $cols.add("$toSnake") )
  #if( $util.isBoolean($ctx.args.input[$entry]) )
      #if( $ctx.args.input[$entry] )
        #set( $discard = $vals.add("1") )
      #else
        #set( $discard = $vals.add("0") )
      #end
  #else
      #set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
  #end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
  #set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
  #set( $colStr = "($colStr)" )
#end
{
  "version": "2018-05-29",
  "statements":   ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM    <name-of-table> ORDER BY id DESC LIMIT 1"]
}

Sample VTL for an update mutation

#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
  #set( $cur = $ctx.args.input[$entry] )
  #set( $regex = "([a-z])([A-Z]+)")
  #set( $replacement = "$1_$2")
  #set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
  #if( $util.isBoolean($cur) )
      #if( $cur )
        #set ( $cur = "1" )
      #else
        #set ( $cur = "0" )
      #end
  #end
  #if ( $util.isNullOrEmpty($update) )
      #set($update = "$toSnake$equals'$cur'" )
  #else
      #set($update = "$update,$toSnake$equals'$cur'" )
  #end
#end
{
  "version": "2018-05-29",
  "statements":   ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}

Sample resolver for delete mutation

{
  "version": "2018-05-29",
  "statements":   ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}

Sample mutation response VTL with support for handling AWSDateTime

#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
    #set ($index = $index + 1)
    #if ( $column["typeName"] == "timestamptz" )
        #set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
        #set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
        #set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
        $util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
    #end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
    #set ( $s = $mapKey.split("_") )
    #set ( $camelCase="" )
    #set ( $isFirst=true )
    #foreach($entry in $s)
        #if ( $isFirst )
          #set ( $first = $entry.substring(0,1) )
        #else
          #set ( $first = $entry.substring(0,1).toUpperCase() )
        #end
        #set ( $isFirst=false )
        #set ( $stringLength = $entry.length() )
        #set ( $remaining = $entry.substring(1, $stringLength) )
        #set ( $camelCase = "$camelCase$first$remaining" )
    #end
    $util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)

Using Variable Map

Variable map support is limited and does not differentiate between number and string data types; inject values directly if needed.

null, true, and false values will be escaped properly.

{
  "version": "2018-05-29",
  "statements":   [
    "UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
    "SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
  ],
  variableMap: {
    ":ID": $ctx.args.id,
##    ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
  }
}

Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator 
License: MIT License


Migrating From Jira Server: Guide, Pros, And Cons

February 15, 2022 marked a significant milestone in Atlassian’s Server EOL (End Of Life) roadmap. This was not the final step. We still have two major milestones ahead of us: end of new app sales in Feb 2023, and end of support in Feb 2024. In simpler words, businesses still have enough time to migrate their Jira Server to one of the two available products – Atlassian Cloud or Atlassian DC. But the clock is ticking. 

Jira Cloud VS Data Center

If we were to go by Atlassian numbers, 95% of their new customers choose cloud. 

“About 80% of Fortune 500 companies have an Atlassian Cloud license. More than 90% of new customers choose cloud first.” – Daniel Scott, Product Marketing Director, Tempo

So that’s settled, right? We are migrating from Server to Cloud? And what about the solution fewer people talk about yet many users rely on – Jira DC? 

Both are viable options and your choice will depend greatly on the needs of your business, your available resources, and operational processes. 

Let’s start by taking a look at the functionality offered by Atlassian Cloud and Atlassian DC.

| Feature | Atlassian Cloud | Atlassian Data Center |
| ------- | --------------- | --------------------- |
| Product plans | Multiple plans | One plan |
| Billing | Monthly and annual | Annual only |
| Pricing model | Per user or tiered | Tiered only |
| Support | Varying support levels depending on your plan; Enterprise support coverage is equivalent to Atlassian’s Data Center Premier Support offering | Varying support levels depending on the package: Priority Support or Premier Support (purchased separately) |
| Total Cost of Ownership | TCO includes your subscription fee, plus product administration time | TCO includes your subscription fee and product administration time, plus costs related to infrastructure provisioning or IaaS fees (for example, AWS costs), planned downtime, and the time and resources needed for software upgrades |
| Data encryption services | ✅ | ❌ |
| Data residency services | ✅ | ❌ |
| Audit logging | Organization-level audit logging available via Atlassian Access (Jira Software, Confluence); product-level audit logs (Jira Software, Confluence) | Advanced audit logging |
| Device security | Mobile device management support (Jira Software, Confluence, Jira Service Management); mobile application management (currently on the roadmap) | Mobile device management support (Jira Software, Confluence, Jira Service Management) |
| Content security | ✅ | ❌ |
| Data storage limits | 2 GB (Free); 250 GB (Standard); unlimited storage (Premium and Enterprise) | No limits |
| Performance | Continuous performance updates to improve load times, search responsiveness, and attachments; cloud infrastructure hosted in six geographic regions to reduce latency | Rate limiting; CDN support; Smart Mirrors and mirror farms (Bitbucket) |
| Backup and data disaster recovery | Jira leverages multiple geographically diverse data centers, has a comprehensive backup program, and gains assurance by regularly testing its disaster recovery and business continuity plans; backups are generated daily and retained for 30 days to allow for point-in-time data restoration | ❌ |
| Containerization and orchestration | ✅ | Docker images; Kubernetes support (on the roadmap for now) |
| Change management and upgrades | Atlassian automatically handles software and security upgrades for you; sandbox instance to test changes (Premium and Enterprise); release track options for Premium and Enterprise (Jira Software, Jira Service Management, Confluence) | ❌ |
| Direct access to the database | No direct access to change the database structure, file system, or other server infrastructure; extensive REST APIs for programmatic data access | Direct database access |
| Insights and reporting | Organization and admin insights to track adoption of Atlassian products and evaluate the security of your organization | Data Pipeline for advanced insights; Confluence analytics |

Pros and cons of Jira Cloud

When talking about pros and cons, there’s always a chance that a competitive advantage for some is a dealbreaker for others. That’s why I decided to talk about pros and cons in matching pairs. 

Pro: Scalability is one of the primary reasons businesses are choosing Jira Cloud. DC is technically also scalable, but you’ll need to scale on your own whereas the cloud version allows for the infrastructure to scale with your business. 

Con: Despite the cloud’s ability to grow with your business, there is still a user limit of 35k users. In addition to that, the costs will grow alongside your needs. New users, licenses, storage, and computing power – all come at an additional cost. So, when your organization reaches a certain size, migrating to Jira DC becomes more cost-efficient.

Pro: Jira takes care of maintenance and support for you.

Con: Your business can suffer from unpredicted downtime. And there are certain security risks.  

Pro: Extra bells and whistles: 

  • Sandbox: Sandbox is a safe environment system admins can use to test applications and integrations before rolling them out to the production environment. 
  • Release tracks: Admins can be more flexible with their product releases as they can access batch and control cloud releases. This means they’ll have much more time to test existing configurations and workflows against a new update. 
  • Insight Discovery: More data means more ways you can impact your business or product in a positive, meaningful way. 
  • Team Calendars: This is a handy feature for synchronization and synergy across teams. 

Con: Most of these features are locked behind a paywall and are only available with Premium and Enterprise (or Enterprise-only) licenses, either fully or through added functionality. For example, Release Tracks are only available to Enterprise customers. In addition, the costs will grow as you scale the offering to fit your growing needs. 

Pros and cons of Jira Data Center

I’ll be taking the same approach to talking about the pros and cons as I did when writing about Atlassian Cloud. Pros and cons are paired. 

Pro: Hosting your own system means you can scale horizontally and vertically through additional hardware. Extension of your systems is seamless, and there is no downtime (if you do everything correctly). Lastly, you don’t have to worry about the user limit – there is none. 

Con: While having more control over your systems is great, it implies a dedicated staff of engineers, additional expenses on software licensing, hardware, and physical space. Moreover, seamless extension and 0% downtime are entirely on you.

Pro: Atlassian has updated the DC offering with natively bundled applications such as Advanced Roadmaps, Team Calendars and analytics for Confluence, and Insight Asset Management and Insight Discovery in Jira Service Management DC.

Con: Atlassian has updated their pricing to reflect these changes. And you are still getting fewer “bells and whistles” than Jira Cloud users (as we can see from the feature comparison). 

Pro: You are technically safer, as the system runs on your hardware and is supported by your specialists. Any and all Jira server issues, poor updates, and downtime on Atlassian's side are simply not your concern.

Con: Atlassian offers excellent security options: from data encryption in transit and at rest, to mobile app management, to audit offerings and API token controls. In their absence, your company has to dedicate additional resources to security. 

Pro: Additional benefits from Atlassian, such as the Priority Support bundle (all DC subscriptions have this option), and the Data center loyalty discount (more on that in the pricing section.)

The Pricing

Talking about the pricing of SaaS products is always a challenge, as there are always multiple tiers and various pay-as-you-go features. Barebones Jira Cloud, for instance, is completely free of charge, yet it comes with a series of serious limitations. 

Standard Jira Cloud will cost you an average of $7.50 per user per month while premium cranks that price up to $14.50. The Enterprise plan is billed annually and the cost is determined on a case-by-case basis. You can see the full comparison of Jira Cloud plans here. And you can use this online calculator to learn the cost of ownership in your particular case.

| 50 Users | Standard (Monthly/Annually) | Premium (Monthly/Annually) |
| -------- | --------------------------- | -------------------------- |
| Jira Software | $387.50 / $3,900 | $762.50 / $7,650 |
| Jira Work Management | $250 / $2,500 | ❌ |
| Jira Service Management | $866.25 / $8,650 | $2,138.25 / $21,500 |
| Confluence | $287.50 / $2,900 | $550 / $5,500 |

| 100 Users | Standard (Monthly/Annually) | Premium (Monthly/Annually) |
| --------- | --------------------------- | -------------------------- |
| Jira Software | $775 / $7,750 | $1,525 / $15,250 |
| Jira Work Management | $500 / $5,000 | ❌ |
| Jira Service Management | $1,653.75 / $16,550 | $4,185.75 / $42,000 |
| Confluence | $575 / $5,750 | $1,100 / $11,000 |

| 500 Users | Standard (Monthly/Annually) | Premium (Monthly/Annually) |
| --------- | --------------------------- | -------------------------- |
| Jira Software | $3,140 / $31,500 | $5,107.50 / $51,000 |
| Jira Work Management | $1,850 / $18,500 | ❌ |
| Jira Service Management | $4,541.25 / $45,400 | $11,693.25 / $117,000 |
| Confluence | $2,060 / $20,500 | $3,780 / $37,800 |

Please note that these prices were calculated without any apps included. 

Jira Data Center starts at $42,000 per year and the plan includes up to 500 users. If you are a new client and are not eligible for any discounts*, here’s a chart that should give you an idea as to the cost of ownership of Jira DC. You can find more information regarding your specific case here.

| Users | Commercial Annual Plan | Academic Annual Plan |
| ----- | ---------------------- | -------------------- |
| Jira Software for Data Center | | |
| 1-500 | USD 42,000 | USD 21,000 |
| 501-1000 | USD 72,000 | USD 36,000 |
| 1001-2000 | USD 120,000 | USD 60,000 |
| Confluence for Data Center | | |
| 1-500 | USD 27,000 | USD 13,500 |
| 501-1000 | USD 48,000 | USD 24,000 |
| 1001-2000 | USD 84,000 | USD 42,000 |
| Bitbucket for Data Center | | |
| 1-25 | USD 2,300 | USD 1,150 |
| 26-50 | USD 4,200 | USD 2,100 |
| 51-100 | USD 7,600 | USD 3,800 |
| Jira Service Management for Data Center | | |
| 1-50 | USD 17,200 | USD 8,600 |
| 51-100 | USD 28,600 | USD 14,300 |
| 101-250 | USD 51,500 | USD 25,750 |

*Discounts:

  • Centralized per-user licensing allows users to access all enterprise instances with a single Enterprise license.
  • There’s an option for dual licensing for users who purchase an annual cloud subscription with 1,001 or more users. In this case, Atlassian extends your existing server maintenance or Data Center subscription for up to one year at a 100% discount.
  • There are certain discounts for apps depending on your partnership level.
  • Depending on your situation, you may qualify for several Jira Data Center discount programs.

What should be your User Migration strategy?

Originally, there were several migration methods: Jira Cloud Migration Assistant, Jira Cloud Site Import, and there was an option to migrate via CSV export (though Jira actively discourages you from using this method). However, Jira’s team has focused their efforts on improving the Migration Assistant and have chosen to discontinue Cloud Site Import support.

Thanks to the broadened functionality of the assistant, it is now the go-to method for migration, with just one exception: if you are migrating over 1,000 users and you absolutely need to migrate Advanced Roadmaps, you’ll need to rely on Site Import. At least for now, as Jira is actively working on implementing this feature in the assistant.

Here’s a quick comparison of the options and their limitations.

| Method | Features | Limitations |
| ------ | -------- | ----------- |
| Cloud Migration Assistant | App migration; existing data on a Cloud site is not overwritten; you choose the projects, users, and groups you want to migrate; Jira Service Management customer account migration; better UI to guide you through the migration; potential migration errors are displayed in advance; migration can be done in phases, reducing the downtime; pre- and post-migration reports | You must be on a supported self-managed version of Jira |
| Site Export | Can migrate Advanced Roadmaps | App data is not migrated; migration overrides existing data on the Cloud site; separate user import; users from external directories are not migrated; no choice of the data you want or don’t want migrated; attachments need to be split into chunks of up to 5 GB; higher risk of downtime due to the “all or nothing” approach; you must be on a supported self-managed version of Jira |

Pro tip: If you have a large base of users (above 2000), migrate them before you migrate projects and spaces. This way, you will not disrupt the workflow as users are still working on Server and the latter migration of data will take less time. 

How to migrate to Jira Cloud

Now that we have settled on one particular offering based on available pricing models as well as the pros and the cons that matter the most to your organization, let’s talk about the “how”. 

How does one migrate from Jira Server to Jira Cloud?

Pre-migration checklist

Jira’s Cloud Migration Assistant is a handy tool. It will automatically review your data for common errors. But it is incapable of doing all of the work for you. That’s why we – and Atlassian for that matter – recommend creating a pre-migration checklist.   

Smart Checklist will help you craft an actionable, context-rich checklist directly inside a Jira ticket. This way, none of the tasks will be missed, lost, or abandoned. 

Below is an example of what your migration checklist will look like in Jira. 

Feel free to copy the code and paste it into your Smart Checklist editor and you’ll have the checklist at the ready. 

# Create a user migration plan #must
> Please keep in mind that Jira Cloud Migration Assistant migrates all users and groups as well as users and groups related to selected projects
- Sync your user base
- Verify synchronization
- External users sync verification
- Active external directory verification
## Check your Jira Server version #must
- Verify via user interface or Support Zip Product Version Verification
> Jira Migration Assistant will not work unless Jira is running on a supported version
## Fix any duplicate email addresses #must
- Verify using SQL (see the sample query after this checklist)
> Duplicate email addresses are not supported by Jira Cloud and therefore can't be migrated with the Jira Cloud Migration Assistant. To avoid errors, you should find and fix any duplicate email addresses before migration. If user information is managed in an LDAP Server, you will need to update emails there and sync with Jira before the migration. If user information is managed locally, you can fix them through the Jira Server or Data Center user interface.
## Make sure you have the necessary permissions #must
- System Admin global permissions on the Server instance
- Exists in the target Cloud site
- Site Administrator Permission in the cloud
## Check for conflicts with group names #must
- Make sure that the groups in your Cloud Site don't have the same names as groups in Server
> Unless you are actively trying to merge them
- Delete or update add-on users so as not to cause migration issues
- Verify via SQL
## Update firewall allowance rules #must
- None of the domains should be blocked by firewall or proxy
## Find a way to migrate apps #must
- Contact app vendors
## Check public access settings #must
- Projects
- Filters
- Boards
- Dashboards
## Review server setup #must
- At least 4 GB heap allocation
- Open files limit review
- Verify via support zip
## Check Server timezone #must for merging Cloud sites
- Switch to UTC if using any other timezone
> Add a system flag to the Jira Server instance -Duser.timezone=UTC as outlined in this article about updating documentation to include timezone details.
## Fix any duplicate shared configuration
## Storage limits
## Prepare the server instance
- Check data status
- All fields have value and are not null
- Any archived projects you wish to migrate are activated
## Prepare your cloud site
- Same Jira products enabled
- Same language
- User migration strategy
## Data backup
- Backup Jira Server site
- Backup Cloud site
## Run a test migration
- Done
## Notify Jira support
- Get in touch with Jira migration support
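
As an illustration, a query along these lines can surface duplicate emails before you migrate. This is a hypothetical sketch that assumes user information is managed locally in Jira's standard cwd_user table; adjust it to your actual user directory schema.

-- Find email addresses shared by more than one user account
SELECT lower_email_address, COUNT(*) AS accounts
FROM cwd_user
GROUP BY lower_email_address
HAVING COUNT(*) > 1;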

Use backups

On the one hand, having all of your Jira products on a server may seem like a backup in and of itself. On the other hand, there are data migration best practices we should follow even if it’s just a precaution. No one has ever felt sorry for their data being too safe. 

In addition, there are certain types of migration errors that can be resolved much faster with having a backup at hand. 

  1. Jira Server Database backup: this step creates a DB backup in an XML format.
    1. Log in with Jira System Admin permissions
    2. Go to System -> Import and Export -> Backup Manager -> Backup for server.
    3. Click the Create backup for server button. 
    4. Type in the name for your backup. 
    5. Jira will create a zipped XML file and notify you once the backup is ready. 

  2. Jira Cloud Backup: This backup also saves your data in an XML format. The process is quite similar to creating a Jira Server backup, with the only difference taking place on the Backups page.
    1. Select the option to save your attachments, logos, and avatars.
    2. Click on the Create backup button. 

  3. As you can see, the Cloud backup includes the option to save attachments, avatars, and logos. This step should be done manually when backing up Server data:
    1. Create a Zip archive for this data
    2. Make sure it follows the structure suggested by Atlassian

Migrating your Jira instance to the cloud via the Jira Migration Assistant

Jira Cloud Migration Assistant is a free add-on Atlassian recommends using when migrating to the cloud. It accesses and evaluates your apps and helps migrate multiple projects. 

Overall, the migration assistant offers a more stable and reliable migration experience. It automatically checks for certain errors. It makes sure all users have unique and valid emails, and makes sure that none of the project names and keys conflict with one another. 

This is a step-by-step guide for importing your Jira Server data backup file into Jira Cloud.

  1. Log into Jira Cloud with admin permissions
  2. Go to System -> Import and Export -> External System Import
  3. Click on the Jira Server import option

  4. Select the backup Zip you have created 
  5. Jira will check the file for errors and present you with two options: enable or disable outgoing mail. Don’t worry, you will be able to change this setting after the migration process is complete. 
  6. Then you will be presented with an option to merge Jira Server and Jira Cloud users
    1. Choosing overwrite will replace the users with users from the imported files
    2. The merge option will merge groups with the same name
    3. Lastly, you can select the third option if you are migrating users via Jira’s assistant
  7. Run the import

How do you migrate Jira Server into Jira DC?

Before we can proceed with the migration process, please make sure you meet the following prerequisites:

  1. Make sure you are installing Jira on one of the supported platforms. Atlassian has a list of supported platforms for Jira 9.1.
  2. Make sure the applications you are using are compatible with Jira DC. You will be required to switch to Data Center-compatible versions of your applications (where available). 
  3. Make sure you meet the necessary software and hardware requirements:
    1. You have a DC license
    2. You are using a supported database, OS, and Java version
    3. You are using OAuth authentication if your application links to other Atlassian products

Once you are certain you are ready to migrate your Jira Server to Jira Data Center, you can proceed with an installation that’s much simpler than one would expect.

  1. Upgrade your apps to be compatible with Jira DC
  2. Go to Administration -> Applications -> Versions and licenses
  3. Enter your Jira DC License Key
  4. Restart Jira

That’s it. You are all set. Well, unless your organization has specific needs such as continuous uptime, performance under heavy loads, and scalability, in which case you will need to set up a server cluster. You can find out more about setting up server clusters in this guide.  

Career Goal Mapping Course | The Beginner's Guide to Goal Setting | Simpliv

Description
So, here we are again: another year, another opportunity to DO more, BE more, HAVE more. But let’s look back at last year (and the year before, and the year before that): did you, or have you, managed to achieve any of the goals on your list? Do you even MAKE “lists”, or do you just kind of wait for “life” to happen to you? Has it been ‘happening’ to you in the way that you want?

This course is for all of those people out there who want to make a CHANGE this year! OR who want to make a difference!! Now, ALTHOUGH I keep saying ‘this year’ - this is because this course was created on 1st Jan 2017 - your ‘year’ can begin at any time. 1st June, 1st September, on your birthday - it doesn’t really matter. All that matters is that you MARK THIS DAY as the day that you turned your life around and everything started to look up!

Am I a motivation coach and speaker? No. Am I here to tell you how rubbish you are and to promise that I have the answer to all of life’s mysteries and ills? No. I am simply someone who believes that LIFE is about CHOICES. We all have 24 hours in a day, and how we choose to SPEND those hours, minutes, and seconds is how we came to be in the position that we are in today. Maybe it was deliberate, or maybe you’ve kind of just wandered and floated up to this point. What I can assure you is that this course is all about the science / art of intention and of DELIBERATE CREATION. Together, you and I are going to create the PLAN for the next 12 months ahead. And then I’m going to show you how to break this plan down, right into day-to-day actions, that will take you in the direction that you want to go in.

No more wandering about. No more pontificating. No more procrastinating! No more ‘thinking’ - this is all about DOING. Have you ever looked at someone and been envious? I want their life! How do they do that? What are they DOING that I’m not?? Could it be that they have a master plan that they’re following? Could it be that they have tuned into what they want, and set about going to get it? I remember when I was 16, one of the first waitressing jobs I had was with a company called Peoples Network UK. The lady who ran it (Rita) said to me: “You have to grab life by the balls Lisa, and shake it for all it’s got!!!” About 1 month later, she was dead. A tragic drink-driving car accident. But those words never left me. All you’ve got to do is just GRAB LIFE by the balls!!! And I’m proud to say, almost 20 years later, I’m doing just that!

The methodology I’m about to lay out for you was the reason 2015 became ‘the year of travel’. I went to around 10 countries that year. I set the intention and off I went. 2016 was the year of completion - financial results. This year, 2018, will be the year of relationships. Just watch this space. 2018 is numerologically a year of relationships: 2 + 0 + 1 + 8 = 11 = 1 + 1 = 2. Look up life path number 2. So, I invite you, my friend and student, to join me on a journey whereby, together, we reflect, and then we set the intention - and make the next 12 months your most successful EVER!

Note - when your life starts to change and everyone wonders what happened to you?? Please share this course with them! Thank-you in advance!

This is not new-age science or mysticism. This is solid, tangible, measurable, life-changing material, which you can use over and over again to get the results you want, and not just dream about.

Who is the target audience?

Progressive people, self-starters, those who want to get somewhere in life - achievers!
Basic knowledge
Students will need a glass of red wine and a nice quiet place for 3-4 hours to seriously think about their life - and where they want it to go
What will you learn
Create and manifest the best ever year of their life - a process they’ll be able to repeat at intervals (annually, monthly, seasonally), and just create the life they know, want and deserve!

Serverless APIGateway Service Proxy

This Serverless Framework plugin supports the AWS service proxy integration feature of API Gateway. You can directly connect API Gateway to AWS services without Lambda.

Install

Run serverless plugin install in your Serverless project.

serverless plugin install -n serverless-apigateway-service-proxy

Supported AWS services

Here is the list of services this plugin supports for now. It will expand to other services in the future. Please open a pull request if you are interested in one that is missing.

  • Kinesis Streams
  • SQS
  • S3
  • SNS
  • DynamoDB
  • EventBridge

How to use

Define settings of the AWS services you want to integrate under custom > apiGatewayServiceProxies and run serverless deploy.

Kinesis

Sample syntax for Kinesis proxy in serverless.yml.

custom:
  apiGatewayServiceProxies:
    - kinesis: # partition key is set to the API Gateway request id by default
        path: /kinesis
        method: post
        streamName: { Ref: 'YourStream' }
        cors: true
    - kinesis:
        path: /kinesis
        method: post
        partitionKey: 'hardcodedkey' # use a static partition key
        streamName: { Ref: 'YourStream' }
        cors: true
    - kinesis:
        path: /kinesis/{myKey} # use path parameter
        method: post
        partitionKey:
          pathParam: myKey
        streamName: { Ref: 'YourStream' }
        cors: true
    - kinesis:
        path: /kinesis
        method: post
        partitionKey:
          bodyParam: data.myKey # use body parameter
        streamName: { Ref: 'YourStream' }
        cors: true
    - kinesis:
        path: /kinesis
        method: post
        partitionKey:
          queryStringParam: myKey # use query string param
        streamName: { Ref: 'YourStream' }
        cors: true
    - kinesis: # PutRecords
        path: /kinesis
        method: post
        action: PutRecords
        streamName: { Ref: 'YourStream' }
        cors: true

resources:
  Resources:
    YourStream:
      Type: AWS::Kinesis::Stream
      Properties:
        ShardCount: 1

Sample request after deploying.

curl https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/kinesis -d '{"message": "some data"}'  -H 'Content-Type:application/json'

SQS

Sample syntax for SQS proxy in serverless.yml.

custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /sqs
        method: post
        queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
        cors: true

resources:
  Resources:
    SQSQueue:
      Type: 'AWS::SQS::Queue'

Sample request after deploying.

curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/sqs -d '{"message": "testtest"}' -H 'Content-Type:application/json'

Customizing request parameters

If you'd like to pass additional data to the integration request, you can do so by including your custom API Gateway request parameters in serverless.yml like so:

custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /queue
        method: post
        queueName: !GetAtt MyQueue.QueueName
        cors: true

        requestParameters:
          'integration.request.querystring.MessageAttribute.1.Name': "'cognitoIdentityId'"
          'integration.request.querystring.MessageAttribute.1.Value.StringValue': 'context.identity.cognitoIdentityId'
          'integration.request.querystring.MessageAttribute.1.Value.DataType': "'String'"
          'integration.request.querystring.MessageAttribute.2.Name': "'cognitoAuthenticationProvider'"
          'integration.request.querystring.MessageAttribute.2.Value.StringValue': 'context.identity.cognitoAuthenticationProvider'
          'integration.request.querystring.MessageAttribute.2.Value.DataType': "'String'"

The alternative way to pass MessageAttribute parameters is via a request body mapping template.

Customizing request body mapping templates

See the SQS section under Customizing request body mapping templates

Customizing responses

Simplified response template customization

You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be application/json.

custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /queue
        method: post
        queueName: !GetAtt MyQueue.QueueName
        cors: true
        response:
          template:
            # `success` is used when the integration response is 200
            success: |-
              { "message: "accepted" }
            # `clientError` is used when the integration response is 400
            clientError: |-
              { "message": "there is an error in your request" }
            # `serverError` is used when the integration response is 500
            serverError: |-
              { "message": "there was an error handling your request" }

Full response customization

If you want more control over the integration response, you can provide an array of objects for the response value:

custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /queue
        method: post
        queueName: !GetAtt MyQueue.QueueName
        cors: true
        response:
          - statusCode: 200
            selectionPattern: '2\\d{2}'
            responseParameters: {}
            responseTemplates:
              application/json: |-
                { "message": "accepted" }

The object keys correspond to the API Gateway integration response object.

S3

Sample syntax for S3 proxy in serverless.yml.

custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3
        method: post
        action: PutObject
        bucket:
          Ref: S3Bucket
        key: static-key.json # use static key
        cors: true

    - s3:
        path: /s3/{myKey} # use path param
        method: get
        action: GetObject
        bucket:
          Ref: S3Bucket
        key:
          pathParam: myKey
        cors: true

    - s3:
        path: /s3
        method: delete
        action: DeleteObject
        bucket:
          Ref: S3Bucket
        key:
          queryStringParam: key # use query string param
        cors: true

resources:
  Resources:
    S3Bucket:
      Type: 'AWS::S3::Bucket'

Sample request after deploying.

curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/s3 -d '{"message": "testtest"}' -H 'Content-Type:application/json'

Customizing request parameters

Similar to the SQS support, you can customize the default request parameters in serverless.yml like so:

custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3
        method: post
        action: PutObject
        bucket:
          Ref: S3Bucket
        cors: true

        requestParameters:
          # if requestParameters has a 'integration.request.path.object' property you should remove the key setting
          'integration.request.path.object': 'context.requestId'
          'integration.request.header.cache-control': "'public, max-age=31536000, immutable'"

Customizing request templates

If you'd like to use custom API Gateway request templates, you can do so like this:

custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3
        method: get
        action: GetObject
        bucket:
          Ref: S3Bucket
        request:
          template:
            application/json: |
              #set ($specialStuff = $context.request.header.x-special)
              #set ($context.requestOverride.path.object = $specialStuff.replaceAll('_', '-'))
              {}

Note that if the client does not provide a Content-Type header in the request, API Gateway defaults to application/json.

Customize the Path Override in API Gateway

This customization parameter lets the user set a custom Path Override in API Gateway other than the default {bucket}/{object}. The parameter is optional and, if not set, falls back to {bucket}/{object}. The Path Override will automatically add {bucket}/ in front.

Please keep in mind that key or path.object still needs to be set at the moment (this may be made optional later on).

Usage (with two path parameters, folder and file, and a fixed file extension):

custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3/{folder}/{file}
        method: get
        action: GetObject
        pathOverride: '{folder}/{file}.xml'
        bucket:
          Ref: S3Bucket
        cors: true

        requestParameters:
          # if requestParameters has a 'integration.request.path.object' property you should remove the key setting
          'integration.request.path.folder': 'method.request.path.folder'
          'integration.request.path.file': 'method.request.path.file'
          'integration.request.path.object': 'context.requestId'
          'integration.request.header.cache-control': "'public, max-age=31536000, immutable'"

This will result in API Gateway setting the Path Override attribute to {bucket}/{folder}/{file}.xml. So, for example, if you navigate to the API Gateway endpoint /language/en, it will fetch the file in S3 from {bucket}/language/en.xml.

Using greedy paths for deeper folders

The aforementioned example can also be shortened with a greedy approach. Thanks to @taylorreece for mentioning this.

custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3/{myPath+}
        method: get
        action: GetObject
        pathOverride: '{myPath}.xml'
        bucket:
          Ref: S3Bucket
        cors: true

        requestParameters:
          # if requestParameters has a 'integration.request.path.object' property you should remove the key setting
          'integration.request.path.myPath': 'method.request.path.myPath'
          'integration.request.path.object': 'context.requestId'
          'integration.request.header.cache-control': "'public, max-age=31536000, immutable'"

This will translate for example /s3/a/b/c to a/b/c.xml
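
A sample request after deploying might look like this (hypothetical endpoint, matching the greedy path configuration above):

curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/s3/a/b/c

This would fetch {bucket}/a/b/c.xml from S3.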

Customizing responses

You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be application/json.

custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3
        method: post
        action: PutObject
        bucket:
          Ref: S3Bucket
        key: static-key.json
        response:
          template:
            # `success` is used when the integration response is 200
            success: |-
              { "message: "accepted" }
            # `clientError` is used when the integration response is 400
            clientError: |-
              { "message": "there is an error in your request" }
            # `serverError` is used when the integration response is 500
            serverError: |-
              { "message": "there was an error handling your request" }

SNS

Sample syntax for SNS proxy in serverless.yml.

custom:
  apiGatewayServiceProxies:
    - sns:
        path: /sns
        method: post
        topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
        cors: true

resources:
  Resources:
    SNSTopic:
      Type: AWS::SNS::Topic

Sample request after deploying.

curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/sns -d '{"message": "testtest"}' -H 'Content-Type:application/json'

Customizing responses

Simplified response template customization

You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be application/json.

custom:
  apiGatewayServiceProxies:
    - sns:
        path: /sns
        method: post
        topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
        cors: true
        response:
          template:
            # `success` is used when the integration response is 200
            success: |-
              { "message: "accepted" }
            # `clientError` is used when the integration response is 400
            clientError: |-
              { "message": "there is an error in your request" }
            # `serverError` is used when the integration response is 500
            serverError: |-
              { "message": "there was an error handling your request" }

Full response customization

If you want more control over the integration response, you can provide an array of objects for the response value:

custom:
  apiGatewayServiceProxies:
    - sns:
        path: /sns
        method: post
        topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
        cors: true
        response:
          - statusCode: 200
            selectionPattern: '2\d{2}'
            responseParameters: {}
            responseTemplates:
              application/json: |-
                { "message": "accepted" }

The object keys correspond to the API Gateway integration response object.

Content Handling and Pass Through Behaviour customization

If you want to work with binary data, you can specify contentHandling and passThrough inside the request object.

custom:
  apiGatewayServiceProxies:
    - sns:
        path: /sns
        method: post
        topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
        request:
          contentHandling: CONVERT_TO_TEXT
          passThrough: WHEN_NO_TEMPLATES

The allowed values correspond to the API Gateway Method integration options for ContentHandling and PassthroughBehavior

DynamoDB

Sample syntax for DynamoDB proxy in serverless.yml. Currently, the supported DynamoDB Operations are PutItem, GetItem and DeleteItem.

custom:
  apiGatewayServiceProxies:
    - dynamodb:
        path: /dynamodb/{id}/{sort}
        method: put
        tableName: { Ref: 'YourTable' }
        hashKey: # set pathParam or queryStringParam as a partitionkey.
          pathParam: id
          attributeType: S
        rangeKey: # required if also using sort key. set pathParam or queryStringParam.
          pathParam: sort
          attributeType: S
        action: PutItem # specify the action you want for the table
        condition: attribute_not_exists(Id) # optional Condition Expressions parameter for the table
        cors: true
    - dynamodb:
        path: /dynamodb
        method: get
        tableName: { Ref: 'YourTable' }
        hashKey:
          queryStringParam: id # use query string parameter
          attributeType: S
        rangeKey:
          queryStringParam: sort
          attributeType: S
        action: GetItem
        cors: true
    - dynamodb:
        path: /dynamodb/{id}
        method: delete
        tableName: { Ref: 'YourTable' }
        hashKey:
          pathParam: id
          attributeType: S
        action: DeleteItem
        cors: true

resources:
  Resources:
    YourTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: YourTable
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
          - AttributeName: sort
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
          - AttributeName: sort
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1

Sample request after deploying.

curl -XPUT https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/dynamodb/<hashKey>/<sortkey> \
 -d '{"name":{"S":"john"},"address":{"S":"xxxxx"}}' \
 -H 'Content-Type:application/json'

EventBridge

Sample syntax for EventBridge proxy in serverless.yml.

custom:
  apiGatewayServiceProxies:
    - eventbridge:  # source and detailType are hardcoded; detail defaults to POST body
        path: /eventbridge
        method: post
        source: 'hardcoded_source'
        detailType: 'hardcoded_detailType'
        eventBusName: { Ref: 'YourBus' }
        cors: true
    - eventbridge:  # source and detailType as path parameters
        path: /eventbridge/{detailTypeKey}/{sourceKey}
        method: post
        detailType:
          pathParam: detailTypeKey
        source:
          pathParam: sourceKey
        eventBusName: { Ref: 'YourBus' }
        cors: true
    - eventbridge:  # source, detail, and detailType as body parameters
        path: /eventbridge/{detailTypeKey}/{sourceKey}
        method: post
        detailType:
          bodyParam: data.detailType
        source:
          bodyParam: data.source
        detail:
          bodyParam: data.detail
        eventBusName: { Ref: 'YourBus' }
        cors: true

resources:
  Resources:
    YourBus:
      Type: AWS::Events::EventBus
      Properties:
        Name: YourEventBus

Sample request after deploying.

curl https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/eventbridge -d '{"message": "some data"}'  -H 'Content-Type:application/json'
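
For the third configuration above, where source, detailType, and detail are read from the request body, the payload must carry those fields under the data key. A sketch with placeholder values:

curl https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/eventbridge/anyDetailType/anySource \
 -d '{"data": {"detailType": "my.detail.type", "source": "my.source", "detail": {"message": "some data"}}}' \
 -H 'Content-Type:application/json'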

Common API Gateway features

Enabling CORS

To set CORS configurations for your HTTP endpoints, simply modify your event configurations as follows:

custom:
  apiGatewayServiceProxies:
    - kinesis:
        path: /kinesis
        method: post
        streamName: { Ref: 'YourStream' }
        cors: true

Setting cors to true assumes a default configuration which is equivalent to:

custom:
  apiGatewayServiceProxies:
    - kinesis:
        path: /kinesis
        method: post
        streamName: { Ref: 'YourStream' }
        cors:
          origin: '*'
          headers:
            - Content-Type
            - X-Amz-Date
            - Authorization
            - X-Api-Key
            - X-Amz-Security-Token
            - X-Amz-User-Agent
          allowCredentials: false

Configuring the cors property sets the Access-Control-Allow-Origin, Access-Control-Allow-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Credentials headers in the CORS preflight response. To enable the Access-Control-Max-Age preflight response header, set the maxAge property in the cors object:

custom:
  apiGatewayServiceProxies:
    - kinesis:
        path: /kinesis
        method: post
        streamName: { Ref: 'YourStream' }
        cors:
          origin: '*'
          maxAge: 86400

If you are using CloudFront or another CDN in front of your API Gateway, you may want to set up a Cache-Control header so that OPTIONS requests can be cached, avoiding the additional hop.

To enable the Cache-Control header on preflight response, set the cacheControl property in the cors object:

custom:
  apiGatewayServiceProxies:
    - kinesis:
        path: /kinesis
        method: post
        streamName: { Ref: 'YourStream' }
        cors:
          origin: '*'
          headers:
            - Content-Type
            - X-Amz-Date
            - Authorization
            - X-Api-Key
            - X-Amz-Security-Token
            - X-Amz-User-Agent
          allowCredentials: false
          cacheControl: 'max-age=600, s-maxage=600, proxy-revalidate' # Caches on browser and proxy for 10 minutes and doesn't allow the proxy to serve out-of-date content

Adding Authorization

You can pass in any supported authorization type:

custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /sqs
        method: post
        queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
        cors: true

        # optional - defaults to 'NONE'
        authorizationType: 'AWS_IAM' # can be one of ['NONE', 'AWS_IAM', 'CUSTOM', 'COGNITO_USER_POOLS']

        # when using 'CUSTOM' authorization type, one should specify authorizerId
        # authorizerId: { Ref: 'AuthorizerLogicalId' }
        # when using 'COGNITO_USER_POOLS' authorization type, one can specify a list of authorization scopes
        # authorizationScopes: ['scope1','scope2']

resources:
  Resources:
    SQSQueue:
      Type: 'AWS::SQS::Queue'

Source: AWS::ApiGateway::Method docs

Enabling API Token Authentication

You can indicate whether the method requires clients to submit a valid API key by using the private flag:

custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /sqs
        method: post
        queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
        cors: true
        private: true

resources:
  Resources:
    SQSQueue:
      Type: 'AWS::SQS::Queue'

which is the same syntax used in the Serverless Framework.

Source: Serverless: Setting API keys for your Rest API

Source: AWS::ApiGateway::Method docs
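
After deploying with private: true, clients must send a valid API key with each request, which API Gateway expects in the x-api-key header. A minimal sketch (the URL and key value are placeholders):

curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/sqs \
 -d '{"message": "testtest"}' \
 -H 'Content-Type:application/json' \
 -H 'x-api-key: your-api-key-value'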

Using a Custom IAM Role

By default, the plugin will generate a role with the required permissions for each service type that is configured.

You can configure your own role by setting the roleArn attribute:

custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /sqs
        method: post
        queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
        cors: true
        roleArn: # Optional. A default role is created when not configured
          Fn::GetAtt: [CustomS3Role, Arn]

resources:
  Resources:
    SQSQueue:
      Type: 'AWS::SQS::Queue'
    CustomS3Role:
      # Custom Role definition
      Type: 'AWS::IAM::Role'

Customizing API Gateway parameters

The plugin allows one to specify which parameters the API Gateway method accepts.

A common use case is to pass custom data to the integration request:

custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /sqs
        method: post
        queueName: { 'Fn::GetAtt': ['SqsQueue', 'QueueName'] }
        cors: true
        acceptParameters:
          'method.request.header.Custom-Header': true
        requestParameters:
          'integration.request.querystring.MessageAttribute.1.Name': "'custom-Header'"
          'integration.request.querystring.MessageAttribute.1.Value.StringValue': 'method.request.header.Custom-Header'
          'integration.request.querystring.MessageAttribute.1.Value.DataType': "'String'"
resources:
  Resources:
    SqsQueue:
      Type: 'AWS::SQS::Queue'

Any published SQS message will have the Custom-Header value added as a message attribute.
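
For example, a request like the following sketch (the URL and header value are placeholders) would produce an SQS message carrying a custom-Header message attribute set to some-value:

curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/sqs \
 -d '{"message": "testtest"}' \
 -H 'Content-Type:application/json' \
 -H 'Custom-Header: some-value'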

Customizing request body mapping templates

Kinesis

If you'd like to add content types or customize the default templates, you can do so by including your custom API Gateway request mapping template in serverless.yml like so:

# Required for using Fn::Sub
plugins:
  - serverless-cloudformation-sub-variables

custom:
  apiGatewayServiceProxies:
    - kinesis:
        path: /kinesis
        method: post
        streamName: { Ref: 'MyStream' }
        request:
          template:
            text/plain:
              Fn::Sub:
                - |
                  #set($msgBody = $util.parseJson($input.body))
                  #set($msgId = $msgBody.MessageId)
                  {
                      "Data": "$util.base64Encode($input.body)",
                      "PartitionKey": "$msgId",
                      "StreamName": "#{MyStreamArn}"
                  }
                - MyStreamArn:
                    Fn::GetAtt: [MyStream, Arn]

It is important that the mapping template returns a valid application/json string.

Source: How to connect SNS to Kinesis for cross-account delivery via API Gateway

SQS

Customizing SQS request templates requires all requests to use an application/x-www-form-urlencoded style body. The plugin sets the Content-Type header to application/x-www-form-urlencoded for you, but API Gateway still looks for the template under the application/json request template type, so that is where you need to configure your request body in serverless.yml:

custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /{version}/event/receiver
        method: post
        queueName: { 'Fn::GetAtt': ['SqsQueue', 'QueueName'] }
        request:
          template:
            application/json: |-
              #set ($body = $util.parseJson($input.body))
              Action=SendMessage##
              &MessageGroupId=$util.urlEncode($body.event_type)##
              &MessageDeduplicationId=$util.urlEncode($body.event_id)##
              &MessageAttribute.1.Name=$util.urlEncode("X-Custom-Signature")##
              &MessageAttribute.1.Value.DataType=String##
              &MessageAttribute.1.Value.StringValue=$util.urlEncode($input.params("X-Custom-Signature"))##
              &MessageBody=$util.urlEncode($input.body)

Note that the ## at the end of each line is an empty comment. In VTL this has the effect of stripping the newline from the end of the line (as it is commented out), which makes API Gateway read all the lines in the template as one line.

Be careful when mixing additional requestParameters into your SQS endpoint as you may overwrite the integration.request.header.Content-Type and stop the request template from being parsed correctly. You may also unintentionally create conflicts between parameters passed using requestParameters and those in your request template. Typically you should only use the request template if you need to manipulate the incoming request body in some way.

Your custom template must also set the Action and MessageBody parameters, as these will not be added for you by the plugin.

When using a custom request body, headers sent by a client will no longer be passed through to the SQS queue (PassthroughBehavior is automatically set to NEVER). You will need to pass through headers sent by the client explicitly in the request body. Also, any custom querystring parameters in the requestParameters array will be ignored. These also need to be added via the custom request body.
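
With the template above, a client request might look like the following sketch; the URL, version segment, signature, and body fields are placeholders chosen to match the $body.event_type, $body.event_id, and X-Custom-Signature references in the template:

curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/v1/event/receiver \
 -d '{"event_type": "order_created", "event_id": "12345", "payload": {"sku": "abc"}}' \
 -H 'Content-Type:application/json' \
 -H 'X-Custom-Signature: sha256=deadbeef'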

SNS

Similar to the Kinesis support, you can customize the default request mapping templates in serverless.yml like so:

# Required for using Fn::Sub
plugins:
  - serverless-cloudformation-sub-variables

custom:
  apiGatewayServiceProxies:
    - sns:
        path: /sns
        method: post
        topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
        request:
          template:
            application/json:
              Fn::Sub:
                - "Action=Publish&Message=$util.urlEncode('This is a fixed message')&TopicArn=$util.urlEncode('#{MyTopicArn}')"
                - MyTopicArn: { Ref: SNSTopic }

It is important that the mapping template returns a valid application/x-www-form-urlencoded string.

Source: Connect AWS API Gateway directly to SNS using a service integration

Custom response body mapping templates

You can customize the response body by providing mapping templates for success, server errors (5xx) and client errors (4xx).

Templates must be in JSON format. If a template isn't provided, the integration response will be returned as-is to the client.

Kinesis Example

custom:
  apiGatewayServiceProxies:
    - kinesis:
        path: /kinesis
        method: post
        streamName: { Ref: 'MyStream' }
        response:
          template:
            success: |
              {
                "success": true
              }
            serverError: |
              {
                "success": false,
                "errorMessage": "Server Error"
              }
            clientError: |
              {
                "success": false,
                "errorMessage": "Client Error"
              }

Author: Serverless-operations
Source Code: https://github.com/serverless-operations/serverless-apigateway-service-proxy 
License: 

#serverless #api #aws 

Lawson Wehner

How to Use Bash Set Command

Bash has many shell attributes and parameters that control how the shell behaves. The set command of Bash is used to modify or display these attributes and parameters of the shell environment. This command has many options for performing different types of tasks. The uses of the set command for various purposes are described in this tutorial.

Syntax

set [options] [arguments]

This command can be used with different types of options and arguments for different purposes. If no option or argument is given, the command prints the shell variables. A minus sign (-) before an option enables it, and a plus sign (+) before the same option disables it.
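
For example, the following commands enable and then disable the -x (trace) option; this minimal sketch can be run in any interactive bash shell:

# Enable command tracing: each command is printed before it runs
set -x
echo "traced"
# Disable command tracing again with the plus sign
set +x
echo "not traced"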

Exit Values of Set Command

Three exit values can be returned by this command, as mentioned in the following:

  1. Zero (0) is returned when the task completes successfully.
  2. One (1) is returned if an invalid argument is given.
  3. One (1) is returned if a required argument is missing.

Different Options of Set Command

The purposes of the most commonly used options of the set command are described in this part of the tutorial.

Option   Purpose
-a       Marks variables and functions that are created or modified for export to the environment of subsequent commands.
-b       Reports the termination of background jobs immediately rather than waiting for the next prompt.
-B       Enables brace expansion.
-C       Prevents overwriting an existing file with output redirection.
-e       Exits immediately if a command returns a non-zero exit status.
-f       Disables filename generation (globbing).
-h       Remembers (hashes) the location of commands as they are looked up.
-m       Enables job control.
-n       Reads commands but does not execute them.
-t       Exits after reading and executing a single command.
-u       Treats unset variables as errors during expansion.
-v       Prints shell input lines as they are read.
-x       Displays commands and their arguments as they are executed. It is mainly used to debug scripts.

Different Examples of the Set Command

The uses of the set command with different options are shown in this part of the tutorial.

Example 1: Using the Set Command with -a Option

Create a Bash file with the following script that enables the “set -a” option and initializes three variables named $v1, $v2, and $v3. These variables can be accessed after sourcing the script.

#!/bin/bash
#Enable -a option to read the values of the variables
set -a
#Initialize three variables
v1=78
v2=50
v3=35

Source the script into the current shell using the following command. (Running it as “bash set1.bash” would set the variables only in a child process, so they would not be visible afterwards.)

$ source set1.bash

Read the values of the variables using the “echo” command:

$ echo $v1 $v2 $v3

The following output appears after executing the previous commands:

Example 2: Using the Set Command with -C Option

Run the “cat” command to create a text file named testfile.txt. Next, run the “set -C” command to disable overwriting of existing files. Then, run the “cat” command on the same file again to check whether the overwriting feature is disabled or not.

$ cat > testfile.txt
$ set -C
$ cat > testfile.txt

The following output appears after executing the previous commands:
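
If you need to overwrite a file while the -C option is active, bash provides the >| redirection operator, which bypasses the overwrite check for a single redirection. A quick sketch:

# Fails while set -C is active: "cannot overwrite existing file"
cat > testfile.txt
# Succeeds: >| overrides the -C (noclobber) option for this redirection only
cat >| testfile.txt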

Example 3: Using the Set Command with -x Option

Create a Bash file with the following script that declares a numeric array of 6 elements. The values of the array are printed using a for loop.

#!/bin/bash
#Declare an array
arr=(67 3 90 56 2 80)
#iterate the array values
for value in "${arr[@]}"
do
   echo $value
done

Execute the previous script with the following command:

$ bash set3.bash

Enable the debugging option and run the script again so that each command is printed before it executes. Because a child bash process does not inherit the parent shell’s -x setting, pass the option directly on the command line:

$ bash -x set3.bash

The following output appears after executing the provided commands:
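
An excerpt of the trace output will look roughly like the following; each traced command is prefixed with a + sign (the exact formatting varies between bash versions):

+ echo 67
67
+ echo 3
3
+ echo 90
90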

Example 4: Using the Set Command with -e Option

Create a Bash file with the following script that reads a file using the “cat” command before and after the “set -e” command is executed.

#!/bin/bash
#Read a non-existing file without setting set -e
cat myfile.txt
echo "Reading a file..."
#Set the set command with -e option
set -e
#Read a non-existing file after setting set -e
cat myfile.txt
echo "Reading a file..."

The following output appears after executing the provided commands. The first error message is shown because the file does not exist in the current location, and the next message is still printed. But after the “set -e” command is executed, the script stops as soon as the error message is displayed, so the final message never appears.

Example 5: Using the Set Command with -u Option

Create a Bash file with the following script that initializes one variable and prints an initialized and an uninitialized variable before and after the “set -u” command is executed.

#!/bin/bash
#Assign value to a variable
strvar="Bash Programming"
printf "$strvar $intvar\n"
#Set the set command with -u option
set -u
#Assign value to a variable
strvar="Bash Programming"
printf "\n$strvar $intvar\n"

The following output appears after executing the previous script. Here, the error is printed for the uninitialized variable:
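
The first printf prints the initialized variable with an empty value in place of $intvar, and the second printf fails with an error that will look something like this (the script name and line number depend on your file; the wording varies slightly across bash versions):

Bash Programming
set5.bash: line 9: intvar: unbound variable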

Example 6: Using the Set Command with -f Option

Run the following command to print the list of all text files of the current location:

$ ls *.txt

Run the following command to disable globbing:

$ set -f

Run the following command again to print the list of all text files of the current location:

$ ls *.txt

The following output appears after executing the previous commands. Based on the output, the “ls *.txt” command did not work after “set -f” was set, because globbing is disabled and the *.txt pattern is no longer expanded, so “ls” looks for a file literally named *.txt:

Example 7: Split a String Using the Set Command with a Variable

Create a Bash file with the following script that splits a string value on spaces using the “set --” command with a variable. The split values are printed later.

#!/bin/bash
#Define a string variable
myvar="Learn bash programming"
#Use set -- to load the split words into the positional parameters
set -- $myvar
#Print the split values
printf "$1\n$2\n$3\n"

The following output appears after executing the previous script. The string value is divided into three parts based on the spaces, and each part is printed on its own line:
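
The same technique can split on other delimiters by changing the IFS variable around the set command. A small sketch that splits a colon-separated string (the variable name is illustrative):

#!/bin/bash
#Define a colon-separated string
csv="usr:local:bin"
#Make the shell split words on ':' instead of whitespace
IFS=':'
set -- $csv
#Restore the default word-splitting behaviour
unset IFS
#Print the three split values
printf "$1\n$2\n$3\n"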

Conclusion

The uses of the different options of the “set” command are shown in this tutorial with multiple examples to help you learn the basic uses of this command.

Original article source at: https://linuxhint.com/

#bash #set #command